Microsoft is having a hard time convincing skeptics that its Azure cloud services will support non-.NET languages.
Consider a presentation given at this week's Cloud Computing Expo in New York by Yousef Khalidi, a distinguished engineer for Microsoft's cloud infrastructure services. Khalidi emphasized that the forthcoming Azure cloud platform will support both native and managed code, and not just .NET but also Java, Ruby and PHP.
"I'm sitting here telling you, you can run anything you want on this platform," Khalidi insisted during a session Monday in response to a question by an attendee about Microsoft's support for different languages on its Azure Services Platform. The attendee seemed taken aback by that.
Kristof Kloeckner, CTO of enterprise initiatives and VP of cloud computing at IBM, heard Khalidi's presentation and told me that while he liked what he heard, he too has his doubts.
"I think the real question is going to be around how open is it going to be? What role are ecosystem partners going to play in Microsoft's environment? Certainly what I heard in the presentation that they want to support multiple languages makes sense. Whether that means a greater openness, I don't know. Let's just wait and see what they come up with."
Certainly Kloeckner can't be blamed for his skepticism. But it also underscores the fact that providers see a lot at stake and will use the interoperability card to jockey for position.
Kloeckner talked up the Open Cloud Manifesto, the controversial position statement signed by 70 providers ranging from startups to large organizations including Cisco, the Eclipse Foundation, EMC, IBM, the Open Group, and Sun Microsystems. But when key signatures in the cloud ecosystem were notably absent on that document -- Amazon, Google, Microsoft and Salesforce.com -- the worthiness of the effort fell into question.
I talked with numerous attendees at the conference, both those who signed it and those who held off. All agreed interoperability is an important end goal but pointed out it is still quite early in the game.
"This manifesto is a work in progress, it's a draft. As a set of goals, I don't think anyone can disagree with it; as a prescription for achieving them, I think there's a lot of discussion still to take place," said Peter Coffee, Salesforce.com's director of platform research."
"Generally speaking, I think it's too early in the game to be talking about standardization; open source has really changed the way standards evolve," said Ian Murdock, Sun Microsystems' vice president of emerging platforms.
"I think we are still grappling at what levels do we need standards, what do they need to describe and what does it need to contain," added Thorsten von Eicken, CTO of RightScale Inc., a leading provider of cloud provisioning and administration software.
"What we believe strongly in is start with de facto standards, things that work with interfaces that exist where there is customer momentum, and then build from there, as opposed to with some committee approach out of thin air, where everyone tries to come up with the end-all-be-all."
For its part, Microsoft is pushing ahead. The company today opened access to those who want to test its SQL Data Services and other components of the Azure platform, according to the Azure Journal (until now an invitation was required). Developers can now sign up on the Azure Services Platform portal.
So is the skepticism about Microsoft well-earned or do you think the company is blazing a new path to openness with its cloud services? Drop me a line at [email protected].
Posted by Jeffrey Schwartz on 04/01/2009 at 1:15 PM
Microsoft has been quietly working to revive an old but trusted data access technology that some thought it had left for dead -- Open Database Connectivity or ODBC.
The popular API for providing SQL access to C and C++ applications (as well as those built in other languages), ODBC has remained dormant for the better part of the past decade after Microsoft shifted emphasis to its COM-based OLE DB. Microsoft recommitted to ODBC two years ago and, in January, released ODBC 3.8 in beta 1 of the Windows 7 SDK.
Besides adding support for SQL Server 2008, the new release provides bug fixes and three key improvements: 1) asynchronous operations, which should lend itself well to occasionally connected applications and those connected to cloud-based services; 2) streamed output parameters for retrieving large blocks of data; and 3) support for C data type extensibility.
The latter will allow those who build drivers to create their own data structures -- types that don't now exist in ODBC -- and return them to an application, explained Rob Steward, vice president of research and development at DataDirect, a subsidiary of Progress Software Corp.
"It's good for driver-writers such as myself or the database vendors in that they may create types that may not exist in the specification," said Steward whose company this week released a set of ODBC drivers. Still he questions whether it's good for ODBC overall.
For example, in the case of a time stamp data type representing different time zones, "the original ODBC specification doesn't have a very clean way to return that, but a driver-writer may then create their own structure, with which to return that time zone data," he said. The problem is that it creates a sort of trap for ODBC developers.
"If you use the types the driver vendors create, now your application and your code only works with that one driver, which defeats part of the purpose of ODBC," he said. "So it's kind of a built-in way for people to extend the spec on their own, which again is good and bad but overall that was the big value of ODBC -- that you can write a single set of code that works against multiple databases."
Steward is concerned that with these driver-specific C data types, "your code doesn't work against multiple databases, it only works against one driver," he said.
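The trap Steward describes is easier to see schematically. The sketch below is not ODBC code; it uses two hypothetical Python "drivers" (all names invented for illustration) to show how code that touches only spec-defined types stays portable, while code that reaches for a driver-specific extended type binds the application to one driver:

```python
# Hypothetical illustration of the driver-specific-type trap -- not real ODBC.
# Driver A sticks to the spec and returns a timestamp as a standard string.
# Driver B also exposes its own extended type carrying a time zone, the way
# an ODBC 3.8 driver might expose a custom C data type.

class ExtendedTimestamp:
    """Stand-in for a driver-specific extended type."""
    def __init__(self, iso, tz):
        self.iso = iso
        self.tz = tz

def driver_a_fetch():
    # Spec-compliant result: standard types only.
    return {"ts": "2009-03-26 13:15:00"}

def driver_b_fetch():
    # Same standard column, plus a vendor-specific extra.
    return {"ts": "2009-03-26 13:15:00",
            "ts_ext": ExtendedTimestamp("2009-03-26 13:15:00", "UTC-5")}

def portable_read(row):
    """Relies only on the spec-defined field; works with either driver."""
    return row["ts"]

def tz_read(row):
    """Relies on the extended type -- this code now works with one driver only."""
    return row["ts_ext"].tz
```

Running `tz_read` against driver A raises a `KeyError`, which is precisely the portability loss Steward warns about: the application is now coupled to driver B.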
But that shouldn't be a problem, so long as Microsoft's changes don't affect the driver manager, said Amyn Rajan, president and CEO of Vancouver-based Simba Technologies Inc., which builds ODBC drivers for ISVs. David Sceppa, Microsoft's ADO.NET program manager, was not available for comment.
"The real risk is about changes they are making in the driver manager -- this is code that probably hasn't been touched in at least a decade," Rajan said. So far, in its tests with the new ODBC driver in the Windows 7 SDK Simba's engineers haven't found any problems.
Rajan says he finds it interesting that after years of effectively trying to cast ODBC aside in favor of OLE DB, Microsoft has come back to ODBC. "The fact that they are extending ODBC tells me they actually decided ODBC was something that was good and something they are going to invest in, and they look at this as a first-class API," he said. "If they were trying to kill ODBC, they would have added this functionality to their .NET provider."
Will ODBC make Windows 7 a better client for accessing data-driven applications and content? Are Steward's concerns about C data type extensibility warranted? Drop me a line at [email protected].
Posted by Jeffrey Schwartz on 03/26/2009 at 1:15 PM
If IBM actually ends up acquiring Sun Microsystems -- the rumor that surfaced yesterday -- it could have interesting implications for the database market.
Before I go on, let me be clear -- this deal is still a rumor, and while reports suggest it could happen within days, it could fall apart. Now on to the speculation.
Of course, the repercussions of such a megadeal extend well beyond any one component of Sun's arsenal, which includes a contracting but technically well-regarded server business, microprocessors, software and a deep bench of R&D. And of course there's perhaps Sun's most visible asset -- Java.
Some question what IBM would do with MySQL, considering Big Blue has the means to clone it with an express-type version of its DB2 database. Dana Gardner, principal analyst with Interarbor Solutions LLC, a Gilford, N.H. consultancy, is among those who see no sense in IBM acquiring Sun, as he posted in his blog yesterday.
"There's no reason for IBM to take open source any further than it already has, given that it still commercially has a very good business in DB2," Gardner said in a subsequent interview. "If they want to further their open source database strategy, they can accomplish that without buying Sun; they could buy Ingres or spin off an open source version of DB2."
Forrester analyst Noel Yuhanna disagrees. MySQL has a cachet with customers that no other open source database has achieved, Yuhanna says. Moreover, it has more revenue than any other open source database -- $400 million for the most recent year, he estimates -- though he points out that's minuscule compared to the overall $16 billion market for database software.
MySQL, Yuhanna told me, would give IBM an opportunity to offer an alternative database to customers looking to move to an open source database.
"In these economic conditions today, companies are looking at open source databases more aggressively because they want to lower the cost, and my SQL is very mature in terms of technology," Yuhanna said. "They have that niche of becoming a more scalable, high-performance database."
Many customers see DB2 as a mainframe-class database, he added, and IBM has failed to make strong gains with that database on Windows and Linux. "If you look at the mix, it would be really complementary for IBM," he said. Analyst Curt Monash of Monash Research agrees. "There's little reason to think IBM would orphan MySQL or any other DBMS product," Monash wrote in a blog posting.
One of the key areas where Sun has failed in growing the MySQL business is its lack of services and tools for migrating from higher-end databases, Yuhanna added. Many Forrester clients have also indicated that Sun hasn't improved MySQL's performance and scalability. "They fear if they don't provide a very good support for high end, we may just move away from MySQL," he said.
A resurgent MySQL would perhaps compete most directly with Microsoft's SQL Server, which, with its recent acquisition of DATAllegro, is gaining more credibility for high-end implementations including business intelligence and data warehousing.
Yet Yuhanna says .NET customers prefer SQL Server for obvious reasons, especially now as Microsoft is providing even tighter integration with its framework and tooling. "That's where IBM would have to attack, try to provide an integration point with Java and MySQL to make it more appealing to customers."
Of course, we'll see if this all happens. If you have any thoughts, drop me a line at [email protected].
Posted by Jeffrey Schwartz on 03/19/2009 at 1:15 PM
Microsoft in recent weeks began dropping hints that it would be announcing a revamped iteration of its SQL Data Services -- its cloud-based database service that's been available for testing for four months -- after the testers insisted they wanted SDS to have native relational capabilities.
In a surprise move, Microsoft said yesterday that it would expose its Tabular Data Stream (TDS) over-the-wire protocol for accessing SQL Server via its forthcoming Azure Services Platform. The move reverses the existing plan to offer SDS via the REST Web services interface. I spoke today with Niraj Nagrani, a senior product manager for SDS at Microsoft, about the changes.
Is it fair to say this is a major revamp from your initial plan?
The plan was always to deliver a relational database. A major part of this acceleration came from the feedback, but we always planned to deliver a relational database.
Did you, in effect, give up on the Entity Attribute Value [EAV] tables?
In the course of our acceleration, we heard a lot of feedback that people wanted the experience of a traditional SQL Server database with its T-SQL compatibility. To deliver that, we were kind of working around it: we always wanted to deliver the SQL Server experience, so we took the traditional Entity Model and tried to imitate what SQL Server does. But based on the feedback we heard, customers preferred the more traditional T-SQL-based support, so we decided to go in this direction.
Were you surprised at the reaction?
We were very happy with the reaction. Initially we were thinking of going with the traditional entity model, and we were calling it SQL Server. But it really was not similar to a SQL Server-type experience. So the question was, should we toy with the brand and not call it SQL Server, or should we keep SQL Server and then deliver a traditional, more familiar experience to our existing customers? But we didn't have enough data points. Until we actually went to the market and got some data points, we didn't really have any justification to do it. Now we have enough proof points. We were not surprised, but we were happy to see that customers confirmed our hypothesis that they do want to have a traditional SQL-like experience.
How much did the fact that Azure Tables and SDS were seen as indistinguishable data storage services factor into the decision?
With the current acceleration to relational databases -- the T-SQL-based compatibility and working with the traditional TDS protocol -- SQL Server becomes more like a traditional RDBMS. [Azure Table storage] is very similar to a SimpleDB-type storage, which is a simple, structured storage with no relational capabilities. So there is a big differentiation between somebody needing an RDBMS in the cloud -- a shared, distributed, highly scalable database with built-in HA [high availability], self-healing and data protection -- as opposed to structured storage with stored metadata and files.
Are you basically not going to be offering SDS with the EAV tables any more?
We are looking into our future roadmap to make sure that Astoria [ADO.NET Data Services] can be leveraged on top of SDS and Entity Data Model continues to exist, and we will continue to provide for that through Astoria. We will continue to work with the Astoria framework and figure out how SDS can support that.
TDS is not meant to be an Internet-friendly protocol. Is that going to affect performance?
We actually did a lot of benchmarking and testing. We think it's appropriate for what we are doing and the direction we are taking it. We feel comfortable, as we get more early-adopter customers and we look at the type of workloads they are building, they will keep modifying and tweaking our protocols so it's more workload-friendly.
Are you looking at other protocols, as well?
Now we are going to take the TDS and see how we can scale our services and start working with early-adopter customers. SDS will support a breadth of protocols, including the existing TDS over TCP/IP, and also options to support TDS over other transports such as HTTP for high-latency scenarios without making modifications to TDS.
So you're not concerned about the speed issues related to TDS?
If you look at any other product in a hosted environment, there is always going to be a latency issue coming from the typical service but also just going over the wire. There are always going to be workloads that are OK with the latency and will move to the cloud initially, and as we go into the future, the whole cloud infrastructure will improve and support more high-performance workloads. As adoption grows and as we gain efficiencies over the Web, I am sure the latency will become a non-issue for quite a few workloads.
What about the scalability questions of relational databases versus the EAV tables used in SDS?
SDS was built on SQL Server as a back end. The engineering team did a lot of re-engineering of the existing SQL Server architecture to have it work in a scale-out infrastructure manner. One of the biggest benefits of SQL Data Services will be that it's a scale-out architecture and infrastructure, which means that workloads can scale out based on usage -- not only the low-end workloads that don't need a scale-out architecture, but also the high-end workloads that currently may be limited by the existing Web environment in terms of how they scale out the infrastructure.
Will SDS support data partitioning?
In SDS V1, data partitioning will need to be handled in the application. Developers who need to scale out across multiple databases will need to shard the data themselves. In the future, based on customer feedback, we will provide automatic partitioning.
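Application-level sharding of the kind described here can be sketched roughly as follows; the connection strings, names and routing scheme below are hypothetical placeholders for illustration, not SDS syntax:

```python
# Minimal sketch of application-level partitioning (sharding): the
# application, not the database service, decides which database a
# given row lives in. A stable hash keeps the routing deterministic.
import hashlib

# Pretend each shard is a separate database, identified by a
# (hypothetical) connection string.
SHARDS = [
    "Server=shard0.example;Database=app",
    "Server=shard1.example;Database=app",
    "Server=shard2.example;Database=app",
]

def shard_for(partition_key: str) -> str:
    """Route a partition key (e.g. a customer ID) to one shard.

    MD5 is used only as a stable, well-distributed hash -- the same
    key always lands on the same shard, regardless of process.
    """
    digest = hashlib.md5(partition_key.encode("utf-8")).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]
```

An application would open a connection to `shard_for(customer_id)` before issuing its queries. The hard parts this sketch omits -- rebalancing when shards are added, and queries that span shards -- are exactly what an automatic partitioning service would take over.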
In [senior program manager] David [Robinson's] blog posting yesterday, he wrote the majority of database applications will work. What types of applications are not going to be suited for this environment that developers should beg off?
There are certain workloads that are natural to clouds. In terms of Web workloads, we see them going to the cloud. We see a lot of data protection and storage-type workloads going to the cloud, like CRM applications, content management, product lifecycle management, supply chain and collaboration across the enterprise. What we continue to work toward is having data warehouses and data marts in the cloud. We are seeing a lot of excitement around BI workloads in the cloud, or reporting-type applications living in the cloud. There is probably a natural tendency for these early-adopter workloads to go to the cloud right away, and there is going to be a tendency for some other workloads, like data warehouse and real OLTP workloads, to go to the cloud in time.
What will be the surcharge for SDS over Azure Table Services?
We are still working on the pricing. I think sometime in the middle of the year, we will have some more information on the actual business model.
Do you think it will be competitive with Amazon's EC2 60-cent standard rate, or more with the $1.20-per-hour enterprise rate that Amazon is offering?
We are still working on that. We certainly don't have a range or a price point at this point.
Will the new SDS run on SQL Server 2008?
It is currently using 2005, but we have a roadmap to move to 2008. That's the plan.
Will SDS use Enterprise Edition?
It will use Enterprise Edition. Just to be clear, when we say Enterprise Edition, we don't just take the box and put it in the cloud. You're really not going to take the code bit by bit and line by line and put it in a box and run it on SDS because it is not a hosted environment -- it's a shared database infrastructure. The code base is taken from the enterprise; we have an enhanced architecture to run on datacenter machines. We can leverage the cost benefit of running it on cheap hardware but deliver an enterprise-class, mission-critical database.
Will it be TDE [Transparent Data Encryption] Enabled?
We are looking at different security features of how we can enable it. The thing is, there is a list of features that are available on-premises and quite frankly there's going to be some features that we leverage from inside-out and there are going to be a lot of features coming from outside-in based on the customer feedback.
How will users of TDI [Trusted Database Interpretation] and column-level encryption protect their private keys from unauthorized access?
We are looking into the type of workload and requirements for row-level security and column-level security and based on the requirements, we will actually enable those features.
How will data partitioning be handled?
We built an intelligent software infrastructure across the nodes that actually knows the size of each node and partitions data across the nodes.
Will all SQL Server transaction scopes be supported?
That's the plan.
What should developers be on the lookout for next week regarding SDS?
People will see the code and the bits running. There will be a demo of our SDS relational data model; you will see it working, and there will be a good level of discussion about the architecture under the hood and the types of applications that can be built in real time. That will give a sense of how easy it is to bring some of the T-SQL-based language into applications, or to run existing T-SQL applications in the cloud.
Posted by Jeffrey Schwartz on 03/11/2009 at 1:15 PM
Microsoft appears to be revamping its SQL Data Services with plans to add relational services, a move that does not seem to be catching too many observers by surprise.
As reported by blogger Mary Jo Foley last week, it appears Microsoft is overhauling SDS, launched initially one year ago as SQL Server Data Services. For its part, Microsoft is promising some big SDS news at MIX 09 in two weeks. "We will be unveiling some new features that are going to knock your socks off," wrote Microsoft senior program manager David Robinson in the SDS team blog last week.
Perhaps putting pressure on Microsoft is the availability of SQL Server hosted on Amazon's EC2 service, as well as last week's launch of a cloud-based relational database service by a two-person startup -- founded by a former .NET developer, no less -- based on Sun Microsystems' MySQL.
"The way they built SQL Data Services looks a lot like Amazon's SimpleDB and that's really not a database," said Andrew Brust, chief of new technology at consulting firm twentysix New York and a Microsoft regional director. "It's really an entity store, which works well for some things. It's great for content management for example but for what relational databases are typically used for, not so much.
Making matters worse, developers had higher expectations, said Oakleaf Systems principal Roger Jennings. "What they promised was full-text search, secondary indexes, schemas and a few other relational niceties, but didn't deliver on those. They did deliver support for BLOBs," said Jennings, who tested SDS last summer.
But Microsoft and others may face challenges even with hosting relational versions of databases in the cloud, Jennings has maintained in his blog postings. "I don't think they will be able to scale it," Jennings said, re-iterating his posting last week. "Traditional relational databases don't deliver the extreme scalability expected of cloud computing in general and Azure in particular," he wrote.
"I think the move to the cloud is going to be very hard. It's one of those easier said than done things," Brust added. "This isn't just about hosting the server products."
Are you anxious to hear what Microsoft has planned for SDS? Drop me a line at [email protected].
Posted by Jeffrey Schwartz on 03/04/2009 at 1:15 PM
Microsoft's announcement that it will offer an Oracle database plug-in for the next release of Visual Studio Team System is a coup for SQL Server developers who have little or no experience with the rival but widely deployed data repository.
The Oracle plug-in is a Database Schema Provider (DSP) that will be made available as an option to VSTS 2010 by Quest Software Inc., said Jason Zander, Microsoft's general manager for Visual Studio, who made the announcement at the VSLive! conference in San Francisco, as reported by Redmond Developer News editor Kathleen Richards.
"When you use those two things together, I will be able to write my code and explore my schemas and do all of that advanced functionality with Oracle," Zander said. "That gives Team System support for the three most popular databases in use by database programmers."
Quest is no stranger to the Oracle database platform -- it makes the widely used Toad for Oracle tools, which it has offered for more than 10 years. "In supporting Visual Studio Team System we are supporting another platform that an Oracle DBA or developer, if they want to be part of this Team System methodology, can use," said Daniel Norwood, a product manager at Quest.
The Oracle DSP will not come out of the box, but will be made available as a third-party add-on. Quest has not disclosed pricing and availability. Microsoft had earlier said that IBM will offer a VSTS 2010 plug-in for its own DB2 database platform. I talked with Norwood and Daniel Wood, development manager at Quest, to get an understanding of what this means.
Based on that interview, here's a brief FAQ:
How will this benefit database developers?
An application developer who spends time working against a back-end database that might be SQL Server one day may move on to another project that's going to be on an Oracle database. The developer can maintain consistency with their tool set by sticking with VSTS and working against the different database platforms.
Does this presume the Visual Studio developer perhaps is not familiar with Oracle's PL/SQL language or do they have to have some understanding of that?
They are going to have some limited PL/SQL experience just by the nature of developing against Oracle. But developers can go in with limited PL/SQL experience, click File, Add New Item inside of VSTS, and get basic scripts that show them how to create the tables, the indexes, the views and the various other objects that they need for their application. They can then model those objects and change them as needed to fit what they're developing.
Why can't it work with current or earlier versions of Visual Studio?
The code that enables the extensibility for companies like Quest or IBM to plug in these database schema providers will be released publicly for the first time in Visual Studio 2010, via Microsoft's new Managed Extensibility Framework.
What Oracle databases will it work against?
Oracle 9i through 11g and above.
When will VSTS 2010 testers be able to try Quest's new plug-in?
A beta is planned in the coming weeks.
Are you testing Visual Studio 2010? We'd love to hear your findings for a cover story that Kathleen Richards is writing, which will appear in Visual Studio Magazine next month. Please drop a line to her at
Posted by Jeffrey Schwartz on 02/25/2009 at 1:15 PM
When I pointed last week to the potential conflict that Microsoft's Office SharePoint Server (MOSS) can create for database developers and DBAs, I apparently struck a nerve. Some said I hit the nail on the head, while one said I was oversimplifying the matter and creating FUD.
But for some database developers and administrators -- and in many cases those even higher up in the IT food chain -- the unintended consequences of SharePoint's growth can lead to a lack of control over how data is kept in sync as more of it ends up in MOSS.
"You don't have to be a developer to go in there," said Ed Smith, a systems analyst at Tetra Pak, a global supplier of packaging machines. "You can get two secretaries together, they can figure out what they want to do and they can start putting stuff in there."
While SharePoint is popular for storing and sharing documents and other unstructured content, complex, structured data -- large numbers of rows and columns, multiple lists that must be joined together, and certainly data where referential integrity is critical -- needs to be in a relational database server.
"We discovered that lists are the place to store information but they can't be substitute for tables and relations and of course referential integrity," wrote Prabhash Mishra, CEO of Skysoft IT Services Pvt. Ltd., based in Noida, India in an e-mail.
In an interview last week, Paul Andrew, Microsoft's technical product manager for the SharePoint Developer Platform, said many are already building custom applications on SharePoint that use a mix of SQL Server schema and tables within MOSS. "Of course, each has its own strengths, and each is better suited in different parts of an application," Andrew says.
Not all DBA organizations are wary of SharePoint. "My manager, the head of our DBA organization, loves SharePoint," wrote Kevin Dill, IT innovation analyst at Grange Mutual Casualty Co. in an e-mail. "For example, he regularly stores information on SQL clustering and business-specific SAS functions in the wiki and document library, for easy searching."
Dill added that SharePoint will not completely replace traditional DBAs. "While you can configure many aspects in the Central Admin, you still need DBAs to monitor data growth and backups," he said. "In fact, SharePoint can help DBAs maximize their time on the things that really matter, instead of provisioning little one-off databases for internal projects."
But in many shops, SharePoint is outside the purview of database developers and DBAs. One way to avoid the problem of allowing employees to work around them is for IT organizations to put controls over who is permitted to commit data to SharePoint servers. That's the case for the City of Prince George, BC.
"We've locked it down pretty much so that all they can do is put in content directly," says programmer/analyst Rob Woods. "Nobody else really has the option of doing any of this kind of stuff except for the IT staff."
Posted by Jeffrey Schwartz on 02/18/2009 at 1:15 PM
It has been well chronicled how pervasive Microsoft's SharePoint Server is becoming in all enterprises.
Just look at the large pharmaceutical conglomerate Pfizer, which has 6,000 SharePoint sites used by 63,000 employees -- that's two-thirds of its entire rank and file.
But if you're a database developer or administrator, the rampant growth of SharePoint has to be a concern. In a recent conversation, independent consultant and Microsoft MVP Don Demsak told me this concern is common and only going to grow, because so much of the data going into SharePoint is better suited to a SQL database. The reason is clear.
"SharePoint is very successful because you are removing levels of impedance, and a lot of the DBAs hate SharePoint for those same reasons," says Demsak. "Basically you are storing everything in a BLOB, and you can't relate to object relational mapping, and you can't do good entity relationships, you can't do relational models because there's all sorts of problems when people try to extend that SharePoint model past where it's supposed to go."
This is reminiscent of the 1990s, when Lotus Notes started to grow in enterprises, Demsak recalls. "I remember when every database was a Notes database, and it shouldn't have been. You're storing documents and that sort of stuff in there. That's great, but you've got rows, columns, and you're trying to join this list to that list, and that's not what those tools were made for," he says. "When you need a relational database, you use a relational database, the way it was supposed to be."
This is more than an annoyance, though: when an organization requires referential integrity, that's precisely what relational databases were designed to provide. If you're a database developer or DBA, drop me a line and tell me how you are dealing with this. I'd like to hear from SharePoint developers too. I'm at [email protected].
Posted by Jeffrey Schwartz on 02/12/2009 at 1:15 PM
It has taken longer than initially planned, but the release candidate of Microsoft's ASP.NET Model View Controller (MVC) framework -- a design-pattern implementation for test-driven development of enterprise-scale Web applications -- is now available.
As reported Tuesday, Microsoft is urging developers to check out the feature-complete release candidate, which is slated to ship next month presuming no major issues arise, said Scott Guthrie, corporate vice president of Microsoft's developer division, in a blog posting announcing the release.
While a release candidate suggests that software is just about ready to roll, that doesn't mean the RC 1 is bug free. "Unfortunately, a few minor bugs did crop up at the last second, but we decided we could continue with this RC and fix the bugs afterwards as the impact appears to be relatively small and they all have workarounds," wrote Phil Haack, a senior program manager for Microsoft's ASP.NET MVC Framework, in a blog posting.
One particular bug he points to is that the controls collection cannot be modified; he offers what he describes as a straightforward workaround.
Those issues notwithstanding, Microsoft has suggested that ASP.NET MVC, its answer to the Ruby on Rails framework used by Web developers for rapid prototyping of applications in the dynamic Ruby language, will appeal to a small subset of the overall .NET developer community.
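The appeal of MVC for test-driven development comes from the separation it enforces. Here is a minimal, framework-free sketch (in Python, with hypothetical names; ASP.NET MVC itself is C#-based) of why that separation matters: the controller is plain code you can unit-test without spinning up a web server.

```python
class ProductModel:
    """Model: owns the data. A stand-in for a real data-access layer."""
    def __init__(self):
        self._products = {1: "Widget", 2: "Gadget"}

    def find(self, product_id):
        return self._products.get(product_id)

def render_view(name):
    """View: turns model data into output (HTML templates in a real app)."""
    return f"<h1>{name}</h1>" if name else "<h1>Not found</h1>"

class ProductController:
    """Controller: handles a 'request', asks the model, picks the view output."""
    def __init__(self, model):
        self.model = model

    def detail(self, product_id):
        return render_view(self.model.find(product_id))

# A unit test needs nothing but the classes above -- no server, no page lifecycle:
controller = ProductController(ProductModel())
assert controller.detail(1) == "<h1>Widget</h1>"
assert controller.detail(99) == "<h1>Not found</h1>"
```

That testability, rather than raw productivity, is the usual argument for MVC over the event-driven Web Forms model.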
I'd be curious to hear thoughts from those who see it changing the way they build Web apps as well as those skeptical of this rapid application development model. Drop me a line at [email protected].
Posted by Jeffrey Schwartz on 01/29/2009 at 1:15 PM | 0 comments
I'm not big on making New Year's resolutions. Instead, every year at this time I make a promise to myself that I will try something new. That thought struck a chord last week when I was chatting with independent consultant Don Demsak, a Microsoft MVP.
While we were talking in general about these tough economic times, Demsak noted that if you're a .NET developer with a broad set of skills, you may be better off than many others in the IT profession these days. Just being a .NET developer, however, won't necessarily make you stand out from the crowd, he warned.
"Right now anyone that's done real WCF [Windows Communications Foundation] work, doesn't have a problem finding a job," he said. "The average ASP.NET Web Form developers are having a harder time finding a job." Who else is in for a hard time? "The general .NET developer who doesn't know good object-oriented programming practices are the ones I see having the hardest time," he said.
Specific product certifications aren't enough in this day and age. "That's why I am thinking along the lines that people learning ASP.NET MVC, that's going to be a delineator on your resume," he suggested. I asked, isn't that going to appeal to a small percentage of development requirements? The point, he said, is that whether or not it applies to the work you are doing, it shows that you are learning new programming habits.
Scott Hanselman, a principal program manager at Microsoft, made the now oft-cited comment two years ago that developers should learn a new language every year; that's good advice, according to Demsak. "If you're a developer who does a lot of functional programming via SQL, go learn an imperative language, like C# or Java," he said.
Another example: Database developers with expertise might want to learn business intelligence. Learning things like Multidimensional Expressions, or MDX, the language for querying OLAP cubes across platforms, is going to be crucial. "More and more companies are trying to add BI capabilities to their applications, both public facing and internal, and it's a big leap to actually get there," he said. If you're not familiar with Microsoft's BI strategy, Redmond Developer News columnist Andrew Brust gave a good synopsis last month.
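For those who have never touched OLAP, the core idea behind a cube query is aggregating a measure across chosen dimensions. MDX itself is out of scope here, but the concept can be sketched in a few lines (Python, with made-up sales data; the `rollup` helper is hypothetical, not any real BI API):

```python
from collections import defaultdict

# Hypothetical sales facts: (region, year, amount). A cube pre-aggregates the
# measure across dimension combinations; here we aggregate on the fly.
facts = [
    ("East", 2007, 100.0),
    ("East", 2008, 150.0),
    ("West", 2008, 200.0),
]

def rollup(facts, *dims):
    """Sum the amount (row[2]) grouped by the chosen dimension indexes."""
    totals = defaultdict(float)
    for row in facts:
        key = tuple(row[d] for d in dims)
        totals[key] += row[2]
    return dict(totals)

print(rollup(facts, 0))     # by region: {('East',): 250.0, ('West',): 200.0}
print(rollup(facts, 0, 1))  # by region and year
```

An MDX query against a real cube expresses the same "slice by these dimensions, sum this measure" operation declaratively, with the aggregation precomputed by the server.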
As for ASP.NET MVC, Demsak is already a convert. "I love it. I am switching all my stuff over to it now from standard ASP.NET Web Forms," he said, adding that a large population of developers may want to do the same.
Want to give it a spin? You can find some good tips here from Microsoft corporate VP Scott Guthrie and from Rick Strahl, president of Maui-based West Wind Technologies, which specializes in distributed application development.
In any case, these are uncertain times: Make it a point to try something new this year.
Posted by Jeffrey Schwartz on 01/14/2009 at 1:15 PM | 0 comments
Microsoft's controversial decision to position the ADO.NET Entity Framework as its recommended data access technology has generated a lot of backlash among developers who made early bets on LINQ to SQL, which the company had released with Visual Studio 2008 and the .NET Framework 3.5. See my complete story here. I received quite a few e-mails from developers partial to LINQ to SQL, and suffice it to say, many feel left behind.
While some I spoke with are coming to terms with it, others are angry. "I feel stabbed in the back by this but I'm not moving," said Howard Richards, a principal with UK development firm Conficient, who says he invested a year in LINQ to SQL and now feels blindsided. "What annoys me most is Microsoft's cavalier attitude to its developers in this regard. It took me six months to port my application from a 'homebrew' data layer to LINQ to SQL."
Yesterday I spoke with Tim Mallalieu, the program manager for both LINQ to SQL and the Entity Framework, who says Microsoft anticipated the backlash but maintains both data access interfaces will be better off for the move. For those who don't think the Entity Framework is a suitable replacement, Mallalieu said, stay tuned.
"There's some pretty nifty stuff we're doing in the beta 2 time frame that we are not speaking about as yet. I think it will give you a better experience and will reduce the anxiety that people have around the Entity Framework," Mallalieu said. "In terms of capabilities, I think it will make the overall integrated experience of ASP.NET, Dynamic Data, MVC and these other things easier. We did a little bit of triaging of feedback; there is some valid feedback around complexity in the Entity Framework, and we are doing things to address that."
What follows is an edited transcript of our conversation.
How widely deployed is LINQ to SQL today?
All indications were that it was like any nascent technology: there was a lot of interest from an exploratory perspective, but there wasn't a lot of significant push -- in terms of the entire .NET ecosystem -- into production. There were a bunch of people kicking the tires, and there were some pretty interesting things going into production. We are still trying to get a better way in general in the company to gauge technology adoption, but today I can't give you a definitive answer.
Were you surprised at the reaction?
We knew that this wasn't going to be a popular decision, just because LINQ to SQL is a really interesting technology. It's very nice. The problem, though, is when you make a decision like this, you can either say we don't want to [tick] off the community, which means you get a bunch of people betting on a technology that will not meet their expectations of future innovation, release after release. Or you can actually take the hit and get the tomatoes thrown at you early in an effort to do right by the customers. So what we were trying to do, maybe we could have done it better, is to do right by the customer and set expectations early for where we were headed.
For those who say this was a political decision and not a technology decision, is that an unfair characterization?
There were a number of political aspects to why we released two technologies to begin with, but in the grand scheme, what we were trying to do with the .NET Framework right now was to reduce the number of overlapping technologies we keep on dumping out, as opposed to increasing it. We convinced ourselves internally that it was okay to release LINQ to SQL and the Entity Framework because there was clear differentiation between the two, and the markets we were going to go after were different.
The reality is, if you go look at what people are asking for, the two stacks were two releases away from convergence on feature set. So you look at that and say: we could spin up two teams of equal size to go do this work, and within two releases you are talking about two stacks that look almost exactly alike. Or you can say one of these technologies has already been identified as a strategic piece of a bigger data platform vision. From a shared investment perspective and a technology roadmap perspective, it seemed like the right thing to do. The problem is, because there were some initial conflicts that people have rumbled about from the history of the two technologies, it's hard to see that there was actually an attempt to make pragmatic decisions that were not colored by any political intentions.
Say you had two technologies covering the O/RM space, and one was pretty nifty -- very fast, lightweight -- but people were saying they wanted things like a provider model, many-to-many relationships and more complex inheritance mapping, while another technology had already done that stuff and, we think, is the foundation for a broader data platform vision. Would you build those features into the other technology, or would you say, "it sounds like people want all of those scenarios, but with the added simplicity"? From a roadmap perspective, it just did not make sense to duplicate efforts in two code bases.
What can you say to those who feel stabbed in the back or duped by this change in strategy?
There are few things I can say that will actually make it better. But as soon as we came to the decision, we felt the best thing to do was to come out early and tell people, so they understood what the situation was, as opposed to stringing them along. I think it would have been much more painful to wait two years and then have to explain why we weren't investing at that level in the technology. We expect that people will continue to develop with LINQ to SQL; it's a great technology. We are going to work with the patterns and practices group at Microsoft on guidance for how to design with LINQ to SQL, so if you are happy with it, you can just stay with it. If at some point after using LINQ to SQL you want to move to the Entity Framework, hopefully, if you follow the guidance we will give, it won't be as hard to move. You don't just go down a path where you've fallen off a cliff. But beyond that, there's nothing I can sit here and say that would be a panacea for the community.
In hindsight, do you regret releasing LINQ to SQL and not just waiting for the Entity Framework to be ready?
I think LINQ to SQL is a very important technology. It's unfortunate how this is ending up for customers, but given where we were from a product perspective and a technology perspective, LINQ to SQL is really important, and I think in its current form, and with the kinds of work we expect to do with it moving forward, it's still going to have a good following. It's just not going to be the be-all-and-end-all enterprise O/RM that has every little knob and bell and whistle; quite frankly, if you were to add every little knob and bell and whistle, you'd wake up and find all the elegance and simplicity of LINQ to SQL would be gone.
Do you see there being new LINQ to SQL features?
We see there being new LINQ to SQL features. I don't know if there will be substantial new LINQ to SQL features in .NET 4.0, but after .NET 4.0 we have every intention of doing feature work in LINQ to SQL. We are also doing a bunch of bug fixing, servicing, and that kind of work. LINQ to SQL was developed by the C# team; when Visual Studio 2008 and the .NET Framework 3.5 shipped, the technology transitioned into our team. The problem was that the transition came with just the technology, not with people, and we immediately were trying to do work for .NET Framework 3.5 SP1. We wanted to add support for the new SQL Server date types in LINQ to SQL, so for SP1 we focused tactically on getting in the features and design change requests that the C# team said needed to be there to get the service pack done. When we shipped, we had to officially take ownership of the technology, which meant we had to get it onboarded. We are different teams with slightly different focuses, and we had to get new people ramped up on the technology. Given that .NET Framework 3.5 SP1 was released halfway through our development cycle for .NET Framework 4.0, and given the adoption work I just described, it was really hard for us to do any significant work in .NET 4.0, but we intend to do feature work in the future.
Posted by Jeffrey Schwartz on 12/18/2008 at 1:15 PM | 1 comments
There is no shortage of opinion over Microsoft's efforts to point database developers away from its year-old LINQ to SQL data access method to its more recently released ADO.NET Entity Framework.
Microsoft's push, as I pointed out last week, is certainly not a revelation to those who follow the company. But what should someone who hasn't followed the machinations of this issue make of it? Or, even more pointedly, what about someone who is just moving to SQL Server and the .NET Framework?
Telerik CTO Stephen Forte recommends that newcomers learn raw SQL first, so that if they use an object-relational mapping tool, or either LINQ to SQL or the Entity Framework, down the road, "they will know what is going on behind the scenes and use the raw SQL for the reporting solution as well as any complex queries and consider an ORM/LINQ/EF for the CRUD and simple stuff."
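Forte's split can be sketched in a few lines. This is an illustrative example only (Python with sqlite3; the schema, the `insert_order` helper and the data are all hypothetical): a thin generated-style helper handles the simple CRUD, while the reporting query is hand-written SQL where full control matters.

```python
import sqlite3

# Hypothetical orders table for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, region TEXT, total REAL)")

def insert_order(region, total):
    """CRUD-style helper: the kind of parameterized call an ORM/LINQ layer generates."""
    conn.execute("INSERT INTO orders (region, total) VALUES (?, ?)", (region, total))

for region, total in [("East", 100.0), ("East", 50.0), ("West", 75.0)]:
    insert_order(region, total)

# The reporting query, written as raw SQL so you see exactly what runs:
rows = conn.execute(
    "SELECT region, SUM(total) FROM orders GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('East', 150.0), ('West', 75.0)]
```

Knowing what the raw query looks like is precisely what lets a developer judge whether the SQL an ORM emits behind the scenes is reasonable.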
While Forte is concerned Microsoft's guidance on how to reconcile its various data access protocols won't be adequate for some time, he believes the shakeout will be organic. "Unfortunately the thing that makes Microsoft great and innovative, its sometimes disparate teams, leads to the confusion in the marketplace," Forte says.
In a blog posting of his own earlier this week, Forte pointed to a survey released by DataDirect Technologies last month that finds 8.5 percent of .NET apps in production use LINQ to SQL as their primary data access method. "While this number is not huge, you can't ignore these developers voting with their feet by using LINQ to SQL in their applications," Forte says.
What's a LINQ to SQL developer to do? "Throw it all away and learn EF? Use NHibernate? No. The LINQ to SQL developer should continue to use LINQ to SQL for the time being. If the next version of the EF is compelling enough for a LINQ to SQL developer to move to EF, their investment in LINQ to SQL is transferable to LINQ to Entities. If LINQ to SQL developers are to move in the future, Microsoft will have to provide a migration path, guidance and tools/wizards. (The EF team has started this process with some blog posts, but the effort has to be larger and more coordinated.)"
Microsoft will make sure LINQ to SQL continues to work in the .NET Framework 4.0 and will fix existing issues, wrote Damien Guard, a software development engineer in Microsoft's data programmability group (who works on both LINQ to SQL and the Entity Framework), in a blog posting in October during PDC.
"We will evolve LINQ to Entities to encompass the features and ease of use that people have come to expect from LINQ to SQL," Guard wrote. "In .NET 4.0 this already includes additional LINQ operators and better persistence-ignorance."
That's not to say new features won't show up in LINQ to SQL, he added. "The communities around LINQ to SQL are a continuous source of ideas, and we need to consider how they fit the minimalistic lightweight approach LINQ to SQL is already valued for."
Forte says LINQ to SQL developers will be ready to move to the Entity Framework when its feature set is a superset of the former's and Microsoft offers migration wizards and tools for LINQ to SQL developers. If Microsoft is serious about the Entity Framework being the preferred data access solution in .NET 4.0, he argues, it will have to do a few things: "Make EF 2.0 rock solid. Duh. Explain to us why the EF is needed. What is the problem that the EF is solving? Why is EF a better solution to this problem? This is my big criticism of the EF team, the feedback I gave them at the EF Council meeting, is that they are under the assumption that 'build it and they will come' and have not provided the compelling story as to why one should use EF. Make that case to us!"
Also, Forte is calling on Microsoft to engage with the LINQ to SQL, NHibernate and stored procedures crowds.
Still, there are many who are not happy with Microsoft's decision to give short shrift to LINQ to SQL, notably Oakleaf Systems' Roger Jennings, who argued last week that LINQ to SQL will not go away, if only because it is part of the current .NET Framework. Forte takes issue with that thinking.
"Just because something is in the framework is no guarantee that it will have a bright future," Forte said in his e-mail to me.
Jennings points to others who are weighing in on this issue as well, such as Stu Smith, a developer at UK-based BinaryComponents Ltd.:
There's no one correct way to write an ORM. Different applications have different requirements. A general purpose ORM will never satisfy 100 percent of developers. Fine. I'm happy with that; there's a nice market for specialist providers.
What I'm not happy with is that while LINQ to SQL seemed to make 90 percent of developers happy, it's being replaced with LINQ to Entities that (judging by the feedback I've seen) makes far less developers happy.
I'm fine with the ADO.NET team writing a solution that fills that 10 percent gap or otherwise augments LINQ to SQL. I'm not happy with them replacing a 90 percent solution with a specialist 10 percent solution.
In the end, how this will all turn out remains to be seen, Forte points out. "We are still at the station buying tickets (to an unknown destination)."
What's your opinion? Drop me a line at [email protected].
Posted by Jeffrey Schwartz on 12/10/2008 at 1:15 PM | 1 comments