Even though the release of Microsoft's SQL Server 2008 SP1 last week didn't generate much buzz, it is a noteworthy turning point for Microsoft's key database platform.
As reported by Kurt Mackie, Microsoft released SP1, which rolls up Cumulative Updates 1 through 3 into the service pack. Microsoft also added some administrative improvements, including a slipstream facility, a service pack uninstall capability and Report Builder 2.0 ClickOnce.
The latter, actually released in October, lets users query SQL Server and build reports using a familiar Microsoft Office-style interface.
Those incremental improvements aside, the release of SP1 gives a green light to IT organizations that insist on these key updates before putting new applications in production. It is well known that many enterprises won't put business-critical applications onto a major new software platform until that first service pack arrives.
But is this green light going to be enough to open the floodgates and encourage organizations to upgrade their older databases to SQL Server 2008? Even those who subscribe to Microsoft's Software Assurance plan, meaning the upgrade is already paid for, are looking to hold the line on costs that exceed the software license.
And as noted last week, many organizations are looking to open source alternatives to address costs. Open source databases still represent a small slice of the market and, despite their growth, are not expected to have a huge impact on the large-scale database market, but they are a looming factor.
Cost issues notwithstanding, SQL Server 2008 does offer higher levels of performance, scalability, policy management and security, as well as improved T-SQL for developers and support for the ADO.NET Entity Framework. While not all developers have welcomed these new features with open arms, SQL Server 2008 will also be an important component of the IDE evolution Microsoft is embarking on, the subject of this month's Visual Studio Magazine cover story by executive editor Kathleen Richards. In that piece, Richards points out that those migrating to Visual Studio Team System 2010 will need to take a hard look at SQL Server 2008:
VSTS 2010, which includes role-based client tools that incorporate VS Professional and a license to TFS, is the first major upgrade to the collaboration environment since its debut in VS 2005. TFS will drop support for SQL Server 2005 as the back-end source control system and thus require an upgrade to SQL Server 2008.
Team System also rolls up the former Developer edition into the Database edition, resulting in Architect, Tester and Database roles in addition to Team Suite, which includes all of the aforementioned functionality in a single SKU.
What's your take on SQL Server 2008? Does the release of SP1 have much effect on your organization? Drop me a line at [email protected].
Posted by Jeffrey Schwartz on 04/15/2009
It seems the open source world is gunning for a bigger piece of the SharePoint pie these days.
As Alfresco Software Inc. continues to emerge as the leading provider of open source enterprise collaboration software, rival open source vendors are stepping up their efforts.
For its part, Alfresco last week said it finished 2008 with 103 percent year-over-year revenue growth, as well as 92 percent year-over-year growth for the last quarter of 2008, for the period ended February 28. Since it didn't disclose its revenues, it's hard to get too excited about that stat, but the company does appear to be on a roll.
Alfresco said it has added 270 enterprise customers such as Federal Express, Fox Broadcasting, the New York Police Department, the State of Kansas, Sun Microsystems and Virgin Mobile.
I attended the local Industry Association of Software Architects meeting in New York a few weeks ago, where the topic was enterprise content management. It was hosted in Microsoft's offices and the speaker, coincidentally, was Jean Barmash, Alfresco's director of technical services.
While this was a vendor-neutral technology presentation, Barmash pointed out that SharePoint 2007 rearranged the competitive stakes for ECM players. "They entered the collaboration space and all of a sudden it was a billion-dollar industry and all of a sudden everyone in the industry needed to have some kind of SharePoint strategy," Barmash told attendees.
One that is making a push is Paris-based Nuxeo Corp., which last month moved into the North American market. The company, founded in 2000, offers what it calls a complete ECM suite that carries no license fees using the LGPL open source license.
Like Alfresco, it positions itself as a SharePoint Server alternative -- the company has just rolled out the Nuxeo Enterprise Platform 5.2, which adds SharePoint services support. A feature called MS WSS allows developers to implement file-based services, allowing Nuxeo to appear as a SharePoint Server via Windows Explorer and Office. "Users can save their documents into Nuxeo as if it were SharePoint," said Nuxeo CEO Eric Barroca.
The new release also includes a SQL-based storage repository, allowing for integration at the SQL level with business intelligence and ETL tools. Also new is WebWorkspace, which allows developers to create workspaces, including wikis, blogs and other collaborative Web sites.
The Paris-based company is not well-known in the United States, but it hopes to change that in the coming year, Barroca said.
Another player targeting the open source collaborative space is MindTouch Inc., which is focused more as a wiki-based collaborative application development platform. At last week's Web 2.0 Conference, the company launched MindTouch 2009, which it describes as a developer platform for building rich collaborative apps and communities.
MindTouch was founded by two researchers who worked at Microsoft under Craig Mundie, Redmond's chief research and strategy officer. One of them is Aaron Fulkerson, MindTouch's CEO and founder.
"Collaboration is very inefficient and unproductive because you have to plug in a dozen-plus disconnected application data silos, getting access is incredibly painful and time consuming," Fulkerson said. "We built MindTouch to provide a collaborative canvas to stretch across all of your existing technological assets, inside your infrastructure, Web services, services-oriented architectures, databases and applications. We have this connective tissue for all these disconnected systems."
There are many others. If you're with an enterprise developing some of these new apps using wikis and other new capabilities to enable new forms of collaboration, we'd like to hear some of the successes and challenges you are experiencing from a development and deployment perspective. Drop me a line at [email protected].
Posted by Jeffrey Schwartz on 04/08/2009
Microsoft is having a hard time convincing skeptics that its Azure cloud services will support non .NET languages.
Consider a presentation given at this week's Cloud Computing Expo in New York by Yousef Khalidi, a distinguished engineer for Microsoft's cloud infrastructure services. Khalidi emphasized that the forthcoming Azure cloud platform will support both native and managed code -- not just .NET languages but also Java, Ruby and PHP.
"I'm sitting here telling you, you can run anything you want on this platform," Khalidi insisted during a session Monday in response to a question by an attendee about Microsoft's support for different languages on its Azure Services Platform. The attendee seemed taken aback by that.
Kristof Kloeckner, CTO of enterprise initiatives and VP of cloud computing at IBM, heard Khalidi's presentation and told me he liked what he heard, but said he too has his doubts.
"I think the real question is going to be around how open is it going to be? What role are ecosystem partners going to play in Microsoft's environment? Certainly what I heard in the presentation that they want to support multiple languages makes sense. Whether that means a greater openness, I don't know. Let's just wait and see what they come up with."
Certainly Kloeckner can't be blamed for his skepticism. But it also underscores the fact that providers see a lot at stake and will use the interoperability card to jockey for position.
Kloeckner talked up the Open Cloud Manifesto, the controversial position statement signed by 70 providers ranging from startups to large organizations including Cisco, the Eclipse Foundation, EMC, IBM, the Open Group, and Sun Microsystems. But when key signatures in the cloud ecosystem were notably absent on that document -- Amazon, Google, Microsoft and Salesforce.com -- the worthiness of the effort fell into question.
I talked with numerous attendees at the conference, both those who signed it and those who held off. All agreed interoperability is an important end goal but pointed out it is still quite early in the game.
"This manifesto is a work in progress, it's a draft. As a set of goals, I don't think anyone can disagree with it; as a prescription for achieving them, I think there's a lot of discussion still to take place," said Peter Coffee, Salesforce.com's director of platform research."
"Generally speaking, I think it's too early in the game to be talking about standardization; open source has really changed the way standards evolve," said Ian Murdock, Sun Microsystems' vice president of emerging platforms.
"I think we are still grappling at what levels do we need standards, what do they need to describe and what does it need to contain," added Thorsten von Eicken, CTO of RightScale Inc., a leading provider of cloud provisioning and administration software.
"What we believe strongly in is start with de facto standards, things that work with interfaces that exist where there is customer momentum, and then build from there, as opposed to with some committee approach out of thin air, where everyone tries to come up with the end-all-be-all."
For its part, Microsoft is pushing ahead. The company today opened access to those who want to test its SQL Data Services and other components of the Azure platform, according to the Azure Journal (until now an invitation was required). Developers can now sign up on the Azure Services Platform portal.
So is the skepticism about Microsoft well-earned or do you think the company is blazing a new path to openness with its cloud services? Drop me a line at [email protected].
Posted by Jeffrey Schwartz on 04/01/2009
Microsoft has been quietly working to revive an old but trusted data access technology that some thought it had left for dead -- Open Database Connectivity or ODBC.
The popular API for providing SQL access to C and C++ applications (as well as those built in other languages), ODBC has remained dormant for the better part of the past decade after Microsoft shifted emphasis to its COM-based OLE DB. Microsoft recommitted to ODBC two years ago and, in January, released ODBC 3.8 in beta 1 of the Windows 7 SDK.
Besides adding support for SQL Server 2008, the new release provides bug fixes and three key improvements: 1) asynchronous operations, which should lend themselves well to occasionally connected applications and those connected to cloud-based services; 2) streamed output parameters for retrieving large blocks of data; and 3) support for C data type extensibility.
The latter will allow those who build drivers to create their own data structures, not currently defined in ODBC, that they can return to an application, explained Rob Steward, vice president of research and development at DataDirect, a subsidiary of Progress Software Corp.
"It's good for driver-writers such as myself or the database vendors in that they may create types that may not exist in the specification," said Steward whose company this week released a set of ODBC drivers. Still he questions whether it's good for ODBC overall.
For example, in the case of a time stamp data type representing different time zones, "the original ODBC specification doesn't have a very clean way to return that, but a driver-writer may then create their own structure, with which to return that time zone data," he said. The problem is that it creates sort of a trap for ODBC developers.
"If you use the types the driver vendors create, now your application and your code only works with that one driver, which defeats part of the purpose of ODBC," he said. "So it's kind of a built-in way for people to extend the spec on their own, which again is good and bad but overall that was the big value of ODBC -- that you can write a single set of code that works against multiple databases."
Steward is concerned that with these driver specific C data types, "your code doesn't work against multiple databases, it only works against one driver," he said.
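Steward's trap is easy to sketch. Everything below is hypothetical -- the drivers, the extended type and the values are stand-ins, not real ODBC calls -- but it shows how code bound to one driver's extended type stops working against any driver that sticks to the types in the specification:

```python
class TimestampTZ:
    """Hypothetical driver-specific type carrying time zone data --
    the kind of structure ODBC 3.8's C type extensibility permits."""
    def __init__(self, iso, tz):
        self.iso = iso
        self.tz = tz

def fetch_driver_a():
    # Driver A returns the column as a spec-standard string.
    return "2009-03-26 12:00:00"

def fetch_driver_b():
    # Driver B returns its own extended structure instead.
    return TimestampTZ("2009-03-26 12:00:00", "UTC-05:00")

def tz_aware_read(fetch):
    # Application code written against driver B's extended type.
    # It works only with driver B -- the portability trap.
    return fetch().tz

print(tz_aware_read(fetch_driver_b))   # works
# tz_aware_read(fetch_driver_a)        # fails: a plain string has no .tz
```

Code that sticks to the types in the specification keeps working across drivers, which is exactly the write-once-run-against-any-database value of ODBC that Steward worries about losing.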
But that shouldn't be a problem, so long as Microsoft's changes don't affect the driver manager, said Amyn Rajan, president and CEO of Vancouver-based Simba Technologies Inc., which builds ODBC drivers for ISVs. David Sceppa, Microsoft's ADO.NET program manager, was not available for comment.
"The real risk is about changes they are making in the driver manager -- this is code that probably hasn't been touched in at least a decade," Rajan said. So far, in its tests with the new ODBC driver in the Windows 7 SDK Simba's engineers haven't found any problems.
Rajan says he finds it interesting that after years of effectively trying to cast aside ODBC in favor of OLE DB, Microsoft has come back to ODBC. "The fact that they are extending ODBC tells me they actually decided ODBC was something that was good and something they are going to invest in and they look at this as a first class API," he said. "If they were trying to kill ODBC, they would have added this functionality to their .NET provider."
Will ODBC make Windows 7 a better client for accessing data-driven applications and content? Are Steward's concerns about the potential for C data type extensibility a concern? Drop me a line at [email protected].
Posted by Jeffrey Schwartz on 03/26/2009
If IBM actually ends up acquiring Sun Microsystems -- the rumor that surfaced yesterday -- it could have interesting implications for the database market.
Before I go on, let me be clear -- this deal is still rumored and while reports suggest it could happen in days, it could fall apart. Now on to the speculation.
Of course, the repercussions of such a megadeal extend well beyond one particular component of Sun's arsenal, which includes a contracting server business that is nonetheless well regarded technically, microprocessors, software and a deep bench of R&D. And of course there's perhaps Sun's most visible asset -- Java.
Some question what IBM would do with MySQL, considering Big Blue has the means to clone it with an express-type version of its DB2 database. Dana Gardner, principal analyst with Interarbor Solutions LLC, a Gilford, N.H. consultancy, is among those who see no sense in IBM acquiring Sun, as he posted in his blog yesterday.
"There's no reason for IBM to take open source any further than it already has, given that it still commercially has a very good business in DB2," Gardner said in a subsequent interview. "If they want to further their open source database strategy, they can accomplish that without buying Sun; they could buy Ingres or spin off an open source version of DB2."
Forrester analyst Noel Yuhanna disagrees. MySQL has a cachet with customers that no other open source database has achieved, Yuhanna says. Moreover, it has more revenues than any other open source database -- $400 million for the most recent year, he estimates, though he points out that's minuscule compared to the overall $16 billion market for database software.
MySQL, Yuhanna told me, would give IBM an opportunity to offer an alternative database to customers looking to move to an open source database.
"In these economic conditions today, companies are looking at open source databases more aggressively because they want to lower the cost, and my SQL is very mature in terms of technology," Yuhanna said. "They have that niche of becoming a more scalable, high-performance database."
Many customers see DB2 as a mainframe-class database, he added, and IBM has failed to make strong gains with that database on Windows and Linux. "If you look at the mix, it would be really complementary for IBM," he said. Analyst Curt Monash, of Monash Research, agrees. "There's little reason to think IBM would orphan MySQL or any other DBMS product," Monash wrote in a blog posting.
One key area where Sun has failed to grow the MySQL business is its lack of migration services and tools for moving off higher-end databases, Yuhanna added. Many Forrester clients have indicated that Sun hasn't improved MySQL's performance and scalability. "They fear if they don't provide very good support for the high end, we may just move away from MySQL," he said.
A resurgent MySQL would perhaps compete most directly with Microsoft's SQL Server, which, with Microsoft's recent acquisition of DATAllegro, is gaining more credibility for high-end implementations including business intelligence and data warehousing.
Yet Yuhanna says .NET customers prefer SQL Server for obvious reasons, especially now as Microsoft is providing even tighter integration with its framework and tooling. "That's where IBM would have to attack, try to provide an integration point with Java and MySQL to make it more appealing to customers."
Of course, we'll see if this all happens. If you have any thoughts, drop me a line at [email protected].
Posted by Jeffrey Schwartz on 03/19/2009
Microsoft in recent weeks began dropping hints that it would be announcing a revamped iteration of its SQL Data Services -- its cloud-based database service that's been available for testing for four months -- after the testers insisted they wanted SDS to have native relational capabilities.
In a surprise move, Microsoft said yesterday that it would expose its Tabular Data Stream (TDS) over-the-wire protocol for accessing SQL Server via its forthcoming Azure Services Platform. The move reverses the existing plan to offer SDS via the REST Web services interface. I spoke today with Niraj Nagrani, a senior product manager for SDS at Microsoft, about the changes.
Is it fair to say this is a major revamp from your initial plan?
The plan was always to deliver a relational database. A major part of this acceleration came from the feedback, but we always planned to deliver a relational database.
Did you, in effect, give up on the Entity Attribute Value [EAV] tables?
In the course of our acceleration, we heard a lot of feedback that people wanted the experience of a traditional SQL Server database with its T-SQL compatibility, and we were kind of working around delivering that aspect of it. We always wanted to deliver the SQL Server experience, so we took the traditional Entity Model and tried to imitate what SQL Server does. But based on the feedback we heard, customers preferred the traditional T-SQL-based support, so we decided to go in this direction.
Were you surprised at the reaction?
We were very happy with the reaction. Initially we were thinking going with the traditional entity model, we were calling it SQL Server. But it really was not similar to a SQL Server-type experience. So the question was, should we toy with the brand and not call it SQL Server or should we keep SQL Server and then deliver a traditional, more familiar experience to our existing customers? But we didn't have enough data points. Until we actually went to the market and got some data points, we didn't really have any justification to do it. Now we have enough proof points. We were not surprised, but we were happy to see that customers confirmed our hypothesis that they do want to have a traditional SQL-like experience.
How much did the fact that Azure Tables and SDS were seen as indistinguishable data storage services factor into the decision?
With the current acceleration to relational databases -- definitely the T-SQL-based compatibility and working with the traditional TDS protocol -- SDS becomes more like a traditional RDBMS. Azure Tables is very similar to SimpleDB-type storage, which is simple, structured storage with no relational capabilities. So there was a big differentiation between somebody needing an RDBMS in the cloud -- a highly scalable, distributed database with built-in HA [high availability], self-healing and data protection -- as opposed to structured storage with stored metadata and files.
Are you basically not going to be offering SDS with the EAV tables any more?
We are looking into our future roadmap to make sure that Astoria [ADO.NET Data Services] can be leveraged on top of SDS and Entity Data Model continues to exist, and we will continue to provide for that through Astoria. We will continue to work with the Astoria framework and figure out how SDS can support that.
TDS is not meant to be an Internet-friendly protocol. Is that going to affect performance?
We actually did a lot of benchmarking and testing. We think it's appropriate for what we are doing and the direction we are taking it. We feel comfortable, as we get more early-adopter customers and we look at the type of workloads they are building, they will keep modifying and tweaking our protocols so it's more workload-friendly.
Are you looking at other protocols, as well?
Now we are going to take the TDS and see how we can scale our services and start working with early-adopter customers. SDS will support breadth protocols including the existing TDS over TCP/IP and also options to support TDS over other transports such as HTTP for high-latency scenarios without making modifications to TDS.
So you're not concerned about the speed issues related to TDS?
If you look at any other product in a hosted environment, there is always going to be a latency issue, coming not just from the typical service but also from going over the wire. There are always going to be workloads that are OK with the latency and will move to the cloud initially, and as we go into the future, the whole cloud infrastructure will improve and support more high-performance workloads. As adoption grows and as we gain efficiencies over the Web, I am sure the latency will become a non-issue for a good number of workloads.
What about the scalability questions of relational databases versus the EAV tables used in SDS?
SDS was built on SQL Server as a back end. The engineering team did a lot of re-engineering of the existing SQL Server architecture to have it work in a scale-out infrastructure manner. One of the biggest value benefits of SQL Data Services will be that it's a scale-out architecture and infrastructure, which means that workloads can scale out based on the usage, so not only the low-end workflows that don't need to have a scale-out architecture but also the high-end workloads that currently may have a limitation on the existing Web environment, in terms of how they scale out the infrastructure.
Will SDS support data partitioning?
In SDS V1, data partitioning will need to be handled in the application. Developers who need to scale out across multiple databases will need to shard the data out themselves. In the future, based on customer feedback, we will provide automatic provisioning.
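In practice, that application-side sharding amounts to routing each key to one of several databases. Here is a minimal sketch of the pattern, using in-memory SQLite databases as stand-ins for separately provisioned cloud databases; the modulo routing and the table schema are illustrative assumptions, not SDS APIs:

```python
import sqlite3

# Stand-ins for separate cloud databases; in SDS V1 the application
# would hold one connection per provisioned database.
shards = [sqlite3.connect(":memory:") for _ in range(3)]
for db in shards:
    db.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")

def shard_for(customer_id):
    # Application-side routing: a simple hash/modulo scheme.
    return shards[customer_id % len(shards)]

def insert_customer(customer_id, name):
    db = shard_for(customer_id)
    db.execute("INSERT INTO customers VALUES (?, ?)", (customer_id, name))

def get_customer(customer_id):
    db = shard_for(customer_id)
    row = db.execute("SELECT name FROM customers WHERE id = ?",
                     (customer_id,)).fetchone()
    return row[0] if row else None

for cid, name in [(1, "Ann"), (2, "Bob"), (5, "Eve")]:
    insert_customer(cid, name)

print(get_customer(5))
```

The catch, of course, is that cross-shard queries and rebalancing become the application's problem -- which is why automatic partitioning is the feature developers keep asking for.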
In [senior program manager] David [Robinson's] blog posting yesterday, he wrote the majority of database applications will work. What types of applications are not going to be suited for this environment that developers should beg off?
There are certain workloads that are natural to clouds. In terms of Web workloads, we see them going to the cloud. We see a lot of data protection and storage-type workloads going to the cloud, like CRM applications, content management, product lifecycle management, supply chain and collaboration across the enterprise. What we continue to work toward is having data warehouses and data marts in the cloud. We are seeing a lot of excitement around BI workloads in the cloud, or reporting-type applications living in the cloud. There is probably a natural tendency for these early-adopter workloads to go to the cloud right away, and there is going to be a tendency for some other workloads, like data warehouse and real OLTP workloads, to go to the cloud in time.
What will be the surcharge for SDS over Azure Table Services?
We are still working on the pricing. I think sometime in the middle of the year, we will have some more information on the actual business model.
Do you think it will be competitive with the 60-cent-per-hour standard rate or the $1.20-per-hour enterprise rate that Amazon is offering on EC2?
We are still working on that. We certainly don't have a range or a price point at this point.
Will the new SDS run on SQL Server 2008?
It is currently using 2005, but we have a roadmap to move to 2008.
Upon release?
That's the plan.
Will SDS use Enterprise Edition?
It will use Enterprise Edition. Just to be clear, when we say Enterprise Edition, we don't just take the box and put it in the cloud. You're really not going to take the code bit by bit and line by line and put it in a box and run it on SDS because it is not a hosted environment -- it's a shared database infrastructure. The code base is taken from the enterprise; we have an enhanced architecture to run on datacenter machines. We can leverage the cost benefit of running it on cheap hardware but deliver an enterprise-class, mission-critical database.
Will it be TDE [Transparent Data Encryption] Enabled?
We are looking at different security features of how we can enable it. The thing is, there is a list of features that are available on-premises and quite frankly there's going to be some features that we leverage from inside-out and there are going to be a lot of features coming from outside-in based on the customer feedback.
How will users of TDI [Trusted Database Interpretation] and column-level encryption protect their private keys from unauthorized access?
We are looking into the type of workload and requirements for row-level security and column-level security and based on the requirements, we will actually enable those features.
How will data partitioning be handled?
We built an intelligent software infrastructure across the nodes that actually knows the size of each node and partitions data across the nodes.
Will all SQL Server transaction scopes be supported?
That's the plan.
What should developers be on the lookout for next week regarding SDS?
People will see the code and the bits running. There will be a demo of our SDS relational data model; you will see it working, and there will be a good discussion of the architecture under the hood and the types of applications that can be built in real time. That will give a sense of how easy it is to use T-SQL in applications, or to run existing T-SQL applications in the cloud.
Posted by Jeffrey Schwartz on 03/11/2009
Microsoft appears to be revamping its SQL Data Services with plans to add relational services, a move that does not seem to be catching too many observers by surprise.
As reported by blogger Mary Jo Foley last week, it appears Microsoft is overhauling SDS, launched initially one year ago as SQL Server Data Services. For its part, Microsoft is promising some big SDS news at MIX 09 in two weeks. "We will be unveiling some new features that are going to knock your socks off," wrote Microsoft senior program manager David Robinson in the SDS team blog last week.
Perhaps putting pressure on Microsoft is the availability of SQL Server hosted on Amazon's EC2 service, and the launch last week of a cloud-based relational database service from a two-person startup -- built on Sun Microsystems' MySQL, no less, by a former .NET developer.
"The way they built SQL Data Services looks a lot like Amazon's SimpleDB and that's really not a database," said Andrew Brust, chief of new technology at consulting firm twentysix New York and a Microsoft regional director. "It's really an entity store, which works well for some things. It's great for content management for example but for what relational databases are typically used for, not so much.
Making matters worse was that developers had higher expectations, said Oakleaf Systems principal Roger Jennings. "What they promised was full-text search, secondary indexes, schemas and a few other relational niceties, but didn't deliver on those. They did deliver support for BLOBs," said Jennings, who tested SDS last summer.
But Microsoft and others may face challenges even with hosting relational versions of databases in the cloud, Jennings has maintained in his blog postings. "I don't think they will be able to scale it," Jennings said, reiterating his posting last week. "Traditional relational databases don't deliver the extreme scalability expected of cloud computing in general and Azure in particular," he wrote.
"I think the move to the cloud is going to be very hard. It's one of those easier said than done things," Brust added. "This isn't just about hosting the server products."
Are you anxious to hear what Microsoft has planned for SDS? Drop me a line at [email protected].
Posted by Jeffrey Schwartz on 03/04/2009
Microsoft's announcement that it will offer an Oracle database plug-in for the next release of Visual Studio Team System is a coup for SQL Server developers who have little or no experience with the rival but widely deployed data repository.
The Oracle plug-in is a Database Schema Provider (DSP) that will be made available as an option to VSTS 2010 by Quest Software Inc., said Jason Zander, Microsoft's general manager for Visual Studio, who made the announcement at the VSLive! conference in San Francisco, as reported by Redmond Developer News editor Kathleen Richards.
"When you use those two things together, I will be able to write my code and explore my schemas and do all of that advanced functionality with Oracle," Zander said. "That gives Team System support for the three most popular databases in use by database programmers."
Quest is no stranger to the Oracle database platform -- it makes the widely used Toad for Oracle tools, which it has offered for more than 10 years. "In supporting Visual Studio Team System we are supporting another platform that an Oracle DBA or developer, if they want to be part of this Team System methodology, can use," said Daniel Norwood, a product manager at Quest.
The Oracle DSP will not come out of the box, but will be made available as a third-party add-on. Quest has not disclosed pricing and availability. Microsoft had earlier said that IBM will offer a VSTS 2010 plug-in for its own DB2 database platform. I talked with Norwood and Daniel Wood, development manager at Quest, to get an understanding of what this means.
Based on that interview, here's a brief FAQ:
How will this benefit database developers?
An application developer working against a back-end database that might one day be SQL Server may move on to another project that's going to be on an Oracle database. The developer can maintain consistency in their tool set by sticking with VSTS and working against the different database platforms.
Does this presume the Visual Studio developer perhaps is not familiar with Oracle's PL/SQL language or do they have to have some understanding of that?
They are going to have some limited PL/SQL experience just by the nature that they're developing against Oracle. But developers can go in with limited PL/SQL experience, they can click, file, add new items inside of VSTS and get basic scripts that show them how to create the tables, the indexes, the views and the various other objects that they need for their application. They can then model those objects and change how they need to be to fit what they're developing.
Why can't it work with current or earlier versions of Visual Studio?
The code that enables the extensibility for companies like Quest or IBM to plug in these database schema providers will be released publicly for the first time in the 2010 release, via Microsoft's new Managed Extensibility Framework.
What Oracle databases will it work against?
Oracle 9i through 11g, and later releases.
When will VSTS 2010 testers be able to try Quest's new plug-in?
A beta is planned in the coming weeks.
Are you testing Visual Studio 2010? We'd love to hear your findings for a cover story that Kathleen Richards is writing for next month's Visual Studio Magazine. Please drop a line to her at [email protected].
Posted by Jeffrey Schwartz on 02/25/2009
When I pointed last week to the potential conflict that Microsoft's Office SharePoint Server (MOSS) can create for database developers and DBAs, I apparently struck a nerve. Some said I hit the nail on the head, while one said I was oversimplifying the matter and creating FUD.
But for some database developers and administrators -- and in many cases those higher up in the IT food chain -- the unintended consequence of SharePoint's growth can be a loss of control over how data is kept in sync as more of it ends up in MOSS.
"You don't have to be a developer to go in there," said Ed Smith, a systems analyst at Tetra Pak, a global supplier of packaging machines. "You can get two secretaries together, they can figure out what they want to do and they can start putting stuff in there."
While SharePoint is popular for storing and sharing documents and other unstructured content, complex structured data -- large numbers of rows and columns, multiple lists that must be joined together, and certainly anything where referential integrity is critical -- belongs in a relational database server.
"We discovered that lists are the place to store information, but they can't be a substitute for tables, relations and, of course, referential integrity," wrote Prabhash Mishra, CEO of Skysoft IT Services Pvt. Ltd., based in Noida, India, in an e-mail.
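That distinction is easy to demonstrate. The sketch below uses SQLite (via Python's standard library) purely as a stand-in for a relational database server: a foreign key constraint makes the database itself reject an orphaned row, which is exactly the guarantee a pair of loosely joined SharePoint lists cannot give you.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite leaves FK enforcement off by default
conn.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE orders (
    id          INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customers(id)
);
""")
conn.execute("INSERT INTO customers (id, name) VALUES (1, 'Acme')")
conn.execute("INSERT INTO orders (id, customer_id) VALUES (10, 1)")  # valid row

rejected = False
try:
    # Customer 99 does not exist, so the database refuses the row --
    # something two joined SharePoint lists would silently accept.
    conn.execute("INSERT INTO orders (id, customer_id) VALUES (11, 99)")
except sqlite3.IntegrityError:
    rejected = True
print("orphan rejected:", rejected)
```

The enforcement lives in the database, not in application code, which is why "referential integrity" keeps coming up as the line SharePoint lists should not cross.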
In an interview last week, Paul Andrew, Microsoft's technical product manager for the SharePoint Developer Platform, said many are already building custom applications on SharePoint that use a mix of SQL Server schema and tables within MOSS. "Of course, each has its own strengths, and each is better suited in different parts of an application," Andrew says.
Not all DBA organizations are wary of SharePoint. "My manager, the head of our DBA organization, loves SharePoint," wrote Kevin Dill, IT innovation analyst at Grange Mutual Casualty Co. in an e-mail. "For example, he regularly stores information on SQL clustering and business-specific SAS functions in the wiki and document library, for easy searching."
Dill added that SharePoint will not completely replace traditional DBAs. "While you can configure many aspects in the Central Admin, you still need DBAs to monitor data growth and backups," he said. "In fact, SharePoint can help DBAs maximize their time on the things that really matter, instead of provisioning little one-off databases for internal projects."
But in many shops, SharePoint is outside the purview of database developers and DBAs. One way to avoid the problem of allowing employees to work around them is for IT organizations to put controls over who is permitted to commit data to SharePoint servers. That's the case for the City of Prince George, BC.
"We've locked it down pretty much so that all they can do is put in content directly," says programmer/analyst Rob Woods. "Nobody else really has the option of doing any of this kind of stuff except for the IT staff."
Posted by Jeffrey Schwartz on 02/18/2009
It has been well chronicled how pervasive Microsoft's SharePoint Server is becoming in all enterprises.
Just look at the large pharmaceutical conglomerate Pfizer, which has 6,000 SharePoint sites used by 63,000 employees -- that's two-thirds of its entire rank and file.
But if you're a database developer or administrator, the rampant growth of SharePoint has to be a concern. Independent consultant and Microsoft MVP Don Demsak told me in a recent conversation that this concern is common and will only grow, because so much of the data going into SharePoint is better suited to a SQL database. The reason is clear.
"SharePoint is very successful because you are removing levels of impedance, and a lot of the DBAs hate SharePoint for those same reasons," says Demsak. "Basically you are storing everything in a BLOB, and you can't relate to object relational mapping, and you can't do good entity relationships, you can't do relational models because there's all sorts of problems when people try to extend that SharePoint model past where it's supposed to go."
This is reminiscent of the 1990s, when Lotus Notes started to grow in enterprises, Demsak recalls. "I remember when every database was a Notes database, and it shouldn't have been. You're storing documents and that sort of stuff in there. That's great, but you've got rows, columns, and you're trying to join this list to that list, and that's not what those tools were made for," he says. "When you need a relational database, you use a relational database, the way it was supposed to be."
This is more than an annoyance, though: when an organization requires referential integrity, that is exactly what relational databases were intended to provide. If you're a database developer or DBA, drop me a line and tell me how you are dealing with this. I'd like to hear from SharePoint developers too. I'm at [email protected].
Posted by Jeffrey Schwartz on 02/12/2009
It has taken longer than initially planned, but the release candidate of Microsoft's ASP.NET Model View Controller (MVC) framework, which applies the MVC design pattern to support test-driven development of enterprise-scale Web applications, is now available.
As reported Tuesday, Microsoft is urging developers to check out the feature-complete release candidate, which is slated to ship next month presuming no major issues arise, said Scott Guthrie, corporate vice president of Microsoft's developer division, in a blog posting announcing the release.
While a release candidate suggests that software is just about ready to roll, that doesn't mean the RC 1 is bug free. "Unfortunately, a few minor bugs did crop up at the last second, but we decided we could continue with this RC and fix the bugs afterwards as the impact appears to be relatively small and they all have workarounds," wrote Phil Haack, a senior program manager for Microsoft's ASP.NET MVC Framework, in a blog posting.
One particular bug he points to is that the controls collection cannot be modified, and he offers what he describes as a straightforward workaround.
Those issues notwithstanding, Microsoft has suggested that ASP.NET MVC, its answer to the Ruby on Rails framework used by Web developers for rapid prototyping of applications in the Ruby dynamic language, will appeal to a small subset of the overall .NET developer community.
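What makes the MVC pattern attractive for test-driven development is the separation itself: the controller is ordinary code over plain data, so it can be unit-tested without a web server. Here is the pattern sketched in plain Python rather than ASP.NET/C# -- all names are hypothetical, and this is a sketch of the design idea, not of Microsoft's framework.

```python
# "Controller": an ordinary function that asks the model for data.
def get_products(model):
    return model.all()

# "View": turns data into markup, with no knowledge of where it came from.
def render_list(items):
    rows = "".join(f"<li>{name}</li>" for name in items)
    return f"<ul>{rows}</ul>"

# Test double standing in for a real database-backed model -- this is
# the piece that makes the controller testable in isolation.
class FakeProductModel:
    def all(self):
        return ["Widget", "Gadget"]

# "Routing": a request for /products invokes the controller, then the view.
html = render_list(get_products(FakeProductModel()))
print(html)  # <ul><li>Widget</li><li>Gadget</li></ul>
```

Contrast this with classic Web Forms, where page markup, state and logic are entangled in one page class; the MVC split is what lets each layer be swapped or tested independently.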
I'd be curious to hear thoughts from those who see it changing the way they build Web apps as well as those skeptical of this rapid application development model. Drop me a line at [email protected].
Posted by Jeffrey Schwartz on 01/29/2009
I'm not big on making New Year's resolutions. Instead, every year at this time I make a promise to myself that I will try something new. That thought struck a chord last week when I was chatting with independent consultant Don Demsak, a Microsoft MVP.
While we were talking in general about these tough economic times, Demsak noted that if you're a .NET developer with a broad set of skills, you may be better off than many others in the IT profession these days. However, just being a .NET developer won't necessarily make you stand out in the crowd, he warned.
"Right now anyone that's done real WCF [Windows Communications Foundation] work, doesn't have a problem finding a job," he said. "The average ASP.NET Web Form developers are having a harder time finding a job." Who else is in for a hard time? "The general .NET developer who doesn't know good object-oriented programming practices are the ones I see having the hardest time," he said.
Specific product certifications aren't enough in this day and age. "That's why I am thinking along the lines that people learning ASP.NET MVC, that's going to be a delineator on your resume," he suggested. I asked, isn't that going to appeal to a small percentage of development requirements? The point, he said, is that whether or not it applies to the work you are doing, it shows that you are learning new programming habits.
Scott Hanselman, a principal program manager at Microsoft, made the now oft-cited comment two years ago that developers should learn a new language every year -- advice Demsak endorses. "If you're a developer who does a lot of functional programming via SQL, go learn an imperative language, like C# or Java," he said.
Another example: database developers might want to branch into business intelligence. Learning things like Multidimensional Expressions, or MDX, the cross-platform language for querying OLAP cubes, is going to be crucial. "More and more companies are trying to add BI capabilities to their applications, both public-facing and internal, and it's a big leap to actually get there," he said. If you're not familiar with Microsoft's BI strategy, Redmond Developer News columnist Andrew Brust gave a good synopsis last month.
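If MDX and cubes are unfamiliar, the underlying idea is rolling up a fact table along one or more dimensions. The toy sketch below does in plain Python what a simple MDX query does against a cube -- the sales facts and dimension names are invented for illustration, and a real cube precomputes and stores such aggregates rather than scanning rows.

```python
from collections import defaultdict

# Hypothetical fact rows: two dimensions (region, product), one measure (amount).
facts = [
    ("East", "Widget", 100), ("East", "Gadget", 50),
    ("West", "Widget", 70),  ("West", "Gadget", 30),
]

# Roughly what slicing the cube by product while rolling up regions yields.
totals = defaultdict(int)
for region, product, amount in facts:
    totals[product] += amount

print(dict(totals))  # {'Widget': 170, 'Gadget': 80}
```

The leap Demsak describes is going from this kind of one-off aggregation to dimensional modeling, where the cube answers any such slice-and-roll-up question without custom code.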
For his part, Demsak recently gave jQuery a shot. Microsoft thrilled a lot of developers back in October when it said it would package the lightweight JavaScript library, which simplifies interaction between JavaScript and HTML, with Visual Studio and ASP.NET.
"I love it. I am switching all my stuff over to it now from standard ASP.NET Web forms," he said, adding that a large population of developers may want to do the same.
Want to give it a spin? You can find some good tips here from Microsoft corporate VP Scott Guthrie and from Rick Strahl, president of Maui-based West Wind Technologies, which specializes in distributed application development.
In any case, these are uncertain times: Make it a point to try something new this year.
Posted by Jeffrey Schwartz on 01/14/2009