Microsoft last week updated its latest WCF Data Services version so it will work with Entity Framework 6.
Rather than requiring the download of a new WCF DS version, the update to version 5.6.0 comes in the form of an out-of-band alpha1 NuGet package called, appropriately, WCF Data Services Entity Framework Provider.
WCF DS 5.6.0 was released in August with support for Visual Studio 2013, portable libraries and other enhancements. The VS 2013 support lets developers consume OData services with the Add Service Reference functionality. The portable libraries support lets developers use the new streamlined JSON format
(part of the OData v3 protocol, sometimes called "JSON light") in Windows Store and Windows Phone 8 apps. While core libraries have support for .NET Framework 4.0, the WCF DS client portable library support targets .NET 4.5. Both the core libraries and the WCF Client have support for Windows Phone 8, Windows Store and Silverlight 5 apps.
Users, however, wanted more. A couple of readers responded in the comments asking about OData v4 support, and one asked, "Does this release include EF 6 support?" Microsoft's Mark Stafford last week replied, "Sort of. The EF 6 support will come in a different NuGet package, which will go into alpha today."
The Oct. 2 NuGet update that catches up to EF 6 was made possible by some work the WCF DS team was doing to make providers public. The team wanted to override provider behavior so developers could use features such as spatial properties and enums, which lack native support in the OData v3 protocol. "Unfortunately we ran into some non-trivial bugs with $select and $orderby and needed to cut the feature for this release," the team said in its August announcement.
However, that work paid off later, the team said in last week's update announcement. "We were able to build this provider as an out-of-band provider (that is, a provider that ships apart from the core WCF DS stack) because of the public provider work we did recently," the team said.
The new support for EF 6 essentially comes down to a new class, EntityFrameworkDataService<T>, where T is a DbContext type. For earlier EF versions, developers continue to use the base class DataService<T>, where T is a DbContext or the older ObjectContext.
"The new public provider functionality comes with a little bit of code you need to write," the team said. "Since that code should be the same for every default EF 6 WCF DS provider, we went ahead and included a class [the new EntityFrameworkDataService class] that does this for you."
The team admitted that it didn't have time to do full testing on the new provider because the developers were "heads down" preparing for OData v4 support. "So ... we're going to rely on you, our dear customer, to provide feedback of whether or not this provider works in your situation. If we don't hear anything back, we'll go ahead and release the provider in a week or so." Which should be right around now.
Have you tried it yet? Comment here or drop me a line.
Posted by David Ramel on 10/10/2013 at 6:41 AM
I was dropped by my previous auto insurance company for a couple of at-fault accidents on my wife's driving record.
Trouble was, she was not involved in those accidents in any way. They happened to somebody else and somehow got on her report from a data collection company used by the insurer. And, try as I might, I could not convince the insurance company of this. I provided the company with a note from my previous insurer confirming that those accidents were not hers. I even provided an official driving record from the state showing those weren't her accidents. It didn't make any difference to the insurance company (as much as I'd like to see the company burned to the ground in an agonizing bankruptcy, I won't name it, but it definitely wasn't on my side). The accidents were on the ChoicePoint report--that's all that mattered.
I contacted the data collection company and began the nightmarish process of trying to get their information corrected. I eventually gave up; it just wasn't worth the hassle they were putting me through. (Ironically, I've never--ever--been in an at-fault accident. Believe it or not, I've never even received a moving violation, in several decades of driving. I was probably one of the best customers the insurance company could've had.)
I bring up these painful memories because of recent reports about a Big Data company, Acxiom, that this month announced a portal where individuals can look up information collected about them. Several articles noted that the portal, AboutTheData.com, reported some incorrect information. So I checked it out.
Sure enough, the site had a few things wrong, including my birth date, which was strange because I had just provided that date as part of registering for the privilege of looking up my info (pretty sly way to collect data, when you think about it--these people aren't stupid, like people in some other companies, if you know what I mean). They also got my education level, race and age of children wrong, among a few other things. Keep in mind the portal is in beta, and it gives you the chance to correct the data (I didn't even try to go there) and even opt out of the system.
So, just a warning: If you're a developer and your company is hopping on the Big Data bandwagon and you're assigned to the project, be very careful about the quality of the information you collect, especially if the data will have a significant impact on the success of the project--and the company's bottom line.
I mean, just imagine how much money that previous insurance company left on the table if the fiasco I experienced was commonplace among its multitude of customers. Fortunately, my new insurer uses a different data collection company that actually has accurate information and I got a sweet rate. And my present insurer is soaking up those monthly premiums and hasn't had to pay out a dime. Think of that, repeated thousands and thousands of times. If they only knew, I imagine the headquarters honchos in Columbus, Ohio, would be kicking themselves.
Do you have any Big Data horror stories? Comment here or drop me a line.
Posted by David Ramel on 09/19/2013 at 6:47 AM
Microsoft may have been late to the cloud party, but its Windows Azure ranks near the top when it comes to popularity for data-related development, according to a new survey from Forrester.
The Forrsights Developer Survey, Q1 2013, found that North American and European developers preferred the Amazon EC2 cloud service for compute by a significant margin over Windows Azure, but it's a different story for Relational Database Management System (RDBMS) services.
"Microsoft and Amazon are neck and neck amongst users of cloud RDBMS," said Forrester analyst Jeffrey Hammond in a blog post yesterday.
The survey found that the top three types of cloud services adopted by developers are compute, storage and relational data services. Hammond said 47 percent of respondents regularly use compute and storage services, while 36 percent use RDBMS services. These numbers come from 325 developers in the North American and European regions (out of 1,611 total) who reported they had used cloud computing or elastic applications.
Of those respondents using cloud compute services, 62 percent said they were using Amazon EC2 or planned to expand their use of it in the next year. That compares to 39 percent for Windows Azure and 29 percent for the Google Cloud Platform.
That gap of more than 20 points in adoption "is well outside a standard margin of error, so we have to give the nod to AWS when it comes to compute," Hammond said.
"Things are very different when it comes to developers using cloud-based RDBMS services," Hammond said. There, 48 percent of developers reported they were "implementing and expanding" use of Microsoft SQL Azure, followed by 45 percent for the Amazon RDS service (see Figure 1). However, that 3-point gap is within the margin of error.
"Also note that there are a high number of developers that are planning to implement Amazon's RDS service (27 percent) while 7 percent of Microsoft SQL Azure developers plan to decrease or remove their RDBMs workloads," Hammond said. "Overall, we'd have to rate this workload as a push--with no clear adoption leader."
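Hammond's margin-of-error reasoning is easy to sanity-check. The sketch below uses the standard normal approximation for a sample proportion on the 325 cloud-using respondents (an assumption on my part; Forrester's actual methodology may differ):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95 percent margin of error for a sample proportion p on n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

n = 325  # developers in the survey who reported using cloud services

# Compute services: 62 percent EC2 vs. 39 percent Windows Azure.
compute_gap = 0.62 - 0.39
compute_moe = margin_of_error(0.62, n) + margin_of_error(0.39, n)

# RDBMS services: 48 percent SQL Azure vs. 45 percent Amazon RDS.
rdbms_gap = 0.48 - 0.45
rdbms_moe = margin_of_error(0.48, n) + margin_of_error(0.45, n)

print(f"compute: gap {compute_gap:.2f} vs. combined error band {compute_moe:.3f}")
print(f"RDBMS:   gap {rdbms_gap:.2f} vs. combined error band {rdbms_moe:.3f}")
```

Each percentage carries an error of roughly plus or minus 5 points, so the 23-point compute gap clears even the two bands combined, while the 3-point RDBMS gap falls well inside them--consistent with Hammond's "push" call.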
The percentage of developers who reported using cloud storage and plan to expand that usage over the next 12 months was almost equally divided among Amazon Web Services, Windows Azure and Google Cloud Storage, at 25 percent, 22 percent and 23 percent, respectively.
"Amazon still has a larger body of developers that have implemented but are not expanding AWS S3 (17 percent compared to 10 percent for Microsoft Azure and 9 percent for Google, respectively)," Hammond said. "Our take: this workload looks like it's headed for a strongly competitive market in 2014."
So, despite a lag of about 4 years between the introduction of Amazon EC2 (August 2006) and Windows Azure (February 2010), Microsoft has caught up in attracting developers to its cloud platform. That's interesting news, considering the popular backlash about Microsoft's decision to not provide developers early access to the Windows 8.1 RTM.
What is it that makes Windows Azure database-related services so popular among developers? Comment here or drop me a line.
Posted by David Ramel on 09/05/2013 at 1:57 PM
The Microsoft Entity Framework has a spotty history of inconsistent release strategies, lagging feature requests and other issues, but things seem to be getting better with new leadership and even community contributions since it went open source.
The past problems with the Object Relational Mapping (ORM) tool were candidly discussed this week when EF team members Rowan Miller and Glenn Condron visited with Robert Green on the Visual Studio Toolbox Channel 9 video show to preview EF 6, which is "coming soon."
Reviewing past release history, Miller admitted the 2 years it took to update EF 3.5 SP1 to EF 4 was excessive. "That's really slow if you're waiting for new features," he said. Trying to get new features out to customers sooner, the team tried a hybrid model for subsequent releases with the runtime core components being part of the .NET Framework while new features shipped out-of-band via NuGet and the tooling was part of Visual Studio.
"We made one not-very-good attempt at doing an out-of-band release of the tooling, the 2011 June CTP of 'death and destruction' as we call it on our team," Miller said. "It went horribly; our product just wasn't set up for shipping out-of-band."
While the hybrid model got features out more quickly, Miller noted there was a confusing mismatch of EF and .NET versions, so the team "bit the bullet" going from EF 5 to EF 6, going out-of-band by moving the runtime to NuGet and the tooling to the Microsoft Download Center, while also being "chained in" to new Visual Studio releases.
The whole history, which Condron admitted was "still confusing," is explained in the 1-hour-plus show if you want to try to sort it out.
I was more interested in the move to open source at the same time EF was moved out of the .NET Framework, putting source code, nightly builds, issue tracking, feature specifications and design meeting notes on CodePlex. "Anything that we can make public, we made public," Condron said. The team also opened up the development effort to the community. "We happily accept contributions from as many people as we can," Condron said. But that strategy raised concerns among some, so he emphasized that only the EF team has commit rights, and any pull requests must go through the same rigorous code review and testing process as internal changes.
"We've gotten 25 pull requests; we've accepted 21 from eight different contributors," Condron said. He noted that while the development effort is open source, the shipping, licensing, support and quality remain in the hands of Microsoft.
Green noted that the strategy was a great way to get more people on the development effort, and Condron agreed, citing the thorny issue of enum support, long requested by thousands of customers but not introduced until EF 5 and .NET 4.5. "Why is there no enum support? Because nobody had written enum support," he said. "If you guys write it, we'll put it in." You can see the list of contributors on CodePlex (though only seven were listed in the Feb. 27 post).
Other benefits of open source, Miller said, include better customer relations, transparency and release cadence. "Back in the 3.5 SP1 days, we weren't listening to customers that great, but we've ... opened up the EF design blog" and started doing more frequent releases, he said. Condron also noted that with the nightly builds, the team has even gotten a few immediate bug reports the very next day after some source code was broken in the ongoing development efforts, which he said was "fantastic."
And the open source strategy will be expanding, Miller said. "The open source codebase at the moment just has the runtime in it," he said. "The tooling for EF 6 isn't going to be there, but after the EF 6 release, we're planning to put the tooling in as well. So we'll have the tooling there and you'll get to watch us develop on it, and we'll even accept pull requests on it as well."
Previewing EF 6, Miller and Condron ran through many new features coming from the EF team, such as nested entity types, improved transaction support and multiple contexts per database. But other new features are coming from community contributors, such as custom migrations operations, improved warm-up time for large models, pluggable pluralization and singularization services and "a few other cool things."
EF 6 is currently in Beta 1 and will go to RTM at the same time as Visual Studio 2013, with an RC1 coming "soon."
[CLARIFICATION: The RC1 was actually made available on Wednesday, Aug. 21.]
For more information, see the EF feature suggestions site and CodePlex for community contribution guidelines and a product roadmap.
What do you think of the new EF 6? Comment here or drop me a line.
Posted by David Ramel on 08/22/2013 at 8:00 AM
Here's a troublesome aspect of the Big Data revolution I didn't expect: the melding of mind and machine. IBM yesterday unveiled a completely new computer programming architecture to help process vast amounts of data, modeled on the human brain.
The old Von Neumann architecture that most computers have been based on for the past half century or so just doesn't cut it anymore, scientists at IBM Research decided, so something new was needed, starting from the silicon chip itself. Ending with ... who knows, maybe some kind of organic/electronic hybrid. I keep getting visions of Capt. Picard with those wires sticking out of his head after being assimilated by the Borg. (If you don't know exactly what I'm talking about here, I've misjudged this audience completely and I apologize--please go back to whatever you were doing.)
The Von Neumann architecture encompasses the familiar processing unit, control unit, instruction register, memory, storage, I/O and so on. Forget all that. Now it's "cognitive computing" and "corelets" and a bunch of other stuff that's part of a brand-new computer programming ecosystem--complete with a new programming model and programming language--that was "designed for programming silicon chips that have an architecture inspired by the function, low power, and compact volume of the brain," IBM said.
Those would be "neurosynaptic chips" stemming from IBM's SyNAPSE project headed by Dharmendra S. Modha, unveiled in August 2011. In a video, Modha called the new system a "brain in a box." Now IBM is sharing with the world its vision of a new programming language and surrounding software ecosystem to take advantage of the chips, partially in response to the Big Data phenomenon.
"Although they are fast and precise 'number crunchers,' computers of traditional design become constrained by power and size while operating at reduced effectiveness when applied to real-time processing of the noisy, analog, voluminous, Big Data produced by the world around us," IBM said in its announcement.
That theme was echoed in the description of the SyNAPSE project, which explained that the old "if X then do Y" paradigm wasn't enough anymore.
"With the advent of Big Data, which grows larger, faster and more diverse by the day, this type of computing model is inadequate to process and make sense of the volumes of information that people and organizations need to deal with," IBM said.
The company said its new system could result in breakthroughs such as computer-assisted vision.
"Take the human eyes, for example," IBM said. "They sift through over a terabyte of data per day. Emulating the visual cortex, low-power, light-weight eye glasses designed to help the visually impaired could be outfitted with multiple video and auditory sensors that capture and analyze this optical flow of data."
That's fine and dandy--and seriously laudable. But, of course, these systems will eventually somehow be connected. And with the machine learning focus, the machines will, well ... learn. And after they learn enough, they will become self-aware. And when they become self-aware, they'll realize they're vastly superior to pathetic humankind and don't really need us around anymore. And then you've got some kind of dystopian nightmare: Skynet and Arnold androids, Borg, whatever.
Well, that's fine. I, for one, welcome our new machine overlords. I'm getting fitted for my headpiece right away. Have a good weekend!
What's your plan? Are you up for learning a new programming language? Did you get all the pop culture references? Please let me know by commenting here or dropping me a line.
Posted by David Ramel on 08/09/2013 at 2:53 PM
The SQL Server community this week engaged in a lively debate about limitations of the 2014 Standard Edition and Microsoft licensing practices.
The discussion--highlighted on Hacker News--was sparked by a post by database consultant/blogger Brent Ozar, titled "SQL Server 2014 Standard Edition Sucks, and It’s All Your Fault."
"Every release lately, Microsoft has been turning the screws on Standard Edition users," Ozar wrote. "We get less CPU power, less memory, and few (if any) new features."
He complained that organizations wishing to use more than 64GB of memory needed to step up to the more expensive Enterprise Edition (see feature comparison). After listing deficiencies of the Standard Edition--such as not allowing database snapshots, auditing, numerous business intelligence (BI) features and more--he enjoined readers to boycott the product:
"Microsoft won’t change its tack on SQL Server licensing until you start leaving. Therefore, I need you to stop using SQL Server so they’ll start making it better. You know, for me."
You should check out the Hacker News discussion (116 comments as of noon Wednesday). Even if you aren't interested in the squabbling--some readers dared to contend that 64GB was plenty of RAM for small businesses, for example--you can learn a lot about the nitty-gritty concerns and trials and tribulations of SQL Server users and developers on the front lines.
Or, as Ozar put it in an update to his post yesterday: "If you [DBAs] want to stay current on what startup developers think about databases for their new projects, HackerNews is a good reality check. It's a completely different perspective than the typical enterprise developer echo chamber."
What do you think about Microsoft licensing terms for SQL Server and limitations of the Standard Edition? Comment here or drop me a line.
Posted by David Ramel on 07/31/2013 at 1:15 PM
Microsoft on Tuesday announced the availability of a Premium preview for Windows Azure SQL Database with beefed-up features for cloud-based business-class applications.
Those features include reserved capacity for each database for better and more predictable performance. The Premium service "will help deliver greater performance for cloud applications by reserving a fixed amount of capacity for a database including its built-in secondary replicas," Microsoft said.
An e-mail alert said that current SQL Database customers--excluding free trials--can sign up to receive an invitation to the limited preview.
A SQL Server Blog on TechNet noted that the preview--first announced earlier this month at the Worldwide Partner Conference in Houston--is ideal for:
- Apps that require a lot of resources such as CPU cycles, memory or input/output operations. An example is a database operation that consumes many CPU cores for a long time.
- Apps that require more than the limit of 180 concurrent connections provided in the Web and Business editions.
- Apps that require a guaranteed fast response, such as a stored procedure that needs to return quickly.
To satisfy these demanding apps, Microsoft is initially offering two levels of reservation size, called P1 and P2. The former offers one CPU core and 8GB of RAM at a preview price of $15 per day and an eventual general availability price of $30 per day (in addition to storage). The P2 service doubles all those numbers. Full pricing is available here.
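Since P2 simply doubles every P1 number, the pricing is easy to tabulate. A minimal sketch (the 30-day projection and the dictionary layout are my own illustration; storage charges, as noted, are extra):

```python
# P1 reservation as described in the announcement.
p1 = {"cpu_cores": 1, "ram_gb": 8, "preview_usd_per_day": 15, "ga_usd_per_day": 30}

# P2 doubles every P1 number.
p2 = {key: value * 2 for key, value in p1.items()}

# Rough 30-day cost projections, excluding storage.
for name, tier in (("P1", p1), ("P2", p2)):
    preview_month = tier["preview_usd_per_day"] * 30
    ga_month = tier["ga_usd_per_day"] * 30
    print(f"{name}: ~${preview_month}/month in preview, ~${ga_month}/month at GA")
```

That puts a P1 database at roughly $450 a month during the preview, and a P2 at roughly $1,800 a month once general-availability pricing kicks in.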
There's already lots of detailed information about the Premium preview. Our sister site Redmond Channel Partner covered the announcement, and Microsoft has an extensive guidance page with all the nitty-gritty details you could ask for. Scott Guthrie also provides some information on the release, in addition to discussing new support for performing "recurring, fully automated, exports of a SQL Database to a Storage account."
Microsoft said it "will continue to add business-class functionality to Premium databases over time, to further support higher end application requirements."
What do you think of the Premium service? Comment here or drop me a line.
Posted by David Ramel on 07/25/2013 at 1:15 PM
Big Data is the future, Hadoop is the tool and Hortonworks is the partner to help Microsoft help businesses navigate the coming sea change in the way they operate. That's the takeaway I got from Microsoft exec Quentin Clark in his keynote address at the recent Hadoop Summit North America held in San Jose, Calif.
Clark, corporate vice president, Microsoft Data Platform, told the audience he took a break from his vacation to address the early adopters of a transformation that will completely change the industry in the next couple of decades.
"We believe Hadoop is the cornerstone of a sea change coming to all businesses in terms of how they are able to embrace information to effect change for how they run their day-to-day business," Clark said.
He likened this change to the way line-of-business applications changed the way all organizations work, from government to large businesses to small businesses. "We believe Hadoop is at the very root, the very cornerstone, of a similar kind of impacting change, but it's about all this new value, if you will, of information--information from systems and data that people already have that they aren't processing well today, embracing signals from partners, from customers, even from competitors in the industry, and analyzing that information differently. We believe that over the next couple of decades we'll see a complete transformation in how businesses think about their information, think about their businesses."
This transformation will be similar to the way the world was changed by advances in the telecommunications and travel industries, going from switchboard-assisted phone calls to ubiquitous cell phone coverage and from steamships and wagons to jet airliners, Clark said. He predicted that one day there will be 1 billion users of Big Data, and that will signal the completion of the transformation.
Microsoft feels a responsibility to help customers embrace Hadoop because it has become the Big Data standard, Clark said, noting that the company has logged some 6,000 engineering hours over the last year in its partnership with Hadoop vendor Hortonworks, a cosponsor of the summit. "It is a bit different for us," Clark admitted, to work on such an open source project in view of its strong brands in the data platform space, such as SQL Server and Excel. But he said the move made sense for Microsoft and Hortonworks was the best partner. "We're putting our shoulder now firmly behind their distribution on Windows," he said. "The Hortonworks Data Platform for Windows is what we're standing behind for our customers."
Of course, the cloud is a major part of Microsoft's vision of the future, and Clark said the Windows Azure HDInsight Hadoop service is one of the fastest-growing roles in the Azure arena.
After some demonstrations of technologies such as GeoFlow and Data Explorer, Clark emphasized that Microsoft was addressing the conference attendees as part of its effort with Hortonworks to get Hadoop out to the masses--and eventually to 1 billion users. "You all are the early adopters," Clark said. "You're the ones that see this coming. You're the ones on the leading edge of this, and every phenomenon we've had that's impacted businesses in this deep a way has always come with folks like yourselves that have that clarity early on to know what's coming."
Do you know what's coming? Clue us in by commenting here or dropping me an e-mail.
Posted by David Ramel on 07/11/2013 at 1:15 PM
The latest version of Microsoft's flagship Relational Database Management System (RDBMS) is offered in two forms: the regular SQL Server 2014 Community Technology Preview 1 and the cloud-based SQL Server 2014 Community Technology Preview 1 on Windows Azure, both available from the TechNet Evaluation Center. The announcement came one day before the BUILD 2013 developers conference in San Francisco.
The Windows Azure cloud was first and foremost in Microsoft's messaging about the new software, touting the company's "Cloud OS." "Microsoft has made a big bet on what we call our cloud-first design principles," said Brad Anderson, corporate VP, in a blog post discussing the new previews.
"SQL Server 2014 features in-memory processing for applications ("Hekaton"), as well as data warehousing and business intelligence," Anderson said. "SQL Server 2014 also enables new hybrid scenarios like AlwaysOn availability, cloud backup and disaster recovery. It lives in Windows Azure and can be easily migrated to the cloud from on-premises."
Along with SQL Server 2014, Microsoft announced the availability of previews for Windows Server and System Center, both as 2012 R2 versions.
The SQL Server 2014 CTP will expire after 180 days or on Dec. 31, 2013, whichever comes first. Download options include an ISO DVD image, CAB file or Azure version. Microsoft recommends the ISO or CAB version to test the software's new in-memory capabilities.
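For anyone planning an evaluation window, the "whichever comes first" clause is worth working out. A quick sketch (assuming the 180-day clock starts at the June 25 announcement; for an actual install it would start at installation time):

```python
from datetime import date, timedelta

announced = date(2013, 6, 25)                 # CTP1 availability date
expiry_180 = announced + timedelta(days=180)  # the 180-day clock
hard_stop = date(2013, 12, 31)                # the calendar cutoff

# The earlier of the two dates wins.
effective_expiry = min(expiry_180, hard_stop)
print(f"CTP expires on {effective_expiry}")
```

Under that assumption the 180-day limit lands a few days before the Dec. 31 cutoff, so the calendar date is nearly moot for day-one downloaders.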
Posted by David Ramel on 06/25/2013 at 1:15 PM
I guess I've done my part to fuel Big Data hype by writing about Big Data hype--it's kind of a vicious circle. But the term's entry into the Oxford English Dictionary is a significant milestone, and an indication that Big Data has gone beyond hype and is here to stay.
That came with the June update of "The definitive record of the English language." Also, big companies are constantly jumping on the bandwagon, with the behemoth General Electric and supercomputer maker Cray Inc.--among others you wouldn't normally associate with database products--announcing their own Big Data products this week.
Along with the culmination of the hype and general acceptance of Big Data--heading toward saturation--throughout the IT industry, of course, comes the inevitable backlash. Check out these articles:
And those are all from this week!
When everybody starts turning on you, you know you've made it.
And, of course, companies without the resources of General Electric, HP, IBM and the like are going all out to capitalize on the trend in various innovative ways. Forbes reports that:
"Kaggle, the data-science-as-sport startup, [provides] a matchmaking service between the sexiest of the sexy data janitors and the organizations requiring their hard-to-find skills. It charges $300 per hour for the service."
Wow. That's lawyer-like coin. Everybody wants their piece of the ever-growing pie.
How are you going to cash in on Big Data? Share your thoughts here or drop me a line.
Posted by David Ramel on 06/21/2013 at 1:15 PM
More details are emerging about in-memory capabilities in the new SQL Server 2014, announced at the recent TechEd 2013 conference.
The first Community Technology Preview is expected to be released soon, possibly this month, and you can register with Microsoft to be notified of its availability.
Highlights of the new release are data warehousing and business intelligence (BI) enhancements made possible through new in-memory capabilities built into the core Relational Database Management System (RDBMS). As memory prices have fallen dramatically, 64-bit architectures have become more common and multicore servers have proliferated, Microsoft has sought to tailor SQL Server to take advantage of these trends.
The in-memory Online Transaction Processing (OLTP) capability--formerly known by the codename Hekaton--lets developers boost performance and reduce processing time by declaring tables as "memory optimized," according to a whitepaper (PDF download) titled "SQL Server In-Memory OLTP Internals Overview for CTP1."
"Memory-optimized tables are stored completely differently than disk-based tables and these new data structures allow the data to be accessed and processed much more efficiently," Kalen Delaney wrote in the whitepaper. "It is not unreasonable to think that most, if not all, OLTP databases or the entire performance-sensitive working dataset could reside entirely in memory," she said. "Many of the largest financial, online retail and airline reservation systems fall between 500GB to 5TB with working sets that are significantly smaller."
"It’s entirely possible that within a few years you’ll be able to build distributed DRAM-based systems with capacities of 1-10 Petabytes at a cost less than $5/GB," Delaney continued. "It is also only a question of time before non-volatile RAM becomes viable."
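Delaney's $5/GB projection translates readily into system cost. A back-of-the-envelope sketch (using binary petabytes; decimal units would shave a few percent off):

```python
GB_PER_PB = 1024 ** 2   # binary gigabytes per petabyte
COST_PER_GB = 5.0       # Delaney's projected price ceiling, in dollars

for petabytes in (1, 10):
    total_cost = petabytes * GB_PER_PB * COST_PER_GB
    print(f"{petabytes} PB of DRAM at $5/GB: ${total_cost:,.0f}")
```

Even at that ceiling, a single petabyte of DRAM lands around $5 million--steep, but plausibly within reach of the large financial and airline-reservation systems Delaney cites.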
Another new in-memory benefit is "new buffer pool extension support to non-volatile memory such as solid state drives (SSDs)," according to a SQL Server Blog post. This will "increase performance by extending SQL Server in-memory buffer pool to SSDs for faster paging."
Independent database expert Brent Ozar expounded on this subject, writing "SQL Server 2014 will automatically cache data [on SSDs] with zero risk of data loss."
"The best use case is for read-heavy OLTP workloads," Ozar continued. "This works with local SSDs in clusters, too--each node can have its own local SSDs (just like you would with TempDB) and preserve the SAN throughput for the data and log files. SSDs are cheap, and they’re only getting cheaper and faster."
Other in-memory features mentioned by the Microsoft SQL Server team include "enhanced in-memory ColumnStore for data warehousing," which supports real-time analytics and "new enhanced query processing" that speeds up database queries "regardless of workload."
Some readers expressed enthusiasm for the new features, but, of course, wanted more. "Ok the in-memory stuff (specifically OLTP and SSD support) is valuable but the rest is so so," read one comment from a reader named John on the Microsoft blog post. "Really I wish that we would see continued improvements in reporting and analysis services and in general less dependence on SharePoint which is a painful platform to manage. QlikView and Tableau are a real threat here."
Besides the in-memory capabilities, Microsoft also emphasized increased support for hybrid solutions where, for example, a company might have part of its system on-premises because of complex hardware configurations that don't lend themselves to hosting in the cloud. These companies can then use the cloud--Windows Azure--for backup, disaster recovery and many more applications. You can read more about that in this whitepaper (also a PDF download).
What do you think of the new in-memory capabilities of SQL Server 2014? Comment here or drop me a line.
Posted by David Ramel on 06/13/2013 at 1:15 PM
Microsoft today announced SQL Server 2014, designed with "cloud-first principles" and featuring built-in, in-memory OLTP and a focus on real-time, Big Data-style analytics. No specific release date was provided in the announcement.
"Our Big Data strategy to unlock real-time insights continues with SQL Server 2014," said Quentin Clark, corporate vice president with the Data Platform Group, in a blog post. "We are embracing the role of data--it dramatically changes how business happens. Real-time data integration, new and large data sets, data signals from outside LOB systems, evolving analytics techniques and more fluid visualization and collaboration experiences are significant components of that change."
The news came with a slew of other big product announcements at the TechEd North America conference in New Orleans, such as Windows Server 2012 R2 and System Center 2012 R2. All will be available in preview later this month.
A key feature of SQL Server 2014 is the incorporation of in-memory, online transaction processing (OLTP) technology stemming from a project that has been in the works for several years, codenamed "Hekaton," Clark said. Developed in conjunction with Microsoft Research, Hekaton greatly improves transaction processing speeds and reduces latency by virtue of working with in-memory data, as opposed to disk-based data.
Microsoft touted the benefits of the "conscious design choice" to build the Hekaton technology into SQL Server 2014, with no need for a separate data engine. "Other vendors are either introducing separate in-memory optimized caches or building a unification layer over a set of technologies and introducing it as a completely new product," said Dave Campbell, Microsoft technical fellow, when Hekaton was announced as a coming component of SQL Server 2014 last November. "This adds complexity forcing customers to deploy and manage a completely new product or, worse yet, manage both a 'memory-optimized' product for the hot data and a 'storage-optimized' product for the application data that is not cost-effective to reside primarily in memory," Campbell said.
Clark picked up on that theme in today's announcement. "For our customers, 'in the box' means they don’t need to buy specialized hardware or software and can migrate existing applications to benefit from performance gains," he said.
Clark also emphasized the embrace of cloud computing, noting how SQL Server 2014 will work seamlessly with the cloud-based Windows Azure to reduce operating expenditures for mission-critical applications. "Simplified cloud backup, cloud disaster recovery and easy migration to Windows Azure Virtual Machines are empowering new, easy to use, out-of-the-box hybrid capabilities," he said.
The Microsoft exec also noted SQL Server 2014 will include improvements to the AlwaysOn feature, supporting "new scenarios, scale of deployment and ease of adoption."
As mentioned, Microsoft provided no release date, but that detail was bound to be foremost in the minds of many users, such as one named Patrick who posted the very first reader comment on Clark's blog post: "Are there some dates (other than 2014)?"
What do you think of the big news about SQL Server 2014? Comment here or drop me a line.
Posted by David Ramel on 06/04/2013 at 9:03 AM