Retro Database dBASE Making a Comeback?

OK, that report is due soon, so I'm going to fire up dBASE to run some reports, export the data into Lotus 1-2-3 and summarize everything with WordPerfect--while listening to Wham! and Foreigner, of course.

Oops, my mind was momentarily transported back into the mid '80s.

Amazingly, though, one of those pioneering software products was updated just yesterday. Yup, dust off those old .dbf files: dBASE PLUS 8 has been released.

And, while the original Ashton-Tate version was developed for the CP/M operating system (remember those dual 5-1/4 in. floppies--one for the program, one for the data?), this new one runs on Windows 8 (yes, even the 64-bit version). My, how times have changed.

WordPerfect, of course, is still around under the stewardship of Corel Corp., but I hadn't heard anything about dBASE for quite a while. The new dBASE guardian, dBase LLC, claims it's still in use by "millions of software developers." The company was formed last year with the help of some people who formerly worked at dataBased Intelligence Inc., "the legal heir" to dBASE.

I'm not sure exactly what happened to dBASE after the astounding success of dBASE III, but, according to Wikipedia, the decline started with "the disastrous introduction of dBase IV, whose stability was so poor many users were forced to try other solutions. This was coincident with an industry-wide switch to SQL in the client-server market, and the rapid introduction of Microsoft Windows in the business market."

Anyway, the new version includes ADO support, a new UI and "enhanced developer features with support for callbacks and the ability to perform high precision math."

Pricing is $399 for the regular edition, $299 for an upgrade and $199 for a personal edition without ADO support. I wonder what those prices equate to in 1985 dollars?

UPDATE: Here's a pretty good history of dBASE by Jean-Pierre Martel, editor of The dBASE Developers Bulletin.

Any old-timers out there with a good memory? What did dBASE III sell for? And why did some of these pioneering products die or fade into obscurity, while others continue to thrive? Comment here or drop me a line.

Posted by David Ramel on 03/20/2013 at 1:15 PM

BIDS Templates Come to Visual Studio 2012 in SSDT Update

"Does SSDT for Visual Studio 2012 support BI project templates?" asked James V. Serra in a TechNet forum last September.

Some six months later, the answer was yes: "Hi James, the download to add the BI Project Templates to the VS2012 shell is now available."

Microsoft last week announced the online release of "SQL Server Data Tools – Business Intelligence for Visual Studio 2012" (SSDT BI), available for download here.

The release includes templates for Visual Studio 2012 BI projects, including Analysis Services, Integration Services and Reporting Services. These templates were part of the old Business Intelligence Development Studio (BIDS).

SQL Server Data Tools (SSDT) encompasses a set of integrated services and enhancements for doing database development entirely from within the Visual Studio IDE, incorporating functionality found in BIDS and SQL Server Management Studio (SSMS), among a host of other features.

Prior to this, the BI templates were available only in Visual Studio 2010, SSDT 2010 or SQL Server 2012. The new release will be installed through the SQL Server 2012 setup tool as a shared service and will install a Visual Studio 2012 integrated shell if you don't already have VS 2012.

This will hopefully relieve a lot of the frustration of data developers confused by different versions of SSDT, which was introduced with SQL Server 2012 but hosted in the VS 2010 shell, and inconsistencies in functionality as data development tools have evolved.

Apparently, though, there are still some frustrated users and more integration to be done. SQL DBA John Pertell welcomed the announcement. "That’s great news as a lot of developers, myself included, have been waiting for this functionality," he said. However, he added, "the bad news is that it doesn’t include the Database Projects templates released last year. You’ll still need to install them separately. But they will work together."

He explained further:

So if you want just the BI templates for Visual Studio 2012 you only have to install the BI version of SSDT. If you also want the database projects you will need to install both the BI templates and the database templates. And if you want to use the test plans for your new database projects and create SSRS reports or SSIS packages you’ll need a full edition of VS 2012, either Premium or Ultimate, plus the database templates plus the BI templates.

There were also some users frustrated by the install experience, especially on 64-bit machines running SQL Server 2012 (see comments on this blog post). Visual Studio and the SSDT integrated shell are 32-bit apps, and users reported errors, some of which were apparently caused by the installation tool trying to install the 32-bit version of SQL Server 2012, Service Pack 1. The solution seems to be to choose the "perform new install" option during installation and not the "add features to existing" option.

Still, many data devs are happy with the new capabilities. Those include Serra, who said on his blog, "It took 8 months, but at least it was quicker than being able to use BI in VS 2010, which took about two years."

Other enhancements to Visual Studio 2012 added last week include Office Developer Tools and a SQL Server Data-Tier Application Framework update.

What do you think of the new BI functionality in SSDT? Are we headed toward one big, comprehensive IDE that will include everything you need for SQL Server development in one place? Comment here or by e-mail.

Posted by David Ramel on 03/14/2013 at 1:15 PM

SQL Encroaches on Big Data Turf

Remember when SQL developers felt threatened by Big Data? Relational database management systems were old-school relics that couldn't cope with the vast amounts of unstructured, disparate data. NoSQL was the future. You needed to get on board with Hadoop and MapReduce, running on Linux.

Well, not anymore.

Maybe not ever, really. There is just too big of an installed base of SQL developers and systems for the two camps, Big Data and SQL, to have remained apart. Even four or five years ago the convergence was underway with Hive, a data warehouse system for Hadoop that uses "a SQL-like language called HiveQL."
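For simple aggregates, HiveQL's SELECT syntax tracks standard SQL closely, which is why the learning curve is so gentle for SQL developers. Here's a minimal sketch of that query shape, run against Python's built-in sqlite3 so it's self-contained (the page_views table and its data are invented for illustration):

```python
import sqlite3

# In-memory database standing in for a warehouse table (names are invented).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE page_views (page TEXT, user_id INTEGER)")
conn.executemany(
    "INSERT INTO page_views VALUES (?, ?)",
    [("/home", 1), ("/home", 2), ("/docs", 1)],
)

# This SELECT ... GROUP BY shape is valid in both HiveQL and standard SQL.
rows = conn.execute(
    "SELECT page, COUNT(*) AS views FROM page_views "
    "GROUP BY page ORDER BY views DESC"
).fetchall()
print(rows)  # [('/home', 2), ('/docs', 1)]
```

The point isn't the toy data; it's that a developer who can write this against SQL Server or SQLite can write essentially the same thing against a Hadoop cluster via Hive.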

That convergence seems to be rapidly accelerating. Microsoft has been helping out, of course, with PolyBase in its SQL Server 2012 Parallel Data Warehouse to enable SQL queries of Big Data and initiatives such as HDInsight and the Hortonworks Data Platform to get Big Data into the Windows ecosystem.

But Redmond has plenty of company. Just this week I had the opportunity to interview Web coding pioneer Lloyd Tabb about the subject when his new company, Looker Data Sciences Inc., announced a query-based business intelligence (BI) platform called Looker. "SQL and relational querying is the best way to ask questions of large related data sets," Tabb told me.

He should know what he's talking about. He was a database and languages architect at Borland in the early days of RDBMS and went on to build LiveWire, the first application server for the World Wide Web. He was later a principal engineer at Netscape, where he was the architect of Netscape Navigator Gold (later named Composer), the first WYSIWYG HTML editor, and the engineering lead for Netscape Communicator. He also helped found other ventures and became a pioneer in crowdsourcing, to name just a few of his accomplishments.

Looker, according to the company, "uses a new modeling language, LookML, which enhances SQL for analytics so end-users can perform powerful analytics without needing to know how a query is written."

I asked Tabb about the use of SQL instead of NoSQL, Hadoop or other Big Data technologies associated with BI analytics, and he gave me a little history lesson.

"Back in the day conventional wisdom was that if you were going to create an application for a PC you had to write it in Assembly language," Tabb said. "Higher-level languages generated code that was too big and too slow. Later, conventional wisdom was that you couldn't build a 'real-applicaiton' in an agile language--it was too big and too slow.

"Hadoop was designed because at the time there were no SQL engines that could deal with data sets that large. Developers regressed to hand coding queries in MapReduce. Both SQL and C are still in use today because they are the best abstractions for the kinds of problems they solve."

Looking around, I see lots of other evidence pointing to the Borg-like assimilation of Big Data by SQL. A few weeks ago GigaOM explored the subject with an article titled "SQL is what's next for Hadoop: Here's who's doing it," and just yesterday a PluralSight course on the topic was announced, described as "An investigation into the convergence of relational SQL database technologies from several vendors and Big Data technologies like Apache Hadoop."

And there are plenty more similar things going on out there. So rest easy, SQL data developers, your future is still bright.

What do you think about the convergence of Big Data and SQL? Share your thoughts by commenting here or by e-mail.

Posted by David Ramel on 03/08/2013 at 1:15 PM

Red Hat Goes All In On Big Data (Whatever That Is)

I tuned in to a Webcast earlier this week where Red Hat announced it was contributing its Hadoop plug-in to the open source Apache Hadoop community and totally embracing Big Data with an "open hybrid cloud" strategy. More on that later.

What I found really interesting was the response to an audience member who asked, "How do you define Big Data?"

Hmmm. Good question. It's one of the most over-hyped terms in the tech world today, but exactly what is it? Red Hat executive Ranga Rangachari provided the following:

So ... what we think of ... analysts have different ways to talk about this. You've heard some analysts talk about the four Vs, which is the volume, the velocity and a few other attributes to it. And, yes, that is one way to look at it, but I think our view of Big Data is, fundamentally I think, the underlying type of data, either semi-structured or unstructured. That's one way, at least, from a technology standpoint, which contrasts very much from your typical structured databases that people are used to over the last 20 years or so.


Obviously, it's not that easy to define Big Data.

John K. Waters addressed the question a year ago:

While there's lots of talk about big data these days (a lot of talk), there currently is no good, authoritative definition of big data, according to Microsoft Regional Director and Visual Studio Magazine columnist Andrew Brust.

"It's still working itself out," Brust says. "Like any product in a good hype cycle, the malleability of the term is being used by people to suit their agendas. And that's okay; there's a definition evolving."

Wikipedia defines it as a "collection of data sets so large and complex that it becomes difficult to process using on-hand database management tools or traditional data processing applications."

In other words, no one knows.

Anyway, Red Hat will open source its Hadoop plug-in and jump on the Big Data bandwagon with its vision of an open hybrid cloud application platform and infrastructure. Rangachari said it was designed to give companies the ability to create Big Data workloads on a public cloud and move them back and forth between their own private clouds, "without having to reprogram those applications." Red Hat said in a news release that many companies use public clouds such as Amazon Web Services for developing software, proving concepts and pre-production phases of projects that use Big Data. "Workloads are then moved to their private clouds to scale up the analytics with the larger data set," the company said.

The Red Hat Hadoop plug-in is part of Red Hat Storage, running on Linux, which is based on the GlusterFS distributed file system. It's provided as an alternative to the Hadoop Distributed File System, known for some technical limitations that Apache and other organizations have also addressed.

Rangachari said the path to the open hybrid cloud Big Data application platform will eventually incorporate an Apache Hive connector (now in preview), NoSQL/MongoDB Java interoperability and RESTful OData Web protocol access, in addition to its existing JBoss middleware.

He emphasized that the new cloud strategy will be woven throughout every Red Hat project, noting that "Big Data could be one of the killer apps for the open hybrid cloud."

When asked why Red Hat was contributing its Hadoop plug-in to Apache, Rangachari said the Apache Hadoop community was the "center of gravity" in the Hadoop world and that the move will provide developers with easier access to the plug-in from the same ecosystem. He also said the company expects that, rather than stopping innovation of the technology, the move to open source will actually contribute to more innovation.

So what exactly is Big Data? Please explain here in a comment or via e-mail. We'll all appreciate it.

Posted by David Ramel on 02/22/2013 at 1:15 PM

Bill Gates Says Biggest Product Regret Was WinFS Data Storage

Data developers were interested to learn this week that a futuristic data storage product called WinFS was the Microsoft product Bill Gates most regretted never shipping.

In a live question-and-answer event on Reddit called Ask Me Anything, the legendary Microsoft co-founder answered dozens of questions from readers. While he was most concerned with the charitable work of the Bill & Melinda Gates Foundation, many questions inevitably focused on his Microsoft and programming days.

Here's the exchange about the database product:

Q: What one Microsoft program or product that was never fully developed or released do you wish had made it to market?

A: We had a rich database as the client/cloud store that was part of a Windows release that was before its time. This is an idea that will remerge since your cloud store will be rich with schema rather than just a bunch of files and the client will be a partial replica of it with rich schema understanding.

When another reader guessed that it might be WinFS, Gates answered in the affirmative. Another reader wondered if the OS mentioned was Vista, and Gates replied: "Vista was what eventually shipped but Winfs had been dropped by then."

According to Wikipedia, WinFS is short for Windows Future Storage, described as:

the code name for a cancelled data storage and management system project based on relational databases, developed by Microsoft and first demonstrated in 2003 as an advanced storage subsystem for the Microsoft Windows operating system, designed for persistence and management of structured, semi-structured as well as unstructured data.

I found it interesting to learn that even way back then, Microsoft was thinking ahead to the cloud, and then, as now, it's all about the data.

What did you think about Gates' AMA session? Please comment here or send me an e-mail.

Posted by David Ramel on 02/15/2013 at 1:15 PM

Programmers: Introverts or Extroverts?

"The icon of the shy geeky computer programmer is a mainstay of the technology landscape. But is it true?"

That's how a recent e-mail to me from Evans Data Corp. started out. At a previous company, as part of a class, I took a Myers-Briggs test that indicated I was introverted. And that my personality type (ISTP, one of 16 possible categories) tended to like motorcycles. I didn't need a standardized test to tell me either of those things, but I found it interesting.

I found the e-mail interesting, too. It said: "We asked over 400 software developers to rate themselves on a scale measuring introverted vs. extroverted. Only 2 percent thought they were completely introverted. So where do you think the other 98 percent saw themselves?"

Right off the bat, I thought, most of them probably thought themselves either partly introverted or extroverted. Duh. (Is there even such a thing as being "completely introverted"?) But I was curious, so I asked for more information. Turns out this introvert/extrovert question was just a tiny part of a report to help companies market to developers. And they even provided me with a nice, customized quote to buy the report.

I'll have to pass on that, thanks, but the question still intrigued me. I know that good programmers tend to be good at math, so I've got a big strike against me for ever getting good. But what about being introverted? Does that help?

I would bet that most programmers are introverted. And, unfortunately, introversion comes with some negative baggage. Extroverts run things. They're the managers and supervisors. They're the ones you want to hang out with.

But I learned in the Myers-Briggs class that being introverted doesn't necessarily mean bad things, like being a weird loner who doesn't want to interact with people. From my understanding of that class, it has more to do with how people tend to recharge their batteries. Extroverts like being at parties and social occasions and can do it all day and come away refreshed. Introverts can socialize, but it leaves them tired. To recharge, they like to be alone for a while. Maybe reading a book or writing code. And being introverted doesn't mean you won't be a successful manager or supervisor. The class instructor said that former president Jimmy Carter is an introvert.

Well, I can't spring for the report right now (I won't tell you the cost), but I'll do the next best thing: conduct my own survey. Are you introverted or extroverted? How do you see programmers in general? Does one or the other help or hinder good programming? Comment here or drop me a line. And I won't charge you.

Posted by David Ramel on 02/07/2013 at 1:15 PM

EF Power Tools Bugs Fixed as Development Heads in New Direction

The Entity Framework Power Tools Beta 3 was released this week, but some data developers eager to get their hands on new features were disappointed to learn it mostly includes bug fixes because the product's functionality is shifting to the EF Designer in Visual Studio 2012.

With EF Power Tools, data developers get additional Visual Studio design-time tools for Entity Framework development.

The most important fix in Beta 3 addresses incompatibility with Visual Studio 2012 Update 1. Several other issues were also resolved, but some developers wanted more.

"I was so happy when I saw the title ... but no new features," one reader commented.

Microsoft's Rowan Miller explained: "The reason we aren't adding a bunch of new features is that we're incorporating 'Reverse Engineer Code First' into the EF Designer workflow (which already has table selection, etc.)." He pointed to the Entity Framework CodePlex page for more information on that initiative.

In response to another reader, Miller expounded on his explanation:

When I say included as part of the EF Designer I really just mean that all the EF tooling (EF Designer, Reverse Engineer Code First, and the other Power Tools functionality) will be included in a single installer (which in turn is included 'in-the-box' in new versions of Visual Studio). We are going to use the same wizard that Database First uses for selecting tables etc. though.

The Beta 3 does add some context menu options to the "Entity Framework" sub-menu in Visual Studio. For example, you can right-click on a C# project for "Reverse Engineer Code First" functionality, which lets you generate Code First mappings for a database. "This option is useful if you want to use Code First to target an existing database as it takes care of a lot of the initial coding," Microsoft said.

Another project right-click option lets you add reverse engineering templates to your project.

You can also right-click on a code file that includes a derived DbContext class to display the entire Code First model in the EF designer, display Code First model Entity Data Model (EDMX) XML and generate pre-compiled views, along with other options.

And, instead of generating pre-compiled views, you can right-click on an EDMX file to generate views for a model created using the EF Designer.

Microsoft said that even though it won't be releasing a Power Tools RTM, it will continue Beta releases until the related functionality is incorporated into a pre-release version of the EF Designer.

What's your experience been when using EF Power Tools and the EF Designer? Please share your thoughts here or drop me a line.

Posted by David Ramel on 02/01/2013 at 1:15 PM

Study: MongoDB Takes a Bite Out of MySQL

Some especially significant implications for Web developers can be found in a new study by research firm Ovum that measured the sentiment about Big Data vendors in 2012 Twitter posts.

While the study indicated that Big Data retained its popularity last year, data developers will be more interested in conclusions drawn by Ovum concerning the future of Web development.

"The Big Data buzz word even managed to transcend from the enterprise IT world to become a hot topic for business publications and journals in 2012, with MongoDB claiming considerable mindshare among Web developers who traditionally relied on MySQL," Ovum said in a news release.

Ovum principal analyst Tony Baer expounded upon that idea in a blog post. "To some extent, the results were surprising: while Hadoop garners much of the spotlight as a Big Data platform, the vendor 10gen, which develops MongoDB, came in second in mentions to Apache, which hosts the Hadoop project."

Ovum reported that Apache garnered 9.4 percent of Twitter posts, while MongoDB followed at 6.2 percent. "Although MongoDB is not known for storing high volumes of data, it is associated with variety, given its schemaless architecture," Baer said. "The popularity of the 10gen brand is attributable to the fact that MongoDB has become for Web developers the document equivalent of MySQL; it is open source, built in a language (JavaScript) that is highly popular among Web developers, and relatively simple to develop."
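That "schemaless" flexibility is what the "variety" attribute means in practice: documents in a single MongoDB collection can each carry different fields, whereas a relational row must fit a fixed set of columns. A plain-Python sketch of the idea (no MongoDB needed; the documents and field names are invented for illustration):

```python
# Documents in one collection need not share a schema: the second
# "product" has fields the first doesn't, and vice versa.
products = [
    {"name": "laptop", "price": 900, "specs": {"ram_gb": 16}},
    {"name": "ebook", "price": 10, "download_url": "https://example.com/ebook"},
]

# Queries simply skip documents that lack the field being asked about,
# rather than requiring a schema change up front.
with_specs = [p["name"] for p in products if "specs" in p]
print(with_specs)  # ['laptop']
```

In a relational design, accommodating both shapes would mean nullable columns or extra tables; in a document store, you just insert the document.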

However, Baer said, "Ovum believes that the popularity of 10gen is more indicative of the future of Web development rather than Big Data, per se. We view 10gen as becoming the non-transactional database successor to MySQL in the world of Web developers."

I also found it interesting that a study about Big Data used the Big Data technique of culling information from social media to provide insights and conclusions not available through traditional database systems. I also found it interesting that DataSift, the company that conducted the study for Ovum, showed up in the very results it produced, coming in at 10th place in the ranking of Big Data companies mentioned in Twitter posts. All kinds of fascinating stuff here.

What do you think? Is MongoDB encroaching upon MySQL's turf? Please share your thoughts here or drop me a line.

Posted by David Ramel on 01/24/2013 at 1:15 PM

Salary Survey Shows Data Devs Doing Well; Silverlight, Not So Much

Being a data development guy, I was interested in how data-related developers were faring when the recent Visual Studio Magazine Salary Survey came out, and the answer is pretty darn well, comparatively.

But, also being a Silverlight fan, I was most struck by one particular chart: "Salary by Microsoft Technology Expertise." More than 1,000 developers were asked: "What Is Your Primary Area of Technology Expertise (Have Product Knowledge and Work with on a Regular Basis)?" One line said it all:

Silverlight n/a

No one? Not one single developer was primarily using Silverlight?

It seems like only yesterday that Silverlight was the technology of choice for streaming Olympic Games, political conventions and Netflix movies.

There was a lot of angst among Silverlight developers when Microsoft emphasized new ways of developing apps for the Windows Store and Windows 8 ecosystems with the Windows Runtime, focusing on open technologies such as JavaScript, HTML5 and CSS. Silverlight developers were reassured that their skills would transfer to the new ecosystems and that they could continue to use XAML, C# and such to produce new-age apps with Silverlight's companion Expression Blend IDE. That may well be happening, but it looks like Silverlight itself is dying on the vine, judging from this salary survey. Too bad.

Anyway, back to the data devs. While the average salary for .NET developers was pegged at about $94,000, SQL Server developers reported an average salary of $97,840, taking second place in areas of expertise after SharePoint at $103,188.

SQL Server developers also ranked highly when it came to the best technologies for job security/retention, being chosen by about 65 percent of respondents, following Visual Studio/.NET Framework at 82 percent.

So, as I reported last year, data-related developers are doing all right. Congratulations, and keep up the good work!

Do you miss Silverlight? Do you feel good about your job prospects as a data developer? Please share your thoughts by commenting here or dropping me a line.

Posted by David Ramel on 01/18/2013 at 1:15 PM

Data Access, Reimagined

"There are lots of discussions about using database[s] in Windows Store apps in MSDN forum[s]," reads a brand-new blog post by Microsoft's Robin Yang on MSDN.

Yes, developers are apparently still struggling with data access in the new Windows 8 ecosystem.

A quick check bears this out. In fact, just a week ago, a developer asked, "Is [it] possible to use 'LINQ to SQL' database in Windows 8 metro apps--or any other easy option is there to use local database?"

The answer was predictable: "It seems there is no official announcement of support for Linq to Sql or EF for database access in Windows 8 Metro Apps. You can try to use Web services to access the data."

Such questions have appeared on the MSDN forums for well more than a year.

Many reader answers point to using SQLite, which is exactly what Yang's post does (the post indicates its content was authored by Aaron Xue, though it was posted by Robin Yang).
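The SQLite suggestion amounts to embedding a small, file-based SQL database inside the app itself. The idea is easy to sketch with Python's bundled sqlite3 module (a Windows Store app would instead go through a C# or WinJS SQLite wrapper; the notes table here is invented for illustration):

```python
import sqlite3

# A local database file plays the role of the app's embedded store.
db = sqlite3.connect(":memory:")  # use a file path like "app.db" in a real app
db.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT)")
db.execute("INSERT INTO notes (body) VALUES (?)", ("first note",))
db.commit()

# Data persists locally and is queried with plain SQL--no server required.
saved = db.execute("SELECT body FROM notes").fetchone()[0]
print(saved)  # first note
```

That serverless, in-process model is precisely why SQLite keeps coming up as the stand-in for the missing SQL Server CE support.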

I earlier touched on and provided links for a few other options such as IndexedDB and Web services/the cloud.

But HTML5/JavaScript seems to be the popular programming model of choice for Windows Store apps, and Yang has also conveniently provided a three-part series on this (authored by Roy Tian), titled "Using HTML5/JavaScript in Windows Store apps: Data access and storage mechanism." You can find this series (along with other posts) on the Windows Store apps development support blog page.

So check out these latest posts to bone up on Windows Store app data access--and perhaps keep waiting for SQL Server CE support.

What do you think about data access in Windows Store apps? Please share your thoughts here or drop me a line.

Posted by David Ramel on 01/11/2013 at 1:15 PM

Data Drives App Development Software Market

A recent report from research firm International Data Corp. (IDC) provides further proof that data is king when it comes to software development. The Application Development & Deployment (AD&D) market is expected to grow at a higher rate in 2013 after slow sales in late 2012, and some of the hottest segments of that market revolve around data-related development, IDC reported.

"Within the AD&D markets, the Relational Database Management Systems (RDBMS) market stands out with a 34% market share. It is by far the biggest individual market," IDC said. "Unlike other mature markets, RDMBS is forecast to outperform most AD&D markets with high single-digit growth in 2013 and beyond." Oracle dominates that market, IDC said, with nearly a 50 percent market share.

Also poised for revenue growth is Data Integration and Access Software, described by IDC as "a structured data management market with revenues of more than $4 billion . . . experiencing growth on par with the RDBMS market with which it has a close relationship." IBM dominates that market, the research firm said, and rules the overall AD&D market with Oracle and Microsoft.

No surprise, IDC said the highest market growth is expected in the predictable areas, "where markets are aligning with or supporting mobile, cloud, social and big data areas."

The information was released by IDC in conjunction with its Worldwide Semiannual Software Trackers project, a paid service.

What do you think about the growth prospects for data developers in the coming years compared to other app development? Please comment here or drop me a line.

Posted by David Ramel on 01/02/2013 at 11:21 AM

What Data Developers Want for the Holidays

Dino Esposito isn't asking for much from Santa this year. Nothing new or bleeding-edge. In fact, he kind of wants to step back in time, in search of simplified SQL querying:

I'd love to have back a framework that was in beta testing and probably even in production around SQL Server a decade ago: making queries in plain English, like "give me all customers based in WA." The code was amazingly able to make most of them--or at least get close, anyway. I'm working on a simplified version of it--so it would be really great to have it from Santa!

Esposito is talking about English Query, a project for SQL Server 2000 that he was involved in some 13 years ago. Esposito, a well-known developer, book and article author, presenter, trainer and all-around technical expert based in Italy, shared his thoughts with me in an informal survey I took of data developers with equally sparkling credentials, asking what their data development holiday wishes were. Following are some of their thoughts.

Dr. James McCaffrey, who manages training for Microsoft software engineers in Redmond, among many other projects, had this to say:

I get the feeling that there's a lot of flux with MVVM, MVC, and MVP and so my wish (assuming that I'm right and that there is flux) is to see some stability emerge here.

McCaffrey is right about most things, to understate it, and a lot of Microsoft products do seem to be in transition, so some stability in 2014 would be nice.

Brandon Satrom is an HTML5 expert, among a lot of other things, at Telerik.

For us, the biggest wish on the list is for a FULL OData implementation for both MVC and WebAPI. The end result we're looking for is the ability to fully and dynamically query a dataset based on URL parameters. Full OData support would be an awesome start, and if we're extra lucky this year, perhaps Dynamic LINQ integration for Entity Framework as well.
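For readers unfamiliar with what "fully and dynamically query a dataset based on URL parameters" looks like, OData defines system query options such as $orderby and $top that a service translates into a query over its data. A toy Python sketch of just those two options (real OData services also parse a much richer $filter grammar, which this ignores; the people data is invented):

```python
from urllib.parse import urlparse, parse_qs

rows = [{"name": "Ann", "age": 34}, {"name": "Bo", "age": 51}, {"name": "Cy", "age": 29}]

def query(url: str, data):
    """Apply OData-style $orderby and $top options from the URL to a list of dicts."""
    opts = parse_qs(urlparse(url).query)
    if "$orderby" in opts:
        data = sorted(data, key=lambda r: r[opts["$orderby"][0]])
    if "$top" in opts:
        data = data[: int(opts["$top"][0])]
    return data

result = query("/people?$orderby=age&$top=2", rows)
print([r["name"] for r in result])  # ['Cy', 'Ann']
```

The appeal of full OData support is that the client composes the query in the URL and the service does the rest, with no custom endpoint per query shape.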

Fellow Teleriker Chris Sells is vice president of the Developer Tools Division at the company.

I think what most data developers want for Xmas is an end-to-end, offline-enabled, client-side and mobile-focused data source stack. The occasionally connected story is a hard one for any of the mobile OSes (and it's no picnic for desktop OSes, either), so something simple, capable, robust and cross-platform for client-side data story is what I'm looking for in my Xmas stocking from Santa!

Noted author Peter Vogel is a principal at PH&V Information Services.

What I'd like is some reliable way to move changes from development to production that won't drive my DBA crazy. Microsoft's new SQL deployment package is great--but if deploying a package on my Web server causes changes in my database, my DBA is going to [Editor's note: just substitute "do painful things to me" here; suffice it to say that Vogel's DBA has some anger management issues], (and I'm opposed to that).

Some reliable tool to estimate "response time under load" would be great. It would (a) take a picture of how busy my database server is over the course of a day and, (b) estimate the response time for all the data access operations in my application (and tie those operations to my UI and services). I'd then specify how much each part of my UI and my SOA will be used in production, and the tool would estimate my response time for each UI component or service operation throughout the day, highlighting those that exceed some allowable limit.

Jeremy Likness, multiple book author and principal consultant for Wintellect LLC in Atlanta, thinks some of his wishes might be coming.

True asynchronous support in Entity Framework and other data providers. Not just wrapping requests in a task, but the actual asynchronous implementation that will scale correctly in highly concurrent environments.
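The distinction Likness draws can be sketched in a few lines. This is an illustrative contrast, not real data-access code: Thread.Sleep and Task.Delay stand in for blocking and overlapped database I/O respectively (in real code the true-async path is ExecuteReaderAsync and friends).

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

static class AsyncContrast
{
    // "Wrapped" async: the work still blocks, just on a thread-pool
    // thread, so one thread is consumed for the entire wait. This is
    // the pattern being argued against -- it doesn't scale.
    public static Task<int> WrappedQueryAsync()
    {
        return Task.Run(() =>
        {
            Thread.Sleep(100); // stands in for blocking database I/O
            return 42;
        });
    }

    // True async: the await releases the thread while the "I/O" is
    // pending, which is what lets a server stay responsive in highly
    // concurrent environments.
    public static async Task<int> TrueQueryAsync()
    {
        await Task.Delay(100); // stands in for overlapped database I/O
        return 42;
    }
}
```

Both methods return the same result; the difference only shows up under load, when hundreds of wrapped calls pin hundreds of threads while the truly asynchronous version pins none.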

Better/easier extensibility of OData across various producers (that is, WCF 4) and consumers.

Consistent APIs across platforms--that is, a standard data solution for Windows 8, Windows Phone 8, the server, and so on that can be accessed through a common API, so it's not a completely different repository and data access layer for each implementation.

Stronger support in database projects for schema changes--that is, I know there is the compare/publish, but an explicit way to write in migrations so you can have push-button updates out of the box. For example, I iterate within a sprint and change a few items; I'm readily prompted to fill in any issues with the schema (defaults, moved data and seed data) and I get two specific outputs: a creation script (start from scratch) and a migration script (upgrade from the previous iteration)--again, as part of a build and not an interactive schema compare.

Sean Iannuzzi is a solutions architect for The Agency Inside Harte-Hanks. He took a lot of his precious time to give me an extremely detailed reply. It's great stuff, so I'm sharing it all with you.

What developers want as the perfect data-improvement gift would be easier data integration from Entity Objects to Data Contracts, Model objects that are exportable as fixed data, and complete model data lists when working with Views in Razor (Data Model Extension).

Entity Objects to Data Contracts or Model Objects

Most of the time, when building Web sites or applications, either data contracts or models are needed to support various differences in the UI versus the data layer. As a result, a mapping exercise is needed to link the two together. I usually use AutoMapper, as it handles this mapping very well, but it would be awesome if this was included as part of the [.NET] framework.
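A minimal sketch of the kind of convention-based mapping Iannuzzi describes. AutoMapper does far more (flattening, projection, configuration); this hand-rolled version just copies same-named, same-typed properties via reflection to show what a built-in framework feature would automate. The entity/contract classes and TinyMapper are made up for illustration.

```csharp
using System;

class CustomerEntity  { public int Id { get; set; } public string Name { get; set; } }
class CustomerContract { public int Id { get; set; } public string Name { get; set; } }

static class TinyMapper
{
    // Copy every property whose name and type match between source and
    // destination -- the convention a real mapper applies automatically.
    public static TDest Map<TDest>(object source) where TDest : new()
    {
        var dest = new TDest();
        foreach (var sourceProp in source.GetType().GetProperties())
        {
            var destProp = typeof(TDest).GetProperty(sourceProp.Name);
            if (destProp != null
                && destProp.PropertyType == sourceProp.PropertyType
                && destProp.CanWrite)
            {
                destProp.SetValue(dest, sourceProp.GetValue(source, null), null);
            }
        }
        return dest;
    }
}
```

With something like this in the box, the repetitive entity-to-contract mapping exercise collapses to a one-liner: `var contract = TinyMapper.Map<CustomerContract>(entity);`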

Export Compact Data Elements

Another item related to data development that would be a great feature would be if certain fields in data contracts could be marked for different levels of return options. For example, at times, I may want to lazy load all of my data and only need the IDs and not all of the data associated with the data contracts. What would be awesome would be a way to annotate the data fields with levels that would control when each is included with the return set. Something such as deep contract member, medium contract member and light contract member, which could be added at the field level. Light contracts could just include the ID fields, medium would include ID fields and the parent records, and the deep contracts could return all data in the hierarchy structure. What would be really awesome is if this was figured out for you automatically, but that's just a wish and very unlikely.
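The light/medium/deep idea could look something like the sketch below. To be clear, none of these types exist in the framework -- ContractDepth, ContractMemberAttribute and OrderContract are all hypothetical names illustrating the wish.

```csharp
using System;

// Hypothetical depth levels controlling when a field is serialized.
enum ContractDepth { Light, Medium, Deep }

[AttributeUsage(AttributeTargets.Property)]
class ContractMemberAttribute : Attribute
{
    public ContractDepth Depth { get; private set; }
    public ContractMemberAttribute(ContractDepth depth) { Depth = depth; }
}

class OrderContract
{
    // Light: always returned, even on the cheapest call.
    [ContractMember(ContractDepth.Light)]  public int Id { get; set; }
    // Medium: included once parent records come along.
    [ContractMember(ContractDepth.Medium)] public int ParentId { get; set; }
    // Deep: only returned when the full hierarchy is requested.
    [ContractMember(ContractDepth.Deep)]   public string Notes { get; set; }
}
```

A serializer (or service layer) could then filter properties by comparing each field's declared depth against the depth requested by the caller.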

Fixed Format Export Options

At times, exports are needed for data that's in a fixed field format that's used in a Web application or service. A great feature would be to allow annotations to support how the data could be exported and then, through reflection, pull in the attributes based on the model.

Something such as:

public class FlatFileAttribute : System.Attribute
{
  public int fieldLength { get; private set; }
  public int startPosition { get; private set; }

  /// <summary>
  /// File Attribute constructor to set
  /// the start position and field length
  /// </summary>
  public FlatFileAttribute(
    int startPosition, int fieldLength)
  {
    this.startPosition = startPosition;
    this.fieldLength = fieldLength;
  }
}
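To round out the idea, here's a self-contained sketch of the reflection side: an exporter that reads the attribute off each property and places the value at its declared position. FlatFileExporter and CustomerRecord are hypothetical names, and the attribute is repeated here (with conventional property casing) so the sample compiles on its own.

```csharp
using System;
using System.Text;

[AttributeUsage(AttributeTargets.Property)]
class FlatFileAttribute : Attribute
{
    public int StartPosition { get; private set; }
    public int FieldLength { get; private set; }
    public FlatFileAttribute(int startPosition, int fieldLength)
    {
        StartPosition = startPosition;
        FieldLength = fieldLength;
    }
}

class CustomerRecord
{
    [FlatFile(0, 5)]  public string Id { get; set; }
    [FlatFile(5, 10)] public string Name { get; set; }
}

static class FlatFileExporter
{
    // Reflect over the model, pull each property's FlatFileAttribute,
    // and write the value into the record at the declared offset,
    // padded or truncated to the declared length.
    public static string Export(object record)
    {
        var buffer = new StringBuilder(new string(' ', 80));
        foreach (var prop in record.GetType().GetProperties())
        {
            var attrs = (FlatFileAttribute[])prop
                .GetCustomAttributes(typeof(FlatFileAttribute), false);
            if (attrs.Length == 0) continue;
            var value = (prop.GetValue(record, null) ?? "").ToString();
            var field = value.PadRight(attrs[0].FieldLength)
                             .Substring(0, attrs[0].FieldLength);
            buffer.Remove(attrs[0].StartPosition, attrs[0].FieldLength);
            buffer.Insert(attrs[0].StartPosition, field);
        }
        return buffer.ToString().TrimEnd();
    }
}
```

Exporting `new CustomerRecord { Id = "42", Name = "Ada" }` would yield the fixed-width line `42   Ada`, with each field occupying its declared columns.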

Razor Data Model Extension

The last feature that I would like automatically included is the ability to map data elements from a hierarchical model to a view without the need for an extension, and for the data fields to be included as part of the model. For example, if you have a model with a list of subelements and you are creating them on the view, they will be null by default. To remedy this, I usually create an extension method so that the data is included with the model--so that all model fields are included for an object such as Parent.Children, where Children is a collection beneath the Parent object. This would be a nice feature as well.

public static IDisposable BeginCollectionItem(
  this HtmlHelper html, string collectionName)
{
  return BeginCollectionItem(html, collectionName, "", "");
}

public static IDisposable BeginCollectionItem(
  this HtmlHelper html, string collectionName,
  string prefix, string suffix)
{
  var idsToReuse =
    GetIdsToReuse(html.ViewContext.HttpContext, collectionName);
  string itemIndex = idsToReuse.Count > 0 ? idsToReuse.Dequeue() :
    Guid.NewGuid().ToString();

  // autocomplete="off" keeps the browser from restoring stale index values
  html.ViewContext.Writer.WriteLine(
    prefix + string.Format(
      "<input type=\"hidden\" name=\"{0}.index\" " +
      "autocomplete=\"off\" value=\"{1}\" />",
      collectionName, html.Encode(itemIndex)) + suffix);

  return BeginHtmlFieldPrefixScope(
    html, string.Format("{0}[{1}]", collectionName, itemIndex));
}

I'd like to thank all of these guys for taking the time to share their thoughts with you. And I'd like to continue the conversation. What would you like to see in the coming year in terms of data development technologies? Please comment here or drop me a line.

Posted by David Ramel on 12/20/2012 at 11:21 AM
