Here's an interesting problem for reviewing products in the .NET/Visual Studio toolspace: What are the boundaries? For instance, in our September issue we just reviewed a product that supports creating Office applications and another product for creating PDF documents. Those seemed to "obviously" be tools of value to .NET developers.
However, we just got a press release announcing Amethyst and WebOrb which, together, are supposed to provide end-to-end Flash-to-.NET development within Visual Studio. Are these tools that we should be reviewing in Visual Studio Magazine?
Not only do the technologies overlap, so do the companies. Aspose, which makes the PDF product we reviewed, also makes a Flash management tool. If the vendors are mixing their technologies, is the same true for developers?
Posted by Peter Vogel on 09/08/2010
A couple of years ago I spent a month driving around England. While there, I noticed how much information the British pack into their roadscapes. There's information on the signs, information on any overpass you go under, and a ton of information on the road itself.
This "roadbed" information not only includes actual messages but also packs in an enormous number of symbols. It took me awhile to be able to pick out what was actually important to me at any one time and ignore the rest -- initially, I tried to read it all (which was hard, what with all the honking that seemed to occur wherever I was driving).
All this is relevant to the hands-on review of JetBrains ReSharper and Developer Express CodeRush that I wrote for the August issue of Visual Studio Magazine, because any developer who adopts DevExpress CodeRush will have to make the same kind of mental adjustments that I did while driving on those English roads. CodeRush, you see, adds a lot of information to the Visual Studio screen. The result, initially (at least for me), is information overload. It took me about 30 or 40 minutes before I got really comfortable with the amount of visual feedback the tool was giving me.
After that 30 minutes, however, I found that my eye was automatically ignoring what I wasn't interested in and picking out the information I did need when I wanted it. At the very least, when I was working in C#, the thin red lines down the left-hand side of every code block (classes, methods, if blocks) were invaluable. I'm still prone to having problems getting my braces to match up, and those guidelines were tremendously helpful.
I've never been a big user of bookmarks. However, I found them so easy to invoke in CodeRush (Ctrl+Numeric Keypad +) that I started using them more. Part of the reason was that the visual marker for a bookmark was so easy to spot that I felt more comfortable about scattering them around my code. And the little number that CodeRush adds to the end of each method, which provides a measure of the member's maintainability, started to trigger me to refactor code that I would have otherwise ignored.
After 30 minutes, I was still feeling a little overwhelmed by what CodeRush added to Visual Studio's display. However, it gradually transitioned from being "clutter" (my initial reaction) to becoming "feedback." I suspect that, given a day or two, I'd adjust and come to depend upon those markers. And, of course, anything that I decided I didn't like, I could always turn off in CodeRush's Options dialog.
Posted by Peter Vogel on 09/02/2010
CodeRush doesn't claim to have as many new features in its Visual Studio 2010 version as JetBrains' ReSharper did in its new version. And I'm not sure that's a bad thing: The product is already packed with features. In my first review of CodeRush back in 2009, I noted that I had a hard time mentally managing all the keystroke combinations used to access all the functionality in the package.
I may have been overreacting. With CodeRush installed, pressing Ctrl and the plus key often activates the feature you want (based on where your cursor is) or brings up a context-sensitive menu of available actions. For instance, while I'm a big fan of Test Driven Development, I find the "generate from usage" feature of Visual Studio 2010 that supports "test first" development awkward to use. To use the native Visual Studio feature, I have to type in a statement that uses a property I haven't written yet, move my mouse back to the SmartTag at the start of the property name, click the dropdown arrow, and select the option to create the property... and I always get an auto-implemented property. I've got nothing against auto-implemented properties, but sometimes I know that I'll need a backing field.
With CodeRush, I type in the property name, press the same Ctrl-plus combination (you can reassign this key if you want), and get a drop-down menu right under the cursor that I can arrow through instead of reaching for the mouse. Furthermore, this menu gives me choices: an auto-implemented property or a "real" property complete with a backing field. CodeRush then lets me choose where the property goes in my code. And my hands never have to leave the keyboard!
As far as I'm concerned, this is the way the feature should have been implemented.
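For anyone weighing the difference, here's what the two forms look like in plain C# (the property name is mine, purely for illustration):

    // Option 1 -- auto-implemented property: the compiler generates a
    // hidden backing field, which is fine until you need logic in the
    // getter or setter.
    public string CustomerName { get; set; }

    // Option 2 -- a "real" property with an explicit backing field: the
    // form I sometimes want up front (e.g. for validation or change
    // notification later). The two forms are alternatives, of course,
    // not meant to coexist in one class.
    private string customerName;
    public string CustomerName
    {
        get { return customerName; }
        set { customerName = value; }
    }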
Posted by Peter Vogel on 08/26/2010
Every day I'm reminded of what the cost of sending e-mail is (virtually zero). As a product reviewer, I get lots of e-mail from vendors. Some of it is from software suppliers whose products I've bought in the past and whose registered-user lists I'm now on. Some is from vendors who would like me to review their product. Some of this is spam.
Recently I got a press release from "Emily." I'm already suspicious when e-mail comes from a so-called friendly name rather than a full name. A glance at the subject line showed that this was a general, business-related mailing. Now I'm doubly suspicious because not only should e-mail come from people with a full name, but general mailings should also come from an identifiable company.
The product turned out to be completely unrelated to our magazine and the products we review (it was a phone switching exchange for IP communications). The company, rather than managing its press release, just sent a blanket e-mail to anyone who might review anything anywhere, confident that I'd feel that having their... stuff... clogging my inbox is a value-added activity. It's the functional equivalent of spammers simultaneously sending me spam for Viagra and breast enhancements. This is just poor mailing list management, folks.
Posted by Peter Vogel on 08/25/2010
In our August issue, we reviewed JetBrains ReSharper and Developer Express CodeRush (Two Productivity Tools for Visual Studio 2010). Today I'll look into dealing with one of the quirks of these useful tools.
Both ReSharper and CodeRush have their very own Options dialogs, separate from Visual Studio's Tools | Options dialog. There's a reason for this: both have an enormous set of options that you can use to customize the tools' behavior. While you may normally be the kind of person who just accepts the default settings for your tools, getting to know these options can be useful if you find that, after installing one of these add-ins, Visual Studio gets sluggish in its response time.
Any add-in for Visual Studio carries the danger of impacting performance. And add-ins like ReSharper or CodeRush are especially prone to slowing down Visual Studio because they both, effectively, run in the background all the time, analyzing code and keeping up with your changes. If you're considering either of these two tools, you should first make sure that your computer can comfortably run Visual Studio.
However, if you want the benefits of these add-ins -- and they're worth having -- and you find that Visual Studio is getting sluggish, you do have options. One approach is to turn off some of the tools' features in their respective Options dialogs. The first choice for improving performance is to turn off the code analysis tools -- if you're willing to do without them, of course.
But you don't necessarily have to decide which features you're willing to do without. On occasion, I've found that I can speed up performance just by giving the tools a little more time to respond. Increasing the delay before ReSharper displays its enhanced IntelliSense lists, for instance, seemed to improve performance on an underpowered computer.
It's also worth keeping up to date. JetBrains released a ReSharper update on July 7 that, among other things, addressed some performance problems. DevExpress released an update for many of its products on August 14, including one for CodeRush.
Some things can't be helped, though: Multiple add-ins can have... interesting... interactions. It's not impossible for add-ins to conflict, and you'll have to decide which one you'll keep and which one you'll turn off.
Posted by Peter Vogel on 08/19/2010
In our August issue, we reviewed JetBrains ReSharper (Two Productivity Tools for Visual Studio 2010). I had to be careful because I was testing ReSharper in Visual Studio 2010, which is so new that there was a real danger I could mistake some cool new feature in Visual Studio for something that ReSharper had given me.
I could stick with what I know is supplied by ReSharper. My favorite feature of the tool is still present: if I type in the name of a class whose namespace I haven't imported, ReSharper will volunteer to write the Imports/using statement for me. Love that.
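As a quick sketch of what I mean (the class and file names here are mine, not from the review):

    // Before: XDocument won't compile because its namespace isn't imported.
    var doc = XDocument.Load("settings.xml");

    // After accepting ReSharper's offer, this appears at the top of the file:
    using System.Xml.Linq;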
But, since I was reviewing the new version, I made sure to look at the new stuff. For instance, as an ASP.NET developer, I appreciate the new navigation feature that links Master Pages to content pages: I can right-mouse click in an aspx file, pick Edit Master and go directly to the Master Page for the content page I'm editing.
Wait: That's Visual Studio doing that.
What ReSharper offers is actually much better. From the popup menu, I can select Navigate | Related Files and get a popup box of all the files related to my current aspx file. That includes the Master Page (of course), the code file, any ascx files used on the page, any related style sheets, and more. That navigation box is so useful that I might actually learn the related shortcut key.
Equally useful is the ability to follow a variable's value through my application. I can select any variable, right-mouse click and select Inspect | Value Origin. That gives me a list of every place that the variable is set and what the variable is set to. Clicking on an entry in the list takes me to that line. When I'm trying to figure out where and how a variable is used, this navigation tool will be invaluable. In fact, as with any new tool, the biggest problem is going to be remembering that this tool is available.
Posted by Peter Vogel on 08/18/2010
I was surfing the Devdirect site, which seems to list every new software development product release ever made, and I ran across two interesting lists: the top 10 best-selling products and the top 10 software categories by sales. I realize that these are snapshots and may just reflect one day's sales, but I found them intriguing.
The top 10 products actually had a couple of surprises:
- ASPPlayground.NET SQL Forum
- Color Tab Control .NET
- SpreadsheetGear 2010
- RadControls for ASP.NET AJAX Q2 2010
- RadControls for WinForms Q2 2010
- RadControls for Silverlight Q1 2010
- Telerik Reporting Q2 2010
- GdPicture.NET
- Diagram Editor Tool for VB, .NET and VC++
- 3-Heights™ PDF Viewer
The predominance of the Telerik RadControls and reporting products probably has more to do with their recent release than with market dominance. But it does suggest that programmers are still primarily interested in control toolkits -- not much different from the days of Visual Basic 3. And it's interesting that a tool as specialized as one designed to create forums in .NET for SQL Server databases can outsell the more general-purpose tools.
The top 10 categories reveal an even more traditional perspective:
- Calendar, Date & Time
- File Upload/Download
- Graph & Chart
- Grid
- Image Acquisition
- Image Processing
- PDF View
- Reporting, Report Writers
- Scheduling & Diary
- Tab & Tabstrip
I suspect that these would have been the top 10 categories ten or even twenty years ago, when I was a Visual Basic programmer (well, maybe not the File Upload control).
Posted by Peter Vogel on 08/17/2010
I used Jason Short's experience with VistaDB to discuss the challenges of moving from software developer to software vendor. It seemed reasonable to give Jason the last word with some lessons learned:
#1 - Have a pricing model that will make money on what you sell now, not next year. A small company can't afford a subscription model (only five percent of VistaDB users renewed their subscription, for instance). And, by the way, there are no economies of scale. Charge more and get fewer customers, whom you can afford to support better.
#2 - Don't include support. You can't afford it. On average, I spent almost $1,000 per user on tech support during year two. Free tech support is an open invitation to get a call for every compiler error message. No amount of documentation will prevent this. People who are solo coders want someone else to bounce ideas off, play out designs with, etc. They end up hitting whatever vendors they can get free support from in order to have that sounding board. That's consulting, not tech support.
#3 - No free updates. You need a constant revenue stream, and upgrades are one part of that. Naming matters here: Version 4.0 was my first major upgrade and its name scared buyers off. If I had called it version 3.6, my base would have moved to it without a second thought.
#4 - Big companies should pay big prices. I have to agree with Joel Spolsky that having a corporate edition at any price is a bad idea. You'll end up with a megacorp that should have paid for 10,000 developer licenses buying only one corporate license. Give corporate buyers a deal, but don't give them the farm.
#5 - Pick a tight niche. If we had an ASP.NET-specific database engine that only worked in that environment and handled caching, replication, etc., we could have charged a lot more for it and specialized the code, as well. Pick the tightest niche you can and stick to it: You can charge more money.
#6 - Free trumps everything for most companies. SQL CE is not perfect, but it is free and it's flattening the database market. It's hard to compete against free.
Posted by Peter Vogel on 08/13/2010
Recently I reviewed the Versant db4o object database. I used the opportunity to talk to German Viscuso, who manages db4o's developer community, about the database market from the perspective of an object database company. You can read the first part of our interview here.
Peter Vogel: What market is db4o (and related products) competing in? The database market? The object database market? A different one?
German Viscuso: We're competing in the On-Line Transaction Processing (OLTP) database market, which means the transactional (as opposed to non-transactional) data persistence market. This includes object (ODB), relational (RDB) and XML (XMLDB) databases, among others. For Versant's db4o product, it's mainly the embedded OLTP database market; for Versant's V/OD product, it's the large-scale, complex, mission-critical OLTP application space.
As it happens, the ODB API has always included an object caching component, so it bears a remarkable resemblance to some of the recently released caching products. So, in addition to the OLTP space, the object database is often used as a kind of persistent, queryable application object cache.
PV: What are the toughest challenges facing db4o in the market?
GV: Awareness. There has been a great swell of new products that deliver "partial" object database features: caching, key-value storage, graph analysis [and] schema-less, unstructured content document management. All of these are capabilities delivered by object databases, and we have enterprise 500 companies who have built solutions in each of these spaces on the object database. But there is a lack of awareness in the general software development community.
These types of technologies are extremely difficult to build, taking years to get right. So, many who are unaware of object databases (ODBs) have tackled just one particular aspect of the problem and created narrower solutions. Technology competition isn't really the problem for ODBs; it's the awareness that ODBs are an option. Typically, when someone becomes aware of the ODB as a choice and tries it, it wins. Our challenge is to increase the awareness so we get tried more often.
Another challenge is that there is so much existing data in relational systems. There must be a way for ODBs to play well in the "data eco-system." Service-oriented architectures (SOAs) go a long way toward solving this problem, because it's the data service that becomes important, not the technology used underneath the service. In SOAs, the choice becomes more about what is most efficient to implement, has the highest stability and gives the best performance.
Posted by Peter Vogel on 08/06/2010
German Viscuso is director of community management at Versant, which makes the db4o local database that we recently reviewed. Since db4o is a free, object-oriented database that integrates easily with .NET applications, it made sense to ask German "Why an OO database?"
Peter Vogel: Why object instead of relational?
German Viscuso: Faster development and evolution of the code base, because no mapping between runtime objects and persistent storage is required. Faster runtime execution, because the relational Primary Key-Foreign Key (PK-FK) relationships are natively stored by [the] object database, which means they are resolved at runtime without CPU-intensive JOIN operations. Storing the PK-FK relationship directly in the database is akin to creating the complex index overlays required in a relational database to speed up JOINs. Object databases (ODBs) are just a substantially more efficient way to store and retrieve data.
Object databases are good at dealing with complex, domain-specific models. The JOIN operations of a relational database become unavoidable when the model gets complicated. If the model is sufficiently complex, there's no way to create performance-oriented single-table mappings between objects and tables. You often end up with tertiary tables and even more JOINs -- this not only kills performance, it makes development difficult. Object databases let you avoid this completely by using pure object identity indexing inside the database, transparently to the application developer.
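To make that concrete, here's a minimal sketch of storing and querying plain objects with db4o's embedded .NET API; the Pilot class, values and file name are mine, purely for illustration:

    using System;
    using System.Collections.Generic;
    using Db4objects.Db4o;

    // A plain domain class: no attributes, no mapping file, no schema.
    public class Pilot
    {
        public string Name;
        public int Points;
        public Pilot(string name, int points) { Name = name; Points = points; }
    }

    class Program
    {
        static void Main()
        {
            // Open (or create) an embedded database file.
            using (IObjectContainer db = Db4oEmbedded.OpenFile("pilots.db4o"))
            {
                // Persist the object directly -- no INSERT, no ORM mapping.
                db.Store(new Pilot("Rubens Barrichello", 99));

                // Native query: the predicate runs against stored objects, so
                // references are followed directly rather than JOINed.
                IList<Pilot> fast = db.Query<Pilot>(p => p.Points > 90);
                foreach (Pilot p in fast)
                    Console.WriteLine("{0}: {1}", p.Name, p.Points);
            }
        }
    }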
PV: Is there still a place for relational databases?
GV: Yes, and there always will be. The relational database (RDB) is great technology for basic business data management and supporting ad-hoc queries in small to medium-sized data environments. It's only when data gets complex and is domain driven that object databases show an advantage. When it comes to ad-hoc queries, the RDB is still the best solution -- until you pass medium-sized data levels and move into terabyte-plus ranges. In that case, newer data warehouse solutions are clearly a better choice.
Still, if you're doing data analysis against large amounts of data using existing domain relationships, then ODBs show a significant advantage. That's why ODBs are used heavily in large-scale modeling and simulation applications like weather forecasting, defense analytics, financial risk management, online gaming, etc.
Then, of course, you have the truly massive, non-transactional, "mostly-read" space where Google FS and similar technologies provide the only feasible solutions (given today's hardware). However, these application areas are outside the traditional database application space.
Posted by Peter Vogel on 08/03/2010
I think that a lot of developers working for someone else think about working for themselves, either as an independent consultant (like me, for instance) or as the owner/vendor of a killer software product (I tried that once, too). Over the last few blogs I've talked about the history of a developer (Jason Short) who bought a software product in 2007 (VistaDB, reviewed here), significantly improved it, and started marketing it. You can read Part 1, Part 2, and Part 3 of the blog series.
In July of 2010, Jason decided to call it quits as the owner of the company. He described the problem in his e-mail announcing his withdrawal from the company:
I may hold on to VistaDB, but it will be relegated to a nights and weekends type of activity. There will be no more full time work on VistaDB from me.... I am planning to spend my free time on a more advanced engine.
This sums up the problem for many developers who consider marketing a product that they've created. Initially it's a hobby: something they can work on nights and weekends. After three years of marketing the product, that hobby phase is starting to look attractive to Jason again.
In the hobby stage, as Jason notes, the developer has a lot of flexibility:
Items that don't interest me... will be dropped like a hot rock.... [I] know I could improve performance probably 10x over what it is today, but not without massive design changes. If this is a hobby/research project then I will make those changes... but without a way to make money on it there is little point in developing it as a commercial product.
But Jason also points out what changes when you become a commercial product vendor. As a product vendor Jason says he has to be "worried about backward compatibility... all the crazy upgrade paths." And that doesn't include customer support, advertising, maintaining the customer forums, and all the other requirements of being a real company.
This is the dilemma: developers start building a product as a hobby, but when they move to selling it, the effort the product requires increases exponentially. Becoming a commercial product requires committing to a level of work that moves the product out of the hobby, "nights and weekends" mode. The product now requires a full-time commitment. And, if the developer doesn't make that time commitment, the product won't be able to keep its customers. But that commitment means the product has to produce enough revenue to support the developer... and any additional staff that the product now requires. As VistaDB's story demonstrates, it's an unusual combination of product and developer that can make the leap from hobby to commercial product.
It's too bad, because VistaDB remains, I think, a great product. Hopefully, some smart company will buy it and continue its story.
Posted by Peter Vogel on 07/26/2010
In my last two blog posts I discussed the history of Jason Short, who moved from being a developer to being a product vendor with VistaDB (which we reviewed in July). In the middle of June, VistaDB sent out an e-mail discussing an upcoming platform shift for VistaDB (I discussed the impact of platform shifts on products in an earlier blog). As part of that move, the company made the source code for the previous version of VistaDB available for purchase by license owners.
Between then and July 8th, Jason decided to pull the plug: "I cannot afford to work on VistaDB full time anymore, and I am in negotiations with a third party to acquire the product." In addition to the problems described in the interview I had with him in my previous blog, Jason added these problems:
In the past three years... Health Insurance for employees is up 500 percent, corporate taxes are up 22 percent... unemployment insurance is 160 percent higher now, credit card merchant fees are double... business insurance is now totally out of our reach, server hosting is almost double, the list goes on and on.
And I don't imagine the recession helped much, either.
As Jason pointed out in the interview in my last blog post, the Visual Studio/.NET toolspace is tough. Mark Driver of the Gartner Group discussed the structure of the toolspace in my two interviews with him here and here, and it didn't sound like a place where many people get rich (or even survive). But there are some additional lessons that we can take away from this story, and I want to come back to them in my next blog entry.
Posted by Peter Vogel on 07/22/2010