Microsoft has always had a plan to support cross-platform development using the .NET Framework. For the longest time, the plan was for you to create a Portable Class Library (PCL) -- any API you used from a PCL was supposed to work on any .NET supported platform.
If you did create a PCL project, you were given a list of platforms and asked to check off which ones you wanted to run on. After checking off your choices, what you could access in your library was the intersection of the .NET Framework APIs supported on each of those platforms. Fundamentally, that meant the more platforms you checked off, the less of the .NET Framework you got to use (in fact, some combinations would take out whole versions of Visual Studio).
The process wasn't dynamic, however. In reality, what you were picking was one of a set of predefined profiles that combined various .NET Framework APIs.
Microsoft has a new approach: .NET Standard Class Library projects. A .NET Standard Class Library consists of those APIs that "are intended to be available on all .NET implementations." The news here is that there is only one Standard and it supports all the .NET platforms -- no more profiles agglomerated into an arbitrary set of interfaces.
The catch here is that the Standard may not include something you want ... at least, not yet. With PCLs there was always the possibility that, if you dropped one of the platforms you wanted to support, you might pick up the API you wanted. That's not an option with the Standard, which is monolithic. In some ways it's like setting the version of the .NET Framework you want to support in your project's properties: The lower the version you pick, the less functionality you have.
Obviously, then, what matters in the .NET Standard is comprehensiveness. There have been several iterations of the .NET Standard specification, each of which includes more .NET Framework APIs. The latest version (as of June 2018) is .NET Standard 2.0 and (like version 1.3 before it) it's a real watershed in terms of adding common functionality -- more than 5,000 APIs. With version 2.0 there's a very high likelihood that what you want to use is in the Standard.
You can check out the whole list here. The page also includes links to a list of namespaces and APIs added in any version of the .NET Standard. It's telling that the API list for version 2.0 is too big to display in anything but its raw format.
Posted by Peter Vogel on 06/26/2018 at 7:29 AM
It's easy to miss that you've opened a read-only file in Visual Studio: When you open a file you can't change, a tiny little lock icon appears on the tab of the editor window to the right of the file's name. By default, Visual Studio won't even tell you that you can't change the file until -- after you've made all your changes, of course -- you try to save the file. Only then do you get the bad news with a dialog that gives you three choices:
- You can create a new file
- Attempt to overwrite the file (that is, attempt to make the file writeable)
- Cancel and go back to the file which holds a ton of changes you can't save
Notice the absence of an "Oh, just throw everything away" option.
If you'd prefer to know about this problem before you start making your changes, you just need to set an option in Visual Studio. Go to Tools | Options | Environment | Documents and uncheck the option called "Allow editing of read-only files; warn when attempt to save."
Now, when you start to make changes to a read-only file you'll get that dialog box asking if you want to create a new file, make the file writeable, or cancel. This time, the Cancel option will return you to a file that you haven't invested any time in.
By the way, and for the record, the "make writeable" option never works. It's just there to give you hope ... and then crush it.
Posted by Peter Vogel on 06/20/2018 at 10:50 AM
I have two separate styles for using the DbContext object. In one style I create the DbContext object when my class is instantiated, either as part of defining a field for my class:
Public Class CustomerRepository
  Dim db As New CustomerEntities
Or in my class's constructor:
Public Class CustomerRepository
  Dim db As CustomerEntities
  Public Sub New()
    db = New CustomerEntities
  End Sub
I use this style when all the methods in the class use the DbContext object and I expect the methods to be called independently of each other (and in a variety of different combinations). I shouldn't admit this, but I frequently forget to call the Dispose method at the end of those methods.
My other style is to leverage the Using keyword, like this:
Using db = New CustomerEntities
This is the style I follow when my Entity Framework code is integrated with other processing (typically, other EF code). The primary reason I use Using in this style is to ensure that the Dispose method is called -- the End Using statement that marks the end of the block will make sure that happens. Basically, I'm compensating for my failures in the previous style.
It turns out that I needn't have felt bad about those missing calls to Dispose. A few quick tests with performance monitor will show that it's difficult (I would say "impossible") to detect any difference between applications that call the DbContext's Dispose method and those that do not.
There is, as always, one exception: when you take control of opening and closing the Connection object available through the DbContext object. In that scenario, it's entirely possible that you may forget to close your open Connection, something that DbContext won't let happen if you leave control of the Connection up to it.
Leaving a Connection open is bad because it defeats connection pooling (an open Connection object ties up a connection at the database, forcing other applications to create new connections at the database). Calling the Dispose method ensures that your Connection is closed.
So, as long as you let DbContext manage your connections, feel free to ignore the Dispose method. On the other hand, if you're managing your Connections, the Dispose method may be your bestest friend.
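In case it helps, here's a minimal C# sketch of that exception (CustomerEntities and its Customers collection are stand-ins for your own EF6-style model): because I've taken control of the connection, I wrap the context in a using block so that Dispose runs and closes the connection no matter what happens:
// Sketch only: CustomerEntities/Customers are assumed names for an EF6-style model
// (requires System.Data.Entity and System.Linq)
using (var db = new CustomerEntities())
{
    // Taking control of the connection -- closing it is now my problem
    db.Database.Connection.Open();

    var customers = db.Customers.ToList();
    // ... more queries over the same open connection ...

}  // Dispose runs here, closing the connection and returning it to the pool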
Posted by Peter Vogel on 06/18/2018 at 3:15 PM
As Philip Jaske mentioned in his interview with Becky Nagel, one of the cool things in ASP.NET Core is the ability to rewrite incoming URLs to "fix up" a request. There are lots of reasons to do this, the primary one being that it gives you the flexibility to move server-side resources to new URLs: You just rewrite incoming requests using the old URLs to point to your new URLs.
But it does raise the question of when you should use URL Rewriting instead of sending an HTTP redirect to the client.
On the face of it, HTTP redirects are inefficient because they require you to send a redirect response to the client (network latency) with the new URL. That, in turn, requires the client to resend its request to the new URL provided in the redirect (even more network latency).
If the HTTP redirect is implemented with a 301 code (indicating that the change in URLs is permanent), then the client should automatically replace the original URL with the new, redirected URL when a request to the old URL is issued. That should eliminate the network latency on any future requests ... but the reality is that most hand-crafted consumers used to call Web Services aren't set up to do that.
URL rewriting should be more efficient than redirects because the client makes a single trip to the server: All the redirection is handled on the server. In addition, if you're using URL rewriting to support moving services to new URLs, you don't have to leave anything at the old URL just to return redirects. As an added feature, the client never sees the rewritten URL (unlike a redirect, where the new URL is sent to the client), which may or may not be important to you. Effectively, the difference is similar to using a Server.Transfer versus a Response.Redirect in ASP.NET.
However, I say rewriting "should" be more efficient than redirects because Microsoft notes that if your rules get complex enough (or if you have "too many" rules), rewriting has the potential to slow performance on your site. If you use URL rewriting, then you'll want to monitor response times in case they start increasing.
You have other options. IIS has the URL Rewrite extension which is tightly coupled with IIS and will give you better performance. However, the URL Rewrite extension requires you to set up rewrite rules in IIS manager, moving rewriting out of your control as a developer and into the hands of your site administrator. ASP.NET Core's URL rewriting feature lets you keep your hands on the reins.
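As a point of reference, here's roughly what the ASP.NET Core version looks like (the URL patterns are invented for this example): you build up a RewriteOptions object and hand it to the UseRewriter middleware in your Startup class's Configure method:
// In Startup.Configure; the rewrite rules live in Microsoft.AspNetCore.Rewrite
var options = new RewriteOptions()
    // Quietly rewrite old URLs to new ones on the server -- the client never sees it
    .AddRewrite(@"^api/oldcustomers/(\d+)", "api/customers/$1", skipRemainingRules: true)
    // For comparison: send the client a permanent (301) redirect instead
    .AddRedirect(@"^api/oldorders/(.*)", "api/orders/$1", statusCode: 301);

app.UseRewriter(options);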
Posted by Peter Vogel on 06/18/2018 at 3:39 PM
When I started creating Web Services, I was using ADO.NET DataSets to retrieve data and then sending that data to my consumers using XML. Those Web Services are still there, but my consumers now want JSON.
The good news is that I don't have to rewrite my code to return the data in the right format. While I could switch to using SQL Server's new ability to convert query results into JSON, the existing code has that whole "working" feature that people like so much -- I have no desire to replace it.
The people who created NewtonSoft.JSON saw this problem coming and provided a solution for converting DataSet tables into JSON. First, you need to extract the DataTable holding your rows from your DataSet:
Dim dt As DataTable
dt = MyDataSet.Tables("Customers")
Then create a JsonSerializer:
Dim js As JsonSerializer
js = JsonSerializer.CreateDefault
At this point you could set properties on the JsonSerializer to control how your JSON will be generated.
Next, pass your DataTable and Serializer to the FromObject method on NewtonSoft's JArray class. The FromObject method will convert all the rows in your DataTable into an array of JToken objects, held in a JArray object:
Dim rows As JArray
rows = JArray.FromObject(dt, js)
Now you can send the whole collection (the rows JArray) to the consumer.
Alternatively, you can use LINQ to pull out just the rows you want. This gets the first row, for example, and sends it to the consumer:
Dim row As JToken
row = rows.FirstOrDefault
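For C# developers, the same process (assuming the MyDataSet, js, and rows names from the Visual Basic code above) collapses to a few lines:
// Convert the Customers DataTable into a JArray of JToken rows
// (requires System.Data, Newtonsoft.Json and Newtonsoft.Json.Linq)
DataTable dt = MyDataSet.Tables["Customers"];
JsonSerializer js = JsonSerializer.CreateDefault();
JArray rows = JArray.FromObject(dt, js);

// Or pull out just the first row for the consumer
JToken row = rows.FirstOrDefault();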
Posted by Peter Vogel on 05/29/2018 at 11:03 AM
If you have change tracking turned on in Visual Studio, then you'll be getting highlights in the right-hand margin of your editor window flagging the condition of lines in the current file. If you're not getting those lines and would like to, then go to Tools | Options | Text Editor and check the Track Changes option.
Here's your quick reference to the colors and icons in the editor window's right-hand margin:
- Yellow: The line has been changed but not yet saved
- Green: The line has been changed and saved
- Orange: The line has been changed, saved, and the change undone
- Little square dots in the middle of the margin: Breakpoints
- Little square dot on the right side of the margin: Syntax error
- Gray block: The portion of the file that's currently being displayed
- Solid blue line: The current position of the cursor
Posted by Peter Vogel on 05/22/2018 at 1:17 PM
In general, it's considered rude to seal classes because it prevents other developers from extending the class through inheritance. However, when you declare a base class it's considered perfectly acceptable to mark some methods as overridable/virtual ... and to leave other methods unmarked. Those unmarked methods cannot be overridden by derived classes that inherit from the base class. Essentially, the base class developer is saying that these methods are essential to the nature of the class and modifying them (or properties, for that matter) would distort the class.
But what about the derived class? It's not hard to imagine a derived class that overrides a method in a way that is essential to the nature of the derived class. Sealing the derived class to prevent a new class inheriting from it would be considered rude. However, like the developer of the original base class, the developer of the derived class should be allowed to say that some changes are not allowed.
This is the role of the NotOverridable/sealed keywords: They allow a developer to mark an overridable method (or property) as no longer overridable. As an example, here's a CreditApproval method that overrides a method in the base class but has been marked to prevent any further overriding. First, in Visual Basic:
Public NotOverridable Overrides Function CreditApproval() As Boolean
Now, in C#:
public override sealed bool CreditApproval()
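To put those signatures in context, here's a minimal C# sketch (the Customer/PremiumCustomer/PlatinumCustomer class names are made up for illustration):
public class Customer
{
    // The base class allows this method to be overridden
    public virtual bool CreditApproval() { return true; }
}

public class PremiumCustomer : Customer
{
    // Override it here, but prevent classes that inherit from
    // PremiumCustomer from overriding it again
    public override sealed bool CreditApproval() { return false; }
}

public class PlatinumCustomer : PremiumCustomer
{
    // This would now be a compile-time error:
    // public override bool CreditApproval() { return true; }
}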
Posted by Peter Vogel on 04/26/2018 at 5:03 AM
I just read another discussion of Enums in .NET where the author was all excited about the fact that (under the hood) a named, enumerated value is actually stored as a number. There are ways, in both Visual Basic and C#, to use those numeric values.
I'm not going to show you how to do that because it's wrong, wrong, wrong. The point of using enumerated values is to get away from embedding magic numbers in your code and, instead, replace those values with meaningful names. Accessing the numeric value (a textbook example of an "implementation detail") violates the purpose of setting up an enumerated value in the first place.
More importantly, using those numeric values is just an accident looking for a place to happen because those numeric values are assigned positionally. If you're using those values in some "clever" way (sarcasm intended) then your code will break if someone inserts a new value into your Enum. At that point, every subsequent enumerated value gets assigned a new numeric value.
I do make one exception: If I want to be able to add two named values together to get a new value, then I use bit flags. But bit flags work by explicitly assigning every enumerated value a numeric value (no positional assignments) and then using the enumerated names without referring to the underlying numeric values. That's restrictive enough that I don't feel I'm violating my principles when I take advantage of it.
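Here's what that exception looks like in C# (a sketch with made-up permission names): each member gets an explicit power-of-two value, and the calling code combines and tests the names without the numbers ever appearing:
[Flags]
public enum AccessRights
{
    None = 0,
    Read = 1,
    Write = 2,
    Delete = 4
}

// Combine two named values to get a new value ...
AccessRights rights = AccessRights.Read | AccessRights.Write;

// ... and test for a named value, still without touching the underlying numbers
bool canWrite = rights.HasFlag(AccessRights.Write);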
Posted by Peter Vogel on 04/23/2018 at 3:07 PM
As part of putting together a request to a Web Service, I'm perfectly willing to modify the headers in the request to carry some data rather than put that data in the body of the request. There is a risk here because some proxy servers will strip out any headers they don't recognize. However, in an SSL request, headers are encrypted and, as a result, not visible to proxy services. To ensure that my custom headers aren't stripped out I only use this technique where all requests are traveling over SSL.
My rule for deciding whether data should go into the header vs. the body is driven by the way the data is being used. If this is information that's independent of the request (that is, something used in a variety of requests) and is used to control the processing of the request, then I'm more likely to put the data in the request header. Security-related information is a good example.
But I recognize that adding custom headers also reduces interoperability. I obviously can't include a custom header of my own when sending a request to someone else's service. Even when designing a request to be sent to my own service, I have to recognize that there are toolsets that make it difficult/impossible to alter headers (at least, I'm told that such toolsets exist). If I do create a header, I need to make it clear what will happen to clients that don't provide it.
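For what it's worth, on the client side adding a custom header is a one-liner with HttpClient (the header name and URL here are invented for the example):
// Inside an async method; X-Tenant-Id is a made-up header name
var client = new HttpClient();
client.DefaultRequestHeaders.Add("X-Tenant-Id", "Contoso");

HttpResponseMessage response = await client.GetAsync("https://example.com/api/customers");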
Posted by Peter Vogel on 04/16/2018 at 9:03 AM
So, you know the class you need but you don't know what class library it's in. How do you add the right reference to your project? Object Browser will let you do it in two steps.
You can do that because "Object Browser" is patently misnamed -- to begin with, it displays classes, not objects. And it isn't limited to classes, either: It also displays namespaces, enums, structs, interfaces, and class members (events, properties and so on).
If you know what class (or interface or enum or, even, member) you want, you can search for it in Object Browser using the search box at the top of Object Browser's window. Once you find what you're looking for, just click on the Add to References icon at the top of Object Browser to add a reference to the relevant library to whatever project you have selected in Solution Explorer.
So, it isn't just a browser, either.
Worst. Name. Ever.
Posted by Peter Vogel on 04/05/2018 at 5:18 AM
When you're creating a derived class and your base type is a generic class, you have two choices in implementing your derived class: You can set the type of your derived class or you can make your derived class another generic class.
For example, imagine that you have a class called ReadRepository that accepts a variety of types:
Public Class ReadRepository(Of T)
If you create a CustomerRepository that inherits from ReadRepository, you might choose to set the type of your base class:
Public Class CustomerRepository
Inherits ReadRepository(Of Customer)
That's the strategy to follow when your derived class adds functionality specific to a datatype (in this case, I'm adding functionality specific to the Customer class).
On the other hand, if you wanted to create an UpdateAndReadRepository, you might choose to have your new class also be a generic class. In that case, your derived class also accepts a type placeholder and passes that placeholder to the base class:
Public Class UpdateAndReadRepository(Of T)
Inherits ReadRepository(Of T)
This is the strategy to follow if you're extending the base class with functionality that can be used with a variety of classes -- in this case, adding Update capabilities for any class.
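In C#, the same two strategies look like this (using the same made-up repository classes):
// Strategy 1: close the generic type -- this repository only works with Customer
public class CustomerRepository : ReadRepository<Customer>
{
    // Customer-specific functionality goes here
}

// Strategy 2: stay generic -- pass the type placeholder through to the base class
public class UpdateAndReadRepository<T> : ReadRepository<T>
{
    // Update functionality that works with any T goes here
}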
Posted by Peter Vogel on 04/02/2018 at 5:44 AM
Every once in a while, I end up with a bunch of collections in memory and need the ability to pick the collection I want by name. For example, I might need a bunch of Customer collections, one for each city where I have a customer.
Furthermore, I'd like to have those collections organized into a larger collection called CityCusts so I can pull out all of the Customers for any specific city.
The code I want to use to retrieve the Customers collection for a city would look like this:
Dim custs As IEnumerable(Of Customer)
custs = CityCusts("Regina")
I could build that CityCusts collection myself by defining a Dictionary collection that holds values that are, themselves, a collection of Customers:
Private CityCusts As Dictionary(Of String, List(Of Customer))
Initializing the individual collections for each Dictionary entry and adding the right customer to the right City collection would be a pain, though. Even using LINQ's Group By syntax is awkward, in both languages.
Fortunately, the Lookup collection makes building that collection a snap. To declare the collection you just specify the type of the key (pretty much always a string) and the type held in the collection. Because the ToLookup method that builds the collection returns the ILookup interface, that's the type to use in the declaration. Here's the declaration for a Lookup collection that holds collections of Customers associated with a city:
Private CityCusts As ILookup(Of String, Customer)
To load the collection, I just start with a collection of Customer objects and call that collection's ToLookup method, passing a lambda expression that specifies which property to use to organize the Customers.
This example creates collections of Customer objects that share the same value in their City property and then stores those collections in CityCusts, under the name of the shared city:
CityCusts = Custs.ToLookup(Function(c) c.City)
In C#, the two lines of code to declare and load the collection would look like this:
private ILookup<string, Customer> CityCusts;
CityCusts = Custs.ToLookup(c => c.City);
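Retrieving the Customers for a city then looks like any other collection access (the indexer returns an IEnumerable of Customer objects):
// Pull out the Regina customers and process them
foreach (Customer cust in CityCusts["Regina"])
{
    Console.WriteLine(cust.City);
}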
Bada bing (as they say), bada boom.
Posted by Peter Vogel on 03/19/2018 at 5:51 AM