In an earlier column, I showed how to access configuration settings in your project's appsettings.json file and then make those settings available throughout your application as IOptions objects.
But you don't have to use the appsettings.json file if you don't want to -- .NET Core will let you hard-code your configuration settings or retrieve them from some other source (a database, perhaps). Wherever you get your settings from, you can still bundle them up as IOptions objects to share with the rest of your application. And your application will neither know nor care where those configuration settings come from.
Typically, you'll retrieve your settings in your project's Startup class (in the Startup.cs file), specifically in the class's ConfigureServices method. If you're not using appsettings.json, creating an IOptions object is a three-step process.
First, you'll need to define a Dictionary and load it with keys that identify your configuration settings and values (the actual settings). That code looks like this:
var settings = new Dictionary<string, string>
{
    // placeholder values -- substitute your own settings
    ["toDoService:url"] = "http://localhost/todo",
    ["toDoService:contracted"] = "true"
};
For my keys, I've used two-part names with each part separated by a colon (:). This lets me organize my configuration settings into sections. My example defined a section called toDoService with two settings: url and contracted.
The second step is to create a ConfigurationBuilder, add my Dictionary of configuration settings to it and use that ConfigurationBuilder to build an IConfiguration object. This IConfiguration object is similar to the one automatically passed to your ConfigureServices method except, instead of being tied to the appsettings.json file, this one is tied to that Dictionary of settings.
Here's that code:
var cfgBuilder = new ConfigurationBuilder();
cfgBuilder.AddInMemoryCollection(settings);
IConfiguration cfg = cfgBuilder.Build();
The third, and final, step is to add an IOptions object to your application's services collection from your own IConfiguration object:
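That's the same Configure method you'd use with an IConfiguration built from appsettings.json. A minimal sketch, assuming the toDoSettings options class from that earlier article:

```csharp
services.Configure<toDoSettings>(cfg.GetSection("toDoService"));
```

The GetSection call pulls out just the settings whose keys begin with "toDoService:", which is why the two-part key names matter.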
Now, as I described in that earlier article, you can use that IOptions<toDoSettings> object anywhere in your application just like an IOptions object built from your appsettings.json file.
Posted by Peter Vogel on 03/26/2019 at 8:42 AM
In many of these tips, I've suggested ways that you might want to change Visual Studio's default configuration. Depending on those customizations isn't always a good thing, though. For example, I've known some developers who, because of some problem, had to re-install Visual Studio and lost all their customizations. I sometimes find myself at a client's site, working on a computer that isn't mine and looking foolish because some customization I depend on isn't there ... or I used to, at any rate.
The solution to both problems is some preventative maintenance: Export your Visual Studio settings to a vssettings file. You can then restore those settings in the event of a disaster or moving to a new machine.
To export your settings, select Import and Export Settings from the Tools menu, pick "Export selected environment settings" and then click the Next button. On the next page, by default, all settings are selected and that's the option I use (I'm concerned that, if I start picking and choosing settings, I'll leave one of my customizations behind). On this page, therefore, all I need to do is click the Next button.
The third and final page allows me to choose where I'll save the resulting vssettings file. I save it to one of my cloud drives so that I won't lose the file if something happens to my computer.
When I need to set up a new instance of Visual Studio, I import the file and get back my own, personalized version of my favorite development environment. When I'm working at a client's site, I first export the settings on the computer I'm using. I then import my vssettings file from my cloud drive (or a USB on one occasion when I wasn't allowed Internet access).
Posted by Peter Vogel on 03/21/2019 at 1:12 PM
By default, if you add a Razor Page to your project's Pages folder, the URL that you use to access that page is based on the Page's file name. So, for example, a Page with the file name CustomerManagement.cshtml is retrieved with the URL http://<server name>/CustomerManagement. There is an exception to this: A Page with the file name Index.cshtml is retrieved with just http://<server name>. This convention extends to subfolders: If that CustomerManagement.cshtml file is in the Pages/Customers folder then its URL is http://<server name>/Customers/CustomerManagement (and Index.cshtml in the same folder has http://<server name>/Customers as its URL).
I have two problems with this. First, the URL http://<server name>/Customers is the same for both a Page in a file named Customers.cshtml directly in the Pages folder and a Page in a file called Index in the Pages/Customers folder. You may say that this problem is one I've created for myself but I'd describe it as an accident waiting to happen ... and, when it does happen, it's a problem that can only be fixed by renaming or moving files. Of course, when you rename or move files, bookmarks all over the world stop working.
This is related to my second problem: Both the names of the files where I keep my code and their location on my Web server's hard disk should be private -- they're nobody's business but my own. Integrating file names and folder locations into your application's UI is the exact opposite of loose coupling, just like incorporating class and method names into your UI.
Fortunately, you have two options to implement loose coupling between URLs and Pages. The option I like is to use the @page directive at the top of your Page's .cshtml file: Just provide it with a string holding the absolute URL that you want to use to access the Page.
For example, this line, inside my CustomerManagement.cshtml file, means that the URL for my Page is now http://<server name>/Customers/Manage:
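In the .cshtml file, that's a single line (the Customers/Manage route here is just my example):

```cshtml
@page "/Customers/Manage"
```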
Alternatively, you can use the AddRazorPagesOptions method when configuring MVC support in your Startup class. This code also establishes http://<server name>/Customers/Manage as the URL for my CustomerManagement Page:
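A sketch of that configuration in ConfigureServices, using the AddPageRoute convention (the page name and route are my example):

```csharp
services.AddMvc()
        .AddRazorPagesOptions(options =>
        {
            options.Conventions.AddPageRoute("/CustomerManagement", "Customers/Manage");
        });
```

Note that AddPageRoute's first parameter is the Page's file-based name, while the second is the URL you want clients to use.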
I need to point out that, with either of these options, the RedirectToPage method must continue to use the Page's file name:
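So, even after remapping the URL, redirecting to my Page still references the file name, like this:

```csharp
return RedirectToPage("CustomerManagement");
```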
Posted by Peter Vogel on 03/15/2019 at 1:08 PM
I never get my code right the first time. And, even after my code passes all its tests, it's still not right. That's because I will have learned a lot about the problem when writing my code (wouldn't it be awful if that didn't happen?). But, unfortunately, much of my code reflects decisions made in an early, more ignorant stage of this learning process. As a result, I typically want to take some time, after the code passes its tests, to rewrite my code and make it "better."
The problem is that my clients need some proof that this rewrite is time well spent. One way to do that is to use Visual Studio's Analyze | Calculate Code Metrics menu choice to generate some hard numbers that show how the code is getting "better."
But, as I tell people all the time, no metric makes sense by itself: You need to compare your code's current numbers to what you had before to see if things are getting (in some sense of the word) "better." What you want to do is save your original numbers so you can compare them to your later, "better" numbers.
You have two ways to do this. One way is, in the Code Metric Results window, just select the metrics you're interested in, right-click on them and select Copy. Now you can paste these metrics into any place you want to keep them -- Excel would be a good choice. Of course, if you're doing that, why not just pick the Open List in Excel option on the Code Metric Results' toolbar? Now you can save those results in a workbook for later reference.
Heck, now that you've got those numbers in Excel, you can create a graph from them. My clients love graphs.
Posted by Peter Vogel on 03/13/2019 at 8:13 AM
You're thinking about making a change to that Transaction class but you're not sure how big an impact that change will have. One way to answer that question is to find all the places that the class is used.
Your first step is to click on the class name (any place you find it will do, by the way). Then press Shift-F12 or right-click on it and pick Find All References. That opens a References window showing the statements that refer to your class (the window typically opens below your editor window).
As useful as that list of references is, I bet you really want to see the context of each of those lines to see how your object is used. Pressing F8 or Shift-F8 will take you to the "next" or "previous" reference; double-clicking on any of the statements in the References list will take you directly to that statement.
Ctrl-Shift-Up Arrow and Ctrl-Shift-Down Arrow will also move you from one reference to another but, sadly, only within the current file.
Posted by Peter Vogel on 03/12/2019 at 12:15 PM
Eighty-five percent of all application development is spent on existing systems, with existing databases. If you want to use Entity Framework's code-first development (where the database schema is an "implementation detail" generated from your object design) and migrations (which modify your existing schema as your object model evolves), how do you do that with an existing database?
I'd suggest that your first step is to generate the object code that represents your existing tables (I use a tool for that). Once you've done that, and assuming you've used the NuGet Package Manager to add Entity Framework to your project, you just need three commands to initialize your .NET Framework project for code-first migrations. Just enter these commands into Tools | NuGet Package Manager | Package Manager Console:
Enable-Migrations
Add-Migration InitialCreate -IgnoreChanges
Update-Database
If you're working in .NET Core, you can skip the first command (Enable-Migrations). In .NET Core, migrations are enabled by default.
Posted by Peter Vogel on 03/11/2019 at 12:25 PM
The Dependency Inversion Principle says "the interface belongs to the client." As I've said elsewhere, adopting this principle means reversing the way applications used to be built: Design the database, build the objects to maintain the tables, wrap a UI around those objects and then bring the users in for training because they'd never figure the application out on their own.
The Dependency Inversion Principle says: Build the UIs that your users will understand (the interface belongs to them), design the objects that will make those UIs easy to build, build those classes, design the objects that make those classes easy to build and carry on until code-first Entity Framework generates the database you need.
You know this, already: the Dependency Inversion Principle is what drives the essential difference between ADO.NET and Entity Framework. With ADO.NET, it was your responsibility to create a connection (because you need a connection to the database), create a correctly configured command object (because you have to issue commands), call the appropriate execute method (because different SQL commands work differently) and then manage fetching rows and turning them into objects (because ... well, you get the picture). In other words, ADO.NET's API was driven by how the ADO.NET objects worked -- the reverse of the Dependency Inversion Principle.
On the other hand, essentially what Entity Framework says is, "Tell me what objects you want and I'll get them for you." Entity Framework provides the API that the application wants: An object-oriented way of retrieving, adding, updating and deleting data.
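To make that contrast concrete, here's a sketch of fetching the same rows both ways (the connection string, table, Transaction class and context are my inventions, not code from any particular application):

```csharp
// ADO.NET: the API mirrors how ADO.NET works
var transactions = new List<Transaction>();
using (var con = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("SELECT Id, Total FROM Transactions", con))
{
    con.Open();
    using (var rdr = cmd.ExecuteReader())
    {
        while (rdr.Read())
        {
            transactions.Add(new Transaction
            {
                Id = rdr.GetInt32(0),
                Total = rdr.GetDecimal(1)
            });
        }
    }
}

// Entity Framework: the API mirrors what the client wants
var efTransactions = context.Transactions.ToList();
```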
There are real cost-savings associated with the Dependency Inversion Principle. Because the principle requires that objects deliver the functionality that the client wants, interfaces tend to be more stable. Following the principle, APIs only change because the client program wants to do something differently (which, when you think about it, is the only reason we should be changing our code). You're welcome to upgrade how your objects work, of course ... but you're not allowed to change the API.
Of course, this level of abstraction isn't free: Entity Framework doesn't have the performance that pure ADO.NET has, even for the scenarios it targets: online, transactional applications. However, it does improve the productivity of developers who, let's face it (and given the current cost of hardware), are the most expensive part of an application -- just ask my client. And, when you do need "bare metal" levels of performance, there's always Dapper.
And, quite frankly, if you wanted the fastest performance, you'd be writing your code in assembler and running it on MS-DOS. Let's not be silly about performance.
Posted by Peter Vogel on 02/27/2019 at 10:55 AM
There are four Redirect helper methods built into your .NET Core controllers that you can use to tell a client that a resource exists ... but not at this URL. In all these cases, the URL you pass to the helper method ends up in your response's Location header, telling the client where to find the resource it originally requested.
The helper methods and when to use them are:
- Redirect: This returns an HTTP 302 status code. This status code tells the client that what they requested can be found at the URL specified in the Location header of the response. However, that resource might be at this URL at some time in the future. If the original request was a POST, it's OK for the client to change that to a GET Request before using the new URL.
- RedirectPermanent: HTTP 301 status code. This code tells the client that the resource won't ever exist at this URL. The Location header should contain a URL that will give the client something like what they requested if a new request is made to that URL. For anything but GET requests, the user should be informed before a request is made to the new URL. As with Redirect, if the original request was a POST, it's OK for the client to change that to a GET Request before using the new URL.
- RedirectPermanentPreserveMethod: HTTP 308 status. This says that the requested resource won't ever exist at this URL. However, this code also says that, if this was a POST request, the new request to the URL specified in the Location header must also be a POST request.
- RedirectPreserveMethod: HTTP 307 status. As with the original Redirect, this tells the client that this redirect is temporary. As with RedirectPermanentPreserveMethod, this code also says that, if the original request was a POST request, the new request to the URL specified in the Location header must also be a POST request.
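As a sketch of how these read in an action method (the action name and URL here are my inventions):

```csharp
public IActionResult SubmitOrder()
{
    // 308: the resource has moved for good and a POST must stay a POST
    return RedirectPermanentPreserveMethod("/api/v2/orders");
}
```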
Posted by Peter Vogel on 02/25/2019 at 7:52 AM
In ASP.NET MVC, the File helper method built into your controller gives you multiple options for retrieving the file you want to send to the client. You can pass the File helper method an array of bytes, a FileStream or the path name to a file depending on whether you want to return something in memory (the array of bytes), an open file (FileStream) or a file on your server's hard disk (a path).
In .NET Core, you still have the File helper method, but it has one limitation: It assumes that any file path you pass it is a virtual path (a relative path within your Web site). To compensate, this new version of the File method simplifies two common operations that require additional code in ASP.NET MVC: checking for new content and returning part of the file when the client includes a Range header in its request.
To handle physical path names in .NET Core, you'll want to switch to using the PhysicalFile helper method. Other than assuming any file name is a physical path to somewhere on your server, this method works just like the new File helper method.
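A sketch of the two methods side by side (the paths and content type are my inventions):

```csharp
// File: virtual path, resolved against the site's web root
return File("/downloads/report.pdf", "application/pdf");

// PhysicalFile: physical path on the server's hard disk
return PhysicalFile(@"C:\Reports\report.pdf", "application/pdf");
```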
Posted by Peter Vogel on 02/22/2019 at 8:29 AM
If you've got Visual Studio 2017, you can run your application in Docker (even with the Community Edition).
First, you'll need to download Docker for Windows. You'll need to decide which operating system will be used inside your containers (Windows or Linux). For ASP.NET Core MVC applications and Web services, it doesn't matter which you pick though, generally speaking, I'd say there are more resources available if you choose Linux.
After Docker for Windows is installed, you'll find the Docker icon sitting in the Notifications popup on your taskbar. Right-click on that and pick Settings to display the Settings dialog. From the left side of the dialog, select Shared Drives. That will give you a list of drives available from your computer. Check off the drives you'll use when running your application and click the Apply button (you'll be asked for your password). Once you've shared your drives, you can close the Settings dialog.
With all that done, starting your Web application in Docker requires just four steps:
- Right-click on your project and select Add | Docker Support. You'll get a dialog asking you to pick what operating system your container should use -- pick the same one you chose when installing Docker for Windows. When the dialog closes, you'll find that a Dockerfile file has been added to your project. That file specifies the container image to be used and the instructions for loading and starting your application.
- In Visual Studio's toolbar, find the dropdown list for the F5/Play button. From the list, select Docker (you'll probably find that it's already switched to Docker).
- Press <F5>. You may get a warning message from your firewall asking you to grant permission for your application but, once you've given that permission, your application should start.
- Brag to your friends about how hip and happening you are.
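For reference, the Dockerfile that Add | Docker Support generates will look something like this minimal sketch (the image tags and assembly name are my guesses; yours will reflect your project and .NET Core version):

```dockerfile
FROM microsoft/dotnet:2.2-aspnetcore-runtime AS base
WORKDIR /app
EXPOSE 80

FROM microsoft/dotnet:2.2-sdk AS build
WORKDIR /src
COPY MyWebApp.csproj ./
RUN dotnet restore
COPY . .
RUN dotnet publish -c Release -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "MyWebApp.dll"]
```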
Posted by Peter Vogel on 02/19/2019 at 7:37 AM
The purpose of Docker is to build containers that hold, potentially, all of the components of an application: the application itself, the database engine, any Web services it requires and so on. That container, unlike a virtual machine, doesn't require its own copy of the operating system, so it takes less space than a VM and starts up/shuts down faster.
The good news here is that there are a bunch of prepared containers, called images, waiting for you on Docker Hub. Many of them are Linux-based but, for .NET Core applications, that's not an issue: Core runs as well on Linux as it does on Windows. Docker Hub is an example of a Docker repository and, if you want, you can create your own repository for your company rather than use Docker Hub.
While you can put all the components of an application in a single container, you can also create individual containers for each component (one for the application, one for the database, one for each Web service). This allows you to upgrade/replace components individually or start multiple copies of one container to improve scalability. When you have multiple containers, you'll want to use Compose to create (and start) an application made up of multiple containers.
In production you'll want to be able to monitor your containers, auto-start the appropriate number of any of containers and automatically restart any container that fails. For that you need an orchestrator -- the elephant in this living room is Kubernetes ... which has its own vocabulary (Kubernetes works with services which are made up of pods, each of which may have one or more containers; servers with pods running on them are called nodes).
You also have Swarm, which allows you to treat all the containers in a group as if they were one service.
It would, of course, help if you knew how all this stuff worked. But, with the right terms (and if you can keep the other person talking), you might be able to get through the interview.
Posted by Peter Vogel on 02/11/2019 at 7:56 AM
I've done at least a couple of articles on how to support adding custom processing to every request to your ASP.NET MVC site (most recently, an article on HTTP Modules and Handlers). In ASP.NET Core the process is very different (of course) but it's actually much simpler.
Your first step to adding some processing to the ASP.NET pipeline is to create a class whose constructor accepts a RequestDelegate object. You should store that RequestDelegate in a property or field because you'll need it later:
public class CheckPhoto
{
    private readonly RequestDelegate rd;

    public CheckPhoto(RequestDelegate next)
    {
        rd = next;
    }
}
Your second step is to add an async method to your class called Invoke and have it accept an HttpContext object and return a Task:
public async Task Invoke(HttpContext ctxt)
In that method you should call the Invoke method on the RequestDelegate passed to your constructor, passing on that HttpContext object (use the await keyword to get the benefits of asynchronous processing). By calling the Invoke method, you're invoking the next module in the processing chain during request processing -- that HttpContext object includes information about the incoming request. When that Invoke method returns, it means all the processing associated with that request is complete (including running your own code, of course) and the HttpContext object holds everything associated with your application's response.
Anything you want to do in terms of processing the incoming request should be done before calling the Invoke method; anything you want to do with the Response, you should do with the HttpContext after calling the Invoke method.
This means a typical Invoke method looks like this:
public async Task Invoke(HttpContext ctxt)
{
    // ... work with HttpContext in handling the incoming request ...

    await rd.Invoke(ctxt);

    // ... work with HttpContext in handling the outgoing response ...
}
To tell your site to use your module, first go to the Configure method in your project's Startup class. In the Configure method, call the UseMiddleware method on the IApplicationBuilder object passed to the method, referencing your class. Since the parameter holding the IApplicationBuilder object is called app, that code looks like this:
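With the class from the example above, that's one line:

```csharp
app.UseMiddleware<CheckPhoto>();
```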
Posted by Peter Vogel on 01/11/2019 at 5:05 PM