When you call Entity Framework's SaveChanges method, Entity Framework has to know which entities have changed in order to figure out what SQL Update/Delete/Insert statements to generate. If you want to find out which entities have changed, you can access that same information through the DbContext object's ChangeTracker property.
The ChangeTracker property holds Entity Framework's DbChangeTracker object, which, in turn, has an Entries method that returns all the objects being tracked. This code, for example, writes to some kind of log the names of the classes with pending changes, along with their current state (Modified, Added, Deleted and so on):
CustomerEntities db = new CustomerEntities();
// ... make changes to objects in db ...
// Record pending changes
foreach (DbEntityEntry e in db.ChangeTracker.Entries())
{
    AuditLog.Write("Name and Status: {0} is {1}", e.Entity.GetType().FullName, e.State);
}
If you want to get the primary key value for the objects being changed, this code will build a string out of all the values for the properties that make up the primary key:
IObjectContextAdapter oca = (IObjectContextAdapter) db;
ObjectStateManager osm = oca.ObjectContext.ObjectStateManager;
ObjectStateEntry ose = osm.GetObjectStateEntry(e.Entity);
string keys = string.Empty;
// Concatenate the value of each property that makes up the entity's key
foreach (EntityKeyMember key in ose.EntityKey.EntityKeyValues)
{
    keys = keys + key.Value + "/";
}
I've assumed in this code that it's very unusual to have an entity whose primary key consists of multiple properties. If that's not true for you, then you should probably use the StringBuilder class to hold your concatenated key values rather than a simple string.
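That StringBuilder version might look something like this (just a sketch, reusing the ose variable from the code above and requiring a using for System.Text):
StringBuilder sb = new StringBuilder();
foreach (EntityKeyMember key in ose.EntityKey.EntityKeyValues)
{
    // Append each key value, separated by slashes
    sb.Append(key.Value).Append("/");
}
string keys = sb.ToString();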
If you do go that extra mile, you could incorporate that information into your log entry. It might also make sense to write to your log as comma-delimited values so you can load the log into Excel for analysis. That code would look like this:
AuditLog.Write("{0},{1},{2}", e.Entity.GetType().FullName, keys, e.State);
Posted by Peter Vogel on 11/27/2018
I've always argued that the only easier way to test your code than using Visual Studio Test is to not test at all. But that doesn't mean that I think Visual Studio Test is perfect.
For example, the ExpectedException attribute, when placed on a test method, lets you check to make sure that your code throws the appropriate exception when something goes horribly wrong. The problem with ExpectedException is that it applies to the whole test method, not just the "code under test." This means that if your test or production code throws that exception anywhere at all, the ExpectedException attribute will tell you that your test has passed. Unfortunately, that exception may or may not have been thrown where you actually expected it to be thrown. That's not quite what you want to test for.
You have better alternatives: the Assert object's ThrowsException and ThrowsExceptionAsync methods. With either of those methods, you specify the exception you expect to get from your method and then pass the code you want to test (as a lambda expression) to the method.
This example checks that the GetCustomer method throws a NullReferenceException when it's called with an empty string:
Customer cust;
Assert.ThrowsException<NullReferenceException>(() => cust = CustomerRepository.GetCustomer(""));
This test will now pass if (and only if) this call to the GetCustomer method throws a NullReferenceException. If any other code in my test method throws that exception (or if my code throws any other kind of exception), my test will be flagged as failed. And that's exactly what I want.
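If the method you're testing is asynchronous, ThrowsExceptionAsync works the same way. Here's a sketch that assumes a hypothetical GetCustomerAsync method (your test method will need to be marked as async):
Customer cust;
await Assert.ThrowsExceptionAsync<NullReferenceException>(
    async () => cust = await CustomerRepository.GetCustomerAsync(""));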
Posted by Peter Vogel on 11/16/2018
How many times have you done this because you wanted to check how the value of some variable changes over time:
- Set a breakpoint
- Wait for Visual Studio to stop on the line
- Check the value of the variables you're interested in
- Press F5 to continue
- Wait for Visual Studio to stop on the line again
- Check the value of the variables you're interested in
- Press F5 to continue
- Repeat ...
There's a better way. You can have your breakpoints automatically print a message to the Output window and keep right on executing. You'll get a reviewable list of all the values your variable has had, listed in order (no more "Wait! What was the last value? I've forgotten.").
How you take advantage of this feature depends on what version of Visual Studio you're using. In older versions of Visual Studio, right-click on a breakpoint and select the When Hit option from the popup menu. In more recent versions of Visual Studio, you still right-click on the breakpoint, but then you select the Actions option.
In the resulting dialog box you can put in any text you want displayed, along with the names of the variables whose values you want displayed. The variable names must be enclosed in curly braces ( { } ). This example displays the value of the EditorCount variable with the text "Count:":
Count: {EditorCount}
You'll also find that you have a Continue Execution option in the dialog. That option is checked by default and, if you leave it checked, Visual Studio won't stop executing when it reaches your breakpoint. Instead, it will log your message to the Output window and keep right on going.
Posted by Peter Vogel on 11/13/2018
In an earlier column I showed how to add custom processing to every request or response that your ASP.NET MVC or ASP.NET Web API site receives or produces. In that column, I offhandedly remarked about the kinds of things you can do to those incoming and outgoing requests. I didn't, however, actually provide the code (just as well, probably, because the column was getting into tl;dr territory). My conscience has caught up with me: Here's the kind of code you can put in a handler or module.
In an ASP.NET MVC module, you're effectively limited to adding headers to the incoming request or outgoing response. You can also work with headers in ASP.NET Web API handlers. This code, applied to the incoming request in either environment, removes the Accept header and adds a replacement (assuming that the variable r points to a request or response, of course):
r.Headers.Remove("Accept");
r.Headers.Add("Accept", "text/html");
In an ASP.NET Web API handler, you can also replace the content of the incoming request message, like this:
request.Content = new System.Net.Http.StringContent("{'custid':'A123'}", Encoding.UTF8, "application/json");
If you use this technique, the default model binding process won't load the parameters in your Action method with the content you've just inserted into the message. You'll need to write code like this in your Action method to retrieve that content:
public async Task Get(string Id)
{
    // Retrieve the content inserted into the message by the handler
    string result = await Request.Content.ReadAsStringAsync();
    // ... process result ...
}
The code to replace the outgoing response message in a Web API message handler is similar to the code that replaces the incoming request:
resp.Content = new System.Net.Http.StringContent("{'custid':'A124'}", Encoding.UTF8, "application/json");
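To put those two lines in context, here's a sketch of what a complete Web API message handler doing both replacements might look like (the class name and JSON payloads are just placeholders, and it assumes usings for System.Net.Http, System.Text, System.Threading and System.Threading.Tasks):
public class ContentRewritingHandler : DelegatingHandler
{
    protected async override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Replace the incoming request's content before it reaches the Action method
        request.Content = new StringContent("{'custid':'A123'}", Encoding.UTF8, "application/json");

        HttpResponseMessage resp = await base.SendAsync(request, cancellationToken);

        // Replace the outgoing response's content before it goes back to the client
        resp.Content = new StringContent("{'custid':'A124'}", Encoding.UTF8, "application/json");
        return resp;
    }
}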
Posted by Peter Vogel on 11/05/2018
In Visual Studio, if you want to find out what that xCt_1 variable really is or see the definition for that TranX class, the easiest way is to click on the variable or class name and press F12. You'll be taken straight to the statement that declares that variable or defines that class.
Once you get to the definition, if you want to see where that variable or object is used, press Shift+F12 and you'll open a window showing all the statements that use your selected name. Clicking on any one of those lines in the window will take you to the statement in the editor.
In the latest version of Visual Studio 2017, getting to a name's definition is even easier: hover your mouse over the variable or class name, press and hold the Control key, and then click on the name. You're taken straight to the definition.
Posted by Peter Vogel on 10/29/2018
Short, short tip: If you're working in an ASP.NET Core controller, you may not have noticed that you have some new methods for generating a response to go back to the client: Ok, Created, NotFound and others. Each of these methods returns a response with the corresponding standard HTTP status code, and each accepts parameters that let you pass in some content to be formatted into your return message's body.
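For example, here's a sketch of an action method that uses two of these helpers (the Customer class and repository object are hypothetical):
[HttpGet("{id}")]
public IActionResult GetCustomer(int id)
{
    Customer cust = repository.Find(id); // hypothetical data access
    if (cust == null)
    {
        return NotFound("No Customer with Id " + id); // 404, with a message in the body
    }
    return Ok(cust); // 200, with the Customer formatted into the body
}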
Saying "Everything went swell" or "Sorry about that" has never been easier.
Posted by Peter Vogel on 10/26/2018
You can significantly reduce the time your users wait to see your ASP.NET pages by bundling your site's JavaScript files into a single download. You're not limited to bundling your own script files, however: There's nothing stopping you from including files from the Content Delivery Network (CDN) of your choice. All you have to do is pass the URL at the CDN as the second parameter when you create a ScriptBundle, like this:
bundles.UseCdn = true;
bundles.Add(new ScriptBundle("~/bundles/jquery",
                "https://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.7.1.min.js")
                .Include( ... local script files ... ));
Microsoft recommends that you provide a fallback when you use a CDN, in case the CDN isn't available, and suggests a script tag like the one shown below, placed after the script elements that fetch your script files. This code checks to see whether the objects in the CDN file have been downloaded (jQuery, in my example) and fetches a local copy of the file if they haven't been:
<script type="text/javascript">
    if (typeof jQuery == 'undefined') {
        var e = document.createElement('script');
        e.src = '@Url.Content("~/Scripts/jquery-1.7.1.js")';
        e.type = 'text/javascript';
        document.getElementsByTagName("head")[0].appendChild(e);
    }
</script>
Personally, I think it's more likely that your scripts won't be available.
You'll want to test this, of course, but ASP.NET will steadfastly refuse to bundle your scripts when running in debug mode. If you want, you can add this line of code to your BundleConfig file in your project's App_Start folder to enable bundling even in debug mode:
BundleTable.EnableOptimization = true;
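Put together, the relevant parts of the BundleConfig file might look something like this (a sketch, using the same jQuery bundle as above):
public class BundleConfig
{
    public static void RegisterBundles(BundleCollection bundles)
    {
        bundles.UseCdn = true;
        bundles.Add(new ScriptBundle("~/bundles/jquery",
                        "https://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.7.1.min.js")
                        .Include("~/Scripts/jquery-1.7.1.js"));

        // Bundle scripts even when the site is running in debug mode
        BundleTable.EnableOptimization = true;
    }
}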
Posted by Peter Vogel on 10/25/2018
I've written about managing your skills portfolio. That includes developing skills that are currently "niche" so that they may become your future "current" skillset -- and generate some lucrative jobs along the way. If you asked me now what I thought your next niche skillset should be I'd say Blazor ... but to do it through Xamarin.Forms. Here's why.
When I'm working in Xamarin.Forms, I create my UI in XAML. I then build my application and get packages that run on Android, iOS and Universal Windows Platform (UWP) using native UIs.
That makes Xamarin.Forms sound suspiciously like one of those universal "application generators" that I'm normally suspicious of. However, Xamarin doesn't fall into that category: Xamarin just generates the platform-specific UI and a deployment package -- I still have to write all my business logic in C#. When I'm writing that code I have access to all the classes in the .NET Standard library (which means, effectively, almost everything that runs on Mono).
Lately, I've been experimenting with Blazor, which allows me to write code in C#, access the classes in the .NET Standard library, and deploy the resulting package to any of the current crop of Web browsers (thanks to an industry standard called WebAssembly that supports downloading a version of Mono to the browser). Effectively, Blazor can take JavaScript out of the equation when creating Web apps.
With JavaScript gone, is there any reason that my Xamarin.Forms XAML couldn't generate HTML-based UIs as effectively as it generates Android/iOS/UWP-based UIs? If not, and I can truly create code that runs anywhere, why wouldn't I want to build my next application in Xamarin.Forms?
I'm not the only person asking that question, by the way, as a quick search on the terms "Blazor Xamarin.Forms" shows. Makes me wonder about the technologies I should be acquiring.
Posted by Peter Vogel on 10/17/2018
While I’m opposed to writing comments in code, even I recognize the value of comments placed on a class or method declaration (I’m excluding properties because most don’t require commenting). Presumably, if you’re writing these comments it’s with the hope that someone will, someday, read them ... and it would be awfully embarrassing if you misspelled things in those comments.
If that sounds like a problem worth addressing, go to Visual Studio’s Tools menu and select the Extensions and Updates menu choice. In the resulting dialog, select Online from the tabs on the left and enter “Spell Check” (with the space in the middle) in the search box. You’ll get a list of spell checkers that you can add to Visual Studio but, in Visual Studio 2017, you’ll also get Eric Woodruff’s Visual Studio Spell Checker. It’s an extension of an earlier spell checker for Visual Studio (and that earlier version is still available through GitHub if you don’t find it in Extensions and Updates).
After downloading the extension, you’ll need to shut down Visual Studio and wait patiently for Visual Studio’s installer to appear. Clicking the Modify button in the installer window will install Spell Checker. Once you restart Visual Studio, you’ll find a new Spell Checker choice on Visual Studio’s Tools menu with a sub-menu containing lots of options.
If you pick the option to spell check your whole solution, then you’ll find that Spell Checker checks all comments and all strings -- probably finding more errors than you care to do anything about (for example, I wouldn’t consider “App.config” in a comment to be an error). Fortunately, you can train Spell Checker to ignore words (like, for example, “App.config”) or configure what Spell Checker checks (through Tools > Spell Checker > Edit Global Configuration).
You can find out more about Spell Checker here. It would be a shame if some later programmer thought less of you because you spelled something wrang.
Posted by Peter Vogel on 10/16/2018
As I discussed in an earlier column, SQL Server keeps a plan cached for each query it sees (assuming the query requires planning in the first place, of course). That's great for speeding up processing the next time that query shows up because the plan for the query can simply be pulled from the cache.
However, there are any number of queries in any application that SQL Server may never see again (at least in, for example, the next 24 hours). The plans for these "one-time" queries are taking up space in the cache even though they might never be used again. The cache manager will recover space if necessary by keeping track of plans with low planning costs and discarding those plans as the cache runs out of space. However, that doesn't address the space used by those "one-time" plans.
You can help the cache manager out by turning on (or having your DBA turn on) the option to optimize SQL Server for ad hoc workloads. With this option turned on, the first time a query shows up, the identifiers for storing its plan in the cache are created, along with a stub for the plan. The plan itself, however, isn't added to the cache. It's only on the second occurrence of the query (or one like it) that the plan is added to the cache. The basic assumption here is that if a query appears twice, it will appear many times. The only cost is that the plan has to be generated twice.
In addition to reducing the amount of memory held by the cache, this option will give you some idea of how many "one-off" queries you have. After this option is turned on, you can check to see how many stubs you have in the cache. If you don't have many, then it indicates that most of your queries are run multiple times and optimizing for ad hoc queries probably isn't doing you much good.
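If you want to try this, the option can be turned on and the stubs counted with T-SQL along these lines (a sketch; "optimize for ad hoc workloads" is an advanced option, so it has to be exposed first):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;

-- Count the plan stubs currently sitting in the cache
SELECT COUNT(*) AS StubCount
FROM sys.dm_exec_cached_plans
WHERE cacheobjtype = 'Compiled Plan Stub';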
If you have lots of stubs, it indicates that you've saved yourself space in your cache -- this was a change worth making.
But it might be telling you something else. It seems to me that an application with many unique queries is an unusual kind of application. I'd wonder if the application was generating queries in such a way that SQL Server didn't recognize that it could reuse its plans. I'd be interested in finding a way to "standardize" those queries to allow SQL Server to reuse their plans.
Posted by Peter Vogel on 10/15/2018
In another column, I describe how you can, from JavaScript, call methods on C# objects defined in Blazor pages. As that sentence implies, however, there's no way to access properties on those objects ... at least, no official, documented way.
It can be done, however. To make a method on a class accessible from JavaScript, you decorate the method with the JSInvokable attribute. You can, it turns out, do the same thing with properties, like this:
public string FirstName { [JSInvokable] get; [JSInvokable] set; }
Once you've done that, you can read and set the FirstName property from a JavaScript function by treating the property's getter and setter like methods. This JavaScript code, for example, sets the FirstName property to "Jan" and then reads the value back out of the property (see that earlier column for all the ugly details):
cust.invokeMethod("set_FirstName", "Jan");
var firstName = cust.invokeMethod("get_FirstName");
While I've used an auto-implemented property here, this also works with "fully implemented properties" that have explicit getters and setters.
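A fully implemented version would look something like this (a sketch):
private string firstName;
public string FirstName
{
    [JSInvokable] get { return firstName; }
    [JSInvokable] set { firstName = value; }
}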
The Blazor documentation doesn't mention this "feature," which may mean that it's just a "happy accident" (it certainly seems to depend on the internal implementation of properties). As such it may be wiped out in the next release of Blazor or replaced with some better, slicker syntax.
But if you're working with Blazor and really want to access properties, there it is.
Posted by Peter Vogel on 10/10/2018
Assuming you're using the latest version of Entity Framework, the easiest way to update your database is to use the DbContext object's Entry method: It's just two lines of code, no matter how many properties your object has.
As an example, here's some code that accepts an object holding updated Customer data, retrieves the corresponding Customer entity object from the database, and then gets the DbEntityEntry object for that Customer object:
public void UpdateCustomer(Customer custDTO)
{
    CustomerEntities ce = new CustomerEntities();
    Customer cust = ce.Customers.Find(custDTO.Id);
    if (cust != null)
    {
        DbEntityEntry<Customer> ee = ce.Entry(cust);
Now that you have the Entry object for the object you retrieved from the database, you can update the current values on that retrieved entity object with the data sent from the client:
ee.CurrentValues.SetValues(custDTO);
The custDTO object doesn't have to be a Customer: SetValues will update properties on the retrieved Customer object from any properties on custDTO that have matching names.
It also matters that this code updates the retrieved entity object's current values. Entity Framework also keeps track of the original values from when the Customer object was retrieved and uses those to determine what actually needs to be updated. Because the original values are still in place, the DbContext object can tell which properties had their values changed by the data copied from custDTO through SetValues. As a result, when you call the DbContext object's SaveChanges method, only those properties whose values were changed will be included in the SQL Update command sent to the database.
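Pulled together, with the SaveChanges call added, the whole method looks something like this (a sketch):
public void UpdateCustomer(Customer custDTO)
{
    CustomerEntities ce = new CustomerEntities();
    Customer cust = ce.Customers.Find(custDTO.Id);
    if (cust != null)
    {
        DbEntityEntry<Customer> ee = ce.Entry(cust);
        ee.CurrentValues.SetValues(custDTO);
        ce.SaveChanges();
    }
}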
Of course, you did have to make a trip to the database to retrieve that Customer entity object. If you'd like to avoid that trip (and, as a result, speed up your application) you can do that, but it does require more code.
Posted by Peter Vogel on 10/04/2018