Spell Check Your Comments in Visual Studio

While I’m opposed to writing comments in code, even I recognize the value of comments placed on a class or method declaration (I’m excluding properties because most don’t require commenting). Presumably, if you’re writing these comments it’s with the hope that someone will, someday, read them ... and it would be awfully embarrassing if you misspelled things in those comments.

If that sounds like a problem worth addressing, go to Visual Studio's Tools menu and select the Extensions and Updates menu choice. In the resulting dialog, select Online from the tabs on the left and enter "Spell Check" (with the space in the middle) in the search box. You'll get a list of spell checkers that you can add to Visual Studio but, in Visual Studio 2017, you'll also get Eric Woodruff's Visual Studio Spell Checker. It's an extension of an earlier spell checker for Visual Studio (and that earlier version is still available through GitHub if you don't find it in Extensions and Updates).

After downloading the extension, you’ll need to shut down Visual Studio and wait patiently for Visual Studio’s installer to appear. Clicking the Modify button in the installer window will install Spell Checker. Once you restart Visual Studio, you’ll find a new Spell Checker choice on Visual Studio’s Tools menu with a sub-menu containing lots of options.

If you pick the option to spell check your whole solution, then you’ll find that Spell Checker checks all comments and all strings -- probably finding more errors than you care to do anything about (for example, I wouldn’t consider “App.config” in a comment to be an error). Fortunately, you can train Spell Checker to ignore words (like, for example, “App.config”) or configure what Spell Checker checks (through Tools > Spell Checker > Edit Global Configuration).

You can find out more about Spell Checker here. It would be a shame if some later programmer thought less of you because you spelled something wrang.

Posted by Peter Vogel on 10/16/2018 at 8:31 AM

Speeding Up SQL Server: Planning for One-Time Queries

As I discussed in an earlier column, SQL Server keeps a plan cached for each query it sees (assuming the query requires planning in the first place, of course). That's great for speeding up processing the next time that query shows up because the plan for the query can simply be pulled from the cache.

However, there are any number of queries in any application that SQL Server may never see again (at least in, for example, the next 24 hours). The plans for these "one-time" queries are taking up space in the cache even though they might never be used again. The cache manager will recover space if necessary by keeping track of plans with low planning costs and discarding those plans as the cache runs out of space. However, that doesn't address the space used by those "one-time" plans.

You can help the cache manager out by turning on (or having your DBA turn on) the option to optimize SQL Server for ad-hoc workloads. With this turned on, the first time a plan is created, the identifiers for storing the plan in the cache are created, along with a stub for the plan. The plan itself, however, isn't added to the cache. It's only upon the second occurrence of the query (or one like it) that the plan is added to the cache. The basic assumption here is that if a query appears twice, it will appear many times. The only cost is that the plan has to be generated twice.
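If you (or your DBA) prefer T-SQL to the SSMS dialogs, the setting can be flipped with sp_configure. It's an advanced option, so advanced options have to be made visible first:

```sql
-- Expose advanced options, then turn on ad hoc optimization
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;
```

The change takes effect immediately and doesn't require a restart, though existing cached plans aren't converted to stubs retroactively.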

In addition to reducing the amount of memory held by the cache, this option will give you some idea of how many "one-off" queries you have. After this option is turned on, you can check to see how many stubs you have in the cache. If you don't have many, then it indicates that most of your queries are run multiple times and optimizing for ad hoc queries probably isn't doing you much good.
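One way to do that check (a standard technique, not something from this column) is to count entries in the plan cache DMV by type; stubs get their own cacheobjtype value:

```sql
-- Compare full plans to stubs in the plan cache
SELECT cacheobjtype, COUNT(*) AS Plans
FROM sys.dm_exec_cached_plans
GROUP BY cacheobjtype;
-- One-off queries show up as cacheobjtype = 'Compiled Plan Stub'
```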

If you have lots of stubs it indicates that you've saved yourself space in your cache -- this was a change worth making.

But it might be telling you something else. It seems to me that an application with many unique queries is an unusual kind of application. I'd wonder if the application was generating queries in such a way that SQL Server didn't recognize that it could reuse its plans. I'd be interested in finding a way to "standardize" those queries to allow SQL Server to reuse their plans.

Posted by Peter Vogel on 10/15/2018 at 9:45 AM

A Blazor Tip You Should Almost Certainly Ignore

In another column, I describe how you can, from JavaScript, call methods on C# objects defined in Blazor pages. As that sentence implies, however, there's no way to access properties on those objects ... at least, no official, documented way.

It can be done, however. To make a method on a class accessible from JavaScript, you decorate the method with the JSInvokable attribute. You can, it turns out, do the same thing with properties, like this:

public string FirstName { [JSInvokable] get; [JSInvokable] set; }

Once you've done that, you can read and set the FirstName property from a JavaScript function by treating the property's getter and setter like methods. This JavaScript code, for example, sets the FirstName property to "Jan" and then reads the value back out of the property (see that earlier column for all the ugly details):

cust.invokeMethod("set_FirstName", "Jan");
var fullName = cust.invokeMethod("get_FirstName");

While I've used an auto-implemented property here, this also works with fully implemented properties with explicit getters and setters.
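For a fully implemented property, the attributes go on the explicit accessors in the same way. A sketch (the backing field name here is mine):

```csharp
private string firstName;

public string FirstName
{
    [JSInvokable] get { return firstName; }
    [JSInvokable] set { firstName = value; }
}
```

Since the accessors still compile down to get_FirstName and set_FirstName methods, the same invokeMethod calls from JavaScript should work unchanged.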

The Blazor documentation doesn't mention this "feature," which may mean that it's just a "happy accident" (it certainly seems to depend on the internal implementation of properties). As such it may be wiped out in the next release of Blazor or replaced with some better, slicker syntax.

But if you're working with Blazor and really want to access properties, there it is.

Posted by Peter Vogel on 10/10/2018 at 11:02 AM

Updating Entity Framework Objects with Changed Data

Assuming you're using the latest version of Entity Framework, the easiest way to update your database is to use the DbContext's Entry method: It's just two lines of code no matter how many properties your object has.

As an example, here's some code that accepts an object holding updated Customer data, retrieves the corresponding Customer entity object from the database, and then gets the DbEntityEntry object for that Customer object:

public void UpdateCustomer(Customer custDTO)
{
   CustomerEntities ce = new CustomerEntities();
   Customer cust = ce.Customers.Find(custDTO.Id);
   if (cust != null)
   {
      DbEntityEntry<Customer> ee = ce.Entry(cust);
   }
}

Now that you have the Entry object for the object you retrieved from the database, you can update the current values on that retrieved entity object with the data sent from the client:
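That update is presumably a single SetValues call on the entry's CurrentValues property; a sketch, assuming the ee variable from the previous snippet:

```csharp
// Copy matching property values from the DTO onto the tracked entity
ee.CurrentValues.SetValues(custDTO);
```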


The object you pass to SetValues doesn't have to be a Customer: SetValues will update properties on the retrieved Customer entity with properties from the passed object that have matching names, so a dedicated CustomerDTO class would work just as well.

It also matters that this code updates the retrieved entity object's current values. Entity Framework also keeps track of the original values from when the Customer object was retrieved and uses those to determine what actually needs to be updated. Because the original values are still in place, the DbContext object can tell which properties actually had their values changed by the data from CustomerDTO through SetValues. As a result, when you call the DbContext's SaveChanges method, only those properties whose values were changed will be included in the SQL Update command sent to the database.

Of course, you did have to make a trip to the database to retrieve that Customer entity object. If you'd like to avoid that trip (and, as a result, speed up your application) you can do that, but it does require more code.
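One common way to skip that trip (a sketch of the standard technique, not code from this column) is to attach the incoming object and mark it as modified. The trade-off: with no original values to compare against, Entity Framework treats every property as changed, so every column goes into the Update command:

```csharp
public void UpdateCustomer(Customer custDTO)
{
    using (CustomerEntities ce = new CustomerEntities())
    {
        // Attach without querying the database; EF now tracks custDTO
        ce.Customers.Attach(custDTO);
        // Flag the whole object as modified -- all columns will be updated
        ce.Entry(custDTO).State = EntityState.Modified;
        ce.SaveChanges();
    }
}
```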

Posted by Peter Vogel on 10/04/2018 at 1:51 PM

.NET Tip: Testing Private Fields

If you're interested in this tip, skip the next section -- the next section is just a long-winded explanation of the most recent occasion when I felt I had to use these tools. You can tell, by the length of my apology, that I'm uncomfortable with this tip.

An Apology
Here's my problem: I have a data repository that retrieves data and, after returning the data to the client, holds the data in a cache so that the next request for the same data will run much faster. The cache is a List of objects declared as a field/class-level variable ... and that field is declared as private because the cache is no business of the client and shouldn't be accessible. I should be able to test my repository without considering the cache, right? Either my test gets the right answer or it doesn't and I shouldn't care if the private cache was involved.

Except I do care. For one thing, if the data repository just ignores the cache, then my code is failing. Among other goals, I want my tests to make sure that the cache is being used when appropriate. In addition, all my data retrieval methods accept a parameter that clears the cache. This feature allows the user to choose to get the latest, freshest data from the database. To test that option, I need to know when the cache is cleared.

There are lots of ways of checking for that. I could create a mock database object that would report when the database is accessed (and, by implication, when the cache is being used). However, since I'm using Entity Framework, that would require mocking Entity Framework ... which requires more work than just checking the private field holding the cache (and the more code I write, the more likely it is that my test will have a bug in it). Alternatively, I could insert data into the database to make the database different from the cache and check for those differences ... except this repository is working with a read-only data source (it's actually the output of a stored procedure that assembles several tables).

The Actual Tip
If you want to check a private field during testing, you can use either PrivateObject or PrivateType. PrivateType is the tool to use when accessing static members (that is, members you can call directly from the class name, like Integer.TryParse()); PrivateObject is the tool to use when accessing instance members (that is, members that can only be accessed from an instantiated class, like firstName.Substring(1,3)).

To use PrivateObject, you instantiate that class, passing in the object whose private members you're interested in examining. You can then use the resulting PrivateObject object to access private members by passing the member's name as a string to, for example, the GetField method. You'll need a cast to convert the returned reference to the right type.

This code passes a CustomerRepository object to PrivateObject and then retrieves a field called DataCache, cast as a List of Customer objects:

CustomerRepository cr = new CustomerRepository();
PrivateObject po = new PrivateObject(cr);
List<Customer> cachedData = (List<Customer>)po.GetField("DataCache");
cachedData.Add(new Customer { Id = -99 });

With PrivateObject you can also retrieve properties or elements of an internal array, invoke a private method, or get the Type object for the object that PrivateObject is wrapping.

If I wanted to access a static member, I'd use PrivateType, passing the Type object for the class I'm testing. This code retrieves a static version of the DataCache field in the CustomerRepository class:

PrivateType pt = new PrivateType(typeof(CustomerRepository));
List<Customer> cachedData = (List<Customer>)pt.GetStaticField("DataCache");
cachedData.Add(new Customer { Id = -99 });

Of course, by using this technique I'm just asking for trouble: If I ever change the internal implementation of my cache, this test will stop working and not because there's a bug in my code. So far, I'm OK with that ... but you should check in a couple of years from now. You may find some later programmer cursing both my tests and my name.

Posted by Peter Vogel on 10/03/2018 at 10:29 AM

Controlling Your Visual Studio Default Window Layout

Starting about a week ago, whenever I opened a solution in Visual Studio, Solution Explorer did not appear. Instead, the right-hand side of the Visual Studio window was completely occupied -- top to bottom -- by the Properties window. I have no idea why this happened. I like the Properties window as much as the next developer, but I want it stacked below Solution Explorer (What can I say? I'm a traditionalist). No matter what configuration I left Visual Studio in, the next time I opened it, Solution Explorer was gone. The solution was to restore my default layout.

If you have a window layout that you'd prefer to the one Visual Studio is giving you, you need to be proactive or, as in my case, you might some day lose it. It's easy to set your default layout: Arrange your windows the way you like, go to the Window menu in Visual Studio and pick Apply Window Layout > My Default.

If you want, you can set up several window layouts by picking Window > Save Window Layout. This choice gives you the option of assigning a name to your layout. You can then switch between layouts either through a keyboard shortcut (Ctrl+Alt+<a number>) or by going to Window > Apply Window Layout and picking the layout you want by name. Whichever way you go, you'll get an annoying dialog box asking you to confirm that you want to change layouts, but the dialog box has a "Don't show this again" option that you can check to make it go away forever.

Do remember that Visual Studio tries to remember the windows you last had open so it can restore that layout for you. So, if you open and close windows, you won't necessarily see your default layout again until you pick it from the Apply Window Layout menu. As I said at the start, this is a good reason to set up your "default" window layout right now: Having that default layout in place makes it easy to get back to it after you've played with Visual Studio's windows.

If you ever want to discard a layout or rename it, you can use the Window > Manage Window Layouts choice. If you want to get back to the "factory settings," pick Window > Reset Window Layout. Don't panic if you pick that choice -- it doesn't discard your saved layouts, so you can still return to your preferred layout.

Posted by Peter Vogel on 09/27/2018 at 9:58 AM

Controlling Model Binding in ASP.NET Core

It seems to me like magic when model binding takes data from the client and loads it correctly into the properties of the Customer object in the parameter to this method:

public ActionResult UpdateCustomer(Customer cust)

However, sometimes model binding doesn't do what I'd like. For example, let's say my Customer object looks like this:

public class Customer
{
  public int id {get; set;}
  public string FirstName {get; set;}
  public string LastName {get; set;}
  public int TotalOrders {get; set;}
}

If model binding can't find any data from the client to put in the LastName property, it will just set the property to null. I'd prefer that model binding do a little more because the typical first line of code in my method is to check for problems in model binding using the ModelState's IsValid property:

public ActionResult UpdateCustomer(Customer cust)
  if (!ModelState.IsValid)

With model binding's default behavior, IsValid won't be set to false when there's no data for LastName.

I can get that behavior by adding the Required attribute to my Customer class' LastName property. The problem is that Required is also used by Entity Framework in code-first mode to control how the LastName column in the Customer table is declared. That probably isn't a big deal to you (though I worry about it).

Things get messier with the TotalOrders property because, unlike string properties, properties declared as integers aren't nullable. With or without the Required attribute, non-nullable datatypes are set to their default values. This means that if no data comes up from the browser for TotalOrders, it will be set to 0 ... and IsValid still won't be set to false. It's now hard to tell if the customer has no orders or if the data wasn't sent.

I could change the datatype on TotalOrders to a nullable type (that is, int?) and put the Required attribute on it ... but now I'll have to go through TotalOrders' Value property to retrieve its data. It's all getting a little complicated.

When working with non-nullable types, I prefer using the BindRequired attribute instead of Required. BindRequired will cause model binding to set the IsValid property to false if no data comes from the client (and it will do that without affecting how my columns are declared in the database).

This is how I might declare my Customer class to get the IsValid property set when TotalOrders is missing and still have TotalOrders as a nullable column in my database:

public class Customer
{
  public int? id {get; set;}
  public string FirstName {get; set;}
  public string LastName {get; set;}
  [BindRequired]
  public int? TotalOrders {get; set;}
}

If I wanted the FirstName and LastName columns to be nullable as well, I'd use BindRequired on those properties instead of Required.

Posted by Peter Vogel on 09/17/2018 at 9:23 AM

Adding Your Own Files to Your Visual Studio Solution

Despite the file extensions you see in the Add Existing Item dialog box, Visual Studio isn't limited to working with specific kinds of files. If you have some file that you want to include in your project, you can add it in Solution Explorer. If you want to be able to edit it in Visual Studio, you just need to associate its file extension with one of Visual Studio's editors.

To do that, go to Tools | Options | Text Editor | File Extension. Once there, type an extension in the Extension text box in the top left-hand corner, pick an existing editor from the dropdown list to the right of the text box, and click the Add button just a bit further to the right. Now, when you click on a file with that extension, Visual Studio will open it using that editor.

You can also create a new custom editor for Visual Studio based on the Core Editor, using the Visual Studio SDK (though, in the most recent version of the SDK, you'll have to write your editor in C++ because C# and Visual Basic aren't supported any more).

Posted by Peter Vogel on 08/07/2018 at 10:17 AM

Switching Your Xamarin Project to Standard Class Projects

There are lots of differences between using a Standard Class/Portable Class Library (PCL) and Shared projects in a Xamarin solution. However, the most obvious one appears when you open any XAML file in a Shared project: In a Standard Class library you'll get IntelliSense support; in a Shared Project you won't get any IntelliSense support and virtually every element in your XAML file will be flagged as an error (though, fortunately, your solution will still compile).

Unless you have a compelling reason to go with a Shared project (for example: your version of Visual Studio doesn't support Standard Class Library projects), you'll want to use a PCL or a Standard Class Library project ... and a Standard Class Library project is your best choice going forward. In fact, if your version of Visual Studio doesn't support Standard Class Library projects and you want to work with Xamarin, it might be time to upgrade to a newer version of Visual Studio (remembering that, for example, Visual Studio 2017 Community Edition is free).

If you're not sure which kind of project your solution is currently using, first look at the icon beside your common project in Solution Explorer: If it's two overlapping diamonds, you have a Shared project (bad); if it's a simple box with "C#" inside it, then you have a Standard Class Library (good) or a PCL (not so good) project. To distinguish between a PCL and a Standard Class Library project, open the project's properties and see if you have a Library tab on the left. If you do, you have a PCL project (as I said: not so good).

To convert your solution to using a Standard Class Library project, first right-click on your Solution node in Solution Explorer and use Add | New Project | .NET Standard | Class Library (.NET Standard) to add a Standard Class Library project to your solution. Once the project is added, delete any default resources in the project (you won't need the Class1.cs file, for example). Then drag and drop any resources from your existing Shared/PCL project to your new Standard Class Library project. If you have an App.xaml file that marks the start point of your application in your old project, make sure that you drag it to your new project.

Next, right-click on your new project and use Manage NuGet Packages to add the Xamarin.Forms package to your project. You'll need to add any other references or NuGet packages your original project was using.

Now do a rebuild on your new project. If you get some compile-time errors you haven't seen before, open your project's Properties and, on the Application tab, check that the Target framework dropdown list is set to the highest level (as I write this, that's .NET Standard 2.0). If it isn't, set the dropdown to the highest level and try another build. If you still have compile-time problems, then it's too early to move to a .NET Standard Class Library project and you'll have to live with your Shared or PCL project.

Now, the scary part: Right-click on your Shared or PCL project and pick Remove. Remind yourself that the project isn't gone, it's just not part of the solution. If it turns out you need something from it, you can use Add | Existing Item to pick up anything you've forgotten (you can also open the old project in Visual Studio to check any settings you might have missed).

If you don't yet have a XAML file (other than App.xaml) in your new project, right-click on your new project in Solution Explorer and select Add | New Item | Xamarin Forms | Content Page to add one. If you want this to be your start page, make sure this new Page's name matches the name in the App class's constructor in the App.xaml.cs file (you can either give your new XAML file a matching name or change the name in App.xaml.cs).

Finally, in the other projects in your solution, use Add | New Reference to add a project reference to your new Standard Class Library project and do a rebuild of your solution to flag any namespace issues that you have to clean up.

Posted by Peter Vogel on 08/06/2018 at 10:42 AM

Organizing Test Cases

In addition to the TestInitialize and TestMethod attributes that you're used to using when creating automated tests, there's also a TestCategory attribute that you'll find useful as the number of your tests starts to get overwhelming.

Effectively, using TestCategory lets you create a group of tests using any arbitrary system you want. This allows you to create (and run!) groups of tests that you feel are related without having to run every test in a class, a project or a solution. These could be, for example, all the tests in all your test classes that involve your Customer factory or all the tests that use your repository class.

To use TestCategory, you add it as a separate attribute to a test or combine it with your already existing test attributes. These two sets of code are identical, for example, and assign GetAllTest to the Data category:

[TestMethod, TestCategory("Data")]
public void GetAllTest()

[TestMethod]
[TestCategory("Data")]
public void GetAllTest()

You can also assign a test to multiple categories. These examples tie the GetAllTest test to both the Data and the Customers categories:

[TestMethod, TestCategory("Data"), TestCategory("Customers")]
public void GetAllTest()

[TestMethod]
[TestCategory("Data")]
[TestCategory("Customers")]
public void GetAllTest()

You can run tests in any particular category from Test Explorer. First, though, you must make sure that your tests are in List view: If your tests are grouped in any way other than Run | Not Run | Failed, then you're not in List view (List view still groups your tests, it just does it in the default grouping of "by result"). The toggle that switches between List and Hierarchical view is the second button on the Test Explorer toolbar, just to the right of the Run Tests After Build toggle.

Once you're in List view, the Group By toggle (just to the right of the List View toggle) will be enabled. Click the down arrow on the right side of the Group By toggle and you'll get a list of all the ways you can group your tests. To group by category, you want to pick Traits from this list. Not only will this list all the tests you've assigned to a category, it will also list any test to which you haven't assigned a category in a group called No Traits. Right-clicking on a category name will let you run all the tests in that category.

You can also run tests by category using the VsTest.Console or MSTest command-line tools. Those tools also give you an additional ability: You can combine categories with logical operators to either run only those tests that appear in the intersection of the categories you list or run all the tests from all of the categories you list.
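With vstest.console, for example, categories are combined through the /TestCaseFilter option, where | produces the union of categories and & the intersection. Something like:

```shell
# Run every test in either category (union)
vstest.console.exe MyTests.dll /TestCaseFilter:"TestCategory=Data|TestCategory=Customers"

# Run only the tests that are in both categories (intersection)
vstest.console.exe MyTests.dll /TestCaseFilter:"TestCategory=Data&TestCategory=Customers"
```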

Posted by Peter Vogel on 08/01/2018 at 12:36 PM

Use JavaScript Code from One File in Another File with IntelliSense

If you have a JavaScript (*.js) file containing code, it's not unusual for your code to reference code held in another JavaScript file. If you're using more recent versions of Visual Studio, you'll find that the editor knows about all the JavaScript code in your project and will provide some IntelliSense support as you type in your JavaScript code (not as much support as you'd get with TypeScript, of course).

If your version of Visual Studio isn't doing that for you, you can still get that IntelliSense support in your code by adding a reference to that other JavaScript file. A typical reference to another JavaScript file (placed at the top of the file you're entering code into) looks like this:

/// <reference path="Utilities.js" />

Now, as you add JavaScript code to the file containing this reference, you'll get IntelliSense support for any functions and global variables declared in Utilities.js.
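As an example, if Utilities.js held a helper like the hypothetical formatName function below, IntelliSense in the referencing file would offer the function's name and parameter list as you type:

```javascript
// Utilities.js -- a hypothetical helper used to illustrate the reference
function formatName(first, last) {
    return last + ", " + first;
}

// In the file carrying the /// <reference> comment, IntelliSense
// now recognizes formatName and its parameters:
var display = formatName("Peter", "Vogel"); // "Vogel, Peter"
```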

And you don't have to type that reference if you don't want to. Visual Studio will generate that reference for you if you just drag Utilities.js out of Solution Explorer and drop it into the file you're adding code to.

Posted by Peter Vogel on 07/23/2018 at 10:21 AM

Eliminate Code and Add Functionality with Fody Attributes

Fody is such a cool NuGet package that it's a shame it's only been mentioned on this site once and in passing. Fody handles the problem you have all the time: crosscutting concerns. A crosscutting concern is something that happens in many places in your application but not in every place.

The .NET Framework's attributes are probably the most common tool for handling crosscutting concerns. For example, security is a crosscutting concern: Many parts of your application should only be accessed by authorized users ... but not all parts (the login screen, for example, must be accessible to everyone). You can handle that crosscutting concern in ASP.NET by putting an Authorize attribute on those methods that you want to lock unauthorized users out of. Most attributes address issues important to users (security, for example). Most Fody attributes, on the other hand, handle those problems that annoy developers.
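In ASP.NET MVC that looks like the familiar pairing of Authorize on protected actions and AllowAnonymous on the exceptions -- a sketch with hypothetical action names:

```csharp
public class AccountController : Controller
{
    [Authorize]
    public ActionResult Details()  // only authenticated users get here
    {
        return View();
    }

    [AllowAnonymous]
    public ActionResult Login()  // everyone must be able to reach the login page
    {
        return View();
    }
}
```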

For example, the two Fody attributes I'm using the most right now (as part of building a Xamarin application) are NotifyFor (which eliminates the need to write code for the PropertyChanged event in a property) and AlsoNotifyFor (which fires a PropertyChanged event for a related property when a property changes value). All I have to do is put the attribute on my property and Fody takes care of the rest.

But there are dozens of useful Fody attributes, including ones to make your string comparisons caseless, allow you to specify the backing field for an auto-declared property, and check the syntax of your SQL queries during builds. There's also SexyProxy, which I've never needed but its name is so cute that I keep trying to find a use for it.

Posted by Peter Vogel on 07/23/2018 at 10:23 AM
