So you've got this long-running application that gradually takes over all the memory on the computer it's running on. You've got a resource leak somewhere -- but how do you track it down?
Task Manager should be your first stop in determining if your application has a problem with leaking memory. After you've started Task Manager and switched to the Processes tab, go to the View | Select columns menu choice. The dialog box that pops up gives you a list of columns you can add to the display, seven of which have labels beginning with "Memory." The first five (all but the two containing the word "Paged") can provide useful insight into what your application is doing with memory. If you see, for instance, a process that has its Peak Working Set constantly increasing, it's a clue that the process is in trouble.
To get a handle on where your problems really are, there's an even better tool. To run it, right-click on Computer in the Start menu and select Manage. Drill down in the TreeView on the left to (in Vista) Reliability and Performance Monitor or (in Windows 7) Performance Monitor and open it to start tracking key data about your computer.
As with Task Manager, after bringing up the monitor, you'll want to add the counters that will help you find your problem. To see what's available (and there are a lot of counters available), right-click on the monitor's graph and select Add counters. At the top of the list of counters are some .NET CLR counters that can be very helpful.
Under .NET CLR Memory, you'll find counters that track the memory used by Generation 0 objects (objects that haven't been around very long), Generation 1 objects (objects that survived one garbage collection because they were in use), and Generation 2 objects (objects that have survived more than one garbage collection).
Most applications have many Generation 0 or Generation 1 objects and fewer Generation 2 objects (most .NET objects have short lifespans). If you have a large amount of memory in the Generation 2 memory pool, it may indicate that either (a) objects that should be short-lived are hanging around longer than they should or (b) more objects are being created than you expected. If the counter showing induced garbage collections is high, I'd look at whatever process is creating objects first. Garbage collection is triggered when instantiating a new object causes the memory budgeted for a generation to be exceeded, and the induced garbage collections counter can tell you if that's happening too often.
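If you want to sanity-check what the monitor's generation counters are telling you from inside the application itself, the GC class exposes some of the same numbers. Here's a minimal sketch (a console program I made up for illustration, not part of the monitor):

```csharp
using System;

class GenerationCheck
{
    static void Main()
    {
        object o = new object();
        // A freshly allocated object starts life in Generation 0
        Console.WriteLine(GC.GetGeneration(o));

        // An object that survives a collection gets promoted
        GC.Collect();
        Console.WriteLine(GC.GetGeneration(o));

        // CollectionCount reports how many times each generation
        // has been collected so far in this process
        Console.WriteLine(GC.CollectionCount(0));
        Console.WriteLine(GC.CollectionCount(2));
    }
}
```

Watching GC.CollectionCount(2) climb while your app runs tells you the same story as the Generation 2 counters in the monitor: something is keeping objects alive long enough to be promoted.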
Another set of counters tracks the Large Object Heap (LOH), which holds objects that need 85,000 bytes of storage or more. If you know that you shouldn't have any objects in the LOH, or if the amount of storage seems out of proportion to the number of large objects you expected, you can focus on those objects in your application that require a significant amount of storage.
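You can also confirm from code whether a particular object has landed on the LOH: large objects are collected with Generation 2, and GC.GetGeneration reports them that way. A quick sketch (array sizes chosen by me to straddle the threshold):

```csharp
using System;

class LohCheck
{
    static void Main()
    {
        var small = new byte[1000];
        var large = new byte[90000];  // comfortably over the LOH threshold

        // Small objects start in Generation 0; LOH objects report
        // Generation 2 from the moment they're allocated
        Console.WriteLine(GC.GetGeneration(small));  // typically 0
        Console.WriteLine(GC.GetGeneration(large));  // typically 2
    }
}
```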
There's more. Expanding the .NET CLR Data section, for instance, reveals a set of counters that let you track how your connection pools are being used. Good debugging is driven by real data, and the monitor is a good place to get that data.
Posted by Peter Vogel on 08/04/2011
Somehow, in the course of our June Toolapalooza feature (17 Free Tools for Visual Studio), I missed a bunch of freebies from Telerik. Telerik has made well over a dozen free tools available -- I lost count -- and they all look pretty cool. It's worth noting that these are not limited-period trials or significantly crippled software. If you're willing to forgo support, for instance, you can apparently get Telerik's ASP.NET MVC extensions for free.
I must have been in an ASP.NET MVC frame of mind when I was looking at the page because the tool that caught my eye is the Razor Convertor. This is a command line tool that, when passed a file spec (e.g. *.aspx) and (optionally) a path to a folder to hold the converted cshtml files, rewrites your Views to the new Razor syntax.
This is a big deal. Whenever you move to a new technology (like the Razor engine for creating Views in ASP.NET MVC) there's always the problem of what do you do with the old $#*! (sorry: "legacy code"). You could upgrade that old code, but it would take time away from other projects that will deliver new functionality and benefits. Besides, while the "legacy" code is in an old technology, it does have that whole "working" thing going for it.
But if you hang onto that old technology, you're effectively making your toolkit larger. Ideally, you want a toolkit that's as small (and as homogenous) as is compatible with meeting the demands of your organization. So it would be nice to clear out any old technology that you don't want to use anymore.
Razor Convertor lets you do that when you upgrade to ASP.NET MVC 3 and switch over to Razor (and you will want to switch over). The tool makes it reasonable for you to convert all the Views in an application that you're going to be working on for some reason other than bringing it up to date. Use the tool to convert the application's Views over to Razor, add any new pages using the Razor engine, make the rest of your modifications and you've got a site upgraded to the latest and greatest!
Telerik doesn't claim that the tool can convert all of your MVC Views with 100 percent reliability (it doesn't deal with Master Page Views, for instance). But, based on what I've seen so far, it will get you very, very close. My biggest issue, for instance, was going back in and converting any expressions in my code.
Now: After you bring a whole application up to date, do you tell your boss that you used a tool to do the conversion? Or just take the credit?
Posted by Peter Vogel on 07/26/2011
While DataSets hold a lot of data, sometimes you want to keep track of information about the DataSet itself. You could store information about the DataSet in some variable (e.g. the last time you checked the DataSet for changes), but it makes sense to me to store that information with the DataSet itself through its extended properties.
To add an extended property to a DataSet, you go to the DataSet's ExtendedProperties collection and add a name and a value. This example adds a property called LastChecked to the ExtendedProperties collection with the current date and time:
Dim ds As New DataSet
ds.ExtendedProperties.Add("LastChecked", DateTime.Now)
To retrieve your extended property, just pass the name to the ExtendedProperties collection's indexer. When you retrieve an item from this collection it comes back as an Object, so you'll need to convert the value when you retrieve it:
Dim dtDate As DateTime
Dim value As Object = ds.ExtendedProperties("LastChecked")
If value Is Nothing Then
    dtDate = DateTime.MinValue
Else
    dtDate = Convert.ToDateTime(value)
End If
If dtDate < DateTime.Now AndAlso
   ds.HasChanges() Then
    '... process DataSet
End If
Posted by Peter Vogel on 07/19/2011
One of the best things about .NET is that virtually everything you do in Visual Studio generates some text in a file. This means that, when you need to make a change to several places in your code, you can often make that change with a global Find and Replace. However, the typical developer's scenario is to (a) do a global find-and-replace and then (b) rebuild the application to find out what got broken -- because, sadly, a global find-and-replace often changes too much.
The Find In Files option (available from the Find and Replace choice on the Edit menu or from the dropdown lists at the top of the Find dialog) can be a big help here because it lets you specify what kind of files to search. Find in Files may not prevent you from changing things you didn't intend to, but it can limit the damage. For instance, if you know that you want to change the name of a variable in your code, you can limit your search to files matching *.aspx.cs (or *.aspx.vb). That will, at least, prevent you from changing some text tucked away in a file generated by a visual designer. The Files option even comes with a dropdown list of some typical file groups.
While you're in the Find and Replace dialog, it's worth knowing that this dialog is also the best reason to learn regular expressions. For instance, I recently got saddled with this text appearing about 200 times in roughly 20 .aspx files spread across about two dozen projects:
<div id="somevalue" class="somevalue">
Since my CSS selectors were tied to the element's id attribute, the class value was completely redundant. Having time on my hands, I wanted to clean up the code and convert all of the div tags to this:
<div id="somevalue">
The problem was that while the id and class in any particular tag had the same value, each div tag had its own special value in the id and class attributes. Normally this would have been too much work to bother with, but using the Find and Replace dialog with a regular expression made it trivially easy. Here are the three settings I used:
Find in files: id="{.*}" class="\1"
Replace in files: id="\1"
Files: *.aspx
A button click, a few seconds wait, and all of the elements in the project were cleaner.
You might think that I had to repeat this process in every project. But one of the best parts of developing in ASP.NET is that you can open any folder as a Web site (just use File | Open Website). In a Web site all the files in the folder (and its subfolders) are part of the project. So by opening the folder that contained all of the projects that I wanted to change I was able to run the change across all 20 projects.
I'd love to take credit for this tip, but all credit goes to my very clever friend, Nigel Armstrong. Got a tip you'd like to share? Email me at phvogel@1105media.com.
Posted by Peter Vogel on 07/12/2011
One of the most frustrating error messages that you can get when debugging your application is "File not found" when loading an assembly (or just instantiating a new class). This message means that, for some reason, .NET couldn't find the DLL with the class you needed. If the reason for the problem isn't obvious from the information provided (and it usually isn't) there is a tool that will give you some more insight: the Assembly Binding Log Viewer (fuslogvw.exe).
As its name implies, fuslogvw gives you access to a log of binding activities. That logging is turned off by default, so you must first enable it. The easiest way to do that is to run fuslogvw, click on the Settings button and select the level of logging you want before closing the viewer. You're probably only interested in those cases where loading an assembly is failing so, to have just those errors logged, select the "Log bind failures to disk" option.
You can then run your program and click on fuslogvw's refresh button to review the log entries. The viewer shows the information in three columns: Application, Description, and Date/Time. The description won't tell you much more than you got from running your application, but if you double-click on the log entry you'll get a ton more data to help you figure out what went wrong.
You'll see, for instance, all the paths that .NET looked through trying to find the assembly. You'll also see whether the loading process was affected by entries in your application's configuration file or whether all the parameters were set from the machine configuration file. The log viewer may not hand you the answer to your load failure but it will probably tell you something about what's going wrong that you didn't know before.
You may not be able to start fuslogvw from the Visual Studio command prompt -- the file moves around. On one of my computers, picked at random, I found it in the C:\Program Files\Microsoft SDKs\Windows\v7.0A\bin folder... but your mileage may vary. Wherever it is, however, the utility will wait patiently until you need it.
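If you'd rather flip the logging switch without hunting down the viewer at all, the Settings dialog appears to just write values under the Fusion registry key. Here's a hypothetical sketch of doing the same thing from code -- the key and value names are my assumptions, so verify them against what the dialog actually writes on your machine, and note that writing to HKLM requires admin rights:

```csharp
using Microsoft.Win32;

class EnableFusionLog
{
    static void Main()
    {
        // Assumed to match the "Log bind failures to disk" option;
        // must run elevated to write under HKEY_LOCAL_MACHINE
        Registry.SetValue(@"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Fusion",
                          "LogFailures", 1, RegistryValueKind.DWord);
    }
}
```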
Posted by Peter Vogel on 07/05/2011
By default, Visual Studio supports XHTML 1.0 Transitional, which dates from 2000. HTML has changed since then and you may want to access some newer version. In fact, with all the buzz HTML5 is getting you might be interested in trying it out, which means you'd like to have Visual Studio stop whining at you when you use HTML5 elements and attributes. IntelliSense support would be nice also.
Service Pack 1 for Visual Studio 2010 helps by adding some IntelliSense and validation support for HTML5 to the versions of HTML that Visual Studio supports. But the default remains XHTML 1.0 Transitional from 2000. You can change that.
If you keep the HTML Source Editing menu bar visible, you can change the HTML version easily from a dropdown list on the menu. If you don't, changing the HTML version for your project is a little more awkward. First, go to the Tools | Options menu choice. Once there, from the left-hand treeview, expand the Text Editor node and then the HTML node. Finally, select the Validation node. In the dropdown list on the right, pick the version of HTML you want to use (if you don't see HTML5, then you haven't applied SP1 yet). Maybe it would be easier to just right-click on the menu bar in the editor and add the HTML Source Editing menu -- you can always turn it off when you've made the change.
SP1 support for HTML5 is obviously a stopgap effort. For instance, while the new CSS selectors and color choices won't raise validation errors, you won't get any IntelliSense support either. There are a few bugs in the HTML5 support and no support for some of the new standards like WAI-ARIA (which supports screen readers) and the Microdata vocabularies (which allow you to add metadata to your markup). IntelliSense for JavaScript doesn't appear to have been touched at all.
As of June 19, you can get support for those features (including some of the Microdata vocabularies) by applying the Web Standards Update for Visual Studio SP1 (available here). You must have applied SP1 before applying this update. You can get the update from the Visual Studio Gallery using Visual Studio's Extension Manager on the Tools menu.
It says something about the power of Visual Studio's extensibility that this update was created by the Visual Web Developer Team over their lunch hours.
Of course, while your editor may support these elements, you'll still need to determine whether your users' browser will support them.
Posted by Peter Vogel on 06/28/2011
This tip is for C# developers only, unfortunately -- but it's the easiest way in the world to create an iterator. An iterator is any method that returns the "next item" in a series. The issue with an iterator is that you have to return a value when called and then pick up at the "next item" when called again.
The simplest iterator in the world is one that just counts from zero on up. In this example, for instance, x would first be set to 0 and then 1 and then 2 and then... you get the picture:
int res;
foreach (int i in Counter())
{
res = i;
}
Implementing that Counter method is easy if you use yield: Just insert the word "yield" at the start of your return statement to return the value and then pause your method at that statement until the next time it's called. When your method is called, your code will continue on from the statement following the yield statement.
The only wrinkle is that the yield statement must appear in a method whose return type is an iterator interface (e.g. IEnumerable). That means, to use the yield statement in the easiest way possible (and, when you're talking about the yield statement, you're aiming for "easiest way possible"), you need to declare your method's return type as IEnumerable and call your method inside a foreach block. A version of the Counter method that would go from 0 to 2 would look like this:
public System.Collections.IEnumerable Counter()
{
yield return 0;
yield return 1;
yield return 2;
}
Each time the Counter method is called, the yield return statement will return the value and then wait to be called again before it will go on to the next statement.
A more complicated version would count up from 0 to the largest possible value that an int supports:
public System.Collections.IEnumerable CounterInfinite()
{
int i = 0;
while (i < int.MaxValue)
{
yield return i;
i += 1;
}
}
But the principle remains the same: Whenever your code is called, "yield return" something and then, in the next line of code, go on to get the "next one." If, in a Customers object, you wanted to create a property called Orders that lets a developer loop through all of the Orders for the customer, you could use the yield keyword. In the property's getter, you would retrieve a DataSet and loop through the rows, returning an Order object created from the row at each yield return.
Here's similar code using LINQ to keep it short. The yield return inside the foreach loop returns an Order object and then waits to be called again before going through the loop to get the next Order:
public IEnumerable<Order> Orders
{
get
{
northwndEntities dc = new northwndEntities();
var res = from o in dc.Orders
where o.CustomerID == this.CustomerId
select o;
foreach (Order o in res)
{
yield return o;
}
}
}
A developer can now process your collection with code like this:
foreach (Order o in cust.Orders)
{
//do something with an order
}
Posted by Peter Vogel on 06/20/2011
In our June Toolapalooza issue (17 Free Tools for Visual Studio), we reviewed some of the best and most useful free tools for Visual Studio. One of those tools was smtp4dev, which allows you to quickly check the results of any email that you send from your application -- very useful in testing. But, as I noted in the article, I found smtp4dev to be... well, quirky, let's say.
One of our readers came to my rescue and recommended Papercut as a great alternative. And he was right: I think I’m in love again (I create a lot of e-mail applications).
Downloading Papercut gives you a zip file containing a DLL, a config file, and the Papercut executable. Just drop these into any folder (though something in C:\Program Files would be most appropriate) and double click on the executable to run it. An msi would simplify installation, but not by much.
While it’s running, Papercut automatically picks up e-mail sent to the standard SMTP port (25) on any IP address: You just send mail from your application and switch to Papercut to review it. The user interface is clean and easy to use. An Options button lets you change the port being monitored and -- more usefully -- have Papercut minimize when it starts. When Papercut does minimize, it shrinks to the tray where you can re-open it to review e-mail by double-clicking on it. The tray icon displays a message whenever Papercut picks up some new mail so you can use this to test background processes as easily as foreground applications. Papercut gives you a full view of your mail: You can look at the full transmission (including headers) or just the body of the email. Papercut also keeps a log of all the activity it performs so, if something goes horribly wrong, you can trace back to see what happened.
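To generate a test message for Papercut to catch, any code that talks to port 25 on localhost will do. Here's a minimal sketch using System.Net.Mail (the addresses are placeholders I made up):

```csharp
using System.Net.Mail;

class SendTestMail
{
    static void Main()
    {
        // Papercut listens on port 25 by default, so just point
        // SmtpClient at localhost and send anything
        var client = new SmtpClient("localhost", 25);
        client.Send("test@example.com", "someone@example.com",
                    "Test message",
                    "If you can read this in Papercut, it works.");
    }
}
```

Run this, switch to Papercut, and the message (headers and all) should be waiting for you.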
How much do I like Papercut? Let me put it this way: I used the External Tools dialog on Visual Studio’s Tools menu to add Papercut to my menus.
Posted by Peter Vogel on 06/14/2011
Sometimes you get null or Nothing passed to parameters for methods in your application. Sometimes that's not OK, but sometimes it is -- especially if you're accepting data from a database. If you're accepting a value parameter (like an integer), you may have been declaring your parameters as object so that you accept nulls. There is a better solution: you can add a question mark to your type declaration to indicate that it's OK to pass a null/Nothing value to that parameter:
Sub AcceptingParms(parmNullsOK As Integer?)
    If parmNullsOK IsNot Nothing Then
        'a value was passed -- safe to use parmNullsOK.Value
    Else
        'the caller passed Nothing
    End If
End Sub
You're not limited to using the question mark in parameters or with the Integer/int datatype. You can use it on any value type where you're willing to accept a null/Nothing value. Do be aware, though, that these nullable types are a different datatype from their non-nullable cousins (i.e. int? is not int and Integer? is not Integer). So if you want to use a nullable value with a non-nullable value, you'll have to do a conversion:
Dim res As Integer
Dim num? As Integer = 2
'CInt (or num.Value) converts the nullable value back to a plain Integer
res = CInt(num) + CInt(num)
What you're actually doing when you add the question mark to the end of your datatype is creating an instance of the System.Nullable(Of T) structure. To put it another way, int? is the same as System.Nullable<int>. The question mark is just shorthand for the longer declaration.
Posted by Peter Vogel on 06/07/2011
In ASP.NET, by using View controls inside a MultiView, you can, effectively, have several different pages inside a single ASPX file. Here's how to have those "additional pages" appear on your site's menus so that when a user clicks on a menu choice, the user not only goes to the right page but also sees the page displaying the right View.
The trick here is to take advantage of a feature of the sitemap. In order to support the SiteMapPath control, each siteMapNode in the sitemap must have a unique URL (or none at all). However, that "uniqueness" includes whatever querystring you choose to include in the siteMapNode's url attribute. There's nothing stopping you from having these two entries in your sitemap:
<siteMapNode url="~/Login.aspx?view=0"
title="Show View 0"
description="Display View 0 in a MultiView"/>
<siteMapNode url="~/Login.aspx?view=1"
title="Show View 1"
description="Display View 1 in a MultiView"/>
While the url attributes in these siteMapNodes point to the same page, they have different querystrings. If the page (Login.aspx, in this example) has a MultiView on it, then this code in the Page_Load event will (when the user first sees the page) cause the View control specified in the querystring to be displayed:
if (!this.IsPostBack)
{
string pos = this.Request.QueryString["view"];
if (pos != null)
{
this.MultiView1.ActiveViewIndex = int.Parse(pos);
}
}
Posted by Peter Vogel on 05/31/2011
Probably the only developers who care about this are the .NET developers doing Office development who are upgrading projects to .NET 4. But if you do work with COM objects from code and are ever planning to move your projects up from .NET 3/3.5, this is worth knowing.
PIAs (Primary Interop Assemblies) provide the information that supports interoperability between .NET and COM. PIAs for complex COM objects (think: Microsoft Word) are huge and kept in separate, monolithic assemblies. That means that when you use any member of your COM object, the whole PIA is loaded into memory (not to mention that the PIA is one more thing to be distributed).
Upgrading to .NET 4 can give you some significant benefits through the "NoPIA" feature (also called Type Embedding, but "NoPIA" just sounds cooler). With NoPIA, the necessary information to support COM interoperability is embedded into your compiled .NET code. Not only does that reduce the number of components to be distributed, but only the parts of the PIA actually required by your application are incorporated into your code. If you use only a small part of a big COM object, you could see significant size reductions in your compiled code.
The option is automatically turned on for new Visual Studio 2010 projects, but if you've imported a project from an earlier version of Visual Studio, you'll need to set it yourself. In your References list, select the reference to your COM object. Then, in the Properties window, find the Embed Interop Types property and set it to True.
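Setting that property just writes a flag on the reference in your project file. Roughly -- the assembly name here is an example, and the exact element shape can vary between interop-assembly and COM references -- the entry ends up looking something like this:

```xml
<Reference Include="Microsoft.Office.Interop.Word">
  <EmbedInteropTypes>True</EmbedInteropTypes>
</Reference>
```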
Posted by Peter Vogel on 05/24/2011
Prior to .NET Framework 4, threading used a locking mechanism that added to the thread overhead. In .NET 4 that locking mechanism is eliminated. On top of that, threading integrates better with garbage collection, reducing the time required to clean up after a thread finishes running.
Putting those two changes together means that just moving your multi-threaded application to .NET 4 can improve its performance. In addition, there was a bug in prior versions of .NET around aborting threads that issued locks. If you aborted a thread at the wrong time (when a no-operation instruction that was part of the generated IL code was being processed), the lock might never be released. If you've been trying to figure out why your multi-threaded application's performance degrades over time, it's a sign that the upgrade to .NET 4 might not be an optional activity.
If you've got time after you upgrade, you should consider rewriting your code to use the Task Parallel Library (TPL). In previous versions of .NET, while you could queue up threads, you couldn't provide much information to .NET about which threads were most important. In the absence of that information, .NET treated all queued threads equally.
The TPL allows you to provide more information to .NET's scheduler to improve how your threads are handled. All by itself, the TPL gives you the ability to sequence tasks by having one task kick off as soon as another completes. See my VSM feature article on PLINQ and the TPL for more information.
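That task sequencing is as simple as it sounds. A sketch, assuming .NET 4's Task.Factory API:

```csharp
using System;
using System.Threading.Tasks;

class ContinuationDemo
{
    static void Main()
    {
        // Start the first task; the continuation is scheduled
        // automatically as soon as the first task completes
        Task<int> first = Task.Factory.StartNew(() => 21);
        Task<int> second = first.ContinueWith(t => t.Result * 2);

        // Reading Result blocks until the whole chain has finished
        Console.WriteLine(second.Result);  // 42
    }
}
```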
Posted by Peter Vogel on 05/17/2011