Published: Wednesday, April 6, 2005

Unit Testing the Data Access Layer

By Tim Stall


Unit testing is becoming very popular among .NET projects, but unit testing the Data Access Layer (DAL) still remains a common obstacle. The problem stems from the very nature of databases: they are slow, shared, persistent, external, and relational. This conflicts with the nature of unit tests, which should be quick, independent, and repeatable. Looking at the criteria for good unit tests (from Pragmatic Unit Testing in C# with NUnit, by Andrew Hunt and David Thomas), it's easy to see why. The list below pairs each goal of unit testing with the problem that databases pose for it.

  • Automatic – Testing the database first requires that the database has been set up; therefore we need ways to automatically create the schema and adjust the base data.
  • Thorough – Thorough testing should cover objects like stored procedures and functions. However, unlike business entities, which exist in the code and are directly referenced with IntelliSense support, data objects are only referenced indirectly through data-access code.
  • Repeatable – A database is designed for persistent storage, yet unit tests should run without persistent side-effects. Therefore you need to ensure that data from one test doesn't interfere with data from another test.
  • Independent – Unlike business entities, databases are not instantiated; there is likely just one database shared by the entire application. Furthermore, even a simple stored procedure can be intertwined with other objects – having custom functions or lookups from other tables. The database is therefore a common, interdependent set of objects, making it harder to write independent tests.
  • Professional – Professional-quality tests must perform well, and databases are slow. This is especially problematic when repeatedly running hundreds of database tests.

While mocking out the data layer can be useful when testing business objects, there is still tremendous value in having the real Data Access Layer thoroughly tested. For example, stored procedures can contain complicated search logic, or database functions can perform logic that must be done at the database level.

This article will show how to resolve these problems so that you can reliably unit test your Data Access Layer. First we need to establish two prerequisites:

  1. Run both large and small SQL scripts, and
  2. Ensure that a base-line schema is set up before running any of the tests.

Using these techniques, we can test database objects like stored procedures, functions, triggers, and the DAL that uses them. Finally, we'll explain how to add diagnostic tests in NUnit to ensure that a developer's environment is set up correctly for unit tests, and how to handle configuration values like changing the database name or user connection information.

(This article assumes that the reader is familiar with unit testing and already has a data-access utility class to run stored procedures and execute SQL statements, such as Microsoft's Data Access Application Block (DAAB). For more information on unit testing refer to Muthukamar's article Test Driven Development Using NUnit in C#; for information on Microsoft's DAAB, see Examining the Data Access Application Block or Working with the Enterprise Library's Data Access Application Block.)



In a nutshell, testing the database can be broken down into the following three steps:
  1. Set the state of the database to a "before" condition (this may include changes to both the schema and data)
  2. Execute a data object to be tested
  3. Check that the actual "after" state matches the expected state.
For example, you may have a stored procedure EMPLOYEE_INSERT that inserts an employee record and returns the newly generated primary key for use in the application object's ID field. In order to adequately test this functionality, you'll need a data-access utility class that can, at a minimum, execute stored procedures and SQL statements against the database. This is standard, and already covered by many other articles.
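As a sketch, the three steps above might look like the following NUnit test for EMPLOYEE_INSERT. (This assumes the DataHelper utility class discussed in this article; the EmployeeDal.Insert() wrapper and the ExecuteScalar() helper are illustrative names, not part of the download.)

```csharp
using NUnit.Framework;

[TestFixture]
public class EmployeeInsertTests
{
   [SetUp]
   public void SetUp()
   {
      //1 - set the "before" state: remove any leftover rows so the test is repeatable
      DataHelper.ExecuteSql("Delete from Employee where name = 'Fred'");
   }

   [Test]
   public void InsertReturnsNewPrimaryKey()
   {
      //2 - execute the data object under test (calls the EMPLOYEE_INSERT stored procedure)
      int intNewId = EmployeeDal.Insert("Fred");

      //3 - check that the actual "after" state matches the expected state
      Assert.IsTrue(intNewId > 0, "Insert should return the generated key");
      object objCount = DataHelper.ExecuteScalar(
         "Select count(*) from Employee where id = " + intNewId);
      Assert.AreEqual(1, objCount);
   }
}
```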

The other prerequisites are the ability to run SQL scripts and to reset the database schema. Running scripts allows us both to set the "before" state and to verify the "after" state. Resetting the database schema ensures that all the data objects we'll call actually exist.

This article includes a downloadable code sample. The sample has three projects:

  • ApplicationDAL – A sample DAL for an application. This also includes a DataHelper utility class with a method ExecuteSql to run SQL statements. Normally the DataHelper would be refactored to another assembly to encourage reuse, but we put it in this assembly for simplicity.
  • UnitTests – A console application that contains the test-driving-code used by NUnit. This also contains its own App.config.
  • TestUtilities – A class library that contains the prerequisite methods discussed above. This is used exclusively by the UnitTests project.
You can install the sample by simply unzipping it, setting the database info in the App.Config, and creating the database user from your connection string in App.Config. Note that the DiagnoseEnvironment tests are designed to fail if the database does not exist (which initially it does not), therefore you have two options for installing the database:
  1. Just run the tests twice; the DiagnoseEnvironment tests will fail the first time and pass from then on, or
  2. If you want all tests to pass the first time, manually run the create script.
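The App.config settings mentioned above might look something like this (the key names here are illustrative – check the download for the exact names it uses):

```xml
<configuration>
  <appSettings>
    <!-- admin connection info used to build the OSQL command line -->
    <add key="DBServer" value="localhost" />
    <add key="DBUser" value="sa" />
    <add key="DBPassword" value="pw1" />
    <!-- root directory prepended to the relative script paths passed to RunScript -->
    <add key="UnitTestScriptRoot" value="C:\MyProject\SqlScripts\" />
  </appSettings>
</configuration>
```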

Prerequisite: Run SQL Scripts

There are two types of SQL scripts that we'll need to call: short CRUD (Create, Read, Update, Delete) statements like Select count(*) from Employee, and large batch scripts. Batch scripts could contain hundreds of short CRUD statements, or actual object-creation statements, and are used to set up the schema and data to a "before" state.

The short CRUD statements are easy to execute – we can just run those with DataHelper like so:

DataHelper.ExecuteSql("Insert into Employee(name) values ('Fred')");

While this is great for simple DML (Data Manipulation Language), it has some shortcomings:

  • The SQL is hard-coded instead of being in an external text file, making it hard to maintain and update.
  • This approach can't handle batch separators like GO, which are used by many standard SQL tools (such as the Create Scripts of Enterprise Manager).
  • We can't just copy and run the SQL script in a Database tool because the SQL is intertwined with non-SQL code, like the class and method call DataHelper.ExecuteSql(...).
Essentially we want a utility method that lets us run any script that could be run from Query Analyzer or Enterprise Manager. Fortunately for us, there is a command-line tool, OSQL, that can run such scripts. Conceptually we want to mimic the process from the figure below. We call a RunScript() method from our TestUtilities library, passing it the relative script file path (like Folder1\MyScript1.sql). That method gets the root script directory and database connection info from the App.config file, assembles the OSQL command line, and then runs the physical script file with OSQL in a hidden console window.

To implement this, let's start at the bottom. Running a script with OSQL requires, at minimum, the server, username, password, and absolute script file path. For example, to run a batch script with OSQL where the username is sa, the password pw1, and the database server localhost, we'd use the following command-line statement to execute the SQL statements in the file C:\temp\scrip1.sql:

osql -U sa -P pw1 -S localhost -i "C:\temp\scrip1.sql"

When you examine the code available at the end of this article you'll find that these connection parameters are stored in the UnitTests project's App.config. The method GetSqlAdminInfo() retrieves these settings, storing them in a SqlAdmin structure. This structure is then handed off to CreateOsqlArguments() to create the OSQL arguments for the command line.
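A minimal sketch of what CreateOsqlArguments() might look like follows; the SqlAdmin structure and its field names here are assumptions based on the description above, not the exact code from the download.

```csharp
using System;

//holds the admin connection settings read from App.config (field names assumed)
public struct SqlAdmin
{
   public string Server;
   public string User;
   public string Password;
}

public class OsqlHelper
{
   //assembles the OSQL command-line arguments shown earlier:
   //   -U user -P password -S server -i "script path"
   public static string CreateOsqlArguments(SqlAdmin sa, string strFullScriptPath)
   {
      return String.Format("-U {0} -P {1} -S {2} -i \"{3}\"",
         sa.User, sa.Password, sa.Server, strFullScriptPath);
   }
}
```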

The script file path is the only remaining parameter. While OSQL needs the absolute path, it is more convenient for testing purposes that our RunScript() method use the relative path. Therefore we can have a public and private RunScript() method. The private method takes the absolute path while the public one takes in just the relative path, combining it with the root directory (stored in the App.Config) to create the full absolute path. It can then send this full path to the private method, as the code below illustrates.

public static void RunScript (string strRelativeScriptPath)
{
   //get the root script directory from App.config
   //(the key name here is assumed; see the download for the exact name)
   string strUTRoot = System.Configuration.
      ConfigurationSettings.AppSettings["UnitTestScriptRoot"];
   string strFullScriptPath = strUTRoot + strRelativeScriptPath;
   _RunScript(strFullScriptPath);
}

private static void _RunScript (string strFullScriptPath)
{
   //1 - get OSQL command args
   SqlAdmin sa = GetSqlAdminInfo();
   string strArgs = CreateOsqlArguments(sa, strFullScriptPath);

   //2 - call OSQL in a hidden console window and wait for it to exit
   RunHiddenConsole("osql.exe", strArgs);
}
The RunHiddenConsole() method wraps System.Diagnostics.Process() to run OSQL and wait for it to exit. The implementation behind this method is outside the scope of this article, but is discussed more in my blog entry Using System.Diagnostics to Run External Processes.
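For reference, a RunHiddenConsole() along these lines could be written as follows – a sketch of the idea, not the exact implementation from the download or the blog entry:

```csharp
using System.Diagnostics;

public class ProcessHelper
{
   //runs a console tool (e.g. osql.exe) with no visible window and
   //blocks until it finishes, returning the tool's exit code
   public static int RunHiddenConsole(string strExe, string strArgs)
   {
      ProcessStartInfo psi = new ProcessStartInfo(strExe, strArgs);
      psi.CreateNoWindow = true;    //hide the console window
      psi.UseShellExecute = false;  //start the exe directly, not via the shell
      using (Process p = Process.Start(psi))
      {
         p.WaitForExit();
         return p.ExitCode;
      }
   }
}
```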

Now that we've seen how to run an arbitrary set of SQL commands, we're ready to examine how to use this functionality to reset the database schema to an acceptable "before" state. We'll tackle this in Part 2.

  • Read Part 2!
