Wednesday, November 9, 2011

Getting your App Continuously Tested

Being able to identify issues quickly, with an automated system that tells us whether or not the application is broken, sounds like a must-have practice for software development companies nowadays. Nevertheless, we still face many challenges, especially on large and complex systems.

So where do we start?

In my opinion, the first step in getting automated is having a CI (Continuous Integration) system in place and making sure the builds cover the major branches.

The CI system needs to be able to deploy and run the application in isolation per branch, for example using different virtual machines / databases. This way we can make sure we have a known state for all resources and that we are able to roll back to that state after the tests run. Having the hardware/software resources needed for this is key to success.

In many cases, we may need to change the way the application is built and deployed, using tools and creating scripts that run steps automatically in CI.

Being able to run tests automatically in CI

Select tools that allow you to run automated tests unattended and get test result reports in CI. This way we make sure all the tests are executed after every code change, and that we are also able to run many tests on different environments/configurations.

Creating the tests

Get the whole team involved and make the creation of different levels of automated tests a part of development activities. Consider that the team may need to improve their code design skills, which lead to testable code. Most of us have heard about high cohesion and low coupling for a long time; unfortunately, it is common not to see these applied.

Automated tests general guidelines

  1. The test should communicate intent: it should be clear and simple what the test verifies and how the functionality is used by the application.
  2. The test must have an assert.
  3. The test must pass and fail reliably. The test should not have code branches, i.e. if/else statements, that keep it from giving a reliable pass/fail.
  4. If for some reason the test has code branches, every branch must have an assert.
  5. Keep tests independent: as the suite grows, running the tests sequentially may become impractical, so we need to make sure we can run tests in parallel and get quick feedback.
  6. There must be a way to run unit, integration, and end-to-end tests separately. The distinction between these must be clearly understood.
  7. Unit tests must run fast.
  8. Do not comment out tests when they start failing; fix them.

This list can grow very long, but at least it is a good start =)
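As a sketch of guidelines 1 through 3, here is what a small xUnit.Net test could look like; the Order class and its members are hypothetical, included only to keep the example self-contained:

```csharp
using Xunit;

// Minimal hypothetical class under test, included so the example is self-contained.
public class Order
{
    decimal total;
    public void AddItem(decimal price, int quantity) { total += price * quantity; }
    public decimal TotalWithTax(decimal taxRate) { return total * (1 + taxRate); }
}

public class OrderTotalTest
{
    // Guideline 1: the name and body communicate intent.
    // Guideline 2: the test has an assert.
    // Guideline 3: no if/else branches, so it passes or fails reliably.
    [Fact]
    public void TotalIncludesTaxForSingleItem()
    {
        var order = new Order();
        order.AddItem(price: 100m, quantity: 1);

        Assert.Equal(110m, order.TotalWithTax(taxRate: 0.10m));
    }
}
```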

Hope this summary helps you and your team get automated.

Wednesday, November 2, 2011

Visual Studio Debugging and Remote Debugging

This time I would like to share some notes gathered from different sources about debugging: how Visual Studio handles the different builds (Debug and Release) and how we can debug code that has been deployed to a remote machine.

Debug, Release Build

As most of you know, Visual Studio projects have some predefined build configurations: Debug and Release. Using Debug, the program is compiled with full symbolic debug information and no optimizations.

With Release, the program is compiled using the Optimize code and pdb-only options. The pdb-only option does not generate the DebuggableAttribute that tells the JIT compiler debug information is available, but it does generate a .pdb (program database) file, allowing you to view information such as source file names and line numbers in the application's stack trace.

Optimized code is harder to debug, since the compiler repositions and reorganizes instructions to produce more efficient compiled code, so the generated instructions may not correspond directly to the source code. Some optimizations are always performed by the compiler and others only when the Optimize code option is set; optimizations include things like constant propagation and dead code elimination. .NET runs optimizations in two steps: one by the compiler when generating the IL code, and another at run time when transforming IL into machine code; most optimizations are left to the JIT compiler.

From what I have reviewed, optimization is separate from PDB generation, so the pdb-only option shouldn't affect performance in most scenarios, and it is recommended to produce PDB files even if you don't want to ship them with the executable.
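If you want to check programmatically which of these flags an assembly was compiled with, you can reflect over its DebuggableAttribute; here is a small sketch (the class name is mine):

```csharp
using System;
using System.Diagnostics;
using System.Reflection;

class DebugFlagsInspector
{
    static void Main()
    {
        // Inspect the DebuggableAttribute emitted into this assembly.
        // A Debug build enables both flags; a Release (pdb-only) build
        // typically omits the attribute or disables JIT tracking.
        var attr = (DebuggableAttribute)Attribute.GetCustomAttribute(
            Assembly.GetExecutingAssembly(), typeof(DebuggableAttribute));

        if (attr == null)
            Console.WriteLine("No DebuggableAttribute: optimized, no tracking info.");
        else
            Console.WriteLine("JIT tracking enabled: " + attr.IsJITTrackingEnabled
                + ", JIT optimizer disabled: " + attr.IsJITOptimizerDisabled);
    }
}
```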

In order to debug an application with Visual Studio, we need the matching PDB file: the debugger looks for the PDB file using the DLL or executable name, i.e. [applicationname].pdb. When the .dll or .exe and PDB files are generated, identical GUIDs are stored in each file; the GUID is used to determine whether a given PDB file matches a DLL or an executable.

You can use DUMPBIN to see whether the PDB file is found for a given DLL or executable, for example:

e:\Program Files (x86)\Microsoft Visual Studio 9.0\VC>dumpbin /pdbpath:verbose 
"e:\Projects\VS 2008\RemoteDebugging\WpfApplication1\bin\Release\WpfApplication1.exe"

Microsoft (R) COFF/PE Dumper Version 9.00.30729.01
Copyright (C) Microsoft Corporation.  All rights reserved.

Dump of file e:\Projects\VS 2008\RemoteDebugging\WpfApplication1\bin\Release\WpfApplication1.exe

File Type: EXECUTABLE IMAGE
  PDB file found at 'e:\Projects\VS 2008\RemoteDebugging\WpfApplication1\bin\Release\WpfApplication1.pdb'

  Summary

        2000 .reloc
        2000 .rsrc
        2000 .text

e:\Program Files (x86)\Microsoft Visual Studio 9.0\VC>

and GUID information can be inspected using DUMPBIN as well:

e:\Program Files (x86)\Microsoft Visual Studio 9.0\VC>dumpbin /headers 
"e:\Projects\VS 2008\RemoteDebugging\WpfApplication1\bin\Release\WpfApplication1.exe"
.....

Dump of file e:\Projects\VS 2008\RemoteDebugging\WpfApplication1\bin\Release\WpfApplication1.exe
.....

Debug Directories

      Time Type       Size      RVA  Pointer
  -------- ------ -------- -------- --------
  4D97EE82 cv           6C 00003708     1908    Format: RSDS, 
  {9873ECF8-BA29-4C84-9AF5-BC54B1E6FFD4}, 1, E:\Projects\VS 2008\RemoteDebugging\WpfApplication1\obj\Release\WpfApplication1.pdb

Another interesting feature of .NET debugging is configuring an executable image for debugging: if you want to debug [application].exe, create a text file named [application].ini in the same folder with this content:

[.NET Framework Debugging Control]
GenerateTrackingInfo=1
AllowOptimize=0

This tells the JIT compiler to generate tracking information and not to run optimizations, making it possible for the debugger to match up MSIL with the resulting machine code.

Remote debugging

To remote debug a .NET application you can use the Visual Studio Remote Debugger (msvsmon.exe). Both machines should have access/permissions to connect to each other. Start the remote debugger on the machine running the application; this starts the Visual Studio Remote Debugging Monitor, which displays the following information.

Msvsmon started a new server named 'Domain\user@machinename'. Waiting for new connections.
machine\user connected. 

On the machine you want to debug from, start Visual Studio and, in the Attach to Process window, set Qualifier to the remote debugger server name, in this example 'Domain\user@machinename'; use the name exactly as it appears.

Wednesday, March 2, 2011

xUnit.Net – Running the tests (RunAfterTestFailed Custom Attribute)

In my last post I created a custom TestCategory attribute and a custom xUnit.Net TestClassCommand to run tests by category. Now I want to create a custom RunAfterTestFailed attribute that runs a method whenever a test fails. We have the following test class:

using System;
using System.Diagnostics;
using Xunit;
using Xunit.Extensions;

namespace xUnitCustomizations
{
    [CustomTestClassCommand]
    public class TestClass
    {
        [Fact, TestCategory("Unit")]
        public void FailedTest1()
        {
            Assert.True(false);
        }

        [Fact, TestCategory("Unit")]
        public void FailedTest2()
        {
            throw new InvalidOperationException();
        }

        [RunAfterTestFailed]
        public void TestFailed()
        {
            Debug.WriteLine("Run this whenever a test fails");
        }
    }
}

We will use the CustomTestClassCommandAttribute and CustomTestClassCommand created previously, and make the following changes to the EnumerateTestCommands method, since we need a custom command to handle errors when executing the test methods:

public class CustomTestClassCommand : ITestClassCommand
{
    .....

    #region ITestClassCommand Members

    .....

    public IEnumerable<ITestCommand> EnumerateTestCommands(IMethodInfo testMethod)
    {
        string skipReason = MethodUtility.GetSkipReason(testMethod);

        if (skipReason != null)
            yield return new SkipCommand(testMethod, MethodUtility.GetDisplayName(testMethod), skipReason);
        else
            foreach (var testCommand in cmd.EnumerateTestCommands(testMethod))
                yield return new AfterTestFailedCommand(testCommand);
    }
    ....
    #endregion
}

AfterTestFailedCommand handles executing the test method and calls the appropriate methods whenever a test fails:

public class AfterTestFailedCommand : DelegatingTestCommand
{
    public AfterTestFailedCommand(ITestCommand innerCommand)
    : base(innerCommand)
    {}

    public override MethodResult Execute(object testClass)
    {
        MethodResult result = null;
        Type testClassType = testClass.GetType();
        try
        {
            result = InnerCommand.Execute(testClass);
        }
        finally
        {
            if (!(result is PassedResult))
            {
                foreach (MethodInfo method in testClassType.GetMethods())
                    foreach (var attr in method.GetCustomAttributes(typeof(RunAfterTestFailedAttribute), true))
                        method.Invoke(testClass, null);
            }
        }
        return result;
    }
}
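The RunAfterTestFailed marker attribute itself carries no behavior; a minimal definition (a sketch, restricting the attribute to methods) could be:

```csharp
using System;

namespace Xunit.Extensions
{
    // Marker attribute: methods decorated with this are invoked by
    // AfterTestFailedCommand whenever a test does not pass.
    [AttributeUsage(AttributeTargets.Method, AllowMultiple = false)]
    public class RunAfterTestFailedAttribute : Attribute { }
}
```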

I like the extensibility that xUnit.Net provides: even though there isn't a built-in solution for the TestCategory and RunAfterTestFailed attributes, I was able to build them by looking at the source code, the unit tests, and the examples available on CodePlex (the framework has unit tests!). I hope this series of posts was helpful to get you started migrating from MSTest to xUnit.Net and writing custom extensions for this framework.

Friday, January 28, 2011

xUnit.Net – Running the tests (TestCategory)

In my previous post I showed an example of converting an MSTest class to xUnit.Net, and now I want to provide a solution for converting the MSTest TestCategory attribute to an equivalent implementation in xUnit.Net.

MSTest allows us to run the tests that belong to a specific category; let's look at how this can be accomplished in xUnit.Net.

using System.Diagnostics;
using System.Threading;
using Xunit;
using Xunit.Extensions;

namespace xUnitCustomizations
{
    [CustomTestClassCommand]
    public class TestClass
    {
        [Fact, TestCategory("Unit")]
        public void FastTest()
        {
            Debug.WriteLine("fast test executed");
            Assert.True(true);
        }

        [Fact, TestCategory("Integration")]
        public void SlowTest()
        {
            Thread.Sleep(5000);
            Debug.WriteLine("slow test executed");
            Assert.True(true);
        }
    }
}

Create the TestCategory attribute
 
namespace Xunit.Extensions
{
    public class TestCategoryAttribute : TraitAttribute
    {
        public TestCategoryAttribute(string category)
        : base("TestCategory", category) { }
    }
    ...
}

The CustomTestClassCommandAttribute is used to indicate that a custom test runner will be used:

public class CustomTestClassCommandAttribute : RunWithAttribute
{
    public CustomTestClassCommandAttribute() : base(typeof(CustomTestClassCommand)) { }
}

CustomTestClassCommand is the class that implements ITestClassCommand and acts as the runner for the test fixture:

public class CustomTestClassCommand : ITestClassCommand
{
    // Delegate most of the work to the existing TestClassCommand class so that we
    // can preserve any existing behavior (like supporting IUseFixture&lt;T&gt;).
    readonly TestClassCommand cmd = new TestClassCommand();

    #region ITestClassCommand Members

    public object ObjectUnderTest
    {
        get { return cmd.ObjectUnderTest; }
    }

    public ITypeInfo TypeUnderTest
    {
        get { return cmd.TypeUnderTest; }
        set { cmd.TypeUnderTest = value; }
    }

    public int ChooseNextTest(ICollection<IMethodInfo> testsLeftToRun)
    {
        return cmd.ChooseNextTest(testsLeftToRun);
    }

    public Exception ClassFinish()
    {
        return cmd.ClassFinish();
    }

    public Exception ClassStart()
    {
        return cmd.ClassStart();
    }

    public IEnumerable<ITestCommand> EnumerateTestCommands(IMethodInfo testMethod)
    {
        return cmd.EnumerateTestCommands(testMethod);
    }

    public bool IsTestMethod(IMethodInfo testMethod)
    {
        return cmd.IsTestMethod(testMethod);
    }

    public IEnumerable<IMethodInfo> EnumerateTestMethods()
    {
        string category;
        foreach (IMethodInfo method in cmd.EnumerateTestMethods())
        {
            category = string.Empty;
            foreach (IAttributeInfo attr in method.GetCustomAttributes(typeof(TestCategoryAttribute)))
                category = attr.GetPropertyValue<string>("Value");

            if (category.ToLower().Contains("unit")) // We can make this configurable
                yield return method;
        }
    }

    #endregion
}


The EnumerateTestMethods method filters the test methods by the TestCategory attribute's Value. Note that we can make this configurable: the CI server can run unit tests as soon as there are changes in the repository, to provide quick feedback, and schedule the slower tests, like integration or Web UI tests, to run e.g. once a day.
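As a sketch of making the filter configurable, the hard-coded "unit" string could be read from the environment instead; the TEST_CATEGORY variable name is my own assumption:

```csharp
using System;

public static class TestCategoryConfig
{
    // Reads the category filter from an environment variable so the CI server
    // can decide which tests to run; defaults to "Unit" when it is not set.
    // The variable name TEST_CATEGORY is hypothetical.
    public static string GetConfiguredCategory()
    {
        return Environment.GetEnvironmentVariable("TEST_CATEGORY") ?? "Unit";
    }
}
```

EnumerateTestMethods would then compare each method's category against GetConfiguredCategory() instead of the literal "unit".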

Monday, January 3, 2011

xUnit.Net – Running the tests (ClassInitialize – ClassCleanup)

I started using xUnit.Net a few weeks ago. My first question was how to do the things I was used to doing in other testing frameworks like MSTest or NUnit, especially when using these frameworks not only for unit testing but for higher-level tests like Selenium RC web tests. So far, the framework seems to be very good and extensible.

I am going to show some scenarios I have run into while converting MSTest to xUnit.Net, starting from the following class:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class AdminPageTest
{
    static SeleniumWebTestContext test = new SeleniumWebTestContext();

    [ClassInitialize()]
    public static void ClassInitialize(TestContext context)
    {
        new LoginPage(test).LoginUser("maria", "******");
    }

    [ClassCleanup()]
    public static void ClassCleanup()
    {
        new AdminPage(test.Driver).Logout();
        test.StopWebTestContext();
    }

    [TestCategory("WebTest"), TestMethod]
    public void TestViewMyProfile()
    {
        var profileWindow = new AdminPage(test.Driver).SelectMyProfile();
        Assert.IsTrue(profileWindow.CheckFirstName("Maria"));
        profileWindow.Close();
    }

    [TestCategory("WebTest"), TestMethod]
    public void TestAdminSearchUser()
    {
        var userWindow = new AdminPage(test.Driver).SelectUserManagement();
        userWindow.Search("Marcano Maria");
        Assert.IsTrue(userWindow.VerifyEmail("my-email@domain.com"));
    }
}

Note that SeleniumWebTestContext holds information about Selenium RC and starts the server.

ClassInitialize – ClassCleanup / IUseFixture<T>

Sometimes we need to share state among the methods defined in a class; for example, in web tests we want to share the logged-in user information, execute different actions and validations, and log out at the end:

using Xunit;

public class AdminPageTest : IUseFixture<LoggedUserContext>
{
    SeleniumWebTestContext test;

    public AdminPageTest() { }

    public void SetFixture(LoggedUserContext usrContext)
    {
        test = usrContext.Test;
    }
    ...
}


and the LoggedUserContext class will look like this:

public class LoggedUserContext : IDisposable
{
    public SeleniumWebTestContext Test;

    public LoggedUserContext()
    {
        Test = new SeleniumWebTestContext();
        new LoginPage(Test).LoginUser("maria", "******");
    }

    public void Dispose()
    {
        new AdminPage(Test.Driver).Logout();
        Test.StopWebTestContext();
    }
}