Screw the CSV File!



Having messed up some test projects while trying to read values from CSV files, following several online tutorials, I decided to try a different approach. A Coded UI Test project directory contains UIMap.Designer.cs, which serves much the same purpose as the CSV file, so why not edit that instead?


The easiest way of altering the parameters it contains is by double-clicking the UIMap.uitest entry in the Solution Explorer and changing the values in the Properties window.


The less convenient way of changing the parameters is by modifying the lines in UIMap.Designer.cs. This doesn’t work with passwords, though – they are stored in encrypted form, and replacing one with a plaintext value would break the test script. Any new password must be encrypted beforehand.


Fortunately I didn’t have to write a function for that, as Microsoft already provides one: Playback.EncryptText(). Simply pass the new password to this function, and substitute the returned value into UIMap.Designer.cs.
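As a quick sketch of how that might be done (the password value and the surrounding snippet are made up for illustration):

```csharp
// Hypothetical one-off snippet: run this once inside any Coded UI test
// to obtain the encrypted form of a new password.
// Playback.EncryptText() lives in Microsoft.VisualStudio.TestTools.UITesting.
using System;
using Microsoft.VisualStudio.TestTools.UITesting;

// ...inside a [TestMethod]:
string encrypted = Playback.EncryptText("MyNewPassword");
Console.WriteLine(encrypted);  // paste this value into UIMap.Designer.cs
```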


But the whole point of adding a CSV file was to make it easier for others to change the test parameters. The next best thing is having all the variables declared near the head of UIMap.Designer.cs. Why not create another public partial class, and have the script reference that instead of the this.[Method]Params class?

Just above the [GeneratedCode] attribute in UIMap.Designer.cs, I inserted the following:
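As a sketch (the class and field names here are illustrative, not taken from the original project), the inserted parameters class might look something like this:

```csharp
// Hypothetical example – names are illustrative.
// A hand-written class near the top of UIMap.Designer.cs, gathering
// the editable test parameters in one place for other testers.
public partial class TestParameters
{
    // Values a tester can change without digging through generated code
    public static string UserName = "testuser";
    public static string Password = "encrypted-password-string-goes-here";
    public static string SiteUrl = "http://example.com/wp-login.php";
}
```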


And changed the variables in the other class to reference them:
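Again as an illustrative sketch (assuming a hand-written TestParameters class near the top of the file, and with the generated field names made up), the [Method]Params class would then pull its values from the new class, along these lines:

```csharp
// Hypothetical – a generated [Method]Params class, edited so its fields
// reference the hand-written TestParameters class instead of hard-coded
// literals. Field names are illustrative.
public class LoginParams
{
    public string UIUserNameEditText = TestParameters.UserName;
    public string UIPasswordEditPassword = TestParameters.Password;
}
```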


How to Load Test with Visual Studio



This is very different to the coded UI tests that I covered last week, and the process isn’t quite as straightforward.

The following are required:
* Visual Studio (Ultimate)
* Visual Studio Online account or Team Foundation Server

Setting Up
A load test uses a ‘Team Foundation Server’ (TFS), which is either an individual installation or obtained by registering for a Visual Studio Online account.
When that’s sorted, sign into the TFS by clicking the icon at the top-right of the Visual Studio environment.


The project details will appear in the Team Explorer window, where the Solution Explorer normally is.


Creating a Web Performance Testing Session
A Web Performance Test Project must be set up before any load testing is done. Under Visual Basic or Visual C#, navigate to ‘Test’ and select ‘Web Performance and Load Test Project’. The Solution Explorer will re-appear. Add a Web Performance Test to the project.



Visual Studio should automatically begin recording the Internet Explorer session, unless an existing project is selected and the Add Recording button is clicked. The Microsoft Web Test Recorder browser add-on must be enabled at this point.


When finished, click the Stop Recording button. The steps will be displayed in the main window, and can be replayed using the Run Test button above.
A successful run will produce output similar to the following example:


The Load Test
With a recorded Web test session, the project should be ready for a load test. This time, add a Load Test to the project. This will start the New Load Test Wizard – this is where the ‘load test scenario’ is created and the Web test scripts imported.


When the dialogue closes, a tree structure will appear in Visual Studio’s main window. With a bit of luck Visual Studio will connect to the right server, queue the test and execute it when the Run Test icon is clicked.


Analysing the Results
But how to get the metrics and flashy graphs we’re after? Simple: Just double-click the load test name to get the tree structure in the main window, then click the Open and Manage Results button at the top-left.


In the Open and Manage Load Test Results dialogue, click ‘Open…’, and the metrics will appear in Visual Studio’s main window.


Visual Studio Test Script Chaining



Building on what I’ve covered in the last post, I’ve been developing my team’s capability to initiate a batch of automated scripts, which is useful if we’re off to the pub, or need to deploy an updated application quickly.

For this to work, a ‘solution’ must be created in Visual Studio, and the project files for the scripts imported. To do this, simply create a blank solution file (Other Project Types – Visual Studio Solutions, in the New Project window).


The Solution Explorer window appears to the right of the Visual Studio IDE. We have two options here: either create new projects within the solution, or add existing project files.
Because it’s important to name the methods and ‘assertions’ properly this time, in order to pin down errors within a much larger test, I’ve created new script projects from scratch. These will be two very basic scripts – one of which is a webmail account login and logout, and the other uses the calculator application.

Right-click inside Solution Explorer and select Add — New Project.


Once created, follow the same procedure as before to confirm the first script works on its own, then right-click on the solution title to add another project. The calculator application script has ‘assertions’ added to check that the application’s output is what it should be, and that specific interface elements display the right text.

Once that’s working, return to the Visual Studio IDE. The two test projects are listed in the Solution Explorer window.


The final step is to run both projects in sequence instead of individually. You’re probably going mental trying to figure out how to do this. The answer is simple: in the Test Explorer window, click the ‘Run All’ link. This will execute the chain of test scripts we’ve created.


Basic Test Automation with Visual Studio



It’s possible to learn a fair amount of actual software development just by testing applications – the way a development team works, at least one programming language, and how to solve problems as a programmer. I’m kind of in the weird position of providing an organisation with the ability to automate its software testing processes, despite this being an almost entirely new area to me.

I’ve got the core automation process nailed for individual scripts, I think – hit the record button, run through the application test manually, and replay the sequence of interactions later. Whereas LoadRunner was designed for load testing, Visual Studio (the ‘Ultimate’ edition) has something more suitable for testing GUIs and checking the data returned by the target application. Another difference is the test scripts can be Visual Basic or C# files.

The Procedure
I started out with a proof-of-concept project for a simple WordPress login and logout script. First, create a new ‘Coded UI Test Project’, under the Visual Basic or C# section. Either language is fine at this point.


The skeleton code will appear in the IDE, and the ‘Generate Code for Coded UI Tests’ dialogue will give two options – the one we want is ‘Record actions, edit UI map or add assertions’. When selected, a small UIMap panel will appear on the desktop, enabling us to record a manual test, remove steps from the recorded sequence and focus tests on specific elements of the UI.


The Start Recording icon should be clicked before the target application is launched, and recording stopped/paused before the application is closed. Here I clicked the Start Recording icon, started up Internet Explorer and accessed the WordPress login page.

Next to the Record icon is another icon for Show Recorded Steps. We might want to look through this and remove any actions that weren’t part of the test we’re scripting.


If we’re happy with the sequence listed in the window, we click the ‘Generate Code’ icon, enter the method name, and click the ‘Add and Generate’ button. For some applications, most of the ‘Mouse hover’ actions can be removed first.

Back in the Visual Studio IDE, a script will appear as Visual Basic/C# code, but it’s not quite ready yet. The following steps are the really important bits for replaying the interactions and making the scripts portable.

1. Save the project, ideally by clicking ‘Save All’.
2. Under the ‘TEST‘ tab, select ‘Windows‘ then ‘Test Explorer‘. This will bring us down to the relevant code/method within the source file.
3. Then choose BUILD — Build Solution. Fingers crossed, it will build or compile with no errors.
4. Now, in order to replay the test, find the relevant method within the source file, right-click within it and select ‘Run Tests‘.
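For reference, the generated test class typically follows the shape below (a sketch – the class and method names are illustrative, and the UIMap type is generated by Visual Studio from the recording):

```csharp
// Hypothetical sketch of the generated Coded UI test class.
// The [TestMethod] simply calls the recorded method on the generated UIMap.
using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[CodedUITest]
public class CodedUITest1
{
    [TestMethod]
    public void CodedUITestMethod1()
    {
        // Replays the recorded interactions, e.g. the WordPress login/logout
        this.UIMap.RecordedLoginLogout();
    }

    // Lazily-constructed instance of the generated UIMap class
    public UIMap UIMap
    {
        get
        {
            if (this.map == null)
            {
                this.map = new UIMap();
            }
            return this.map;
        }
    }
    private UIMap map;
}
```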


If done correctly, Visual Studio will use the script to replay all the interactions in the manual test, and finish with no errors.

At some point it might be necessary to modify the parameters, to customise the test script for a slightly different test.
The parameters and user input are stored in an XML file in the /obj/debug directory.
The way I found to do this is through the Class View window in the right pane. Scroll down through the entries until you reach the project name, then find the [project name]Params entry. Double-clicking that opens another view for that public class, which contains the parameters accessible to the entire script.


The /bin/Debug directory is where the important project files are stored. There are two binaries for executing the test, plus an XML file that stores the test parameters. If you were to use these for automating other tests on another machine, it should be possible to run the script after changing the parameters as required in the (project-name).xml file.


Adding Assertions and Checking Values
Having got this working to test basic application functions, the next stage is to determine whether a target application is returning the correct output or displaying the correct text in dialogues. We do this by adding ‘assertions’. The following is a simple demonstration of this, using the calc.exe program.

Hit the ‘Start Recording‘ icon in the UIMap panel, open up the Calculator application, press some keys, then stop the recording.


Click the ‘Generate Code’ icon, and ‘Add and Generate’ to create the method.

After the code is generated, the target (centre) icon in the UIMap panel becomes active. Click and drag this to the calculator display box, which will then be highlighted. This will open the Add Assertion property list, which includes the output of the calculator application. Right-click the output value entry (the DisplayText value), and select ‘Add Assertion’.
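The generated assertion ends up as a method in the UIMap code, roughly along these lines (a sketch – the control names and expected value are illustrative):

```csharp
// Hypothetical sketch of a generated assertion method in UIMap.cs.
// WinText comes from Microsoft.VisualStudio.TestTools.UITesting.WinControls;
// Assert from Microsoft.VisualStudio.TestTools.UnitTesting.
public void AssertCalculatorResult()
{
    // The display control captured when the assertion was added
    WinText uIDisplayText = this.UICalculatorWindow.UIDisplayText;

    // Fails the test if the calculator's output isn't the expected value
    Assert.AreEqual("8", uIDisplayText.DisplayText,
        "Calculator display did not show the expected result");
}
```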



And that’s it. Return to the Visual Studio code editing environment, then repeat the steps for running the test (save all, build, run inside test method).


Using HP LoadRunner for Functional Testing?



My first week in a DevOps job has been pretty awesome, and I took a very quick liking to the workplace and the people there.

One of the challenges I have is figuring out how to determine the efficiency of software applications under a variety of ‘use cases’, which kind of meant learning software testing, test scripting and associated tools on the fly (I’ve done basic unit and integration testing before, though).

I picked HP LoadRunner as a potential tool for the job. If you can imagine working a case across three different versions of EnCase, you’d have a fairly good idea of the learning curve involved. The download is roughly 800MB, and took all morning to install on one machine, but the effort’s well worth it. Plus the test scripts are constructed in C, so it’s an opportunity to relearn that programming language.

Rather than describe the three modules of LoadRunner, I’ll run through my attempt to automate a manual test. Basically the target application is tested by recording a sequence of transactions between the client and server, then using LoadRunner to replay them as an automated test. My reasoning is that any errors encountered during the replay could be traced to broken features in the web application. Something else is required to check GUI rendering, as this is more event-driven.

Creating a Test Script
For this I’ve decided to run a ‘test’ against this blog, which I understand is perfectly legal if it doesn’t disrupt the service – simply visit the site, attempt a login and record those transactions. The idea is to get the basic process nailed, and later apply it to our own software.

In the Virtual User Generator interface, select ‘File — New Script and Solution’, then choose the type of service you want to test in the ‘Create a New Script’ dialogue. Here I’ve checked both the HTML and Web Services options, and set the script name as ‘MyFirstAutomation’.


Initially the script is a C source file with an empty Action() function that’s populated as the manual test actions are recorded.

After clicking ‘Record’ under the Record tab, the Start Recording dialogue will appear, prompting the user to specify whether the target is a local application or web site. If it’s the latter, specify the service’s URL.


What happens is the script initiates a session with vuser_init() and terminates it with vuser_end() (or something like that), and all the session transactions are typically implemented as function calls within Action() – the latter is where all the interactions should be recorded. Enter the URL of the target service and the browser being used, and we’re good to go.
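A rough sketch of what a recorded Action() might contain – the URLs, transaction names and form fields here are made up for illustration, not taken from the actual recording:

```c
/* Illustrative sketch of a recorded LoadRunner script.
 * vuser_init() opens the session, vuser_end() closes it, and the
 * recorded transactions appear as web_* calls inside Action(). */
Action()
{
    /* Load the front page of the target site */
    web_url("blog_home",
        "URL=http://example.wordpress.com/",
        "Resource=0",
        "RecContentType=text/html",
        "Mode=HTML",
        LAST);

    /* Simulate the pause before the user logs in */
    lr_think_time(5);

    /* Submit the recorded login form */
    web_submit_form("wp-login.php",
        ITEMDATA,
        "Name=log", "Value=myuser", ENDITEM,
        "Name=pwd", "Value=mypassword", ENDITEM,
        LAST);

    return 0;
}
```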

The browser will open, load the URL, and the actions are recorded as a test is performed manually. There’ll be a small icon panel for stopping, pausing and resuming the recording.


Click the stop button after all the actions have been performed. We end up back in the Design Studio with a populated script. Click the Replay and Scan button to check the script for errors, then save the script (important!). The replay button can also be clicked, to ensure the script runs okay.

Setting up Virtual Users
Strangely enough, the virtual users aren’t generated by the LoadRunner Virtual User Generator module, but by the LoadRunner Controller, whose purpose is to manage the execution of multiple virtual users. Exit the Virtual User Generator, start the Controller and import the newly-created test script (select it and click the ‘Add’ button).


The scripts are loaded into Scenario Group rows. A Scenario Group could represent department managers, another could represent analysts, another could be users from another government service.
In this example, I have queued four virtual users simultaneously attempting to log into a WordPress account. In other words, by clicking the Play button I can launch four instances of MyFirstAutomation script simultaneously, then gather performance data from the server responses.


The test will take a good few minutes to run, during which we get live metrics as the virtual users are doing their thing. When it’s completed, save the scenario as a .lrs file. Here it’s MyFirstScenario.lrs.


If we want to simulate the peak usage of a system, ‘Rendezvous Points’ enable the users to initiate a given transaction at the same time at some point in a test. The reported errors might be useful, if they indicate broken functions and can be traced.

The final step here is to load the .lrs scenario file into LoadRunner Analysis. Instead of closing LoadRunner Controller and opening LoadRunner Analysis, simply click ‘Results — Analyze Results‘.


Notice there’s an ‘SLA configuration wizard’ in the summary, with which performance can be compared against any SLA that might exist for the product/service being tested.


Further Note: Correlation and Test Parameters
One thing I almost forgot to mention, which I might cover in a more detailed look at the C code, is LoadRunner’s ‘correlation’ feature. It has something to do with identifying session-specific parameters that might change when automating a test that was performed manually. Certain values, such as session keys, cannot be used twice, and leaving that as a constant could therefore prevent a test being automated.
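For illustration, manual correlation in the C script typically uses web_reg_save_param() to capture a dynamic value from a server response before the request that returns it – the boundaries, URLs and parameter names below are hypothetical:

```c
/* Hypothetical example of manual correlation.
 * web_reg_save_param() is registered *before* the request whose
 * response contains the dynamic value. */
web_reg_save_param("SessionToken",
    "LB=name=\"token\" value=\"",   /* left boundary in the response */
    "RB=\"",                        /* right boundary */
    LAST);

web_url("login_page", "URL=http://example.com/login", LAST);

/* Later requests reference the captured value as {SessionToken},
 * instead of the constant recorded during the manual session. */
web_submit_data("do_login",
    "Action=http://example.com/login",
    "Method=POST",
    ITEMDATA,
    "Name=token", "Value={SessionToken}", ENDITEM,
    LAST);
```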

The best time to sort out correlation is just after preparing a new file for the script. Open the ‘Recording Options…’ dialogue in the Virtual User Generator module.


In the dialogue, set the correlation rules. Here I am testing an online .NET application, and have all the .NET correlation rules selected.


Once that’s done, start the recording and perform the test manually, as demonstrated before.

