My first week in a DevOps job has been pretty awesome, and I took a very quick liking to the workplace and the people there.
One of the challenges I have is figuring out how to determine the efficiency of software applications under a variety of ‘use cases’, which kind of meant learning software testing, test scripting and associated tools on the fly (I’ve done basic unit and integration testing before, though).
I picked HP LoadRunner as a potential tool for the job. If you can imagine working a case across three different versions of EnCase, you’d have a fairly good idea of the learning curve involved. The download is roughly 800MB, and took all morning to install on one machine, but the effort’s well worth it. Plus the test scripts are constructed in C, so it’s an opportunity to relearn that programming language.
Rather than describe the three modules of LoadRunner, I’ll run through my attempt to automate a manual test. Basically the target application is tested by recording a sequence of transactions between the client and server, then using LoadRunner to replay them as an automated test. My reasoning is that any errors encountered during the replay could be traced to broken features in the web application. Something else would be required to check GUI rendering, as that’s more event-driven.
Creating a Test Script
For this I’ve decided to run a ‘test’ against this blog (or WordPress.com), which I understand is perfectly legal if it doesn’t disrupt the service – simply visit the site, attempt a login and record those transactions. The idea is to get the basic process nailed, and later apply it to our own software.
In the Virtual User Generator interface, select ‘File — New Script and Solution‘, and choose the type of service you want to test in the ‘Create a New Script’ dialogue. Here I’ve checked both the HTML and Web Services options, and set the script name to ‘MyFirstAutomation’.
Initially the script is a C source file with an empty Action() function, which is populated as the manual test actions are recorded.
After clicking ‘Record’ under the Record tab, the Start Recording dialogue will appear, prompting the user to specify whether the target is a local application or web site. If it’s the latter, specify the service’s URL.
What happens is the script initiates a session in vuser_init() and terminates it in vuser_end(), while the session transactions themselves are implemented as function calls within Action(). The latter is where all the interactions are recorded. Enter the URL of the target service and the browser being used, and we’re good to go.
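To give an idea of what the recorder produces, here’s a rough sketch of a populated Action() for the WordPress login. web_url() and web_submit_data() are standard LoadRunner web-protocol calls, but the URLs and form field names below are illustrative stand-ins rather than copied from my actual recording:

```c
Action()
{
    // Step 1: load the blog's front page
    web_url("home_page",
        "URL=https://example.wordpress.com/",
        "Resource=0",
        "RecContentType=text/html",
        "Mode=HTML",
        LAST);

    // Step 2: submit the login form
    // (field names here are made up for illustration)
    web_submit_data("login",
        "Action=https://example.wordpress.com/wp-login.php",
        "Method=POST",
        "Mode=HTML",
        ITEMDATA,
        "Name=log", "Value=myusername", ENDITEM,
        "Name=pwd", "Value=mypassword", ENDITEM,
        LAST);

    return 0;
}
```

The vuser_init() and vuser_end() functions sit in their own files with the same shape, ready for any setup and teardown calls.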
The browser will open, load the URL, and the actions are recorded as a test is performed manually. There’ll be a small icon panel for stopping, pausing and resuming the recording.
Click the stop button after all the actions have been performed, and we end up back in the Design Studio with a populated script. Click the ‘Replay and Scan‘ button to check the script for errors, then save the script (important!). The Replay button can also be clicked on its own, to make sure the script runs okay.
Setting up Virtual Users
Strangely enough, the virtual users aren’t generated by the LoadRunner Virtual User Generator module, but by the LoadRunner Controller, whose purpose is to manage the execution of multiple virtual users. Exit the Virtual User Generator, start the Controller, and import the newly-created test script (select it and click the ‘Add‘ button).
The scripts are loaded into Scenario Group rows. One Scenario Group could represent department managers, another could represent analysts, and another could be users from another government service.
In this example, I have queued four virtual users simultaneously attempting to log into a WordPress account. In other words, by clicking the Play button I can launch four instances of the MyFirstAutomation script simultaneously, then gather performance data from the server responses.
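As a side note, the per-operation timings in that performance data come from LoadRunner’s transaction markers, which can be wrapped around the recorded steps. lr_start_transaction() and lr_end_transaction() are the standard calls; the transaction name below is just one I’ve made up:

```c
    // Time the login as a named transaction
    lr_start_transaction("wp_login");

    web_submit_data("login",
        "Action=https://example.wordpress.com/wp-login.php",
        "Method=POST",
        ITEMDATA,
        "Name=log", "Value=myusername", ENDITEM,
        "Name=pwd", "Value=mypassword", ENDITEM,
        LAST);

    // LR_AUTO lets LoadRunner set pass/fail from the replay status
    lr_end_transaction("wp_login", LR_AUTO);
```

Anything bracketed this way shows up as a named measurement in the Controller’s graphs and later in Analysis.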
The test will take a good few minutes to run, during which we get live metrics as the virtual users are doing their thing. When it’s completed, save the scenario as a .lrs file. Here it’s MyFirstScenario.lrs.
If we want to simulate the peak usage of a system, ‘Rendezvous Points’ enable the virtual users to initiate a given transaction at the same moment during a test. The reported errors might be useful here, if they indicate broken functions that can be traced.
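In the script itself, a rendezvous point is just a single call at the spot where all the virtual users should pile in together. lr_rendezvous() is the standard API call, and ‘login_peak’ is a name I’ve invented for this sketch:

```c
Action()
{
    // Every virtual user blocks here until the Controller's
    // rendezvous policy releases them, so they all hit the
    // login step at (roughly) the same moment
    lr_rendezvous("login_peak");

    lr_start_transaction("wp_login");
    /* ... recorded login calls go here ... */
    lr_end_transaction("wp_login", LR_AUTO);

    return 0;
}
```

The release policy (how many users to wait for, and for how long) is set on the Controller side rather than in the script.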
The final step here is to load the .lrs scenario file into LoadRunner Analysis. Instead of closing LoadRunner Controller and opening LoadRunner Analysis, simply click ‘Results — Analyze Results‘.
Notice there’s an ‘SLA configuration wizard’ in the summary, with which performance can be compared with any SLA that might exist for the product/service being tested.
Further Note: Correlation and Test Parameters
One thing I almost forgot to mention, which I might cover in a more detailed look at the C code, is LoadRunner’s ‘correlation’ feature. It identifies session-specific values that change between runs when automating a test that was performed manually. Certain values, such as session keys, cannot be used twice, so leaving them hard-coded as constants in the script would prevent the test being replayed.
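For the curious, the manual equivalent of this uses web_reg_save_param(), registered just before the request whose response carries the dynamic value. The left/right boundaries, URLs and field names below are made-up placeholders, not taken from a real application:

```c
    // Capture the dynamic session token from the next response,
    // using left/right text boundaries (illustrative values)
    web_reg_save_param("SessionToken",
        "LB=name=\"token\" value=\"",
        "RB=\"",
        LAST);

    web_url("login_page",
        "URL=https://example.com/login",
        "Mode=HTML",
        LAST);

    // Replay the captured value in place of the recorded constant
    web_submit_data("do_login",
        "Action=https://example.com/login",
        "Method=POST",
        ITEMDATA,
        "Name=token", "Value={SessionToken}", ENDITEM,
        LAST);
```

The {SessionToken} syntax substitutes the captured value at replay time, which is exactly what the automatic correlation rules generate behind the scenes.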
The best time to sort out correlation is just after creating a new script file: open the ‘Recording Options…’ dialogue in the Virtual User Generator module.
In the dialogue, set the correlation rules. Here I am testing an online .NET application, and have all the .NET correlation rules selected.
Once that’s done, start the recording and perform the test manually, as demonstrated before.