A Self-Documenting Process from User Stories



Creating user stories from a requirements specification enables a development team to prioritise tasks and organise work in a more structured, traceable way. Just as importantly, the stories can be arranged on a ‘story board’ as a way of communicating the current state of a project and the workload. Often I see this done with post-it notes on a whiteboard:


A user story is usually a variation on the following template statement:
As a… [user/application/consumer]
I need/want/expect to… [insert action here]
So that… [insert goal or objective here]

As an example, one user story I wrote the other day went something like:
'As an auditing application, I want to call the GetDepartmentNamesCodes() method from the API, so that I get a list of department names and department codes.'

Sometimes we find ‘acceptance criteria’ attached to user stories. These are lists of things to determine when a feature can be considered finished. For example:
* The application/user is able to call the API method
* The API method returns the correct data.

There are two freely-available online tools that can extend the story board method to provide traceability throughout the development and testing process: Microsoft’s Team Services and Trello.

User Stories in Team Foundation Server
The following example is loosely based on a real-life project I’m currently working on. While it has only two user stories, both features will consist of multiple components, and there’s a good chance they’ll have a load of artefacts associated with them later on. If the test analysts find defects, we want the test scripts and results to be traceable to the relevant features.
Extracting a list of user stories from the requirements specification is the first step.


Generally each user story is going to be implemented by a product feature. For example, a user wants to get a list of department names and department codes, and the feature could be an API to fetch that data. To add a feature, open the relevant user story, and create a feature as a ‘New linked work item‘.


After this is done for all the user stories, we’ll have a ‘backlog’ of features we can trace back to them. The story board here has a similar format to the physical white board:


The second tool I came across isn’t an Application Lifecycle Management suite like TFS, but it can still be used by relatively small development teams for organising features, user stories and related artefacts. I use this mainly to organise tasks rather than user stories.
The main interface of Trello is a ‘story board’ similar in layout to the TFS board. Each column is a ‘list’, and each list can contain a ‘card’ that represents a work item or user story. Here I’ve created three lists: Backlog, In Progress and Complete.


As with TFS, a card can be dragged and dropped across lists.


As it happens, you can also attach things like screenshots, test scripts and source files to each Card, so it does provide for some level of traceability.


Listed as a ‘Power Up’, the Calendar is an optional feature that must be added manually. Clicking a date in the calendar lets you add a Card, and a work item within it.

Minimise your Web Footprint with Lynx



Lynx (project page here) is a command line Web browser that I’ve found very usable over the years – a simple and elegant solution to a range of problems that Web 2.0 brings.


The main thing about Lynx is that it’s very lightweight – it doesn’t load images, JavaScript, Flash, PDFs, tracking pixels, content from multiple ad servers, potentially some exploits and malware, etc. – all the things your unmodified Firefox browser fetches when loading a Web page. It’s surprising how much faster the Web is without all that overhead. Even so, I’ve managed to use Twitter and access my Webmail using Lynx.

The Windows version of Lynx has also proven extremely useful as a countermeasure to shoulder-surfing, since the same amount of information can be displayed in a window that is probably smaller than this screenshot:


I had no problems getting a working installation on Windows 8 with the curses (not color-style) release. On Windows 7 it’s hit-and-miss: the required SSL libraries may or may not be present, and it might require compiling the source using Cygwin.

It’s possible to get by without learning the more advanced keybindings. Basic navigation is as simple as using the arrow keys to move between links and pressing ‘g’ to enter a new URL, and the ‘/’ key serves the same purpose as CTRL+F for searching within a page. There are numerous other key bindings, but you’ll more than likely get by with a small handful.

If you’re using Lynx regularly, you’ll want to bookmark Web pages. Use the ‘a’ key to add the current Web page to the bookmarks file. Use the ‘v’ key to view the stored bookmarks.


The bookmarks are actually stored in an HTML file that can be directly edited like any other to better organise the entries.
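For example, a stored entry is just an ordinary list item containing a link – roughly like the fragment below (the exact header and boilerplate Lynx writes may differ between versions):

```
<ol>
<li><a href="https://lynx.invisible-island.net/">Lynx project page</a>
</ol>
```

Reordering the `<li>` entries, or grouping them under headings, is all it takes to tidy the list.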

Since Lynx doesn’t do JavaScript or Flash or download ads, the only thing to worry about is cookies. In the lynx.cfg file you can set the browser to ignore third-party cookies by uncommenting the #ACCEPT_ALL_COOKIES:FALSE line.
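After the edit, the relevant part of lynx.cfg looks something like this (ACCEPT_ALL_COOKIES is a real option in the stock configuration file; the comment is mine):

```
# Uncommented so that cookies are not accepted automatically:
ACCEPT_ALL_COOKIES:FALSE
```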

When using the browser, access the ‘Cookie Jar’ with CTRL+K. Stored cookies can be removed individually by pressing the Enter key and ‘D‘.


Configuration and Setup
The most direct way of configuring the browser is to press the ‘O’ key to show the Options Menu.


If you really wanted to customise the browser, edit the following files on Linux:
/home/[user]/lynx_bookmarks.html [User’s bookmarks]
/etc/lynx-cur/lynx.cfg [Browser configuration]
/etc/lynx-cur/lynx.lss [Lynx UI colour settings]

And on Windows the files are:
C:\Users\[user]\Documents\lynx_bookmarks.html [User’s bookmarks]
C:\Program Files\Lynx - web browser\lynx.cfg [Browser configuration]
C:\Program Files\Lynx - web browser\lynx.lss [Lynx UI colour settings]

And on a FreeBSD system:
/home/[user]/lynx_bookmarks.html [User’s bookmarks]
/usr/local/etc/lynx.cfg [Browser configuration]
/usr/local/etc/lynx.lss [Lynx UI colour settings]

These files can be copied over to other installations of Lynx.

Get Files and Archive Web Pages
Move the cursor over the file link and press the ‘D‘ key. By default the ‘Save to disk’ option will save the file to your home directory, but you can specify a different path.


The download feature can also be used to download a linked Web page as a .gz archive.

Bad CA?



I didn’t want to pick on VeriSign/Symantec specifically, but a story broke earlier this week that got me thinking about what would happen if an SSL Certificate Authority were compromised.

VeriSign is a trusted CA, and was bought out by Symantec back in 2010. Blue Coat is an interception hardware vendor that by its own admission sells to regimes with questionable human rights histories. The problem is that Symantec appears to have granted Blue Coat intermediate CA status, with the ability to verify SSL connections as secure on behalf of Symantec.
Take a look at the crt.sh entry and judge for yourself. The commonName is ‘Blue Coat Public Services Intermediate CA’, and the cert doesn’t expire until September 2025.

On a corporate network, the admins might install their own root certificates on the client machines, which enables them to decrypt SSL traffic for the purpose of detecting malicious activity. This is entirely legitimate if it’s done by whoever owns the network and all the client machines. I’m a little skeptical about the claim Blue Coat was limited to being an intermediate CA for testing purposes within a corporate network only.
The ‘trusted’ CA model would be fundamentally broken if this became common practice, since it would allow anyone operating Blue Coat’s MITM kit to tamper with HTTPS sessions undetected. The browser wouldn’t flag that connection as compromised, and we’d be none the wiser without a deliberate inspection of the certificate.

Looking at one certificate where Symantec is the CA, it transpired that the root CA is actually ‘VeriSign Class 3 Public Primary Certification Authority – G5’.


Symantec bought out VeriSign a while ago, so life could get pretty awkward for anyone who revokes or removes Symantec from their certificate lists without making a backup.

Scrapping the Intermediate CA
Ideally you’d do the following procedure for a cert where the subject or common name is ‘Blue Coat’, but since I haven’t encountered that yet, I’ve done this with a cert signed by Symantec. If you’re going through with this, make sure you keep a backup of the file.


If we open that file in Windows Explorer, Windows will recognise it as a certificate and offer to install it using the Certificate Import Wizard. Usefully, we also get the option of importing it into the Untrusted Certificates store.


How to Remove SSL Certificate in Windows
In Windows 8.1, search in the Start screen for ‘Manage computer certificates‘. The entries you’d want for this are under Third-Party Root Certification Authorities and Trusted Root Certification Authorities, and they can be deleted, or their permissions modified in their Properties.


In Windows 7, run certmgr.msc and do the same as above.

Certificates can also be revoked or deleted in the Advanced options in Firefox.


How to Remove SSL Certificate in Linux
In the Advanced settings of Firefox, certificates can be deleted or revoked, and their trust settings modified.


Alternatively, download the certificate from crt.sh and import it into Firefox’s Certificate Manager – note that the trust settings are blank by default. You’ll then see it listed under VeriSign as ‘Blue Coat Public Services Intermediate CA’. Click the ‘Delete or Distrust’ button in the Certificate Manager – the certificate will still be installed, but marked as untrusted.


In Linux Mint, there are also certificates in /etc/ssl/certs/ca-certificates.crt, and CA lists are in /etc/ca-certificates.conf. Entries in the ca-certificates.conf file can be invalidated by prefixing them with ‘!’. Plus you’ll find public keys for the CAs in /usr/share/ca-certificates/mozilla.
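To illustrate the ‘!’ prefix without touching the real /etc/ca-certificates.conf, here’s a sketch that performs the edit on a throwaway copy with made-up entry names; on a real system you’d edit the actual file as root and then run update-ca-certificates to rebuild the bundle:

```shell
# Work on a disposable copy rather than /etc/ca-certificates.conf itself.
conf=$(mktemp)
printf '%s\n' 'mozilla/Example_CA.crt' 'mozilla/Another_CA.crt' > "$conf"

# Prefix the unwanted entry with '!' to mark it as disabled.
sed -i 's|^mozilla/Example_CA.crt$|!&|' "$conf"

cat "$conf"

# For the real file, the equivalent would be:
#   sudo sed -i '...' /etc/ca-certificates.conf
#   sudo update-ca-certificates
```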

Windows Communication Foundation



Microsoft provides a load of documentation on WCF, and it’s well maintained, which makes it a good fit for a Service Oriented Architecture (SOA) solution with numerous applications. A service provided by a WCF application should be accessible to other applications, regardless of the programming languages they were created with or the platforms they might be running on – which is the whole point of adding APIs to services. WCF is basically another interoperability solution, one that can communicate using SOAP and REST, and also make use of commonly used WS-* components.

When creating a new WCF project, there are four project type templates. For the purpose of this demo I selected ‘WCF Service Application‘.


The project template uses the following libraries:
* System.Runtime.Serialization
* System.ServiceModel
* System.ServiceModel.Web

The project template gives us three source files we’re interested in:
* Web.config: Configuration for the Web server and service bindings (such as HTTPS).
* Service1.svc: Contains the methods that implement the service’s functionality and the data returned.
* IService1.cs: Defines the interface for the called procedures, and the data types they handle.

This is where the concept of interfaces and implementations becomes important. Typically this is used in ASP.NET MVC as a way to ‘decouple’ methods from the core application, which makes it easier to modify an application’s functionality as a modular system.
Where methods are called directly in a conventional program, here a requesting object calls an interface that contains references to whichever methods implement the requested functionality. This is sometimes referred to as a ‘loosely coupled’ system.
In the WCF project we have a source file (Service1.svc.cs) that contains the methods that implement services by processing and returning variables. In another source file (IService1.cs) we have an interface that contains references to the methods.

The project template can be executed as is in Visual Studio, without any modification. Click on Service1.svc and hit F5. This will launch the WCF Test Client.


In the window on the left, there is an entry for the IService1 interface, with references to the GetData() and GetDataUsingDataContract() methods. Return values are displayed in the main window after the methods are invoked.

Adding a New Service/Procedure
This is a simple matter of creating a new method, then adding an interface reference to it. In Service1.svc.cs, add the following method:
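As a minimal sketch – the method name and return string here are my own, not part of the stock template:

```csharp
// Service1.svc.cs – a new service method returning a fixed string.
// Name and message are illustrative.
public string GetStatusMessage()
{
    return "The service is up and running.";
}
```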


And then add an interface reference to it:
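In IService1.cs this means declaring the method inside the service contract, marked with the [OperationContract] attribute. A sketch, using a hypothetical method name:

```csharp
[ServiceContract]
public interface IService1
{
    // Hypothetical operation – match this to the method added in Service1.svc.cs.
    [OperationContract]
    string GetStatusMessage();
}
```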


When executed, ‘Invoke’ simply triggers the method, which executes and returns the defined string.

Interactive Procedures
A WCF application can also process incoming data and return the output. The procedure/method below is no different from a conventional function that takes two variables, performs an arithmetic operation and returns an integer. The same principle will apply to whatever functions you want to make available through WCF.
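A sketch of such a method – the name and the arithmetic are hypothetical:

```csharp
// Service1.svc.cs – take two integers from the request and return their sum.
public int AddNumbers(int firstNumber, int secondNumber)
{
    return firstNumber + secondNumber;
}
```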


The interface reference is simply a single line declaring the method/procedure name, the input data types and the return data type.
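For a hypothetical AddNumbers() method, that line would look like:

```csharp
[OperationContract]
int AddNumbers(int firstNumber, int secondNumber);
```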


On executing the application, the XML for the Request and Response looks remarkably like what we get with an XML-RPC transaction.


The request payload has a tag for the function name, and a tag for each value being sent to it. Based on this concept, we should be able to provide almost anything as a WCF service. It also means we are able to call any WCF method and send arbitrary values to it.

WCF Configuration and Service Options
Right-click the Web.config entry in the Solution Explorer, and select ‘Edit WCF Configuration‘. This will open the Microsoft Service Configuration Editor.


The important options here:
Services: Create New Service appears to enable the developer to import a class/service from another object on the local system that has .NET classes. The ‘GAC’ option lists numerous other resources that could provide WCF services.
Client: A client configuration can be created for accessing a service. This configuration defines the message transport method (TCP, HTTP, pipes, P2P, etc.), the contract used and the endpoint address.
Bindings: The Bindings option specifies the transport protocol for requests and responses.
Standard Endpoints: The types of service to be provided, such as service announcement, service discovery, message exchange and workflow control.

A Custom WCF Client
Obviously WCF services are pretty useless without clients to access them. The simplest implementation of WCF is the WCF Service Library project, with which we can create the service as a DLL.


In both IService1.cs and Service1.cs, I changed the data type for GetData() from int to string. e.g.
string GetData(string value);
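For reference, the matching implementation in Service1.cs then becomes something like the following (the echoed message format is the stock template’s):

```csharp
public string GetData(string value)
{
    return string.Format("You entered: {0}", value);
}
```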

This is enough to have a service that returns a string when responding to a request. Next, I added a Windows Form project to the solution – this will be used as the UI instead of the WCF Test Client. In the Windows Form, there are text boxes for the input and output, and a Send button.
In order for the client application to make use of the WCF Service, a service reference must be added to the project. Right-click the References part of the client application project, and select ‘Add Service Reference…’.


Click the ‘Discover‘ button to find the available services.


Notice that entries for the WCF service appear under DataSources and Service References in the Solution Explorer. The DataSources entry contains what looks like a connection string. The Service References entry will open in the Object Browser, where we can view the available functions provided by the WCF Service.
Now add the following code as the client application’s button handler:
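A sketch of what that handler might look like – the proxy class (Service1Client) is generated when the service reference is added, while the button and text box names here are my own:

```csharp
private void btnSend_Click(object sender, EventArgs e)
{
    // Create the proxy, call the WCF service, and display the response.
    var client = new ServiceReference1.Service1Client();
    txtOutput.Text = client.GetData(txtInput.Text);
    client.Close();
}
```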


With the client application set as the startup project, hit F5 to run:


An Intro to APIs



In a typical enterprise network we’d rarely see everyone using the same applications on exactly the same platform and with a uniform configuration. Instead, we’d expect a range of applications for different things, running on a variety of platforms, and using different data sources. This is one reason why, in my workplace, at least 50% of the applications we develop are used within Web browsers, and most interact with each other using some form of Application Programming Interface (API). This is also why major Web 2.0 services, such as Twitter and Facebook, publish APIs for developers to integrate features into third-party software.

Strictly speaking, an API is an entry point into an application, platform or service that enables an external entity to use its resources. For example, the Windows API enables third-party software to use the native services of the Windows NT operating system.
Scaling up this concept, multiple applications can use each other’s APIs to exchange information efficiently without going through another mediating layer, or one service can use another as a data source. This is likely something I’ll be doing a lot more of in the near future.

To demonstrate how a very basic API works with Twitter, I created a standard Windows Forms application around the search query example on the Twitter API page. When the button is clicked, the application takes the content of the text box and appends it to the search query URI, with the GET command in the HTTP header, thereby sending a REST request:
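The button handler amounts to little more than the following sketch – the control names are mine, and the exact search URI depends on which Twitter endpoint the example used:

```csharp
private void btnSearch_Click(object sender, EventArgs e)
{
    // Append the escaped search term to the search URI, then let the
    // WebBrowser control issue the GET request and render the response.
    string uri = "https://twitter.com/search?q=%40"
        + Uri.EscapeDataString(txtSearch.Text);
    webBrowser1.Navigate(uri);
}
```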


A REST request has two defining features: 1) the requests are addressed as URIs, and 2) it makes use of the GET, POST, PUT and DELETE methods in the HTTP header.
Here the Twitter server will extract the ‘search?q=%40[search term]‘ part of the URI, and perform whatever operation was mapped to that string. It will then return the HTML for the result, and this will be rendered by the local application’s Web Browser element:


An ASP.NET Web API Example
The next example is something I borrowed from a Web API tutorial, to show what happens on the server end when an ASP.NET service responds to a REST request. An MVC Web application works roughly by routing requests to a given ‘controller’ according to the URI. In this example, the requesting URI takes the form:
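With the default routing convention, that’s something like the following, where ‘products’ selects the controller and the final segment is the Id passed to it (host and Id are placeholders):

```
http://[host]/api/products/[id]
```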


The standard ApiController class has a set of empty controller methods for handling whatever valid requests the application might receive.
In the tutorial project, the application has a catalogue of products that can be searched by their IDs. When a user requests an item, the request is routed by the server to the ‘products’ controller, and the ‘Id‘ value is passed to the relevant controller class.
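A sketch of that controller action, paraphrased from this style of tutorial – the Product type and the products collection are assumed to exist elsewhere in the project:

```csharp
public class ProductsController : ApiController
{
    // Routed here for GET api/products/{id}.
    public Product GetProduct(int id)
    {
        Product item = products.FirstOrDefault(p => p.Id == id);
        if (item == null)
            throw new HttpResponseException(HttpStatusCode.NotFound);
        return item;
    }
}
```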


Basically the ‘Id‘ value relates to whatever the user submitted in the Web page form, which is sent as a query value to the server:


