Exploring a Web Service Architecture

In his 1972 ACM Turing Award lecture, Edsger Dijkstra made a point about the human mind’s limited capacity to hold the logic of a sophisticated computer program from start to finish. That complexity is one of the reasons it’s easier to work with object-oriented code and abstraction than to create something unstructured but functional and syntactically correct. A book I’ve found useful over the past couple of weeks is Code Complete (Steve McConnell, Microsoft Press), which covers best practices for designing and implementing complex software.

Web Services are straightforward to understand when they do something basic and exist in isolation. However, they can also be considerably more complex – the ones I’m developing are essentially messengers, carrying requests to stored procedures on a SQL Server and carrying the responses back to the requesting application. They consist of multiple components spanning multiple repositories. While this method of development was intended to reduce the complexity of the source code, it increased the complexity of the program’s structure, and I ended up creating a UML representation (using ArgoUML) as a way to visualise it.

[Image: General-WS-Class-Diagram]

Since this Web Service is one of many, the components are called from three separate repositories: Web Service APIs, Web Service Helpers and Data Access.

Web Service API
A very simple example is given below:

[Image: mywebservice-api]
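
Since the original screenshot isn’t reproduced here, below is a minimal sketch of what such a class might look like. Everything except the call to GetMyWebServiceResponse() is illustrative – the class and method names are assumptions, not the original code:

using System.Web.Services;

[WebService(Namespace = "http://example.com/mywebservice")]
public class MyWebService : WebService
{
    // The exposed API does little more than declare the helper's
    // return value as the Web Service response.
    [WebMethod]
    public myDataItemResponse GetMyDataItems()
    {
        return MyWebServiceHelper.GetMyWebServiceResponse();
    }
}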

As we can see, this is just a public class containing a call to the method GetMyWebServiceResponse(). That method is the Web Service helper, and its return value is declared as the Web Service response.

Web Service Helper
A helper class is the core of the Web Service. A helper can be any program or function that returns a value to the exposed API. The important point is the separation of the logic from the API: because the helper is an ‘internal static’ method, it can only be called from within the same assembly, i.e. by the API that exposes it.

[Image: mywebservice-helper]
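
As a rough sketch of the shape described in the steps below – the DataAccess class, its RunStoredProcedure() method and the stored procedure name are hypothetical, not the original code:

using System.Data;

internal static class MyWebServiceHelper
{
    // 'internal' keeps this callable only from within the same
    // assembly, i.e. via the exposed API.
    internal static myDataItemResponse GetMyWebServiceResponse()
    {
        // The stored procedure call (name assumed) is declared as fetchedData.
        DataTable fetchedData = DataAccess.RunStoredProcedure("usp_GetMyDataItems");

        // fetchedData goes into the response body, and the body into the response.
        var body = new myDataItemResponseBody(fetchedData.Rows);
        return new myDataItemResponse(body);
    }
}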

1. The stored procedure name and parameters are declared as fetchedData.
2. fetchedData is passed to myDataItemResponseBody().
3. The resulting myDataItemResponseBody is passed to myDataItemResponse().
4. Whatever myDataItemResponse() returns is returned to the requesting application/service.

Data Item
The first thing to notice here is the namespace. The classes and methods for constructing the response exist in a different namespace (and repository) to the APIs and helpers.

[Image: mywebservice-dataitem]
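
A sketch of the data item – the namespace name is an assumption, while userName and profileCode are the properties mentioned below:

namespace MyWebService.DataAccess
{
    public class myDataItem
    {
        // Auto-implemented properties, marked virtual so that a
        // derived class can override them.
        public virtual string userName { get; set; }
        public virtual string profileCode { get; set; }
    }
}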

The { get; set; } auto-implemented properties can be made read-only or write-only where they need to be, by restricting one of the accessors. One potential advantage of this is that a property’s accessibility is controlled per accessor, rather than depending on the access modifier of the class or a backing field, e.g.
public int Number { get; protected set; }

Also, think of myDataItem as a ‘blueprint’ with properties userName and profileCode, rather than as a class with variables. Because the properties are virtual, they can be overridden in a derived class.

Data Response Body
The following code is for taking the data structure defined by myDataItem and forming the body of the Web Service response.

[Image: mywebservice-responsebody]
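
A sketch of how the pieces described below might fit together. BaseResponseBody’s real definition lives in another repository, so a stub is included here, and the column names are assumptions:

using System.Data;

public class BaseResponseBody
{
    // Stub for illustration; assumed to expose a virtual member
    // that derived classes override for their particular data type.
    public virtual myDataItem[] Items { get { return null; } }
}

public class myDataItemResponseBody : BaseResponseBody
{
    private myDataItem[] items;

    // Overload 1: no data rows were passed, so nothing is populated.
    public myDataItemResponseBody() { }

    // Overload 2: the data rows are used to populate the array.
    public myDataItemResponseBody(DataRowCollection rows)
    {
        items = new myDataItem[rows.Count];
        for (int i = 0; i < rows.Count; i++)
        {
            items[i] = new myDataItem
            {
                userName = rows[i]["UserName"].ToString(),        // column names assumed
                profileCode = rows[i]["ProfileCode"].ToString()
            };
        }
    }

    // The overriding member discussed below.
    public override myDataItem[] Items
    {
        get { return items; }
    }
}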

As we can see, ‘public class myDataItemResponseBody‘ is derived from a base class called ‘BaseResponseBody‘, so every myDataItemResponseBody is also an instance of that base class. The base class declares virtual members, which the derived class can override to handle its particular data type.

You’ll notice that myDataItemResponseBody() is declared twice here, but with different inputs. This is known as ‘overloading’. One reason to do it is to have a single method name that adapts to the arguments supplied, rather than several method names to deal with.
Here we use it because we know something is going to call myDataItemResponseBody(), but we don’t know in advance whether a data row will be passed to it. If there is a data row, the method uses it to populate an array; if not, the body is simply left empty.

Also notice the member below that, the ‘public override myDataItem‘. The ‘override’ keyword replaces a virtual member of the base class with the derived class’s implementation, so anything working with the base type gets the derived behaviour.

Data Response
This forms the entirety of what’s returned from a query.

[Image: mywebservice-response]
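
A sketch, assuming the base class name and the serialisation approach:

using System.Xml;
using System.Xml.Serialization;

public class BaseResponse { }   // stub for illustration

public class myDataItemResponse : BaseResponse
{
    // Whatever the Web Service returns is XML-formatted, so the
    // body is exposed as an XmlElement.
    public XmlElement Body { get; set; }

    public myDataItemResponse(myDataItemResponseBody body)
    {
        // Serialise the response body into an XML document and
        // keep its root element as the response body.
        var doc = new XmlDocument();
        var serializer = new XmlSerializer(typeof(myDataItemResponseBody));
        using (XmlWriter writer = doc.CreateNavigator().AppendChild())
        {
            serializer.Serialize(writer, body);
        }
        Body = doc.DocumentElement;
    }
}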

Again, myDataItemResponse is derived from a base class. The body is returned as an XmlElement, since whatever the Web Service returns is an XML-formatted response.

Basic Cisco 2600 Operation

It’s been a couple of years since I’ve done anything with this router, but I’ll be trying some things out with a managed switch I’ve acquired. At some point I’ll also get an Adtran for linking both routers.
On the hardware I’m using there are Ethernet, Console and Aux interfaces on the router itself, and four BRI interfaces on the module. The Ethernet interface (with the yellow cable) is for the LAN, and the BRI interface (blue cable) links the router to another router or exchange.

[Image: Cisco-BRI-and-Eth-Interfaces]

To configure the router, you’ll need 1) a terminal emulator, 2) a console cable, and most likely 3) a serial-to-USB adapter. I use the same connectors and terminal emulator for managing an HP Procurve switch.

Terminal Emulator Settings
Here I’ve used minicom as my terminal emulator. When running it for the first time, there’ll be a configuration menu. Select the ‘Serial Port Setup‘ option, and change the configuration to the following:

[Image: minicom-settings]
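
For a Cisco console the standard settings are 9600 baud, 8 data bits, no parity, 1 stop bit, with both hardware and software flow control off. Assuming the serial adapter shows up as /dev/ttyUSB0, the setup screen should end up something like:

A - Serial Device : /dev/ttyUSB0
E - Bps/Par/Bits : 9600 8N1
F - Hardware Flow Control : No
G - Software Flow Control : No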

Then back in the main minicom menu, ‘Save setup as dfl‘.

If minicom is already running before the router is switched on, something like the following boot message should appear:

[Image: cisco-2600-boot-2]

And depending on whether a startup configuration file was detected, there might be the option to run through the initial configuration dialogue.

Prepping the Interfaces
The first thing you’d want to know is which interfaces are present, which the following command will list:
Router>show ip interface brief

By default, none of the interfaces have an internal or external IP address, and none are active:

[Image: cisco-interfaces-non-configured]

To change that, we must enter privileged EXEC mode using the ‘enable’ command. The prompt will change to ‘Router#‘.

If we use the following command, we can see that all interfaces are present but none are configured for routing:
Router#show running-config

interface Ethernet0/0
no ip address
shutdown
half-duplex

So the next step is to enter the configuration mode with the ‘configure terminal‘ command, and the prompt will change to:
Router(config)#

Before changing anything, let’s think about the address range we want for the LAN, and the subnetting. Obviously it’s going to be an RFC 1918 address range, and since I don’t expect to have that many hosts on the network, 255.255.255.0 seems a good subnet mask. That mask gives 254 usable host addresses, and traffic is only routed when it’s destined for an address outside that subnet.

So, I want the internal Ethernet interface to have an IP address of 10.0.0.1, just to differentiate it from my other home network.

Enter the configuration for the Ethernet0 interface and set its IP address, netmask, state and description (optional):
Router(config)#interface Ethernet0/0
Router(config-if)#ip address 10.0.0.1 255.255.255.0
Router(config-if)#no shutdown
Router(config-if)#description LAN gateway interface

Then ‘exit‘ to leave the interface configuration, and ‘end‘ to return to the privileged EXEC prompt.

The external interfaces (BRI/PRI) won’t be doing much for a while, but it would be nice to give one of them an IP address also. For the purpose of this, I just set it to have a static address of 192.168.200.1.
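The commands are the same as for the Ethernet interface – assuming the first BRI port shows up as BRI0/0:
Router(config)#interface BRI0/0
Router(config-if)#ip address 192.168.200.1 255.255.255.0
Router(config-if)#no shutdown
Router(config-if)#end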

If we run the ‘show ip interface brief‘ command again, we see that both interfaces are up, and both have an IP address.

[Image: cisco-show-configured-interfaces]

If we’re happy with the setup so far, the current running configuration can be written to file as the startup configuration with:
Router#copy running-config startup-config

[Image: cisco-save-running-config]

A Self-Documenting Process from User Stories

Creating user stories from a requirements specification enables a development team to prioritise tasks and have things organised in a more structured and traceable way. Just as importantly, they can be arranged on a ‘story board’ as a way of communicating the current state of a project and the workload. Often I see this done with post-it notes on a whiteboard:

[Image: int-app-story-board]

A user story is usually a variation on the following template statement:
As a… [user/application/consumer]
I need/want/expect to… [insert action here]
So that… [insert goal or objective here]

As an example, one user story I wrote the other day went something like:
'As an auditing application, I want to call the GetDepartmentNamesCodes() method from the API, so that I get a list of department names and department codes.'

Sometimes we find ‘acceptance criteria’ attached to user stories. These are lists of conditions that determine when a feature can be considered finished. For example:
* The application/user is able to call the API method
* The API method returns the correct data.

There are two freely-available online tools that can extend the story board method to provide traceability throughout the development and testing process: Microsoft’s Team Services and Trello.

User Stories in Team Foundation Server
The following example is loosely based on a current real-life project I’m working on. While it has only two user stories, both features will consist of multiple components, and there’s a good chance they’ll have a load of artefacts associated with them later on. If the test analysts find defects, we want the test scripts and results to be traceable to the features they relate to.
Extracting a list of user stories from the requirements specification is the first step.

[Image: tfs-user-story-list]

Generally each user story is going to be implemented by a product feature. For example, a user wants to get a list of department names and department codes, and the feature could be an API to fetch that data. To add a feature, open the relevant user story, and create a feature as a ‘New linked work item‘.

[Image: tfs-select-new-linked-work-item]

After this is done for all the user stories, we’ll have a ‘backlog’ of features we can trace back to them. The story board here has a similar format to the physical white board:

[Image: tfs-story-board]

Trello
The second tool I came across isn’t an Application Lifecycle Management thing like TFS, but it can still be used by relatively small development teams for organising features, user stories and related artefacts. I use this mainly to organise tasks rather than user stories.
The main interface of Trello is a ‘story board’ similar in layout to the TFS board. Each column is a ‘list’, and each list can contain a ‘card’ that represents a work item or user story. Here I’ve created three lists: Backlog, In Progress and Complete.

[Image: trello-main-board]

As with TFS, a card can be dragged and dropped across lists.

[Image: trello-moving-card]

As it happens, you can also attach things like screenshots, test scripts and source files to each Card, so it does provide for some level of traceability.

[Image: trello-attached-files]

Listed as a ‘Power Up’, the Calendar is an optional feature that must be added manually. Clicking on a date in the calendar enables a Card to be added, and a work item to be added within it.

Minimise your Web Footprint with Lynx

Lynx is a command-line Web browser that I’ve found very usable over the years, and a simple and elegant solution to a range of problems that Web 2.0 brings.

[Image: lynx-start]

The main thing about Lynx is it’s very lightweight – it doesn’t load images, JavaScript, Flash, PDFs, tracking pixels, content from multiple ad servers, potentially some exploits and malware, etc. – all the things your unmodified Firefox browser fetches when loading a Web page. And it’s surprising how much faster the Web is without all that overhead. Still, I’ve managed to use Twitter and access my Webmail using Lynx.

The Windows version of Lynx has also proven extremely useful as a countermeasure to shoulder-surfing, since the same amount of information can be displayed in a window that is probably smaller than this screenshot:

[Image: lynx-win]

I had no problems getting a working installation on Windows 8 with the curses (not color-style) release. On Windows 7 it’s hit-and-miss, as the required SSL libraries might or might not be present, and it might require compiling the source using Cygwin.

Navigation
It’s possible to get by without learning the more advanced keybindings. Basic navigation is as simple as using the arrow keys to move between links, and pressing ‘g‘ to enter a new URL. The ‘/’ key gives the same feature as CTRL+F for searching within a page. There are numerous key bindings for the browser, but you’ll more than likely get by with a small handful.

If you’re using Lynx regularly, you’ll want to bookmark Web pages. Use the ‘a’ key to add the current Web page to the bookmarks file. Use the ‘v’ key to view the stored bookmarks.

[Image: lynx-bookmarks-list]

The bookmarks are actually stored in an HTML file that can be directly edited like any other to better organise the entries.

Cookies
Since Lynx doesn’t do JavaScript or Flash or download ads, the only thing to worry about are cookies. In the lynx.cfg file you can control how cookies are handled; uncommenting the #ACCEPT_ALL_COOKIES:FALSE line makes Lynx prompt before accepting cookies rather than accepting them all silently.
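
For example, the relevant line in lynx.cfg, uncommented:

ACCEPT_ALL_COOKIES:FALSE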

When using the browser, access the ‘Cookie Jar’ with CTRL+K. Stored cookies can be removed individually by pressing the Enter key and ‘D‘.

[Image: lynx-cookie-jar]

Configuration and Setup
The most direct way of configuring the browser is to press the ‘O’ key to show the Options Menu.

[Image: lynx-config]

If you really wanted to customise the browser, edit the following files on Linux:
/home/[user]/lynx_bookmarks.html [User’s bookmarks]
/etc/lynx-cur/lynx.cfg [Browser configuration]
/etc/lynx-cur/lynx.lss [Lynx UI colour settings]

And on Windows the files are:
C:\Users\[user]\Documents\lynx_bookmarks.html [User’s bookmarks]
C:\Program Files\Lynx - web browser\lynx.cfg [Browser configuration]
C:\Program Files\Lynx - web browser\lynx.lss [Lynx UI colour settings]

And on a FreeBSD system:
/home/[user]/lynx_bookmarks.html [User’s bookmarks]
/usr/local/etc/lynx.cfg [Browser configuration]
/usr/local/etc/lynx.lss [Lynx UI colour settings]

These files can be copied over to other installations of Lynx.

Get Files and Archive Web Pages
Move the cursor over the file link, and press the ‘D‘ key. By default the ‘Save to disk’ option will save the file to your home directory, but you can specify a different file path.

[Image: lynx-save-file]

The download feature can also be used to download a linked Web page as a .gz archive.
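
The same can be done non-interactively from the shell, as both of these switches are part of a standard Lynx build:

lynx -dump https://www.example.com > page.txt [rendered text]
lynx -source https://www.example.com > page.html [raw HTML]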

Bad CA?

I didn’t want to pick on VeriSign/Symantec specifically, but there was a story that broke earlier this week that got me thinking about what would happen if an SSL Certificate Authority were compromised.

VeriSign is a trusted CA, and was bought out by Symantec back in 2010. Blue Coat is an interception hardware vendor that by its own admission sells to regimes with questionable human rights histories. The problem is that Symantec appears to have granted Blue Coat intermediate CA status, with the ability to verify SSL connections as secure on behalf of Symantec.
Take a look at the crt.sh entry and judge for yourself. The commonName is ‘Blue Coat Public Services Intermediate CA’, and the cert doesn’t expire until September 2025.

On a corporate network, the admins might install their own root certificates on the client machines, which enables them to decrypt SSL traffic for the purpose of detecting malicious activity. This is entirely legitimate if it’s done by whoever owns the network and all the client machines. I’m a little skeptical about the claim Blue Coat was limited to being an intermediate CA for testing purposes within a corporate network only.
The ‘trusted’ CA model would be fundamentally broken if this became common practice, since it would allow anyone operating Blue Coat’s MITM kit to tamper with HTTPS sessions undetected. The browser wouldn’t flag that connection as compromised, and we’d be none the wiser without a deliberate inspection of the certificate.

Looking at one certificate where Symantec is the CA, it transpired that the root CA is actually ‘VeriSign Class 3 Public Primary Certification Authority – G5’.

[Image: the-apple-cert-verisign]

Symantec bought out VeriSign a while ago, so life could get pretty awkward for anyone who revokes or removes Symantec from their certificate lists without making a backup.

Scrapping the Intermediate CA
Ideally you’d do the following procedure for a cert where the subject or common name is ‘Blue Coat’, but since I haven’t encountered that yet, I’ve done this with a cert signed by Symantec. If you’re going through with this, make sure you keep a backup of the file.

[Image: symantec-cert-component]

If we open that file in Windows Explorer, Windows will recognise it as a certificate and offer to install it using the Certificate Import Wizard. Usefully, the wizard allows us to import it into the Untrusted Certificates store instead.

[Image: firefox-insecure-connection]

How to Remove SSL Certificate in Windows
In Windows 8.1, search in the start screen for ‘Manage computer certificates‘. The entries you’d want for this are under the Third-Party Root Certification Authorities and Trusted Root Certification Authorities, and they can be deleted or the permissions modified in their Properties.

[Image: windows-cert-manager]

In Windows 7, run certmgr.msc and do the same as above.

Certificates can also be revoked or deleted in the Advanced options in Firefox.

[Image: firefox-cert-list]

How to Remove SSL Certificate in Linux
In the Advanced settings of Firefox, certificates can be deleted or revoked, and their trust settings modified.

[Image: firefox-edit-cert-trust]

Alternatively, download the certificate from crt.sh and import it into Firefox’s Certificate Manager – note that the trust settings are blank by default. You’ll then see it listed under VeriSign as ‘Blue Coat Public Services Intermediate CA’. Click the ‘Delete or Distrust’ button in the Certificate Manager – the certificate will still be installed, but marked as untrusted.

[Image: import-bad-cert]

In Linux Mint, there’s also a certificate bundle at /etc/ssl/certs/ca-certificates.crt, and the CA list is in /etc/ca-certificates.conf. Entries in the ca-certificates.conf file can be invalidated by prefixing them with ‘!’. You’ll also find the CA certificates themselves in /usr/share/ca-certificates/mozilla.
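
After editing the conf file, the bundle in /etc/ssl/certs needs regenerating – on a Debian-based system such as Mint that’s done with:

sudo update-ca-certificates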
