Conflicting Opinions About the Role of Open Source Software in Healthcare



In the article ‘Open source is the only way for Medicine’, Dr. Marcus Baw argues:

As software and technology have become more and more part of medicine itself, neither distinguishable or separable from the practice of medicine, we must consider that these software artefacts now have to become part of the corpus of medical knowledge, and thus there is an ethical duty to share these technological artefacts openly, as Open Source software and Open Hardware, in order for the maximum benefit to humankind.

In principle I partly agree. If a team develops a system that improves the quality of healthcare, others around the world could benefit from repurposing it, so there is, superficially, a moral argument for publishing the code. But the case would be stronger if Dr. Baw hadn’t based it on a comparison between sharing software and sharing medical knowledge. If we accept, as our basis for argument, that medicine and software development are both distinct fields of expertise, then developing software is analogous to performing surgery or treating a patient, not to publishing research. The point here is that producing properly engineered software is not equivalent to sharing knowledge, since the former is usually a larger undertaking that requires the application of expertise and labour.
What is directly analogous to the sharing of medical knowledge is the sharing of knowledge that enhances the quality of software, through books, meetups, blogs, GitHub pages, tools, best practices, YouTube/Pluralsight videos, design patterns, etc. This has been the norm for decades already.

Another point worth mentioning is that making clinical software open source is all nice and altruistic, but it could go completely the other way. Over the years, we’ve seen how the ideal of collaborative progress can be co-opted to form a ‘sharing economy’, in which corporations exploit underpaid labour in ways that undermine the social contract, while creating a fake ‘startup culture’ that enables monopolies to assimilate anything and everything they deem profitable. The monopolies now exist. What’s to prevent them repurposing the best code, hiring the top 1% of developers, and making it harder for smaller vendors to compete? What’s to prevent a corporation co-opting a successful open source project and ensuring only their implementation gets approved?

The Testability of Proprietary Software

On the issue of testing and reviewing clinical software, Dr. Baw states:

But, with these closed-source proprietary systems, there’s no way to independently test their claims, there’s no scientific process, and the need for proper clinical peer review has been completely over-ridden by the wish of the company to make profit.

It’s not always appropriate to invoke ‘science’ when making an argument, as the scientific method is simply a tool for determining the typical behaviour of the observable world. For what it’s worth, one could observe an instance of proprietary software, make inferences about its behaviour, and form testable hypotheses. Dr. Baw could even do that for multiple proprietary applications and submit a paper on that for peer review. This would be an example of the scientific process applied to proprietary software.

As it happens, proprietary software is far from untestable, as the actual behaviour of an application can be compared with the expected behaviour for a range of anticipated use cases, and this is precisely what the NHS does for each iteration of clinical software it develops in-house. The Clinical Portal, My Health Online, WCCG, GPTR and Myrddin are thoroughly tested that way by professional test analysts. That method of testing is also applied to some third-party proprietary applications, such as LIMS, INPS and EMIS. It’s not the place of test analysts to weigh in on my source code, but it’s their job to criticise the applications from the users’ perspective and log application behaviour that deviates from what they expect.

When it comes to reviewing code, that’s important, but it doesn’t reveal much about how the software would actually behave, especially if one is reviewing scripts where the underlying components were coded in another language, or if software is so loosely coupled one needs to dig through five source files to determine the purpose of a component.
Subjective opinion also plays a huge role in fundamental decisions about how software is engineered and applied – there are even valid criticisms against what’s generally considered ‘best practice’. My opinions of someone else’s code, and whether that person should have followed a given design pattern, would be a matter of subjective opinion, regardless of my reasoning.

Open Source and Publicly-Funded Projects

Quite a large proportion of medical research is publicly funded. This is taxpayers’ money which is used for developing medicine. Where this medical research results in development of software, data sets, algorithms, or any other technological feature, it’s imperative that these be open sourced, so that the maximum taxpayer benefit is extracted from the investment.

Aside from having to be really careful about publishing data sets, I sort of agree. I believe in subsidiarity, that everyone can empower themselves by taking ownership of their communities, and open source projects are conducive to that. It’s more sustainable, I believe, for a community to utilise the skills it has, develop the systems it needs and use them as it sees fit. Taxpayers would certainly get more for their investment, and patient care might improve globally. However, for things to remain that way, the NHS would need to assume a similar role to the Free Software Foundation, in funding the core developers and ensuring there are legal protections for in-house projects.


John Keats and State Surveillance



Dr. Richard Margraff Turley is due to give a talk on Literature and Mass Surveillance at the BCS Mid Wales event this week (Aberystwyth University is a four-hour drive for me, unfortunately). Titled ‘Who hath not seen thee … ?’, it’s an interesting discussion of the idea that state surveillance had some influence on the later works of John Keats.

The relevance of Keats’ writings to mass surveillance isn’t obvious. I think it’ll become clear to some readers that Keats, if he was indeed commenting on surveillance, was describing a situation very different to that of today, and Dr. Turley hasn’t differentiated between the mass surveillance of today and the targeted surveillance of the past. The two are very different:

We might assume mass surveillance is a modern phenomenon, but “surveillance” is a Romantic word, first introduced to English readers in 1799. It acquired a chilling sub-entry in 1816 in Charles James’s Military Dictionary: the condition of “existing under the eye of the police”.

This definition, though very concise, is remarkably broad. What precisely does it mean to be ‘under the eye of the police’? It could refer to a state in which reasonable suspicions are investigated by detectives (targeted surveillance). It could refer to a state in which those ‘with nothing to hide’ are watched by a myriad of deputised officials and machinery (mass surveillance). Mass surveillance is something that violates our reasonable expectations of privacy in the most insidious ways.

It’s also important to remember that Keats penned his works in a politically volatile period, just a few decades after the revolutions in France, America and Haiti. Masses of people were awakening to the fact they could revolt and potentially overthrow governments in their struggle for universal rights, suffrage, a better quality of life, and even their very survival. Today the opposite is true: most of us have everything to lose and little to gain by overthrowing The Establishment. Also, today mass surveillance wouldn’t be viable in a Western society without Silicon Valley corporations and social media to provide the framework.

So, it wasn’t without reason that The Establishment would have employed spies to watch public events for indications of an imminent uprising, and resorted to heavy-handed tactics to prevent that happening. The Establishment felt it necessary to mobilise the police and the Army to protect the Bank of England, among other buildings, and charge radicals like Henry Hunt with treason. Could we really claim that as an example of mass surveillance, though? No, I think the issue here was that the use of state surveillance to monitor the political activities of citizens, instead of, say, people who already had political influence, was a new concept at the time.

What’s more telling than the content of the literature is the way Keats was guarded in his commentary of events, as if he suspected The Establishment knew about his more politically active acquaintances and were intercepting his letters. Today we refer to this as the ‘chilling effect’ – the reluctance to openly express dissenting opinions for fear of retribution. This is not an irrational fear when political discourse is divisive and uncivil.


Of Keats’ poem ‘Lamia’, Dr. Turley writes: ‘That poem opens with a queasy scene in which Hermes transforms Lamia from serpent to woman. The price is information: Lamia agrees to give up the location of a nymph’s “secret bed” to the priapic god.’

Again, it’s conjecture to say that Keats was making a veiled reference to The Establishment’s surveillance apparatus, but we could nevertheless read that section of the poem as an allegory for it. Keats seemed to recognise and allude to the fact that people are willing to betray secrets in return for something, for some kind of benefit, rather like we’re collectively prepared to trade personal information for our 15 minutes of fame on social media. The nymph’s ‘secret bed’ could be a metaphor for a place where dissidents conspire, but it could also be a warning that even intimate details about ourselves and others could be traded. And why does Hermes want the information? For his personal gain, obviously, not for selfless reasons.

And once that level of sharing becomes accepted behaviour, it can quickly become a habit of inadvertent disclosure, as Keats and Turley also noticed:
Keats is describing actual workers, real people whose slacking off he reports as unthinkingly as we might share our own peers’ political views or locations on social media. As casually as a Google car might capture a moonlighting worker up a ladder outside someone’s house.

Extending ASP.NET IdentityUsers



As I posted the other week, I’ve used the default ASP.NET IdentityModels to initialise a Code-First schema on a local database server. Anyone running the application after that could register an email address and password combination, log in with it and manage his/her account. What if we wanted to extend this to include a user name, or some other field? Well, first the extra field, in this case ‘Name’, must be added to the ApplicationUser class in IdentityModels.cs:
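A minimal sketch of the extended class might look like the following. Everything except the Name property is roughly what the MVC 5 template scaffolds by default:

```csharp
public class ApplicationUser : IdentityUser
{
    // Added property: the user's display name.
    // The StringLength attribute limits the column width when the migration runs.
    [StringLength(160)]
    public string Name { get; set; }

    // Scaffolded by the MVC 5 template; left unchanged.
    public async Task<ClaimsIdentity> GenerateUserIdentityAsync(UserManager<ApplicationUser> manager)
    {
        var userIdentity = await manager.CreateIdentityAsync(this, DefaultAuthenticationTypes.ApplicationCookie);
        return userIdentity;
    }
}
```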

To this property I’ve added the ‘StringLength()‘ attribute, since I want to limit the user name to 160 characters – this limit is applied to the database column when the migration is run.

If you want to list users in the application view, the names can be referenced using something like ‘@item.CalendarUser.Name‘.

Users will need to add their (user)names on the Register page, in addition to their email addresses and passwords. To enable this, we can add the ‘Name‘ property to AccountViewModel.cs, under the RegisterViewModel class:
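Sketched out, the view model change might look like this. The attributes on Name mirror the limit on ApplicationUser; the Email, Password and ConfirmPassword properties are the scaffolded defaults, abbreviated here:

```csharp
public class RegisterViewModel
{
    // Added field, mirroring the Name property on ApplicationUser.
    [Required]
    [StringLength(160)]
    [Display(Name = "Name")]
    public string Name { get; set; }

    [Required]
    [EmailAddress]
    [Display(Name = "Email")]
    public string Email { get; set; }

    // ... Password and ConfirmPassword properties as scaffolded ...
}
```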

Modifying the code in /Views/Account/Register.cshtml is a simple matter of copying and pasting one of the form elements and renaming it. The Register action in AccountController.cs also needs to be modified, so the Register page will post back the user’s registered name:
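The relevant part of the Register action would then look something like this, with the scaffolded object initialiser gaining one extra assignment:

```csharp
// Inside AccountController.Register(RegisterViewModel model), after the
// ModelState.IsValid check:
var user = new ApplicationUser
{
    UserName = model.Email,
    Email = model.Email,
    Name = model.Name   // the only change from the scaffolded code
};
var result = await UserManager.CreateAsync(user, model.Password);
```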

All I’ve really done to the default method is add ‘Name = model.Name‘ to the instance of ApplicationUser.

In the Package Manager Console, add this change as a migration script, and update the database. You might want to populate the ‘Name’ field in the table in the Server Explorer manually.
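In the Package Manager Console that would be something like the following (the migration name is arbitrary):

```text
PM> add-migration AddUserName
PM> update-database
```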

How to Spot Satellites without a Telescope



Recently I installed the Heavens Above application for an Android phone, experimented with it, and managed to see three satellites with the naked eye within a space of about ten minutes. This was at least two hours after sunset.

In the settings, set the magnitude limit to 4 – this should be the default anyway. The Sky Chart will often display the position of multiple rockets and satellites.

The application will also display the flight path of a given object when the user taps on it.

All you need now is a clear night; wait for the Sky Chart to show a satellite about to pass overhead. Satellites seem more visible overhead, and it’s easier to notice their movement there. What you’re looking for is something that appears like a moving star.

Setting Up a Data Source Using the Code-First Entity Framework Model



Since most of my .NET projects for the past several years have been for clinical systems, I’ve been working from Database-First Entity Framework models. The software needed to be designed around whatever database schemas and stored procedures already existed, and data models needed to be generated from them. So, it’s only quite recently that I’ve looked at Code-First models, which seem more appropriate to situations in which we’d want a database schema to evolve as an application is being developed.

With the Code-First method, the data model is defined in the application’s code, and we sync the changes to the database.
To try the following example, you’ll need to create a new ASP.NET MVC project in Visual Studio, and ensure it’s created with the ‘Individual User Accounts‘ option (click the ‘Change Authentication‘ button when choosing the template), as I’m using classes within a file called ‘IdentityModels.cs‘ to set up the initial model and schema. When the project is loaded, right-click on the project and select ‘Manage NuGet Packages…‘. If Entity Framework isn’t already installed, install it.

We should be set up for the first step. It might be worth hanging onto the project if you’re following this example, because a future post will be going into developing the controllers to read and write data using this.

Generating the First Migration Script and Database From IdentityModel

First let’s take a look at the IdentityModels.cs file, as the two classes in here are important for understanding the Code-First method. The first class is ApplicationUser. I’m still not entirely sure how it works, as the implementations are hidden, but it uses IdentityUser as its base class, and I want to later extend an instance of this with a property that defines the application user name.

The second is the DbContext. Currently it references a connection string in Web.config named ‘DefaultConnection’.

In my project, these classes are within the namespace ‘CodeFirstExample.Models‘.
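For reference, the scaffolded context in an MVC 5 template looks roughly like this – ‘DefaultConnection’ is the connection string name it passes to the base constructor:

```csharp
public class ApplicationDbContext : IdentityDbContext<ApplicationUser>
{
    public ApplicationDbContext()
        : base("DefaultConnection", throwIfV1Schema: false)
    {
    }

    // Factory method used by the OWIN pipeline to create a context per request.
    public static ApplicationDbContext Create()
    {
        return new ApplicationDbContext();
    }
}
```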

Now it’s possible to generate a database table and its schema from this by using the Migrations tool, which is run in the Package Manager Console – find this in ‘Tools‘ – ‘NuGet Package Manager‘ – ‘Package Manager Console‘. Basically this is a PowerShell command line interface, and we can do more than simply fetch and remove packages.

PM> enable-migrations
PM> add-migration InitialMigration

The console should contain:

PM> enable-migrations
Checking if the context targets an existing database...
Code First Migrations enabled for project CodeFirstExample.
PM> add-migration InitialMigration
Scaffolding migration 'InitialMigration'.
The Designer Code for this migration file includes a snapshot of your current Code First model. This snapshot is used to calculate the changes to your model when you scaffold the next migration. If you make additional changes to your model that you want to include in this migration, then you can re-scaffold it by running 'Add-Migration InitialMigration' again.

We should also see a Migrations folder appear in Solution Explorer, containing a [migration name].cs file. This will have C# methods containing code that looks very much like SQL – these should be easily translatable to actual SQL scripts.

To execute the migrations script, run the following command:
PM> update-database

Now refresh the SQL Server Object Explorer window. If there is no configuration string included in Web.config or the connection string name isn’t specified in the DbContext, Entity Framework will create a database locally. Since I’ve already got SQL Server Express installed, the new schema appears under that connection name.
The new database will be called something like ‘aspnet-[project name]-timestamp’, and it’ll have tables for the ApplicationUser identity. The table it uses for local accounts is dbo.AspNetUsers.

If the application was run, the login and register features should be functioning, and the [Authorize] attribute can be set on any controller action. A user could register and modify an account, and the login details will appear in the database table.

Creating Our Own Model

Next I’ve defined my own model for a calendar application, by adding two classes: MyCalendar and EventType. In the Models folder, I’ve created another class file called ‘MyCalendarModel.cs‘, and added the following code into it:

I want to also add the following import statements, in addition to the defaults:

using System.Data.Entity;
using System.ComponentModel.DataAnnotations;

Also, what I typically do with the model is add validation attributes. I think it’s safer and better to sort out the validation here rather than implement it solely in the view layer. I also assumed these attributes would give us more precise control of the data types in the database tables, but that doesn’t seem to be the case.
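The two model classes might be sketched as follows. The property names here are illustrative assumptions (the original listing wasn’t preserved), but the validation attributes and the foreign-key convention reflect what the post describes:

```csharp
using System;
using System.ComponentModel.DataAnnotations;

namespace CodeFirstExample.Models
{
    public class EventType
    {
        public int Id { get; set; }

        [Required]
        [StringLength(50)]
        public string Name { get; set; }
    }

    public class MyCalendar
    {
        public int Id { get; set; }

        [Required]
        [StringLength(200)]
        public string Title { get; set; }

        public DateTime Date { get; set; }

        // Foreign key to EventType; EF sets up the relationship by convention
        // from the [TypeName]Id naming pattern and the navigation property.
        public int EventTypeId { get; set; }
        public virtual EventType EventType { get; set; }
    }
}
```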

Adding the Model to DbContext

Now we have two extra models that wouldn’t do anything unless they’re added to the DbContext class, so I’ve added them here.
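Registering them is a matter of adding a DbSet for each to the context, along these lines:

```csharp
public class ApplicationDbContext : IdentityDbContext<ApplicationUser>
{
    // ... scaffolded constructor and Create() method unchanged ...

    // Added DbSets: EF will create a table for each on the next migration.
    public DbSet<MyCalendar> MyCalendars { get; set; }
    public DbSet<EventType> EventTypes { get; set; }
}
```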

I then cleaned and rebuilt the project, just to make sure there were no obvious errors before executing another migration.

Generating the Migration Script and Applying the Changes

Similar to what I did previously, but the migration is given a different name so I’d know later which script does what, and I’ll need to force the update-database command to overwrite the existing schema:

PM> add-migration AppendMyModels
PM> update-database -Force

In Server Explorer there’ll be two additional tables, EventTypes and MyCalendars, created by running the above migration script. The only thing I needed to do after that was populate the EventTypes table, as it’s there for populating a drop-down list.

Also, if we look in the table definition, we can see the data types have been loosely applied from the attributes in the model class, and primary and foreign keys were set.

Final Thing: Populating a Database Table for an MVC Drop-Down Menu

We could populate the EventTypes table by entering the values into it using Server Explorer, but doing it from a migration script enables us to replicate the changes with other databases much faster. For this, I’d need a blank migration script template, which can be generated by running ‘add-migration PopulateEventTypes’.

I’ve populated the Up() method with a series of SQL INSERT statements. The other method, Down(), is for reversing the migration, so it should delete the same rows if the migration is ever rolled back.
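A sketch of what that migration might contain – the event type names are illustrative, not from the original listing:

```csharp
public partial class PopulateEventTypes : DbMigration
{
    public override void Up()
    {
        // Seed the lookup table with raw SQL; Sql() queues a statement
        // to run as part of the migration.
        Sql("INSERT INTO dbo.EventTypes (Name) VALUES ('Appointment')");
        Sql("INSERT INTO dbo.EventTypes (Name) VALUES ('Meeting')");
        Sql("INSERT INTO dbo.EventTypes (Name) VALUES ('Reminder')");
    }

    public override void Down()
    {
        // Reverse the migration by removing the seeded rows.
        Sql("DELETE FROM dbo.EventTypes WHERE Name IN ('Appointment', 'Meeting', 'Reminder')");
    }
}
```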

Now run the migration script again, using ‘update-database‘ to populate the database table. Viewing the EventTypes table in the SQL Server Object Explorer, we should see it populated.