Minds.Com and the Free Thought Project Interview



Recently I was listening to the Free Thought Project’s interview with Bill Ottman, the CEO of Minds.com, and thought it worth expanding on some of the points. If you haven’t already done so, Minds.com is worth checking out, if you’re a content creator, blogger or citizen journalist looking for an alternative to the mainstream platforms.

The Free Thought Project was one of 810 accounts that got booted off Facebook and Twitter for ‘inauthentic activity’, in what seemed more like a co-ordinated act of political censorship. While the full list hadn’t been released, the main targets appeared to have been groups reporting on corruption within politics and law enforcement – you know, things we have a civic duty to discuss on the Web.
Quoting Brittany Hunter in a Foundation for Economic Education article: ‘What began with the ban of Alex Jones last summer has since escalated to include the expulsion of hundreds of additional pages, each political in nature. […] one thing is absolutely certain: we need more market competition in the realm of social media.’
What’s particularly worrying is that the Silicon Valley corporations aren’t simply private entities exercising their own rights, as is commonly argued in their defence. They represent a giant oligopoly that has a disproportionate amount of control over the means of communication on the Web, an oligopoly that’s engaged in a co-ordinated suppression of political opinion, an oligopoly with more influence on the democratic system and access to politicians than the Russian state could ever hope to gain.

An alternative is needed to democratise social media. For many people in the know, Minds.com seems to be that alternative. Here’s why:

  • Minds is production-quality, can be deployed as a finished application, and it’s open source.
  • Users don’t need to provide personal or identifying information when registering an account.
  • Minds was developed for content creators.
  • The developers are working on decentralisation solutions.
  • Minds.com supports crypto currency and monetisation.

The first point is an interesting one. In Ottman’s opinion, a solution released as proprietary software cannot be a viable alternative, because of transparency or somesuch. I think he might have conflated administrative integrity with software integrity – the fact that open source projects have been pressured into adopting a uniform ‘Code of Conduct’ demonstrates the problem with that reasoning. Personally I don’t think the open/proprietary question has much bearing on a platform’s viability as an alternative to Facebook, unless there’s a need to verify claims about certain features, such as whether true end-to-end encryption is being provided.
No, what’s more important is that Minds isn’t a half-baked proof-of-concept, but is a completed iteration comparable in quality and appearance to any mainstream social media site. This is the deciding factor that determines whether a solution would gain traction. Anyone could clone the software, deploy it on their own server and run their own version of Minds.com.

The option to register accounts anonymously/pseudonymously with Minds.com is probably the most important feature, because I strongly believe we should be setting boundaries between our online and offline lives, and between family, social circle, work colleagues and strangers. Such a thing isn’t really possible on a social network in which everyone’s posting under their real names. Also, I don’t think it’s possible, in our current political climate, to have any meaningful debate without pseudonymity, since it seems fashionable to ensure anyone expressing a dissenting opinion suffers disproportionate ‘social consequences’.

An undersold feature of Minds.com is the ease with which a citizen journalist, blogger, whistleblower, etc. can create and publish content. For the individual user, who wants to protect his/her identity, a Minds.com channel (with publicly-viewable blog posts) is cheaper and easier to maintain than a Web site, and it still provides the same benefits in terms of posting content and getting views.

Problems with the Design and Architecture

Now, for the things I’m not entirely sure about: My main criticism is that Minds.com is not (yet!) actually ‘engineered for freedom of speech, transparency and privacy’ in any tangible sense, as it’s still a centralised service hosted on AWS in the United States. Whether Minds.com defends its principles actually depends on the people running it – people who could sell Minds.com to a corporation, people who might face legal, financial and political pressures, and people who would eventually be hiring others.

When asked, by Neoxian, writing for Steemit, whether Minds could truly be considered decentralised, Ottman gave the following answer:
Good questions. It’s decentralized in that ultimately, yes, nodes will be able to optionally federate (this is still in dev). It is censorship resistant in that we allow all legal content, and in the future will integrate torrent options.

This is actually not an empty promise. The Minds developers have already been working on a decentralisation component called ‘Nomad’, which is based on the Beaker browser and the DAT protocol. I experimented with these briefly this weekend, and they really do work. If a P2P system does go mainstream, it’s likely to be this.

SOLIDifying C# Code



Class, Method, Variable and Property Naming

As anyone who’s tried to analyse the output of a decompiler would attest, the descriptive naming of objects within code makes a huge difference to its readability. This principle should be applied to classes, methods, variables and other objects, so the names are descriptive of their purpose and function. This will become important as we refactor our code.

Minimal-Responsibility Methods

This is conventionally referred to as the ‘Single Responsibility Principle’, and states that a method or function should do one thing only, and have a single responsibility. Structuring code this way should make testing each unit in isolation easier (when loose-coupling is applied), and it should be easier to extend/modify units without inadvertently affecting the general behaviour of the program. However, I’m calling it the ‘minimal-responsibility principle’ here, as I find it’s not always possible or productive to refactor methods/classes to multiple single-function units.

In an existing project, I’m looking through methods for distinct units of operation that could be extracted. The ‘Quick Actions and Refactorings…’ feature in Visual Studio can do this for us.
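As a sketch of what that extraction looks like (the class, method and property names here are hypothetical, not from the project in question):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Order
{
    public List<decimal> ItemPrices { get; } = new List<decimal>();
    public decimal Total { get; set; }
}

public class OrderProcessor
{
    // Originally one long method; each distinct unit of operation has
    // been extracted into its own descriptively-named method.
    public void ProcessOrder(Order order)
    {
        ValidateOrder(order);
        CalculateTotal(order);
    }

    private static void ValidateOrder(Order order)
    {
        if (order == null || order.ItemPrices.Count == 0)
            throw new ArgumentException("Order must contain at least one item.");
    }

    private static void CalculateTotal(Order order)
    {
        order.Total = order.ItemPrices.Sum();
    }
}
```

Visual Studio’s ‘Extract Method’ refactoring generates the private methods automatically from a selected block, leaving the naming to us.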

Open/Closed Principle – Maker of Things Visible and Invisible

This SOLID principle states that a unit should be closed to modification, but open for extension. The idea behind this principle is that developers should extend existing units of code, instead of modifying them.

A good reason for this is the assumption that a working iteration of a program is dependent on code that already exists, and therefore modifications come with a higher risk of introducing defects. Secondly, the practice of extending existing methods or classes helps us to avoid the duplication of code.

What does this mean in practice, though? Any class that could be re-used should be instantiated as a base class for something that extends it. Visual Studio already does this by default with commonly-implemented features, such as MVC controllers. Let’s look at a WebAPI controller that I’ve refactored:
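It looked roughly like this – a simplified sketch, with the action body and response type assumed rather than copied from the project:

```csharp
using System.Web.Http;

// GetLookupDataController derives from the framework's ApiController
// base class; routing, model binding and content negotiation are all
// inherited rather than re-implemented.
public class GetLookupDataController : ApiController
{
    // Hypothetical action - the real project's signature differs.
    public IHttpActionResult Get(string code)
    {
        return Ok("lookup data for " + code);
    }
}
```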

As we can see, GetLookupDataController is a class that’s derived from ApiController, and everything visible here is really an extension of that base class. Whenever we want to add a new WebAPI controller to a project, we declare the same ApiController as a base class instead of duplicating it under a different name.

I can provide an even simpler illustration: In my project I have a data object called ‘Item’, which has three properties:
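Something like the following – the three property names are placeholders, as the originals aren’t named in the text:

```csharp
public class Item
{
    // Hypothetical properties standing in for the project's actual three
    public int Id { get; set; }
    public string Name { get; set; }
    public bool IsValid { get; set; }
}
```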

What if I anticipated a feature request that involves a similar object with several more properties? In that case, I’d rename the ‘Item’ class as ‘BaseItem’:
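The renamed class is otherwise unchanged (again with placeholder property names):

```csharp
public class BaseItem
{
    // Same (hypothetical) properties as before; only the class name changed
    public int Id { get; set; }
    public string Name { get; set; }
    public bool IsValid { get; set; }
}
```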

I’d add another class, extending BaseItem, to hold the additional properties:
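For example (the derived class name and its properties are illustrative):

```csharp
using System;

// Extends BaseItem (the renamed class above) rather than modifying it:
// the base stays closed to modification but open for extension.
public class ExtendedItem : BaseItem
{
    public string Description { get; set; }
    public DateTime Created { get; set; }
}
```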

When executed, the software would construct a data structure containing all properties from the base and derived classes.

Liskov Substitution Principle

This principle is essentially an extension of the previous one. A derived class should support all the functionality provided by its base class, without modifying whatever’s being inherited. If that can’t be done, it’s an indication that the base class violates the minimal-responsibility principle. In other words, a program’s behaviour should remain unchanged if a reference to a base class were replaced with an instance of any of its derived classes.
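A minimal sketch of that substitution test, using hypothetical classes:

```csharp
public class BaseValidator
{
    public bool IsValid(string input) => !string.IsNullOrEmpty(input);
}

// The subtype adds capability but leaves the inherited behaviour
// untouched, so it can stand in for BaseValidator anywhere.
public class LengthValidator : BaseValidator
{
    public bool IsWithinLimit(string input) => input.Length <= 100;
}

public static class Client
{
    // Behaves identically whether passed a BaseValidator or a
    // LengthValidator - the substitution principle holds.
    public static bool Check(BaseValidator validator, string input) =>
        validator.IsValid(input);
}
```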

Interface Segregation

A client shouldn’t be dependent on things it doesn’t use. This dependency could be inadvertently created if we’re setting up an interface with multiple methods.
Imagine an interface called ‘IFileOperations’ containing three methods: Read(), Write() and Save(). Any class implementing that interface would need to implement all three methods, even if only one is needed, perhaps throwing a ‘not implemented’ exception for the unused ones.

One way to solve this would be to put each method within its own interface: IRead, IWrite and ISave. A class that genuinely needs all three can still implement all three interfaces, e.g.

class FileOperations : IRead, IWrite, ISave
{
    // All three methods here
}
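Spelled out as a sketch (method bodies are placeholders), the segregated interfaces let a client depend only on what it actually uses:

```csharp
public interface IRead  { string Read(); }
public interface IWrite { void Write(string text); }
public interface ISave  { void Save(); }

public class FileOperations : IRead, IWrite, ISave
{
    public string Read() => "file contents";   // placeholder bodies
    public void Write(string text) { }
    public void Save() { }
}

// A read-only client depends on IRead alone - it never sees Write/Save.
public class ReadOnlyClient
{
    private readonly IRead reader;
    public ReadOnlyClient(IRead reader) { this.reader = reader; }
    public string Fetch() => reader.Read();
}
```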

Dependency Injection

Here I’m using the constructor injection method of resolving the tight coupling between a Web API helper and a GetDbReader class. In this project, GetReader() is the method that executes SqlCommand() using parameters passed from the helper method. I started out with the following code in Helper.cs:
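The starting point looked roughly like this – a reconstruction, with the method signatures assumed rather than copied:

```csharp
// GetReader() executes a SqlCommand in the real project; stubbed here.
public class GetDbReader
{
    public string GetReader(string code) => "row data for " + code;
}

public class GetLookupDataHelper
{
    public string GetLookupData(string code)
    {
        // Tight coupling: the helper constructs the concrete class
        // itself, so it can't be tested without the real reader.
        var reader = new GetDbReader();
        return reader.GetReader(code);
    }
}
```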

As we can see, GetLookupDataHelper() is dependent on an instance of GetDbReader(), which contains a method that implements the database reader function.

It is possible to use a form of dependency injection here, so the helper class and GetDbReader() aren’t so tightly coupled. I added an interface called ‘IGetDbReader’ and declared the GetDbReader class as implementing it. With the interface in place, the helper works against it instead of the concrete class:
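After the change it looked something like this (again a sketch with assumed signatures):

```csharp
public interface IGetDbReader
{
    string GetReader(string code);
}

public class GetDbReader : IGetDbReader
{
    // Executes a SqlCommand against the database in the real project
    public string GetReader(string code) => "row data for " + code;
}

public class GetLookupDataHelper
{
    private readonly IGetDbReader getDbReader;

    // Constructor injection: any IGetDbReader implementation can be
    // supplied, including a fake for unit tests.
    public GetLookupDataHelper(IGetDbReader reader)
    {
        getDbReader = reader;
    }

    public string GetLookupData(string code) => getDbReader.GetReader(code);
}
```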

Getting the helper to use the interface was easy – a simple matter of changing one line so that ‘getDbReader’ is declared as the interface type rather than the concrete class.

Adding Web API to a Web Service Project




To get the Web Service application to route WebAPI requests, I changed Global.asax so it contained the following:

public class WebApiApplication : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // Standard Web API registration call: wires up the routes
        // defined in WebApiConfig.Register below
        GlobalConfiguration.Configure(WebApiConfig.Register);
    }
}

And WebApiConfig.cs contains the following:

public static void Register(HttpConfiguration config)
{
    config.Routes.MapHttpRoute(
        name: "DefaultApi",
        routeTemplate: "api/{controller}/{id}",
        defaults: new { id = RouteParameter.Optional }
    );

    config.Formatters.JsonFormatter.SerializerSettings.PreserveReferencesHandling =
        PreserveReferencesHandling.Objects;
}

Removing Duplicate Code

Because I’m in the habit of producing functional code first, ValuesController.cs and GetLookupData.asmx initially both contained the same method that would call SqlDataReader and other classes for constructing the response body. Having that duplication in code isn’t good practice – firstly, it contains more code than needed, and secondly, any modifications to either method would need to be duplicated.

I created a new class file called ‘Helper.cs’, and copied one of the methods into it. Next, I replaced the code in the API controller and ASMX file with entry points that call the method in Helper.cs. The entry point code looks something like:

LookupDataHelper GetLookupDataHelper = new LookupDataHelper();

public GetLookupDataResponse GetLookupDataService(string code, bool validItemsOnly)
{
    var result = GetLookupDataHelper.GetLookupDataHelper(code, validItemsOnly);
    return result;
}

Conflicting Opinions About the Role of Open Source Software in Healthcare



Writing for Medium.com, in the article ‘Open source is the only way for Medicine’, Dr. Marcus Baw argues:

As software and technology have become more and more part of medicine itself, neither distinguishable or separable from the practice of medicine, we must consider that these software artefacts now have to become part of the corpus of medical knowledge, and thus there is an ethical duty to share these technological artefacts openly, as Open Source software and Open Hardware, in order for the maximum benefit to humankind.

In principle I partly agree. If a team develops a system that improves the quality of healthcare, others could benefit globally by repurposing it. Superficially, there’s a moral argument for publishing code for such a system. But I think the case would be stronger if Dr. Baw hadn’t based it on a comparison between the sharing of software and the sharing of medical knowledge. If we accept, as our basis for argument, that both the medical profession and software development are distinct fields of expertise, software development would be analogous to performing surgery or treating a patient. The point here is that the production of properly engineered software is not equatable to knowledge sharing, since the former is often a bigger undertaking that requires the application of expertise and labour.
What is directly analogous to the sharing of medical knowledge is the sharing of knowledge that enhances the quality of software, through books, meetups, blogs, GitHub pages, tools, best practices, YouTube/Pluralsight videos, design patterns, etc. This has been the norm for decades already.

Another point worth mentioning is that making clinical software open source is all nice and altruistic, but it could go completely the other way. Over the years, we’ve seen how the ideal of collaborative progress can be co-opted to form a ‘sharing economy’, in which corporations exploit underpaid labour in ways that undermine the social contract, while creating a fake ‘startup culture’ that enables monopolies to assimilate anything and everything they deem profitable. The monopolies now exist. What’s to prevent them repurposing the best code, hiring the top 1% of developers, and making it harder for smaller vendors to compete? What’s to prevent a corporation co-opting a successful open source project and ensuring only their implementation gets approved?

The Testability of Proprietary Software

On the issue of testing and reviewing clinical software, Dr. Baw states:

But, with these closed-source proprietary systems, there’s no way to independently test their claims, there’s no scientific process, and the need for proper clinical peer review has been completely over-ridden by the wish of the company to make profit.

It’s not always appropriate to invoke ‘science’ when making an argument, as the scientific method is simply a tool for determining the typical behaviour of the observable world. For what it’s worth, one could observe an instance of proprietary software, make inferences about its behaviour, and form testable hypotheses. Dr. Baw could even do that for multiple proprietary applications and submit a paper on that for peer review. This would be an example of the scientific process applied to proprietary software.

As it happens, proprietary software is far from untestable, as the actual behaviour of an application can be compared with the expected behaviour for a range of anticipated use cases, and this is precisely what the NHS does for each iteration of clinical software it develops in-house. The Clinical Portal, My Health Online, WCCG, GPTR and Myrddin are thoroughly tested that way by professional test analysts. That method of testing is also applied to some third-party proprietary applications, such as LIMS, INPS and EMIS. It’s not the place of test analysts to weigh in on my source code, but it’s their job to criticise the applications from the users’ perspective and log application behaviour that deviates from what they expect.

When it comes to reviewing code, that’s important, but it doesn’t reveal much about how the software would actually behave, especially if one is reviewing scripts where the underlying components were coded in another language, or if software is so loosely coupled one needs to dig through five source files to determine the purpose of a component.
Subjective opinion also plays a huge role in fundamental decisions about how software is engineered and applied – there are even valid criticisms against what’s generally considered ‘best practice’. My opinions of someone else’s code, and whether that person should have followed a given design pattern, would be a matter of subjective opinion, regardless of my reasoning.

Open Source and Publicly-Funded Projects

Quite a large proportion of medical research is publicly funded. This is taxpayers’ money which is used for developing medicine. Where this medical research results in development of software, data sets, algorithms, or any other technological feature, it’s imperative that these be open sourced, so that the maximum taxpayer benefit is extracted from the investment.

Aside from having to be really careful about publishing data sets, I sort of agree. I believe in subsidiarity – that everyone can empower themselves by taking ownership of their communities – and open source projects are conducive to that. It’s more sustainable, I believe, for a community to utilise the skills it has, develop the systems it needs and use them as it sees fit. Taxpayers would certainly get more for their investment, and patient care might improve globally. However, for things to remain that way, the NHS would need to assume a similar role to the Free Software Foundation, funding the core developers and ensuring there are legal protections for in-house projects.

John Keats and State Surveillance



Dr. Richard Margraff Turley is due to give a talk on Literature and Mass Surveillance at the BCS Mid Wales event this week (Aberystwyth University is a four-hour drive for me, unfortunately). Titled ‘Who hath not seen thee … ?’, it’s an interesting discussion of the idea that state surveillance had some influence on the later works of John Keats.

The relevance of Keats’ writings to mass surveillance isn’t obvious. I think it’ll become clear to some readers that Keats, if he was indeed commenting on surveillance, was describing a situation very different from today’s, and that Dr. Turley doesn’t differentiate between the mass surveillance of today and the targeted surveillance of the past. The two are very different:

We might assume mass surveillance is a modern phenomenon, but “surveillance” is a Romantic word, first introduced to English readers in 1799. It acquired a chilling sub-entry in 1816 in Charles James’s Military Dictionary: the condition of “existing under the eye of the police”.

This definition, though very concise, is remarkably broad. What precisely does it mean to be ‘under the eye of the police’? It could refer to a state in which reasonable suspicions are investigated by detectives (targeted surveillance). It could refer to a state in which those ‘with nothing to hide’ are watched by a myriad of deputised officials and machinery (mass surveillance). Mass surveillance is something that violates our reasonable expectations of privacy in the most insidious ways.

It’s also important to remember that Keats penned his works in a politically volatile period, just a few decades after the revolutions in France, America and Haiti. Masses of people were awakening to the fact they could revolt and potentially overthrow governments in their struggle for universal rights, suffrage, a better quality of life, and even their very survival. Today the opposite is true: most of us have everything to lose and little to gain by overthrowing The Establishment. Also, today mass surveillance wouldn’t be viable in a Western society without Silicon Valley corporations and social media to provide the framework.

So, it wasn’t without reason that The Establishment would have employed spies to watch public events for indications of an imminent uprising, and resorted to heavy-handed tactics to prevent that happening. The Establishment felt it necessary to mobilise the police and the Army to protect the Bank of England, among other buildings, and charge radicals like Henry Hunt with treason. Could we really claim that as an example of mass surveillance, though? No, I think the issue here was that the use of state surveillance to monitor the political activities of citizens, instead of, say, people who already had political influence, was a new concept at the time.

What’s more telling than the content of the literature is the way Keats was guarded in his commentary of events, as if he suspected The Establishment knew about his more politically active acquaintances and were intercepting his letters. Today we refer to this as the ‘chilling effect’ – the reluctance to openly express dissenting opinions for fear of retribution. This is not an irrational fear when political discourse is divisive and uncivil.


Of Keats’ poem ‘Lamia’, Dr. Turley writes: ‘That poem opens with a queasy scene in which Hermes transforms Lamia from serpent to woman. The price is information: Lamia agrees to give up the location of a nymph’s “secret bed” to the priapic god.’

Again, it’s conjecture to say that Keats was making a veiled reference to The Establishment’s surveillance apparatus, but we could nevertheless read that section of the poem as an allegory for it. Keats seemed to recognise and allude to the fact that people are willing to betray secrets in return for something, for some kind of benefit, rather like we’re collectively prepared to trade personal information for our 15 minutes of fame on social media. The nymph’s ‘secret bed’ could be a metaphor for a place where dissidents conspire, but it could also be a warning that even intimate details about ourselves and others could be traded. And why does Hermes want the information? For his personal gain, obviously, not for selfless reasons.

And once that level of sharing becomes accepted behaviour, it can quickly become a habit of inadvertent disclosure, as Keats and Turley also noticed:
Keats is describing actual workers, real people whose slacking off he reports as unthinkingly as we might share our own peers’ political views or locations on social media. As casually as a Google car might capture a moonlighting worker up a ladder outside someone’s house.