A Very Frustrating But Also Very Rewarding Experience with AmCharts and Complex JSON Responses

Presenting data in amCharts and Chart.js from simple two-column tables was relatively straightforward. I had three Web APIs that each returned a two-column table that the charting scripts could easily read from. As I was finishing up the presentation, the application spec changed – all the data is now returned as a complex table by one stored procedure. What followed was a moderately frustrating couple of days, as I Bill Nyed the code multiple times trying to extract and group items from the JSON objects.

Given that the main reason for using a single stored procedure was to reduce the load on the Service Broker, a single Web API call in my code is better than three. It also makes sense to implement all the querying features in JavaScript, since the browser fetches everything when the page loads.
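
As a rough sketch, the single Web API call might look something like the following. The endpoint URL ‘/api/dashboard’ and the processResponse() function are placeholders of my own, not names from the actual project:

// Fetch everything once when the page loads; the URL below is a placeholder
window.addEventListener("load", function () {
    fetch("/api/dashboard")
        .then(function (response) { return response.json(); })
        .then(function (data) {
            processResponse(data);   // populate the arrays described later in this post
        });
});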

The code for my solution is published on GitHub.

Revisiting Arrays and Objects
My solution was to populate an array, or multiple arrays, with items from the JSON response, so it’s worth revisiting JavaScript arrays to see how closely they resemble the JSON.

An array could be static and predefined, e.g.
var users = ["michael", "john", "andy"];

Or it could start as an empty array that’s populated at runtime, for example by a script that reads items in from another source, such as:
var users = new Array();
users.push("michael");   // repeated for each item read at runtime

The other type of variable I’m working with here is an object with multiple properties, e.g.
var user = {userName:"michael", userID:"515", role:"Developer"};

You’ll notice this looks somewhat like a message object within our JSON response, and that’s precisely because the JSON response is an array of such objects. For example, the JSON response for the Dashboard is:

[{"Id":"0001","Date":"2017-05-05","MessageType":"Pathology","HealthBoard":"7A6","HealthBoardDescription":"BC1","MessagesProcessed":1},
{"Id":"0002","Date":"2017-05-05","MessageType":"Pathology","HealthBoard":"7A4","HealthBoardDescription":"BC2","MessagesProcessed":2}]

Getting Chart Data from a JSON Response Body
For the Messages by Type chart, I want a count of the number of instances of each MessageType name in the Service Broker queue. If these counts could be presented as a doughnut chart, the user could readily see which category of systems is generating the most traffic – typically it’s the pathology systems, so if cardiology systems are sending most of the traffic, we know something’s not right.

Anyway, what I did first was initialise an array called ‘everything’ and push all the JSON response objects to it. From that I extracted the MessageType values and pushed them into another array called ‘myMessageType[]’.
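
A minimal sketch of that step, assuming the parsed response array is available as ‘data’ (the name ‘data’ is my own, not from the original code):

var everything = [];
var myMessageType = [];

// 'data' is the parsed JSON response array returned by the Web API
for (var i = 0; i < data.length; i++) {
    everything.push(data[i]);                  // the whole message object
    myMessageType.push(data[i].MessageType);   // just the MessageType value
}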

This enabled me to use ‘myMessageType.length’ to loop over it and increment the counter variables for each instance of ‘Pathology’, ‘Radiology’, ‘Cardiology’ and ‘unknown’. More observant readers will notice I’m counting rows, not summing what’s actually contained in the MessagesProcessed column. Most rows in that column have a value of ‘1’, so I can get away with that for now and add further logic later.

(Update: It looks much better after the counters are placed in a single loop:)
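
Something along these lines, with one pass over the array instead of a separate pass per counter (a sketch rather than the original code, and the counter names are my own):

var pathologyCount = 0, radiologyCount = 0, cardiologyCount = 0, unknownCount = 0;

for (var i = 0; i < myMessageType.length; i++) {
    switch (myMessageType[i]) {
        case "Pathology":
            pathologyCount++;
            break;
        case "Radiology":
            radiologyCount++;
            break;
        case "Cardiology":
            cardiologyCount++;
            break;
        default:
            unknownCount++;
    }
}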

At this stage, I should have a set of counter variables that provide data for the chart. Since tracking down a fault later could become a problem-solving task in itself, now’s a good time to establish, using a debugging tool and SQL Server Management Studio, whether the counter variables are actually being incremented.

If everything’s good at this point, the counter variables can now be used as the amCharts dataProvider source:
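
For example, something like the following should render the doughnut chart in amCharts v3; the div id ‘chartdiv’ and the counter variable names are assumptions on my part:

var chart = AmCharts.makeChart("chartdiv", {
    "type": "pie",
    "innerRadius": "60%",        // a non-zero inner radius turns the pie into a doughnut
    "titleField": "messageType",
    "valueField": "count",
    "dataProvider": [
        { "messageType": "Pathology", "count": pathologyCount },
        { "messageType": "Radiology", "count": radiologyCount },
        { "messageType": "Cardiology", "count": cardiologyCount },
        { "messageType": "Unknown", "count": unknownCount }
    ]
});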

Triggering
On running the application, the charts still aren’t rendered even with the counter variables incrementing correctly. This is a timing issue, with the charts attempting to render before the arrays are populated and counted. The code needs to be modified so the sections of code are executed in the correct order.

The chart code can be encapsulated within a function. Here it’s called chartByTypes().

function chartByTypes() {
    // Charting code here
}

Then add a call to the above after a short delay, by which time the counter arrays/variables should have been populated:

// Insert call here to Chart 2
setTimeout(function () { chartByTypes(); }, 500);

And here was the result:

The NHS Ransomware Situation

The bad news is it’s extremely unlikely the data could be decrypted directly. Recovery will depend on backups; most employees don’t make personal backups of their work, and probably can’t because of security policy.
The good news is there’s a slim possibility GPs and hospital IT staff could recover their data (without paying the tossers who thought hospitals were a good target, of course). The encryption often works by making encrypted copies of the files before erasing the originals, which means the deleted data just might be recoverable using common drive imaging and data carving tools. It’s a long shot, but that’s what I’d be attempting in their position.

How did this happen? Actually, I heard from others a couple of weeks ago that there had been spear phishing attempts at another NHS trust, and assumed it was related to a malicious hacker group that obtained staff addresses after a third party was compromised. However, that affected trusts here in Wales, and the current ransomware thing isn’t affecting us (directly) yet.
Not everyone in a large organisation can differentiate between a legitimate hyperlink and a disguised one in an email. Someone (several people, in fact) will click the link or open the attachment. That’s not really a problem if the anti-malware system has a signature for it, but there’s still a good chance it doesn’t. Exploit mitigation features in modern operating systems also play a huge role in limiting what malware can do. You probably know all this already.

The thing is (and yes, this is a huge problem) the NHS does rely on outdated operating systems and software, for roughly the same reasons banks still use COBOL and industrial systems might still run Windows XP. You can have a legacy system that works, or an upgrade attempt that comes with serious risks. Remember the chaos in 2012, after a botched update crippled the mainframe systems of three major banks? So, to address the Home Secretary’s point, one doesn’t simply move a system like this onto Windows 10.
When you’re dealing with critical software that’s deployed nationally, and when lives depend on the integrity of the data, any minor change in the configuration must be thoroughly tested before it goes live. And there could be a stack of software from multiple vendors, and a range of hardware, dependent on that same configuration.
On top of that, I’ve also come across third-party clinical software that’s been around since the 90s and can’t easily be replaced, because it’s critical, very complex, has extremely specialised features and became the standard across NHS trusts – and that software, in turn, depends on older operating systems. Arguments like these might now be outweighed by what has just happened, but still… Scary, isn’t it?

According to The Guardian, Professor Woodward stated the exploit targets an SMB vulnerability that enabled the malware to spread, and that the vulnerability existed in Windows XP, for which Microsoft didn’t release a patch. Metasploit has included exploits for older versions of SMB since at least 2013, and SMB vulnerabilities were showing up in Nessus scans against Server 2012 back then.
Of academic interest is that the exploit used here was developed (or at least hoarded) by the NSA, and was among those published by the ShadowBrokers – several years ago I warned that something like this was inevitable if governments started developing ‘cyber weapons’.

About Nick Cohen’s Book and Stephen Fry’s Predicament

Just as I was composing this post about Nick Cohen’s book (‘You Can’t Read This Book’), which addresses the psychology of religiously-motivated censorship, I read about Stephen Fry reportedly being investigated by Irish police under blasphemy laws. Since the existence of such a law, in 2017(!), would be as retarded as Fry’s understanding of theology, I was initially a bit skeptical. Unfortunately it’s true. According to Independent.ie, the complainant, one member of the public, believed that Fry’s remarks were criminal under the Defamation Act 2009. The Act has an entire section (36) on blasphemy, and it’s extremely subjective in its wording. Hard to believe, isn’t it, that such a backward piece of legislation exists in Ireland and in the United Kingdom?

Onto Nick Cohen’s book: there are three sections, dealing with religion, money and the state, plus a fourth section suggesting solutions that are more abstract than practical. Here I’ll cover the first and add some of my own thoughts, not because of the religious angle per se, but because it’s where we find the most lucid descriptions of how the supposition of our collective liberalism and tolerance is sometimes pretty difficult to justify.

It seems fitting to quote the extract from the Virginia Statute for Religious Freedom:

‘Be it enacted by the General Assembly that no man shall be compelled to frequent or support any religious worship, place, or ministry whatsoever, nor shall be enforced, restrained, molested or burthened in his body or goods, nor shall otherwise suffer on account of his religious opinions or belief, but that all men shall be free to profess, and by argument to maintain, their opinions in matters of Religion, and that the same shall in no wise diminish, enlarge or affect their civil capacities’.

Here Jefferson demanded no less than the right of anyone to express their religion in the public sphere and the right of anyone to criticise a religion. It does not imply that expressions of religion should be banned from the public arena, or that one should keep his/her religious beliefs private – legislating that would be state censorship, essentially, for what is religion but a system of ideas?
Jefferson was essentially trusting in individuals’ ability to reason for themselves, to defend their opinions and beliefs through argument, and to follow their consciences. Christianity is no less valid a basis for morality than whatever the secular world ultimately bases its ideals on, if most of us believe in the principles of fundamental rights, human dignity and the sanctity of life. We have the intellect to resolve the more challenging questions of applying these principles in the real world.

This freedom is important because human rights violations, oppression and injustice do indeed happen; they should be exposed and openly discussed. Sometimes they aren’t: overall, Cohen’s book is about how our desire to discuss the issues openly is often outweighed by the fear of retribution, the fear of being sued, the fear of how it would impact our careers, the fear of something consequential. He made the case for this far better than I ever could.
Cohen argued that mainstream ‘liberals’, perhaps for fear of causing outrage among religious zealots, cannot be objective and consistent in criticising oppressive ideology, and he provided real-world examples of established liberals turning on those who criticise the oppressors – the Salman Rushdie affair being just one case in point. This is perhaps why we see outrage only against trivial instances of ‘oppression’ within our Western culture, instead of solidarity with victims of real oppression in other nations where Islam is dominant. And this is only one facet of the underlying problem – ultimately the same kind of fear prevented employees of global banks from warning us of the impending economic crash of 2008, and forces the press to weigh the risk of being sued when holding those with financial power to account.

How to Disable the Vivaldi Browser Keyring Request

A problem that had been bugging me for a while was the GNOME Keyring pop-up that appeared in the Vivaldi browser on every other page I loaded.

In the Vivaldi settings, under the Privacy tab, uncheck the Save Webpage Passwords option.

Vivaldi can be launched with the keyring disabled using the following command:
$ vivaldi --password-store=basic

But we don’t want to do that each time, so we could add the option to the menu.

Right-click the desktop’s main menu to edit it (‘Edit Menus‘). Here we can modify the start options for the desktop applications. Find the entry for Vivaldi, and right-click to see the option for ‘Properties‘.

In the Launcher Properties we can append ‘--password-store=basic’ to the command, so it should read something like:
/usr/bin/vivaldi-stable %U --password-store=basic

Close the windows, restart Vivaldi and the pop-up should no longer appear.

Won’t Commit

Sometimes the command line is a better and more reliable way of pushing changes to a Git repository. The sync feature on the Windows client does something the Linux GUI client wasn’t doing, and the latter was refusing to update the remote repository.

First, navigate the command line to the project’s local GitHub subdirectory. The next thing is to set a few global variables, if the client hasn’t already been set up. The important ones are the user name, email address and the text editor; otherwise you’d probably end up stuck in vim whenever the ‘-m’ option isn’t used to add a commit message:
$ git config --global user.name "Michael"
$ git config --global user.email michael@example.com
$ git config --global core.editor nano

It’s quite possible my local working directory was in a ‘detached HEAD’ state, which basically means I might have been modifying an older version of the project instead of the current one – ‘git checkout [commit identifier]’ allows this, and ‘git checkout master’ will point the local directory back to the repository’s master branch.
The ‘pull’ and ‘push’ commands should ensure the local and remote repositories are in sync again:
$ git checkout [branch]
$ git pull origin [branch]
$ git push origin [branch]

The output should show everything being up to date at this point. The pull seems to retrieve all the files and history, which might be bandwidth-intensive depending on the size of the project.

Submitting the Changes
To see the current status, i.e. which files have changed locally since the last commit:
$ git status

In this case there are two updated files: index.html and main.css. Another two .ttf files were also deleted from the local directory. To commit the changes, the files must be ‘staged’; the concept is the same as including and excluding files in Visual Studio’s Team Explorer before committing.
$ git add [file1] [file2] [file3]

Running ‘git status‘ again lists the staged changes:

Now the changes are ready to commit:
$ git commit -m "Updated home page and CSS"

The command without the ‘-m‘ option will open a text editor for entering the commit message.

Finally, the changes need to be ‘pushed’ to the remote repository/server.
$ git push origin master

Note the 7-character hexadecimal strings in the output. These are the first part of each commit’s SHA-1 hash, and they serve as the commits’ identifiers in the history log.

Restore / Recover
The change log can be viewed with the ‘git reflog’ command, which lists both the abbreviated commit identifiers and the HEAD@{n} references. Either can be used for selecting entries:
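$ git reflog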

To view the details of a specific entry, use the ‘git show’ command. ‘git show [identifier]’ will print the full SHA hash, the date/time, the author, and the files and sections of code that were changed.
$ git show [change identifier]

To revert to an older version of a file, we can check out that file from an earlier commit:
$ git checkout [commit] [filename]

Omitting the filename will check out the whole commit instead, leaving the working directory in the detached HEAD state mentioned earlier. Because the checked-out file might differ from the current version in the master branch, it appears as another change to be committed.