A Bit Late to the Party

Even though I’ve been to God knows how many developer meetups and events where everyone seems to be using GitHub, I’ve only recently got round to setting up an account there. I don’t do much collaborative development outside my day job these days, and I’ve been working almost exclusively with Visual Studio and TFS. I wanted to put together a repository of hacks and useful code for junior developers who might be joining us, though.

Setting up a Repository and Mapping
Getting started was a simple matter of creating a repository through the GitHub site, installing a few client applications and mapping the repository to local directories. By far the easiest way is to use the official GitHub desktop client. I’m also using GitKraken on another machine, and Git-Cola on a Linux system.

Even if there are no source files present, the Git client will at least create a sub-directory for the repository, containing a hidden .git directory.

Now it’s possible to add source files to the directory and use the desktop client to push them to the server. New and modified files appear in the desktop client under the ‘Changes‘ tab; click ‘Commit to master‘ and then the Sync button. In Git-Cola, the changes must be staged, committed and then pushed.
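
For anyone working outside the GUI clients, the equivalent command-line workflow would be something along these lines (the repository URL is just a placeholder):

    # Clone the repository to a local directory
    git clone https://github.com/username/repository.git
    cd repository

    # Stage new and modified files, commit, then push to GitHub
    git add .
    git commit -m "Add initial code samples"
    git push origin master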

Visual Studio Code
Git version control is integrated into Visual Studio Code. The editor will detect the .git directory when opening the local folder containing a repository clone.

To check in the changes, the Git button in the sidebar opens the version control tab.

But it seems the changes still need to be pushed to GitHub using the client application.

Site
A GitHub account can also be used for hosting Web sites, which could be better than a wiki and a list of repos. The way to go about this is to create a new repository, but this time name it ‘[username].github.io‘. We want a README here also.

The Master branch should be set as the source in the GitHub Pages section of the repository settings.

Technically all that’s needed now is an index.html, plus whatever else would make up the site. Of course, the site files can be cloned for offline editing, modified and checked back in using the version control system.
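
As a minimal sketch, publishing a placeholder front page could be as simple as this (the page content is just an illustration):

    cd username.github.io
    echo '<html><body><h1>Hello, GitHub Pages</h1></body></html>' > index.html
    git add index.html
    git commit -m "Add placeholder front page"
    git push origin master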

The Vault 7 Release

A price of having an overgrown surveillance industry and routine violations of the US Constitution is the inevitability of classified material being exposed. There are too many former CIA hackers with sins to confess, but I wonder about the motives behind this leak. Certainly last week’s Vault 7 release is voluminous and shows that a comprehensive range of devices and software has been compromised, but surprisingly little has been exposed so far relating to violations of the Constitution. What kind of intelligence service doesn’t develop tools and methods for targeted surveillance?
However, there’s a lot that hasn’t been revealed. Wikileaks’ Twitter post claims it’s less than 1% of the material they might publish. The CIA has close ties to Silicon Valley, a data collection budget over four times the NSA’s and a comparable allocation for data analysis. The budget for Computer Network Operations (basically what the Wikileaks material exposes), though, is much smaller. According to the press release, the CIA’s Center for Cyber Intelligence had over 5,000 users. It’s therefore a safe bet the CIA has its own mass surveillance programmes, and anyone of interest to the CIA could have their devices hacked by the Center for Cyber Intelligence.

The consequences of weaponised malware aren’t only domestic. Weaponised malware set a precedent for state-sponsored malicious hacking, and undermined the moral standing and credibility of the US government. When there is a malicious attack on a given state’s network, there’s no telling who was responsible, now that we know the CIA was developing methods of implicating other states. It therefore becomes ludicrous to blame an adversary without very compelling evidence. For example, could we still be so certain the Russian government was responsible for the alleged hacking of the DNC servers, which I believe was unrelated to the published DNC emails?

Things of interest
* Much of the material under the Operational Support Branch section contains useful literature for developers and hackers. There you’ll find tutorials, product documentation, tips, coding practices, links, etc.

* I found a reference to something called ‘Palantir’ in the docs, apparently a testing tool. The name caused a bit of fuss when it appeared in the Snowden material, as it was assumed to refer to the company of that name that sells OSINT software.

* Some material deals with defeating ‘personal security products’ – anti-malware that the average home user would have installed. So far, they seem to have broken past AVG, F-Secure, Comodo and Bitdefender, usually through DLL injection/hijacking.

Dark Mail

What makes the Dark Internet Mail Environment (DIME) project look promising is that the Dark Mail Alliance consists of the same principled and highly skilled engineers who brought us PGP/OpenPGP and the Silent Circle products.

GPG should be an ideal solution for protecting emails on third-party servers – it’s highly scalable, since each person needs only a private key and access to a list of public keys, it’s extensively documented and it has APIs that enable developers to incorporate GPG into their applications. Breaking GPG is non-trivial, and it would be unrealistic for the authorities to expect its developers could maintain a backdoor in an open source project.
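
As a quick refresher, the basic workflow looks something like this on the command line (the key file and address are placeholders):

    # Import a correspondent's public key, then encrypt and sign a message to them
    gpg --gen-key
    gpg --import alice.pub
    gpg --encrypt --sign --recipient alice@example.com message.txt
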
The reality is that few people are using PGP/GPG/OpenPGP, since most of us want everything on demand with minimum effort. We all want to sign into our email accounts, or a service like Facebook, and have our messages instantly available. Hardly anyone wants to download and configure anything, which is why numerous innovative privacy solutions fail to gain traction. StartMail and DIME are two different approaches to solving this.

My understanding, after reading through the Architecture and Specifications document, is that DIME is fairly close to what an automated GPG solution would provide, alongside a form of onion routing. Overall it’s about ensuring everything sensitive, including email headers, is protected between the sender and recipient.

Development is based around four protocols:
* Dark Mail Transfer Protocol (DMTP)
* Dark Mail Access Protocol (DMAP)
* Signet Data Format
* Message Data Format (D/MIME)

DIME should be deployable without too much work. Speaking to Ars Technica, Levison stated: ‘You update your MTA, you deploy this record into the DNS system, and at the very least all your users get end-to-end encryption where the endpoint is the server… And presumably more and more over time, more of them upgrade their desktop software and you push that encryption down to the desktop.’

Message Structure
The message structure consists of an ‘envelope’ encapsulating the message body and email headers. A layered encryption scheme allows the mail relays to access only the data they need to forward the messages, and under D/MIME the sender and receiver identities are part of the protected content. That is, the sender and receiver addresses are treated as payload and encrypted along with the message body, which is something GPG doesn’t do.
This is important because the claim that authorities inspect only the ‘metadata’ of our Internet communications is misleading. Email addresses and message headers are actually contained within the payload of the TCP/IP traffic being inspected – the content must be read in order to extract the addresses.

Unlike conventional email, the message headers (including the To and From fields) really are treated as confidential; routing is determined instead by the Origin and Destination fields (AOR and ADR).

Key Management
Public keys, signatures and other things associated with identities are managed as ‘signets’ – all the cryptographic information required for end-to-end encryption between communicating parties. There are two types of signet: organisational signets are mapped to domains and contain keys for things like TLS, while user signets are mapped to individual email addresses and contain the public keys associated with them.

There are three modes: Trustful, Cautious and Paranoid. These specifically relate to whether the client or the servers handle key storage and encryption. Users have the option of a) ensuring that only their client devices hold a usable decryption key, or b) trusting Lavabit to manage cryptographic keys that are apparently extremely difficult for Lavabit admins themselves to access.

Dark Mail Transfer Protocol (DMTP)
DMTP handles the delivery of encrypted messages, while another protocol, D/MIME, protects the content against interception. Basically the email or message is encapsulated, with headers only revealing enough information for servers to relay it. In order for the encryption to be implemented, the sender must look up the recipient’s public key(s), which happens through a ‘Signet Resolver’ that finds a ‘signet’ containing the public key.
DMTP is also the transfer protocol for retrieving ‘signets’.

The transport method is actually quite similar to conventional email. It adds a field to the domain’s DNS records, pointing to the DIME relay (DX) server.
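
Hypothetically, checking a domain for its DIME management record might then look something like this – the record name and format here are my own illustration, not the spec’s actual scheme:

    # Illustrative only - consult the DIME specification for the real record format
    dig +short TXT _dime.example.com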

One requirement of DMTP is that all sessions must be secured by TLS, with a specific cipher suite (ECDHE-RSA-AES256-GCM-SHA384).

Dark Mail Access Protocol (DMAP)
The finer details of this haven’t been established yet, and this section of the document appears to be a collection of notes. DMAP is being designed to handle the key exchange and authentication side of the secure email implementation, making sure the client’s keyring stays in sync with the ‘signet’ admin servers.
DMAP is to incorporate something called ‘Zero-knowledge password proof’ (ZKPP), a way for two entities to confirm that each knows a secret value without exchanging that value.
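
To illustrate the general idea, here’s a toy Schnorr-style proof of knowledge over a small prime-order group – this is not DMAP’s actual mechanism, and the numbers are far too small for real use:

    using System;
    using System.Numerics;

    class ZkppToy
    {
        static void Main()
        {
            // Public parameters: prime p, prime subgroup order q (q divides p-1),
            // and a generator g of order q. Toy-sized for readability.
            BigInteger p = 2267, q = 103, g = 354;

            // Prover's secret x and the public value y = g^x mod p.
            BigInteger x = 47;
            BigInteger y = BigInteger.ModPow(g, x, p);

            // Commitment: the prover picks a random v and sends t = g^v mod p.
            BigInteger v = 64;
            BigInteger t = BigInteger.ModPow(g, v, p);

            // Challenge: the verifier replies with a random c.
            BigInteger c = 29;

            // Response: r = (v - c*x) mod q. The secret x itself is never sent.
            BigInteger r = ((v - c * x) % q + q) % q;

            // Verification: g^r * y^c mod p must equal the commitment t.
            BigInteger check = BigInteger.ModPow(g, r, p) * BigInteger.ModPow(y, c, p) % p;
            Console.WriteLine(check == t ? "proof accepted" : "proof rejected");
        }
    }

The verifier confirms the prover knows x without ever seeing it; a real protocol would derive the secret from the user’s password and use cryptographically sized, randomly generated values.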

Installation of DIME on Mail Servers
A development version of the DIME library can already be installed on a Linux system. Unfortunately I haven’t managed to get this compiled and installed on Linux Mint, even with the dependencies sorted. I’m still working on installing these on a CentOS server.

The dime executable enables the lookup of signet/address mappings, verification of signets, the changing of DX server hostnames and the reading of DIME management records. The sending and receiving servers are determined using the dmtp tool. Signets are generated and managed using the signet tool, and stored as .ssr files.

A proof-of-concept program (genmsg) is included for sending Dark Mail messages from the command line. To do this, the sender specifies the sender address, the recipient address and the path to the ed25519 signing key.
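
A hypothetical invocation might look like the following – the flag names are my guess from that description, so check genmsg’s own usage output for the real ones:

    # Illustrative only - the actual genmsg options may differ
    genmsg --from alice@example.com --to bob@example.com --key ~/keys/alice-ed25519.key
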
Once on the server, the messages are stored in a typical database schema on a Magma server.

Making JavaScript Charts work with Stored Procedures and Entity Framework

After adding one of the AmCharts examples to the CSHTML source, I had the graphics rendering code and a static array providing the chart data, and everything was displayed as it should be. This application needed to be adapted so it displayed metrics relating to the messages being processed by systems using Service Broker.
Again I used a stored procedure. This returns a table of dates against the number of messages processed on those dates, and takes startDate and endDate as inputs, both of which could be null. We’ll call this stored procedure ‘prMessagesProcessedByDay‘.
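
As a rough sketch of the shape of the procedure (the table and column names are stand-ins, not the real schema):

    -- Illustrative T-SQL; the real table and column names differ
    CREATE PROCEDURE prMessagesProcessedByDay
        @startDate DATETIME = NULL,
        @endDate   DATETIME = NULL
    AS
    BEGIN
        SELECT CAST(ProcessedOn AS DATE) AS ProcessedDate,
               COUNT(*) AS MessagesProcessed
        FROM MessageLog
        WHERE (@startDate IS NULL OR ProcessedOn >= @startDate)
          AND (@endDate IS NULL OR ProcessedOn <= @endDate)
        GROUP BY CAST(ProcessedOn AS DATE);
    END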

What I needed to achieve here was: a) Use Entity Framework to model the stored procedure and its response body, b) Add a controller to pass data between the Entity Framework model and the view layer, and c) Add some JavaScript to call the controller and render the data set as a chart.

Entity Framework Model
Right-click the Models folder in the Solution Explorer, and add a new item. Here we want to add an ADO.NET Entity Data Model, which will be Database First (EF Designer from database).
When generating the model, Entity Framework should have assigned the returned data a ‘Complex Type’, but for some reason that didn’t happen here. In the Model Browser, I right-clicked the ‘Complex Types‘ object for the model, and selected ‘Add New Complex Type…‘.

After selecting the stored procedure to import, the Context.cs code should look something like this:

[Screenshot: ef-model-from-metrics-stored-procedure]
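
Roughly, the designer generates a function import along these lines (an approximate reconstruction, not the exact generated code):

    // Approximate reconstruction of the EF-generated function import
    public virtual ObjectResult<prMessagesProcessedByDay_Result> prMessagesProcessedByDay(
        Nullable<System.DateTime> startDate, Nullable<System.DateTime> endDate)
    {
        var startDateParameter = startDate.HasValue
            ? new ObjectParameter("startDate", startDate)
            : new ObjectParameter("startDate", typeof(System.DateTime));

        var endDateParameter = endDate.HasValue
            ? new ObjectParameter("endDate", endDate)
            : new ObjectParameter("endDate", typeof(System.DateTime));

        return ((IObjectContextAdapter)this).ObjectContext
            .ExecuteFunction<prMessagesProcessedByDay_Result>(
                "prMessagesProcessedByDay", startDateParameter, endDateParameter);
    }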

Again, both input variables are nullable, as the entire table should be returned if no date range is specified, and I should have the option of adding a feature for doing this later. The returned variables are also nullable in case the table is empty:

[Screenshot: ef-model-from-metrics-returned-variables]

The next problem is that Entity Framework runs on the application server, whereas the JavaScript executes in the client’s browser, so the application would need to fetch the data through a controller that calls the stored procedure whenever the page loads.

Web API Controller
The way I’m doing this is through a Web API controller. Apparently it handles the JSON serialisation required for the JavaScript to populate an array. Autogenerating an empty Web API controller gives us the following:

[Screenshot: empty-web-api-controller]

When doing this, you might encounter error messages about variable types. The first thing to check is whether the stored procedure is assigning a primary key to the returned table – especially if the Web API template includes select, edit and delete actions. Here I needed to modify the stored procedure by prefixing one of the statements with ‘SELECT NEWID() AS Id‘.

The second problem that might be encountered is an HTTP 404 error when attempting to call the Web API while the application’s running. Removing all the NuGet packages and re-installing them fixed this for me.

Thirdly, the controller needed to perform some typecasting, as it didn’t work well with ‘complex types’: the GetprMessagesProcessedByDay_Result() object needed to be declared as a list.

Eventually I ended up with something like this:

[Screenshot: adapted-web-api-controller]
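
A minimal sketch of what such a controller might look like (the context and route names are stand-ins):

    using System.Collections.Generic;
    using System.Linq;
    using System.Web.Http;

    public class MessagesProcessedController : ApiController
    {
        // Hypothetical EF context name
        private MetricsEntities db = new MetricsEntities();

        // GET api/MessagesProcessed
        public List<prMessagesProcessedByDay_Result> GetprMessagesProcessedByDay()
        {
            // Passing nulls returns the whole table; ToList() materialises the
            // ObjectResult so the JSON serialiser can handle it
            return db.prMessagesProcessedByDay(null, null).ToList();
        }
    }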

View Layer and Testing the Controller with JavaScript
Now there’s hopefully an Entity Framework model that’s accessible to the Web API controller, the next requirement is some JavaScript to send requests to it. The code for that would look something like:

[Screenshot: javascript-ajax-json-call]
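
Roughly along these lines, assuming jQuery is available (the API route is whatever the controller is exposed as):

    // Minimal jQuery test call to the Web API endpoint (route name assumed)
    $.getJSON('/api/MessagesProcessed', function (data) {
        // Dump the response to the console to confirm the controller returns JSON
        console.log(data);
    });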

This JavaScript section was repurposed from another tutorial, just to ascertain there was a JSON response. After a few modifications it produced the following when the application was run:

[Screenshot: working-webapi-request]

Loading the Data into AmCharts
The chartData array included with the AmCharts example is in the same format as the JSON response, so switching the two should be straightforward.

[Screenshot: amcharts-example-json-load]

To adapt the AmCharts code, I imported dataloader.min.js and inserted the following JSON request code in place of the dataProvider section.

[Screenshot: amcharts-mychart-json-code]
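
In outline, the change amounts to something like this (the chart type and field names are illustrative):

    // AmCharts v3 with the dataloader plugin: fetch JSON instead of a static array
    var chart = AmCharts.makeChart('chartdiv', {
        type: 'serial',
        dataLoader: { url: '/api/MessagesProcessed' },  // replaces the dataProvider array
        categoryField: 'ProcessedDate',                 // must match the JSON field names
        graphs: [{ valueField: 'MessagesProcessed', type: 'column' }]
    });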

I then set the categoryField and valueField properties to the JSON response field names, as in the sketch above. Here’s the prototype:

[Screenshot: amcharts-working-json-chart]

Utopia

[Image: thomas-more-utopia]

Waterstones unfortunately didn’t have much in the way of Aristotle’s writings, so I settled on Thomas More’s Utopia. It’s a paperback of just 150 pages, including Professor Baker-Smith’s notes. However, it’s not an easy read: though the language is simple, I found myself actually disagreeing, in places, with humanist ideas that I knew were based on rationality.
Given More was educated and a very devoted Catholic, I expected his work would borrow rather heavily from Aristotle, Thomas Aquinas and Plato, and indeed all three are mentioned in places. As I anticipated, More challenged the professed idealism of Christian society by contrasting it with the reality of the European culture in which the Church was actually dominant.

A statement in the closing pages sums up the theme of the book:
[…] when I survey and assess all the different political systems flourishing today, nothing else presents itself – God help me – but a conspiracy of the rich, who look after their own interests under the name and title of the commonwealth.

And so More examined the nature of true commonwealth, and envisioned an economic, social, political and religious system with that as its primary goal. And as I read the book, it seemed that More was presenting another question: How much are we actually willing to sacrifice in order to achieve such a society, given that he appeared to argue that commonwealth and private property (and possibly by implication individuality) are mutually exclusive?

This is expanded on further in Appendix I, in which More made references to what seemed a common adage among Socrates, Cicero, Erasmus and Euripides: ‘Between friends all is common’. I read this as an assertion that private property is at the root of all conflict, that commonwealth is therefore necessary for lasting peace. But this seems unattainable. As multiple failed attempts to implement Marxism were to demonstrate, there would always be a minority who guard their wealth, power and status jealously. Appendix II might have been added to make the point that true commonwealth is only found in primitive societies.

The First Book
It’s quite possible the foundations of this work were decided upon during a real world conversation between Thomas More and his friend Peter Giles, while they were in the Netherlands.
Roughly in the style of Plato’s Atlantis, Book One opens with an account of how More met a traveller called Raphael Hythloday, who told him of some undiscovered nation of Utopia. Raphael, in turn, recounts a fictional debate between himself and Bishop John Morton. Raphael speaks as someone who cares little for status or wealth, and as someone with a pretty low opinion (mainly contempt) of the political class, landowners and aristocracy.
If every political system is a conspiracy of the rich, where better to start than with the European justice system? This fictional debate started with a criticism of the idea of answering the crime of theft with the death penalty. The general point was that crime (or certain crimes) is driven by poverty, poverty in turn is caused by the inequality of power between the labourers and landowners, and therefore the existing laws were ineffective and unjust. Not only that, More argued (through the character of Raphael, of course) that in answering theft with ever more severe punishments, this form of justice caused more damage than the original crime.
Raphael also pointed out that such a law set a precedent that undermined its moral position: ‘[…] where human law permits the death penalty, then what’s to prevent men from settling among themselves just how far rape, adultery and perjury ought to be tolerated?’

And that argument is still relevant today. Too often society fails to uphold the sanctity of life and basic freedoms as principles, and masses of protestors and campaign groups lobby today for legislation without examining why such laws should be considered progressive or detrimental in the long term.

Throughout the conversation, Utopia is sporadically used as an ideal against which the existing political and economic arrangements are contrasted. But it turns out this idyllic place isn’t so idyllic after all. Raphael goes on to describe Utopia in greater detail, and thereby Thomas More explains that the eradication of inequality and irrationality comes with a price…

The Second Book
In the second book we are presented with Utopia, a society in which all things are held in common – it is a true commonwealth. What isn’t there to like about this system of government?
Firstly, humans are irrational, and we’re all driven by the need for personal gratification. One of our tendencies is to acquire and store wealth, which is, of course, the beginning of inequality. Utopia was designed from the start to eliminate this.

Reading through this section, one gets the impression that the state of Utopia is so delicately balanced that it requires conformity and micro-management to keep itself in existence. Although there are relatively few laws, which aren’t interpreted in the legalistic way we’re familiar with in the real world, almost every aspect of the citizens’ lives is regulated. Even the clothing is uniform among most of the citizens, quotas determine which city a citizen may live in, and their activities are governed by the state schedule.
Despite all this planning and regulation, meticulously envisioned by More, it isn’t long before the beginnings of another class system, and inequality of privilege, become apparent – though to a barely noticeable degree compared to the inequality that exists in the real world. Deference is given to the leaders, some of them getting better food, and priests distinguish themselves by wearing more colourful clothes – and yes, the state religion is used as one method of controlling the citizenry. Utopia also retains an underclass of slaves, who are held in such contempt that they’re made to wear gold as a method of devaluing it. Notably, the slaves are made to do the jobs too gruesome for the citizenry of Utopia. To a lesser extent there is the distinction between the manual workers and those selected for promotion to the ranks of intellectuals from which the state officials are drawn.

I definitely recommend having a look at this book, in which you’ll find much more in the way of arguments against the establishment of the day, a few explanations of why politicians make seemingly bad decisions, and also the absurdity of extreme rationalism. All of it is still very relevant today.