VPN Reviews and Tips for 2018

I’ve come across a pretty nice primer by Comparitech on VPNs for Linux users.


Trying to Understand the Meltdown and Spectre Vulnerabilities


Going by what I could understand of the research papers for Spectre and Meltdown vulnerabilities, this really doesn’t seem like a catastrophic problem. The most likely worst-case scenario would be numerous JavaScript instances of the Spectre exploit, which would enable access to session data across Web browser tabs – the immediate countermeasures would be to a) install ad-blockers, or b) use Edge or have a secondary browser with JavaScript disabled when viewing/sending sensitive data.

The vulnerability associated with Meltdown, on the other hand, could be used to read data from kernel space. It may or may not be exploitable remotely, and I think it’s less likely to be encountered in the wild because there are far easier ways to compromise a system. It can be mitigated (though not entirely) by installing a new kernel with KAISER (with KASLR) enabled in the configuration. I think it’s the default in the latest kernel releases.

The main difference between the two attacks is that only Meltdown can be used to access kernel space, while Spectre attacks are used to arbitrarily access data elsewhere within the same process. What both attacks have in common is that they involve a processor feature for executing instructions out of sequence (speculative execution), plus the ability to determine the contents of a CPU cache through a side channel.

Much of this was new territory to me, since I wasn’t well-read on the intricacies of thread/instruction scheduling in modern processors. From what I can gather, the proof-of-concept exploits bypass memory isolation roughly as follows: erroneous data is inserted into transient instructions that are executed out of sequence, protected data ends up in a CPU cache when an unhandled exception occurs, and the contents of the cache are then read into a user-space array. Unfortunately that’s pretty much the extent of my understanding. There are numerous things happening concurrently at a low level behind the exploits, and it isn’t clear to me precisely what the mechanism is for transferring data from the cache to an array in user space, or whether flushing the cache does this automatically.
The Red Hat advisory is a good place to start, both research papers are very comprehensive if you want to explore this more deeply, and I’ve also found Rendition Infosec’s presentation useful.

Virtual and Physical Memory Recap
The following is a gross simplification of what happens on a running system, just in case it’s not common knowledge (it’s been a while since I’ve done any memory analysis): Memory is logically divided into user space and kernel space. User space is where processes (running programs) exist. The kernel space is where you’d find the kernel and loaded modules, and anything running in kernel space has access to everything on the system.
When programs and applications are launched, a data structure is initialised as a process – this contains an instance of the executable, a region for dynamic data, another region for static data, links to shared resources and libraries, and a few other things. To get an idea of what the data structure looks like on Windows, one could run VMMap or create a memory dump in Process Explorer. To the user a process appears to be one contiguous data structure, but its physical memory usage is actually fragmented. A process therefore exists as a virtualised region of memory, with virtual memory addresses being mapped in a page table to physical addresses.
The next important point is that a process shouldn’t be capable of accessing kernel space directly – if it could, any malicious or exploited process could manipulate the kernel and the entire system would be compromised. If a program attempts this, there’ll be an exception, and the kernel will terminate the process if the exception isn’t otherwise handled. This is why we’re usually more concerned with vulnerabilities in programs that already run with root/admin/system privileges.
Mechanics aside, the Spectre and Meltdown attacks exploit vulnerabilities in the processor hardware to bypass memory isolation.

Meltdown
The most concise definition I’ve found came from the ExtremeTech site: ‘The CPU does check to see if an invalid memory access occurs, but it performs the check after speculative execution, not before. Architecturally, these invalid branches never execute – they’re blocked – but it’s possible to read data from affected cache blocks even so.’
The salient point to Meltdown is that it potentially enables an attacker to indirectly access kernel space and escalate privileges. This vulnerability can be mitigated by updating the kernel to the latest version, or by recompiling it with KASLR/KAISER enabled in the configuration.

The attack starts with the creation of a process that initialises a large array, and an attempt to read a byte of data from an arbitrary address in kernel space. Of course, this would result in an unhandled exception, with the kernel terminating the process. The way around this is to use speculative execution of instructions to pre-empt the kernel exception handler, as this would result in the instructions being executed and the cache state changing before the exception happens.
The next stage of the attack probes the cache state to determine what’s stored there, and reads the data into the user-space array. This is a transaction between a process and the hardware layer. The component receiving data from the cache is separate from the transient instructions, and could be another thread or process – one proposed Meltdown attack involves a parent process. To get the cache to reveal its state, the researchers used a ‘flush and reload’ technique against a user-space array. Once the transient instructions access a valid address, that address is stored in the cache, and this can be confirmed by measuring the access time for that address – obviously the CPU fetches data from the cache much faster than from system memory. By measuring the access times for each candidate address, it is possible to determine the cache’s contents.

Spectre
The main difference between this and the previous attack is that Spectre uses the processor’s branch predictor, and it cannot be used to read anything outside the process’ virtual memory. As I’ve already mentioned, this is still potentially a problem if it enables some entity to read data across browser tabs.

In the setup stage, the attacker searches for a set of instructions within a process that could be used to provide a ‘side channel’ for getting information about the cache state. The branch predictor is also trained to speculatively execute a branch of another process – the CPU executes the same JMP instruction over multiple iterations, the branch predictor records this, and the branch is then executed speculatively. It’s then a matter of changing the operand of that JMP instruction to the memory address of the instructions providing the ‘side channel’. The researchers searched for ‘flush-and-probe’ instructions in shared DLL memory, and swapped an address in the speculative instructions to point at it. The sensitive data can then be recovered by executing the flush and reload instructions.

Reading Saint Augustine’s Confessions (Probably Part I)


[X-posted from the Commentarium]

Between the ages of twenty and thirty, Augustine was torn between four influences at odds with each other: the decadence and hedonism of the dying Roman empire, a curious middle-class sect of Manicheans who attempted to rationalise hedonism, the Academics and Neoplatonic philosophers who embraced skepticism, and finally the Christians.
Initially it seemed the Academics and Catholics were loosely aligned, sharing a generally common worldview, a symbiosis of ideas that would seem strange to many today outside the Church.
Though imperfectly, Neoplatonism shaped Christian theology by resolving some fundamental paradoxes, contradictions, questions, inconsistencies, etc. For example, how could one reasonably argue that the Blessed Sacrament is something it manifestly isn’t, how could one reconcile evil and suffering with a loving God, and why should one assume the existence of any deity to begin with? You’ve probably come across the ‘cosmological argument’ and the ‘argument from contingency’ already. It’s important to understand that these resolutions were ‘best guesses’, assertions made largely by necessity to explain how theology could be consistent with reality – they’re indeed based on meticulous reasoning, but since I’m not intellectually equipped to thoroughly critique philosophical arguments, I don’t take anything as true with absolute certainty. I will say, though, having skimmed through Thomas Aquinas’ writings, that some arguments seemed constrained to some extent by what the Church deemed acceptable and unacceptable conclusions.
But anyway, it wasn’t because of reason, philosophy or skepticism that Augustine became a Christian – these things only directed him from one system of belief to another as he searched for an understanding of the human condition.

So far I’ve managed to get halfway through St. Augustine’s Confessions, and have picked out some interesting ideas already.

The Problem of Evil
Though Augustine didn’t explain precisely why he began his initiation into the mysteries of the Catholic faith, his awakening and search for truth, when he was numbered among the Manicheans, came with his discovery of Cicero’s Hortensius (asserting the superiority of reason over rhetoric) and The Academics, skeptic philosophers who held that one should doubt everything:


'And as I had already read and stored up in memory many of the injunctions of the philosophers, I began to compare some of their doctrines with the tedious fables of the Manicheans; and it struck me that the probability was on the side of the philosophers[...]'

But Augustine, for roughly a decade, was preoccupied with the question of why evil exists, for which the Manicheans already had an answer, sort of. The Manicheans believed that dualism provided the answer to the perennial question of why evil exists, and ‘that it is not we who sin, but some other nature sinned in us‘. This poses two problems: first, it implies that God must be limited in power, cannot be good, or is non-existent – objectively this isn’t necessarily false. Second, the idea that some other agency (e.g. the Devil) is responsible for one’s actions is a denial of our capacity for reason, intellect and free will. Neither explanation was palatable to The Academics.
The Catholic understanding of evil, which might have been developed by Augustine himself, is in terms of the privative and the positive – of evil being the absence of good. To understand why, we must first take God, and the sum of all creation, as necessarily good, and therefore everything in existence must be good even if corrupted. Evil must, therefore, be a reduction or detraction from the nature of something, and could not have an existence of itself.
One could be forgiven for thinking this concept is too abstract to have any relevance to the world, but it’s not, as social teaching is predicated on it. If humans are made in God’s image, it follows that human life and human dignity are also sacred, and therefore so is the freedom of the individual. Evil would be manifested as the loss of life, the deprivation of freedom, the denial of human dignity, the absence of welfare and compassion.

But what about ‘original sin’? My own opinion (and this is a personal opinion!) is the Genesis stories began as an oral tradition when primitive humans became aware of a fundamental distinction between themselves and other species – the distinction being we have a rational soul, an intellect. Once humans became aware of ‘good’ and ‘evil’, we had the option of being guided by intellect instead of our predisposition for self-interest and gratification. I think we can establish the latter choice leads ultimately to unhappiness.
Certainly Plato and Epicurus understood there was a relationship between morality and living a good life, between hedonism and dissatisfaction. Having a morality consistent with first principles should lead one to living the best possible life.

Augustine’s Critique of Platonism
On what did Augustine base his premise that God, and by implication all existence, are good? Today the Catholic idea of God, formed largely (but not exclusively) by multiple philosophical ‘proofs’ by Thomas Aquinas, is that He is synonymous with existence, the source of reality, and there are multiple other attributes we could ascribe to Him – such as being simple, indivisible, not being contingent on anything, existing outside time and space, and unchanging.
To put it in modern language, God could be imagined as the primal substrate of reality from which quantum fields, quantum strings, or whatever happens to be fundamental, emerge – we could say that’s indivisible, keeps everything in existence and is omnipresent. It’s not something that could be measured, quantified or even visualised, or proven or disproven, since our means of direct observation are limited. This, and the fact God is supernatural, means the question of its existence is outside the scope of the scientific method of reasoning. We can only conjecture about its nature in the abstract.

What did Augustine think at the time? He was initially of the materialist view that anything that can’t be observed, measured or expressed in terms of physics doesn’t exist, which first seems reasonable:


'I then held that whatever had neither length nor breadth nor density nor solidity, and did not or could not receive such dimensions, was absolutely nothing.'

The obvious counter-argument is that there are examples of real things that aren’t measurable or solid: the soul, mind, consciousness and intellect exist, and the emergence of life itself is still a mystery. And for good measure, I argue that our attempts to develop artificial intelligence could never result in anything other than something fundamentally and observably deterministic (PRNGs aside), no matter how complex the programming, since there is a transcendent difference between life and the imitation of it.
Anyway, Augustine soon realised the only other thing that couldn’t have dimensions or solidity was something infinite and omnipresent:

'So also I thought about [God] as stretched out through infinite space, interpenetrating the whole mass of the world, reaching out beyond in all directions, to immensity without end; so that the earth should have thee, the heaven have thee, all things have thee, and all of them be limited in thee, while thou art placed nowhere at all.'

So, Augustine envisioned God as something infinite in size, penetrating everything within the Universe and outside of it, and not being limited to any location or dimension. Elsewhere in the Confessions he uses the analogy of a sponge in the middle of a vast sea, with the sea being God and the sponge the natural world and everything within it.
The problem with this, Augustine conjectured, was that God would then be more present in larger objects, and an elephant would contain more good than a man. This is probably how Augustine arrived at his conclusion about evil, since the logical answer would be that all existence is equally good, but humans have a greater potential to become corrupted.

And then Augustine went on to say of Plato’s works (carefully shortened for brevity):

'I found, not indeed in the same words, but to the selfsame effect, enforced by many and various reasons that “in the beginning was the Word [...]. All things were made by him; and without him was not anything made that was made.” That which was made by him is “life, and the life was the light of men. [...]. Furthermore, I read that the soul of man, though it “bears witness to the light,” yet itself “is not the light; [...] is that true light that lights every man who comes into the world.” And further, that “he was in the world, and the world was made by him, and the world knew him not."'

Augustine found much agreement between Plato’s ideas and what existed of Christian theology. In fact, he seems to have gone through Plato’s writings point-by-point, contrasting them. I think he did so – and I could be wrong on this – because Platonic philosophy might be considered a very tempting substitute for Catholicism, and he wanted to argue there were major deficiencies in the former.

Augustine’s Conversion
Augustine’s distrust of the Manicheans was further reinforced by his meeting with Faustus, a bishop within the sect. Though Faustus spoke with impressive eloquence and presented himself as a highly educated man, Augustine discovered him to be ignorant of everything but a small collection of poetry and fables.
It was the philosophers, those he called ‘The Academics’, most likely the skeptics of Plato’s Academy, who exposed Augustine to the weaknesses and lack of consistent reasoning within the sect’s doctrines, but this wasn’t quite enough for Augustine to leave the sect, as the story of Christ was little more than an irrelevant fable to him at that point.

It was shortly after arriving in Milan, and meeting with Bishop Ambrose to assess his skill in rhetoric, that Augustine abandoned the Manichean sect and began his initiation into the Church. Basically Ambrose was preaching the scriptures in an allegorical/metaphorical sense, which for some reason came as a surprise to our young protagonist:


'I had heard one or two parts of the Old Testament explained allegorically--whereas before this, when I had interpreted them literally, they had “killed” me spiritually. However, when many of these passages in those books were expounded to me thus, I came to blame my own despair for having believed that no reply could be given to those who hated and scoffed at the Law and the Prophets'

Here we have two interesting points. Firstly, that an educated man such as Augustine thought the scriptures should be read literally – this has puzzled me, given the Bible is overtly a compilation of accounts, fables, letters, (bad) poetry etc. and yes, it does contain mythology. Any given section could be interpreted literally, literarily and allegorically. Secondly, there was derision against preachers even in those days, perhaps from the same Academics who were in opposition to the Manicheans.

Ambrose’s speech was described as:

'I was delighted with the charm of his speech, which was more erudite, though less cheerful and soothing, than Faustus' style. As for subject matter, however, there could be no comparison, for the latter was wandering around in Manichean deceptions, while the former was teaching salvation most soundly.'

We have what appears to be a response to the skeptical philosophers:

'[...] if I took into account the multitude of things I had never seen, nor been present when they were enacted such as many of the events of secular history; and the numerous reports of places and cities which I had not seen; or such as my relations with many friends, or physicians, or with these men and those--that unless we should believe, we should do nothing at all in this life. Finally, I was impressed with what an unalterable assurance I believed which two people were my parents, though this was impossible for me to know otherwise than by hearsay.'

To me this seems like a pretty weak argument for going from extreme skepticism to believing anything, though I can’t imagine the Academics actually being that unsophisticated. I think the point Augustine was trying to make is that nobody, not even an empiricist, could function in the world without relying on faith, and there could be no progress in the sciences, or our understanding of the world, without basing each hypothesis on an edifice of a priori assumptions.
Come to think of it, you’ll hear it commonly said that faith is a belief with the absence of evidence, but it seems synonymous with inference based on reason. That is, you could take the most batshit crazy fundamentalist and still find logic and reason behind that person’s beliefs. It’s therefore a question of how one evaluates evidence.

I’ll leave it there for now, as this is a fairly long post and I’m only halfway through the Confessions.

Unit Testing (Codemanship Session Notes)


The course went comprehensively into the subject in just a few hours, and I could make only the briefest of notes between the problem-solving tasks (unit testing a Fibonacci sequence function, testing for ordered arrays, etc.). There’s a lot of material on the Codemanship site, though.

There are two reasons given for why we might want to unit test. Firstly we want to start the coding stage of development by creating unit tests that define our software’s behaviour – the tests should be prescriptive and not descriptive of the behaviour of the software we develop. This is the basic principle of Test-Driven Development (TDD). Secondly, the theory goes that the amount of change required to resolve defects increases sharply the longer they’re undetected. If a defect is found during a module’s creation, one only needs to rectify that module instead of multiple software components.
What wasn’t mentioned was the duplication of effort that comes with TDD, at least in the short term. You’re adding roughly double the amount of code (if not more) for each module. What I also learned from test automation is that test scripts require their own management, maintenance, version control and documentation. They become their own projects.

Anyway, the unit testing we’re concerned with here follows the xUnit family of frameworks, and anyone with some programming skill could, in principle, develop their own. There are three common testing frameworks for Visual Studio:

  • MSTest
  • NUnit
  • xUnit

Basics
With a little further research, I found that ‘xUnit’ is a general term for unit testing frameworks that follow a common model: there is CUnit, CppUnit, JUnit, etc. Unit testing with an xUnit framework involves creating methods that follow the ‘Arrange-Act-Assert’ pattern. This means the initial lines set up the test parameters, the lines following that execute the unit being tested, and the final lines contain the assertions comparing the expected results against the actual results. It should look something like this:
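Here’s a minimal sketch of what that might look like with NUnit – the Basket class and the values are my own invention, used purely to illustrate the pattern:

using NUnit.Framework;

[TestFixture]
public class BasketTests
{
    [Test]
    public void TotalIsSumOfItemPrices()
    {
        // Arrange: set up the (hypothetical) object under test and its inputs
        var basket = new Basket();
        basket.Add(1000.0);
        basket.Add(250.0);

        // Act: execute the unit being tested
        var total = basket.Total;

        // Assert: compare the actual result against the expected result
        Assert.AreEqual(1250.0, total);
    }
}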

There is a collection of assertions that could be used, such as:

  • Assert.AreEqual();
  • Assert.AreNotEqual();
  • Assert.IsTrue();
  • Assert.IsFalse();

Another characteristic is that a unit test should have no external dependencies, as the modules are being tested in isolation. We can use other test-related services to mimic the input.

Test methods have the [Test] or [TestMethod] attribute. These sit within a class of test methods, the class having the [TestFixture] attribute. A ‘test suite’ consists of all the classes that form the test run.

It should follow this structure:
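Roughly speaking – the Fibonacci class here is hypothetical, standing in for one of the course exercises – a test fixture looks like this:

using NUnit.Framework;

// A fixture groups related test methods; the suite is all the fixtures in the test project
[TestFixture]
public class FibonacciTests
{
    [Test]
    public void FirstTermIsOne()
    {
        Assert.AreEqual(1, Fibonacci.Term(1));
    }

    [Test]
    public void TenthTermIsFiftyFive()
    {
        Assert.AreEqual(55, Fibonacci.Term(10));
    }
}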

To get started with unit testing, add another project called ‘[ProjectName].Test‘ to the solution, and install the NUnit framework package into it using NuGet. Then create a new class and add the import statement ‘using NUnit.Framework‘. Without ReSharper installed, you’ll also need to install the NUnit Test Adapter for the solution.
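From the Package Manager Console that would be something along these lines (package names as they appear on NuGet):

Install-Package NUnit
Install-Package NUnit3TestAdapter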

Parameterised and Data-Driven Tests
This is a way of running the same test method multiple times with different values, removing the need to duplicate test methods. Simply add a [TestCase] attribute for each set of values.


[TestCase(1, 1)]
[TestCase(4, 2)]
[TestCase(9, 3)]
public void SquareRootOfPositiveNumber(int input, int expectedRoot)
{
    Assert.AreEqual(expectedRoot, Math.Sqrt(input));
}

Data-driven testing builds on the parameterisation concept: instead of listing the values by hand, the framework supplies them, which is why it’s sometimes likened to ‘property-based testing’.


([Range(1, 100, 0.1)] double input)
([Random(1, 1000, 100)] double input)
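As a rough sketch of how these attributes are used – the fixture, method names and assertions are mine, not from the course – the attribute goes straight into the test method’s parameter list and NUnit generates the inputs:

using System;
using NUnit.Framework;

[TestFixture]
public class SquareRootProperties
{
    // Runs once for every value from 1.0 to 100.0 in steps of 0.1
    [Test]
    public void SquareRootIsNeverNegative([Range(1.0, 100.0, 0.1)] double input)
    {
        Assert.That(Math.Sqrt(input), Is.GreaterThanOrEqualTo(0));
    }

    // Runs with 100 random values between 1.0 and 1000.0
    [Test]
    public void SquaringTheRootReturnsTheInput([Random(1.0, 1000.0, 100)] double input)
    {
        Assert.That(Math.Sqrt(input) * Math.Sqrt(input), Is.EqualTo(input).Within(0.0001));
    }
}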

Fluent Assertions
NUnit includes a fluent, constraint-based assertion API that makes for more verbose unit test scripts. Whether Fluent scripts are easier to understand is a matter of debate – they could be more readable to the layman, but less so for the developer.
For example:

Assert.That(basket.Total, Is.EqualTo(1250.0));
Assert.That(false, Is.False);

As Fluent assertions are more expressive and verbose, they should also be easier to debug.

Unit Test Patterns
Several design patterns for unit tests were covered briefly, most of them drawing on the SOLID principles. This part of the session covered inversion of control, interfaces and dependency injection. With these the following could be achieved:

  • Setup code re-use
  • Setup code encapsulation and availability through simple API
  • Test code extension and re-use
  • xUnit patterns
  • Contract tests
  • Test data builders

Test Doubles
A double provides an interface that, for the purpose of testing, replaces something the software would normally use. These include:

  • Stub: Provides data to the unit under test.
  • Mock: An object that causes a test to fail if a method isn’t invoked on the expected interface.
  • Dummy: An object required for the test to run, but is not important.
  • Fake: Emulates the full behaviour of the expected object. For example, an in-memory database.
  • Spy: A stub that records which methods were called.
Since interfaces allow for different implementations of a single action, they are ideal for impersonating components and data providers the unit under test might interact with.

A stub should be very simple, and include little or no logic. Stubs themselves should not be tested.
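As a rough illustration – the interface, the stub and the CurrencyConverter class are all invented for the example – a hand-rolled stub could look like this:

using NUnit.Framework;

// The unit under test depends on this interface rather than a concrete rate provider
public interface IExchangeRateProvider
{
    double GetRate(string currencyPair);
}

// Stub: returns canned data, contains no logic, and is never tested itself
public class StubRateProvider : IExchangeRateProvider
{
    public double GetRate(string currencyPair)
    {
        return 150.0;
    }
}

[TestFixture]
public class CurrencyConverterTests
{
    [Test]
    public void ConvertsYenToPoundsUsingTheSuppliedRate()
    {
        var converter = new CurrencyConverter(new StubRateProvider());

        Assert.AreEqual(2.0, converter.Convert(300.0));
    }
}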

HTML5 sessionStorage and localStorage


Web Storage is an HTML5 feature that enables JavaScript to store and retrieve data in the browser itself. There are two APIs for this: localStorage and sessionStorage. The former writes data to persistent storage, while the latter only stores data for the session. Both provide setItem() and getItem() methods for writing and reading the data.

The storage keys and values are both typed as strings.


sessionStorage.setItem('key', 'value');
alert(sessionStorage.getItem('key'));

localStorage.setItem('key', 'value');
alert(localStorage.getItem('key'));

If the browser displays an alert when running the above code, you’ll know it’s working: getItem() has fetched the value from Web Storage and passed it to the alert function.

If, on the other hand, the Developer Tools displays an ‘Uncaught DOM Exception’ error, the browser’s privacy configuration is preventing the JavaScript from writing to Web Storage.

It is possible to view objects in Web Storage using the Developer Tools’ Storage Inspector.

An Example Application
If you remember my recent post, it was a currency converter application with the exchange rate hard-coded in multiple places in the JavaScript. Obviously this isn’t good coding practice. Here we can use Web Storage to store this value, and use the localStorage.getItem() function to provide the exchange rate as a JavaScript variable.
The following is a simple demonstration of an arithmetic operation on two variables in Web Storage:


window.sessionStorage.setItem('testValue1', '20');
window.sessionStorage.setItem('testValue2', '25');

var firstValue = window.sessionStorage.getItem('testValue1');
var secondValue = window.sessionStorage.getItem('testValue2');
var total = +firstValue + +secondValue;

document.getElementById("result").innerHTML = total;

For the currency converter, we first need a text box for the user to enter the exchange rate value:
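The markup itself was a screenshot in the original post; a minimal equivalent would be something like this, with the ‘newRate’ id taken from the JavaScript below and the button wiring being my assumption:

<input type="text" id="newRate" placeholder="Enter the new exchange rate">
<button onclick="writeExRate()">Save rate</button>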

This element calls a function called ‘writeExRate()‘:


function writeExRate()
{
    var myRate = document.getElementById("newRate").value;

    window.localStorage.setItem('currentRate', myRate);
    alert(localStorage.getItem('currentRate'));
}

Near the top of my HTML source, I added a second function that fetches the currently stored value and displays it whenever the page loads:


$(document).ready(function()
{
    document.getElementById("result").innerHTML = localStorage.getItem('currentRate');
});

The main conversion function itself reads a value from an input field, and divides it by the current exchange rate value fetched from Web Storage:


function runconversion()
{
    var currentRate = localStorage.getItem('currentRate');
    var inputValue = document.getElementById("yenValue").value;

    var convertedToPound = (inputValue / currentRate);
    document.getElementById("answer").innerHTML = convertedToPound;
}

And here is how it looks in the browser:

Substituting the Hard-Coded Values in My Application
In the head element, I declared the following global variable:

var sliderMultiplier = parseInt(localStorage.getItem('currentRate'));

Here I’m using parseInt() because objects are always stored as strings, and currentRate must be typed as an integer before our conversion functions can use it. Also, the interface should obviously display the currently-stored value whenever the page loads, so the following lines are included:
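Those lines didn’t survive in this copy of the post, but they’d be much the same as the earlier demo – something along these lines, assuming the element showing the stored rate has the id ‘result’:

$(document).ready(function()
{
    document.getElementById("result").innerHTML = localStorage.getItem('currentRate');
});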

And also a form that enables the user to update the currentRate value in localStorage:
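Again the form was shown as an image; a minimal sketch (IDs and wiring assumed) that reuses writeExRate() would be:

<form onsubmit="writeExRate(); return false;">
    <input type="text" id="newRate" placeholder="New exchange rate">
    <input type="submit" value="Update rate">
</form>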

Now it’s a simple matter of replacing the hard-coded exchange rate with the sliderMultiplier variable.