The Intercept on Disk Encryption

Micah Lee’s arguments for BitLocker on The Intercept were well made, and I agree with most of them; the important point is to choose wisely when it comes to disk encryption. Whether people should use BitLocker depends on why they need volume/disk encryption. If you’re one of those people ‘with nothing to hide’, and are primarily concerned about the loss or common theft of a laptop, BitLocker provides a very effective security measure. But Lee’s article was posted on The Intercept, so I’m assuming it’s about encryption in the context of protecting information from The Powers That Be, in which case it should at least have mentioned how BitLocker and the ‘lawful access’ issue are intertwined.

BitLocker (‘Device Encryption’ in the standard Windows 8 edition) has some weird key management going on. With the standard Windows installation the user must be signed into a Microsoft online account or a domain to activate BitLocker, so I’m assuming there’s no option but to upload the recovery key.

WTF?! No way!

Maybe it was designed that way to make encryption ‘user friendly’ for the average consumer, but I don’t understand why a symmetric decryption key must be stored anywhere while the local machine’s switched off, and having more than one copy of the key only makes it more vulnerable. Key recovery attacks are such a common way of defeating encryption that these strike me as odd design choices. Many of us also use encryption, in-house servers and local storage precisely because we don’t trust ‘the cloud’ for a variety of reasons – there are service providers who aren’t above lying about their security practices or abusing positions of trust, so passwords/keys are the very last thing I’d want to upload.

Microsoft reportedly complies only with ‘lawful’ requests, so BitLocker potentially comes with a key escrow system. The problem is that the word ‘lawful’ has a very broad meaning in the United States – anyone could be labelled an ‘extremist’, and the FISA court (without public oversight) appears to ‘rubber stamp’ requests. Maybe it doesn’t happen often to actual non-extremists and law-abiding citizens. Maybe it does. The point is that strong encryption should provide a safeguard, at the very least against warrantless access.

I just find it odd that Microsoft’s device encryption isn’t supported offline for the average user. I also find it interesting that BitLocker wasn’t mentioned during that fuss over device encryption between Google, Apple and the FBI last year.

The Alternatives
Anyone using Windows has to trust Microsoft to some extent where encryption is concerned, and obviously the system running the encryption software. If you’re not using BitLocker, the chances are the alternative uses Microsoft code libraries and DLLs that could theoretically have a backdoor of sorts, or be replaced with something that contains malicious code. I doubt Windows is backdoored by default, or the NSA wouldn’t have bothered with its ‘tailored access’ exploits, or the FOXACID/QUANTUM thingy. You could, I suppose, go down the road of not trusting Microsoft at all, in which case you might want to consider a setup like the hardened UNIX security model I blogged about last year, and use that in conjunction with disk encryption. I think that’s about as secure as you can realistically get.

Micah Lee discussed solutions that encrypt an entire primary disk, but we don’t need to be limited to that. Why not encrypt a partition or an attached storage device instead, when the only real advantage lost is protection against local tampering with the OS? One possibility is to boot into an operating system that’s fairly clean and secure, and mount an encrypted drive from there. Most tablets and laptops have an SD or microSD slot to accommodate a secondary drive, which could be fully encrypted with something like DiskCryptor. The encrypted storage could also be something like an iStorage or IronKey device.

The best-known alternative is TrueCrypt, which has been audited, doesn’t have any obvious backdoors and has proven resilient so far. I’m still wary of using TrueCrypt for the same reason Lee gave: the project has been discontinued, and software ages fast when nobody’s maintaining it and dependency issues start surfacing. That might rule out TrueCrypt as a long-term solution, though a secondary storage device does give us the option of using legacy filesystems. Alternatively, we can use a fork of TrueCrypt, such as VeraCrypt. Logically this is a much safer option than BitLocker, since no encryption key is uploaded or stored unless you’re creating an optional(!) key file. VeraCrypt and DiskCryptor aren’t 100% proven, but they seem the best alternatives available.

Making Packets

I was hoping to create a packet crafting application, for no particular reason, but had to leave it unfinished for a while – you’ll see why. If nothing else, the project should be educational for anyone learning about networking layers, protocols and packet crafting tools.

In networking theory TCP/IP packets are usually discussed as a logical structure, with clearly defined header and payload sections, and we visualise those blocks travelling across the Internet (or a local network) and through the OSI layers. The OSI model, I’d argue, is descriptive rather than prescriptive: think of it as an assembly line for packets, starting at the application layer and ending at the hardware layer, with each layer adding something specific to the packet.

Sockets
One of the first things we come across in network programming is the ‘socket’. Basically a socket is a data structure with a file handle, created to ensure data is piped to the network interface in the correct format. A network-enabled application writes an IP address, port number, TCP state and payload as variables; those variables are then strung together into a sequence of bytes, becoming the TCP/UDP packet. The operating system might append a source IP address, and the packet is then passed to the network interface. A Python script I created last year shows this at work nicely.

[Screenshot: last year's Python socket script]

The first line of interest creates an Internet stream (usually TCP) socket:
# requires: from socket import socket, AF_INET, SOCK_STREAM
s = socket(AF_INET, SOCK_STREAM)

The next lines dump the variables serverHost, serverPort and raw_input to the socket:
s.connect((serverHost, serverPort))
s.send(raw_input("MESSAGE: "))

Anyone running a packet capture would see the bytecode for these variables leaving the network interface in that order. Also important to note is the ‘s.recv(1024)’ line in the script, which sets a 1024-byte buffer for incoming data – anything beyond 1024 bytes simply isn’t read by that call in Python, but in a lower-level language an unchecked fixed-size receive buffer like this is exactly where a buffer overflow could occur.

To demonstrate this further, I’ve added a feature to my little C# packet crafting application that lets users compare the bytecode being sent with the captured packet in Wireshark.

[Screenshot: hex comparison feature]
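
The comparison itself boils down to rendering the outgoing byte array as hex, in the same 16-bytes-per-row layout Wireshark uses. A minimal sketch of the idea – not the actual code from my application – might be:

using System;
using System.Linq;
using System.Text;

static class HexDump
{
    // Render a byte array as space-separated hex, 16 bytes per row,
    // so it can be compared side-by-side with Wireshark's hex pane.
    public static string ToHex(byte[] data)
    {
        var builder = new StringBuilder();
        for (int offset = 0; offset < data.Length; offset += 16)
        {
            var row = data.Skip(offset).Take(16)
                          .Select(b => b.ToString("x2"));
            builder.AppendLine(string.Join(" ", row));
        }
        return builder.ToString();
    }
}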

The Packet
The simplest form of packet is the UDP datagram. Since there’s no session control or handshake, there are just three variables: destination address, port number and payload. The UDP header has fewer fields and is smaller, thereby reducing the overhead of processing each packet. This is why it’s useful in cases where a load of packets are going one way, and where a fair proportion of them are effectively redundant – video streaming, for example. A person doesn’t need to receive every UDP packet being streamed in order to get decent quality video or audio.

[Screenshot: Wireshark capture of a UDP datagram]
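
For comparison, here’s roughly what sending a datagram like the one captured above looks like in C# (the language of my packet crafting application). The address and port below are placeholders rather than anything from the project:

using System.Net.Sockets;
using System.Text;

class UdpSendExample
{
    static void Main()
    {
        // Destination address, port and payload are all the OS needs
        // to construct the UDP datagram and the IP header around it.
        byte[] payload = Encoding.ASCII.GetBytes("MESSAGE");
        using (var client = new UdpClient())
        {
            client.Send(payload, payload.Length, "192.168.1.20", 9999);
        }
    }
}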

TCP socket programming is slightly more complex, as the program must establish a session with the server prior to sending a payload. This is handled by the operating system at the session/transport layer.

[Screenshot: Wireshark capture of a TCP session]
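
The C# equivalent using TcpClient looks much the same from the application’s point of view; the three-way handshake visible in the capture above happens when the connection is opened, courtesy of the OS. Again, the address and port are placeholders:

using System.Net.Sockets;
using System.Text;

class TcpSendExample
{
    static void Main()
    {
        byte[] payload = Encoding.ASCII.GetBytes("MESSAGE");
        // Constructing the TcpClient triggers the SYN / SYN-ACK / ACK handshake
        // seen in Wireshark; only after that is the payload written to the stream.
        using (var client = new TcpClient("192.168.1.20", 9999))
        using (NetworkStream stream = client.GetStream())
        {
            stream.Write(payload, 0, payload.Length);
        }
    }
}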

The Hardware Layer
So far the code has worked for creating TCP/IP packets by writing to buffers. Data from the buffers is sent in sequence to the network interface, which prefixes the TCP/IP packet with an Ethernet header, turning it into an ‘Ethernet frame’. Because Ethernet works mainly at the hardware layer, we can’t manipulate this directly using C#. There is a NetworkInterface class, but it’s only really useful for reading the status of a network interface.
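
To show what the built-in class does offer – status rather than frame manipulation – a small sketch like this lists each adapter and whether it’s up:

using System;
using System.Net.NetworkInformation;

class InterfaceStatus
{
    static void Main()
    {
        // NetworkInterface exposes names, status and statistics,
        // but no way to write raw Ethernet frames.
        foreach (NetworkInterface nic in NetworkInterface.GetAllNetworkInterfaces())
        {
            Console.WriteLine("{0}: {1}", nic.Name, nic.OperationalStatus);
        }
    }
}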

So what’s needed is a driver that provides an API to the hardware layer, and something like this is provided by Pcap.Net. A few alternatives are available, and I might create my own drivers later in C/C++, if there’s a demand for it.
To use the Pcap.Net libraries, download the binaries from the project page. The download includes a collection of DLL files that are referenced in Visual Studio (make sure the Copy Local property is set to ‘True’ for each). From this we require PcapDotNet.Base, PcapDotNet.Core, PcapDotNet.Packets and PcapDotNet.Ethernet. Using both System.Net and the PcapDotNet libraries does cause ambiguity problems, so the full namespaces must be spelled out at certain points in the code.
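
Based on Pcap.Net’s developer tutorial, assembling and sending a complete Ethernet frame looks roughly like the sketch below. The device choice, MAC addresses and IP addresses are placeholders, and this isn’t code from my own application:

using System;
using System.Collections.Generic;
using System.Text;
using PcapDotNet.Core;
using PcapDotNet.Packets;
using PcapDotNet.Packets.Ethernet;
using PcapDotNet.Packets.IpV4;
using PcapDotNet.Packets.Transport;

class PcapNetSendExample
{
    static void Main()
    {
        // Pick the first capture device on the machine (placeholder choice).
        IList<LivePacketDevice> devices = LivePacketDevice.AllLocalMachine;
        using (PacketCommunicator communicator =
               devices[0].Open(65536, PacketDeviceOpenAttributes.Promiscuous, 1000))
        {
            // Each layer of the frame is declared separately, then assembled.
            var ethernet = new EthernetLayer
            {
                Source = new MacAddress("01:01:01:01:01:01"),         // placeholder MACs
                Destination = new MacAddress("02:02:02:02:02:02"),
            };
            var ip = new IpV4Layer
            {
                Source = new IpV4Address("192.168.1.10"),             // placeholder addresses
                CurrentDestination = new IpV4Address("192.168.1.20"),
                Ttl = 128,
            };
            var udp = new UdpLayer
            {
                SourcePort = 4050,
                DestinationPort = 9999,
            };
            var payload = new PayloadLayer
            {
                Data = new Datagram(Encoding.ASCII.GetBytes("MESSAGE")),
            };

            Packet packet = PacketBuilder.Build(DateTime.Now, ethernet, ip, udp, payload);
            communicator.SendPacket(packet);
        }
    }
}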

I ran into problems at this point. As far as I can tell, my program constructs the Ethernet headers but doesn’t write them to the outgoing TCP/IP packets, because the native Client.Send() already does this. Since I had already coded most of the program before including the Pcap.Net library, it needs a rewrite to use Pcap.Net throughout.

The iStorage datAshur

Last year a small team of us at Security-Forensics had a good look at the datAshur USB drive, and I was almost tempted to order the datAshur Personal at £30 for 8GB. While both appear to be the same product with a different casing, there are technical differences potential buyers need to be aware of. To summarise, I ended up ordering the original datAshur because its level of security is pretty damn impressive – the datAshur Personal maybe less so.

[Photo: datAshur, front]

[Photo: datAshur, back]

To recap, the datAshur we looked at was basically an encrypted storage device that works differently to most other encrypted USB drives on the market: it uses an onboard Hardware Security Module (HSM, or ‘crypto-processor’), whereas other devices typically rely on a software application. It’s quite compact and feels expensive. I like it, but is it actually secure enough?

Apparently the datAshur has ‘military grade encryption’. While that’s really a marketing term that doesn’t mean anything, the hardware-based AES-256 would be practically unbreakable depending on how the encryption key is managed. This is where the FIPS standards become important.

The main thing I was concerned about is that the datAshur Personal is described as simply having a ‘FIPS PUB 197 Validated Encryption Algorithm’, while the older device is ‘FIPS 140-2 Level 3 Certified’. Essentially the cheaper datAshur Personal is missing a layer of security that was in the original product, being merely ‘tamper evident’ rather than ‘tamper resistant’.

It’s worth exploring why that’s important, because consumer reviews don’t even touch on this, and it took some digging to find the technical details of the device. Both products work by using a Hardware Security Module (HSM) to mediate the I/O and encrypt/decrypt whatever’s stored on the main memory chip. The HSM itself is unlocked with the user’s PIN, so the datAshur can’t be mounted directly.

But that still wouldn’t be secure if the casing could be opened, the encrypted data pulled off the chip and decrypted. With the older datAshur product this would be an extremely awkward task, as its circuitry is sealed in epoxy and both chips have a ‘memory protect fuse’ to prevent them being read externally. The datAshur Personal apparently doesn’t have that level of protection, for some reason. The salient point here is that the HSM appears to generate and store a random encryption key each time the device is reset, so the high tamper resistance would offset the disadvantage of having a stored key. If the casing was somehow opened, the data would most likely be unrecoverable, as it’s extremely unlikely the main storage and HSM could be accessed without one or both being destroyed.

So, I bought the older datAshur product because I’m dead certain nothing can be recovered from it without considerable resources, expertise and determination – to me that’s definitely worth the admittedly large trade-offs. The datAshur Personal, on the other hand, is probably fine if you want privacy for non-sensitive files at a lower cost.

Running a Performance Test with Visual Studio

Another form of automated test that can be done using Visual Studio is the Web Performance and Load Test. Required for this are Visual Studio Ultimate and Team Foundation Server (installation or managed service), although it is possible to run load tests using just a Visual Studio Online account.

Creating a Web Performance Test
To set up a load test in Visual Studio, a Web Performance and Load Test project must first be created.

[Screenshot: creating a new Web Performance and Load Test project]

A blank project contains the settings and an empty test script in the Solution Explorer. Actions can be added manually under ‘WebTest1‘, but the steps can also be generated from a recording: right-click on ‘WebTest1‘ and select ‘Add Recording…‘. By default this starts Internet Explorer with the Web Test Recorder panel to the left. Normally the URL of the target application is entered into the address bar, and the recorder captures the actions performed against that application so they can be replayed during the test.

When the recording is stopped, the buffered Web Test Recorder actions are converted into the test script and displayed in Visual Studio’s main window. Save and build the project at this point.
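
As an aside, a recorded web test can also be turned into a coded web test (by generating code from the recorded test), which makes it clearer what the script actually amounts to. The class below is just an illustrative sketch with a placeholder URL:

using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.WebTesting;

public class WebTest1Coded : WebTest
{
    public override IEnumerator<WebTestRequest> GetRequestEnumerator()
    {
        // Each recorded step becomes a WebTestRequest yielded in sequence.
        var request = new WebTestRequest("http://example.com/");   // placeholder URL
        request.ExpectedHttpStatusCode = 200;
        yield return request;
    }
}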

The Load Test
Once the web performance test script has been created, it can be used as a scenario for a load test. With the project open, click ‘PROJECT > Add Load Test…‘ and apply whatever settings are needed in the New Load Test Wizard dialogue. It’s best to have a number of Web Performance Test scripts to reflect the real-life usage of the deployed application.

[Screenshot: New Load Test Wizard]

The New Load Test Wizard requires at least one Web Performance Test scenario, and this is where already-created scripts are added. Most of the configuration steps here are about shaping the number of users, browser mix, connection types and think times.

[Screenshot: adding a scenario (1)]

[Screenshot: adding a scenario (2)]

When done, Visual Studio will create a load test tab in the main window, populated with steps and parameters that can be modified as required. Under the Counter Sets entry, parameters for whatever metrics aren’t needed can be removed. Save and build the project again.

[Screenshot: load test steps and parameters]

Now click the ‘Run Load Test‘ icon at the top left.

[Screenshot: Run Load Test icon]

The script will be sent to the connected Team Foundation Server or the Visual Studio Online service, which will queue it and acquire the resources. A usage quota might apply with the managed service, depending on the subscription.

[Screenshot: load test running]

The results from the completed load tests can be accessed in the Load Test Manager (‘LOAD TEST > Load Test Manager‘). A full report with all the included metrics can be downloaded by clicking the ‘Download report‘ link.

[Screenshot: load test report]

The Quicker Method using TFS or Visual Studio Online
This is a simpler test that isn’t based on web performance test scenarios. Under the ‘Load test‘ tab, the target application’s URL and the number of virtual users are set before the test is launched. A few of the test parameters that were present in the Visual Studio test also appear here.

[Screenshot: TFS/Visual Studio Online load test]

It takes a little longer for Visual Studio Online to acquire the resources this time.

[Screenshot: TFS/Visual Studio Online test results]
