I’ve come across a pretty nice primer by Comparitech on VPNs for Linux users.
OpenVPN is a flexible VPN client that tunnels network traffic between the local system and a proxy/gateway server, so that nobody between those two endpoints should be able to determine the metadata or the content of your communications. While there are multiple posts on the UWN Thesis blog that deal with setting up OpenVPN on Windows and Linux, I’ve recently managed to get it working again on an Android device.
Of course, you’d need to find a VPN service that appears sufficiently trustworthy, and it’s important to check the session being tunnelled hasn’t been compromised (check the TLS/SSL certificate) once the VPN connection’s established. Remember that service providers are subject to the laws and constraints of whichever countries they’re operating in, and might be required to provide information about their users to The Powers That Be. The best you can do is read the terms of service and privacy statements very carefully.
The application I installed was OpenVPN Connect, which is available in the Play store. After installation, there are a few configuration options to look at.
- Seamless Tunnel: Definitely check this option. If the VPN service drops the connection, the last thing we’d want is the browser silently falling back to the default network interface and exposing sensitive data without us knowing.
- Reconnect on reboot: I disabled this, just to be polite and reduce demand on the VPN servers.
- Connection Timeout: Sometimes it takes a while to establish a connection, so this is set to 1 minute.
- Force AES-CBC: It depends. AES-CBC could be stronger or weaker than the cipher the server offers by default.
- Minimum TLS: You could set the latest version as the minimum, but it shouldn’t matter so much.
- DNS Fallback: Even if you dislike Google, this is only a fallback option, and there’s no way to set an alternative.
Once the client configuration is sorted, the connection settings for a VPN service are required. For this demo I’ll fetch and load a connection profile for a service called ‘VPNBook’, which is distributed as a .zip archive of .ovpn files. I have downloaded and extracted these on my mobile device.
In the OpenVPN Connect menu, select ‘Import’ and ‘Import Profile from SD Card’. Next find and select the .ovpn file that contains the desired connection settings. Since TCP over port 443 seems to work best for my Android device, I’ve loaded the tcp443.ovpn file.
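For reference, an .ovpn profile is just a plain-text list of OpenVPN directives. Here’s a minimal sketch of pulling out the fields worth eyeballing before importing a profile (the server name below is a placeholder, not VPNBook’s actual host):

```python
# Hypothetical sample profile; the host name is made up for illustration.
SAMPLE = """\
client
dev tun
proto tcp
remote vpnbook-server 443
cipher AES-128-CBC
"""

def profile_summary(text: str) -> dict:
    """Extract the protocol, server host and port from .ovpn text."""
    summary = {}
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == 'proto':
            summary['proto'] = parts[1]
        elif parts[0] == 'remote':
            summary['host'], summary['port'] = parts[1], int(parts[2])
    return summary
```

Running `profile_summary(SAMPLE)` shows at a glance which endpoint and port the client will connect to.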
Next you’ll need to enter the username and password, again available on VPNBook’s site. Now the client should establish a tunnel connection.
Once the ‘OpenVPN: Connected’ message is displayed, I normally navigate the browser to InfoSniper or whatismyipaddress.com, just to ascertain that the browser traffic’s going through one of the VPN servers.
I’m probably going to be repeating things I posted for the Tor on Linux setup, but I think the Android version of the Tor client deserves a post of its own. The setup I’m using has Firefox (the browser) and Orbot (Tor client and local proxy) installed separately, but it’s possible to install Orfox, which combines the browser and client.
I don’t make any guarantees about safety here, for obvious reasons: developers are humans, humans make mistakes when developing software, and few of us have the discipline to be consistent with our OPSEC and tradecraft. Tor on its own masks only IP addresses, and I haven’t found a Privoxy equivalent for Android that strips identifying information from the browser traffic payload. You might need to analyse the traffic with Wireshark and work around that.
After launching Orbot, enable the ‘Apps VPN Mode’, and check the boxes for the applications whose traffic you want routed through Tor. For example, if you want browser traffic going through Tor, enable Firefox or Chrome (or both). If you want the Android device to route all its network traffic through Tor, enable all the applications in the list.
Pressing the ‘Browse’ button should launch Firefox, and if the browser traffic’s being routed through the proxy a success message is displayed.
This is enough to get most people started, to claw back some of that pseudonymity, that separation between our online and real-world identities.
Accessing .onion Sites
What about the Onion, or the ‘Dark Web’ (or whatever ominous name the press is calling it these days)? First you need a directory or list of hidden services with .onion addresses. Save or bookmark https://thehiddenwiki.org, and try the Tor search engines to get started – at least some of the links work, but others are a little temperamental.
The next thing you’d need to do is configure Firefox to use Tor servers to resolve .onion addresses, which is something conventional DNS won’t do. Installing Orbot seems to configure this automatically, but it blocks .onion addresses by default to prevent users accessing the Darknet accidentally. Enter ‘about:config’ in the browser’s address field, and set ‘network.dns.blockDotOnion’ to ‘false’.
Make a note of that setting, in case you need to undo the change later. Links and URLs with the .onion suffix should now be resolved and the hidden services made accessible in Firefox.
Other Configuration Options
Because Tor is a shared service with limited capacity, I think it’s only polite to limit my usage of it, so I’ve disabled ‘Start Orbot on Boot’ and ‘Allow Background Starts’.
The ‘Request Root Access‘ and ‘Transparent Proxying‘ options are loosely related. One configures Orbot to run the proxy server with root access if possible, and to control the network interface. The Transparent Proxying option should set the Tor proxy as the default virtual network interface for all network traffic being sent and received across all applications running on the device.
There is a set of options under Relaying that enable users to contribute local resources. There should be very little or no risk associated with running a non-exit relay, so it’s a matter of choice whether you’d want to add your device to the Tor network as a resource.
How would two clients communicate over IPv6 without a third-party knowing which addresses are used? This is one of the abstract problems I tried to solve back in 2013, when developing the idea of a secure messaging client that makes use of certain features associated with IPv6 (many thanks to Sam Bowne and Chris Tubb for the inspiration). It was based on two assumptions: a) both parties are assigned a block of IPv6 addresses rather than a single address, and b) communicating parties are able to arbitrarily select addresses from within their address ranges.
Address Spaces and Allocations
Given the number of possible IPv6 addresses (2^128, minus a few reserved address ranges), it’s possible that a person would be assigned a sizeable block of addresses from this, such as one with a 32-bit suffix giving 4,294,967,296 (2^32) possible addresses.
I’ve done a bit of research to determine the address space a person would typically be assigned. RFC 6177 (which relaxed the earlier blanket /48-per-site recommendation of RFC 3177) still recommends giving each individual ISP customer a generous allocation, on the order of a /48 or /56 block. Whether this would actually happen in the real world remains to be seen. A generous allocation also makes sense because IPv6 removes the requirement for Network Address Translation, which in turn means an ideal allocation for a home network would be large enough to make network enumeration rather more time-consuming.
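The arithmetic behind these allocations is straightforward; a quick sketch of what a /48 actually buys:

```python
# A /48 site allocation leaves 16 bits for subnetting before the
# standard 64-bit interface-identifier boundary.
site_prefix = 48
subnets_per_site = 2 ** (64 - site_prefix)   # number of /64 subnets in a /48
hosts_per_subnet = 2 ** 64                   # addresses within each /64
```

That’s 65,536 subnets, each with more addresses than the entire IPv4 Internet, which is what makes brute-force enumeration of a host range impractical.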
IPv6 also allows for stateless address configuration, which should enable clients to select their own addresses, although this depends on how the local router is configured.
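As a sketch of what stateless autoconfiguration does under the hood, here’s the modified EUI-64 derivation (RFC 4291, Appendix A) that turns a 48-bit MAC address into the interface-identifier half of an address. The example MAC is chosen so the result matches the suffix of the sample address used later in this post:

```python
def eui64_interface_id(mac: str) -> str:
    """Modified EUI-64: insert ff:fe into the middle of the MAC and
    flip the universal/local bit of the first octet."""
    octets = [int(b, 16) for b in mac.split(':')]
    octets[0] ^= 0x02                              # flip the u/l bit
    eui = octets[:3] + [0xff, 0xfe] + octets[3:]
    return ':'.join(f'{eui[i] << 8 | eui[i + 1]:04x}' for i in range(0, 8, 2))

# e.g. eui64_interface_id('00:00:f8:21:67:cf') → '0200:f8ff:fe21:67cf'
```

Many stacks now prefer randomised or stable-privacy identifiers (RFC 7217) over EUI-64, precisely because a MAC-derived suffix is trackable.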
The Address Generation Algorithm
My solution is something like:
The session key is secret between two clients – how they share this is another problem which might require out-of-band communication using a public key system. Actually my proposal would be a good candidate for an instant messaging system or social network that works alongside Dark Mail.
The second parameter is the system time, in ‘HHMM’ format, because the algorithm should generate a different IPv6 address every x number of minutes, and HHMM should also be the same for both communicating clients. With a little more coding later, two clients might get this value from a shared source, perhaps over NTP.
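One detail worth sketching: if both clients floor the clock to an agreed interval, they derive the same HHMM value even when they run the algorithm a few seconds apart. The 15-minute interval here is an arbitrary choice:

```python
from datetime import datetime

def window_tag(now: datetime, minutes: int = 15) -> str:
    """Floor the clock to the agreed window so both peers hash the
    same 'HHMM' string despite small clock differences."""
    floored = now.minute - (now.minute % minutes)
    return f'{now.hour:02d}{floored:02d}'
```

For example, clocks reading 14:31 and 14:44 both yield ‘1430’, so the peers still agree on the current window.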
New addresses are generated from a current IPv6 address and a session key that might be shared between peers. These might be read from an application database and/or network interface.
# Inputs, read from the application database and/or network interface
import hashlib
from time import strftime

selfAddress = '3ffe:1900:4545:0003:0200:f8ff:fe21:67cf'
selfKey = 'mypassword123'   # session key shared with the peer
peerAddress = ''            # populated once a peer is known
peerKey = ''
currentTime = strftime("%H%M")
In order to get the current address, we require a networking/NIC module that enables us to select the network interface to read from. I’m most of the way through coding a C# version of the client, using System.Net.NetworkInformation to populate a drop-down list of interfaces.
Using the netaddr module, an address can be formatted as a hexadecimal string – basically to get the digits without the colon delimiters. The line ‘selfAddressToHex[2:]’ removes the ‘0x’ characters from the output.
ip = IPNetwork(selfAddress)               # from the netaddr module
selfAddressToHex = hex(ip.ip)             # full 128-bit address as one hex number
selfAddressString = selfAddressToHex[2:]  # strip the leading '0x'
Then a SHA256 fingerprint is generated, with the session key and the HHMM string concatenated as input.
hashInput = (selfKey + currentTime)
print('Hash Input: ' + hashInput)
hashedValue = hashlib.sha256(hashInput.encode())   # encode to bytes for Python 3
hashedValueString = hashedValue.hexdigest()
print('SHA256 Fingerprint: ' + hashedValueString)
Now we can substitute the last 32 bits (eight hex digits) of the current IP address with the last 32 bits of the SHA256 value to generate a new address:
final32 = hashedValueString[56:64]   # last eight hex digits of the digest
print('New Suffix: ' + final32)
newAddressString = selfAddressString[:24] + final32   # splice rather than str.replace, which could match elsewhere
Finally, reformat the hex string as a valid IPv6 address by adding the colon delimiters between each group of four hex digits:
newAddress = ':'.join([newAddressString[i:i+4] for i in range(0, len(newAddressString), 4)])
print('New Address: ' + newAddress)
Running the script prints the hash input, the SHA256 fingerprint, the new 32-bit suffix and the resulting address.
We can later write newAddress back to the application database as ‘currentAddress’, and have something that triggers this part of the application every 15 minutes.
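Putting the steps together, the whole generation routine can be sketched with just the standard library (using ipaddress in place of netaddr; the inputs are the sample values from above):

```python
import hashlib
from ipaddress import IPv6Address

def next_address(current: str, session_key: str, hhmm: str) -> str:
    """Keep the top 96 bits of the current address and replace the low
    32 bits with the tail of SHA-256(session_key + HHMM)."""
    digest = hashlib.sha256((session_key + hhmm).encode()).hexdigest()
    suffix = int(digest[-8:], 16)                    # final 32 bits of the hash
    base = (int(IPv6Address(current)) >> 32) << 32   # clear the low 32 bits
    return str(IPv6Address(base | suffix))

new_addr = next_address('3ffe:1900:4545:3:200:f8ff:fe21:67cf',
                        'mypassword123', '1430')
```

Both peers running this with the same key and time window derive the same address, while the 96-bit prefix stays fixed.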
There are other things I’d like to build on this, namely components for setting newAddress as the local IP address, and messaging between two clients running the script.
What makes the Dark Internet Mail Environment project look promising is that the Dark Mail Alliance consists of the same principled and highly skilled engineers who brought us PGP/OpenPGP and the Silent Circle products.
GPG should be an ideal solution for protecting emails on third-party servers – it’s highly scalable, since each person needs only a private key and access to a list of public keys, it’s extensively documented and it has APIs that enable developers to incorporate GPG into their applications. Breaking GPG is non-trivial, and it would be unrealistic for the authorities to expect its developers could maintain a backdoor in an open source project.
The reality is that few people are using PGP/GPG/OpenPGP, since most of us want everything on demand with minimum effort. We all want to sign into our email accounts, or a service like Facebook, and have our messages instantly available. Hardly anyone wants to download and configure anything, which is why numerous innovative privacy solutions fail to gain traction. StartMail and DIME are two different approaches to solving this.
My understanding, after reading through the Architecture and Specifications document, is that DIME is fairly close to what an automated GPG solution would provide, alongside a form of onion routing. Overall it’s about ensuring everything sensitive, including email headers, is protected between the sender and recipient.
Development is based around four protocols:
* Dark Mail Transfer Protocol (DMTP)
* Dark Mail Access Protocol (DMAP)
* Signet Data Format
* Message Data Format (D/MIME)
DIME should be deployable without too much work. Speaking to Ars Technica, Levison stated: ‘You update your MTA, you deploy this record into the DNS system, and at the very least all your users get end-to-end encryption where the endpoint is the server… And presumably more and more over time, more of them upgrade their desktop software and you push that encryption down to the desktop.’.
The message structure consists of an ‘envelope’ encapsulating the message body and email headers, protecting information such as the sender and recipient addresses. A layered encryption method allows the mail relays to access only the data they need to forward the messages, and with D/MIME the sender and receiver identities are treated as part of the protected payload, encrypted along with the message body – something GPG doesn’t do.
This is important because the claim that authorities are inspecting only the ‘metadata’ of our Internet communications is misleading. Email addresses and message headers are actually contained within the payload of the TCP/IP traffic being inspected – the content must be read in order to read the email addresses.
Unlike conventional email, the message headers (including the To and From fields) really are processed as confidential, and instead routing is determined by the Origin and Destination fields (AOR and ADR).
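The layering can be illustrated with a toy sketch – base64 here merely stands in for real per-hop encryption, and the field names are invented rather than taken from the D/MIME spec:

```python
import base64
import json

def seal(payload: dict) -> str:
    """Seal one layer; base64 stands in for per-hop encryption."""
    return base64.b64encode(json.dumps(payload).encode()).decode()

def open_layer(sealed: str) -> dict:
    return json.loads(base64.b64decode(sealed))

inner = seal({'to': 'bob@example.com', 'body': 'hello'})      # end-to-end layer
outer = seal({'next_hop': 'dx.example.com', 'inner': inner})  # relay-visible layer
```

A relay opening the outer layer learns only the next hop; the recipient’s mailbox address stays sealed inside the inner layer.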
Public keys, signatures and other things associated with identities are managed as ‘signets’ – all the cryptographic information required for end-to-end encryption between communicating parties. There are two types of ‘signet’: Organisational signets are mapped to domains and contain keys for things like TLS. User signets, on the other hand, are mapped to individual email addresses, and contain public keys associated with them.
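As a rough sketch of the two signet types – the field names are my own guesses at what they would carry, not the spec’s actual format:

```python
from dataclasses import dataclass

@dataclass
class OrgSignet:
    """Mapped to a domain; carries transport-level key material."""
    domain: str
    tls_public_key: str

@dataclass
class UserSignet:
    """Mapped to an individual email address."""
    address: str
    encryption_public_key: str   # for end-to-end message encryption
    signing_public_key: str      # for verifying the user's signatures
```

A resolver would fetch the organisational signet for the recipient’s domain first, then the user signet for the address itself.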
There are three modes: Trustful, Cautious and Paranoid. These specifically relate to whether the client or servers handle key storage and encryption. Users have the option of a) ensuring that only client devices have a usable decryption key, and b) trusting LavaBit to manage their cryptographic keys that apparently are extremely difficult for LavaBit admins themselves to access.
Dark Mail Transfer Protocol (DMTP)
DMTP handles the delivery of encrypted messages, while another protocol, D/MIME, protects the content against interception. Basically the email or message is encapsulated, with headers revealing only enough information for servers to relay the message. In order for the encryption to be implemented, the sender must look up the recipient’s public key(s), which happens through a ‘Signet Resolver’ that finds a ‘signet’ containing the public key.
DMTP is also the transfer protocol for retrieving ‘signets’.
The transport method is actually quite similar to conventional email, using an additional field in the domain name records that points to the DIME relay server.
One requirement of DMTP is that all sessions must be secured by TLS with a specific cipher suite (ECDHE-RSA-AES256-GCM-SHA384).
Dark Mail Access Protocol (DMAP)
The finer details for this haven’t been established yet, and this section appears to be notes of a few points. DMAP is being designed to handle the key exchange and authentication side of the secure email implementations, making sure the client’s keyring is synched with the ‘signet’ admin servers.
DMAP is to incorporate something called ‘Zero-knowledge password proof’ (ZKPP), a way for two entities to confirm that each knows a secret value without exchanging that value.
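To illustrate the idea (the notes don’t say which ZKPP construction DMAP will use), here’s a toy Schnorr-style proof of knowledge over a deliberately tiny group. Real systems use large groups, or password-specific protocols such as SRP:

```python
import secrets

# p = 2q + 1 with p, q prime; g = 4 generates the order-q subgroup.
p, q, g = 2039, 1019, 4

def prove(x: int):
    """Prove knowledge of x (where y = g^x mod p) without revealing x."""
    k = secrets.randbelow(q)
    t = pow(g, k, p)           # commitment
    c = secrets.randbelow(q)   # challenge (chosen by the verifier in practice)
    s = (k + c * x) % q        # response; reveals nothing about x on its own
    return t, c, s

def verify(y: int, t: int, c: int, s: int) -> bool:
    # g^s == t * y^c (mod p) holds exactly when the prover knew x
    return pow(g, s, p) == (t * pow(y, c, p)) % p
```

Here the shared secret x plays the role of the password: the verifier checks the relation between commitment, challenge and response, and the secret itself is never transmitted.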
Installation of DIME on Mail Servers
A development version of the DIME library can already be installed on a Linux system. Unfortunately I haven’t managed to get this compiled and installed on Linux Mint, even with the dependencies sorted. I’m still working on installing these on a CentOS server.
The dime executable enables the lookup of signet/address mappings, verification of signets, the changing of DX server hostnames and the reading of DIME management records. The sending and receiving servers are determined using the dmtp tool. Signets are generated and managed using the signet tool, and stored as .ssr files.
A proof-of-concept program (genmsg) is included for sending Dark Mail messages from the command line. To do this, the sender specifies the sender address, the recipient address and the path to the ed25519 signing key.
Once on the server, the messages are stored in a typical database scheme on a Magma server.