The last two weeks have been a clustersuck of getting the dissertation finished, having to review, edit and rewrite the better half of 100 pages here and there, hoping the stack of paper will be turned into a couple of decent hardbacks by the deadline. The working title is ‘Secure IPv6 Communications across Multiple Untrusted Networks’ (comes with state of the art interactive CD-ROM). Considering the huge deal I made over this last year, it should have been much better.
When I first started thinking about the project a while back, the idea was a group of proxy servers with different IP addresses that would change every 24 hours or so, and a software client that would select the first available one. The central problem was how to communicate those proxy addresses to the clients without those addresses becoming known to whoever’s blocking them. The Global Internet Freedom Consortium creates technologies like that.
Things took another direction after coming across that SecuraBit interview with Sam Bowne on IPv6, then The Second Internet (Lawrence Hughes). All the pieces were there for a secure communications system even better than what I originally thought up, and so IPv6 became the focus of the project. Somehow those huge address blocks and IPsec tunneling between hosts can be leveraged to defeat both censorship and surveillance. Probably forever.
About halfway through, it was becoming apparent that the solution is theoretically very simple, and that the main component of my system would be a software client – installed on Internet-enabled devices, it would handle everything from encryption and IPsec to address management and a couple of other things. It also turned out the system could be used for peer-to-peer (P2P) communications and multicasting, so the plan shifted somewhat from defeating censorship in the existing client-server Internet. Actually creating a working product is another matter, as I’m not that technically gifted yet. Certainly not gifted enough to develop the GUI in C++.
Essentially what we have is a design and some components for a client application that could be adapted for a range of things – military communications between PDA-equipped units, P2P social networking (the application database supports this), media broadcasting over IPsec to hidden groups, and government personnel deployed in other countries hosting reports on their own devices without adversaries even being aware of it. Sounds like pretty impressive stuff, but it won’t become relevant for another 8-10 years, because the routers and everything in between must be IPv6-ready for this to work.
In the end the dissertation amounted to a colossal amount of research on Internet surveillance and traffic filtering (many thanks to the UWN Thesis blog), a fairly detailed methodology for developing and testing the countermeasures, instructions for setting up a fully IPv6-capable carrier routing system, and some of the main components for the software client.
The last post touched on Pseudo-Random Number Generators, and the ones before that looked at the Data Encryption Standard. There’s also a post somewhere on this blog about RSA, which I’ll probably cover again shortly.
Since I’ve recently done an essay on the (very) basics of HTTPS, I thought it would be a good idea to rehash some of it here, to explain roughly how symmetric and asymmetric encryption work together to encrypt stuff over the Web.
HTTPS involves encrypting application data between the Application Layer and the Transport Layer of the OSI reference model – that is, before the data is encapsulated as TCP packets – and so it works independently of Internet-enabled applications (usually web browsers). This meant minimal effort on the part of application developers to support it, which is perhaps how it became more widely used than S-HTTP.
When we talk about HTTPS, we’re not referring to a protocol or the encryption methods themselves, but to a prefix used in place of the standard HTTP that signals to the browser that the connection should be established through Secure Sockets Layer (SSL) or Transport Layer Security (TLS), encrypting whatever data the browser hands to the transport layer and its TCP sockets.
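This is visible in Python’s standard `ssl` module, which is roughly what a browser does under the hood: a minimal sketch, with the connection itself left commented out and `example.com` used purely as a placeholder.

```python
import ssl

# A rough sketch of what the https:// prefix triggers on the client side:
# the browser wraps its TCP socket in TLS before any HTTP data is sent.
# ssl.create_default_context() loads the system's trusted CA store and
# switches on the certificate checks covered later in this post.
ctx = ssl.create_default_context()
print(ctx.check_hostname)                    # True: domain-name check enabled
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: a valid server cert is required

# The actual connection would then look something like this
# (no connection is made in this sketch):
#
#   with socket.create_connection(("example.com", 443)) as raw:
#       with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
#           tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")
```

The point is that the HTTP request only ever exists in plaintext above the wrapped socket; everything handed to TCP is already encrypted.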
SSL and TLS in More Detail
SSL and TLS are very much the same thing. The former was developed by Netscape Communications throughout the 1990s as a proprietary technology, and is almost as old as the World Wide Web itself. TLS is its successor, based on SSL 3.0, developed by an open community and defined in RFCs 5246 and 6176.
There are subtle differences between the two, though. SSL would start a secure connection from the beginning of a request, while TLS will only do so after a successful handshake between the client and server, dropping the connection if this isn’t possible. HTTPS/SSL would also use a different port to standard HTTP, while HTTPS/TLS wouldn’t, which could make it harder for a packet inspection system to differentiate between an encrypted and unencrypted connection.
Overall, the whole thing is a hybrid encryption scheme consisting of at least two ciphers and a procedure for negotiating which ones the client and server should use. One cipher will be asymmetric (usually RSA) and the other symmetric. There’s a very straightforward reason for this.
The most efficient encryption is symmetric, such as Triple-DES, RC2 or RC4, since these ciphers work by permutation, substitution and XORing plaintext with key bits instead of arithmetic on vast numbers, and therefore require far less computing power.
Unfortunately a symmetric cipher can’t be used without first sharing the session key over the Internet, where it could be intercepted at any point between the client and server. The obvious solution would be to use asymmetric encryption instead, in which a public key is used to encrypt the data. The problem with this approach is that current asymmetric ciphers require far more computing resources, as they involve generating and performing arithmetic on very large numbers.
SSL and TLS solve this by using an asymmetric cipher to communicate a session key before switching to the symmetric encryption system. This means symmetric encryption can be used primarily without the session key itself ever being communicated unencrypted.
It’s after the handshake, key and certificate exchanges that this switch to symmetric encryption is made, so by this point the client and server should have already authenticated each other and negotiated a scheme for communication.
The whole process for establishing a secure connection between the client and server can be summarised as:
1. Client checks server’s identity.
2. Server checks client’s identity (optional in most HTTPS sessions).
3. Establish the best encryption methods supported at both ends.
4. Exchange the public key for the asymmetric cipher.
5. Exchange the (session) key for symmetric cipher using asymmetric encryption.
6. Switch to using symmetric cipher after the key has been safely communicated.
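The key exchange in steps 4-6 can be sketched in a few lines of Python. This is a toy illustration, not real TLS: the RSA primes are tiny (real keys are 2048+ bits, with padding), and the ‘symmetric cipher’ is just a SHA-256 keystream XOR standing in for something like Triple-DES or RC4.

```python
import hashlib
import os

# Toy RSA key pair (illustration only; real keys use enormous primes)
p, q = 61, 53
n = p * q                          # public modulus (3233)
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Stand-in symmetric cipher: XOR data with a SHA-256-derived keystream."""
    stream, counter = bytearray(), 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, 'big')).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# Step 5: the client generates a random session key and sends it encrypted
# under the server's public key (toy RSA encrypts one byte at a time)
session_key = os.urandom(16)
encrypted_key = [pow(b, e, n) for b in session_key]

# Only the holder of the private key d can recover the session key
recovered_key = bytes(pow(c, d, n) for c in encrypted_key)
assert recovered_key == session_key

# Step 6: both sides switch to the fast symmetric cipher for the actual data
ciphertext = keystream_xor(session_key, b"GET /private HTTP/1.1")
plaintext = keystream_xor(recovered_key, ciphertext)
print(plaintext)  # b'GET /private HTTP/1.1'
```

The session key itself never crosses the wire unencrypted, which is the whole point of the hybrid scheme.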
Certificates, Authentication and Trusted Parties
How does the client know it’s communicating with a given server, and not a malicious party? The answer is it doesn’t, strictly speaking. The best guarantee is the ‘certificates’ exchanged when establishing an HTTPS session.
A client uses the following checks to confirm the server’s identity:
* Is the current date within the validity period?
* Is the issuing Certification Authority (CA) trusted?
* Does the issuing CA’s public key authenticate the issuer’s signature?
* Do the server and certificate domain names match?
If the answer to any of these is no, the server cannot be authenticated and the browser should display a warning to the user. At this point, the user has the option of rejecting the connection or making a decision based on a closer examination of the certificate. Of course, the user might choose to ignore the warning and proceed.
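To make the first and last of those checks concrete, here’s a rough sketch using the certificate dictionary format that Python’s `ssl` module returns from `getpeercert()`. The certificate below is made up for the example.

```python
import ssl
import time

# A made-up certificate in the format returned by ssl.SSLSocket.getpeercert()
cert = {
    'notBefore': 'Jan 1 00:00:00 2013 GMT',
    'notAfter': 'Dec 31 23:59:59 2014 GMT',
    'subject': ((('commonName', 'www.example.com'),),),
}

def validity_ok(cert, now=None):
    """Is the current date within the certificate's validity period?"""
    now = time.time() if now is None else now
    start = ssl.cert_time_to_seconds(cert['notBefore'])
    end = ssl.cert_time_to_seconds(cert['notAfter'])
    return start <= now <= end

def hostname_ok(cert, hostname):
    """Does the requested domain match the certificate's common name?
    (Real browsers also check the subjectAltName extension.)"""
    names = dict(pair for rdn in cert['subject'] for pair in rdn)
    return names.get('commonName') == hostname

mid_2013 = ssl.cert_time_to_seconds('Jun 1 00:00:00 2013 GMT')
print(validity_ok(cert, mid_2013))            # True
print(hostname_ok(cert, 'www.example.com'))   # True
print(hostname_ok(cert, 'evil.example.net'))  # False
```

The signature check (the CA’s public key authenticating the issuer’s signature) is the hard cryptographic part and is left out here; the browser does all of this automatically before the padlock icon appears.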
The major certification authorities, such as VeriSign, GeoTrust and GoDaddy, are considered ‘trusted parties’, and their root certificates come pre-installed in all the main web browsers. The certification authorities are considered ‘trusted’ because we must rely on their certificates as evidence that server x belongs to organisation y on z domain. Again, certificates are evidence, not proof, of this.
Basically, a domain owner will buy a certificate from one of the trusted certification authorities, and browsers visiting the server at that domain will be able to confirm its certificate is genuine, since both the client and the server should recognise the issuing authority.
If a certificate is self-signed, which should theoretically be the case if the domain owner is being impersonated, the browser will warn the user that the certificate is invalid.
HTTPS and Defence in Depth
In terms of Defence-in-Depth and network defence, communications encryption is something of a double-edged sword, especially on corporate networks. The network administrator might want to protect the organisation’s data in transit, but encrypted malicious traffic can also pass through firewalls and Intrusion Detection Systems (IDS) without being inspected.
HTTPS and communications encryption in general should be considered only one component of a Defence-in-Depth scheme. It’s also a difficult problem to solve, as the same measures can also be used by malware and internal threats to export sensitive information to another server outside the network.
HTTPS doesn’t provide security on its own. It only ensures a connection between two points is secure, and could be defeated using one of two methods:
* It’s possible for a third-party to impersonate both sender and receiver, by placing a server that intercepts and relays the data. Both the sender and receiver would still be communicating over secure connections, but the connections would be to the attacker.
* Malware could be used to exfiltrate the data from a target before it’s sent over the secure connection.
Moral and Political Stuff
I had to include something in the essay about the moral and political side of encrypting Internet communications.
Online criminals typically operate by exploiting security flaws, and do so mainly to prevent their actions being attributable to them. For example, an unsecured wireless gateway can enable the criminal to act under another party’s IP address. Credit card information, stolen with the use of remote access malware, enables the purchasing of illegal material with the victim’s identity.
Often this can be broken down into two issues: confidentiality and non-repudiation/attribution.
So, I’ve always argued that promoting encryption and improvements in personal security would actually do more to prevent crime than facilitate it. There would be fewer security flaws for criminals to exploit, and it becomes harder to attribute crimes to anyone other than the criminal.
As for the political implications, the most obvious is that encryption empowers citizens while limiting the power of their governments, and it’s quite likely governments would legislate against the use of encryption if e-commerce (and by implication the wider economy) wasn’t so reliant on it. Lacking this option, some governments have resorted to finding ways around it.
The German government has been known to use remote access malware to monitor the activities of suspected criminals, the most well-known example being the Bundestrojaner. Among other things, it exfiltrates data from suspects’ laptops before it’s sent out over encrypted connections, in particular Skype calls.
Of course, the discovery of this malware on a suspect’s computer might be enough to undermine the prosecution’s case, and this argument was within the scope of my essay because a sufficient degree of non-repudiation and attribution is required to deter online criminals.
Around the same time, DigiNotar, one of the major certificate authorities, was breached and this resulted in fake SSL certificates being used to conduct Man-in-the-Middle attacks against people accessing Google, Yahoo and other email services in Iran. This might well have led to political dissidents, who may well have been under the impression they were communicating in relative safety, being discovered by the authorities.
Not only do these cases show communications security can be a matter of life and death, they highlight the false impression of security some users can have.
ARTHUR, C. 2011. The Guardian: Rogue web certificate could have been used to attack Iran dissidents. [WWW].
http://www.guardian.co.uk/technology/2011/aug/30/faked-web-certificate-iran-dissidents. (10th April 2013).
CLULEY, G. 2011. Sophos: German ‘Government’ R2D2 Trojan FAQ. [WWW].
http://nakedsecurity.sophos.com/2011/10/10/german-government-r2d2-trojan-faq/. (10th April 2013).
DIERKS, T. & ALLEN, C. 1999. Internet Engineering Task Force: Request for Comments (RFC) 2246.
The TLS Protocol, Version 1.0. [RFC].
http://www.ietf.org/rfc/rfc2246.txt. (12th April 2013).
FREIER, A., KARLTON, P. & KOCHER, P. 2011. Internet Engineering Task Force: Request for Comments (RFC) 6101. The Secure Sockets Layer (SSL) Protocol Version 3.0. [RFC].
http://tools.ietf.org/html/rfc6101. (12th April 2013).
KANGAS, E. 2008. LuxSci FYI Blog: SSL versus TLS. [WWW].
http://luxsci.com/blog/ssl-versus-tls-whats-the-difference.html. (9th April 2013).
MOZILLA FOUNDATION. 2005. Mozilla Developer Network: Introduction to SSL. [WWW].
https://developer.mozilla.org/en-US/docs/Introduction_to_SSL. (8th April 2013).
Yesterday’s Guardian Technology reported the PRC has deployed VPN-blocking technology as part of its Great Firewall. Apart from an alleged email from VPN firm Astrill, there’s no evidence of this, but something is happening. Initially it was those using corporate networks that reported the problem back in May 2011, while home users were largely unaffected. This could well have been a strategy to compile a list of non-corporate VPN users.
I believe the PRC has merely applied their existing IP address blacklist to known VPN providers, rather than using a protocol-based filter, and that a given VPN service will remain reachable until it’s discovered. In other words, someone at the border gateway is searching for VPN providers and manually blocking them. The real test of this is whether VPN gateways within China, where TCP scanning is distributed across regional data centres, are reachable.
Cross-posted from the IPv6 Secure blog.
It’s been a while since I last posted an update, largely because the project’s been on hold for the last six weeks. Basically the second year of the course was mainly about theoretical stuff, like policies, compliance, management, legislation, etc., and the third year got very technical (and practical) from day one. And it’s not a bad thing either, as I expect any infosec professional to have at least some experience and a decent understanding of enterprise network and server configuration. So, that’s my excuse.
Roughly a month ago I had the basic secure messaging client application working, and hopefully I can get that communicating with the network. Later it can be modified for audio and video comms, and perhaps even a social network could be built around it someday.
Getting hold of the equipment for the development stage won’t be the problem I initially expected. I now have a carrier-grade routing system at my disposal, which means the countermeasures can be tested with a collection of Cisco 2800 routers, an Adtran Atlas 550 Integrated Access Device (IAD), and TCP and IP filtering layers. The Adtran is what’s going to simulate the ISP and Internet.
By the end of January 2013, the whole thing should simulate multiple clients communicating between networks, tunneling their comms through whatever interception and filtering exists between them. It’ll be a form of P2P communication, but there’ll be nothing to mark it out to ISPs as such.
Before that happens, I’ll need to configure the routing system via serial ports and Telnet sessions, which is apparently quite easy.
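For the record, those sessions will mostly involve entering IOS commands along these lines. This is a generic sketch of enabling IPv6 on one of the 2800s, not my actual lab config – the interface name and addresses are placeholders (the 2001:db8:: block is reserved for documentation):

```
! Enable IPv6 forwarding and address an interface on a Cisco 2800
conf t
 ipv6 unicast-routing
 interface FastEthernet0/0
  ipv6 address 2001:db8:1::1/64
  no shutdown
 exit
 ipv6 route ::/0 2001:db8:1::254
end
```

The same commands work over the console cable or a Telnet session once the router has an address, which is why the initial setup has to be done via serial.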