Generating JSON Web Tokens

An authentication server provides a client with a JSON Web Token, and also handles requests from application servers to verify tokens supplied by clients.

Algorithm

Generally, the token generation code follows this process:

  • 1. Initialise the token header as a JSON object.
  • 2. Initialise the token payload as a JSON object.
  • 3. Base64-encode the header and payload independently, then join them with a ‘.’ separator. This gives us the unsigned token.
  • 4. Using the HMAC SHA256 algorithm, and a secret key value, generate the signature for the unsigned token.

A token, therefore, will have three segments: a) header, b) payload, and c) signature. The first two are Base64-encoded JSON objects, and the third is the encoded signature of the first two segments. This can be seen in action when generating a token with the debugger at jwt.io:


https://jwt.io/#debugger-io?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c

When decoded, the token would look something like this:


{ "alg": "HS256", "typ": "JWT" }
{ "sub": "Web Application", "name": "Michael", "iat": 45435334 }
SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c

.NET Implementation

When working with .NET, one could use System.IdentityModel.Tokens.Jwt, but it can also be done using Newtonsoft.Json and System.Security.Cryptography.

I defined the header properties in a TokenHeader class:


public class TokenHeader
{
    public string alg { get; set; }
    public string typ { get; set; }
}

In the main method, we instantiate the TokenHeader class and set its properties:


TokenHeader tokenHeader = new TokenHeader();
tokenHeader.alg = "HS256";
tokenHeader.typ = "JWT";

The object is serialised to a JSON string, then Base64-encoded:


string headerString = JsonConvert.SerializeObject(tokenHeader);
string encodedHeaderString = Base64UrlEncode(headerString);

private static string Base64UrlEncode(string input)
{
    byte[] inputAsBytes = Encoding.ASCII.GetBytes(input);

    // JWTs use the Base64url variant: strip the '=' padding and replace
    // '+' and '/' with the URL-safe '-' and '_'
    return Convert.ToBase64String(inputAsBytes)
        .TrimEnd('=')
        .Replace('+', '-')
        .Replace('/', '_');
}

For generating the third segment of the token – the signature – the process is different. In this case, we need to generate a keyed hash (HMAC) digest of the encoded header and payload using a 256-bit secret value, then encode that digest. The algorithm for this is given as:


HMACSHA256(
  base64UrlEncode(header) + "." +
  base64UrlEncode(payload),
  your-256-bit-secret
)

This algorithm is implemented by the following code:


string message = encodedHeaderString + "." + encodedPayloadString;
byte[] keyBytes = Encoding.ASCII.GetBytes(myKey);

using (HMACSHA256 hmacsha256 = new HMACSHA256(keyBytes))
{
    byte[] messageBytes = Encoding.ASCII.GetBytes(message);
    byte[] hashMessage = hmacsha256.ComputeHash(messageBytes);

    // Base64url-encode the raw digest bytes; converting the digest to a
    // hex string first (e.g. with BitConverter) would produce a signature
    // that standard JWT verifiers reject
    string encodedSignature = Convert.ToBase64String(hashMessage)
        .TrimEnd('=')
        .Replace('+', '-')
        .Replace('/', '_');
    return encodedSignature;
}

JavaScript Implementation

Several JavaScript libraries are needed for the implementation here. The crypto ones are referenced on Joe Kampschmidt’s blog:

  • jquery-3.3.1.min.js
  • jquery.base64.min.js
  • crypto-js.min.js
  • hmac-sha256.min.js
  • enc-base64.min.js

To initialise the header and payload as JSON objects, JSON.parse() is used.


var headerjson = '{ "alg": "HS256", "typ": "JWT" }',
headerobj = JSON.parse(headerjson);

var payloadjson = '{ "sub": "x555", "name": "Michael", "iat": 445644543534, "issued": ' + Math.floor(Date.now() / 1000) + ' }',
payloadobj = JSON.parse(payloadjson);

Next, independently encode the header and the payload:


// btoa() emits standard Base64; JWTs expect Base64url, so strip the '='
// padding and swap '+' and '/' for their URL-safe equivalents
function base64url(json) {
    return btoa(unescape(encodeURIComponent(json)))
        .replace(/=+$/, "").replace(/\+/g, "-").replace(/\//g, "_");
}

var headerb64 = base64url(headerjson);
var payloadb64 = base64url(payloadjson);

The unsigned token will consist of two segments, one for the encoded header and the other the encoded payload:
var unsignedToken = headerb64 + "." + payloadb64;

Finally, the crypto libraries provide the HMAC SHA256 algorithm used here for signing the token:


var secretKey = "Password1";
var tokenSignature = CryptoJS.HmacSHA256(unsignedToken, secretKey);
// Encode the signature with the same Base64url conversion as the other segments
var encodedTokenSignature = CryptoJS.enc.Base64.stringify(tokenSignature)
    .replace(/=+$/, "").replace(/\+/g, "-").replace(/\//g, "_");
gentoken.value = encodedTokenSignature;

And finally, I’ve put all three segments together to form the signed token:
allsegments.value = headerb64 + "." + payloadb64 + "." + encodedTokenSignature;
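
Verification by an application server follows the same recipe: recompute the signature over the first two segments and compare it with the third. The following is my own sketch reusing the CryptoJS calls above (a production implementation should also use a constant-time comparison):

function verifyToken(token, secretKey) {
    var parts = token.split(".");
    // Recompute the HMAC over "header.payload" and Base64url-encode it
    var expected = CryptoJS.enc.Base64.stringify(
            CryptoJS.HmacSHA256(parts[0] + "." + parts[1], secretKey))
        .replace(/=+$/, "").replace(/\+/g, "-").replace(/\//g, "_");
    return expected === parts[2];
}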

Get the .NET and JavaScript code here…

Using HashBase to Host a DAT:// Site

One of the main problems in making a site public on a P2P network is that its accessibility depends on a running server process on the initial host until the files are seeded by other peers on the network. Fortunately, some of DAT’s developers operate a hosting service called ‘HashBase’, which we can use to make our sites persistent.

A site can be developed and uploaded to HashBase very easily from within Beaker’s integrated editor. Mine is a very basic site using Bootstrap.css. When done, review and publish the changes, so the finished article is displayed in the browser at the unique address.

Next, register an account with HashBase.io. There isn’t much in the way of account configuration here (it essentially provides a certain amount of storage associated with an account) so we can get straight to uploading the site’s files.

My site wasn’t accessible after my first attempt, and I assumed it just required time for the changes to propagate on the network. It turned out that dat.json needed to be modified in order to enable HashBase to host it – this isn’t mentioned on the Hashbase site, but instead in their documentation on GitHub. I learned the following lines must be added to dat.json:


{
  "title": "SAPPHIRE-DAT",
  "dir": "./.hashbase",
  "brandname": "Hashbase",
  "hostname": "hashbase.local",
  "port": "8080",
  "csrf": "true",
  "bandwidthLimit": {
    "up": "1mb",
    "down": "1mb"
  }
}

This was enough to make the site reachable at dat://sapphire-dat.hashbase.io after re-uploading the archive. Reviewing and publishing changes in Beaker might cause dat.json to be reverted to its default, so it might be worth copying the above into another file called ‘template.json’.

There are other recommended configuration options that might be important if we want to host something more than a static site. To enable HTTPS, HashBase can handle the certificate provisioning if the following lines are also added:


"letsencrypt":
"debug": "false"
"agreeTos": "true"
"email": "myname@example.net"

Hashbase uses JSON Web Tokens to manage sessions. You absolutely must replace the secret with a random string before deployment.


"sessions":
"algorithm": "HS256"
"secret": "RANDOM-STRING-HERE"
"expiresIn": "1h"

DAT: A Better Way to Build a Social Network

In last week’s post, I hinted at the development of a feature that could help Minds.com evolve into a decentralised social network and bring P2P into the mainstream, thereby solving the growing privacy and censorship concerns that are associated with a centralised social network. This feature is based on an application layer protocol known as ‘DAT’. There are reasons to believe it’s likely to succeed where previous ideas failed: since DAT works entirely at the application layer, and is implemented using Node.js, there’s very little effort or learning curve involved for developers and users of DAT applications. Web applications can be extended to support it, if the demand is there, using already published libraries that are extensively documented.
For those who aren’t developers, there is a working browser that anyone, without technical skills or knowledge, can use to browse and publish sites on the DAT Web.

What is DAT?

DAT started life with the scientific community, which had a need for a more effective method of distributing, tracking and versioning data. In the conventional Web, data objects are moved, Web pages are deleted and domains expire – this is referred to as ‘content drift’. We’ve all come across an example of this in the form of ‘dead links’. When using a hyperlink to reference a data object or Web page, there is no guarantee that link would be valid at some point in the future. DAT was proposed as a solution to this.
But what does this have to do with censorship and privacy, you’re probably asking? The answer to this question is in how data is distributed, discovered and encrypted.

Merkle Trees, Hashing Algorithms and Public Key Encryption

The DAT protocol is essentially a real-world implementation of the Merkle Tree data structure, with the BLAKE2b and Ed25519 algorithms for identification, encryption and verification (other docs state that SHA256 is used as the hashing algorithm). It’s not necessary to understand this concept in order to develop DAT applications, since there are already libraries for implementing this, but for the curious, I recommend reading Tara Vancil’s explanation first before moving on to the whitepaper.

An important point Vancil made was that DAT is about the addressing and discovery of data objects, not the addressing of servers hosting those objects. Data objects are not bound to IP addresses or domains either. Each data object has its own address, and that address is determined by its cryptographic hash value – a file’s hash digest will be static, regardless of where it’s hosted. This is important, because we’re accustomed to thinking of the Internet/Web in terms of the client/server model, and proposed solutions for privacy and anti-censorship typically try to deal with the problem of decentralised host discovery in a peer-to-peer (P2P) network.

Some form of data structure is required to make the data objects addressable and to enable their integrity to be verified. A DAT peer-to-peer network uses Merkle Trees for this: each ‘leaf’ node contains the hash value of the data object it represents, and the root node contains the hash digest of all its child nodes. In other words, as the whitepaper puts it, ‘each non-leaf node is the hash of all child nodes’.
Not only does this provide a way of verifying the integrity of the data objects – the root node’s digest will change if there’s any modification to a data object represented in the tree – it provides the means to an efficient lookup system, as the root hash digest becomes the identifier for a dataset.

Obviously, this means clients would need to fetch the root node’s value for a given dataset from a trusted source, which might be one of many designated lookup peers on the network. If the client wanted a given data object, it wouldn’t need to fetch everything referenced under the root node, but just the root node value, the parent node of the requested objects, and the hash values of the other parent nodes.
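
To make this concrete, here is a minimal sketch of computing a Merkle root in Node.js. This is my own illustration: it uses SHA-256 from Node’s built-in crypto module rather than DAT’s BLAKE2b, and it ignores the signatures DAT attaches to parent nodes:

const crypto = require('crypto');

function sha256(buf) {
    return crypto.createHash('sha256').update(buf).digest();
}

// Hash each block (the leaves), then repeatedly hash pairs of digests
// together until a single root digest remains
function merkleRoot(blocks) {
    let level = blocks.map(function (block) { return sha256(block); });
    while (level.length > 1) {
        const next = [];
        for (let i = 0; i < level.length; i += 2) {
            if (i + 1 < level.length) {
                next.push(sha256(Buffer.concat([level[i], level[i + 1]])));
            } else {
                next.push(level[i]); // odd node carried up unchanged
            }
        }
        level = next;
    }
    return level[0].toString('hex');
}

// Changing any block changes its leaf digest and, in turn, the root digest
console.log(merkleRoot([Buffer.from('block 1'), Buffer.from('block 2')]));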

Addressing, References and Security

Now, let’s get into the more specific aspects of how Merkle Trees are implemented in the context of DAT. All the ‘leaf’ nodes in the DAT Merkle Tree contain a BLAKE2b or SHA256 (depending on the docs being read) hash digest of the referenced object. All parent nodes contain the hash digest and a cryptographic signature. The signature is generated by creating Ed25519 keys for each parent node and using them to sign the hash digest.
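
As a rough illustration of that signing step (again my own sketch, not DAT’s actual code), Node’s built-in crypto module can generate an Ed25519 key pair and sign a node’s digest:

const crypto = require('crypto');

// Generate an Ed25519 key pair and sign a digest with the private key;
// DAT stores such signatures alongside the hashes in parent nodes
const { publicKey, privateKey } = crypto.generateKeyPairSync('ed25519');
const digest = crypto.createHash('sha256').update('node contents').digest();
const signature = crypto.sign(null, digest, privateKey);

// Any peer holding the public key can verify the signature
console.log(crypto.verify(null, digest, publicKey, signature)); // true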

When sharing a locally-created site in the Beaker browser, or viewing one already shared on the network, you might notice the URI following ‘DAT://’ is a long hexadecimal string. This is actually the Ed25519 public key of the archive containing the referenced object being shared, and it’s used to encrypt and decrypt the content. The corresponding private key is required to write changes to the DAT archive.
The public key is, in turn, hashed to generate a discovery key, which is used to find the data objects. This ensures no third-party can determine the public key of a private data object that hasn’t been publicly shared.
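
A sketch of that derivation (hypothetical: DAT actually uses a keyed BLAKE2b construction, while this uses plain SHA-256 simply to show the one-way relationship):

const crypto = require('crypto');

// The discovery key is a hash of the public key, so peers can find each
// other by the discovery key without the public key ever being revealed
function discoveryKey(publicKey) {
    return crypto.createHash('sha256').update(publicKey).digest('hex');
}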

Beaker

The Beaker browser looks very much like the standard Firefox browser on the surface, and it can be used to browse both DAT:// and HTTP:// addresses. As we can see, DAT sites are rendered just as well as those on the conventional Web. The only problem is that, as with Tor and I2P, sites are hosted on machines that aren’t online 24/7, so many of them are unreachable at a given time.

From the Welcome dialogue, we can get straight to setting up a personal Web site for publishing on the DAT Web. A default index page, script.js and styles.css are included ready for us to customise. In addition, Beaker allows us to share the contents of an arbitrary directory on the machine it’s running on.

Previously-created sites are available under the ‘Library’ tab in the main menu. Sites that already exist will be listed under the ‘Your archives’ section, and can be modified and/or published.

What happens to a published site when the local machine is offline? There is a method to keep a site accessible, by somehow getting another person or machine to ‘seed’ the data. This is a short-hand way of saying another person could fetch a copy of the site and re-share it over the network. Seeding happens automatically as a user is actively browsing a DAT site.

The Node.js Modules

Several Node.js modules provide libraries that developers can use to implement DAT features in their applications.

  • hypercore: A component for creating and appending feeds, and verifying the integrity of data objects. The API exposes a number of methods under the ‘feed’ namespace for reading, writing and querying feeds.
  • hyperdrive: This is a distributed filesystem for P2P. One of the design principles is to reproduce, as closely as possible, the API of the core Node.js filesystem component, thereby making it transparent to application developers. This module enables a local file system to be replicated on other machines.
  • dat-node: A high-level component that developers could use to bring together other DAT modules and build DAT-capable applications.
  • hyperdiscovery: Module for network discovery and joining. Running hyperdiscovery on two machines with the same archive key will result in the archive being replicated between them.
  • dat-storage: The DAT storage provider. Used for storing secret keys, among other things, using the hyperdrive filesystem.

In conjunction with Electron.js and Node.js, the above modules can be used to develop a DAT-enabled desktop application, of which Beaker is just one example.
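
For example, sharing a directory with dat-node takes only a few lines. This sketch follows the usage shown in the module’s documentation; ‘./my-site’ is a placeholder path:

const Dat = require('dat-node');

// Create or resume a DAT archive in the given directory
Dat('./my-site', function (err, dat) {
    if (err) throw err;
    dat.importFiles();   // import the directory's files into the archive
    dat.joinNetwork();   // advertise the archive and connect to peers
    console.log('Sharing: dat://' + dat.key.toString('hex'));
});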

Node Discovery in Practice

Two components are used for this: discovery-channel and discovery-swarm. The discovery-channel component searches BitTorrent, DNS and Multicast DNS servers for peers, and advertises the address/port of the local node; to do this, it builds on the bittorrent-dht and dns-discovery modules. Using discovery-channel, the client can join channels, terminate sessions, call handlers on session initiation and fetch a list of relevant channels. The discovery-swarm module uses discovery-channel to connect with DAT peers and control the session.
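
A minimal discovery-swarm sketch, based on the module’s documented API (‘my-dat-discovery-key’ stands in for a real discovery key):

const swarm = require('discovery-swarm');

const sw = swarm();
sw.listen(3282);                   // port to advertise to other peers
sw.join('my-dat-discovery-key');   // announce on, and search, this channel
sw.on('connection', function (connection, info) {
    // 'connection' is a duplex stream to the remote peer;
    // 'info' describes how and where the peer was discovered
    console.log('Connected to peer at ' + info.host + ':' + info.port);
});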

Minds.Com and the Free Thought Project Interview

Recently I was listening to the Free Thought Project’s interview with Bill Ottman, the CEO of Minds.com, and thought it worth expanding on some of the points. If you haven’t already, Minds.com is worth checking out if you’re a content creator, blogger or citizen journalist looking for an alternative to the mainstream platforms.

The Free Thought Project was one of 810 accounts that got booted off Facebook and Twitter for ‘inauthentic activity’, in what seemed more like a co-ordinated act of political censorship. While the full list hadn’t been released, the main targets appeared to have been groups reporting on corruption within politics and law enforcement – you know, things we have a civic duty to discuss on the Web.
Quoting Brittany Hunter in a Foundation for Economic Education article: ‘What began with the ban of Alex Jones last summer has since escalated to include the expulsion of hundreds of additional pages, each political in nature. […] one thing is absolutely certain: we need more market competition in the realm of social media.’
What’s particularly worrying is that the Silicon Valley corporations aren’t simply private entities exercising their own rights, as is commonly argued in their defence. They represent a giant oligopoly that has a disproportionate amount of control over the means of communication on the Web, an oligopoly that’s engaged in a co-ordinated suppression of political opinion, an oligopoly with more influence on the democratic system and access to politicians than the Russian state could ever hope to gain.

An alternative is needed to democratise social media. For many people in the know, Minds.com seems to be that alternative. Here’s why:

  • Minds is production-quality, can be deployed as a finished application, and it’s open source.
  • Users don’t need to provide personal or identifying information when registering an account.
  • Minds was developed for content creators.
  • The developers are working on decentralisation solutions.
  • Minds.com supports crypto currency and monetisation.

The first point is an interesting one. In Ottman’s opinion, a solution released as proprietary software cannot be a viable alternative, because of transparency or some such. I think he might have conflated administrative integrity with software integrity – that open source projects have been pressured into adopting a uniform ‘Code of Conduct’ demonstrates the problem with that reasoning. Personally, I don’t think the open/proprietary question has much bearing on a platform’s viability as an alternative to Facebook, unless there’s a need to verify claims about certain features, such as whether true end-to-end encryption is being provided.
No, what’s more important is that Minds isn’t a half-baked proof-of-concept, but is a completed iteration comparable in quality and appearance to any mainstream social media site. This is the deciding factor that determines whether a solution would gain traction. Anyone could clone the software, deploy it on their own server and run their own version of Minds.com.

The option to register accounts anonymously/pseudonymously with Minds.com is probably the most important feature, because I strongly believe we should be setting boundaries between our online and offline lives, and between family, social circle, work colleagues and strangers. Such a thing isn’t really possible on a social network in which everyone’s posting under their real names. Also, I don’t think it’s possible, in our current political climate, to have any meaningful debate without pseudonymity, since it seems fashionable to ensure anyone expressing a dissenting opinion suffers disproportionate ‘social consequences’.

An undersold feature of Minds.com is the ease with which a citizen journalist, blogger, whistleblower, etc. can create and publish content. For the individual user, who wants to protect his/her identity, a Minds.com channel (with publicly-viewable blog posts) is cheaper and easier to maintain than a Web site, and it still provides the same benefits in terms of posting content and getting views.

Problems with the Design and Architecture

Now, for the things I’m not entirely sure about: My main criticism is that Minds.com is not (yet!) actually ‘engineered for freedom of speech, transparency and privacy’ in any tangible sense, as it’s still a centralised service hosted on AWS in the United States. Whether Minds.com defends its principles actually depends on the people running it – people who could sell Minds.com to a corporation, people who might face legal, financial and political pressures, and people who would eventually be hiring others.

When asked by Neoxian, writing for Steemit, whether Minds could truly be considered decentralised, Ottman gave the following answer:
‘Good questions. It’s decentralized in that ultimately, yes, nodes will be able to optionally federate (this is still in dev). It is censorship resistant in that we allow all legal content, and in the future will integrate torrent options.’

This is actually not an empty promise. The Minds developers have already been working on a decentralisation component called ‘Nomad’, which is based on the Beaker browser and the DAT protocol. I’ve experimented with these briefly this weekend, and they really do work. If a P2P system does go mainstream, it’s likely to be this.