How to Disable the Vivaldi Browser Keyring Request

A problem that’s been bugging me for a while is the GNOME Keyring pop-up that kept appearing in the Vivaldi browser on every other page I loaded.

In the Vivaldi settings, under the Privacy tab, uncheck the Save Webpage Passwords option.

Vivaldi can be launched with the keyring disabled using the following command:
$ vivaldi --password-store=basic

But we don’t want to do that each time, so we could add the option to the menu.

Right-click the desktop’s main menu to edit it (‘Edit Menus’). Here we can modify the start options for the desktop applications. Find the entry for Vivaldi, and right-click to see the option for ‘Properties’.

In the Launcher Properties we can append ‘--password-store=basic’ to the command, so it should read something like:
/usr/bin/vivaldi-stable %U --password-store=basic

Close the windows, restart Vivaldi and the pop-up should no longer appear.
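Alternatively, if the desktop environment doesn’t provide a menu editor, the same change can be made by editing the Exec line in Vivaldi’s .desktop launcher file – typically somewhere under /usr/share/applications/ or ~/.local/share/applications/ (the exact path and filename may vary between distributions):
Exec=/usr/bin/vivaldi-stable %U --password-store=basic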

Won’t Commit

Sometimes the command line is a better and more reliable way of pushing changes to a Git repository. The sync feature in the Windows client does something the Linux GUI client apparently doesn’t, and the latter was refusing to update the remote repository.

First, navigate the command line to the project’s local repository directory. Next, check a few global configuration values, if the client hasn’t been set up already. The important ones are the user name, email address and text editor – otherwise you’d probably end up stuck in vim whenever the ‘-m’ option isn’t used to supply a commit message:
$ git config --global user.name "Michael"
$ git config --global user.email michael@example.com
$ git config --global core.editor nano
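To confirm what has already been set, the existing configuration can be listed:
$ git config --global --list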

It’s quite possible my local working directory was in a ‘detached HEAD’ state, which basically means I could have been modifying an older version of the project rather than the current one – ‘git checkout [commit identifier]’ puts the working copy into that state, and ‘git checkout master’ points it back at the repository’s master branch.
The ‘pull’ and ‘push’ commands should then ensure the local and remote repositories are in sync again:
$ git checkout master
$ git pull origin
$ git push origin
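Incidentally, whether the working copy really is in a detached HEAD state can be confirmed with ‘git branch’, which marks the current position with an asterisk and shows something like ‘(HEAD detached at [commit])’ when no branch is checked out:
$ git branch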

The output should show everything as being up to date at this point. The pull seems to retrieve all files and history, which might be intensive depending on the size of the project.
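If the full history is too much to fetch, a shallow clone limited to the most recent commit is one way around it (the repository URL here is a placeholder):
$ git clone --depth 1 https://github.com/[username]/[project].git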

Submitting the Changes
To see the current status – which files have changed locally and whether the branch is ahead of or behind the remote:
$ git status

In my case there were two updated files, index.html and main.css, and another two .ttf files had been deleted from the local directory. To commit the changes, the files must first be ‘staged’ – the concept is the same as including and excluding files in Visual Studio’s Team Explorer before committing.
$ git add [file1] [file2] [file3]
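Deleted files count as changes as well; rather than naming everything individually, ‘git add -A’ stages new, modified and deleted files in one go:
$ git add -A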

Running ‘git status’ again lists the staged changes.

Now the changes are ready to commit:
$ git commit -m "Updated home page and CSS"

The command without the ‘-m’ option will open a text editor for entering the commit message.

Finally the changes need to be ‘pushed’ to the remote repository.
$ git push origin master

Note the 7-digit hexadecimal numbers in the output. These are the abbreviated forms of the commits’ SHA fingerprints, and they serve as identifiers in the history log.

Restore / Recover
The change log can be viewed with the ‘git reflog’ command, which lists both the abbreviated commit identifiers and the HEAD@{n} references. Either can be used for selecting entries.
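For example, to list the recent positions of HEAD:
$ git reflog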

To view the details of a specific entry, use the ‘git show’ command. ‘git show [identifier]’ will print the full SHA fingerprint, the date/time, the change author, and the files and sections of code that were changed:
$ git show [change identifier]

To revert to an older version of a file, we can check out that file from an earlier commit:
$ git checkout [commit] [filename]

Omitting the filename will revert everything in the local directory to the specified version. Because the checked out file might differ from the current version in the master branch, it should appear as another change to be committed.
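On newer versions of Git (2.23 onwards), ‘git restore’ does the same job with a slightly clearer syntax:
$ git restore --source=[commit] [filename]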

A Bit Late to the Party

Even though I’ve been to God knows how many developer meetups and events where everyone seems to be using GitHub, I’ve only recently got round to setting up an account there. I don’t really do much in the way of collaborative development outside my day job these days, and I’ve been working almost exclusively with Visual Studio and TFS. I wanted to put together a repository of hacks and useful code for junior developers that might be joining us, though.

Setting up a Repository and Mapping
Getting started was a simple matter of creating a repository through the GitHub site, installing a few client applications and mapping the repository to local directories. By far the easiest way is to use the official GitHub desktop client. I’m also using GitKraken on another machine, and Git-Cola on a Linux system.

Even if there are no source files present, the Git client will at least create a sub-directory containing the hidden .git directory that holds the repository metadata.

Now it’s possible to add source files to the directory and use the desktop client to push them to the server. New and modified files appear in the desktop client under the ‘Changes’ tab. You’ll need to click ‘Commit to master’ and then the Sync button. For Git-Cola, the changes must be staged, committed and then pushed.

Visual Studio Code
Git version control is integrated into Visual Studio Code. The editor will read the .git directory when opening a local folder containing a repository clone.

To check in the changes, the Git version control button in the sidebar opens the Source Control view.

But it seems the changes still need to be pushed to GitHub using the client application.

Site
A GitHub account can also be used for hosting websites, which could be better than a wiki and a list of repos. The way to go about it is to create a new repository, but this time name it ‘[username].github.io’. We want a README here also.

The Master branch should be set as the source in the GitHub Pages section of the repository settings.

Technically all that’s needed here now is an index.html, and whatever else would make up the site. Of course, the site files can be cloned, modified and checked back in using the version control system for offline editing.
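As a rough sketch of that offline workflow (the username is a placeholder, and the page content is just an example):
$ git clone https://github.com/[username]/[username].github.io.git
$ cd [username].github.io
$ echo "Hello, GitHub Pages" > index.html
$ git add index.html
$ git commit -m "Add placeholder home page"
$ git push origin master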

The Vault 7 Release

A price of having an overgrown surveillance industry and routine violations of the US Constitution is the inevitability of classified material being exposed. There are too many former CIA hackers with sins to confess, but I wonder about the motives behind this leak. Certainly last week’s Vault 7 release is voluminous and shows that a comprehensive range of things has been compromised, but surprisingly little has been exposed so far relating to violations of the Constitution. What kind of intelligence service doesn’t develop tools and methods for targeted surveillance?
However, there’s a lot that hasn’t been revealed. Wikileaks’ Twitter post claims it’s less than 1% of the material they might publish. The CIA has close ties to Silicon Valley, a data collection budget over four times that of the NSA and a comparable allocation for data analysis. The budget for Computer Network Operations (basically what the Wikileaks material exposes), though, is much smaller. According to the press release, the CIA’s Center for Cyber Intelligence had over 5,000 users. It’s therefore a safe bet the CIA does have its own mass surveillance programmes, and anyone of interest to the CIA could have their devices hacked by the Center for Cyber Intelligence.

The consequences of weaponised malware aren’t only domestic. Weaponised malware set a precedent for state-sponsored malicious hacking, and undermined the moral standing and credibility of the US government. When there is a malicious attack on a given state’s network, there’s no telling who was responsible, especially now that we know the CIA was developing methods of implicating other states. It therefore becomes ludicrous to blame an adversary without very compelling evidence. For example, could we still be so certain the Russian government was responsible for the alleged hacking of the DNC servers, which I believe was unrelated to the published DNC emails?

Things of interest
* Much of the material under the Operational Support Branch section contains useful literature for developers and hackers. There you’ll find tutorials, product documentation, tips, coding practices, links, etc.

* I found a reference to something called ‘Palantir’ in the docs, which appears to be a testing tool. The name caused a bit of fuss when it appeared in the Snowden material, as it was assumed to refer to the company of that name that sells OSINT software.

* Some material deals with defeating ‘personal security products’ – anti-malware that the average home user would have installed. So far, they seem to have broken past AVG, F-Secure, Comodo and Bitdefender, usually through DLL injection/hijacking.

Dark Mail

What makes the Dark Internet Email Environment (DIME) project look promising is that the Dark Mail Alliance consists of the same principled and highly skilled engineers who brought us PGP/OpenPGP and the Silent Circle products.

GPG should be an ideal solution for protecting emails on third-party servers – it’s highly scalable, since each person needs only a private key and access to a list of public keys; it’s extensively documented; and it has APIs that enable developers to incorporate it into their applications. Breaking GPG is non-trivial, and it would be unrealistic for the authorities to expect its developers to maintain a backdoor in an open source project.
The reality is that few people are using PGP/GPG/OpenPGP, since most of us want everything on demand with minimum effort. We all want to sign into our email accounts, or a service like Facebook, and have our messages instantly available. Hardly anyone wants to download and configure anything, which is why numerous innovative privacy solutions fail to gain traction. StartMail and DIME are two different approaches to solving this.

My understanding, after reading through the Architecture and Specifications document, is that DIME is fairly close to what an automated GPG solution would provide, alongside a form of onion routing. Overall it’s about ensuring everything sensitive, including email headers, is protected between the sender and recipient.

Development is based around four protocols:
* Dark Mail Transfer Protocol (DMTP)
* Dark Mail Access Protocol (DMAP)
* Signet Data Format
* Message Data Format (D/MIME)

DIME should be deployable without too much work. Speaking to Ars Technica, Levison stated: ‘You update your MTA, you deploy this record into the DNS system, and at the very least all your users get end-to-end encryption where the endpoint is the server… And presumably more and more over time, more of them upgrade their desktop software and you push that encryption down to the desktop.’.

Message Structure
The message structure consists of an ‘envelope’ encapsulating the message body and email headers, so the protected information includes the sender and recipient addresses. A layered encryption method allows the mail relays to access only the data they need to forward the messages, and under D/MIME the sender and receiver identities are treated as part of the protected content. That is, the sender and receiver addresses are part of the payload and are encrypted along with the message body, which is something GPG doesn’t do.
This is important because the claim that authorities are inspecting only the ‘metadata’ of our Internet communications is misleading. Email addresses and message headers are actually contained within the payload of the TCP/IP traffic being inspected – the content must be read in order to read the email addresses.

Unlike conventional email, the message headers (including the To and From fields) really are treated as confidential, and routing is instead determined by the Origin and Destination fields (AOR and ADR).

Key Management
Public keys, signatures and other things associated with identities are managed as ‘signets’ – all the cryptographic information required for end-to-end encryption between communicating parties. There are two types of ‘signet’: Organisational signets are mapped to domains and contain keys for things like TLS. User signets, on the other hand, are mapped to individual email addresses, and contain public keys associated with them.

There are three modes: Trustful, Cautious and Paranoid. These specifically relate to whether the client or the servers handle key storage and encryption. Users have the option of a) ensuring that only their client devices have a usable decryption key, or b) trusting Lavabit to manage cryptographic keys which are apparently extremely difficult for Lavabit admins themselves to access.

Dark Mail Transfer Protocol (DMTP)
DMTP handles the delivery of encrypted messages, while another protocol, D/MIME, protects the content against interception. Basically the email or message is encapsulated, with headers revealing only enough information for servers to relay the message. In order for the encryption to be implemented, the sender must look up the recipient’s public key(s), which happens through a ‘Signet Resolver’ fetching a ‘signet’ that contains the public key.
DMTP is also the transfer protocol for retrieving ‘signets’.

The transport method is actually quite similar to conventional email. It uses an additional field in the domain’s DNS records, pointing to the DIME relay (DX) server.

One requirement of DMTP is that all sessions must be secured by TLS, and with a specific cipher suite (ECDHE-RSA-AES256-GCM-SHA384).
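As a quick way of checking whether a given server offers that suite (the host and port here are placeholders), OpenSSL’s client can attempt a handshake restricted to it:
$ openssl s_client -connect [host]:[port] -cipher ECDHE-RSA-AES256-GCM-SHA384 -tls1_2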

Dark Mail Access Protocol (DMAP)
The finer details of this haven’t been established yet, and the section appears to be notes on a few points. DMAP is being designed to handle the key exchange and authentication side of the secure email implementation, making sure the client’s keyring is synced with the ‘signet’ admin servers.
DMAP is to incorporate something called ‘Zero-knowledge password proof’ (ZKPP), a way for two entities to confirm that each knows a secret value without exchanging that value.

Installation of DIME on Mail Servers
A development version of the DIME library can already be installed on a Linux system. Unfortunately I haven’t managed to get this compiled and installed on Linux Mint, even with the dependencies sorted. I’m still working on installing these on a CentOS server.
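For reference, the rough sequence I’ve been attempting – this assumes the libdime sources from the Lavabit GitHub account, and the exact build targets may well differ:
$ git clone https://github.com/lavabit/libdime.git   # repository location assumed
$ cd libdime
$ make   # default target assumed; the project README has the actual build steps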

The dime executable enables the lookup of signet/address mappings, verification of signets, the changing of DX server hostnames and the reading of DIME management records. The sending and receiving servers are determined using the dmtp tool. Signets are generated and managed as .ssr files using the signet tool.

A proof-of-concept program (genmsg) is included for sending Dark Mail messages from the command line. To do this, the sender specifies the sender address, the recipient address and the path to the ed25519 signing key.
Once delivered, the messages are stored in a typical database schema on a Magma server.