SharePoint 2013 First Look



As a SharePoint admin-to-be, I spent the weekend trying to understand what it actually is. My first task is to write up a coherent explanation of SharePoint 2013. After trawling through hours of pseudo-technical marketing jargon, I decided the best way to learn is to just go through the site options myself. Anyone with an account and a couple of spare hours can do most of this.

The first thing to note about SharePoint is that it’s an object-based system – if Web Applications, documents and users are Lego bricks, SharePoint is the Lego board on which they’re put together. SharePoint is a platform on which Web Applications are assembled in a structured way to deliver a service. Sites themselves are constructed by putting together components known as ‘Web Parts’.
Maybe this is a slightly patronising way of describing it, but there’s a reason why it’s important to view SharePoint as an object-based system. What’s also important is that users get away from thinking in terms of files and directories, and start viewing documents as objects with attributes.

Sites and Site Collections
While SharePoint can be considered an advanced Content Management System, there are some differences. An organisation using SharePoint will have a master, top-level site, with multiple sub-sites added beneath it. Alternatively, there might be a Site Collection for each department, with a site for each team or business unit. Each Site Collection has its own administrator, which distributes the task of managing the whole thing.


Remember the point about SharePoint being more a Web application platform? A site consists of ‘Web Parts’, each being a discrete component that performs some kind of page function – maybe a links container, calendar, a list of files or a blog entry. Web Parts are generally deployed at the Site Collection level, and can be re-used across sites for consistency.

Document Libraries
The primary reason for encouraging the use of SharePoint is better document management, and to make documents easier to find. If, for example, meeting notes from six months ago had to be retrieved from a directory containing hundreds of files, it might be like searching for a needle in a haystack.
SharePoint solves this problem by attaching metadata (or attributes) to each document, so queries and filters can be applied to a document list. The standard Microsoft Word (and LibreOffice) file already has some of these attribute fields as the document properties – with Document Library attributes, it’s simply a matter of attaching additional fields to this.

As SharePoint permissions are based on Active Directory, and files and folders behave like objects, their permissions are inherited from the parent by default. In order to create a private library that’s only accessible to selected users, we must therefore break that inheritance before setting new permissions.

Every SharePoint user should be able to set up their own Document Library. Demonstrating with my personal site, I start off with the default Document view. One of the first things to do is add the metadata/property fields, so attributes can be assigned to each document.

Click the LIBRARY tab, and the Office-type ‘ribbon’ menu will appear along the top. In this, we want the ‘Library Settings’.


In the Settings page there’s a section for columns. By default they’ll include the properties that are already standard for Microsoft Office documents, but the chances are we’ll want to add more by creating extra columns. For example, we might want a property for subject area and document type. This could later be useful for anyone who wants to, for example, search for documentation on a specific project.

Since I’ve added a property field here called ‘Document Type’, I can now use that to filter my document list.


To the left of the search box, there is an option to Create View. Views are just different ways to display the contents of a Library.


New files can be created from templates within SharePoint. This will open the Office 365 editor, and the document will automatically be saved in the Library. Templates can be added as ‘Content Types’.

Personal Sites
Most people don’t bother with static sites unless they have to. In my workplace it’s common for us to ask around, and email each other, for information that’s already posted somewhere on the portal, and we know that most people use web browsers to communicate rather than to view stuff. This is why ‘social’ media is more effective for drawing attention to something or other.

With SharePoint 2013 all users should get their own personal site, and the opportunity to interact, collaborate, contribute and otherwise become involved in whatever content is hosted. As we can see, it provides an internal ‘social’ network. What better way to facilitate ‘knowledge sharing’ and searching for expertise?
The personal space is accessed by clicking the user’s name at the top-right and ‘About Me’. Initially it looks a bit sparse, but other ‘Web Parts’ become available after completing the ‘edit your profile’ page.


SharePoint and Team Foundation Server
One of the problems now is I’m administrating a site for a DevOps department, not an average sales or admin team, and we’re already using Team Foundation Server as our collaboration thingy. A couple of people have already been asking whether it’s possible to integrate the two. Technically it is possible.

In the Team Foundation Server Administration Console, we can see there are a couple of SharePoint options. One of them takes us to the SharePoint Extensions Configuration Wizard.


The other option is to set up SharePoint Web Applications. I’m assuming these are the applications to be embedded in a SharePoint site for displaying things from TFS. Here the service accounts of the web application and the main SharePoint site are registered so they can access the TFS resources.


When the ‘Nothing to Hide’ Argument Fails



Some good might still come of the Ashley Madison incident, if we’re prepared to learn from it. My sympathy extends to the innocent parties who had nothing to do with Ashley Madison itself. Most of the others aren’t so much victims as people who took risks and got caught. I’m not saying that to occupy the moral high ground. I’m arguing that our reliance on trust makes the general lack of morality a central issue here, especially as we don’t get to decide which part of our online lives is exposed, or how that information is used against us. Solving the core problem is going to require a change of mindset, and I’m not optimistic.

None of us ‘conspiracy theorists’, who fifteen years ago maybe thought the New World Order would oppress the masses using RFID or somesuch, could then have envisaged a world where corporations like Google could build a detailed picture of one’s private life using data aggregated from sources we’re not even aware of, or that profiling would become so damn intrusive and insidious, or that mass surveillance would far exceed what ECHELON was capable of, or that people would opt for an operating system that informs on them.
Maybe I was a paranoid ‘conspiracy theorist’ for warning about what the world of ‘big data’ would entail, but here I am, looking at a pilfered database of people who intended to commit adultery, matching millions of names against email addresses, places of residence and their sexual fantasies. For myself, it’s not the sex that makes it salacious, but the fact the database even exists, that 33+ million personally identifiable records have become public, and Ashley Madison acquired data elsewhere to massage the figures. Whatever next?

While the Impact Team caused untold damage, and quickened the ruination of God knows how many relationships, they also effectively demonstrated that many (or even most) of us can indeed be compromised, and impressed upon us that those we entrust our secrets to should not be deemed trustworthy by default. I learned that lesson a while ago, in my own painful and career-limiting way.
Most of us have done something stupid on the Internet, and shared information that should really be private. Most of us are certainly potential targets for identity theft. Again, once that information’s been put on the Internet, we don’t get to decide which part of it becomes public, and anyone who can be blackmailed or similarly compromised is basically fucked.

Another lesson, which is more of an observation I would have posted the other month following the Hacking Team incident, is that the exposure of morally questionable activities has become a common motivation for ‘malicious’ hackers. Recent victims include Sony Pictures, Hacking Team, Ashley Madison (of course), Stratfor, the Office of Personnel Management, and even the NSA itself. There is a distinct pattern here, and it means ‘threat assessments’ should be revised to take it into account.

Can the FBI Decrypt Hidden TrueCrypt Partitions?



Earlier this month a story broke about Christopher Glenn, a former contractor who was found guilty of copying sensitive files from the US military’s systems. It wouldn’t have gained much interest, but for one thing: the FBI managed to access data stored by Glenn in a hidden TrueCrypt partition. The question was posed: can the FBI decrypt TrueCrypt? The short answer would seem to be ‘yes’, but not because of some weakness in the software itself, or because of some capability that’s not yet public.

Glenn’s ‘compound’ was raided because he was allegedly engaged in the sexual exploitation of minors. However, he was actually convicted for the theft of sensitive military files, none of which apparently were ‘Top Secret’, and there’s no mention of espionage. He was fired for stealing the files in October 2012, but it wasn’t until March 2015 that the drive containing the encrypted partition was acquired, according to The Register’s article. From this, we could surmise that the FBI managed to defeat Glenn’s encryption within a short period, during a fairly routine investigation. But how? The three most probable explanations are:
* Glenn wrote the 30-character password down somewhere.
* The FBI coerced him into revealing the password.
* The encryption key was stored in system memory during the raid.

Even with technical expertise, it’s actually very hard to use encryption to withhold evidence of a serious crime. As Glenn discovered the hard way, it’s analogous to cleaning up a physical crime scene, trying to confound an experienced forensics team – and that holds true even for an isolated computer. Doing this in an IT ‘ecosystem’ that’s more or less fully compromised by the intelligence services, in collusion with Silicon Valley firms, requires a paranoid mindset, an extreme level of self-discipline and meticulous OPSEC.

On the surface, a fully encrypted volume protected with a complex 30-character password is a good way to use TrueCrypt, but the problem again is that the partition was being encrypted and decrypted within a larger ‘ecosystem’. The activity has a footprint. An operating system might record the fact that a hidden partition was occasionally being mounted, the password/key is retained in system memory while the volume is open, and the NSA/GCHQ could determine whether someone of interest downloaded a copy of TrueCrypt. In the United Kingdom, that’s pretty much enough to pressure a suspect into revealing a password under RIPA 2000.

Another APT Tutorial



Because Kali Linux is very popular among those starting out in computer security, and because some of us spent months shaping our toolsets with it, it’s worth covering the main points of the Advanced Package Tool (APT).
APT is one of several package managers that have made life easier since the days when it was more common to configure and compile software from source. Working on top of dpkg, APT handles the download and installation stages and resolves package dependencies for us.

Most users would interact with APT through a GUI such as the Synaptic Package Manager, but the command line provides more options and finer control over package management. Plus there might be occasions where the command line must be used for resolving issues. Or perhaps you installed a distro without a desktop environment and need to build from that using APT. Or perhaps you have the desktop but not the Synaptic GUI.

Getting Started
If the Linux distribution has just been installed, or the package manager hasn’t been used in a while, it’s important to refresh the package lists before doing anything, to avoid problems associated with broken or stale package headers. The refresh operation is performed using the ‘apt-get update’ command.
It’s also a good idea to upgrade the system, as mainstream Linux distros can rapidly become outdated. There are two options for this – a plain upgrade, or a full upgrade that may also add or remove packages to resolve changed dependencies:

$ apt-get upgrade
$ apt full-upgrade

Basic Command Set
There are actually several variations of the ‘apt’ command.


Using ‘apt’ on its own will display the simple command set.


As you can see, all the functions of a GUI-based APT package manager are present. There is also an ncurses-based interface called ‘aptitude’.


Getting Repo and Package Lists
You might want to view and back up a list of package repositories for the installation. The configured sources live in /etc/apt/sources.list and /etc/apt/sources.list.d/, so the active entries can be redirected to a text file, e.g.

$ grep -h '^deb' /etc/apt/sources.list /etc/apt/sources.list.d/*.list > repo-list.default.txt

Each entry is in exactly the same format used for adding a new repo later, so it’s useful for sharing known good repository addresses between installations.
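As a self-contained sketch of the same idea (using a scratch file and made-up contents rather than the real /etc/apt/sources.list), grep pulls out the active binary-package entries while skipping comments and ‘deb-src’ lines:

```shell
# Create a sample sources file (contents are illustrative only)
cat > /tmp/sample-sources.list <<'EOF'
# Official repositories
deb http://deb.debian.org/debian bookworm main contrib
deb-src http://deb.debian.org/debian bookworm main
# deb http://example.org/disabled bookworm main
EOF

# Keep only the active binary-package entries (lines starting with 'deb ')
grep '^deb ' /tmp/sample-sources.list > /tmp/repo-list.txt
cat /tmp/repo-list.txt
```

Only the uncommented ‘deb’ line survives, in exactly the format needed to paste into another installation’s sources list.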
Another command will dump the full list of packages available from the current repositories:

$ apt list > package-list.txt

Or, alternatively, ‘apt-cache stats’ will print summary statistics, including the total number of package names:

$ apt-cache stats

What if we want to install a program from the list? We won’t always know the exact full name of a package for a given application, and this is where the ‘apt search [application]’ command is useful.
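A real ‘apt search’ query needs the package database, but the idea can be sketched offline with grep over the package list saved earlier (the package lines below are sample data, not real query output):

```shell
# A few lines in the style of 'apt list' output (sample data only)
cat > /tmp/package-list.txt <<'EOF'
nmap/kali-rolling 7.94 amd64
wireshark/kali-rolling 4.2.0 amd64
zenmap/kali-rolling 7.94 amd64
EOF

# Roughly what 'apt search map' does: a case-insensitive match
grep -i 'map' /tmp/package-list.txt
```

Here the match returns nmap and zenmap but not wireshark; ‘apt search’ goes further by also matching against package descriptions in the cache.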


APT Sources
Sooner or later you might want to add other repositories for the current installation, either because the defaults become outdated or to get packages that aren’t available in the defaults. To do this, we can edit the apt sources.

$ apt edit-sources

Or edit the sources list file directly at /etc/apt/sources.list.
These cannot simply be URLs. A source must be entered in a specific format, along the lines of:
deb [archive URL] [distribution] [component(s)]
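As a sketch, appending a complete entry (here Kali’s rolling repository, written to a scratch file rather than the real /etc/apt/sources.list) could look like:

```shell
# Append a complete source entry to a scratch copy of sources.list
# Format: deb <archive URL> <distribution> <component...>
echo 'deb http://http.kali.org/kali kali-rolling main contrib non-free' \
    > /tmp/sources.list.new
cat /tmp/sources.list.new
```

After merging a line like this into the real file, an ‘apt-get update’ is needed before the new repository’s packages become visible.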

Broken Package Headers
Broken or corrupted package lists cause all kinds of problems with apt. To fix them, we remove the cached package lists in /var/lib/apt/lists, then repopulate the directory.

# rm -vf /var/lib/apt/lists/*
# apt-get update

The APT Shell
An optional add-on is the APT Shell, listed as the ‘aptsh’ package. I haven’t found it useful, as it provides more or less the same thing as the command options.

Further Information
The man and info pages are always a good place to start for the apt commands. For anyone who wants more in-depth information on how the package manager works, documentation can be downloaded and viewed by installing the apt-doc package. This will place a set of text and HTML files in /usr/share/doc/apt-doc/, and these can be viewed in the command line using the lynx browser.


Kali Linux 2



UWN Thesis’ author spent a few months re-engineering her installation of Kali Linux back in 2013, and has posted a good collection of useful guides to customising an installation using APT. I preferred to put together something comparable by adding tools to Linux Mint with the same package manager.

Kali gives the option of downloading a 3GB+ ISO with an exhaustive set of tools, or a 900MB Light version that users can build on. With the latter there aren’t many security-related programs on the desktop or command line, but Kali’s package manager can download from several large repositories by default.


The Kali Linux security tools list is a good place to start.

Forensic Mode


In forensic mode you still get the same collection of programs in the GUI as in the live boot mode. The main difference is that forensic mode doesn’t touch the hard disk or mount any connected storage devices.
The only obviously useful programs I could find here were dd and a few utilities for cloning NTFS filesystems.

