Dissecting a Master Boot Record



Using the HDHacker utility I imaged a laptop’s MBR. The file was just 512 bytes, and this is how it appeared in a hex editor:


The first 446 bytes are the boot code, which is copied into physical memory and executed by the BIOS after the Power-On Self-Test (POST). This code in turn loads the boot partition and executes another bootloader, such as GRUB or LILO, within which the parameters for the next stage of the boot process (partition, kernel image, etc.) are determined.
Somewhere within the first 446 bytes of the imaged MBR we also find the ‘#SafeBoot’ string – this is usually an indicator that a full disk encryption product was installed. A list of strings associated with the different encryption products is available in a Guidance Software blog post.
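Checks like this are easy to automate. The Python sketch below scans the boot-code region of an image for marker strings; the marker list is illustrative (only ‘#SafeBoot’ comes from this image), so for real casework the full list in the Guidance Software post should be used:

```python
# Scan the boot-code region of an MBR image for strings associated with
# full disk encryption products. The marker list is illustrative, not
# exhaustive -- consult the Guidance Software list for real casework.
FDE_MARKERS = [b"#SafeBoot", b"SafeGuard", b"PGPGUARD"]

def find_fde_markers(mbr: bytes) -> list:
    boot_code = mbr[:446]          # only the boot code, not the partition table
    return [m.decode() for m in FDE_MARKERS if m in boot_code]

# Dummy 512-byte image with a marker embedded, standing in for a real dump:
image = b"\x00" * 100 + b"#SafeBoot" + b"\x00" * 403
print(find_fde_markers(image))     # -> ['#SafeBoot']
```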

After the boot code, at offsets 0x01B8 to 0x01BB, there is a four-byte disk signature (the two bytes that follow, at 0x01BC–0x01BD, are normally zero).


Marking the end of the MBR is the ‘0x55AA’ signature.
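The fixed layout described so far – boot code, disk signature, partition table, then 0x55AA – can be checked programmatically. A minimal Python sketch, using a dummy buffer in place of a real 512-byte image:

```python
# Parse the fixed offsets of a 512-byte MBR image. A dummy buffer stands
# in for a real dump; with a real file you'd use open("mbr.bin", "rb").read(512).
mbr = bytearray(512)
mbr[0x1B8:0x1BC] = b"\x12\x34\x56\x78"    # example disk signature
mbr[0x1FE:0x200] = b"\x55\xAA"            # end-of-MBR signature

boot_code = bytes(mbr[:446])               # bootstrap code area
disk_signature = bytes(mbr[0x1B8:0x1BC])   # four-byte disk signature
partition_table = bytes(mbr[0x1BE:0x1FE])  # four 16-byte partition entries
valid = mbr[0x1FE:0x200] == b"\x55\xAA"

print(disk_signature.hex(), len(partition_table), valid)   # -> 12345678 64 True
```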

The Partition Table
The next 64 bytes, from offset 0x01BE, contain the partition table:


Each entry in the partition table tells the operating system the filesystem type and the partition's location and size. This is presumably the information a partition editor reads and writes.

Since only four partitions can be mapped within the MBR, we shouldn’t expect to be able to have more than four primary partitions on a given disk. We can, however, map an ‘extended partition’ that might contain several logical partitions.

Can we decode the following partition table entry?
80 20 21 00 07 FE FF FF 00 08 00 00 00 D8 42 25

The first step is to look at how the bytes are grouped into fields, and determine what each field represents. According to the Microsoft TechNet page, the entry breaks down as follows:

[80]          Boot Indicator (0x80 = active, 0x00 = inactive)
[20]          Starting Head (32)
[21 00]       Starting Sector (low six bits of 0x21 = 33) and Starting Cylinder (remaining ten bits = 0)
[07]          System ID (0x07 = NTFS)
[FE]          Ending Head
[FF FF]       Ending Sector and Ending Cylinder (maxed out, meaning the LBA fields below are used instead)
[00 08 00 00] Relative Sectors – the partition's starting LBA (little-endian: 0x00000800 = 2,048)
[00 D8 42 25] Total Sectors (little-endian: 0x2542D800 = 625,137,664)

The Boot Indicator marks this as an active partition – the value will be either 0x00 for inactive or 0x80 for active.
With a bit of hex-binary-decimal arithmetic, the size of the partition can be calculated from the final 32 bits in the table entry, which hold the total sector count as a little-endian value: the bytes 00 D8 42 25 decode to 0x2542D800, or 625,137,664 in decimal.
Partition table values count 512-byte logical sectors – even recent drives with 4,096-byte physical sectors present 512-byte logical sectors to the MBR – so the partition size is 625,137,664 × 512 bytes, which is roughly 320GB. Since that is the only partition in the table, we can say it's likely the MBR was on a 320GB hard drive.
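Decoding the entry programmatically is a useful cross-check on the arithmetic – in particular, the sector-count and starting-LBA fields are four-byte little-endian values:

```python
# Decode the 16-byte partition entry from the example above.
entry = bytes.fromhex("80202100 07FEFFFF 00080000 00D84225".replace(" ", ""))

active = entry[0] == 0x80                               # boot indicator
system_id = entry[4]                                    # 0x07 = NTFS
lba_start = int.from_bytes(entry[8:12], "little")       # relative sectors
total_sectors = int.from_bytes(entry[12:16], "little")  # partition length
size_bytes = total_sectors * 512                        # 512-byte logical sectors

print(active, hex(system_id), lba_start, total_sectors, size_bytes // 10**9)
# -> True 0x7 2048 625137664 320
```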

ASP.NET, OWIN and Authentication



At the top of the Startup.cs file for an MVC application, we find import statements for Microsoft.Owin and Owin. These imports are for the ‘Open Web Interface for .NET’, a technology for launching application modules, or ‘middleware’, between the application and IIS. Among other things, OWIN provides middleware for authentication with federated login systems and other Web services. The source code for the OWIN components used in Visual Studio can be examined on CodePlex.

A major clue to how MVC makes use of OWIN can be found in Startup.cs, in which there is a call to ConfigureAuth() with the ‘app’ object (an IAppBuilder instance) passed as a parameter:


This is a reference to another function in Startup.Auth.cs:


Both functions exist in different C# source files, but they are still within the same namespace and class. When the application is launched, ConfigureAuth() is called with the ‘app’ object as its parameter. Within ConfigureAuth() we see a number of components (the ‘middleware’) for handling authentication tasks being registered on ‘app’.
Each app.Use[ComponentName]() call is one of the many ‘extension methods’ listed on Microsoft’s IAppBuilder page. The extension methods register whichever components provide the app with a given feature.

OWIN and Third-Party Sign-In
Most of what an MVC application requires for third-party authentication (e.g. Google, Twitter, etc.) is provided by the OWIN library, and usually the hardest bit is getting the API key from the providers.

For this, disable Anonymous Authentication in the project’s properties. In Startup.Auth.cs there are function calls for the following services:
* app.UseMicrosoftAccountAuthentication()
* app.UseTwitterAuthentication()
* app.UseFacebookAuthentication()
* app.UseGoogleAuthentication()

Uncomment whichever service is to be used for third-party sign-in. In the following example, I’m using the Microsoft Outlook service, so I require the Microsoft Account Authentication:


In order to use this, the clientID and clientSecret parameters must be populated. Since I already have an MSDN account, I was able to sign in, register the application and get the values.
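Behind the scenes, the middleware uses these values to run the standard OAuth2 authorization-code flow. The following Python sketch shows the kind of redirect URL built in the first leg of that flow – the endpoint, client ID and redirect URI here are placeholder values, not the application's actual configuration:

```python
from urllib.parse import urlencode

# Sketch of the first leg of the OAuth2 authorization-code flow that
# authentication middleware performs with the client ID. All values
# below are placeholders for illustration.
def build_authorize_url(endpoint: str, client_id: str,
                        redirect_uri: str, scope: str) -> str:
    params = {
        "client_id": client_id,        # the ID issued when registering the app
        "redirect_uri": redirect_uri,  # where the provider sends the user back
        "response_type": "code",       # request an authorization code
        "scope": scope,                # permissions being requested
    }
    return endpoint + "?" + urlencode(params)

url = build_authorize_url(
    "https://login.example.com/oauth2/authorize",   # placeholder endpoint
    "example-client-id",
    "https://localhost:44300/signin-microsoft",
    "basic-profile",
)
print(url)
```

The user signs in at that URL, and the provider redirects back with a short-lived code that the middleware exchanges (together with the client secret) for the token mentioned below.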


After entering the values and running the application again, the Microsoft sign-in button should appear on the login page, and that should direct the user to the actual Microsoft sign-in page. After the user signs in, the authentication ‘token’ is retained by the application.


Active Directory and Federated Sign-In
Getting the application to use Active Directory took a little more work. The following are required for this:
* Active Directory domain
* Several Microsoft.Owin libraries for Active Directory authentication
* Calls to app.UseCookieAuthentication() and app.UseWsFederationAuthentication() in Startup.Auth.cs source

When creating a new project from an MVC template in Visual Studio, the application should be configured to use ‘No Authentication’ in the Change Authentication menu. In the Properties window for the project, set the ‘SSL Enabled‘ attribute to ‘True‘. Finally, in HomeController.cs, add the [Authorize] attribute just above the HomeController : Controller class.
When testing the application at this point, the user should see the familiar HTTP 401 error page. This means the application attempted (unsuccessfully) to check whether the user was authorised to view the Web application.

On to the next stage of the project: we need an Active Directory domain that we can administer. Fortunately there should be one set up in the Azure portal, in the Active Directory window. We need to add an application in order to get a MetadataAddress and Wtrealm value.
In the ADD APPLICATION setup panel, the Sign-On URL is the address and port number of the application running locally (e.g. http://localhost:48246/), and the App ID URI can be anything. In this case I used ‘http://adtest/WsAuth1’.

Once the application has been added/registered, we need to make a note of the App ID URI and the Federation Metadata Document URL.


Now we have a domain set up for the OWIN Active Directory and Federated Authentication modules to use. In Startup.Auth.cs, make sure the following import statements are present:
using Microsoft.Owin.Security.Cookies;
using Microsoft.Owin.Security.WsFederation;

If not, use Visual Studio’s package manager to fetch them.

Next you’ll need to add the following calls into the source:


When launching the application, the browser should now be redirected to a Microsoft sign-in page for the domain. On successful login, the home page for the application should load.


Starting with an ASP.NET Template (Part II)…



After trying numerous suggestions posted on Stack Overflow without luck, I eventually came across a much easier way of getting an ASP.NET application to display, modify and remove SQL database records, and very little actual coding was involved. The following builds on the project created in my previous post, but it can be done with a blank MVC template also.

Create a Database
Because I totally screwed my local installation of SQL Server, the database in this example is instead hosted on Azure. The important thing is we have a working SQL database that an application can connect to, and a note is made of the server address, user name and password.

For the next stage we need the Microsoft SQL Server Management Studio, to set up the database table. An initial connection attempt to an Azure-hosted SQL Server results in the following error message:


This is normal, and is resolved by adding a firewall entry for the local machine (Client IP address) in the Azure portal.


In the database, a table is created (in this case using SQL Server Management Studio) with three columns: userID, userTitle and userData. For Visual Studio to build an Entity Framework model from this, one of the columns must be set as the primary key. Here it's userID, and in the Column Properties I set the Identity Specification to ‘Yes’.
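For reference, the shape of this table can be sketched in a few lines. Here an in-memory SQLite database (via Python) stands in for the Azure-hosted SQL Server, and the table name ‘exampleTable’ is a placeholder:

```python
import sqlite3

# Illustrative sketch of the three-column table described above; an
# in-memory SQLite database stands in for the Azure-hosted SQL Server.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE exampleTable (
        userID    INTEGER PRIMARY KEY AUTOINCREMENT,  -- identity column
        userTitle TEXT,
        userData  TEXT
    )
""")
conn.execute("INSERT INTO exampleTable (userTitle, userData) VALUES (?, ?)",
             ("First record", "Some data"))
row = conn.execute("SELECT userID, userTitle FROM exampleTable").fetchone()
print(row)   # -> (1, 'First record')
```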


Creating the Application
As I’ve pointed out, this could be done either with an existing MVC project, or with a Visual Studio template. Normally when connecting an application to a database, we import a driver, add a connection string then pass queries within our code. It works a little differently with MVC.
The MVC project requires a ‘Model’, which is added by right-clicking on the Model folder and adding an ADO.NET Entity Data Model. In the Entity Data Model Wizard, I chose to use ‘EF Designer from database’. The details entered into the Connection Properties will determine the connection string for the project. The wizard asks whether sensitive information (the password) should be included in the connection string; for this example it doesn't really matter which option is chosen.


With a bit of luck Visual Studio will fetch the information required to build an Entity Framework model for the database.


Now that the project has a connection string and an EF model, it needs to be rebuilt so the model can be referenced by the source files to be added.

Controller and Views
The application requires a Controller to fetch records from the database, display them in the relevant CSHTML pages, and perform the appropriate actions when the user interacts with the application.
Right-click the Controllers directory, and select the option to add a Controller. It’s possible to code this from scratch, but here I’ve added ‘MVC 5 Controller with views, using Entity Framework’.


The important options in the next window are for ‘Model Class’ and ‘Generate views’. From the Model Class drop-down, select the database table/model. With the ‘Generate views’ option ticked, Visual Studio will set up the new MVC page to enable users to interact with the database.


At this point there should be a sub-directory and a set of CSHTML files within ‘Views’ for the model. Right-click on its index.cshtml entry, and select ‘View in Browser’. The result should be a working application that can query, modify and delete records stored in the Azure database.


And finally add the following line to _Layout.cshtml to add the table page to the navigation menu:
@Html.ActionLink("Database","../exampleTables/index", "Home")

Starting with an ASP.NET Template (Part I)…



I’ve found the literature on ASP.NET MVC tricky to digest. From what I understand, the application server handles the requests and parameters, executes whatever was coded in C#, then renders the site content just as a conventional Web server would. We therefore find two categories of code in an MVC project: a variant of HTML, and C#. The C# handlers are stored under ‘Controllers’, and the HTML markup under ‘Views’. There is also a ‘Models’ folder, but it’s not used in the following project.
In a future post, probably the next one, I’ll show an easy way I’ve found of making the application interact with a SQL database.

Opening an MVC Template
When creating a new ASP.NET Web Application project in Visual Studio, we get the option to create it from an MVC template. When all the prerequisites are loaded, the following should appear:


The template Web application has three default pages, a navigation menu and a layout. It can be launched by pressing the F5 key.

One of the first things I wanted to do was change the appearance and colours of the site. The CSS file for an MVC 5 Web application is bootstrap.css, in the Content directory. Normally editing CSS is a matter of changing a few items in a short file, but unfortunately the template file has almost 7,000 lines.
What I did was launch the developer feature in Firefox (with the F12 key), and hovered the mouse cursor over each line in the source until I found the element I wanted to change. I then searched for the relevant item in bootstrap.css with Ctrl+F. Firefox’s developer feature can also give us the line numbers of interest in the CSS file.


As an example, the navigation bar’s background in the template is ‘navbar-inverse‘. In Visual Studio, I used Ctrl+F to search through the relevant lines in bootstrap.css until I found the entry that determined the background colour. I also did the same for the elements I wanted to change, and eventually ended up with something like this:


Site Content
Adding and modifying the static content is straightforward, as this is done with standard markup in the CSHTML files. Viewing the files, you’ll notice they contain only the text and associated tags, and not the full HTML source. This is because ASP.NET handles the presentation for us, and much of it only needs to be created once, in the _Layout.cshtml file.

Adding a Page
A new page in this application would need two things: a ‘View’, and a ‘Controller’ in the HomeControllers.cs file. Here I’ll add a ‘Links’ page to the Web application, firstly by creating a handler method for the page within HomeControllers.cs:
public ActionResult Links()
{
    ViewBag.Message = "Links and resources go here.";
    return View();
}

With the controller in place, it is time to add the new page. In the /Views/Home directory there are .cshtml files for the three existing template pages. Right-click on the Views directory, then ‘Add‘… ‘View…‘.
After changing the name to ‘Links’ and proceeding with the default options, Visual Studio generated a Links.cshtml file that references the newly added controller method. Now, when a browser requests the page’s URL, the server (the ASP.NET framework) executes the handler for Links.
The final step to adding this page is to include it as an option in the site’s navigation menu, which is determined in the _Layout.cshtml file:


Adding C# Code
Throughout the source files there are embedded references to something outside the HTML markup, for example, ‘@DateTime.Now.Year‘, ‘@RenderBody()’ and ‘@Html.ActionLink’. These are known as ‘Razor helpers’, Razor being Microsoft’s method of embedding calls to C# functions within HTML source.
Again, these helpers are compiled and executed on the server before the rendered HTML is returned. My first example calls DateTime to display a clock on the Contact.cshtml page:


I added custom tags around it so the format of the output could be changed in bootstrap.css. As expected, the date and time are displayed on the Contacts page:


Small sections of C# code can also be placed directly within the HTML source, by adding it within the ‘@{ ... }‘ container. The following is an example of using Razor markup to declare a string as a variable and to display it on the Web page:

@{ var message = "This is a message"; }
<p>@message</p>

If it’s possible to declare variables and print them to a page, there should also be a way of reading variables from another source. Here I have added a text file to the App_Data directory with some data, then inserted the following code in one of the CSHTML files:
@{
    var dataFile = Server.MapPath("~/App_Data/applicationDataFile.txt");
    var userData = File.ReadAllLines(dataFile);
}

@foreach (string dataLine in userData)
{
    <p>@dataLine</p>
}

Input and Output
Razor and C# can also make the Web application interactive. As before, I added a method in HomeController.cs for the page, and created a typical Web form with standard HTML controls. In the source file I added the following to handle user input:



Another Razor helper is available for rendering charts from a set of given values. First add the following graph rendering function in HomeController.cs.


Then add the following line to whichever CSHTML file you want to display the graphs:
<img src="@Url.Action("DrawChart1")" />


Visual Studio and Application Lifecycle Management



Last week’s ‘Application Lifecycle Management’ (ALM) event hosted by the Visual Studio UK Team was mainly a demonstration of Team Foundation Server and Team Services. I’ve been using both (as well as Visual Studio Ultimate and Azure virtual networking) over the past year for work-related stuff and personal projects, but I’m still fairly new to this.
Microsoft has two Application Lifecycle Management products: Team Services, which is hosted and provided by Microsoft, and Team Foundation Server, which is the on-premises installation. Both are very much the same thing, but new features are added to Team Services before they’re included with TFS. Anyone with an MSDN account can get some hands-on experience with Team Services (available features might depend on the subscription).

The idea behind Team Services and TFS is to have a single portal through which development teams manage the workflow, from requirements specification to release and maintenance. To demonstrate this, the Microsoft people took some prepared code for a simple Web application, and went through what would be the development lifecycle in Team Services before deploying it as an Azure application.

At the core of a project are the source files and directories themselves, which can be viewed under the ‘Explorer’ tab. Basically everything that’s posted by developers through the version control system is viewable here – this should therefore be the definitive version of the source code.
The ‘Compare’ link should open two windows, enabling the source of two versions to be compared line-by-line. Comments that developers made when pushing their code to the server can be viewed by clicking the ‘History’ link.
For version control, Git and Team Foundation Version Control (TFVC) are supported, and both can be used in the same project – theoretically a project could be cloned from a Git repo and pushed onto TFS or Team Services. There is also better integration with GitHub, so it should be easier to take existing published code and push it to TFS. Apparently Microsoft has deprecated support for Subversion.

Every large project will have ‘work items’, and here they’re viewed or managed under the ‘WORK’ tab. The ‘Backlog’ here is simply a list of items to be delivered throughout the development lifecycle, such as a feature request or bug. Items can be assigned to individual members of the development team, and the completion/resolution status monitored. This should help assess the progress or readiness of a product for deployment. To make this easier, there are some visualisation features, including the Board view and ability to create various charts.
External members can be added to a project as ‘stakeholders’ if they have an MSDN account.


Using Queries, we can filter backlog items – for example, to get a list of bugs with a high severity value. Tasks might also be organised into ‘sprints’ or iterations according to their priority; this is also a good way of tracking exactly what is in each ‘sprint’.
If the query is stored, it’s possible to connect to the Team Foundation Server from within Microsoft Excel (under the TEAM tab), select that query, and import the results to a local spreadsheet file. Work items can also be managed in an Excel sheet and published back to TFS.

Under the ‘BUILD’ tab is where ‘build definitions’ are created for compiling the source that’s currently in the project repository. Recently Microsoft added more configuration options here, such as the target environment, deployment type, versioning, etc. XAML definitions are supported, but the feature is deprecated.
Builds can also be scheduled, which is useful in situations where much of the workflow is automated and we want to schedule a test run. Alternatively, builds can be configured to occur whenever new code is ‘checked in’ (Continuous Integration).


The default build steps are:
– Build
– Test
– Index sources and publish symbols
– Publish build artifacts

Other steps can be added – for example, we might want to make the completion of other tests, or deployment to certain environments, a prerequisite for publishing an application.

To do this on our own system, we’d have to install and configure a ‘build agent’. The agent authenticates with the TFS and posts the list of capabilities and resources on the target machine. Console messages relating to this can be viewed within the Web portal.


Application test suites and test cases can be managed under the ‘TEST’ tab. A test analyst could access a test case in the portal, click the ‘Run’ button, and run through the steps displayed in an Internet Explorer panel. Steps are marked as pass/fail.


The results are recorded and graphs can be created to show the results.


Test cases in TFS become work items, and should be listed under the WORK tab. Bugs encountered during testing can also be added as work items.

TFS and Team Services support load testing. To some extent, the load test can be tailored to mimic the type of traffic that the target application could realistically be expected to handle, but not with the same level of accuracy or granularity as HP LoadRunner.


Last I checked, this feature was only available in Team Services. It is closely related to ‘build definitions’, and is used for actually deploying the built projects when certain criteria are met and the build has been approved. Like builds, deployments can also be continuous, performed whenever changes are made to the project.


A deployment task is added as a step in the build definition. The target deployment platform is called a ‘Service Endpoint’. When setting up an Azure instance as a Service Endpoint, the details are copied from its certificate.
Microsoft has now changed this a little, so the product can be built and packaged, following the build steps mentioned earlier, without deploying it immediately after. Deployment types can be queued, so that one release would have to be tested and approved before the next build is deployed. We could also choose which environment to push a release to.

The Continuous Integration option can trigger this process on each build.

