SQL Server CE Setup


CE should have been installed as part of SQL Server Express, but you’ll require the assembly references if developing a Windows application that uses an embedded database. The CE server wasn’t listed anywhere on my laptop, and I couldn’t use it without knowing the connection string or name. There is a command for getting information about the local SQL Server configuration, though:
sqllocaldb.exe info

For my project I chose the MSSQLLocalDB instance, and started that using the following command:
sqllocaldb.exe start MSSQLLocalDB

Then I got information about it using:
sqllocaldb.exe info MSSQLLocalDB

The Command Prompt returns something like:
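Exact values vary by machine and SQL Server version; illustrative output:

Name:               MSSQLLocalDB
Version:            13.1.4001.0
Shared name:
Owner:              LAPTOP\User
Auto-create:        Yes
State:              Running
Last start time:    01/01/2018 09:00:00
Instance pipe name: np:\\.\pipe\LOCALDB#A1B2C3D4\tsql\query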

From this, we need the server name, and possibly the instance pipe name.

Creating a Database and Table
Now we want to create a database and at least one table. Databases can be administered in Visual Studio’s SQL Server Object Explorer, but I find it easier to work with SQL Server Management Studio.
To connect SQL Server Management Studio to the local server, enter ‘(localdb)\MSSQLLocalDB’ as the Server name. The Object Explorer will list the features available for the database server, and it’s here that tables, views and stored procedures can be created.

Right-click and select ‘Generate Change Script…‘, if you want to save the SQL code for replicating the schema to additional tables. It isn’t obvious how to commit the changes to the database, though – to do this, right-click the current tab, and select ‘Save Table_1‘.
Table rows can be added and modified by selecting ‘Edit Top 200 Rows‘, if you’re not versed in MS SQL scripting.

Adding Reference Assemblies to a Project
The chances are SQL Server CE was already installed along with Visual Studio, but the DLLs must be downloaded and installed before the assembly references can be added to a project. After installation, you might need to browse for the DLLs in Reference Manager. They should be stored in ‘C:\Program Files\Microsoft SQL Server Compact Edition\v4.0\Desktop’. From there you’ll want the assembly reference for System.Data.SqlServerCe.dll, and the following import statement in the code:
using System.Data.SqlServerCe;

Since I’m only testing this as a data source, I’ve created a basic Windows Form with a DataGridView element. After creating a connection to (localdb)\MSSQLLocalDB in the Data Source Configuration Wizard, I used the database as the data grid’s source.

This adds a connection string to the project, and we can select the table/view to be displayed in the data grid.

private void Form1_Load(object sender, EventArgs e)
{
    this.allphrasesTableAdapter.Fill(this.phrasesDataSet.allphrases);
}

And we can add several text boxes and a button to update the database table.


private void button2_Click(object sender, EventArgs e)
{
    // Insert new record
    this.allphrasesTableAdapter.Insert(00, txtEnglish.Text, txtGerman.Text, txtNote.Text, cmbPhraseCategory.Text);
}

My next post will explore the use of stored procedures and Web Services for decoupling client applications from the database.


The Windows 10 Linux Compatibility Layer


The Windows Subsystem for Linux was released primarily for developers used to deploying things on Linux servers, but after a week of using it, I have to say it’s proven a stable and capable substitute for what I was using on my previous laptop. The application for launching the command line is known as ‘Bash on Ubuntu on Windows’. As we’ll see, this is no mere command line emulator, although it’s the inverse of WINE in a very approximate sense. From the user’s point of view, it appears to function just like a virtual machine.

Setup
In order to use the Windows Subsystem for Linux (WSL) and ‘Bash on Ubuntu on Windows’, the feature must be enabled. In Update and Security, enable Windows’ developer mode (under the ‘For developers’ tab). In Windows Features, there should be the option to enable ‘Windows Subsystem for Linux (Beta)’. Once that’s done, the relevant components will be installed after the operating system is restarted.

The setup process is quick and straightforward when Bash.exe is first run.

Finding Your Way Around the Command Line
The first thing I wanted to do was learn something about the Linux subsystem’s environment, and running ‘ls -l‘ in the root directory, I could see there was a full Linux/UNIX filesystem present:

And this is isolated from the host machine’s actual filesystem, which is mounted as an external volume at /mnt/c/. If I wanted to use the nano editor to modify a text file in MyDocuments on the C: drive, I’d therefore need a command like:
$nano /mnt/c/Users/User/MyDocuments/[filename]

If you run the ‘top’ command (or ‘ps’ or ‘pstree’) immediately after starting the application, and before switching to the root account, you’d see only two processes listed: init and bash, because we aren’t working with a full operating system. That’s also why there are no kernel modules loaded.

For the command line to be of any real use, we’ll need to install other programs for browsing the Internet, reading emails, modifying configuration files, developing and compiling code, etc. Here WSL provides us with dpkg and apt, which we can use to query the repositories and install whatever programs we need from them.

Programs I typically use include:

  • alpine: Email client
  • apt: My package manager of choice
  • binutils: For various analysis and reverse-engineering
  • gcc: Just in case things need compiling
  • hexcurse: Hex viewer and editor
  • HT(E): Disassembler and hex viewer
  • lynx: A decent command line Web browser
  • Midnight Commander (mc): Semi-graphical file manager interface
  • nano: Perhaps the most accessible command line text editor
  • pal: Calendar application
  • poppler-utils: Contains the utilities required to read the text contents of a PDF in the command line
  • sc: Spreadsheet application
  • vim: My fallback text editor, if nano isn’t present on a remote server

There’s a post elsewhere on this blog on how to perform various installation and upgrade operations using the apt package manager. To install, upgrade and remove programs, you must switch to the root account by running the following command and providing the password chosen during setup:
$sudo su

To avoid the same broken package header problems I sometimes encountered before, I run ‘apt-get update‘ prior to installing/upgrading anything.

Subsystem Internals
What is the Windows Subsystem for Linux (WSL)? Is it an interpreter? Is it a virtual machine? Or is it an emulator? WSL is, roughly speaking, an interpreter for ELF64 executables, and I’ve already described it as a sort of reverse WINE. It has three general components: 1) a session manager running in user mode to handle the Linux instance, 2) the LXSS Manager Service and LXCore components that emulate a Linux kernel system call interface within the Windows kernel space, and 3) a Pico process for the Bash command line itself.
LXCore translates the UNIX system calls to NT API calls, thus enabling Linux executables to run on the Windows system and use the host system’s resources. In order to simulate a Linux environment, the LXSS Manager needs to maintain an instance of the Linux system to manage the processes and threads associated with the system calls.
Lxcore.sys contains a system call layer, a Virtual Filesystem layer, and the several filesystems that make up the directory structure you see when running ‘ls’ in the root directory. VolFs and DrvFs might send system calls to the actual Windows NT kernel.

The best abstraction of the WSL concept I’ve seen is posted on the WSL development team’s blog:


(Microsoft, 2016)

Here a comparison is made between the ‘ls‘ command and its Windows Command Prompt counterpart, ‘dir‘. Both appear to do the same thing for the user, and both are user-space processes sending their system calls to kernel space. The difference is ‘dir‘ sends its calls directly through the Windows NT API to the kernel, whereas ‘ls‘ must communicate through the WSL core (Lxss.sys). You’ll find LXSS as a DLL in C:\Windows\System32\lxss\LxssManager.dll.
Another difference is that ‘ls’ launches a ‘Pico process’ to achieve this, which differs from a conventional Windows process in that some of the usual regions, including the Process Environment Block and Thread Environment Block, are absent from its virtual address space. Basically, a Pico process is a stripped-down process that requires a driver (such as WSL) to execute.

Therefore, the appearance of working within a Linux VM is illusory: bash.exe actually initiates another executable on behalf of the user, and WSL translates the system calls of both into NT API calls.

A consequence of this is that Linux shell executables can be run through Bash.exe in the Windows Command Prompt, using the following:
bash.exe -c "[command]"

Here I’ve run the ‘top‘ command this way:

Well, that’s the Windows Subsystem for Linux. Here are a few of the development team’s blog posts if you want to learn more about it:
Windows Subsystem for Linux Overview
Pico Process Overview
Windows Subsystem for Linux System Calls
Windows and Ubuntu Interoperability

Windows 10 Security and Privacy Initial Setup


Recently I bought my first personal Windows machine, and I was a bit wary of connecting it to the Internet without first looking at the security configuration, even though Windows 10 has native memory protection features that make arbitrary code execution pretty damn difficult.
Here I’ll cover the basic steps for enhancing security further, and also a solution that might resolve the privacy issues associated with Cortana and telemetry.

Windows Defender
The first thing I’d recommend is setting up Windows Defender. This provides a basic anti-malware service, plus links to the local firewall configuration and parental controls. In the Update & Security menu, there’s initially a button to enable the Windows Defender service. Use this to access the Security Centre and its main options.

From what I’ve seen, most consumer-level routers don’t allow for a detailed firewall configuration, which is why it’s a good idea to check the one included with the operating system. Although Windows Defender has a simplified interface for general filtering rules, I prefer to go through the entries in the ‘Windows Firewall with Advanced Security’ application.
The ‘Allow an app through the firewall’ link in the ‘Firewall & network protection’ tab opens the Control Panel’s Windows Firewall options, where application-level rules can be selected and deselected. Ensure the firewall is enabled for both private and public networks.

Application whitelisting, or ‘Default Deny’ (blocking all applications and services except those specifically allowed), is a strategy worth considering for a paranoid level of security.

SmartScreen settings are displayed under ‘App & browser control‘. Technically SmartScreen improves security by checking the URLs of Edge browser requests against a list of malicious addresses, but it’s a trade-off between that and privacy.

The ‘Family options’ tab contains options that are potentially useful if children are borrowing the laptop. As with Sophos Home security (which I’ll come to), the ‘Family options’ are managed through a Web portal, so they’re harder to disable without logging into the owner’s account.
Here the owner can determine which sites are accessible in the Edge browser (what happens if Firefox is used?), set time limits for laptop/browsing activity and monitor online activity.

More advanced security-related configurations can be accessed in the classic Control Panel. Options to look at are User Account Control, BitLocker and Storage Spaces.

Disable Telemetry Services
Central to the privacy-related controversy around Windows 10 is the ‘telemetry’ feature. Essentially, every few hours the operating system sends limited data about usage to Microsoft. This cannot be disabled in the user-friendly Privacy settings menu, which instead only allows a choice between Basic and Full diagnostics, but it can be disabled in Services.msc (the Services application), where it’s listed as ‘Connected User Experiences and Telemetry’.

Some caution is needed when disabling services here, though, as many of them are for inter-process communication between critical operating system components.

Just in case the telemetry feature is re-enabled by some future update, it makes sense to configure the inbound and outbound firewall rules for ‘Connected User Experiences and Telemetry‘ in the Windows Firewall advanced settings. This might also be listed in the simplified firewall menu as ‘DiagTrack‘.

By the way, you could also do this for any applications you want to keep entirely offline.

Third-Party Protection
After the native security features are configured, the next thing to add is a dedicated third-party anti-malware product. I’ve reviewed BitDefender Total Security before, and found it an excellent product definitely worth the £30 annual subscription. I’m also thinking of giving the considerably more expensive F-Secure TOTAL a try, as I believe in supporting a company that takes a principled stand on digital rights, and the Freedome VPN service might prove very useful while travelling.

For now I’ve installed Sophos Home – I’ve followed this vendor’s work for a couple of years as a security undergraduate, and I’m very confident it provides an excellent layer of protection even though it’s a free service. Sign up for an account with Sophos on the site, and download the installer file (roughly 236MB). When launched, the Sophos Home application displays a status screen.

What we get here is virus protection, Web protection and unwanted application detection. The latter should protect against spyware and adware.

Sophos Home installations are managed from the company’s Web dashboard, which has three configuration sections:

  • Virus Protection: Seems like your typical anti-malware detection system.
  • Web Category: Determine which categories of sites are allowed and which are blocked.
  • Exceptions: Set filtering exceptions for files, Web sites and applications.

Fixing Multiple Broken Assembly References in Visual Studio


This problem often appears after a project is fetched from a version control system and a new file structure is created on the local machine, with the packages folder being read-only by default. The first step is to make sure the packages folder isn’t read-only.

Then reinstall the packages using the following command in the Package Manager Console:
Update-Package -Reinstall

A Basic MVC Application with Node.js and Express


Using Node.js, JSON and jQuery, I’ve managed to develop something much like an MVC application that’s considerably more lightweight than a .NET project, and anyone can use this as a template or basis for their own Web application project. Node.js enables the creation of Web servers and handles communication between client-side JavaScript and the server. A Node.js application has the following components:

  • Module imports
  • Server creation
  • Response

The source code in the following screenshots is posted on GitHub.

Creating a Simple Node.js Server
The server-side code for this is fairly simple:
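A minimal sketch of what such a file might contain (the exact code is in the screenshots and the GitHub repository mentioned above):

// nodeserver.js - a minimal HTTP server
var http = require("http");

http.createServer(function (request, response)
{
    // Return a plain-text 'Hello World' response for every request
    response.writeHead(200, {'Content-Type': 'text/plain'});
    response.end('Hello World');
}).listen(8090);

// Confirm the server has started
console.log('Server running at http://localhost:8090/');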

Note the server created by this method is just a process listening on port 8090 (or whichever port is specified), and doesn’t host Web pages at this stage. Instead it returns an HTTP response, using response.writeHead() to determine the header and response.end() to determine the body. When this code executes, console.log() will print the ‘Server running’ message in the command line. A browser sending a request to localhost:8090 will display ‘Hello World’ as the response. I saved this file as ‘nodeserver.js’.

To start the server using the Node.js interpreter, simply navigate the command line to the directory where the .js file’s stored, and enter the following:
node nodeserver.js

File Operations
Perhaps the main reason we want server-side code, rather than something entirely client-based, is data persistence. An application isn’t much use if it can’t store and retrieve data. Here I have two files: the file-op.js server-side script and the serverdata.txt data file. The latter simply contains two lines of text.

This time we import both the http and filesystem (fs) modules:


var http = require("http");
var fs = require("fs");

And specify the file to read:

var data = fs.readFileSync('serverdata.txt');

And this time, the HTTP response is defined as the contents of serverdata.txt:


http.createServer(function (request, response)
{
    response.writeHead(200, {'Content-Type': 'text/plain'});
    response.end(data.toString());
}).listen(8090);

Streaming Data and Writing to File
The filesystem module enables JavaScript to perform file I/O using createReadStream() and createWriteStream(). As before, we import the http and filesystem modules, but leave the data as a null value. Another variable is needed to declare the read stream; the data returned by the fs.createReadStream() function populates readerStream.

And to write to file using createWriteStream:
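Put together, a sketch of both the read and write streams might look something like this (the output file name here is illustrative):

var http = require("http");
var fs = require("fs");

var data = '';

// Read serverdata.txt as a stream; each chunk is appended to data
var readerStream = fs.createReadStream('serverdata.txt');
readerStream.setEncoding('utf8');

readerStream.on('data', function (chunk) {
    data += chunk;
});

readerStream.on('error', function (err) {
    console.log(err.stack);
});

// Once reading has finished, write a copy of the data to another file
readerStream.on('end', function () {
    var writerStream = fs.createWriteStream('outputdata.txt');
    writerStream.write(data, 'utf8');
    writerStream.end();
    console.log('Finished writing to outputdata.txt');
});

// Serve the streamed data over HTTP, as in the earlier examples
http.createServer(function (request, response)
{
    response.writeHead(200, {'Content-Type': 'text/plain'});
    response.end(data.toString());
}).listen(8090);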

Although this isn’t much at this point, it demonstrates that we can use persistent storage with a bit of JavaScript.

Node.js Express
Express can be used to achieve the same thing as ASP.NET MVC, as it handles routing, REST requests and other server-side operations. First we need to use npm to install Express.js.
npm install express --save

We’ll use the following simple express server to understand routing:
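A minimal sketch of such a server (assuming users.html sits in the same directory as the script):

var express = require('express');
var path = require('path');

var app = express();

// Each app.get() acts like a controller action mapped to a URI;
// a request to http://localhost:8090/listusers returns the users.html view
app.get('/listusers', function (request, response) {
    response.sendFile(path.join(__dirname, 'users.html'));
});

app.listen(8090, function () {
    console.log('Express server listening on port 8090');
});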

As with ASP.NET MVC, the controllers here determine actions to be performed when the server receives a given request in the form of a URI. To initiate an action, we only need to send its name as part of the URI in the browser. For example, ‘http://localhost:8090/listusers’ will cause Express to return the response for that app.get() method. It responds by calling the sendFile() function that returns users.html. This is the equivalent of MVC’s ‘return View()’.

Reading and Writing JSON Files
Of course, most Web applications function as an interface to some data source. Here I’ll try to use a JSON-based source to store and retrieve the data, with data being sent between the HTML and the JavaScript controllers. The Express.js site lists the database integrations it supports.

For the following, the body-parser module is required, installed through npm:
npm install body-parser --save

In the HTML file we have a simple form with four fields. To the JavaScript file we add another method for handling the data submitted from the HTML.

If the fields are populated and submitted, the following JSON output is generated:


{"firstName":"michael","secondName":"mysecondname","user_name":"michael","user_email":
"michael@mail.com"}

Now we need a .json file for the application to append to, for example ‘users.json’. Here’s the solution I hacked together by trial and error:
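In outline, it’s a POST handler that builds a record from the form fields and appends it to users.json. A sketch might look like this (the ‘/adduser’ route and the array-based file layout are my own assumptions, and error handling is minimal):

var express = require('express');
var bodyParser = require('body-parser');
var fs = require('fs');

var app = express();

// body-parser exposes the submitted form fields on request.body
app.use(bodyParser.urlencoded({ extended: false }));

app.post('/adduser', function (request, response) {
    // Build the record from the four form fields
    var user = {
        firstName: request.body.firstName,
        secondName: request.body.secondName,
        user_name: request.body.user_name,
        user_email: request.body.user_email
    };

    // Read the existing records from users.json, append the new one and save the file
    fs.readFile('users.json', 'utf8', function (err, contents) {
        var users = (!err && contents) ? JSON.parse(contents) : [];
        users.push(user);

        fs.writeFile('users.json', JSON.stringify(users), function (err) {
            if (err) {
                response.status(500).send('Could not write to users.json');
            } else {
                response.send(JSON.stringify(user));
            }
        });
    });
});

app.listen(8090);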

This can be extended to MongoDB, which is also JSON-based, if a data access layer needs to be added to the application.

To do the reverse – rendering JSON data in an HTML page – we’ll need jQuery and a script that fetches the data returned by the Node.js controller. In the HTML I have two elements, ‘get-data‘ and ‘show-data‘. The first is a link that triggers the JSON reader JavaScript.

The handler function will read the JSON file and return the output to the ‘show-data’ element, placing the read values in an HTML list.
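A sketch of that handler, assuming the Node.js controller returns the records as a JSON array at a ‘/users’ route (both the route name and the array format are assumptions here):

// Fetch the JSON when the 'get-data' link is clicked and render it
// as an HTML list inside the 'show-data' element
$(document).ready(function () {
    $('#get-data').click(function (event) {
        event.preventDefault();

        $.getJSON('/users', function (users) {
            var items = '';

            $.each(users, function (index, user) {
                items += '<li>' + user.firstName + ' ' + user.secondName +
                         ' (' + user.user_name + ', ' + user.user_email + ')</li>';
            });

            $('#show-data').html('<ul>' + items + '</ul>');
        });
    });
});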