Things Artificial Intelligence Could Never Have



With the dire warnings of artificial super-intelligence as an existential threat being just one of several pseudo-religious ideas (simulated Universes, Ray Kurzweil’s ‘Singularity’, etc.) courted by Silicon Valley, I wasn’t surprised that someone would attempt to build an actual religion around one. The stated purpose of Anthony Levandowski’s ‘Way of the Future’, registered as a non-profit in California, is to ‘develop and promote the realization of a Godhead based on Artificial Intelligence’, and ‘through understanding and worship of the Godhead [to] contribute to the betterment of society’.

I don’t think it can succeed where Christianity is perceived to have failed: the latter is built on ~2,500 years of human intellect and reasoning. For example, it’s reasonable (but not necessarily correct) to believe in a creator on the principle that everything, including the Universe itself, must have an ultimate ‘First Cause’, as that is more likely than an infinite regression of causes, and it must be something outside the natural world. This first cause must be synonymous with existence itself and manifest various absolutes, argued Thomas Aquinas in the Summa Theologica (First Part, Question 2). The idea addresses, in an admittedly unsatisfactory way, a perennial question to which we’re unlikely ever to know the answer with certainty: ‘Why is there something instead of nothing?’ And if one is of the view that the Universe is a finite and closed system, of which the laws of nature are only descriptive, it’s reasonable to believe there could exist a supernatural reality outside of what we typically observe. The scientific method cannot tell us whether God exists or not, since that’s a metaphysical proposition, but it should lead theists and atheists alike to deeper questions about our existence.

The point here isn’t to convince anyone of these principles, but to make the case that they should never be substituted with the fear or worship of some artificial ‘super-intelligence’: one that certain ‘thought leaders’ have essentially conjured out of nothing, that requires presuppositions and assumptions which contradict observation, and that discounts things that are known. I’m particularly suspicious of the ‘super-intelligence’ idea because Silicon Valley seems intent on consolidating a monopoly that fetishises the collection of data about us, and what isn’t being mentioned in the AI debates is the facial recognition, the matching of online profiles to real-world identities, the automation of censorship – things that increase the information asymmetry between the individual and corporations.
Having developed software in multiple programming languages over the years, having reverse-engineered software, having assembled a rudimentary computer/processor of surface-mount components and having done my Master’s review paper on a range of intelligent systems for detecting the usage of stolen ATM cards, I’m extremely skeptical of the idea that a processor-based system could ever become something more than a data processing tool of limited application, which can produce fuzzy abstractions of data sets or determine patterns or anomalies.

Recreating Human Morality
There are discussions about whether we can and should ‘program’ artificial intelligence with some form of morality. I argue that it’s unrealistic, partly because we don’t live in a society that allows objective morality or freedom of conscience. What we commonly find is that the consensus isn’t really a matter of whether society values human life or fundamental rights, but rather how much society values those things. What exceptions, compromises and illogical juxtapositions of values should be made in the name of ‘progress’? What trade-offs does a person need to make just to function in society? Alexander Simon-Lewis asked exactly the right question: is it dangerous to recreate this flawed human morality in machines?

The ‘trolley problem’, which is often mentioned in discussions about autonomous vehicles, happens to be a perfect illustration of this: Should an autonomous vehicle be programmed to terminate the lives of its occupants to prevent the deaths of innocent pedestrians? Should the vehicle change course and terminate one person to save the lives of several? Humans can weigh one course of action against the other, but has anyone questioned whether society would ever allow an autonomous vehicle to make an objective decision for itself? Society most certainly wouldn’t allow it, and so a course of action must either be programmed by a human beforehand (which might be considered murder by proxy) or remain a neglected ‘use case’. I’d bet £100 on the industry opting for the latter.
We can dig further into this problem, and ask whether it’s even ethically acceptable for manufacturers and consumers of autonomous vehicles to trust them with the lives of others, or allow a machine onto the roads that’s programmed to terminate one person’s life in preference to another because society perceives a difference in value between those persons – one could imagine an opinion piece in The Guardian arguing it would be tantamount to executing people for belonging to a perceived underclass. Should that kind of decision even be determined in real life from hypothetical scenarios?

An artificial ‘super-intelligence’ also wouldn’t be allowed to determine its own morality. What happens if this hypothetical ‘super-intelligence’, through impeccable and objective reasoning, and to ensure stability and the best quality of life for the maximum number of people, decided that every child is entitled to a mother and a father, that abortion is straight-up murder, that the death penalty should be abolished in the United States, that everything should be based around the right to life and the dignity of the human person? The inhabitants of Silicon Valley might be pissed, and someone would be modifying this AI to get the answers they wanted.

You’ll notice that most of my points are made here as questions, and that wasn’t intentional. Even as a practising Catholic I genuinely don’t have the answers, and I cannot imagine how morality could be codified in a way that isn’t going to be problematic for navigating real-world situations.

Why I Don’t Think Machines Could Ever Become Sentient
Ray Kurzweil would do well to watch a dissection of the human brain on YouTube. The neurons and synapses are so small and densely packed that the organ has a cross-section with the smoothness and consistency of really thick jelly. Could the workings of this structure, in all its intricacy and complexity, realistically be reproduced on manufactured hardware? According to the Human Brain Project, the biological human brain has ~86 billion neurons, each with ~1700 connections. To equate that with a computer is to seriously under-estimate its complexity, and this poses a real technical problem for proponents of artificial ‘super-intelligence’.
To simulate this on a computer would certainly require a data structure for each neuron, and maybe even a low neuron/processor ratio too. Even with a clever use of instantiation and destructors to simulate only the parts of the brain associated with intellect and cognition, such a task seems computationally possible but it would likely require an extremely low-latency network consisting of tens of billions of processor cores.
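A rough back-of-envelope calculation makes the scale concrete. Using the figures quoted above, and assuming (my assumption, not the Human Brain Project’s) a single 32-bit weight per synaptic connection and nothing else:

```python
# Back-of-envelope estimate: memory needed just to store the synaptic
# weights of a whole-brain model, using the figures quoted above.
NEURONS = 86_000_000_000        # ~86 billion neurons
SYNAPSES_PER_NEURON = 1_700     # ~1,700 connections each
BYTES_PER_SYNAPSE = 4           # assumption: one 32-bit weight per synapse

total_synapses = NEURONS * SYNAPSES_PER_NEURON
total_bytes = total_synapses * BYTES_PER_SYNAPSE

print(f"Synapses: {total_synapses:.2e}")
print(f"Memory:   {total_bytes / 1e15:.2f} PB")
```

That’s over half a petabyte before a single state variable, membrane potential or scrap of program logic is accounted for, which puts the hardware described next into perspective.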
In early 2014, the fourth most powerful computer was able to simulate (to what degree?) 1 second’s activity of 1% of a human brain. This required more than 700,000 processor cores and 1.4 petabytes of system memory. The processing took about 40 minutes. At the time of writing this, the hardware is still within the top-ten most powerful.

But isn’t technology advancing at an ‘exponential’ rate? Well, technological progress is hard to quantify in general terms, let alone to state that it’s increasing ‘exponentially’. Moore’s Law (which isn’t really a ‘law’) predicted that the number of transistors for a given area would double every 18 months to 2 years, which means memory and storage capacities increase, processors can perform more operations per second, and integrated circuits of a given density become cheaper to manufacture over time. Obviously there’s a limit to this, since a processor cannot have an infinite number of transistors, and at some point (somewhere between 5nm and 9nm) quantum tunnelling will interfere with transistor states. Moore’s Law doesn’t predict, and isn’t even directly relevant to, changes in the form or substance of technologies. Other than having more transistors in their ICs, consumer devices such as PCs, laptops, smartphones, MP3 players and digital cameras aren’t different in substance or form from the products we bought a decade ago. And computers have remained fundamentally the same in nature, regardless of how many transistors, diodes, capacitors and resistors form their circuitry.

What does this mean for machine intelligence? The first thing is that we still don’t have a precise definition of consciousness, or of what separates living matter from sentient matter. It’s also still arguable whether consciousness is God-given (as I believe it is) or an emergent property of a complex, yet ultimately deterministic, system that’s purely the result of an improbable arrangement of molecules and 3.5 billion years of adaptation to the environment. And perhaps our consciousness depends on some currently unknown phenomenon at the sub-atomic scale.
What we do know is that the computing technology we’re familiar with couldn’t be anything other than deterministic (possibly with the exception of neuromorphic hardware). In fact, a computer is quite mechanistic to anyone who understands how it works, even when endowed with some learning algorithm. We have a microprocessor containing arrays of nano-scale transistors in various arrangements, and each array will always produce the same output given a certain input. This input – the op codes and operands – is fetched from another array of transistors that constitutes the system memory, and it’s in turn generated by a compiler that translates from a high-level programming language. A computer may as well be a doorstop or a brick without the software, and what is software other than a collection of man-made instructions on a dead storage medium? This is why a Dell Optiplex is no more capable of sentience than a BBC Micro.
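The determinism is easy to illustrate. A transistor array can be modelled as a pure function, as with this half-adder (the simplest arithmetic circuit), sketched here in Python: for any given input it produces exactly one output, every time, no matter how many such functions are composed.

```python
# A half-adder: the transistor-array building block of binary addition,
# modelled as a pure function of its two input bits.
def half_adder(a: int, b: int) -> tuple:
    return a ^ b, a & b   # (sum bit, carry bit)

# Same input, same output, without exception:
for bits in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(bits, "->", half_adder(*bits))
```

Everything a CPU does is built from compositions of functions like this one, which is why the output of a program is fixed by its input and its instructions.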


Web Services and Stored Procedures from Scratch



In the last post I described how to get started with SQL Server Compact Edition in Visual Studio, with a database and table accessible in SQL Server Object Explorer. From that point it’s possible to develop an application that sends query commands to the database server, but that could allow for arbitrary query execution if the traffic between the application and the server were forged, or if a vulnerability in the application were exploited. It’s a good idea to use stored procedures and Web Services to decouple the database from the application.
A Web Service consists essentially of three things: a connection string, a Web Service method and the base classes in System.Web.Services. The Web Service (ASMX) template is provided in Visual Studio 2015 and is added as an item to an existing ASP.NET project. This will create an empty Web Service class file, assembly references and import statements.

Like any database application, the Web Service requires a connection string for the database server, and this can be acquired in the SQL Server Object Explorer. The Web.config file should include a connectionStrings section that contains the connection string:
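Something along these lines, though the ‘PhrasesDb’ name, catalog and connection string values here are placeholders – use whatever SQL Server Object Explorer shows for your own instance:

```xml
<!-- Sketch of the relevant Web.config section; values are placeholders -->
<connectionStrings>
  <add name="PhrasesDb"
       connectionString="Data Source=(localdb)\MSSQLLocalDB;Initial Catalog=Phrases;Integrated Security=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```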

A WebMethod instantiates the SqlConnection and implements a SqlCommand. Although it’s possible to pass a conventional SQL command to the database server, it’s better to call a stored procedure instead. In the example below I’ve passed ‘spGetAllPhrases‘ to the SqlCommand constructor.
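A minimal sketch of such a Web Method follows; only ‘spGetAllPhrases‘ comes from the database itself, while the class name and the ‘PhrasesDb‘ connection string name are illustrative assumptions:

```csharp
using System.Configuration;
using System.Data;
using System.Data.SqlClient;
using System.Web.Services;

public class PhrasesService : WebService
{
    [WebMethod]
    public DataTable GetAllPhrases()
    {
        // Read the connection string from Web.config's <connectionStrings>
        // section; 'PhrasesDb' is a placeholder name
        string connString = ConfigurationManager
            .ConnectionStrings["PhrasesDb"].ConnectionString;

        using (SqlConnection conn = new SqlConnection(connString))
        using (SqlCommand cmd = new SqlCommand("spGetAllPhrases", conn))
        {
            // Treat the command text as a stored procedure name, not raw SQL
            cmd.CommandType = CommandType.StoredProcedure;

            // SqlDataAdapter opens and closes the connection itself
            DataTable table = new DataTable("allphrases");
            new SqlDataAdapter(cmd).Fill(table);
            return table;
        }
    }
}
```

Note the DataTable is given a name – ASMX serialisation requires one when returning a table to the client.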

Stored Procedures
In the database, I now need a stored procedure called ‘spGetAllPhrases’ that returns all records from a table:


CREATE PROCEDURE spGetAllPhrases
AS
SELECT * FROM [dbo].[allphrases]

With the stored procedure added, I’ve checked again to ensure SqlCommand refers to it:
using (SqlCommand cmd = new SqlCommand("spGetAllPhrases", conn))

So far I’ve added a very basic Web Service and stored procedure combination. The chances are we’ll need Web Services that pass input parameters from a client requesting only the records matching certain criteria.

The first thing we need is to translate this requirement into a stored procedure. Here the stored procedure queries the database table for records with a given Category value:

CREATE PROCEDURE spGetPhrasesByCategory @Category VarChar(50)
AS
SELECT * FROM [dbo].[allphrases] WHERE Category LIKE @Category

When the stored procedure is executed, it will request the input parameter, which in this case is @Category, and return the results of the query. Now we need to create a Web Service method that gets and passes the Category variable to the stored procedure as an input variable.

Here, GetPhraseByCategory() is returned as a DataTable, as with the other Web Methods, and like the other Web Methods, the data table is populated by whatever’s returned by the stored procedure.
The difference here is we declare categoryName as the method’s input string – when the ASMX file is launched and the method is called, it will expect the client to supply a value for this. This variable is used as the stored procedure’s @Category parameter.
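A sketch of that method, with the same placeholder ‘PhrasesDb‘ connection string name as before; the only new part is binding categoryName to the procedure’s @Category parameter:

```csharp
[WebMethod]
public DataTable GetPhraseByCategory(string categoryName)
{
    string connString = System.Configuration.ConfigurationManager
        .ConnectionStrings["PhrasesDb"].ConnectionString;

    using (SqlConnection conn = new SqlConnection(connString))
    using (SqlCommand cmd = new SqlCommand("spGetPhrasesByCategory", conn))
    {
        cmd.CommandType = CommandType.StoredProcedure;

        // The client-supplied value becomes the procedure's input parameter
        cmd.Parameters.AddWithValue("@Category", categoryName);

        DataTable table = new DataTable("allphrases");
        new SqlDataAdapter(cmd).Fill(table);
        return table;
    }
}
```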

When using a browser to launch the Web Service, results are presented as an XML document.

Adding Records
A Web Service can also update a database table through a stored procedure created to accept input parameters. These parameters and their data types are defined after the stored procedure name.

CREATE PROCEDURE spAddPhrase @English VarChar(200), @German VarChar(200), @Note VarChar(200), @Category VarChar(50)
AS

INSERT INTO [dbo].[allphrases] (English, German, Note, Category)
VALUES (@English, @German, @Note, @Category)

After running the stored procedure to check it works, it’s time to add a Web Method that calls and passes the variables to it.

This time we declare the method as ‘public string’ since we only need to return a message string telling the client whether the execution was successful.
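A sketch of that Web Method, again with the placeholder ‘PhrasesDb‘ connection string name; the connection is opened explicitly here because ExecuteNonQuery, unlike SqlDataAdapter.Fill, won’t open it for us:

```csharp
[WebMethod]
public string AddPhrase(string english, string german, string note, string category)
{
    string connString = System.Configuration.ConfigurationManager
        .ConnectionStrings["PhrasesDb"].ConnectionString;

    try
    {
        using (SqlConnection conn = new SqlConnection(connString))
        using (SqlCommand cmd = new SqlCommand("spAddPhrase", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;

            // Map each method argument to a stored procedure parameter
            cmd.Parameters.AddWithValue("@English", english);
            cmd.Parameters.AddWithValue("@German", german);
            cmd.Parameters.AddWithValue("@Note", note);
            cmd.Parameters.AddWithValue("@Category", category);

            conn.Open();
            int rows = cmd.ExecuteNonQuery();
            return rows + " record(s) added.";
        }
    }
    catch (SqlException ex)
    {
        return "Insert failed: " + ex.Message;
    }
}
```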

The code for this project can be downloaded here…

SQL Server CE Setup



CE should have been installed as part of SQL Server Express, but you’ll need the assembly references if you’re developing a Windows application that uses an embedded database. The server wasn’t listed anywhere on my laptop, and I couldn’t use it without knowing the connection string or instance name. There is a command for getting information about the local SQL Server configuration, though:
sqllocaldb.exe info

For my project I chose the MSSQLLocalDB instance, and started that using the following command:
sqllocaldb.exe start MSSQLLocalDB

Then got information about it using:
sqllocaldb.exe info MSSQLLocalDB

The Command Prompt returns something like:

From this, we need the server name, and possibly the instance pipe name.

Creating a Database and Table
Now we want to create a database and at least one table. Databases can be administered in Visual Studio’s SQL Server Object Explorer, but I find it easier to work with SQL Server Management Studio.
To connect the SQL Server Management Studio to the local server, enter ‘(localdb)\MSSQLLocalDB‘ as the Server name. The Object Explorer will list the features available for the database server, and it’s here that tables, views and stored procedures can be created.

Right-click and select ‘Generate Change Script…‘, if you want to save the SQL code for replicating the schema to additional tables. It isn’t obvious how to commit the changes to the database, though – to do this, right-click the current tab, and select ‘Save Table_1‘.
Table rows can be added and modified by selecting ‘Edit Top 200 Rows‘, if you’re not versed in MS SQL scripting.

Adding Reference Assemblies to a Project
The chances are SQL Server CE was already installed along with Visual Studio, but the DLLs must be downloaded and installed to enable the addition of the assembly references to a project. After installation, you might need to browse for the DLLs in Reference Manager. They should be stored in ‘C:\Program Files\Microsoft SQL Server Compact Edition\v4.0\Desktop‘. From this you’ll want the assembly reference for System.Data.SqlServerCe.dll, and the following import statement in the code:
using System.Data.SqlServerCe;

Since I’m only testing this as a data source, I’ve created a basic Windows Form with a DataGridView element. After creating a connection to (localdb)\MSSQLLocalDB in the Data Source Configuration Wizard, I used the database as the data grid’s source.

This adds a connection string to the project, and we can select the table/view to be displayed in the data grid.

private void Form1_Load(object sender, EventArgs e)
{
    this.allphrasesTableAdapter.Fill(this.phrasesDataSet.allphrases);
}

And we can add several text boxes and a button to update the database table.

private void button2_Click(object sender, EventArgs e)
{
    // Insert new record
    this.allphrasesTableAdapter.Insert(0, txtEnglish.Text, txtGerman.Text, txtNote.Text, cmbPhraseCategory.Text);
}

My next post will explore the use of stored procedures and Web Services for decoupling client applications from the database.

The Windows 10 Linux Compatibility Layer



The Windows Subsystem for Linux was released primarily for developers used to deploying things on Linux servers, but after a week of using it, I have to say it’s proven a stable and capable substitute for what I was using on my previous laptop. The application for launching the command line is known as ‘Bash on Ubuntu on Windows’. As we’ll see, this is no mere command line emulator – in a very approximate sense it’s the inverse of WINE – and from the user’s point of view it appears to function just like a virtual machine.

In order to use the Windows Subsystem for Linux (WSL) and ‘Bash on Ubuntu on Windows‘, the feature must be enabled. In Update and Security, enable the Windows’ developer mode (under the ‘For developers‘ tab). In Windows Features, there should be the option to enable ‘Windows Subsystem for Linux (Beta)‘. Once that’s done, the relevant components will be installed after the operating system is restarted.

The setup process is quick and straightforward when Bash.exe is first run.

Finding Your Way Around the Command Line
The first thing I wanted to do was learn something about the Linux subsystem’s environment, and running ‘ls -l‘ in the root directory, I could see there was a full Linux/UNIX filesystem present:

And this is isolated from the host machine’s actual filesystem, which is mounted as an external volume at /mnt/c/. If I wanted to use the nano editor to modify a text file in MyDocuments on the C: drive, I’d therefore need a command like:
$nano /mnt/c/Users/User/MyDocuments/[filename]

If the ‘top‘ command (or ‘ps‘ or ‘pstree‘) is used immediately after starting the application, and before switching to the root account, you’d see only two processes listed: init and bash, because we aren’t working with a full operating system. That’s also why there aren’t any kernel modules loaded either.

For the command line to be of any real use, we’ll need to install other programs for browsing the Internet, reading emails, modifying configuration files, developing and compiling code, etc. Here WSL provides us with dpkg and apt, which we can use to query the repositories and install whatever programs we need from them.

Programs I typically use include:

  • alpine: Email client
  • apt: My package manager of choice
  • binutils: For various analysis and reverse-engineering
  • gcc: Just in case things need compiling
  • hexcurse: Hex viewer and editor
  • HT(E): Disassembler and hex viewer
  • lynx: A decent command line Web browser
  • Midnight Commander (mc): Semi-graphical file manager interface
  • nano: Perhaps the most accessible command line text editor
  • pal: Calendar application
  • poppler-utils: Contains the utilities required to read the text contents of a PDF in the command line
  • sc: Spreadsheet application
  • vim: My fallback text editor, if nano isn’t present on a remote server

There’s a post elsewhere on this blog on how to perform various installation and upgrade operations using the apt package manager. To install, upgrade and remove programs, you must switch to the root account using the following command and providing the setup password:
$sudo su

To avoid the same broken package header problems I sometimes encountered before, I run ‘apt-get update‘ prior to installing/upgrading anything.

Subsystem Internals
What is the Windows Subsystem for Linux (WSL)? Is it an interpreter? Is it a virtual machine? Or is it an emulator? WSL is, roughly speaking, an interpreter for ELF64 executables, and I’ve mentioned it as a sort of reverse-WINE. It has three general components: 1) A session manager running in user mode to handle the Linux instance, 2) The LXSS Manager Service and LXCore components that emulate a Linux kernel system call interface within the Windows kernel space, and 3) A Pico process for the Bash command line itself.
The LXCore translates the UNIX system calls to NT API calls, thus enabling Linux executables to run on the Windows system and use the host system resources. In order to simulate a Linux environment, the LXSS Manager needs to maintain an instance of the Linux system to manage the processes and threads associated with the system calls.
Lxcore.sys contains a system call layer, a Virtual Filesystem layer, and the several filesystems that make up the directory structure you see when running ‘ls‘ in the root directory. VolFs and DrvFs might send system calls to the actual Windows NT kernel.

The best abstraction of the WSL concept I’ve seen is posted on the WSL development team’s blog:

(Microsoft, 2016)

Here a comparison is made between the ‘ls‘ command and its Windows Command Prompt counterpart, ‘dir‘. Both appear to do the same thing for the user, and both are user-space processes sending their system calls to kernel space. The difference is ‘dir‘ sends its calls directly through the Windows NT API to the kernel, whereas ‘ls‘ must communicate through the WSL core (Lxss.sys). You’ll find LXSS as a DLL in C:\Windows\System32\lxss\LxssManager.dll.
Another difference is ‘ls‘ launches a ‘Pico process’ to achieve this, which differs from a conventional Windows process in that some of the regions, including the Process Environment Block and Thread Environment Block, are absent from its virtual address space. Basically a Pico process is a stripped-down process that requires a driver (such as WSL) to execute.

Therefore, the appearance of working within a Linux VM is illusory: bash.exe actually initiates another executable on behalf of the user, and WSL translates the system calls of both into NT API calls.

A consequence of this is that Linux shell executables can be run through Bash.exe in the Windows Command Prompt, using the following:
bash.exe -c "[command]"

Here I’ve run the ‘top‘ command this way:

Well, that’s the Windows Subsystem for Linux. Here are a few of the development team’s blog posts if you want to learn more about it:
Windows Subsystem for Linux Overview
Pico Process Overview
Windows Subsystem for Linux System Calls
Windows and Ubuntu Interoperability

Windows 10 Security and Privacy Initial Setup



Recently I bought my first personal Windows machine, and I was a bit wary of connecting it to the Internet without first looking at the security configuration, even though Windows 10 has native memory protection features that make arbitrary code execution pretty damn difficult.
Here I’ll cover the basic steps for enhancing security further, and also a solution that might resolve the privacy issues associated with Cortana and telemetry.

Windows Defender
The first thing I’d recommend is setting up Windows Defender. This provides a basic anti-malware service, links to the local firewall configuration and parental controls. In the Update & Security menu, there’s initially a button to enable the Windows Defender service. Use this to access the Security Centre and its main options.

From what I’ve seen, most consumer-level routers don’t allow for a detailed firewall configuration. This is why it’s a good idea to check the one that’s included with the operating system. Although Windows Defender has a simplified interface for general filtering rules, I prefer to go through the entries in the ‘Windows Firewall with Advanced Security’ application.
The ‘Firewall & network protection‘ tab opens the Control Panel’s Windows Firewall options, where you should ensure the firewall is enabled for both private and public networks. Clicking the ‘Allow an app through the firewall‘ link opens a menu to select and deselect application-level rules.

Application whitelisting, or ‘Default Deny’ – blocking all applications and services except those specifically allowed – is a strategy worth considering for a paranoid level of security.

SmartScreen settings are displayed under ‘App & browser control‘. Technically SmartScreen improves security by checking the URLs of Edge browser requests against a list of malicious addresses, but it’s a trade-off between that and privacy.

The ‘Family options‘ tab contains options that are potentially useful if children are borrowing the laptop. As with Sophos Home security (which I’ll come to), the ‘Family options‘ are managed through a Web portal so it’s harder to disable without logging into the owner’s account.
Here the owner has the options to determine which sites are accessible in the Edge browser (what happens if Firefox is used?), set time limits for laptop/browsing activity and monitor online activity.

More advanced security-related configurations can be accessed in the classic Control Panel. Options to look at are User Account Control, BitLocker and Storage Spaces.

Disable Telemetry Services
Central to the privacy-related controversy around Windows 10 is the ‘telemetry’ feature: essentially, every few hours the operating system sends limited data about usage to Microsoft. This cannot be disabled in the user-friendly Privacy settings menu, which only allows a choice between Basic and Full diagnostics, but it can be disabled in Services.msc (the Services application), where it’s listed as ‘Connected User Experiences and Telemetry‘.

Some caution is needed when disabling services here, though, as many of them are for inter-process communication between critical operating system components.

Just in case the telemetry feature is re-enabled by some future update, it makes sense to configure the inbound and outbound firewall rules for ‘Connected User Experiences and Telemetry‘ in the Windows Firewall advanced settings. This might also be listed in the simplified firewall menu as ‘DiagTrack‘.
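If you’d rather script both steps than click through the UI, something like the following from an elevated Command Prompt is a reasonable sketch – ‘DiagTrack‘ is the service’s short name on my build, and the rule name is arbitrary:

```shell
:: Stop the telemetry service and prevent it starting on boot
sc stop DiagTrack
sc config DiagTrack start= disabled

:: Belt and braces: block the service's outbound traffic as well.
:: DiagTrack runs inside svchost.exe, hence the service= qualifier.
netsh advfirewall firewall add rule name="Block DiagTrack" dir=out action=block program="%SystemRoot%\System32\svchost.exe" service=DiagTrack
```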

By the way, you could also do this for any applications you want to keep entirely offline.

Third-Party Protection
After the native security features are configured, the next thing to add is a dedicated third-party anti-malware product. I’ve reviewed BitDefender Total Security before, and found it an excellent product definitely worth the £30 annual subscription. I’m also thinking of giving the considerably more expensive F-Secure TOTAL a try, as I believe in supporting a company that takes a principled stand on digital rights, and the Freedome VPN service might prove very useful while travelling.

For now I’ve installed Sophos Home – I’ve followed this vendor’s work for a couple of years as a security undergraduate, and I’m very confident it provides an excellent layer of protection even though it’s a free service. Sign up for an account on the Sophos site and download the installer file (roughly 236MB). Once installed, the Sophos Home application displays a status screen.

What we get here is virus protection, Web protection and unwanted application detection. The latter should protect against spyware and adware.

Sophos Home installations are managed from the company’s Web dashboard, which has three configuration sections:

  • Virus Protection: Seems like your typical anti-malware detection system.
  • Web Category: Determine which categories of sites are allowed and which are blocked.
  • Exceptions: Set filtering exceptions for files, Web sites and applications.