Steven Pinker opened his two-and-a-half-minute video on Big Think (‘Why Alpha Males Fear the Rise of Artificial Intelligence’) with a couple of very puzzling statements on gender and Greek mythology, which I can only guess was classic pseudo-intellectual bullshitting at work. Beyond that, the video doesn’t actually contain any reasoned argument against the reservations scientists and engineers have about the development of a hypothetical ‘super AI’. In Pinker’s view, a ‘super AI’ wouldn’t have the attributes that render it dangerous, and it would have some undefined ‘safeguards’ – circular reasoning, as those safeguards wouldn’t exist unless the potential dangers had been taken seriously to begin with.

Unlike Sargon of Akkad, I don’t believe an artificial ‘super intelligence’, or even general-purpose machine intelligence, is anywhere on the horizon – I’ll explain why. However, the intelligent systems technologies we already have come loaded with ethical issues. Today’s intelligent systems are mainly used for ‘big data’ and for profiling human behaviour without most people being aware of it, which already raises questions about control, oversight and surveillance applications. The facial recognition system on Facebook, where it’s impossible to prevent friends from tagging you in photos, and perhaps impossible to prevent public surveillance camera footage from being tied to your online identity, is just one relatively minor example. Whenever we draw cash from an ATM, at least one intelligent system is clocking the amounts being withdrawn, our location, the times of day we typically use an ATM, and so on. And there was that story about a system that inferred a Target customer was pregnant before her relatives knew.
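As a rough illustration of the kind of behavioural profiling I mean – a toy sketch of my own, not any bank’s actual system – flagging withdrawals that deviate from a customer’s learned habits can be done with very simple statistics over their transaction history:

```python
from statistics import mean, stdev

# Hypothetical withdrawal records for one customer: (amount, hour_of_day, location_id)
history = [
    (50, 18, "A"), (60, 19, "A"), (40, 17, "A"),
    (50, 18, "B"), (70, 20, "A"), (60, 18, "A"),
]

def is_unusual(amount, hour, location, history, z_threshold=2.5):
    """Flag a withdrawal that deviates from this customer's own profile."""
    amounts = [a for a, _, _ in history]
    hours = [h for _, h, _ in history]
    known_locations = {loc for _, _, loc in history}

    # Z-scores against the customer's own history (a crude behavioural 'profile')
    amount_z = abs(amount - mean(amounts)) / (stdev(amounts) or 1)
    hour_z = abs(hour - mean(hours)) / (stdev(hours) or 1)

    return (amount_z > z_threshold
            or hour_z > z_threshold
            or location not in known_locations)

print(is_unusual(60, 18, "A", history))   # False - fits the usual pattern
print(is_unusual(500, 3, "Z", history))   # True - large amount, odd hour, new place
```

Real systems are of course far more elaborate, but the point stands: the raw material is ordinary transaction data that most people never think of as feeding an intelligent system.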

Applied AI
There’s a lot more at play than Moore’s Law when predicting how machine intelligence might develop. The development of machine intelligence has been driven largely by the need to process the increasing volumes of data being generated, mined and stored, and the most powerful systems are used for creating abstractions of that data. In other words, there is industry demand in the area of ‘big data’, so we can guess the direction in which the technologies might develop.

Another pattern is that machine intelligence is application-specific, either by design or by adaptation. Almost all examples are simple reasoning algorithms optimised for making rational inferences from specific data sources. I don’t know of a system that both does natural language processing and plays chess, and there isn’t one that could switch between driving a car and flying a passenger jet. And we don’t currently seem to have a way to create a ‘meta intelligence’ capable of transcending a domain – so we cannot predict how our hypothetical ‘super intelligent’ system might act, since it would have to be something very different from today’s systems.
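To make the point about narrowness concrete, here is a minimal sketch (my own toy example, not any deployed product) of what most application-specific systems amount to: an inference rule tuned to one task and one data source, with nothing in it that could generalise to another domain:

```python
# A toy, domain-specific 'intelligent system': spam scoring by keyword weights.
# Everything it 'knows' is baked into one task and one data source; the same
# code cannot be repurposed for chess, driving, or anything else.
SPAM_WEIGHTS = {"winner": 2.0, "free": 1.5, "urgent": 1.2, "prize": 1.8}

def spam_score(message: str) -> float:
    """Sum the weights of known spam keywords found in the message."""
    words = message.lower().split()
    return sum(SPAM_WEIGHTS.get(w, 0.0) for w in words)

def is_spam(message: str, threshold: float = 2.5) -> bool:
    return spam_score(message) >= threshold

print(is_spam("You are a winner claim your free prize"))  # True
print(is_spam("Meeting moved to Thursday afternoon"))     # False
```

Scale the data and the model up as far as you like and you still have a narrow inference engine, not a ‘meta intelligence’ that could step outside its domain.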
