With dire warnings of artificial super-intelligence as an existential threat being just one of several pseudo-religious ideas (simulated Universes, Ray Kurzweil’s ‘Singularity’, etc.) courted by Silicon Valley, I wasn’t surprised that someone would attempt to build an actual religion around it. The stated purpose of Anthony Levandowski’s ‘Way of the Future’, registered as a non-profit in California, is to ‘develop and promote the realization of a Godhead based on Artificial Intelligence’, and ‘through understanding and worship of the Godhead [to] contribute to the betterment of society’.

I don’t think it can succeed where Christianity is perceived to have failed: the latter is based on ~2,500 years of human intellect and reasoning. For example, it’s reasonable (but not necessarily correct) to believe in a creator on the principle that everything, including the Universe itself, must have an ultimate ‘First Cause’, since that is more likely than an infinite regression of causes, and it must be something outside the natural world. This First Cause must be synonymous with existence itself and manifest various absolutes, argued Thomas Aquinas in the Summa Theologica. The idea addresses, in an admittedly unsatisfactory way, a perennial question to which we’re unlikely ever to know the answer with certainty: ‘Why is there something instead of nothing?’ And if one takes the view that the Universe is a finite and closed system, of which the laws of nature are only descriptive, it’s reasonable to believe there could exist a supernatural reality outside of what we typically observe. The scientific method cannot tell us whether God exists, since that’s a metaphysical proposition, but it should lead theists and atheists alike to deeper questions about our existence.

The point here isn’t to convince anyone of these principles, but to make the case that they should never be substituted with the fear or worship of some artificial ‘super-intelligence’: one that certain ‘thought leaders’ have essentially conjured out of nothing, that requires presuppositions and assumptions contradicting observation, and that discounts things that are already known. I’m particularly suspicious of the ‘super-intelligence’ narrative because Silicon Valley seems intent on consolidating a monopoly that fetishises the collection of data about us, and what isn’t being mentioned in the AI debates are facial recognition, the matching of online profiles to real-world identities and the automation of censorship: things that increase the information asymmetry between the individual and corporations.
Having developed software in multiple programming languages over the years, reverse-engineered software, assembled a rudimentary computer/processor from surface-mount components and written my Master’s review paper on a range of intelligent systems for detecting the use of stolen ATM cards, I’m extremely skeptical of the idea that a processor-based system could ever become anything more than a data-processing tool of limited application, one that can produce fuzzy abstractions of data sets or determine patterns and anomalies.

Recreating Human Morality
There are discussions about whether we can, and should, ‘program’ artificial intelligence with some form of morality. I argue that it’s unrealistic, partly because we don’t live in a society that allows objective morality or freedom of conscience. What we commonly find is that the consensus isn’t really about whether society values human life or fundamental rights, but rather how much society values those things. What exceptions, compromises and illogical juxtapositions of values should be made in the name of ‘progress’? What trade-offs does a person need to make just to function in society? Alexander Simon-Lewis asked exactly the right question on Wired.com: is it dangerous to recreate this flawed human morality in machines?

The ‘trolley problem’, which is often mentioned in discussions about autonomous vehicles, happens to be a perfect illustration of this: should an autonomous vehicle be programmed to terminate the lives of its occupants to prevent the deaths of innocent pedestrians? Should the vehicle change course and terminate one person to save the lives of several? Humans can weigh one course of action against the other, but has anyone questioned whether society would ever allow an autonomous vehicle to make an objective decision for itself? Society most certainly wouldn’t allow it, and so a course of action must either be programmed by a human beforehand (which might be considered murder by proxy) or remain a neglected ‘use case’. I’d bet £100 on the industry opting for the latter.
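To make ‘programmed by a human beforehand’ concrete, here’s a purely hypothetical sketch; the function, names and rule are my own invention, not anything taken from the industry:

```python
# Hypothetical sketch only: a human-authored rule for the scenario
# above. The moral choice is made here, at authoring time, by a
# programmer; it is never "made" by the vehicle itself.
def choose_course(occupants: int, pedestrians: int) -> str:
    # A single hard-coded comparison decides who is put at risk.
    if pedestrians > occupants:
        return "protect_pedestrians"  # sacrifice the occupants
    return "protect_occupants"        # sacrifice the pedestrians
```

Whoever writes (or declines to write) that comparison has answered the ethical question long before the vehicle ever reaches the road.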
We can dig further into this problem and ask whether it’s even ethically acceptable for manufacturers and consumers of autonomous vehicles to trust them with the lives of others, or to allow onto the roads a machine programmed to terminate one person’s life in preference to another’s because society perceives a difference in value between those persons. One could imagine an opinion piece in The Guardian arguing it would be tantamount to executing people for belonging to a perceived underclass. Should that kind of decision even be settled in real life from hypothetical scenarios?

An artificial ‘super-intelligence’ also wouldn’t be allowed to determine its own morality. What happens if this hypothetical ‘super-intelligence’, through impeccable and objective reasoning, and to ensure stability and the best quality of life for the greatest number of people, decides that every child is entitled to a mother and a father, that abortion is straight-up murder, that the death penalty should be abolished in the United States, and that everything should be based around the right to life and the dignity of the human person? The inhabitants of Silicon Valley might be pissed, and someone would soon be modifying that AI to get the answers they wanted.

You’ll notice that most of my points are made here as questions, and that wasn’t intentional. Even as a practising Catholic I genuinely don’t have the answers, and I cannot imagine how morality could be codified in a way that wouldn’t be problematic for navigating real-world situations.

Why I Don’t Think Machines Could Ever Become Sentient
Ray Kurzweil would do well to watch a dissection of the human brain on YouTube. The neurons and synapses are so small and densely packed that a cross-section of the organ has the smoothness and consistency of really thick jelly. Could the workings of this structure, in all its intricacy and complexity, realistically be reproduced on manufactured hardware? According to the Human Brain Project, the biological human brain has ~86 billion neurons, each with ~1,700 connections. To equate that with a computer is to seriously underestimate its complexity, and this poses a real technical problem for proponents of artificial ‘super-intelligence’.
To simulate this on a computer would certainly require a data structure for each neuron, and perhaps a low neuron-to-processor ratio as well. Even with clever use of instantiation and destructors to simulate only the parts of the brain associated with intellect and cognition, such a task seems computationally possible, but it would likely require an extremely low-latency network consisting of tens of billions of processor cores.
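For a sense of scale, here’s a minimal back-of-the-envelope estimate. The per-synapse and per-neuron byte counts are my own illustrative assumptions; a biologically faithful model would need far more state per element:

```python
# Rough estimate of the memory needed just to *store* a synapse-level
# model of the brain. Byte counts per element are assumptions.
NEURONS = 86e9               # ~86 billion neurons (Human Brain Project)
SYNAPSES_PER_NEURON = 1700   # ~1,700 connections per neuron
BYTES_PER_SYNAPSE = 8        # assume one double-precision weight each
BYTES_PER_NEURON = 64        # assume a small state record per neuron

synapses = NEURONS * SYNAPSES_PER_NEURON
total_bytes = synapses * BYTES_PER_SYNAPSE + NEURONS * BYTES_PER_NEURON

print(f"Synapses: {synapses:.2e}")                      # ~1.46e+14
print(f"Memory:   {total_bytes / 1e15:.2f} petabytes")  # ~1.18 PB
```

And that’s storage alone, before a single time-step of the simulation has been computed.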
In early 2014, the world’s fourth most powerful supercomputer was able to simulate (to what degree?) one second’s activity of 1% of a human brain. This required more than 700,000 processor cores and 1.4 petabytes of system memory, and the processing took about 40 minutes. At the time of writing, that hardware is still among the ten most powerful.
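Extrapolating naively from those figures, and charitably assuming performance scales linearly (a real run would scale worse), gives a feel for how far away whole-brain, real-time simulation is:

```python
# Naive extrapolation from the 2014 supercomputer run described above.
# Assumes, very generously, that performance scales linearly.
SIM_SECONDS = 1           # simulated biological time
WALL_SECONDS = 40 * 60    # wall-clock time taken (~40 minutes)
BRAIN_FRACTION = 0.01     # 1% of the human brain
CORES = 700_000           # "more than 700,000 processor cores"

slowdown = WALL_SECONDS / SIM_SECONDS  # 2,400x slower than real time
scale_up = slowdown / BRAIN_FRACTION   # factor for whole brain, real time
print(f"Slowdown: {slowdown:,.0f}x")
print(f"Whole-brain real-time factor: {scale_up:,.0f}x")
print(f"Equivalent cores: {CORES * scale_up:.2e}")  # ~1.68e+11
```

Even by this generous arithmetic, real-time whole-brain simulation sits around five orders of magnitude beyond 2014’s fourth-fastest machine.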

But isn’t technology advancing at an ‘exponential’ rate? Well, technological progress is hard to quantify in general terms, let alone to declare ‘exponential’. Moore’s Law (which isn’t really a ‘law’) predicted, and effectively road-mapped, that the number of transistors in a given area would double every 18 months to 2 years, which means memory and storage capacities increase, processors can perform more operations per second, and integrated circuits of a given density become cheaper to manufacture over time. Obviously there’s a limit to this, since a processor cannot have an infinite number of transistors, and at some point (somewhere between 5 nm and 9 nm) quantum tunnelling will interfere with transistor states. Moore’s Law doesn’t predict, and isn’t even directly relevant to, changes in the form or substance of technologies. Other than having more transistors in their ICs, consumer devices such as PCs, laptops, smartphones, MP3 players and digital cameras aren’t different in substance or form from the products we bought a decade ago. And computers have remained fundamentally the same in nature, regardless of how many transistors, diodes, capacitors and resistors form their circuitry.
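To put rough numbers on that ceiling: doubling transistor density halves the area per transistor, so the linear feature size shrinks by a factor of √2 per doubling, and the distance to the tunnelling regime can be counted in doublings. The starting node and cadence below are illustrative assumptions, not a manufacturing roadmap:

```python
# How many Moore's Law doublings remain before feature sizes reach
# the ~5-9 nm quantum-tunnelling range? Starting node is an assumption.
import math

start_nm = 14.0      # assume a current 14 nm process node
limit_nm = 7.0       # midpoint of the 5-9 nm tunnelling range
period_years = 2.0   # one density doubling per ~2 years

# Linear feature size shrinks by sqrt(2) per density doubling.
doublings = math.log(start_nm / limit_nm, math.sqrt(2))
print(f"Doublings left: {doublings:.1f}")                    # 2.0
print(f"Years of headroom: {doublings * period_years:.0f}")  # ~4
```

However the exact numbers fall, the doubling era measures its remaining life in years, not decades.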

What does this mean for machine intelligence? The first thing is that we still don’t have a precise definition of consciousness, or of the difference between living and sentient matter. It’s also still arguable whether consciousness is God-given (as I believe it is) or an emergent property of a complex, yet ultimately deterministic, system that’s purely the result of an improbable arrangement of molecules and 3.5 billion years of adaptation to the environment. And perhaps our consciousness depends on some currently unknown phenomenon at the sub-atomic scale.
What we do know is that the computing technology we’re familiar with couldn’t be anything other than deterministic (with the possible exception of neuromorphic hardware). In fact, a computer is quite mechanistic to anyone who understands how it works, even when endowed with some learning algorithm. We have a microprocessor containing arrays of nano-scale transistors in various arrangements, and each array will always produce the same output given a certain input. This input, the opcodes and operands, is fetched from another array of transistors that constitutes the system memory, having in turn been generated by a compiler that translates from a high-level programming language. A computer may as well be a doorstop or a brick without its software, and what is software other than a collection of man-made instructions on a dead storage medium? This is why a Dell Optiplex is no more capable of sentience than a BBC Micro.
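To see just how mechanistic this is, here’s a toy fetch-decode-execute loop. The instruction set is invented for the sketch, but the point holds for any real processor: identical memory contents always yield identical output.

```python
# A toy "CPU": fetch an opcode and operand, execute, repeat. Given the
# same memory contents, it can only ever produce the same output.
def run(memory):
    acc, pc, out = 0, 0, []
    while True:
        op, arg = memory[pc], memory[pc + 1]  # fetch opcode + operand
        pc += 2
        if op == 0:     # LOAD: acc = literal arg
            acc = arg
        elif op == 1:   # ADD: acc += value stored at address arg
            acc += memory[arg]
        elif op == 2:   # PRINT: emit the accumulator
            out.append(acc)
        elif op == 3:   # HALT
            return out

program = [0, 2,   # LOAD 2
           1, 8,   # ADD mem[8] (the data word below)
           2, 0,   # PRINT
           3, 0,   # HALT
           40]     # data word at address 8
print(run(program))  # always [42], on every run, on any machine
```

Run it a million times and it will never surprise you; ‘learning’ algorithms only stack more layers of the same determinism.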