Admittedly I had never heard of the Ethics and Religious Liberty Commission, and was unaware of its Statement of Principles on artificial intelligence, until I came across Luis Granados’ opinion piece on TheHumanist.com.
Essentially the Statement of Principles lays out the evangelicals’ concerns about the potential of developments in machine intelligence to undermine the value society places on human life, fundamental rights and the dignity of the human person. If Luis Granados put his contempt for the religious aside and properly read the material, I think he’d find himself in agreement with them on many points.
‘America’s leading evangelical experts, as anxious to grab headlines as they are to make money, have now conveyed to us what God’s views are on the complex subject,’ Granados claimed.
What the authors were trying to do is equip their fellow Protestants, who (for reasons best known to themselves) take scripture as their authority, to argue against the compromise of freedoms and human dignity that might result from the way society views itself in relation to machine intelligence, or from what society is being led to believe about the capabilities of related technologies. The Statement of Principles is predicated on certain religious beliefs shared among its intended readers, and the discourse between atheists and Christians on the basics of our faith long ago passed the point of ‘let’s agree to disagree’. If Granados sees those beliefs as a valid target for ridicule and derision, why bother writing an article about it?
‘The source of this inerrant and infallible word is the Bible, written thousands of years before AI was even dreamed of. After each of the statement’s twelve articles there is a list of Bible citations—most of which have little or nothing to do with the point being made.’
Technically this is incorrect, though quoting scripture without some convincing exegesis isn’t really the best way to make a point. A substantial part of what’s being discussed here is the belief, promoted by certain ‘thought leaders’, that an artificial system could become conscious, super-intelligent and deifiable. There’s nothing fundamentally novel about this: many cultures and religions over the millennia have attributed sentience and divine powers to inanimate things and to emergent behaviours of nature that were beyond people’s understanding at the time. Likewise, the ‘simulated Universe’ idea is essentially creationism repackaged as something rational and scientific.
Our culture isn’t the first to become indifferent to the concept of objective morality, or to come unstuck trying to define social progress, and it wouldn’t be the first to worship false gods if people started taking Silicon Valley ‘thought leaders’ such as Ray Kurzweil and Anthony Levandowski seriously. Even Sam Harris ended his TED talk by referring to a potential intelligent system as a ‘god’.
What Granados labels ‘more bizarre excerpts from this Statement of Principles’ are actually his own misrepresentations of the sections he quoted. In fact, the bizarre thing here is how consistently he managed to misinterpret the source material and ignore the substance of what was actually being said. A couple of his complaints were incoherent. For example:
‘“We deny that the use of AI is morally neutral.” It isn’t? Of course it is! It’s a set of computer instructions, a bunch of zeroes and ones arranged in a particular pattern.’
And two sentences later he asserted the exact opposite:
‘But the tool itself is as morally neutral as a shovel—it’s the human use of the tool that allows space for bias and discrimination to creep in.’
‘“We reject the materialist and consequentialist worldview that understands medical applications of AI as a means of improving, changing, or completing human beings.” There is a ton of money being spent right now on the prospect of linking human brains to outside data processing. Such capacity could unquestionably “improve” or “change” what it means to be a human being. […] For anyone to rule out the possibilities that could arise from a machine-brain interface based on what some theocrat wrote two thousand years ago is ludicrous.’
This counterpoint dismisses the Platonic and neo-Platonic philosophy that preceded the New Testament, the scholastic arguments for the immateriality of intellect and free will, and foundational principles of computer science – things that feature heavily in modern-day ‘apologetics’. One school of thought holds that the ‘rational soul’ – the thing that gives us free will and intellect – is metaphysical: coupled with the human brain rather than emergent from it, too abstract to be coupled with any form of technology, and beyond the capacity of humans to create.
At the risk of belabouring points I’ve made here before, I also argue that a computer-based system can never be anything other than deterministic – it couldn’t be a computer otherwise – and consequently technology could never do anything outside the bounds of what a human could express as an algorithm. Stochastic neural networks might seem an exception to the rule, but their ‘randomness’ typically comes from a seeded pseudo-random generator, which is itself deterministic, and in any case I’m not seeing how intellect or free will could emerge from them. I think it impossible that anything man-made could improve or complete the human person to any greater degree than the technologies we have now.
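As a minimal sketch of that determinism argument – in Python, with entirely made-up names, not anyone’s actual implementation – consider a ‘stochastic’ neuron whose noise is drawn from a seeded pseudo-random generator. Run it twice with the same seed and the outputs are identical:

    import random

    # Hypothetical toy, not any real library: a 'neuron' whose
    # noise comes from a pseudo-random generator.
    def noisy_neuron(inputs, weights, rng):
        activation = sum(i * w for i, w in zip(inputs, weights))
        return activation + rng.gauss(0, 0.1)  # add 'random' noise

    def run(seed):
        rng = random.Random(seed)  # the sole source of 'randomness'
        return [noisy_neuron([1.0, 0.5], [0.3, -0.2], rng)
                for _ in range(3)]

    # Same seed, same 'stochastic' outputs, on every single run.
    assert run(42) == run(42)

The ‘stochastic’ behaviour is a pure function of the inputs and the seed – the determinism argument in miniature.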
And this one:
‘“While we are not able to comprehend or know the future, we do not fear what is to come because we know that God is omniscient and that nothing we create will be able to thwart His redemptive plan for creation.” Pollyanna couldn’t have said it any better. In truth, there’s every reason to be terrified of the power of AI. In the wrong hands, or without careful monitoring, AI could ruin the lives of billions of people. Anyone who claims that a spirit in the sky will keep that from happening is a simpleminded menace.’
Yes, technology in the wrong hands has the potential to ruin billions of lives, and that’s the whole point of the Statement of Principles. Nowhere in the Statement of Principles has anyone made reference to a ‘spirit in the sky’, let alone claimed such an imaginary entity would prevent that from happening. Anyone who claims Christians worship a ‘sky spirit’ or ‘sky fairy’ is either a) misrepresenting the faith to score a point, or b) genuinely ignorant of the basics of Christianity.
What the evangelicals wrote immediately before the statement Granados quoted was this:
‘God alone has the power to create life, and no future advancements in AI will usurp Him as the Creator of life. The church has a unique role in proclaiming human dignity for all and calling for the humane use of AI in all aspects of society.’
I can imagine this would come across to an atheist as mere Bible-bashing denunciation of technology, but in the warm incubator of cults that is Silicon Valley, there are people who really believe artificial ‘superintelligence’ – or even the mere imitation of it – is something that should be worshipped.
Idolatry is considered a grave sin because it dehumanises, asserting the supremacy of inanimate things over humans, and usually those inanimate things are proxies for another party seeking to manipulate and exploit some social group. People die because they invest everything they have in ‘faith healers’, the ‘Prosperity Gospel’ and various pseudo-scientific alternatives to evidence-based medical treatments.
Could we imagine the amount of suffering and injustice there would be in a society that believed artificial things and human life were of equivalent value, and that deferred moral responsibility to an artificial ‘god’ promoted by some corporation as an infallible authority? And the granting of rights by the state to machines would necessitate limitations on human freedoms.
‘While devoting most of their text to meaningless platitudes, the learned evangelicals manage to ignore entirely the two most important ethical issues facing humanity as we stumble into an AI-dominated world. The first is the question of explainability—the importance of AI being set up in such a way that humans can figure out how it makes the decisions it does.’
Aside from the irony of this statement coming from someone who devoted too much of his opinion piece to the usual juvenile ad hominems while again ignoring the substance of the source material, I’d say that explainability is a superficial ideal, because explainability in practice would amount to something along the lines of: ‘These are the finer details of what we’re going to impose on society, whether you like it or not. Screw what Joe Average thinks.’ It’s naive to think we’d actually get to decide, given how powerful and unaccountable the Silicon Valley corporations are, given their close relationship with the state, and given that the ‘startup culture’ there exists to feed the expansion of those corporations. It’s public knowledge how facial recognition systems, IP cameras and other mass surveillance technologies work, to give a few examples, yet we didn’t get to decide how they were used to violate our privacy.
‘The second is the question of “Who owns AI?” Unless something major changes about how our social systems finance capital growth, the tiny proportion of people who own most of the capital today will own an even greater share of it in the future, and the rest of us are going to own squat. When you consider how expensive it is likely to be to create the machine-brain interface mentioned above, the capacity of a handful of ultra-wealthy individuals to turn themselves into a super-species that doesn’t need the rest of us, as Yuval Noah Harari has warned, is more than a bit unnerving.’
This has already been addressed at several points throughout the Statement of Principles, and the question itself is easy enough to answer: the Silicon Valley corporations, along with Microsoft and IBM, since they have the resources to develop the most advanced technologies and to buy out the firms that develop new ideas, consolidating their monopolies. Would it matter, though? Machine intelligence is already integrated into the lives of the general population, often in ways we’re not aware of. I have a digital/Internet radio that ‘learns’ which stations I listen to and ranks them by frequency. Many smartphones can detect human faces in an image. Computer games include AI components, as do some high-end development environments and anti-malware systems. If this is the current and future manifestation of machine intelligence in the real world, maybe we do get a choice.
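Incidentally, that kind of ‘learning’ is probably nothing grander than a frequency count. A hypothetical sketch – I have no idea what the radio’s actual firmware does:

    from collections import Counter

    # Guesswork, not the radio's real code: rank stations by how
    # often they've been listened to.
    listening_log = ['BBC Radio 4', 'Jazz FM', 'BBC Radio 4',
                     'Classic FM', 'BBC Radio 4', 'Jazz FM']

    for station, plays in Counter(listening_log).most_common():
        print(station, plays)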