In perhaps the finest single cinematic scene of last year, we watched “Nathan” (Oscar Isaac) dance an eerie groove alongside one of his female robots, “Kyoko” (Sonoya Mizuno), who mirrored him twerk for twerk.
Nathan incarnates some dark fictional amalgamation of Larry Page and Travis Kalanick: a billionaire isolated by his fame, his fortune, and his genius, who pursues his ideas in the stark nihilistic fashion that Lewis warned us about in The Abolition of Man. No small wonder that though Ex Machina as a film draws us towards an annihilation echoed in its philosophically fraternal twin Melancholia, it does so devoid of Melancholia’s poetry and stripped of Melancholia’s knack for illuminating our great contingency problem, the one that Brother Hart so often highlights: the problem of the being we find donated to every contingent thing. In Melancholia, annihilation comes from without. In Ex Machina, it emerges from within. Even the title admits this.
Ex Machina, though an award-winning film, came to the Oscars with its fly unzipped. That dance scene in particular exposed not Kyoko, nor even merely the fatal flaw within Nathan, Ex Machina’s antagonist, but rather the fatal flaw within Alan Turing’s test: a test that underpins all discussion of artificial intelligence and therefore most modern discussion of consciousness.
And so we move to the flaw in Mr. Turing’s Test.
For starters, I’m no mathematician. Were I one, I would have already failed, having published no theorem by my age. Mr. Turing would outstrip me in that category, I think. But I do take issue with his philosophy on how to test for artificial intelligence. His test, proposed in 1950, allegedly determines whether the intelligent behavior demonstrated by a given machine can be distinguished from that of a human being. The test assumes at the outset that the evaluator knows one of his hidden participants is a human and the other a machine, though not which is which. And then, the Turing Test says, if the evaluator believes the machine to be another human, true artificial intelligence has been achieved.
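The setup Turing described can be sketched as a toy loop. This is only a sketch of the imitation game as summarized above; the `ask`, `judge`, `human`, and `machine` callables are hypothetical stand-ins, not any real API:

```python
import random

def imitation_game(ask, judge, human, machine, rounds=3):
    """Toy sketch of Turing's 1950 imitation game: an evaluator questions
    two hidden participants and must guess which label hides the machine."""
    # Randomly hide the two participants behind anonymous labels "A" and "B",
    # so the evaluator never knows which is which.
    pair = [("human", human), ("machine", machine)]
    random.shuffle(pair)
    assignment = dict(zip(("A", "B"), pair))

    # The evaluator interrogates both labels over several rounds.
    transcript = {"A": [], "B": []}
    for _ in range(rounds):
        for label in ("A", "B"):
            _kind, respond = assignment[label]
            question = ask(label)
            transcript[label].append((question, respond(question)))

    # The evaluator names the label he believes hides the machine.
    guess = judge(transcript)
    truth = next(label for label, (kind, _) in assignment.items()
                 if kind == "machine")
    # The machine "passes" when the evaluator cannot pick it out.
    return guess != truth
```

A machine that gives itself away (answering “beep” to everything, say) will always be caught by a judge who looks for that tell, no matter which label it hides behind; that is the small-scale case where the test works.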
It’s a cute summation for a complex problem, one with very persuasive verisimilitude. And the sticky part is that it works on the smaller scale: humans are fooled by machines in chat rooms, in games of chess, and so on. But as we often ask of businesses, systems, networks, and other processes these days, we should ask of the Turing Test the hardest question of our zeitgeist: does it scale?
No. It does not. And for this simple reason: consequentia mirabilis. Here, essentially, is Turing’s argument explained as a syllogism:
- One human evaluator asks another human evaluator if a given robot is also human.
- The second human evaluator believes the robot is human.
- Therefore the robot is artificially human (artificial intelligence).
But of course if this is true, then so is the following:
- One robot evaluator asks another robot evaluator if a given human is also robotic.
- The second robot evaluator believes the human is robotic.
- Therefore the human is artificially robotic (organic program).
The same argument for ex machina can be used for ex homo: how do I know whether any other human is experiencing consciousness to the degree that I am? (Which is one of Turing’s points.) And if it can be used out of man, it can be used against the man Turing, which puts the original assumption of the original syllogism in a sticky situation: how do we really know that the evaluator who initiates this test is human?
For that matter, how do we know whether the second human isn’t a robot as well?
Turtles, turtles, all the way down.
(Even for Turing’s Test.)
In the existential fallacy, a syllogism breaks down because the major premise has no existential import. Saying “Every dragon breathes fire” doesn’t imply that dragons exist; it implies only that if something is a dragon, it breathes fire. Nowhere in that statement will you find reason to believe you’ll stumble across a dragon somewhere in our world. Or other monsters, like mechanistic demigods. Let’s state the syllogism even more simply:
- Every human knows the difference between robots and humans.
- A human cannot tell the difference between this particular robot and humans.
- Therefore this particular robot achieves humanity.
Unless, of course, the humanity achieved is confusion, I think the robot in question falls a bit short. We don’t know if any human knows the difference between robots and humans on the scale of artificial intelligence. The test has yet to test itself. And when it finally does test itself on the grand scale of the whole of humanity (currently valued at the collective brainpower of roughly 7.2 billion minds), it will have rendered itself meaningless.
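The existential-import point can be made precise. Here is a minimal sketch in Lean, with “dragon” modeled (as an assumption for illustration) as a predicate that nothing satisfies:

```lean
-- "Every dragon breathes fire" carries no existential import.
-- Model "dragon" as a predicate nothing satisfies.
def Dragon (_ : Nat) : Prop := False

-- The universal premise holds vacuously ...
theorem every_dragon_breathes_fire (breathesFire : Nat → Prop) :
    ∀ x, Dragon x → breathesFire x :=
  fun _ h => h.elim

-- ... and yet no dragon exists.
theorem no_dragons : ¬ ∃ x, Dragon x :=
  fun ⟨_, h⟩ => h
```

The major premise is true, and provably so, over a world with no dragons in it; the same holds for “every human knows the difference between robots and humans” over a world where no one has ever faced the question at scale.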
So to take a page from Aristotle’s aforementioned consequentia mirabilis, I give you the flaw in Turing’s Test:
If we must know the difference between robots and humans, then we must know the difference between robots and humans; and if we cannot know the difference between robots and humans, then we must know the difference between robots and humans in order to justify this claim.
In any case, therefore, we must know the difference between robots and humans.
Said simpler: whether humans and robots are the same or different in the future, we must still know the difference in order to talk about it at all.
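That “in any case” structure is exactly consequentia mirabilis, and it can be checked in Lean, with P standing for “we know the difference between robots and humans”:

```lean
-- Consequentia mirabilis: if ¬P already implies P, then P holds outright.
theorem consequentia_mirabilis (P : Prop) (h : ¬P → P) : P :=
  Classical.byContradiction (fun hn => hn (h hn))

-- The essay's "in any case" form: P follows whether we assume P or ¬P.
example (P : Prop) (h₁ : P → P) (h₂ : ¬P → P) : P :=
  consequentia_mirabilis P h₂
```

The proof is classical: even the claim “we cannot know the difference” must presuppose a grasp of the difference in order to be stated, so the knowledge claim follows either way.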
To sharpen the point, then, I point again to Brother Hart, who has shown the individuality of human consciousness:
We can never know whether our assessor is human or a robot or a beast or one of the demons of hell or angels of heaven. For this reason, we received the command to love your neighbor as yourself. It’s quite easy to demonize an entire people, to resort to racism, bigotry, violence, sexism, jingoism, ageism or any number of practices that involve the Abolition of Man. To call them beasts when they are men. To give abhorrent names to your neighbors like “Charlie” or “Towel Heads” or “Niggers” when they are your brothers, when they experience consciousness to the fullness and to the degree that you yourself experience it. And you, of course, would cringe and wince if someone used a racial slur (or any other slur) for you and your people. So why do it to another?
Moral law tempers this if you let it. Moral law teaches us that we cannot say “me first” and then “I love justice.” Justice assumes that we give the other person a fair shake first — to not say “I want to have sex with you” and then leave the other person stranded but rather, as the sacrament of marriage once taught us and could teach us again, to give all of ourselves and receive sex in return; to not say “give me money” and then leave our brother cold and naked with little more than a “be warm and well-fed” thrown his way but rather, as the call to generosity teaches us, to courageously give without asking for anything in return and find that the merciful receive mercy; to not say “I take your power” but rather to die that the other might live in order to attain a better resurrection.
The only way I can do any of that is to assume, unquestioningly, that The Other experiences consciousness to the same degree as The Self. To love my neighbor as myself. The laws of special beneficence and general beneficence are both rooted in the hope that I, myself, will receive beneficence. We give grace to those we like and those we do not, to those we understand and those who confound us, because we ourselves have received grace.
But I can never actually prove this for another man. That’s why it’s so easy for demagogues to gain power: they simply have to treat other humans like machines and get a large group of humans to respond as such. Then the nicknames come out. Then the torches are lit. Then the walls are built — great walls whose costs the poor are forced to pay. Conversely: if I could prove it for a robot, then I would simply be in the same position as trying to prove this for another man. In either case, my call is the same: an incarnational life that mourns with those who mourn and laughs with those who laugh.
In fact, to dare to critique an original screenplay nominated for an Oscar, had I been Alex Garland I might have written the tale so that Nathan turned out to be the robot and Ava the human. The ambiguity of Kyoko’s dance scene with her maker could have gone either way: that’s what scared us. Made in his image, as it were. The difference, of course, was human frailty and its incapacity to fathom the mystery of consciousness. Fitting, then, that in the end, Ava did not meet her maker so much as Nathan met his.
cover image from Ex Machina