
The Flaw in Turing’s Test

In perhaps the finest single cinematic scene last year, we watched “Nathan” (Oscar Isaac) dance an eerie groove alongside one of his female robots, “Kyoko” (Sonoya Mizuno), who mirrored him twerk for twerk.

Nathan incarnates some dark fictional amalgamation of Larry Page and Travis Kalanick — a billionaire isolated by his fame, his fortune, and his genius, who pursues his ideas in the stark nihilistic fashion that Lewis warned us about in The Abolition of Man. No small wonder that though Ex Machina as a film draws us towards an annihilation echoed in its philosophically fraternal twin Melancholia, it does so devoid of Melancholia’s poetry and stripped of Melancholia’s knack for illuminating our great contingency problem, the one that Brother Hart so often highlights: the problem of the being we find donated to every contingent thing. In Melancholia, annihilation comes from without. In Ex Machina, it emerges from within. Even the title admits this.

Ex Machina, though an award-winning film, came to the Oscars with its fly unzipped. That dance scene in particular exposed not Kyoko, nor even merely the fatal flaw within Nathan, Ex Machina’s antagonist, but rather the fatal flaw within Alan Turing’s test — a test upon which all discussion of artificial intelligence, and therefore most modern discussion of consciousness, is predicated.

And so we move to the flaw in Mr. Turing’s Test.

For starters, I’m no mathematician. Were I one, I would have already failed, having published no theorem by my age. Mr. Turing would outstrip me in that category, I think. But I do take issue with his philosophy on how to test for artificial intelligence. His test, developed in 1950, allegedly determines whether or not the intelligent behavior demonstrated by a given machine can be distinguished from that of a human being. The test assumes at the outset that an evaluator knows one of his two hidden participants to be a human and the other a machine, though not which is which. And then, the Turing Test says, if the human evaluator judges the machine to be the human, true artificial intelligence has been achieved.
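To make the setup concrete, here is a minimal sketch of the imitation game in Python. Everything in it is my own illustration rather than Turing’s specification: the single canned question, the stand-in answers, and the coin-flip evaluator are assumptions chosen only to show the shape of the protocol.

    import random

    def human_answer(question: str) -> str:
        # Stand-in for the hidden human participant's reply.
        return "I suppose it depends on the weather."

    def machine_answer(question: str) -> str:
        # Stand-in for the machine's reply, written to imitate the human's.
        return "I suppose it depends on the weather."

    def imitation_game() -> bool:
        """One round: True if the evaluator mistakes the machine for the human."""
        # The evaluator knows one hidden participant is human and one is a machine,
        # but not which label belongs to which.
        participants = [("A", human_answer), ("B", machine_answer)]
        random.shuffle(participants)
        transcript = {label: reply("What do you make of a summer's day?")
                      for label, reply in participants}
        # A toy evaluator: faced with indistinguishable answers, it can only guess.
        guessed_human = random.choice(sorted(transcript))
        actual_machine = next(label for label, reply in participants
                              if reply is machine_answer)
        return guessed_human == actual_machine

    if __name__ == "__main__":
        trials = 1000
        fooled = sum(imitation_game() for _ in range(trials))
        # If the machine is truly indistinguishable, the evaluator is fooled
        # roughly half the time; that guesswork is all "passing" amounts to here.
        print(f"Machine mistaken for the human in {fooled} of {trials} rounds.")

Notice what the loop never checks: whether the evaluator doing the judging is itself human. That gap is where the rest of this essay lives.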

It’s a cute summation for a complex problem, one with very persuasive verisimilitude. And the sticky part is that it works on the smaller scale: humans are fooled by machines in chat rooms, in games of chess, and so on. But as we often ask of businesses, systems, networks, and other processes these days, we should ask of the Turing Test the hardest question of our zeitgeist: does it scale?

No. It does not. And for this simple reason: consequentia mirabilis. Essentially, here is Turing’s argument, explained as a syllogism:

  1. One human evaluator asks another human evaluator if a given robot is also human.
  2. The second human evaluator believes the robot is human.
  3. Therefore the robot is artificially human (artificial intelligence).

But of course if this is true, then so is the following (I sketch both shapes formally below):

  1. One robot evaluator asks another robot evaluator if a given human is also robotic.
  2. The second robot evaluator believes the human is robotic.
  3. Therefore the human is artificially robotic (organic program).
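Set side by side, the two arguments share a single shape. Here is a rough formalization, with symbols that are my own shorthand rather than Turing’s: let $B_a(\varphi)$ mean “evaluator $a$ believes $\varphi$”, let $H(x)$ mean “$x$ is human”, and let $R(x)$ mean “$x$ is robotic”. Then both syllogisms run on the same inference:

$$\frac{B_{e}\big(H(m)\big)}{H_{\text{artificial}}(m)} \qquad\text{and, mirrored,}\qquad \frac{B_{r}\big(R(h)\big)}{R_{\text{artificial}}(h)}$$

where $e$ is the human evaluator judging the machine $m$, and $r$ is the robot evaluator judging the human $h$. Nothing in the inference rule itself privileges the first reading over the second; that symmetry is the whole trouble.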

The same argument for ex machina can be used for ex homo — how do I know if any other human is experiencing consciousness to the degree that I am? (Which is one of Turing’s points.) And if it can be used out of man, it can be used against the man Turing, which puts our original assumption for the original syllogism in a sticky situation: how do we really know if the evaluator who initiates this test is human?

For that matter, how do we know whether the second human isn’t a robot as well?


Turtles, turtles, all the way down.

(even for Turing’s Test.)

 

In the existential fallacy, a syllogism breaks down because the major premise has no existential import. Take “Every dragon breathes fire”: it doesn’t imply that dragons exist; it only implies that anything which is a dragon breathes fire. Nowhere in that statement will you find reason to believe you’ll stumble across a dragon somewhere in our world, or any other monster like a mechanistic demigod. Let’s state the syllogism even more simply:

  1. Every human knows the difference between robots and humans.
  2. A human cannot tell the difference between this particular robot and humans.
  3. Therefore this particular robot achieves humanity.

Unless, of course, the humanity achieved is confusion, I think the robot in question falls a bit short. We don’t know if any human knows the difference between robots and humans on the scale of artificial intelligence. The test has yet to test itself. And when it finally does test itself on the grand scale of the whole of humanity (currently valued at the collective brainpower of roughly 7.2 billion minds), it will have rendered itself meaningless.
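In standard notation, the dragon example runs like this (the predicate names are mine, purely for illustration), and a universally quantified premise simply never yields the existence of anything:

$$\forall x\,\big(\mathrm{Dragon}(x) \rightarrow \mathrm{BreathesFire}(x)\big) \;\not\models\; \exists x\,\mathrm{Dragon}(x)$$

The same gap sits under the syllogism above:

$$\forall x\,\big(\mathrm{Human}(x) \rightarrow \mathrm{TellsRobotsFromHumans}(x)\big) \;\not\models\; \exists x\,\big(\mathrm{Human}(x) \wedge \mathrm{TellsRobotsFromHumans}(x)\big)$$

The major premise can be granted for free precisely because, like the dragons, it has never been shown to hold of anyone.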

So, to take a page from Aristotle’s aforementioned consequentia mirabilis, I give you the flaw in Turing’s Test:

If we must know the difference between robots and humans, then we must know the difference between robots and humans; and if we cannot know the difference between robots and humans, then we must know the difference between robots and humans in order to justify this claim.

In any case, therefore, we must know the difference between robots and humans. 

Said more simply: whether humans and robots turn out to be the same or different in the future, we must still know the difference in order to talk about it at all.
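Formally, consequentia mirabilis is the classical theorem that a claim which follows even from its own denial must be true. Writing $K$ for “we must know the difference between robots and humans” (my own abbreviation), the argument above takes this shape:

$$(K \rightarrow K) \;\wedge\; (\neg K \rightarrow K) \;\models\; K$$

The first conjunct is trivial; the second is the claim just made, that even denying $K$ requires $K$ in order to justify the denial. A case analysis on $K$ then delivers $K$ either way.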

If you’re interested in exploring this concept in a fictional world, I would highly recommend signing up for my mailing list — my first novel, released later this year, will begin exploring these themes as the first in a trilogy:

 


To put a finer point on it, then, I turn again to Brother Hart, who has shown the individuality of human consciousness:

We can never know whether our assessor is human or a robot or a beast or one of the demons of hell or angels of heaven. For this reason, we received the command to love your neighbor as yourself. It’s quite easy to demonize an entire people, to resort to racism, bigotry, violence, sexism, jingoism, ageism, or any number of practices that involve the Abolition of Man. To call them beasts when they are men. To give abhorrent names to your neighbors like “Charlie” or “Towel Heads” or “Niggers” when they are your brothers, when they experience consciousness to the fullness and to the degree that you yourself experience it. And you, of course, would cringe and wince if someone used a racial slur (or any other slur) for you and your people. So why do it to another?


Moral law tempers this if you let it. Moral law teaches us that we cannot say “me first” and then “I love justice.” Justice assumes that we give the other person a fair shake first — to not say “I want to have sex with you” and then leave the other person stranded but rather, as the sacrament of marriage once taught us and could teach us again, to give all of ourselves and receive sex in return; to not say “give me money” and then leave our brother cold and naked with little more than a “be warm and well-fed” thrown their way but rather, as the call to generosity teaches us, to courageously give without asking for anything in return and find that the merciful receive mercy; to not say “I take your power” but rather to die that the other might live in order to attain a better resurrection.

The only way I can do any of that is to assume, unquestioningly, that The Other experiences consciousness to the same degree as The Self. To love my neighbor as myself. The Law of Special Beneficence and the Law of General Beneficence are both rooted in the hope that I, myself, will receive beneficence. We give grace to those we like and those we do not, to those we understand and those who confound us, because we ourselves have received grace.

But I can never actually prove this for another man. That’s why it’s so easy for demagogues to gain power: they simply have to treat other humans like machines and get a large group of humans to respond as such. Then the nicknames come out. Then the torches are lit. Then the walls are built — great walls whose costs the poor are forced to pay. Conversely: if I could prove it for a robot, then I would simply be in the same position as trying to prove it for another man. In either case, my call is the same: an incarnational life that mourns with those who mourn and laughs with those who laugh.

In fact, to dare to critique an original screenplay nominated for an Oscar, had I been Alex Garland, I might have written the tale so that Nathan turned out to be the robot and Ava the human. The ambiguity of Kyoko’s dance scene with her maker could have gone either way: that’s what scared us. Made in his image, as it were. The difference, of course, was human frailty and its incapacity to fathom the mystery of consciousness. Fitting, then, that in the end, Ava did not meet her maker so much as Nathan met his.



cover image from Ex Machina


