Turing for Dummies (AI) — Part 1

Alan M. Turing (public domain image)

Alan M. Turing. You know this guy, right? He is widely considered the father of computer science and AI. In his free time, he helped end WW2 by leading the effort to break the Nazis' 'Enigma' cipher, allowing the Allies to read a huge portion of the encrypted messages the Germans were using to communicate.

The Enigma Machine (public domain image)

Well, it turns out he, luckily, left behind "a couple" of works of art, tackling topics ranging from mathematics (he got his PhD 🎓 in mathematics from Princeton) through cryptography and computer science to AI and philosophy.

One of those had been on my backlog for a long, long time: Turing's seminal AI paper from 1950, titled "Computing Machinery and Intelligence", where he introduced the well-known Turing Test, or The Imitation Game as he called it.

I finally found some time to read it through thoroughly and, well, me being me, soon afterward I found myself reading all of the follow-up work, spanning 1980 to 2014, that came either as an objection to his paper, like the famous "Chinese Room" thought experiment, or as an extension. Movies, podcasts, blogs, you name it: everything I could get my hands on.

This blog is an attempt to try and distill the vast amount of knowledge and ideas that this paper helped kick-start and hopefully provide you with something valuable.

I'm going to structure this series of blogs into 2 parts:

  1. Understanding Turing's seminal AI paper (this post)

  2. Covering the main objections and extensions that followed it

Let’s start! A💻 B💻 C💻 (you’ll later understand what I did here)

Personal update: I just created a monthly AI newsletter and a Discord community! Subscribe/join those to keep in touch with the latest & greatest AI news if that’s something you care about!

Computing Machinery and Intelligence

In the quest for understanding and engineering intelligence, the natural questions that must arise are the ones about thinking, understanding, attention, consciousness*, memory, and so on: concepts we stack under the umbrella term cognition.

*- consciousness may or may not be considered part of cognition, depending on whom you ask

Because, you see, in science, when you don't know something, you create an umbrella term and throw your dirty laundry under it. But that's not just for fun, or scientists trying to exert their intellectual supremacy; it serves a purpose. It's useful to name stuff, because we need to communicate.

Anyways, we don’t exactly know how these work in humans/animals or even plants, how they are interrelated, and which of them cause others.

What is “thinking”? (Photo by frank mckenna on Unsplash)

Is consciousness needed for thinking and true understanding?

How and where does consciousness appear? Is it a product of pure computation, or is it something deeper, some subtle interplay between the brain (hardware) and the mind that gives birth to it? Quantum effects? Microtubules?

Do we understand things or do we just “get used to” them?

Questions like those. But in order to avoid an infinite loop of trying to explain stuff we don't know using other concepts that we don't understand, Turing simply devised a game, a benchmark. His paper opens like this:

I propose to consider the question ‘Can machines think?’. This should begin with definitions of the meaning of the terms ‘machine’ and ‘think’.

Such an amazing opening! And being aware that this would only lead to a futile, purely impractical debate, both at the moment in history he lived in and even today in 2020, he went on to say:

Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.

And so we come to The Imitation Game!

The Imitation Game (Turing Test)

The game goes as follows.

You have a judge (who may be of either sex) and 2 subjects: a man and a woman. They are isolated from each other, and the only way they communicate is via typed (not even handwritten, as handwriting could expose the sex of the subject) textual information.

The goal of the man is to try and trick the judge into thinking that he is a woman. The goal of the woman is to help the judge and explain to him that she is actually the woman. The goal of the judge is to correctly guess who’s a man and who’s a woman.

Now swap the man with a machine*. If the machine manages to trick the judge, as often as the man does, it wins the game and thus passes the Turing Test!

*- later I'll be more explicit about what Turing meant by 'machine'; tl;dr: a digital computer

Turing-Test (image borrowed from Jaime Zornoza’s blog with his permission)

Nowadays the game is usually stated as having a judge, a human, and a computer, where the goal of the computer is to trick the judge into thinking that 'it' is a human. But I think it matters whether the judge is aware that one of the subjects is a computer: that knowledge introduces a bias into the game.

Because once you know that there is a computer somewhere in there, then as soon as you notice a mistake, however small but weird, you won't ascribe it to a human, and the game is over. If you didn't know that a computer was playing the game, you might think to yourself, "huh, silly humans… we do say weird stuff sometimes".

All of this still seems like a vague game, so let’s make some predictions!

Turing’s Prediction (1950):

I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70% chance of making the right identification after 5 minutes of questioning.

Note: 10⁹ binary digits is 1 Gb (gigabit, not gigabyte) of memory, for all of you out there.
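For perspective, here's the back-of-the-envelope conversion of that figure (a tiny sketch, nothing more):

```python
# Turing's predicted storage capacity: about 10^9 binary digits (bits).
bits = 10 ** 9
total_bytes = bits // 8              # 8 bits per byte
megabytes = total_bytes / 10 ** 6    # decimal megabytes
print(f"{bits} bits = {total_bytes} bytes = {megabytes:.0f} MB")
```

So the machine Turing had in mind holds about 125 MB, less memory than a few minutes of smartphone video, and the prediction still aged reasonably well.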

Well, actually not bad! I’d dare say that if you truly took an average interrogator Google’s Meena wouldn’t have any problems keeping the conversation going for at least 5 minutes.

But we’re still nowhere near figuring out or engineering a true intelligence.

We're basically nailing the pattern-recognition/perception part of the equation (converting sensory streams into patterns/concepts; the other part being the aforementioned cognition), and that's a big leap forward.

Note on perception: think of perception as a mechanism by which we convert multi-dimensional sensory input (visual and auditory information coming from our sensors: eyes, ears, etc.) into lower-dimensional concepts (I see a pattern that looks like a bird, and I'd be able to recognize a similar one later on, but I still don't know what a bird is, nor that it can fly or take a poop onto an unsuspecting fellow human if it's a pigeon). You still need cognition to make any sense of those patterns, to make a plan and produce some actions, like taking a selfie with the bird (why not, it's 2020, InternalVomitException).

perception and the pooping pigeons

I think even Richard Feynman would be damn impressed by our capability to solve some of the pattern recognition problems like the ones present in computer vision using modern deep learning (neural nets) techniques, take a look at this awesome and informative video:

Richard Feynman on whether or not we can engineer intelligent machines

Khm, back to Turing. Let's give a bit more attention to his "understanding" of the words 'think' and 'machine'.

The “Think” Part

May not machines carry out something which ought to be described as thinking but which is very different from what a man does? This objection is a very strong one, but at least we can say that if, nevertheless, a machine can be constructed to play the imitation game satisfactorily, we need not be troubled by this objection.

Basically, we probably won't even have to figure out thinking before we make computers capable of doing the stuff we want them to do. Airplanes don't fly exactly like birds, nor do the fastest cars run like cheetahs, but who cares? The goal there was speed, and we did it better than nature ever could.

Cars don’t run like cheetahs, but who cares? (Photo by Magda Ehlers from Pexels)

On the other hand, if we wanted to make them capable of flying/driving inside caves, forests, or some other more challenging environments, then we would have to redesign them, that's for sure.

There are always tradeoffs in engineering. We find inspiration in nature but we don’t need to copy-paste it into our technology.

The “Machine” Part

Turing is very cautious not to allow just any kind of engineering feat to be used in the game. He anticipated that it might become possible to "engineer" a human from a single cell using genetic engineering, and that such a "machine" wouldn't be a fair participant in the game.

He clearly states that by the word machine he actually means a digital computer.

A modern (electrical) digital computer (Photo by Tianyi Ma on Unsplash)

It’s important to note that the digital computer is a broader term than the electrical digital computer (uses electricity to do all of the calculations) that we nowadays simply call a computer.

Digital computers can also be mechanical, like the famous Babbage’s Analytical Engine, acoustic, optical (so-called photonic computing), etc. Those are “just” the implementation details. Electricity is simply convenient because it’s faster and easier to manipulate than other technologies. Electrical computers can’t do anything more than their slower mechanical cousins — they “just” do it faster.

Mechanical digital computer — Babbage’s Analytical Engine (public domain image)

Digital computers are universal computing machines meaning they can mimic any discrete-state machine. Or as Turing put it:

It is unnecessary to design various new machines to do various computing processes.

And this is something we’re quite used to nowadays, but it wasn’t always obvious. Back in the day, they had a special machine for everything. Now you just pull out your smartphone and you’ve got a calendar, notebooks, pencils, music, etc.
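To make "mimic any discrete-state machine" concrete, here is a minimal sketch: one general-purpose simulator that can run any machine you hand it as a transition table (the toggle machine below is my own toy example, not Turing's):

```python
def run(transitions, state, inputs):
    """Simulate any discrete-state machine given its transition table:
    a dict mapping (current_state, input_symbol) -> next_state."""
    for symbol in inputs:
        state = transitions[(state, symbol)]
    return state

# A toy 2-state machine: each 'tick' toggles a light on/off.
toggle = {("off", "tick"): "on", ("on", "tick"): "off"}
print(run(toggle, "off", ["tick", "tick", "tick"]))  # -> on
```

The point is that `run` itself never changes; swapping the table swaps the machine, which is exactly the universality Turing is talking about.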

The strong underlying hypothesis that Turing is making is the following:

The human mind is roughly a discrete-state machine, so we can mimic it using these digital computers. Mind = software, brain = hardware, you know. Humans operate by a "book of rules" (computer instructions) the same way computers do; it's just that there are so many rules, and they are so complex, that it's really hard to notice. It follows that thoughts have a computational nature.

Conway's Game of Life is one amazing example of a really complex-looking phenomenon that actually emerges from really simple rules. Maybe we, although we look so complex, also emerged from really simple rules? Huh.
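Those simple rules fit in a few lines. Here's a minimal sketch of one Game of Life update step over a set of live cells (a live cell survives with 2 or 3 live neighbours; a dead cell is born with exactly 3):

```python
from collections import Counter

def step(live):
    """Advance Conway's Game of Life by one generation.
    `live` is a set of (x, y) coordinates of live cells."""
    neighbours = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1)
                         for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
    return {cell for cell, n in neighbours.items()
            if n == 3 or (n == 2 and cell in live)}

# The "blinker": a horizontal bar that flips to a vertical bar and back.
blinker = {(0, 1), (1, 1), (2, 1)}
print(sorted(step(blinker)))  # -> [(1, 0), (1, 1), (1, 2)]
```

Gliders, oscillators, even whole computers emerge from nothing more than that one update rule.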

Mandelbrot’s fractal is another (a really simple rule produced this!):

Mandelbrot set/fractal (public domain image)
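That "really simple rule" is literally one line: iterate z → z² + c and keep c if z never escapes. A minimal membership check (the iteration cap of 100 is an arbitrary choice for illustration):

```python
def in_mandelbrot(c, max_iter=100):
    """Return True if c appears to belong to the Mandelbrot set,
    i.e. the iteration z -> z*z + c stays bounded (|z| <= 2)."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:   # escaped: c is definitely outside the set
            return False
    return True

print(in_mandelbrot(0))   # -> True  (z stays at 0 forever)
print(in_mandelbrot(1))   # -> False (0, 1, 2, 5, ... escapes quickly)
```

Sweep `c` over a grid of complex numbers and plot the results, and the famous fractal appears.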

So the hypothesis may not be as crazy as it first appears, even today.

To reiterate: computers just need to have enough memory (again, under the assumption that the mind can be approximated by a discrete-state machine) and we can mimic humans.

Turing was aware that he made a lot of conjectures he couldn't prove, so he entertained himself with the idea of what arguments others might come up with to point out the fallacies in his theory.

He came up with 9 counterarguments to his own theory and tried to refute all of them. I'm going to paraphrase them and try not to introduce logical errors while doing so. Let's dig in!

Possible Objections Turing anticipated

1. The Theological Objection

Objection: The soul is necessary for thinking, and God gave a soul only to men and women.

Turing's reasoning goes something like this: theological arguments are not very strong, as they are based on faith. You don't question them, you just accept them. Muslims believe that God gave souls only to men. So who's right, Christians or Muslims? Probably neither.

Throughout human history, we've proven many of those religious ideas wrong, like the geocentric view of the universe, overturned by Copernicus, Galileo, and Kepler. And I agree 100%.

The theological objection (Photo by Aaron Burden on Unsplash)

2. The ‘Heads in the Sand’ Objection

Objection: The idea of machines thinking is too scary, so it’s not possible. Only humans can think! We’re special.

These first 2 arguments are really weak, so Turing didn't give them much attention. What can you even say to people who object like this? Turing was quite creative with this one:

I do not think that this argument is sufficiently substantial to require refutation. Consolation would be more appropriate: perhaps this should be sought in the transmigration of souls.

This argument usually comes from the same people that use the theological argument. Hence he is suggesting that they can have their supremacy conserved if they go fully cyborg by porting their souls to machines. 😆

The ‘Heads in the Sand’ objection (Photo by Sharad Bhat on Unsplash)

3. The Mathematical Objection

Objection: There are things that even the most powerful digital computers (ones with infinite capacity/memory) cannot compute and thus cannot give a correct answer to. This is a well-known fact in the fields of theoretical computer science and logic.

Turing basically says that, although he is aware of such limitations, there is no proof that the human intellect is not subject to the same kind of limitations.

My opinion is that this is not something we’d even test for or that we test for today in order to see if the computer passes the Turing Test. Could you imagine, for example, giving subjects some program-input pair, and asking them whether it will eventually stop/halt?* That’s just silly, we’re not testing logicians nor mathematicians in this test — and even they couldn’t give a correct answer to those types of questions.

*- the so-called Halting Problem. In short, without getting into what Turing machines are: no general procedure can correctly answer this type of question for every program-input pair. The problem is said to be undecidable over Turing machines.
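The impossibility argument itself is short enough to sketch in code. Suppose, hypothetically, that a perfect `halts` checker existed; then we could write a program that does the opposite of whatever `halts` predicts about it (this is a sketch of the standard diagonalization argument, not a runnable oracle; no such oracle can be implemented):

```python
def halts(f):
    """Hypothetical oracle: True iff calling f() would eventually stop.
    The construction below shows why no such program can exist."""
    raise NotImplementedError("no general halting checker is possible")

def paradox():
    if halts(paradox):   # oracle says "paradox halts"?
        while True:      # ...then loop forever, contradicting it.
            pass
    # Oracle says "paradox loops forever"? Then we return, contradicting it.
```

Either answer `halts(paradox)` could give is wrong, so a general `halts` cannot exist; that, informally, is the undecidability mentioned in the footnote.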

The mathematical objection (Photo by Antoine Dautry on Unsplash)

If you’re interested in better understanding this part:

I’d like to hear your opinion if you’ve read any of these books!

And I’ve watched these while I was still an undergrad student, just go and binge-watch everything you can find on Turing on Computerphile’s channel:

Turing machines — Computerphile

4. The Argument from Consciousness

Objection: Even if computers exhibit creativity it’s not the same thing because they are not conscious nor do they feel emotions.

Turing basically says: how do you know? If computers were actually able to do these amazing things (which they currently are not*) and win the game, how would you know that they are not conscious?

In a more extreme view, called the solipsist view, you're really not sure that I'm conscious; you only know that you're conscious (or do you?). You just ascribe consciousness to other humans because they appear to have it, that's it. It's basically an implicit application of inductive reasoning with a single sample: you. Do you know what the next element in this array is: 23?

*- They appear to be getting closer though, I’ll mention some amazing state-of-the-art AI in the II part.

The argument from consciousness (Photo by Mattia Faloretti on Unsplash)

5. Arguments from Various Disabilities

Objection: Yeah sure they can do all those things but they can’t do X!

X = {have a sense of humor, fall in love, learn from experience, do something really new, …}

Turing says that these are mostly based on the principle of inductive reasoning: you look at current and past computers and conclude that they'll never be able to do X.

But we have solved some of these since 1950! Some chatbots do exhibit some amount of humor. Computers definitely learn from experience; there's a whole field I'm really passionate about called machine learning! And oh boy, can they do new things: think of Google's DeepDream, DeepMind's AlphaGo/AlphaZero/AlphaStar… I'll go into more detail in the II part!

The argument from various disabilities (Photo by Yomex Owo on Unsplash)

6. Lady Lovelace’s Objection

Objection: Computers can't create; they can only do the stuff that we program them to do, i.e. only the things that we already know how to do.

This one is really important as it was the main argument of Searle’s seminal paper from 1980 — where he presented the so-called “Chinese room” argument. It also inspired an alternative to Turing’s Test (TT) the so-called Lovelace Test (LT). I’ll cover these more in the II part!

Turing reformulates this one as:

A better variant of the objection says that a machine can never ‘take us by surprise’. This statement is a more direct challenge and can be met directly.

And he goes on to explain that computers do indeed surprise him, but his argument is really silly. He said that, programming in his hurried fashion, he introduces errors, and once he sees the result he is surprised because it doesn't make any sense.

Ah, developers know the struggle! Stay strong Turing! But he also mentioned learning as a way to refute this objection and he was right.

I think that this argument was proven wrong with the arrival of machine learning/deep learning. Grandmasters are learning how to play games of chess, Go, etc. from computers! Just ask Magnus Carlsen or Lee Sedol.

Nowadays grandmasters are learning chess from chess engines (Photo by Randy Fath on Unsplash)

7. Argument from Continuity in the Nervous System

Objection: The brain is not a discrete-state machine.

Turing acknowledges this, but his assumption is that a discrete-state machine can still approximate the brain closely enough for the purposes of the game.

It's a tough one. Take a look at this video where Roger Penrose talks about microtubules and quantum effects and hints against the computational nature of our minds (i.e. hardware matters: you cannot plug the software into just any digital computer and expect it to produce cognition and consciousness).

Roger Penrose — Physics of Consciousness

8. The Argument from Informality of Behaviour

Objection: There is no way to prepare the machine for every possible combination that may appear in the real-world by explicitly programming it.

Turing believes that we just haven't searched long enough: if we keep searching, we'll figure out more and more of these rules, but never all of them. So in a way he is actually supporting this objection, as he is not giving a practical way to engineer beyond this limitation.

My personal opinion is that ML is a way to solve this and that autonomous cars will be the first real-world application to refute this one.

9. The Argument from Extra-Sensory Perception

Objection: If somebody has telepathic abilities, they can figure out, say, which card the judge is holding, whereas the machine can only guess at random. Similarly for precognition, clairvoyance, and psychokinesis.

Back in his time, there seemed to be allegedly significant statistical evidence for telepathy: some subjects, asked to name cards the interrogator held, allegedly performed better than random chance, even though they couldn't see the card or have any other way of knowing it. Far from perfect, but better than random.

Turing sweeps this one under the rug by saying that we should just put subjects into a telepathy-proof room. ❤️

Nowadays there is no scientific evidence for any of these, so it's not worth discussing them anymore, although it is an interesting SF idea…

Those were the objections that Turing imagined. There is one more interesting thing I want to mention and wrap up this first part of the series.

Turing and Machine Learning

At the very end of his paper, Turing hints at many interesting ideas, namely the ones related to learning.

What if instead of us trying to program all of the knowledge that we can “get our hands on” directly into the computer, we program the child-machine instead and expose it to the education/learning process?

Today we know that that’s one of the most promising subfields of AI — machine learning.

He compares it loosely to an evolutionary process, where the initial structure of the child-machine corresponds to the hereditary material, the changes that happen during learning are like mutations, and instead of natural selection we would have the 'judgment of the experimenter', or some kind of objective loss function if we want to automate that part as well (as is done today).
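That analogy maps onto the simplest possible learning loop. A toy sketch (the function names, the loss, and the target are all made up for illustration): mutate the "child", and keep the change only if the experimenter's judgment, here a loss function, improves:

```python
import random

def evolve(loss, genome, steps=2000, seed=0):
    """Random-mutation hill climbing: Turing's 'mutations' plus the
    'judgment of the experimenter' (the loss) instead of natural selection."""
    rng = random.Random(seed)
    for _ in range(steps):
        child = [g + rng.gauss(0, 0.1) for g in genome]  # mutation
        if loss(child) < loss(genome):                   # selection
            genome = child
    return genome

target = [3.0, -1.0]                                     # made-up goal
loss = lambda g: sum((a - b) ** 2 for a, b in zip(g, target))
best = evolve(loss, [0.0, 0.0])
print(round(loss(best), 3))  # far smaller than the starting loss of 10.0
```

Swap the hand-written loss for human feedback and you recover Turing's "experimenter"; automate it and you get modern objective-driven training.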

He acknowledges that injecting some domain knowledge would be extremely helpful, as evolution is not as efficient as we'd like it to be. This is analogous to today's hybrid systems (a mix of learning and humans injecting knowledge).

Turing is also aware that some kind of reward-punishment learning would be useful as both humans and animals use it to learn. That’s what we call reinforcement learning or RL for short — the tech behind AlphaZero and many other amazing feats of engineering like this OpenAI’s robotic hand:

It was trained completely in simulation, but it learns to generalize to a new domain, the real world, and it does so with quite some success.

Note: the success here is not the fact that it knows how to solve the Rubik's cube (that's "easy"!); it's that a single robotic hand is dexterous enough to do it!

Turing also suggests, as one possible route, building in a whole system of logical inference and knowledge base into the machine, but making sure that the machine can update those (he compares that to the constitution of the USA). That’s like symbolic AI but with the super-power of learning.

Last but not least he was aware that some sort of randomness would be extremely useful as the problem’s search space, that we are interested in, is usually enormous. And boy was he right.

Looking from the perspective of 2020, and working in this field myself both professionally (currently at Microsoft, collaborating with MS Research Cambridge) and in my free time, it's amazing that he broadly predicted most of the research directions we pursue today. He was really vague in his predictions, of course, but still. Impressive.

Final thoughts

I really don’t know how to make this blog any shorter. I could but I don’t want to. Learning is a journey and every journey takes time (unfortunately for those of us that are highly impatient and goal-obsessed like me haha).

Turing had lots of brilliant thoughts but also some silly ones. Like this one:

Our problem then is to find out how to programme these machines to play the game. At my present rate of working I produce about a thousand digits of programme a day, so that about sixty workers, working steadily through fifty years might accomplish the job, if nothing went into the waste-paper basket.

He kinda believed in this linear path to engineering intelligence — you can’t blame him with the knowledge that was available at that moment of history.

The conclusion is the following: even the smartest people alive today, the ones who inspire you, don't know many things, and they probably hold lots of incorrect beliefs. Don't fall into the trap of believing that they are geniuses out of this world and that you'll never accomplish what they have.

Because that’s not true. They devoted their lives to accomplishing something, that’s the only difference. It’s “just” that.

In the next part of this series, I’ll cover some extensions and main objections to Turing’s paper and I’ll link many interesting things I found along the way.

Stay tuned and stay safe! ❤️ We’re going through a tough period with the whole pandemic situation and the impending economic crisis. Nonetheless, we’ll get through this!

With that, I’ll leave you with a question:

Do our minds really require our brain as a substrate? Is this fusion really necessary for the creation of cognition and consciousness?

Stay curious! (Photo by Greg Rakozy on Unsplash)

The second part of the blog is out!

And I’ve covered this blog in a video (if you prefer that format):

Computing Machinery and Intelligence

If there is something you would like me to write about — write it down in the comment section or send me a DM, I’d be glad to write more about maths, ML, deep learning, software, landing a job in a big tech company, preparing for ML summer camps, electronics (I actually officially studied this beast), etc., anything that could help you.

Also feel free to drop me a message or:

  1. Connect and reach me on 💡 LinkedIn and Twitter

And if you find the content I create useful, consider supporting me on Patreon!

Much love ❤️