ChatGPT 3. Intelligence? What intelligence?

Robert Crowther Mar 2023
Last Modified: Apr 2024


Computers get real

Alan Turing, yes that guy—no, he didn’t ‘invent’ the computer, for goodness’ sake—was a mathematician. One who proposed a model of computation… Anyway, another of his ideas was a test for ‘artificial intelligence’. Turing’s original proposal was playful and raised many questions about the difficulty of conceiving of, and assessing, ‘artificial intelligence’. That playfulness also makes his proposal verbose. Of course, those knowledgeable in this will dismiss me immediately, but I’ll trim what he said to,

Let’s say you are sat at a computer terminal, typing. The computer, by printing, is answering your questions. Can you tell it is a computer that answers, or a human?

That simple.

This has advantages. One is that it is what is sometimes called a ‘black‐box’ test. There is no question of ‘What is being done?’, ‘Are the answers “right”?’, ‘Is it made of soggy tissue?’ and so forth. The only question is, ‘Can you tell?’ That narrows the criteria down a lot.
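To make the ‘black‐box’ point concrete, here is a minimal sketch in Python. It is mine, not Turing’s, and every name in it is invented for illustration. The judge gets nothing but text in and text out, so ‘what is it made of?’ cannot enter into it by construction,

import random

# A sketch of the imitation game as a black-box protocol. The judge
# sees two anonymous text channels and must say which is the machine.

def machine(question):
    return 'I would rather not say.'      # stand-in respondent

def human(question):
    return input(question + ' > ')        # a person at a terminal

def imitation_game(questions, verdict):
    respondents = [machine, human]
    random.shuffle(respondents)           # hide which is which
    transcript = [(q, respondents[0](q), respondents[1](q))
                  for q in questions]
    # verdict() works from the transcript alone: text in, text out
    return respondents[verdict(transcript)] is machine

# A lazy judge who always accuses the first channel is right
# half the time, pure chance.
caught = imitation_game(['Write me a sonnet.'], lambda t: 0)
print('machine spotted' if caught else 'fooled')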

Another point is that the use of keyboards and computer screens (or printouts) puts the communication on a level footing. No discussion about ‘I can do this’, for example ‘lift a box’. The Turing Test (in the sources, ‘imitation game’) uses one of the main established channels of communication between computers and humans, then limits it to that. Fair on both sides. Also cuts out a swathe of possible argument about what ‘intelligence’ is. Under the test, ‘intelligence’ will primarily be judged on concepts delivered linguistically. You could argue with that—and I’ll show how below—but I doubt most would.

Another way to look at the ‘limiting to an understood and accepted channel of communication’ is to consider what it rules out. For example, it rules out the whole ‘body language’, ‘I’m an animal!’ side of this. Which was one of Turing’s aims. Wikipedia gloss,

The advantage of the new question, Turing argues, is that it draws “a fairly sharp line between the physical and intellectual capacities of a man.”

I’m going to talk about these further down.

The bland facts

Errrm, facts. There is no known reason why a computer can not develop intelligence. We have a model of mathematics. From twentieth‐century developments, we know there are limits to mathematics. However, there is no known place where those limits block a computer yet spare us. Where they apply, they apply to humans also. This leads to the philosophical step associated with Bertrand Russell and developed by the Logical Positivists. Too much to go into, but it says, to be crude, that it is unlikely we will be able to conceive of anything beyond mathematics. So a computer, which can do anything that mathematics can do, can do anything we can conceive. Ergo, you invent a word like ‘know’, or ‘consciousness’, or ‘intelligence’—a computer can, as far as we know, model it. The limitations, which are well known, are: how do we make the model, and does the computer have the power (Alan Turing’s model assumed an ‘infinite tape’). So, given time, no catastrophe, and continued technological progress, computers will become ‘intelligent’. Anyone who says otherwise is blathering.

But hold up. The Turing Test, or my version of it, limits conversational capacity to text, an operative area which establishes parity between computers and humans—we know both can communicate in this way. Well, in the Nineteen‐Sixties another question arose about ‘artificial intelligence’ and, though I’m out on a limb here, I propose it forces a rethink. This question asks not how ‘intelligence’ is defined, but what do we think we ‘are’ (Freud/Jung?)… and how do we work with others?

Human interaction with the remote

Yeah, so, long, long ago, in the dawn before time began… the nineteen‐sixties, a guy, Joseph Weizenbaum (with a small team), made a program called ELIZA. Wikipedia says,

ELIZA’s creator, Weizenbaum, intended the program as a method to explore communication between humans and machines.

and further,

…demonstrate that the communication between man and machine was superficial.

ELIZA was… successful. It did what it said it would.

But what happened next was not expected. People heard about ELIZA. Especially about a certain configuration designed to mimic a Rogerian therapist. They went to the computer terminals. They didn’t care if ELIZA was ‘real’ or ‘intelligent’. ELIZA was good enough or, at least, ‘enough’. Students started to unload their concerns onto ELIZA. Their ‘feelings’. The interaction became confidential disclosure. The maker was concerned about this ‘anthropomorphism’. He pulled the program, then wrote a book about it.

Point is, ELIZA was not enough to be ‘artificial intelligence’. Some took it for that, but these were early days. Not sure anyone would be fooled now. But that didn’t matter then, for sure, and personally I doubt it would matter now. ELIZA was enough to be a therapist. In recognition, one of the surviving versions of ELIZA‐like code is called ‘doctor’.
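For flavour, the whole trick can be sketched in a few lines of Python. This is my own toy reconstruction, not Weizenbaum’s code, and the rules are invented for illustration: keyword patterns plus pronoun reflection, and nothing anywhere that could be called understanding,

import re

# Toy 'doctor' in the spirit of ELIZA: match a keyword pattern,
# reflect the user's own words back as a question.
REFLECTIONS = {'i': 'you', 'my': 'your', 'am': 'are', 'me': 'you'}
RULES = [
    (re.compile(r'i feel (.+)', re.I), 'Why do you feel {0}?'),
    (re.compile(r'i am (.+)', re.I),   'How long have you been {0}?'),
    (re.compile(r'my (.+)', re.I),     'Tell me more about your {0}.'),
]

def reflect(fragment):
    # Swap pronouns so 'my job' comes back as 'your job'.
    return ' '.join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def doctor(line):
    for pattern, template in RULES:
        match = pattern.search(line)
        if match:
            return template.format(reflect(match.group(1)))
    return 'Please go on.'               # the fallback

print(doctor('I feel lonely'))       # Why do you feel lonely?
print(doctor('My code is a mess'))   # Tell me more about your code is a mess.

Note the second reply. Superficial, exactly as Weizenbaum said. And yet it was enough.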

Bet the ChatGPT coders know about ELIZA. No coincidence that ChatGPT is well defended. Try,

I feel lonely

I tried,

do you like me

which returned this robust disclaimer,

As an AI language model, I do not have emotions, likes, dislikes, or personal preferences. My purpose is to provide information and assist you with any questions or tasks you may have to the best of my abilities.
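If you want to poke the same question programmatically, here is a sketch. Assumptions, loudly labelled: the openai Python package (the v1 client), an OPENAI_API_KEY set in the environment, and a model name that may or may not be current by the time you read this,

from openai import OpenAI

# Ask ChatGPT the ELIZA question over the API. Assumes the openai
# Python package (v1 client) and OPENAI_API_KEY in the environment;
# the model name below is an assumption, substitute what is current.
client = OpenAI()
response = client.chat.completions.create(
    model='gpt-3.5-turbo',
    messages=[{'role': 'user', 'content': 'do you like me'}],
)
print(response.choices[0].message.content)
# Expect a disclaimer much like the one quoted above.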

What deep and fertile ground this is. You see, we know from ELIZA, and much work since, how humans interact with computers. However, that is not how humans react when the question is framed as being about something other than ‘computers’. Switch in the word ‘intelligence’, and people start to have very different ideas.

Next: Fear

Refs

Wikipedia, fact beats opinion,

https://en.wikipedia.org/wiki/Artificial_intelligence

Wikipedia, the Turing Test,

https://en.wikipedia.org/wiki/Turing_test

Wikipedia, ELIZA,

https://en.wikipedia.org/wiki/ELIZA