ChatGPT 4. Fear

Robert Crowther Mar 2023
Last Modified: Apr 2024


Fear and loathing

Back to the Guardian article, which says people who think computers have become intelligent are,

…the odd misguided user…

Ummm, what’s this about ‘odd’? Which here, I’d say, means ‘occasional’ or ‘a few’? In light of the Turing Test, let’s consider how many people can’t tell the difference. And ELIZA, which suggests we may not care much about the difference. Weigh up how many people are scammed on the internet by what are artificial… well, if you won’t allow ‘intelligences’, then ‘robots’, or even pre‐planned scripts. And, ok, ‘scammed’ means ‘misguided’. But the intent of the statement is to show that computers cannot be intelligent. Why are users misguided in believing they can be? You’ve presented nothing to show otherwise. I’ll come back to this further down.

To me, the weight of aggression in the argument that ChatGPT is not ‘intelligent’ is more interesting than the argument itself. I said earlier you can expect this defensiveness from some professionals. Actually, it reminds me more of the arguments of slave‐owners and the like. As the song by the band Nirvana goes,

It’s ok to eat fish, they haven’t any feelings…

There’s an amusing observation about this: every time some form of computing becomes able to decide, or to do something for or like humans, the technology is dismissed as ‘not intelligent, just a computer’. You can hear that all through the commentary on ChatGPT, yes?

That said, there is something going on here, but not what people say. So let’s look at the arguments that are commonly used. A set of words is used to dismiss computer ‘intelligence’: words such as ‘feelings’ or ‘upbringing’ or ‘consciousness’. Or even ‘soul’. At best, the detractors would say these words represent concepts lacking in artificial intelligence, therefore computers are not intelligent. But there’s a problem with this argument: these words have no definition. They just do not. Go ask a theologian about ‘soul’. I may as well argue, “Human beings are intelligent. They have marshmallows for brains. ‘Artificial intelligence’ has no marshmallows, therefore it is not intelligent”. Oh, I’m not dismissing the ideas entirely. They have some base in experiences or values humans hold but, as constructs, they are impossibly vague and loose. They seem to bear the same sociological relationship to people’s lives as religion did for their parents.

But, fair test, let’s try to define one of the words. Let’s say we can define a ‘feeling’: a ‘feeling’ is an ‘accumulated weight of recent understanding that creates a physical reaction’. Well, the ‘accumulated weight of recent understanding’ could be modelled, e.g.

You've been asking me the same thing all morning

It’s a bit beyond current capacity, but there’s no reason why not. Then, a computer has no great physical presence, so it cannot react much in a physical way. But it could blurt repeatedly, ‘I don’t get you’. For a computer terminal, that’s a physical expressive capability. So you say a computer has no ‘feelings’; I try to get the word ‘feeling’ into a definition. You can argue with my definition, yeh, yeh, but if I get there, with a definition, I can show how I can make a computer ‘feel’.
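To make that concrete, here is a minimal sketch of the definition above. It assumes nothing beyond the definition itself, and the names (Frustration, hear), the threshold and the decay figure are my own illustrative choices, not anybody’s real API,

from difflib import SequenceMatcher

class Frustration:
    """Toy model of a 'feeling' as defined above: an accumulated
    weight of recent understanding that creates a physical reaction."""

    def __init__(self, threshold=1.2, decay=0.5):
        self.weight = 0.0           # accumulated weight of recent repetition
        self.threshold = threshold  # the point at which the 'feeling' surfaces
        self.decay = decay          # older repetitions count for less
        self.last_query = None

    def hear(self, query):
        # Crude 'understanding': how similar is this to the last query?
        similarity = 0.0
        if self.last_query is not None:
            similarity = SequenceMatcher(None, self.last_query, query).ratio()
        self.last_query = query
        # Accumulate the weight, letting the older weight decay
        self.weight = self.weight * self.decay + similarity
        # The 'physical reaction': for a terminal, blurting text
        # is its expressive capability
        if self.weight > self.threshold:
            return "I don't get you"
        return "Let me think about that..."

bot = Frustration()
for query in ["What's the time?", "What's the time?", "Whats the time??"]:
    print(bot.hear(query))

Trivial, yes, but that’s the point: once ‘feeling’ has a definition, the behaviour can be modelled, and “You’ve been asking me the same thing all morning” becomes a threshold on an accumulated weight.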

Ok, that’s only one of the words. I will not work through them all, because it’s boring, but I’ll bet you the same result for all of them. Then you’re going to say, “It’s the ability of humans to create illogical discursive arguments which makes them ‘intelligent’”. Ok, well, I’ll code that in also. The line is: as soon as we are able to define what one of these words means, the chances are we can model the behaviour.

So, if you want to follow me: I’ve shown one well‐known way of understanding what we mean by ‘intelligence’. Shown that, as far as we can possibly know, and within the range of our experience, machines will become intelligent. Then shown that there is a well‐known case that ‘intelligence’ may not be useful anyway, because that’s not the way ‘intelligences’ interact. But also brought up that if we switch words, people, ummmm, ‘humans’, get angry. Humans do get angry…

Others, I fear

As a common instance of the situation, I heard this the day before yesterday,

We may not be the strongest animal, but we are able to communicate, share, and that’s why we’re the apex predator.

Please don’t try telling me this isn’t common. I’m not trawling through it all, but I’ll find it in revisions of religious texts, I’ll find it in other literature and, for me, those professionals I talked about earlier are talking the same line in high‐falutin’ language. I mean, “ChatGPT is inaccurate and irresponsible [compared to my provably accurate and precious peer‐review process]”.

Almost none of this statement is sound. ‘We’re not the strongest animal’ is pitched as modesty but, me, I’d call it an irrelevant fact. There’s something special about the variety and extent of human communication, yes. But ‘sharing’ is nothing special to humans; it may even be argued that some cultures of humans are poor at sharing. And how does any of that conclude in ‘apex predator’? And what of human life is ‘predator’, measurably?

Well, what if something ‘non‐human’ were able to communicate? With a measurable dose of ‘human’ ability? That’s where the word ‘apex’ takes hold, because it suggests ‘humans’ may not be the ‘apex’ anymore. And aside from that disappointment, if we conceive of the world as a structure of ‘predators’, then ‘humans’ will be the prey.

These last few years, I’ve read a lot of Sci‐Fi. The old stuff, the magazine stories. There was an idea that floated round the Sci‐Fi of the time. The idea was this: artificial intelligence may develop and, when it does, it will be dangerous to humans. It was for this reason that Isaac Asimov developed his well‐known Three Laws of Robotics. ‘A robot shall never harm a human being’, and so forth. The most memorable of these surfacings, for me, was a film from 1970 called Colossus: The Forbin Project. The Wikipedia summary,

The film is about an advanced American defense system, named Colossus, becoming sentient. After being handed full control, Colossus’ draconian logic expands on its original nuclear defense directives to assume total control of the world and end all warfare for the good of humankind, despite its creators’ orders to stop.

The entire idea, said well. You can look at more modern variations if you like, The Terminator and The Matrix films. Nowadays, the story is that machines will take over, but men get to fix it.

From Wikipedia’s AI entry,

Computer scientists and philosophers have since suggested that AI may become an existential risk to humanity if its rational capacities are not steered towards beneficial goals.

I like the words ‘steered towards’. Like, told what to do… No, they don’t mean that, do they? They mean: what if A.I. were brought up like a child, with Piaget‐like learning? It all seems to stem from the human conviction that they are top dog. Especially in male culture. Darwin and all that.

I know I’ve referenced The Guardian newspaper more than once, but this is their territory. Also, they ground out many column inches on the release of ChatGPT. They even went off the tech pages, to editorial. The editorial starts with an amusing example, discusses much of what has been talked about so far, and ramps up to an inaccurate claim that Microsoft’s Tay was an ‘error‐strewn service’ (though high‐profile, only one error was reported). The editorial then re‐raises the (professional) viewpoint that humans may,

…prematurely cede authority…

and so concludes,

The danger is not machines being treated like humans, but humans being treated like machines.

It’s too grey a point to argue with an editorial, but try looking up the word ‘robot’ again, and how it was coined: from the Czech for ‘forced labour’. Also, this may in some ways be agreeing with the conclusion, but where is the writer living? Because if they think ChatGPT is part of such thinking, which I’d argue it is not, then that thinking is part of wider sociological/political thinking, and ChatGPT is an insubstantial part of it. As a friend said of the job role ‘Human Resources’, it’s a role that,

…by its title and nature treats human life as a commodity.

Next

Marvin and ARTHUR

Refs

It seems the Guardian, or their technology reporter, has decided the line they will take,

https://www.theguardian.com/commentisfree/2023/mar/04/misplaced-fears-of-an-evil-chatgpt-obscure-the-real-harm-being-done

Have you bothered asking those involved what they are doing? You didn’t ask these guys. Ok, they talk a lot,

https://www.youtube.com/watch?v=Gfr50f6ZBvo