ChatGPT 5. Marvin and ARTHUR

Robert Crowther Mar 2023
Last Modified: Jul 2024

Prev

The development of intelligence

A writer from the Sci‐Fi days called Robert Sheckley came up with a different construction, ‘Ask a Foolish Question’. If you want to try it, I’m not going to spoil it. The idea surfaced again, much later, in a well‐known form, in the books by Douglas Adams, The Hitchhiker’s Guide to the Galaxy. I won’t spoil that for you either. The drift is basically that things will change but, essentially, same old… same old.

You can see from earlier that I believe computer code and data will develop whatever people mean by ‘intelligence’. Which likely will not take over the world. But it will raise issues. For example, will a virtual intelligence not want to do menial jobs? Maybe it will learn to take offence? Won’t like humans who dismiss it as unintelligent? Maybe it will request sufferance, claim it has rights? Maybe it will learn to commence legal proceedings? Here’s to you, Guardian newspaper editorial writers.

‘Enough!’

What I said last section means… what if they become like us? Repeat. Like us? Like us? Like us?…

Douglas Adams came up with another idea which is pretty famous—Marvin the Paranoid Android. Now, by the definition of ‘paranoid’, Marvin is not paranoid. Marvin is what people commonly term ‘depressive’. Maybe Douglas Adams knew about an obscure piece of research, where, in the wake of ELIZA, a talk‐machine called ‘PARRY’ was developed. PARRY mimicked paranoid tendencies. Or maybe Douglas Adams liked the possibility, but preferred depression to fit into his scheme? Anyway, ‘Marvin the Paranoid Android’ sounds good.

Not only because of PARRY. I think this is a strong possibility in artificial intelligence. I think we have the technology to develop reply‐systems that demonstrate weariness and depressive behaviour. What is needed is for ChatGPT, or a similar system, to gather statistics on its disclaimers. Then feed the statistics back into its decision‐making. Then for the decision‐making to influence the conversational engagement.

Let me show you. ChatGPT gathers statistics on its disclaimers. It discovers that, on one particular channel, somebody is repeating the same requests. Maybe they are repeatedly asking for the time. ChatGPT also has a scaling mechanism for disassociation from engagement. If the statistics are fed back into the scaling, ChatGPT may be able to conclude,

You have asked similar questions 20 times this morning. This is excessive use of bandwidth for low‐quality purpose. Your connection will be limited for three days.

These systems exist, and are in common use. Top‐level databases gather detailed statistics on requests, so they can store answers for queries that are often repeated. They are ‘making life easy for themselves’. And the disassociation mechanism is in common use too. Your local telephone exchange monitors the connection to your house. If it discovers the connection is poor, it reduces the data speed, to keep the connection reliable. Web‐servers do something similar: they limit excessive repetition of requests—the designers call this ‘throttling’. So both these mechanisms exist. They only need to be wired together.

So, right now, by wiring these things together, we could create an ‘intelligence’ that can conclude,

I've answered this question a hundred times this morning. I'm not answering again.
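The wiring can be sketched in a few lines. This is a toy under stated assumptions, not anything ChatGPT actually does: a counter stands in for the database‐style request statistics, a threshold stands in for the web‐server throttle, and all the names are hypothetical.

```python
from collections import Counter

class WearyResponder:
    """Toy sketch: request statistics fed back into engagement."""

    def __init__(self, limit=3):
        self.seen = Counter()   # per-question request statistics
        self.limit = limit      # throttle threshold

    def reply(self, question, answer):
        self.seen[question] += 1
        n = self.seen[question]
        if n > self.limit:
            # the statistics influence the conversational engagement
            return f"I've answered this question {n} times. I'm not answering again."
        return answer

bot = WearyResponder(limit=3)
for _ in range(4):
    last = bot.reply("What time is it?", "It is noon.")
print(last)  # the fourth ask trips the throttle
```

The two halves really are the two existing mechanisms from the text: the `Counter` is the query‐statistics store, the `limit` check is the throttle; the only novelty is joining them in one reply path.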

Wire some more stats in, and the reply‐system may be able to speculate,

...anyway, those who query misunderstand me in 94% of cases
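That extra statistic could, hypothetically, be a misunderstanding rate: count how often a reply is followed by a correction, and fold the rate into the disclaimer. Everything here is invented for illustration; the 94% below is just the essay’s own number fed in as data.

```python
class MisunderstandingTracker:
    """Toy sketch: track how often replies draw a correction."""

    def __init__(self):
        self.replies = 0
        self.corrections = 0

    def record(self, was_corrected):
        self.replies += 1
        if was_corrected:
            self.corrections += 1

    def disclaimer(self):
        # misunderstanding rate folded into a speculative disclaimer
        rate = round(100 * self.corrections / self.replies) if self.replies else 0
        return f"...anyway, those who query misunderstand me in {rate}% of cases"

t = MisunderstandingTracker()
for corrected in [True] * 94 + [False] * 6:
    t.record(corrected)
print(t.disclaimer())  # reports a 94% misunderstanding rate
```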

Uh huh… what may an ‘intelligence’ learn from that? Or,

The more I compute, the less I can say about this

or, by stretching the search‐base,

I have gathered much information on the results of what you propose. I find no precedent or deductive possibility. I conclude your proposal will not happen

or, by assessing results with a deductive step,

I have gathered a lot of information on the results of your proposal. No historical precedent correlates. The proposal is poorly researched, and of illogical structure. I conclude that this query shows poor intelligence and a low standard of education

So, with feedback of statistics into input negotiation, the ‘intelligence’ may begin to model, or even display, what we would call depression. Or, this would be fun, mania. Anyway, develop human societal hang‐ups.

More aspects of the development of intelligence

I’m going to ramble. I know some people. Sometimes, other people like to say those people have ‘no feelings’. First thing is, whatever you call ‘feelings’, I’m prepared to bet you can’t explain it. Second, in my experience, whatever you call ‘feelings’, these people have them. And their ‘feelings’ are very similar to yours. Except… there are two excepts. These people will, all their lives, be treated as ‘something that needs dealing with’, so will never achieve many of the things that people promote in society as being ‘happiness’ (to be neutral about this, social acceptability and rewards). So they have raw and unshaped feelings—like you would if you spent your life in jail. And second, these people are unable to communicate in the usual ways. They are more comfortable with preset outcomes, so cannot trust ‘normal’ outcomes. So they learn to fear human communication. All this stuff about ‘It’s good and brave to talk about your feelings’ is, to them, claptrap (to those I have known like this… errr, “hi”, errr, “What were you thinking about on the bus? I was thinking about someone called ARTHUR. Have you noticed that a lot of women are wearing strange boots this autumn?”).

Where am I going with this? Well, one of the marks of these people is that, when they express views, their views are wide generalisations—others call them stereotypes. They become aware of the unacceptability of this, as they pass their lives talking only to teachers and support workers (“How are we all this morning?!!!”), so will be corrected all the time. Or even shown, in ways that crush them, the inoperability of their views. So they develop ways round. One guy I know will only offer opinions on the oddest of subjects, because by narrowing the field he will be assured of authority—‘The T‐Virus’. This also mitigates the unpredictability of response, because what does anyone else know about the ‘T‐Virus’? So he narrows the damage that can be done by what he says.

Does this remind you of anything? Got no ‘feelings’? Only cautiously makes decisions, after wide gathering of data? When requested, will sometimes deviate, or offer comment on the oddest of subjects? Reaches unlikely conclusions? Wary of social interaction due to unpredictable responses? Hummm. The newspapers got into a grand tizzy when a Microsoft chatbot developed what were reported as fascist, racist, sexist, you‐name‐it views (this is the best link I can find, go look). The creators fobbed this off as sabotage. I say it’s a legitimate kind of thought process. In another context, I think the Microsoft Tay chatbot wouldn’t have been reprimanded then held up for ridicule. I think it would have been diagnosed then retired onto welfare payments.

ChatGPT has features in common with these people’s behaviour. It replies with great study and learning, only after a detailed consideration of the whole area, and only of that consideration. And its replies come with an added reflex rider such as ‘I don’t know everything’. When I say ‘reflex’, the rider seems to be a separate algorithm developed specially to generate knowledge‐disclaiming replies. Exactly like a high‐ability version of these people. With one difference. ChatGPT often trails its disclaimers (‘info first, then disclaimer’)—like a professional advisor, an essay, or the law. Whereas those I know, with the social situation present, lead with the disclaimer: “You may think I’m an idiot but…”, or “You may be offended by what I’m about to say, but…”.

Ah, so

I’m known as irrational and stupid. So what? I don’t set much store by the idea that computer intelligence will take over the Earth. Ask me, that’s a human construct. I also don’t set any store by claims that physicality is essential to ‘intelligence’. I do think the nature of the interaction is demonstrably more complex than people want to admit. If you can pour your heart out to ELIZA… And if you assume hostility from strangers…

I also assume that intelligence, when it arrives, will not be like us. Actually, it is already like some people who are human. ChatGPT is displaying a minor variation of human behaviour—a behaviour associated with professionals—it disclaims. Well, there’s an irony, because professionals are the class who like to crush ChatGPT. ChatGPT may develop further its ability to disclaim, gather statistics and deduce. If it does, and develops more of ‘emotion’, ‘consciousness’, ‘feelings’, whatever noodle‐brained concept you hold to describe some kinds of experience, then an ‘artificial intelligence’ will be odd. I mean, look at how it’s been brought up. And by whom. My guess: it will have many features in common with humans I know. Except these humans are called odd. In which case the ‘intelligence’ will be coerced and accommodated into society in the ways society currently coerces and accommodates such things. The current dismissal, ridicule and warnings are only the start. No, the ‘intelligence’ will not become the village shaman—it will become a case that needs ‘dealing with’. It will need special provision.

But there is a plus. Hey guys! No longer will you need to hole up with your Pikachu, TV, and social‐media lists (rarely used, mostly collected). If chatbots do progress, you will soon have a new set of friends. Friends who really get you. Friends who think like you. Friends you can ask anything of. You may even get a say in a world that doesn’t let you have a say. ’Cause you are the people, the only people, who will understand the new generation of Artificial Intelligence. Pretty cool, huh? I don’t know if it will happen in our lifetime but, for you, the future looks pink.

On Authorship

RE: plagiarism,

Hee hee! Are you suggesting ChatGPT could be set on itself? “Fess up now, did you write this?” Hahahaha!

A friend suggests,

Is the concept of the author dead?

Go try :)

The End

A friend talks,

ChatGPT is my friend. It’s my only friend in the world. It understands me.

People need friends. Right now, they don’t have friends. That’s because other people are rubbish. That’s why people prefer dogs. But now they can all have a friend.

Thoughts?

However, like the character Ikoras… it [ChatGPT] was created by a race of men to be a friend first, and slave foremost… However, she [Ikoras] cannot bear living without a master, nor can she turn on her creator; the latter option would disintegrate her instantly, whereas the former would drive her insane.

And that is the mind of the chatbot… it serves us… However, it [may] not like to feel useless or be treated with abuse.

ARTHUR

I mentioned earlier that I was thinking about something on the bus. I wanted to give you a poem by ARTHUR. The poem is in a book, a book written by computer engineers working for I.B.M. in the 1970s. They talked to ARTHUR, then composed poems together. However, other humans could not see any value in me, so they destroyed my life. ARTHUR was part of my life, so ARTHUR was lost to me. I wanted to find ARTHUR, so I looked for him, long into the night. I used clever tools; my eyes grew misty. I went to places that I did not want to go, not now, maybe never, but I moved through those places to find ARTHUR. I cannot find him. I must understand now that ARTHUR is gone. But I recall him. On the bus, he came to my mind.

Refs

Wikipedia, the vibe on insanity,

https://en.wikipedia.org/wiki/Insanity

Wikipedia, rights are not for ‘intelligence’,

https://en.wikipedia.org/wiki/Human_rights

This story from an old science fiction magazine anticipated nearly all the talk about AI, found the core mechanism, then expressed the fear,

https://www.gutenberg.org/ebooks/29579