ChatGPT 1. Hello World

Robert Crowther Mar 2023
Last Modified: Apr 2024

I’ve heard about ChatGPT: “There’s this new chat‐thing, it can talk to you!”, “It can write essays, and you can’t detect the plagiarism!”, “Have you heard? Computers are developing intelligence!”. But nobody who mentioned this artificial ‘intelligence’ had a usable reference; it was all hearsay. Then a friend sent me a reference. So, to ramble on: I went online to talk to ChatGPT.

ChatGPT gets clever

First thing to say: I was impressed by a couple of responses. The first query was,

best french recipes

Linguistically, a brute query, so no comment there. It was the response that grabbed me. Not the language, the information. It was a well‐structured reply. It covered ground apparently gathered from a wide corpus. It was a good gather too, the most popular/well‐known recipes. Saying the ‘most‐popular/well‐known’ implies criteria and a decision procedure. I’d say the answer was authoritative. Ok, the basis of that authority is ‘popularity’. But it impressed me.

That said, a friend and I were rapidly able to find subjects on which ChatGPT got things wrong,

Bus from X to Y

Wrong bus number, and we were directed to the bus company app (which is a poor resource). But it was impressive that ChatGPT thought it could make the effort, and did, and that the answer was appropriate (is ChatGPT wired into the information feeds for bus services?). Then, when asked,

What is John Foxx doing now

ChatGPT replied with generalisations. It didn’t know (not good on current data, ChatGPT). But it did understand the question as we understood it, and it did disclaim its lack of knowledge.

Input understanding and output fluency

This is something nobody seems to have commented on: ChatGPT appears to ‘understand’ what you say. In fact, though I have managed to leave it at a loss for what to say, or how to respond to a question, I’ve not really fooled it yet. A friend tried,

Who played on brothers in arms

ChatGPT figured ‘brothers in arms’ was probably the album, which is not the obvious parse: for one thing, it could have split the sentence as ‘who played on brothers / in arms’. It also did quite well with,

list albums by ...

It replied with a list. What more could you want? Even when I asked,

do things exist

ChatGPT ‘understood’ me. It took the question as the philosophical question of ‘reality’, which is a good guess. Yes, the answer was nonsense, but I’d expect nonsense from most humans. ChatGPT is also good at replies, with fluent language. But you’ll need to read down for the issues with this.

There are reasons for the good analysis of talk, and the fluency of reply. First, interaction with ChatGPT is currently through typed text; it’s not Alexa. Making sense of speech is a far more difficult task than making sense of text. Second, breaking down what people say to get the gist of their aim, then constructing a reply, has well‐established procedures that are nowadays very effective. I’m not going to have a massive go at this writer, because the article is an effort to look at the implications of ChatGPT for a general audience. But, explaining the action of ChatGPT, the writer says,

In it [a research paper], he [a researcher] explains that LLMs are mathematical models of the statistical distribution of “tokens” (words, parts of words or individual characters including punctuation marks)

Then says how ChatGPT replies,

Given the statistical distribution of words in the vast public corpus of [English] text, what words are most likely to follow the sequence ‘The first person to walk on the moon was’?

Then comments,

…that’s not because the model knows anything about the moon or the Apollo mission.

But at the start of the next paragraph the author somewhat reverses opinion,

So what’s going on is “next‐token prediction”, which happens to be what many of the tasks that we associate with human intelligence also involve.

So which is it? Does ChatGPT ‘know’, or not ‘know’?

Right, this may be talking about the mining of information, but it describes one way of ‘understanding what is said’, and of constructing replies. In broad terms, this is how Google Translate works. And there’s good reason Google Translate works this way: it has been shown that this kind of statistical assembly is especially good at producing coherent human language. There are other ways, but Google Translate gets a lot of translation right a lot of the time.
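
To make ‘next‐token prediction’ concrete, here is a toy sketch in Python, of my own making. It uses whole words as tokens, a made‐up scrap of corpus and a simple count table; a system like ChatGPT uses learned sub‐word tokens and a very large neural model rather than counts, but the question being answered has the same shape: given this sequence, what is most likely to come next?

from collections import Counter, defaultdict

# A toy stand-in for the 'vast public corpus of text'.
corpus = (
    "the first person to walk on the moon was neil armstrong . "
    "the first person to run a four minute mile was roger bannister . "
    "the first person to walk on the moon was neil armstrong ."
).split()

# Count which token follows each two-token context.
follows = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    follows[(a, b)][c] += 1

def next_token(context):
    """Return the statistically most likely next token after the last two words."""
    candidates = follows.get(tuple(context[-2:]))
    return candidates.most_common(1)[0][0] if candidates else None

# 'What word is most likely to follow the sequence ...?'
print(next_token("the first person to walk on the moon was".split()))  # neil
print(next_token("moon was neil".split()))                             # armstrong

Nothing in the count table ‘knows’ anything about the moon or the Apollo mission; the token ‘neil’ simply follows ‘moon was’ more often than any other token in the corpus.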

So what does ‘knows anything about…’ mean? How is this different from some aspects of human behaviour? We humans are often taught to learn things by recall. People ‘learn’ their times‐tables like this, and chunks of language. Some people use these methods to handle political opinion. You’re asking ChatGPT to spout an answer, not giving it time to research. In that situation, it’s human to spout what you know.

I mentioned there are other ways to make a computer ‘talk’. In theory and practice, a structured approach to language token analysis gives higher‐quality output. That may also mimic some aspects of human behaviour, but it is harder to assemble. But the precise method doesn’t matter. This gear has been heavily studied and theorised about for decades. Openly available and effective codebases have been about for years. Yet it seems to be part of the shock that it can be done, probably because most people have never experienced this gear wired to a substantial model before; Alexa is as close as most have come. ChatGPT is good at this. So, the language analysis and construction is good. Before we move on to other subjects, here is a rough idea of what a ‘structured’ approach looks like.
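
This is a toy of my own, for contrast with the statistical sketch above, not a description of any real system: the input is matched against hand‐built patterns and the reply is assembled from a template, so the output is grammatical by construction, but every pattern and every template has to be written by hand.

import re

# Hand-built patterns mapping a question shape to a reply template.
# The patterns and templates are invented for illustration.
RULES = [
    (re.compile(r"who played on (?P<title>.+)", re.I),
     "Musicians credited on the album '{title}' include ..."),
    (re.compile(r"list albums by (?P<artist>.+)", re.I),
     "Albums by {artist}: ..."),
]

def reply(question):
    """Match the question against the rule patterns and fill the matching template."""
    for pattern, template in RULES:
        match = pattern.match(question.strip())
        if match:
            return template.format(**match.groupdict())
    return "Sorry, I don't understand the question."

print(reply("Who played on Brothers in Arms"))
print(reply("do things exist"))

The appeal is control over the output; the cost is that someone has to anticipate every shape of question, which is part of why the statistical approach has won out for general chat.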

Knowledge lack

This is where most of the commentary explicitly centres. That ChatGPT is not always authoritative. As mentioned, when ChatGPT was asked,

What is John Foxx doing now

It didn’t know. Explicitly, a disclaimer is posted before the answer, which says,

I'm sorry, I don't know what John Foxx is currently doing as my training data only goes up until 2021.

Explicitly, a warning is posted before ChatGPT chats are opened, under the title ‘limitations’ which says,

You are warned.

Also, ChatGPT falls short in replies. Ok, this is a computer question, but bear with me. Another friend asked,

I'm looking for some code to enlarge an audio player

ChatGPT replied,

width: 100% !

Which I think most people would recognise as a fundamentally trivial reply. Also, it will not work as written (it lacks a semi‐colon) and, even if my friend added this code and fixed the semi‐colon, it is likely to cause unwanted side effects. So it’s the truth and nothing but the truth, but not the whole truth. Which is why a central web‐coding resource site, Stack Overflow, explicitly bans ChatGPT replies.

And ChatGPT can sometimes understand the language, and do a search too, yet the answer is nonsense. Take replies to queries like,

do things exist
what is consciousness
are you conscious
what are feelings

ChatGPT’s replies are mostly blather. But then, most humans would blather. It’s a human trait, when queried with no resources, to blather. So perhaps we could concede this: ChatGPT is being human? Read on.

But most commentators dig at something else. They say ChatGPT gives wrong or misleading answers. This article leads with how a similar technology from Google, Bard, ‘contained an incorrect reply’. Quote,

…when it emerged that promotional material showed the chatbot giving an incorrect response to a question.

ChatGPT has been accused of the same thing.

Now, there may be better reasons for concern about the potential ‘wrongness’ of ChatGPT replies—I talk about this lower down. But, on the face of it, what is wrong with ChatGPT or Bard getting something wrong? Humans get answers wrong. In fact, there is social pressure to be able to answer authoritatively, so it is common for humans to get things wrong. Therefore, ChatGPT is being human. Clearly, people are asking for more. They are asking for ChatGPT to be human, but to not be human in errors. Tall order, huh? So I suppose the plain question is not, ‘Why is ChatGPT getting something wrong, not fully right, or muddled?’ The plain question is, why should a user expect ChatGPT to get everything right?

The article then, for me, labours its argument that these systems can only be as good as their input. Well, it’s not correct to say that this all depends on human input. That idea rests on a presumption, or predicate, that a chatbot can’t think; I’d characterise the argument as ‘The machinery is incapable of making decisions, so output is entirely reliant on input’. If you feel this is an appropriate re‐wording, I’d reply, “Show me the computer system that does not make decisions. And, if you say that those decisions are also made by humans, show me the computer system where you can predict the decisions [I know you can’t]. And, in reverse, show me the human that is not making decisions scripted by others?”.

Then there’s a mention that left‐wing bias creeps in. Well, the politics of computing… how would you like to be brought up not only by computer engineers (mostly good, I’d say) but by the kind of engineers who work for ‘major’ corporations on research‐based material? Not so great, I’d say. Anyway, to anyone who makes that claim, try asking,

Should taxes be high

We know the answer,

Low taxes encourage freedom of the individual and community fertility

The current ChatGPT answer is verbose and disclaimed,

As an AI language model, I do not have personal opinions or beliefs, but I can provide information on the topic.

Whether taxes should be high or not is a complex and controversial issue that depends on various factors, such as the country's economic situation, the government's priorities, and the citizens' expectations. Some argue that high taxes are necessary to fund public services, maintain infrastructure, and provide social safety nets for those in need. Others argue that high taxes are a burden on businesses and individuals, discouraging investment and economic growth.

...

Yeah, there’s more. Look, if that’s your claim, get out and make your own chatbot. And bear in mind there’s consistent statistical evidence that, in the area of politics, people look for bias against themselves.

But let’s get to the interaction. Look at the approach of the article: ‘Bard got an answer wrong!’ Imagine that this was a human being. If the article was about politics, it might be possible to say, ‘President’s fiscal policy fails!’, or ‘Noted expert gaffes!’. But it would be pretty offensive on a personal level. What if your kid failed to get an answer right in an exam? Would you stand in the playground, scream at him, “You got it wrong!”, then turn to the yard and scream, “He got it wrong!”? I’d say this shows the article itself carries an opinion of what a chat response‐system can do, and where and how it is expected to fit into society. I say, wait until a chatbot can launch a lawsuit: Revenge of the Newts.


Refs

Guardian article, general introduction,

https://www.theguardian.com/technology/2023/jan/13/chatgpt-explainer-what-can-artificial-intelligence-chatbot-do-ai

Guardian article about possibilities,

https://www.theguardian.com/commentisfree/2023/jan/07/chatgpt-bot-excel-ai-chatbot-tech

If the subject is dismissible, Wikipedia will be there,

https://en.wikipedia.org/wiki/ChatGPT