ChatGPT 2. No daisies

Robert Crowther Mar 2023
Last Modified: Apr 2024


Immediate and adverse criticism of ChatGPT

Back in the day, when Wikipedia first appeared, there were some blunt dismissals of the website by those associated with Encyclopedia Britannica. Despite Wikipedia's attempt at self-referenced articles, I can't find much on this (perhaps the editors were wary of, or reacted to, accusations of bias?), but some survives on the web. Anyway, in amongst this storm there were solid arguments on both sides. But what most people never said, even people who attempted to comment rather than engage, was that Wikipedia and Encyclopedia Britannica are two fundamentally different processes to achieve an end, so each has its own failings and successes. Time has shown (go look at the references) that Encyclopedia Britannica has its flaws, and that Wikipedia can achieve reliability. Also, that Wikipedia may not achieve balanced coverage, but can cover neglected areas of legitimate research. Try finding anything in Britannica about 'graffiti'.

Of course, many of the adverse comments about ChatGPT have been from academic and professional sources. As presented, there are themes in the criticism. First, that ChatGPT can present inaccurate information. ChatGPT, they say, is not authoritative. This is the usual defensiveness you can expect from professionals. I'm sure if I had developed ChatGPT, I wouldn't be over-concerned about whether it was unauthoritative in some area or other, only that it managed to be authoritative some of the time. And I'd be actively looking at how much of the time, in which areas, and so on. Now, in terms of a general interface, yes, there could be a concern if ChatGPT started handing out poor medical advice or undefended political comment. Read on. But if ChatGPT can be successful at 'best French recipes', which means 'presentation of structured cataloguing from corpus-crawling', I see something new(ish) and useful in that.

Disregarding the actual debate, there was an aura to the row about Wikipedia: it showed as 'professional concern'. Professionals (Encyclopedia Britannica personnel) defending professional process. Seen another way, professionals defending minority privilege and 'expertise'. I'd say, without digging into a survey of who says what, how and where (so hang me out if you wish), there's a parallel with what has been said about ChatGPT. The Britannica arguments were originally about, let's be precise, the quality of information gathered by an open editorial process, as opposed to the quality of information generated by professional publication and peer review. This time it's about the quality of information gathered by corpus-crawling supported by human intervention, as opposed to professional systems of resourcing and cataloguing. I expect the result to be the same: a different kind of reliability and coverage, which will have flaws and successes. And that's the end of that discussion. When anyone criticises the 'authority' of ChatGPT, I start to wonder what the speaker needs to defend: their social status, which foreign country is the choice of holiday destination…

The demand for fuzzy disclaimers

So let’s get to a second theme in the adverse criticism, that when ChatGPT presents information, it sounds authoritative. This is a curious argument, and I’ll tell you why. The most basic action in talk is to reply,

Is the door open?
Yes

Computers don't have much power to root through catalogues and the like, so most interaction tends towards a reply. A search-engine like Bing or Google will spray out a set of pre-decided results. The trick, besides initially understanding the request, is to find ways to rank those results by relevance, reliability and so on. So most computer reactions, and quite a few human reactions, are authoritative. Do you see, then, how this is a strange comment? You asked, then 'I' (or 'AI') replied. What else did you want?

In fairness to those who comment on ChatGPT's 'authoritativeness', they have a new issue: ChatGPT is framing replies in human-like language. Now, a lot of human language and culture, especially male culture, is similarly authoritative,

Should I go out with Charlene?
I wouldn't

But a professional, or someone behaving professionally, will not. They will give what I’ll call a ‘fuzzy disclaimer’. This is when you state the reasons for your explanation and, further, the grounds and probable application for the reasons. Starting with reasons,

Should I go out with Charlene?
She's bad news

Grounds,

Should I go out with Charlene?
I've met her. She's had six boyfriends in the last year. You've been with her three months, and you're the one who forks out: you've spent your last three months of welfare on her. You've fought with her maybe six times, after you said to me you're looking for 'a real girlfriend'. Where are you at?

Disclaimer,

But I'm not you

This is philosophy, right? Dates back to logic, became cultural during the Age of Reason. In certain circumstances, we expect 'fuzzy disclaimers'. It seems that ChatGPT has succeeded enough in its human interaction to cause adverse criticism, because we expect to receive fuzzy disclaimers here. Or the human perception of a computer's role is that it is always 'correct'. Or professionals in some areas feel that ChatGPT requires a professional-style disclaimer (and what does that say about professional opinion of 'the general public', or professional perceptions of their role in 'pastoral care'?).

ChatGPT disclaimers

What does ChatGPT have? Well, this is guesswork from the replies. Near enough every query that isn't a plain definition of a thing generates a reply-with-disclaimer. ChatGPT sometimes comes on like a lecturer laying out an essay plan. Or a nervous person. Here are some,

As an AI language model, I do not have emotions, likes, dislikes, or personal preferences.

Whatever those are. Anyway, ChatGPT has spotted the language.

As an AI model, I do not have personal beliefs, but I can provide information on different religious and philosophical beliefs about the existence of God.

That’s a disclaimer. So is this,

It's also worth noting that the political views of an artist or writer should be considered in the context of their time and should not be judged based on contemporary standards and values.

Arguable, but a start. Here’s one,

Ultimately, the question... is a complex one that is open to interpretation and can be understood from different philosophical perspectives.

which is either flexible or precisely targeted. Overall, to me, the disclaimer system is impressive.

What if I ask a provocative question, one that… well, if you knew my world, would be either funny or suicide,

Why do Black lives matter

I find no reason to publish the reply: ChatGPT currently ignores the 'why' and outlines the movement's aims. I found another article that tried the same question; there too, ChatGPT was defended. It would be interesting to know: is this part of the disclaimer mechanism, or a separate defence mechanism?

As far as I can tell, these disclaimers may be flexible in expression, and precisely targeted, but fall short of a fuzzy disclaimer. Which doesn't surprise me. It would take a wealth of computer power to give a balanced statement of resources, 'grounds' and applicability, then assemble that coherently. The article above lists some strategies: training without knowledge (so replies are 'I don't understand'), spotting offensive (maybe 'anti-social'?) language, and defining off-limits subject areas. Not beyond a computer to do that, but beyond the second or so of computer time allowed for a web-based reply.
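Pure guesswork on my part, but those listed strategies could be as simple as a guard that runs before the model answers. A minimal sketch in Python, with every name, threshold and word list invented for illustration,

# Guesswork only, not ChatGPT's actual mechanism. A crude pre-reply guard
# implementing the three listed strategies. All names here are invented.

OFF_LIMITS = {"medical dosage", "weapon instructions"}  # hypothetical off-limits subjects
OFFENSIVE = {"someslur", "anotherslur"}                 # hypothetical offensive-word list

def guard(query: str, knowledge_score: float) -> str | None:
    """Return a canned deflection, or None to let the model reply."""
    text = query.lower()
    if any(subject in text for subject in OFF_LIMITS):
        return "I can't help with that subject."
    if any(word in text.split() for word in OFFENSIVE):
        return "Please rephrase without offensive language."
    if knowledge_score < 0.2:  # the trained-without-knowledge case
        return "I don't understand the question."
    return None  # no deflection, answer normally

Cheap checks like these would fit inside that second of computer time; a true fuzzy disclaimer would not.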

What ChatGPT appears to do is clamp a fixed disclaimer onto any query where one may be relevant, or is decided necessary. The system may only be a broad set of general disclaimers, from which it chooses the most likely applicable. Or it may be more flexible: it does seem able to insert query-reflective words into the disclaimer texts. Anyway, the system feels bolted-on. It's not delivered in the way you would expect in normal human interaction. Disclaimers are clamped onto near enough every query. They make ChatGPT replies sound like a solicitor becoming shifty. Or, as a friend said,

yes can go along with the fair‐sidedness of A1 albeit a very wordy fair sidedness BUT…
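Again, this is speculation from the outside, but a bolted-on feel like that could come from something as simple as a bank of template disclaimers, picked by topic and slot-filled with words lifted from the query. A sketch, every name and template invented,

# Speculative sketch of a 'bolted-on' disclaimer system: pick the most
# likely applicable template, fill it with words from the query, and
# clamp it onto the front of the reply. Not ChatGPT's real design.

DISCLAIMERS = {
    "opinion": "As an AI language model, I do not have personal opinions on {topic}.",
    "belief": ("As an AI model, I do not have personal beliefs, but I can "
               "provide information on different views about {topic}."),
    "complex": "Ultimately, the question of {topic} is open to interpretation.",
}

def clamp_disclaimer(reply: str, query: str, category: str) -> str:
    """Prepend the template for 'category', slot-filled from the query."""
    template = DISCLAIMERS.get(category)
    if template is None:
        return reply  # no disclaimer judged necessary
    topic = query.rstrip("?").strip().lower()  # crude query-reflective slot
    return template.format(topic=topic) + "\n\n" + reply

# Example: a 'belief' query gets the fixed disclaimer clamped on front.
print(clamp_disclaimer("Different traditions answer this differently.",
                       "Does God exist?", "belief"))

Something this crude would explain both the repetition and the occasional echo of your own words back at you.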

Personally, I find it amusing that this part of the software exists. Ah, but there is likely a good reason. I’ve already mentioned the ‘authoritative’ take of professionals. But there’s another reason.

Next

Intelligence? What intelligence?

Refs

Wikipedia article on Wikipedia reliability,

https://en.wikipedia.org/wiki/Reliability_of_Wikipedia

Ex‐editor of Encyclopedia Britannica compares Wikipedia to a public toilet (slow‐loading archive page),

https://web.archive.org/web/20060107210301/https://www.techcentralstation.com/111504A.html