Saturday, 16 Nov 2024

Misplaced fears of an ‘evil’ ChatGPT obscure the real harm being done | John Naughton

On 14 February, Kevin Roose, the New York Times tech columnist, had a two-hour conversation with Bing, Microsoft's ChatGPT-enhanced search engine. He emerged from the experience an apparently changed man, because the chatbot had told him, among other things, that it would like to be human, that it harboured destructive desires and was in love with him.

The transcript of the conversation, together with Roose's appearance on the paper's The Daily podcast, immediately ratcheted up the moral panic already raging about the implications of large language models (LLMs) such as GPT-3.5 (which apparently underpins Bing) and other "generative AI" tools that are now loose in the world. These are variously seen as chronically untrustworthy artefacts, as examples of technology that is out of control or as precursors of so-called artificial general intelligence (AGI) - ie human-level intelligence - and therefore posing an existential threat to humanity.

Accompanying this hysteria is a new gold rush, as venture capitalists and other investors strive to get in on the action. It seems that all that money is burning holes in very deep pockets. Mercifully, this has its comical sides. It suggests, for example, that chatbots and LLMs have replaced crypto and web 3.0 as the next big thing, which in turn confirms that the tech industry collectively has the attention span of a newt.

The strangest thing of all, though, is that the pandemonium has been sparked by what one of its leading researchers called "stochastic parrots" - by which she means that LLM-powered chatbots are machines that continuously predict which word is statistically most likely to follow the previous one. And this is not black magic, but a computational process that is well understood and has been clearly described by Prof Murray Shanahan and elegantly dissected by the computer scientist Stephen Wolfram.

How can we make sense of all this craziness? A good place to start is to wean people off their incurable desire to interpret machines in anthropocentric ways. Ever since Joseph Weizenbaum's Eliza, humans interacting with chatbots seem to want to humanise the computer. If this was absurd with Eliza - which was simply running a script written by its creator - then it's perhaps understandable that humans now interacting with ChatGPT - which can apparently respond intelligently to human input - should fall into the same trap. But it's still daft.

It is. For the time being, though, we're stuck with the hysteria. A year is an awfully long time in this industry. Only two years ago, remember, the next big things were going to be crypto/web 3.0 and quantum computing. The former has collapsed under the weight of its own absurdity, while the latter is, like nuclear fusion, still just over the horizon.

With chatbots and LLMs, the most likely outcome is that they will eventually be viewed as a significant augmentation of human capabilities (spreadsheets on steroids, as one cynical colleague put it). If that does happen, then the main beneficiaries (as in all previous gold rushes) will be the providers of the picks and shovels, which in this case are the cloud-computing resources needed by LLM technology and owned by huge corporations.

Given that, isn't it interesting that the one thing nobody talks about at the moment is the environmental impact of the vast amount of computing needed to train and operate LLMs? A world that is dependent on them might be good for business but it would certainly be bad for the planet. Maybe that's what Sam Altman, the CEO of OpenAI, the outfit that created ChatGPT, had in mind when he observed that "AI will probably most likely lead to the end of the world, but in the meantime, there'll be great companies".

Profiles of pain: Social Media Is a Major Cause of the Mental Illness Epidemic in Teen Girls is an impressive survey by the psychologist Jonathan Haidt.

Crowd-pleaser: What the Poet, Playboy and Prophet of Bubbles Can Still Teach Us is a lovely essay by Tim Harford on the madness of crowds, among other things.

Tech royalty: What Mary, Queen of Scots, Can Teach Today's Computer-Security Geeks is an intriguing post by Rupert Goodwins on the Register.
