Features | 31 Jul 2023

ChatGPT: AI is both smarter and dumber than you think it is

Artificial Intelligences are here! While they're not like the killer robots in sci-fi movies, they're already having a huge impact on our society.

Artificial Intelligence (AI) has been dominating the headlines, with ChatGPT in particular hogging the limelight. Unlike the AIs in sci-fi dystopias like The Terminator or The Matrix, ChatGPT might seem anticlimactic or even boring. It’s not a malevolent murder machine, but a publicly-available chatbot that – when prompted by a human – can spit out sentences or paragraphs that read uncannily as if they were written by an actual person.

Here’s everything you need to know about the latest generation of AIs.

Can you give an example of ChatGPT in action?

Sure, here’s one example about refusing a wedding invitation and another about planning a holiday.

That’s impressive! So how does ChatGPT actually work?!

ChatGPT is what’s called a generative AI, as it can create seemingly new works. It was built by a private company called OpenAI and is an example of a large language model (LLM): a complex machine learning (ML) program, running on incredibly powerful computers, into which vast quantities of text – in this case from the internet – have been fed.

The ML program is capable of recognising that certain patterns and relationships between different words will occur in text written about one subject but not another.

With this sophisticated pattern recognition in place, ChatGPT can then – when prompted – recombine what it has learned from its training into new sentences and paragraphs. This process of generating text by transforming information it was previously trained on is what the ‘GPT’ in ChatGPT stands for – Generative Pre-trained Transformer.
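To give a rough flavour of the pattern recognition involved, here’s a deliberately tiny sketch in Python. The three-sentence ‘corpus’ is invented for illustration, and real LLMs learn vastly richer patterns than this simple word-pair tally – but the underlying idea of counting which words tend to follow which is similar:

```python
from collections import Counter, defaultdict

# A tiny stand-in for the vast training text a real LLM uses.
corpus = (
    "the bride sent a wedding invitation . "
    "the groom sent a wedding invitation . "
    "we booked a holiday flight ."
).split()

# Tally how often each word follows each other word (a 'bigram' count --
# a drastically simplified version of what LLM training does).
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

# In this corpus, 'wedding' has only ever been followed by 'invitation'.
print(follows["wedding"].most_common(1))  # [('invitation', 2)]
```

Having tallied those patterns, the program ‘knows’ – in a purely statistical sense – which words belong together in text about weddings.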


So is ChatGPT as intelligent as an animal or a person?

ChatGPT is not truly ‘intelligent’ in that it doesn’t actually understand the meanings behind weddings or holidays. An AI that’s as intellectually capable as an animal or human would be known as an artificial general intelligence. No such AI is known to exist – not yet, anyway.

Generative AIs trained on LLMs, like ChatGPT, are making probability calculations – the more often words like ‘invitation’ and ‘bride’ are used when talking about ‘weddings’ in its training texts, the more likely it is to use those words when prompted by a human about weddings or indeed related subjects such as divorces.

Generative AI’s sophisticated weighting of probability based on analysing huge amounts of data is why it’s sometimes described as ‘applied statistics’ rather than ‘artificial intelligence’.
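The ‘applied statistics’ view can be sketched in a few lines of Python. The word counts below are invented purely for illustration; the point is simply that turning counts into probabilities, then favouring the most probable word, is statistics rather than understanding:

```python
from collections import Counter

# Invented counts of words appearing near 'wedding' in some training text.
counts = Counter({"invitation": 60, "bride": 30, "holiday": 10})
total = sum(counts.values())

# Convert raw counts into probabilities -- the 'applied statistics' at work.
probabilities = {word: n / total for word, n in counts.items()}
print(probabilities["invitation"])  # 0.6

# When prompted about weddings, the most probable word wins most often.
most_likely = max(probabilities, key=probabilities.get)
print(most_likely)  # invitation
```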

So what is ChatGPT actually being used for?

As OpenAI allows ChatGPT to be integrated into other software, it’s being used in all sorts of places besides its own publicly accessible website.

Microsoft, an investor in and partner of OpenAI, is using ChatGPT to answer queries submitted to its search engine Bing, as long as you’re using its Edge web browser. ChatGPT has also been integrated into some programming tools used by software developers, so they can ask it for suggestions on which bits of code to use to accomplish any given task.
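To show roughly what ‘integrating ChatGPT into other software’ means in practice, here’s the shape of a request a developer’s app might send to OpenAI’s chat completions web API. The model name and message format follow OpenAI’s public documentation at the time of writing; the wedding-invitation prompt is our own invention, and no request is actually sent here:

```python
import json

# The JSON payload a third-party app would POST to OpenAI's chat
# completions endpoint (https://api.openai.com/v1/chat/completions),
# along with the developer's secret API key in an Authorization header.
payload = {
    "model": "gpt-3.5-turbo",  # one of OpenAI's publicly documented models
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Politely decline a wedding invitation."},
    ],
}

print(json.dumps(payload, indent=2))
```

The API replies with generated text, which the app can then display, insert into a document, or use however it likes.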

CNet, a longstanding technology news website, has used ChatGPT to write short, simple news stories.


Is ChatGPT trustworthy?

Not yet. Take everything generated by ChatGPT with a pinch of salt. In the words of OpenAI CEO Sam Altman, as reported by the Australian Financial Review: “I verify what it says… This is a generative technology, it is a creative helper, so please don’t rely on it for factual accuracy.”

As ChatGPT’s results are effectively a highly sophisticated recombination and regurgitation of pre-existing works, not all of which will have been accurate in the first place, there’s always a chance that what it suggests is incorrect.

For example, CNet, which used ChatGPT to write articles, allowed significant factual errors into a published article. In another notable example, a lawyer in the US used the bot to help write a court submission which he didn’t double-check, discovering to his significant embarrassment that it cited fictional cases in his brief rather than real ones.

Is ChatGPT the only generative AI out there?

ChatGPT is perhaps the most well-known generative AI, but it’s far from the only one. There are others that can also produce text, such as Google Bard. Other generative AIs can produce other types of content. For example, AIs such as Midjourney, Stable Diffusion and OpenAI’s own DALL-E can generate images based on your text prompts.


AI can create art?!

Putting aside philosophical definitions of what is and isn’t art, yes – AIs can indeed create whole new images. Like a text-producing generative AI, these AIs are trained on a huge trove of data – in this case hundreds of millions of images from the internet.

But rather than looking for patterns in and relationships between words, phrases and sentences, the ML programs behind art AIs repeatedly look for as many ‘variables’ as possible in each image fed to them.

For example, in photos of pandas it will identify not only the things present in all photos of pandas – four legs, a round body, black and white colouration – but also the things that might identify the animal not as a panda but as something else entirely, whether a cat or a wombat: the shape of the head, the length of the snout, the distance between the eyes, and so on.

The more images such ML programs are trained on, the better they get at distinguishing between pandas, cats and wombats. The number of ‘variables’ in any given image can reach into the hundreds. To a human, each of these ‘variables’ looks like a complex mathematical equation. This collection of variables is known as a ‘latent space’.
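A drastically simplified sketch of the idea: each image is boiled down to a short list of numbers (its ‘variables’), and similar images end up close together in that numeric space. The animals, the two hand-picked ‘variables’ and their values below are all invented for illustration – real latent spaces have hundreds of dimensions, not two:

```python
import math

# Each animal photo reduced to two invented 'variables':
# (snout length in cm, distance between eyes in cm).
latent_points = {
    "panda": (6.0, 9.0),
    "cat": (3.0, 4.0),
    "wombat": (7.0, 5.0),
}

# A new, unlabelled photo with its 'variables' measured.
mystery = (6.5, 8.5)

# The closest known animal in the 'latent space' is the best guess.
best_guess = min(
    latent_points,
    key=lambda name: math.dist(latent_points[name], mystery),
)
print(best_guess)  # panda
```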

The fact that the essence of photos, paintings, drawings and so on can be distilled into a series of mathematical equations will probably be either awe-inspiringly wondrous or deeply unsettling, depending on your point of view.

But how does the art AI then create a whole new image from all that training data?

This is the mind-bending bit. To create new works, art AIs use a technique called ‘diffusion’. To use an overly simplistic metaphor, this generative process is a bit like a very diligent school pupil practising their handwriting.

Just as a child will practise their penmanship by repeatedly writing out sentences until the result looks more like their teacher’s handwriting, while still being recognisably their own, an art AI will compose a rough initial series of pixels based on your prompt. It will then repeatedly ‘fill in’ and refine that attempt, using all of the variables available to it as a guide, until it produces an image it judges to match your prompt. This can happen in a matter of seconds or minutes.
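The repeated-refinement idea can be sketched in a few lines of Python. This is a heavy simplification: the ‘image’ here is just four numbers, and the ‘guide’ is a fixed target we invented for illustration, whereas in a real diffusion model the guide is a trained neural network steering the pixels, step by step, towards something that matches your prompt:

```python
import random

random.seed(42)

# An invented 'finished image' the refinement should converge towards.
target = [0.2, 0.8, 0.5, 0.1]

# Start from pure random noise, just as diffusion models do.
image = [random.random() for _ in target]

# Repeatedly nudge every pixel a little closer to what the guide suggests.
for step in range(50):
    image = [pixel + 0.2 * (goal - pixel) for pixel, goal in zip(image, target)]

# After many small refinement steps, the noise has become the target image.
print([round(pixel, 3) for pixel in image])
```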

The following images were generated based on the text prompt ‘a Pointillist-style painting of pandas and kittens searching for fossils on a beach in Dorset with flurries of snow in the air’:

The following images were generated based on the text prompt ‘a photo in the style of Annie Leibovitz showing a robot in a classroom teaching another robot about the concept of artificial intelligence using a blackboard’:

So why are generative AIs controversial?

For a start, some people object to their works being used as the training data for generative AIs without their permission or compensation.

There’s also concern about bias and bigotry seeping into the works created by generative AIs, a problem that notably occurred in one of Microsoft’s early attempts at creating a conversational chatbot.

While the works created by today’s generative AIs can sometimes be a bit basic or wonky in places, the fear is that they’ll soon become good enough to threaten people’s jobs.

Concerns about the future use of AI decreasing people’s earnings are among the grievances behind the 2023 writers’ and actors’ strike in Hollywood. Scriptwriters are concerned that they’ll only be hired to refine ideas pumped out by AIs, rather than creating their own. Actors are worried that their work will be done by AI-generated doppelgängers. The latter has already occurred in a few cases involving voice actors, but with their permission.

In another example, Marvel Studios courted controversy when it was revealed that the opening credits for their TV show Secret Invasion were created by an AI rather than by human artists.

Pfff, artists are always complaining. All of that still seems a bit far-fetched to me.

They’re not the only ones who are worried. Activists, journalists and politicians fear that convincing-enough texts, images and even videos produced by generative AI, then distributed on social media, will fool people with poor media literacy skills – in other words, make fake news even more believable. There have already been a few examples of political campaigns using AI-generated images in attempts to accuse opponents of doing things that never actually happened. Such images are popularly known as ‘deepfakes’.


Should generative AIs be banned then?

Some countries are actively debating this and at least a few have banned ChatGPT specifically. UK authorities want to see ‘guard rails’ that restrict how generative AIs can be used. These could be similar to Midjourney’s rules which prevent the generation of images showing gore and sex. How such laws and guard rails will develop, and whether they can keep up with the rapidly developing AIs they’re attempting to regulate, remains to be seen.
