This week has felt like a watershed moment for AI. Artificial intelligence is a term we’ve known from science fiction for decades, but 2023 appears to be the year of its arrival into our daily lives.
The technology has been accelerating in recent years, and we got a taste of its power when ChatGPT (standing for Chat Generative Pre-trained Transformer) launched to the public on 30 November 2022.
Millions of us rushed to ask the chatbot to do humdrum tasks, or answer the most random questions. Students could get it to write university assignments, computer programmers found it to be pretty handy at correcting code, and the bored-at-work got it to write project summaries in the style of Yoda or Shakespearean sonnets about their sandwiches.
The results were undeniably impressive. ChatGPT served up passable and extensive answers for almost any request, drawing on its intensive training by humans and a dataset of some 300 billion words from across the internet (even if its knowledge isn't bang up to date and it won't give good answers about current events - yet).
It wasn’t perfect though. There were accusations of bias, and even racism in some extreme cases. A judge in Colombia admitted to using ChatGPT to ask if an autistic child should be exempted from paying fees for their treatment - although we can’t blame the AI for that one.
Despite these concerns, minds are being blown across the world. It isn’t perfect, but it is extraordinarily good at mimicking human language.
But it was only this week that AI made the leap from the realm of Hollywood dystopia to the mainstream, with the news that Microsoft and Google, two of the world’s biggest tech giants, are going head-to-head in the race to capture the emerging market in AI-driven information.
Microsoft, which has invested $10 billion in OpenAI, the research lab behind ChatGPT and digital image creator DALL-E, announced that its Bing search engine would incorporate a “next-generation OpenAI model that is more powerful than ChatGPT”.
Cue panic in another Silicon Valley HQ, as Google rushed out an announcement about its own AI innovation: Bard, which will be coming to the world’s biggest search engine soon. It’s based on Google’s LaMDA model - a conversation technology so advanced that, famously, a Google engineer once claimed it had become sentient (an assertion widely dismissed by experts).
Let's gloss over the fact that Bard gave an incorrect answer in its grand unveiling demo, promptly wiping out some $120 billion from the value of Alphabet, Google's parent company.
But as AI becomes part of the fabric of our daily online lives, it already has major implications for education, the world of work and the creative industries, with plenty of nervousness around plagiarism, redundancies and intellectual property.
So, should we fear AI? I asked ChatGPT itself, and this is what it said:
“AI will likely bring about significant changes to healthcare, transportation, education, work, entertainment and security. It may automate jobs and improve efficiencies, but also raise ethical concerns such as job loss, privacy issues and potential biases. To maximise its benefits and minimise its drawbacks, it’s important to approach AI with consideration for its ethical, legal, and social implications.”
Fine, and a bit dull, so can it try that again in rhyming couplets?
“AI is the future, that much is true,
It’s poised to change our lives anew...”
OK, that’s more than enough. I want to keep my job.
What are your thoughts on AI? You can write to me with feedback on this or any of our stories at [email protected] or find me on Twitter @nickmitchell.
This originally appeared in our NationalWorld Today newsletter - you can subscribe for free here