Two years ago most people hadn’t heard of generative AI, but now it’s everywhere. Shortly after its release, ChatGPT became the fastest-growing app in history. Jump forward to today: Apple has just announced plans to build it into its iPhones, and it’s hard to find a piece of software that doesn’t offer generative capabilities.
However, there’s undoubtedly a lot of hype, confusion, and even fear around it. Opinions range from enthusiastic advocates announcing that it’s going to solve all of the world’s problems to doom-mongers predicting that it will make millions of us redundant or even spell the end of human creativity.
So, here’s my overview of what I see as the biggest misconceptions about this revolutionary but misunderstood technology. Personally, I believe that most people can find a way to use it to help them in their work or day-to-day lives, but first, they need to understand it. Hopefully, this is helpful for cutting through the hype and getting to the facts.
Generative AI Is Intelligent
Let’s start with the biggie. Generative AI, like ChatGPT, mimics certain qualities of natural intelligence, specifically the ability to process, interpret, and create language.
However, natural (human or animal) intelligence has many facets, like reasoning, abstract thought, emotional intelligence, intuition, memory, creativity, and communication.
While generative AI excels in communication and memory, it only touches on those other aspects.
When ChatGPT or another large language model (LLM) creates output, it follows probability rules that it learned during its training. This means its “thought processes” are far more limited and less sophisticated than ours.
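To make that concrete, here is a deliberately tiny, hypothetical sketch in Python of the core idea: text is produced one word at a time by sampling from probability distributions learned from data. Real LLMs work over tokens with billions of learned parameters, but the underlying mechanism is still “predict the next piece of text”, not reasoning as we know it.

```python
import random

# A toy "language model": for each word, a learned probability
# distribution over possible next words. (Illustrative made-up numbers;
# a real LLM learns billions of parameters from its training data.)
next_word_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "end": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"sat": 0.3, "ran": 0.7},
    "sat": {"quietly": 1.0},
    "ran": {"away": 1.0},
}

def generate(start_word: str, max_words: int = 5) -> str:
    """Generate text by repeatedly sampling a likely next word."""
    words = [start_word]
    for _ in range(max_words):
        probs = next_word_probs.get(words[-1])
        if probs is None:  # no learned continuation, so stop
            break
        choices, weights = zip(*probs.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat quietly"
```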
Is this real intelligence? No, it’s called “artificial” intelligence for a reason. It can do amazing things, but it’s still just an algorithm – albeit a very complex one!
Generative AI Will Replace Human Creativity
If computers can write stories and draw pictures, does that mean we don’t need human authors and artists anymore? The simple answer is no. Generative AI doesn’t fulfill all the criteria for true intelligence or creativity.
It doesn’t really have new ideas in the same way that humans do. Its creativity is informed only by data, rather than by feelings, emotions, original thoughts, and personal experience of the world.
A common human reaction to AI-generated content is that it feels bland and lacking in humanity. At first glance, this may not seem like a very scientific analysis, but when you consider it in the context of generative AI’s limitations, it makes sense.
Human creatives have nothing to fear from generative AI. While it can produce a formulaic novel or a generic picture quickly, it’s far from creating art that inspires and makes us think.
Generative AI Only Creates Words And Pictures
Ask most people what generative AI is, and they will probably tell you about ChatGPT and how it generates words, or DALL-E 2 and how it creates pictures. While these are the best-known use cases, they are just the tip of the iceberg of what AI can do today.
Generative AI tools are also available to create music, voice, and even video.
But it doesn’t stop there. Did you know it has also been used to create new medicines – including a new immunotherapy treatment for cancer?
It can also generate data – known as synthetic data – which can be used to train AI algorithms and carry out statistical analysis, as well as the charts and reports that help us make sense of that data.
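As a simple illustration of the synthetic-data idea, the sketch below (Python and NumPy, with made-up figures) generates artificial records that mimic the statistical shape of a real dataset without copying any real individual’s values. Production synthetic-data tools are far more sophisticated, often using generative models rather than simple distributions, but the purpose is the same: realistic data you can analyze or train on without exposing the original records.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Statistics we might have measured on a real dataset
# (illustrative numbers, not real figures).
real_age_mean, real_age_std = 41.0, 12.5
real_income_mean, real_income_std = 52_000.0, 18_000.0

def make_synthetic_records(n: int) -> np.ndarray:
    """Draw synthetic (age, income) pairs that follow the same
    distributions as the real data, without exposing real records."""
    ages = rng.normal(real_age_mean, real_age_std, size=n).clip(18, 90)
    incomes = rng.normal(real_income_mean, real_income_std, size=n).clip(0, None)
    return np.column_stack([ages.round(), incomes.round(2)])

synthetic = make_synthetic_records(1000)
print(synthetic[:3])           # a few example rows
print(synthetic.mean(axis=0))  # means should be close to the real ones
```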
It can also create design blueprints for anything from buildings to new devices. When combined with technology such as 3D printing or automated construction robots, it can create physical objects that exist in the real world.
Generative AI Doesn’t Need Human Input
When we think of generative AI output, we might assume that everything is fully automated and that humans aren’t necessary. But this is far from the truth, at least for now.
For a start, generative AI often makes mistakes or just plain makes stuff up – a phenomenon known as hallucination. For any critical use case, it’s essential to involve human oversight for fact-checking and error correction.
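One way to picture that oversight is as an explicit review step in the workflow. The sketch below is purely illustrative (the function names and the model call are hypothetical stand-ins): nothing the model produces gets published until a person has checked and approved it. Real editorial and compliance processes are, of course, much richer than this.

```python
def ai_draft(prompt: str) -> str:
    """Stand-in for a call to a generative model (hypothetical)."""
    return f"Draft answer to: {prompt}"

def human_review(draft: str) -> bool:
    """A person checks the draft for factual errors before release."""
    print("Please review the following draft:\n", draft)
    return input("Approve for publication? [y/N] ").strip().lower() == "y"

def publish(text: str) -> None:
    print("Published:", text)

draft = ai_draft("Summarize our product's safety record.")
if human_review(draft):
    publish(draft)
else:
    print("Sent back for correction by a human editor.")
```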
Human input is also necessary at every step of the process to ensure that AI is being used in a way that’s fair, ethical and responsible. This is particularly important when AI is put to work in ways that can affect human lives, for example when making decisions in the fields of healthcare, finance, human resources or law enforcement.
While generative AI can be an extremely powerful and useful tool, it’s nowhere near being good enough to fully replace human judgment and expertise.
Generative AI Is New
It may seem like generative AI burst into the world with the arrival of ChatGPT in late 2022, but that was really just when it hit the mainstream. People have been using AI to create things – including text, pictures and music – for far longer.
The first experiments with AI chatbots took place in the 1960s, with programs like Eliza that tried to hold human-like conversations. AI-generated images date back to the early 1970s, when artist Harold Cohen created AARON. And AI-generated music followed in the 1980s with David Cope’s Experiments in Musical Intelligence, which aimed to mimic the style of famous composers.
What’s new is that we’ve now reached the stage where a combination of factors – including the availability of processing power and computer memory, cloud computing, and advances in deep learning – has made generative AI available to everyone.
Rather than needing expensive computers that were previously only available to big companies and universities, we all carry smartphones that can hook up to massively powerful data centers in the cloud, where the computing takes place. This means generative AI is best seen not as one breakthrough invention but as a convergence of many technologies that have matured at the same time to kickstart the revolution.