Monthly Archives

April 2025

Why We Don’t Have Real Quantum Computing Yet


From where we’re sitting today, it’s increasingly likely that quantum computing will be one of the most disruptive technologies on the medium-term horizon.

Here’s why: harnessing the properties of matter as it behaves at the sub-atomic level – taking advantage of strange phenomena like entanglement and superposition – means certain types of computation can be vastly accelerated.
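As a quick sketch of the underlying math (standard textbook notation, not tied to any particular machine): a single qubit holds a weighted blend of 0 and 1, and a register of n qubits occupies a state space spanning all 2^n bit strings at once.

```latex
% One qubit: a superposition of both basis states
|\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1

% n qubits: amplitudes over all 2^n bit strings simultaneously
|\psi\rangle = \sum_{x \in \{0,1\}^n} c_x\,|x\rangle
```

A quantum algorithm choreographs interference between those 2^n amplitudes so that wrong answers cancel and right answers reinforce. That is the source of the acceleration for certain classes of computation.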

These include:

- Identifying patterns across vast data sets
- Solving complex optimization problems involving many variables
- Cryptography – encoding and decoding information

Vital real-world challenges such as artificial intelligence, drug and materials discovery, and cyber security all rely on these calculations. So, the impact of quantum computing is likely to be immense.

There are some, though, who believe the reality is still some way off. Nvidia CEO Jensen Huang’s views on this recently caused a mini-crash in the stock price of quantum computing providers. His belief is that “very useful quantum computers” could be 30 years away.

On the other hand, evidence shows that quantum computing is increasingly accessible. Most of the big cloud providers—Google, Amazon, Microsoft—offer quantum-as-a-service, along with a growing ecosystem of startups and disruptors such as D-Wave and IonQ.

So, what’s the difference between what’s available today and what will be available when quantum becomes really useful?

Today’s Quantum Computers – The NISQ Era

Although they’re incredible feats of engineering, quantum computers today are plagued by a number of limitations. For this reason, the current era of quantum computing is dubbed the Noisy Intermediate Scale Quantum (NISQ) era. Although improvements and breakthroughs are being made constantly, systems that are accessible today suffer from low fault tolerance, high rates of error caused by qubits decaying out of their quantum state, and extreme sensitivity to interference.

Most systems still rely on classical computing architecture to handle many tasks, which creates speed bottlenecks.

And while today’s most powerful quantum computers have around 1,000 qubits, some predict that a scale of hundreds of thousands or even millions could be needed for advanced problems.

Adding new qubits isn’t as easy as it sounds. In fact, it’s a hugely complex engineering problem: qubits have to be isolated from the outside world so they don’t decohere, and they have to be cooled to within thousandths of a degree of absolute zero.

In simple terms, today’s technologies are largely experimental, proof-of-concept or prototypes. Although they are constantly improving, they aren’t the scalable, robust systems needed for industrial applications.

Towards Quantum Supremacy

While big challenges remain, some hugely significant strides have been taken in recent years.

Google recently announced that it had developed revolutionary methods of improving the error tolerance of quantum computing by combining multiple qubits to make logical qubits.
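To get an intuition for logical qubits, here’s a deliberately simplified toy model in Python – a classical repetition-code analogy of my own, not Google’s actual scheme (which uses far more sophisticated surface codes). It shows the core idea: spread one bit of information redundantly across several noisy “physical” carriers, decode by majority vote, and the effective error rate drops as redundancy grows.

```python
import random

PHYSICAL_ERROR_RATE = 0.05  # chance that each individual physical copy flips

def read_logical_bit(value: int, copies: int) -> int:
    """Encode one bit across `copies` noisy carriers, decode by majority vote."""
    noisy = [value ^ (random.random() < PHYSICAL_ERROR_RATE) for _ in range(copies)]
    return int(sum(noisy) > copies / 2)

def logical_error_rate(copies: int, trials: int = 100_000) -> float:
    """Fraction of trials where the decoded bit disagrees with the original."""
    return sum(read_logical_bit(0, copies) != 0 for _ in range(trials)) / trials

for n in (1, 3, 5, 7):
    print(f"{n} physical copies -> logical error rate ~ {logical_error_rate(n):.4f}")
```

With a 5% physical error rate, three copies already push the logical error rate below 1%, and it keeps falling as more are added – the same redundancy principle, in quantum form, behind combining physical qubits into logical ones.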

New types of qubits, like photonic qubits and trapped ion qubits, are also showing promise when it comes to improving stability.

And breakthroughs have been made in the development of room-temperature qubits, which could remove the expense of super-cooling from the equation.

Progress is also ongoing in building the infrastructure that needs to be in place for quantum to be truly useful once the power is available.

This involves creating quantum programming languages and development frameworks – such as Microsoft’s Q#, IBM’s Qiskit, and the open-source PennyLane – as well as operating systems.
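As a taste of what programming one of these machines looks like, here’s a minimal sketch using Qiskit (assuming `pip install qiskit` and the current 1.x API). It prepares a Bell state – two qubits in superposition and entangled, the very phenomena discussed at the start of this post.

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(2)
qc.h(0)      # Hadamard gate: puts qubit 0 into an equal superposition of 0 and 1
qc.cx(0, 1)  # CNOT gate: entangles qubit 1 with qubit 0

state = Statevector.from_instruction(qc)
print(state)  # amplitudes of ~0.707 on |00> and |11>: measuring one qubit fixes the other
```

Running circuits like this on real NISQ hardware, rather than a simulator, is exactly where today’s error rates and decoherence start to bite.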

And Microsoft recently announced a breakthrough with Majorana 1, the world’s first topological qubit processor. This processor uses an entirely new state of matter to dramatically improve qubit stability and scale—potentially enabling the integration of over a million qubits on a single chip, a major leap toward practical quantum computing.

Challenges also remain around building a human workforce able to fully leverage quantum computing once it arrives. This will require big investments in education, skills, and training.

So we’re heading in the right direction along the path to quantum supremacy – the point where quantum computers can solve problems that classical computers simply can’t.

Although “true” quantum might not be immediately around the corner, I don’t think it will be long before we can at least start to see it making a difference in our lives.

Agentic AI Enters Management: Taco Bell’s Byte-Sized Approach To Virtual Restaurant Leadership


YUM Brands, the parent company of Taco Bell and operator of 60,000 restaurants worldwide, has unveiled an AI-powered restaurant manager with the aim of bringing agentic AI capabilities to fast food.

Agents represent the latest wave of AI innovation, capable of complex, multi-step tasks with minimal human intervention. They are considered to be the next generation of cutting-edge AI applications, following generative AI chatbots like ChatGPT.
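Stripped to its essentials, most agent designs follow a plan-act-observe loop. The sketch below is purely hypothetical – `call_llm` and the two tools are stand-ins, not any real Yum or Byte API – but it shows the shape of a multi-step task being completed with no human in the loop.

```python
# Hypothetical plan-act-observe loop; the "LLM" and tools are stubs.

def check_attendance() -> str:
    return "2 crew members absent for the evening shift"

def draft_schedule(note: str) -> str:
    return f"Proposed schedule adjusted for: {note}"

TOOLS = {"check_attendance": check_attendance, "draft_schedule": draft_schedule}

def call_llm(goal: str, history: list[str]) -> tuple[str, str | None]:
    """Stand-in for a model call: returns (next_tool, argument) or ('done', None)."""
    if not history:
        return "check_attendance", None
    if len(history) == 1:
        return "draft_schedule", history[-1]
    return "done", None

def run_agent(goal: str) -> list[str]:
    history: list[str] = []
    while True:
        tool, arg = call_llm(goal, history)
        if tool == "done":
            return history
        result = TOOLS[tool](arg) if arg else TOOLS[tool]()
        history.append(result)  # each observation feeds the next planning step

print(run_agent("cover tonight's shifts"))
```

A real agent would replace `call_llm` with an actual model and wire the tools to live systems, but the loop – plan a step, act, observe, repeat – is the defining pattern.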

Some believe that they will pave the way for “virtual employees” that will work alongside humans, augmenting our capabilities and managing routine tasks so we can get on with the fun stuff.

And they are also seen as a stepping-stone on the road to artificial general intelligence (AGI) – the “holy grail” of building AI that can do just about anything we can.

The fast-food industry exists in a state of perpetual digital transformation. Now, YUM Brands has demonstrated its plans for the next leap forward with its Taco Bell franchise, and competitors are also eyeing opportunities.

So, let’s take a look at how fast food is cooking on all cylinders with agentic AI.

How Is Taco Bell Using Agentic AI?

Building on its Byte By Yum AI platform, which already uses AI to take customer orders at drive-through windows, Yum plans to deploy virtual restaurant managers. However, it also says that it doesn’t believe they will replace human management jobs.

As reported by Reuters, a video demonstration of the concept involved a character referred to as Byte AI Restaurant Coach. The character explains that they can help a restaurant manager track crew attendance and plan shift patterns. It also makes suggestions like altering opening hours to match market conditions and even taking over at the drive-through window.

Although clearly not presented as a market-ready product, it’s a strong indicator that the business – the world’s largest franchise operator – understands agents are the next step.

Yum’s platform is currently used by several other multinational brands that sit under its umbrella, including KFC and Pizza Hut. It offers both customer-facing (e.g., drive-through ordering) and internal (e.g., shift management) AI applications.

Putting an agentic layer above this, bringing all the data, insights, and (critically) actions together in order to understand their impact across the entire business, is the goal here.


Virtual Management

Taco Bell’s vision for Byte illustrates the progression of AI from trivial, routine tasks to those involving thinking, planning, and decision-making.

This mirrors a trend we’re seeing across all industries, as business leaders look for use cases beyond automating routine tasks like processing customer orders or handling customer-assistance chats.

The aim isn’t to replace managers—the types of decisions that AI agents can make about managing teams and human behavior will still require human oversight.

Instead, it’s about creating virtual management assistants that step in and offer advice and guidance when needed, or take over tasks they can clearly do better than a human.

Developing AI that can interface with our lives in this way, rather than just waiting for us to ask it questions or tell it to do specific tasks, is a problem we need to crack before we approach AGI.

In fact, it’s critical to some of the most hyped AI use cases across industries, like virtual healthcare assistants, teaching assistants and legal advisors.

So Yum doesn’t actually believe its platform will lead to a reduction in the number of fast-food management jobs. Instead, it aims to augment existing human workers so they can spend more time on more valuable work.

Faster Food?

Competition to leverage technology first is always fierce in fast food, and there are missteps. McDonald’s scrapped a pilot project involving AI drive-through agents last summer because it wasn’t great at getting orders right.

This is just one of many AI use cases at the Golden Arches. One of the most recently announced involves a collaboration with Google Cloud to deliver predictive maintenance for restaurant machinery. Given how heavily the company is invested in AI, it’s highly likely it will soon find other opportunities to put agents to work.

Globally, fast food is a trillion-dollar business with a reputation for rewarding players who are the first to exploit emerging technologies.

Robot chefs, AI-driven personalized menus and voice-ordering systems are all set to revolutionize the industry in the coming years. It isn’t difficult to see agentic management platforms serving as the intelligence backbone for this infrastructure.

While the industry has already weathered significant transformation, the arrival of AI agents could be its biggest shake-up yet, and will certainly test which companies are set to dominate in the AI era.

The AI Leadership Crisis: Why Chief AI Officers Are Failing (And How To Fix It)


When a Fortune 500 company hired its first Chief AI Officer, it announced the appointment with tremendous fanfare. Eighteen months later, it quietly posted a new job listing for the same position. This is a scenario I am seeing play out across boardrooms worldwide as organizations grapple with a troubling challenge: the revolving door of Chief AI Officer positions.

The CAIO role emerged when organizations scrambled to harness the transformative potential of artificial intelligence. However, despite impressive salaries and reporting directly to CEOs, these positions frequently dissolve within two years. This leadership crisis threatens to derail AI initiatives at a time when strategic AI implementation has never been more critical.

So why exactly are these crucial leadership positions failing? And more importantly, what can organizations do differently? Let’s examine the five fundamental challenges undermining this pivotal role.

The Expertise Paradox

Imagine trying to find a world-class orchestra conductor who can also build violins from scratch. That’s often what companies are looking for when searching for Chief AI Officers – technical wizards who simultaneously excel at enterprise-wide business transformation.

This unicorn hunt typically ends with one of two compromises: hiring technical experts who grasp neural networks but struggle with organizational change or selecting business leaders who can’t earn credibility with AI teams because they lack technical depth.

One technology company I advised hired a renowned machine learning researcher as their CAIO. While brilliant at algorithm development, she struggled to translate technical capabilities into business value. The company’s AI initiatives became increasingly academic and disconnected from market needs.

Conversely, a retail organization appointed a seasoned business executive to the role. He excelled at stakeholder management but lacked the technical judgment to evaluate vendors’ increasingly outlandish AI claims, leading to several expensive missteps.

This expertise paradox creates an impossible standard that sets up even the most talented leaders for failure.


The Integration Challenge

AI doesn’t exist in isolation – it’s part of a broader technology and data ecosystem. Yet companies frequently create CAIO positions as standalone silos, disconnected from existing digital and data initiatives.

This organizational design flaw creates territorial conflicts rather than collaboration. At one financial services firm, the Chief AI Officer and Chief Data Officer independently developed competing strategies for the same business problems. The result? Duplicated efforts, inconsistent approaches, and, ultimately, wasted resources.

Successful AI implementations require seamless integration with data infrastructure, IT systems, and business processes. When the CAIO operates in isolation, this integration becomes nearly impossible.

Think of it like adding a new specialist to a surgical team without introducing them to the other doctors. No matter how skilled the newcomer is, their effectiveness depends entirely on how well they coordinate with the existing team.

The Expectation Mismatch

Perhaps the most dangerous challenge facing CAIOs is the profound disconnect between expectations and reality. Many boards anticipate immediate, transformative results from AI initiatives – the digital equivalent of demanding harvest without sowing.

AI transformation isn’t a sprint; it’s a marathon with hurdles. Meaningful implementation requires persistent investment in data infrastructure, skills development, and organizational change management. Yet CAIOs often face arbitrary deadlines that are disconnected from these realities.

One manufacturing company I worked with expected their newly appointed CAIO to deliver $50 million in AI-driven cost savings within 12 months. When those unrealistic targets weren’t met, support for the role evaporated – despite significant progress in building foundational capabilities.

This timing mismatch creates a lose-lose scenario: either the CAIO pursues quick wins that deliver limited value, or they invest in proper foundations but get replaced before those investments bear fruit. Based on my experience, the right mix of both quick wins and strategic investments is the key to success.

The Governance Gap

There are many potential risks of AI, from bias to privacy concerns, and the right level of governance is essential. CAIOs are typically tasked with ensuring responsible AI use, yet they frequently lack the authority to enforce guidelines across departments.

This accountability-without-authority dilemma places CAIOs in an impossible position. They’re responsible for AI ethics and risk management, but departmental leaders can ignore their guidance with minimal consequences.

One healthcare organization appointed a CAIO who developed comprehensive responsible AI guidelines. However, when a major business unit rushed to implement an AI system without proper assessment, the CAIO couldn’t halt deployment. Six months later, when bias issues emerged, guess who received the blame?

Effective governance requires structural power, not just policy documents. Without enforcement mechanisms, CAIOs become convenient scapegoats rather than effective guardians.

The Talent Tension

Even the most brilliant strategy falters without proper execution. Many CAIOs struggle to build effective teams because they’re competing for scarce AI talent with tech giants offering extraordinary compensation packages.

This talent shortage creates a cascading problem. Without strong teams, CAIOs can’t deliver results, and without results, they can’t secure additional resources. Without resources, attracting talent becomes even harder—a vicious cycle that undermines their position.

One CAIO at an energy company described their situation as “trying to build a Formula 1 team while only being able to offer bicycle mechanic salaries.” The talent gap creates a fundamental execution barrier that no amount of strategic brilliance can overcome.

The Path To Successful AI Leadership

Despite these challenges, some organizations have developed successful CAIO roles. The difference lies in how they position, support, and integrate this critical function.

Successful CAIOs aren’t isolated AI evangelists; they’re orchestrators who align AI with broader digital and data strategies. They have clear success metrics beyond implementation, focusing on business outcomes rather than technical deployments. They work with realistic timeframes and resources to build proper foundations.

Most importantly, they have both board support and structural authority to drive cross-functional collaboration.

Building The Right Foundations

For organizations serious about AI transformation, the CAIO role requires thoughtful positioning. Rather than seeking unicorns, consider complementary leadership teams that combine technical and business expertise. Integrate the CAIO function within existing technology and data leadership instead of creating competing silos.

Establish responsible AI governance with actual enforcement mechanisms. Set realistic expectations grounded in your organization’s data maturity. And critically, focus on building sustainable talent strategies rather than relying on a single heroic leader.

The CAIO role isn’t failing because of individual shortcomings – it’s struggling because of structural flaws in how organizations approach AI leadership. By addressing these fundamental challenges, companies can transform this troubled position into a catalyst for genuine AI-powered transformation.

The success of your AI initiatives doesn’t depend on finding that mythical, perfect leader. It depends on creating the organizational conditions where AI leaders can actually succeed.

Why 54% of Customers Are Disappointed: 5 CX Mistakes Your Business Can’t Afford


Consistently delivering advances in customer experience is a business strategy that’s driven phenomenal growth in the digital age.

Digital giants like Amazon, Netflix, and Uber have perfected this approach, gathering insights from countless touchpoints to continuously enhance every stage of their customers’ journeys.

And it’s working, too: consumers increasingly report that end-to-end customer experience is a deciding factor in where they spend their money.

There are lots of metrics used to define and quantify success in customer experience. But a very simple way of describing it is giving your customers the feeling that you just “get them.”

Unfortunately, not every business gets them. In fact, 54% of US consumers think that customer experience at most companies needs improvement.

There are tremendous opportunities at stake for those who get CX right, but it can be easy to slip up. So, let’s look at some common pitfalls businesses are likely to stumble into in the coming year.

Mistake 1: Not Understanding How To Use Customer Experience Data

Lack of data isn’t the issue in CX today; every touchpoint and customer interaction can be captured, including point-of-sale data, web browsing behavior, customer service interactions, social engagement and much more.

Many businesses use this data to optimize sales and marketing strategies or internal processes like inventory management and logistics. Using it specifically to meet customer experience goals is a more recent challenge, though. “Data rich, insight poor” describes any business function that collects data but doesn’t have a clear plan for turning it into action – a situation many people involved in delivering CX improvements will be familiar with.
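As a purely illustrative sketch (the records and threshold are invented), here’s how raw touchpoint data might be turned into one concrete, actionable insight – flagging customers who have contacted support repeatedly about the same issue, a classic symptom of being data rich but insight poor if it goes unnoticed.

```python
from collections import Counter

# Hypothetical touchpoint records pulled from chat, phone and email channels
touchpoints = [
    {"customer": "C1", "channel": "chat",  "topic": "order #1001"},
    {"customer": "C1", "channel": "phone", "topic": "order #1001"},
    {"customer": "C1", "channel": "email", "topic": "order #1001"},
    {"customer": "C2", "channel": "chat",  "topic": "returns"},
]

# Count repeated contacts per (customer, topic) pair
repeat_contacts = Counter((t["customer"], t["topic"]) for t in touchpoints)

# Flag anyone who has raised the same issue three or more times
at_risk = [key for key, count in repeat_contacts.items() if count >= 3]
print(at_risk)  # [('C1', 'order #1001')] -> worth proactive outreach before churn
```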

The most common mistakes here are not having a defined data strategy or, perhaps even worse, failing to properly protect sensitive customer data.

Mistake 2: Misjudging AI And Automation Opportunities

AI and automation transform CX when done effectively but can have significant downsides if mishandled. When shiny new technologies like LLM chatbots and virtual assistants actually help us to get things done and solve problems, they’re great. But when they cause frustration by not working, make customers feel detached, or even scare customers who don’t fully understand them, it’s a problem.

One tip here: Rather than just thinking about how AI can automate routine tasks like basic customer support, think about how it can enable entirely new strategies and business opportunities, like hyper-personalization and more engaging, immersive CX.

Mistake 3: Squandering The Opportunities Of Omnichannel

Meet customers where they are – that’s the mantra of omnichannel CX. Your customer base is more fractured than ever across a multitude of social networks, messaging platforms and “everything apps”. Retail is split between online and offline. Taking advantage of omnichannel means bridging these divides in ways that break down barriers between your customers and your business.

So today, you have to be everywhere. But unless you take a holistic and coordinated approach to doing so, this can be a recipe for creating disjointed experiences. Think of having to repeat information you’ve already given when switching between chatbot and human agents – super annoying.

The key to delivering superior omnichannel CX is creating a smooth, unified experience. For example, after spotting an advert for a product on their social feed, a customer should be able to chat to a sales agent about it, arrange to inspect it at a local bricks ‘n’ mortar store, and from there have it delivered safely to their home.

Treating each stage of this journey as a siloed process without strategic insight into how they impact CX is another fatal mistake.

Mistake 4: Taking A Reactive Rather Than Proactive Approach To CX

Relying on fixing issues with customer service after they arise isn’t a good idea. It’s often cited that one bad experience is all it takes for a lifelong customer to stop using a brand altogether. This means that every CX strategy should be focused on anticipating opportunities to make improvements and eliminating CX pitfalls before problems emerge.

AI and advanced analytics can help us understand when we are causing problems for customers rather than helping to solve them. Updates and improvements can be developed and shipped before anyone even knows they’re needed. We can monitor customer sentiment through social media to better understand what they really want, and we can use personalized outreach to let them know how our new solutions will fit their lives.

Squandering these opportunities would be a huge mistake for any business that wants to continually drive CX improvements.

Mistake 5: Not Empowering Everyone To Deliver Customer Experience

This is one that most of us have probably experienced – a receptionist who can’t offer a room upgrade because they aren’t authorized to, or a retail assistant who can’t offer a refund without permission from their manager. Customer experience can’t be dictated from the top down; it has to be present at every customer interaction and delivered at every touchpoint.

When frontline employees aren’t empowered to make decisions or resolve issues, the result is terrible CX. Empowering them means giving them the training, tools and, above all, the trust to use their initiative and always think experience-first.

This might require a culture shift for many businesses. But getting it right will foster a sense of ownership and stakeholding as well as create instant CX wins.

The CX Imperative: Turning Challenges Into Competitive Advantage

The customer experience landscape continues to evolve rapidly, and businesses that avoid these five critical mistakes will position themselves for substantial growth. As we’ve seen with industry leaders, exceptional CX isn’t just about avoiding problems—it’s about creating memorable, frictionless interactions that build lasting customer relationships. By strategically leveraging data, thoughtfully implementing AI, embracing true omnichannel integration, taking a proactive approach, and empowering every team member, companies can transform their customer experience from a potential liability into their strongest competitive advantage. The question isn’t whether your business can afford to invest in superior CX—it’s whether you can afford not to.

Why Trust Is The Missing Ingredient In Your AI Strategy


In the rush to deploy artificial intelligence, many organizations miss the crucial ingredient that determines whether AI initiatives succeed or fail. It’s not processing power, cutting-edge algorithms, or even data quality (though these certainly matter). The missing ingredient? Trust.

As I explored in a recent conversation with Leanne Allen, Partner and Head of AI at KPMG UK, the intersection between AI innovation and trust represents not merely an ethical consideration but a fundamental business imperative.

“Most of the challenges to the barriers of adoption are driven by this trust challenge,” Allen told me. “Trust can take a few different forms. The colleagues themselves, do they trust in the tools that they’re using? Some of that could be driven by their fear of their jobs. Then there’s trust with your customers and consumers. And then there’s trust with regulators.”

The Triple Trust Challenge

Organizations face a three-dimensional trust challenge when implementing AI. First, employees may resist AI tools if they fear job displacement or don’t understand how to work effectively with them. Second, customers may reject AI-enabled products or services if they doubt their reliability, fairness, or data-handling practices. Third, regulatory bodies worldwide are increasingly scrutinizing AI applications for compliance with emerging standards.

This multi-faceted trust challenge explains why many AI initiatives deliver disappointing results despite substantial investment. Without trust, adoption lags, customer engagement suffers, and regulatory pressures mount.

Why Responsible AI Is Your Competitive Advantage

KPMG’s Trusted AI framework, outlined in detail on its website, emphasizes that responsible implementation of AI isn’t just about avoiding harm—it’s about creating sustainable business value.

“The initial value is very much around productivity and efficiency gains,” Allen noted. “However, although it promises all of these amazing value and amazing gains, unless people start actually using these tools and not just using them in a very infancy side, like a bit of chat here and there, but actually using them to their full potential, you’re not going to drive that growth and promise that’s being made.”

KPMG’s approach centers on ten ethical pillars: fairness, transparency, explainability, accountability, data integrity, reliability, security, safety, privacy, and sustainability. These principles guide implementation throughout the AI lifecycle, from ideation to deployment and monitoring.


The Three Waves Of AI Transformation

Allen describes three distinct waves of AI adoption that organizations typically experience:

“The first wave is very much what we call an enabler wave. It’s retooling, giving you access to tools to help you do your job a bit better and faster. Wave two is then looking at the actual end-to-end processes themselves and effectively it’s the redesign of that process. Wave three is reimagining. That’s really thinking about even your organizational structure, going back to what is your value stream of your organization.”

These waves highlight how trust must be built into AI systems from the beginning, as each successive wave involves deeper integration of AI into business processes and organizational structures.

Values-Driven AI: Aligning Technology With Corporate Principles

One of the most compelling aspects of KPMG’s framework is its emphasis on aligning AI initiatives with existing corporate values.

“The values-driven approach does align to corporate values and most corporate values will have techniques like, or statements like integrity baked into them. They will have social responsibility baked into them,” Allen explained.

In practice, this means establishing ethics boards or councils to review AI use cases. These boards aren’t compliance teams that simply check boxes against regulations. Instead, they serve as advisors who challenge whether potential AI applications align with organizational values and consider the diversity of thought essential for responsible innovation.

“Putting in ethics boards or ethics councils in place… they’re not compliance teams, so they’re not there to do the job of saying yes or no, and tick a box against regulation, they’re there as an advisory board and sometimes a challenge, to on the ethical side, more than anything,” Allen said.

Human-Centric Design: Augmentation Over Automation

The distinction between augmenting human capabilities versus replacing them entirely represents another key aspect of building trustworthy AI.

“Anything that requires decision making is still about augmenting humans, supporting humans, providing them extra information so they can make better decisions, rather than making those decisions directly themselves,” Allen emphasized. “And I think that’s really the shape of what the workforce of the future is going to look like. It will free up time for more critical thinking, more value, more creative type work.”

Organizations should measure whether AI truly augments human capabilities through metrics like time saved and the percentage of AI-generated decisions that humans modify, indicating genuine human oversight rather than rubber-stamping.
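The second of those metrics is trivial to compute once decision logs are kept – the hard part is keeping them. Here’s a hedged sketch (the log entries are invented) of what measuring the human override rate might look like:

```python
# Hypothetical log of AI recommendations alongside the final human decision
decisions = [
    {"ai_recommendation": "approve", "human_final": "approve"},
    {"ai_recommendation": "deny",    "human_final": "approve"},
    {"ai_recommendation": "approve", "human_final": "approve"},
    {"ai_recommendation": "deny",    "human_final": "deny"},
]

modified = sum(d["ai_recommendation"] != d["human_final"] for d in decisions)
override_rate = modified / len(decisions)
print(f"Human override rate: {override_rate:.0%}")  # 25% in this toy log

# A rate near 0% may signal rubber-stamping; a very high rate may signal
# low trust in the system -- either extreme is worth investigating.
```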

Overcoming Implementation Obstacles

When I asked Allen about the most common obstacles organizations face when implementing ethical AI principles, her answer was illuminating:

“The first one is the framework. Do you have your ethical principles clearly defined and how you communicate those? Then there’s an element of a higher-level operating model. Then it’s going to come back down to education. One of the biggest obstacles is still a lack of education and understanding.”

She also emphasized the persistent challenge of poor data infrastructure: “Fundamentally what hinders the acceleration here is the foundational elements. So, infrastructure and data, right? And the quality of the data and access to the data.”

Building A Global Consensus

Looking toward the future, Allen identified a significant collective challenge humanity needs to address to ensure AI benefits society as a whole:

“I think the first one is the global lens and we need, in my opinion, a level of consistency of standards or regulation across jurisdictions. And at the moment, I think we’re possibly going in the other direction,” Allen observed. “Data doesn’t have boundaries, right? So the challenge is AI doesn’t have boundaries. We have boundaries as countries. And I think that’s going to stifle the amount of innovation that can happen, or countries will develop AI in silos.”

The Path Forward: Trust By Design

Building trust in AI systems isn’t an afterthought—it must be designed in from the beginning. This “trust by design” approach involves embedding control points throughout the AI lifecycle to ensure systems align with both regulatory requirements and ethical principles.

Organizations that succeed in this space will avoid potential ethical crises and gain a competitive advantage through higher adoption rates, greater stakeholder confidence, and more sustainable innovation.

The promise of AI remains extraordinary, but its full potential will only be realized when paired with the human element of trust. As Allen aptly summarized: “We believe in the transformative power of AI. And that it can only reach its full potential when it is paired with human expertise and ingenuity.”

For businesses looking to implement AI successfully, this means going beyond the technical aspects to address the human dimensions of trust throughout the organization and its broader ecosystem of stakeholders. Only then will AI truly deliver on its transformative potential.

Credit: Bernard Marr

The Important Difference Between Agentic AI And AI Agents


If you’ve been reading about business technology recently, then “AI agents” and “agentic AI” are terms you’ve probably come across with increasing frequency.

They’re often used interchangeably, and I’ve noticed this can cause a little confusion as they refer to subtly different concepts.

As it’s something I’ve been asked about, I thought I’d put together a little explainer.

Remember, AI is a quickly evolving subject, and the terminology around it is evolving, too.

So here’s an overview of AI agents and agentic AI, and most importantly, a guide to telling the difference.


So What Are They And How Are They Different?

OK – both concepts refer to AI that can work through multi-step problems independently, with little guidance from humans.

AI agents are specific applications created to do just this and are already widely in use today, even if we don’t often see them.

They’re used by banks and in e-commerce to verify our identities, automate transactions and record keeping, and learn about us in order to improve their service.

Agentic AI, on the other hand, refers to the field of AI that enables machines to operate as agents. Agentic AI is concerned with researching and developing AI models capable of the type of independent, autonomous work that agents can do.

Think of AI agents as specific medicines prescribed for particular conditions, while agentic AI is the entire field of pharmaceutical science that develops all medications.

Another way of looking at it is in the context of artificial general intelligence (AGI) – the future goal of one day creating AIs that are generalists rather than specialists, capable of any task we ask them to help out with.

Today’s AI agents are not AGI – and they probably won’t be for a few years yet. Yes, they can carry out complex tasks, but still only specific tasks they were created for. They can’t really apply their learning to doing other things in the same way humans can.

Agentic AI, however, is a field of AI research and development some believe will eventually lead to AGI. It includes building AIs that are capable of interacting with external systems – both digitally by interfacing with them and physically with robotics.

So, to put it in a very straightforward way: “AI agents” refers to specific applications of agentic AI, while “agentic AI” refers to the models, algorithms and methods that make them work.
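In code terms, the distinction might look something like this (a hypothetical sketch – the names are illustrative, not from any real framework): the generic loop stands for the “agentic” capability, while the identity-verification function is one specific “AI agent” built on top of it, echoing the banking example above.

```python
from typing import Callable

def agentic_loop(goal: str,
                 plan: Callable[[str], list[str]],
                 act: Callable[[str], str]) -> list[str]:
    """The general capability (agentic AI): plan steps for a goal, execute each."""
    return [act(step) for step in plan(goal)]

def identity_check_agent(customer_id: str) -> list[str]:
    """One concrete application (an AI agent), like the banking use case above."""
    plan = lambda goal: ["fetch documents", "match selfie to ID", "log the result"]
    act = lambda step: f"{step} for {customer_id}: done"
    return agentic_loop(f"verify {customer_id}", plan, act)

print(identity_check_agent("customer-42"))
```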

Why Is This Important?

AI agents and agentic AI are two closely related concepts that everyone needs to understand if they’re planning on using technology to make a difference in the coming years.

As research and development into agentic AI continues, we will see increasingly sophisticated agents capable of automating many different tasks.

Truly useful and personalized digital assistants, capable of learning about what we need in detail and taking steps to help us achieve it, will just be the start.

Integrated with robotics, agents will also open the door to the automation of physical tasks, such as complex construction or engineering work.

With the massive and rapid advancements we’re seeing in AI, I think it’s likely these changes will take place at a speed that will take many of us by surprise.

Staying ahead of the curve now and understanding the latest developments in AI, robotics and automation is essential for anyone who wants to prosper in the new technological era.

Credit: Bernard Marr
