
Agentic AI Enters Management: Taco Bell’s Byte-Sized Approach To Virtual Restaurant Leadership


YUM Brands, the parent company of Taco Bell and operator of 60,000 restaurants worldwide, has unveiled an AI-powered restaurant manager with the aim of bringing agentic AI capabilities to fast food.

Agents represent the latest wave of AI innovation, capable of complex, multi-step tasks with minimal human intervention. They are considered to be the next generation of cutting-edge AI applications, following generative AI chatbots like ChatGPT.

Some believe that they will pave the way for “virtual employees” that will work alongside humans, augmenting our capabilities and managing routine tasks so we can get on with the fun stuff.

And they are also seen as a stepping-stone on the road to artificial general intelligence (AGI) – the “holy grail” of building AI that can do just about anything we can.

The fast-food industry exists in a state of perpetual digital transformation. Now, YUM Brands has demonstrated its plans for the next leap forward with its Taco Bell franchise, and competitors are also eyeing opportunities.

So, let’s take a look at how fast food is cooking on all cylinders with agentic AI.

How Is Taco Bell Using Agentic AI?

Building on its Byte By Yum AI platform, which already uses AI to take customer orders at drive-through windows, Yum plans to deploy virtual restaurant managers. However, it also says that it doesn’t believe they will replace human management jobs.

As reported by Reuters, a video demonstration of the concept involved a character referred to as Byte AI Restaurant Coach. The character explains that it can help a restaurant manager track crew attendance and plan shift patterns. It also makes suggestions like altering opening hours to match market conditions and even taking over at the drive-through window.

Although clearly not presented as a market-ready product, it’s a strong indicator that the business – the world’s largest franchise operator – understands agents are the next step.

Yum’s platform is currently used by several other multinational brands that sit under its umbrella, including KFC and Pizza Hut. It offers both customer-facing applications (e.g., drive-through ordering) and internal ones (e.g., shift management).

Putting an agentic layer above this, bringing all the data, insights, and (critically) actions together in order to understand their impact across the entire business, is the goal here.
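As a rough illustration of what such an agentic layer might look like (all names and data here are hypothetical, not Yum's actual API), an AI restaurant coach can be sketched as code that pulls several data sources together and turns them into manager-facing suggestions:

```python
from dataclasses import dataclass

@dataclass
class ShiftRecord:
    employee: str
    scheduled: bool
    attended: bool

# Hypothetical data sources the agentic layer would draw on.
def crew_attendance():
    return [
        ShiftRecord("Ana", True, True),
        ShiftRecord("Ben", True, False),
        ShiftRecord("Cho", True, True),
    ]

def hourly_sales():
    # hour of day -> orders taken in that hour
    return {8: 2, 12: 40, 18: 35, 22: 1}

def restaurant_coach():
    """Combine attendance and demand data into suggestions for a human manager."""
    suggestions = []
    for rec in crew_attendance():
        if rec.scheduled and not rec.attended:
            suggestions.append(f"Follow up on missed shift: {rec.employee}")
    quiet_hours = [h for h, orders in hourly_sales().items() if orders < 5]
    if quiet_hours:
        suggestions.append(f"Consider trimming opening hours: {sorted(quiet_hours)}")
    return suggestions
```

The point of the sketch is the shape, not the rules: the agentic value comes from combining signals that today sit in separate systems, then proposing (rather than unilaterally taking) actions.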


Virtual Management

Taco Bell’s vision for Byte illustrates the progression of AI from trivial, routine tasks to those involving thinking, planning, and decision-making.

This mirrors a trend we’re seeing across all industries, as business leaders look for use cases beyond automating routine tasks like processing customer orders or handling customer assistance chats.

The aim isn’t to replace managers—the types of decisions that AI agents can make about managing teams and human behavior will still require human oversight.

Instead, it’s about creating virtual management assistants that step in to offer advice and guidance when it’s needed, or take over tasks they can clearly do better than a human.

Developing AI that can interface with our lives in this way, rather than just waiting for us to ask it questions or tell it to do specific tasks, is a problem we need to crack before we approach AGI.

In fact, it’s critical to some of the most hyped AI use cases across industries, like virtual healthcare assistants, teaching assistants and legal advisors.

So Yum doesn’t actually believe its platform will lead to a reduction in the number of fast-food management jobs. Instead, it aims to augment existing human workers so they can spend more time on more valuable work.

Faster Food?

Competition to leverage technology first is always fierce in fast food, and there are missteps. McDonald’s scrapped a pilot project involving AI drive-through agents last summer because it wasn’t great at getting orders right.

This is just one of many AI use cases at the Golden Arches. One of the most recently announced involves a collaboration with Google Cloud to deliver predictive maintenance for restaurant machinery. With McDonald’s so heavily invested in AI, it’s highly likely the chain will soon find other opportunities to put agents to work.

Globally, fast food is a trillion-dollar business with a reputation for rewarding players who are the first to exploit emerging technologies.

Robot chefs, AI-driven personalized menus and voice-ordering systems are all set to revolutionize the industry in the coming years. It isn’t difficult to see agentic management platforms serving as the intelligence backbone for this infrastructure.

While the industry has already weathered significant transformation, the arrival of AI agents could be the industry’s biggest shake-up yet and will certainly test which companies are set to dominate in the AI era.

The AI Leadership Crisis: Why Chief AI Officers Are Failing (And How To Fix It)


When a Fortune 500 company hired its first Chief AI Officer last year, they announced it with tremendous fanfare. Eighteen months later, they quietly posted a new job listing for the same position. This is a scenario I am seeing play out across boardrooms worldwide as organizations grapple with a troubling challenge: the revolving door of Chief AI Officer positions.

The CAIO role emerged when organizations scrambled to harness the transformative potential of artificial intelligence. However, despite impressive salaries and reporting directly to CEOs, these positions frequently dissolve within two years. This leadership crisis threatens to derail AI initiatives at a time when strategic AI implementation has never been more critical.

So why exactly are these crucial leadership positions failing? And more importantly, what can organizations do differently? Let’s examine the five fundamental challenges undermining this pivotal role.

The Expertise Paradox

Imagine trying to find a world-class orchestra conductor who can also build violins from scratch. That’s often what companies are looking for when searching for Chief AI Officers – technical wizards who simultaneously excel at enterprise-wide business transformation.

This unicorn hunt typically ends with one of two compromises: hiring technical experts who grasp neural networks but struggle with organizational change or selecting business leaders who can’t earn credibility with AI teams because they lack technical depth.

One technology company I advised hired a renowned machine learning researcher as their CAIO. While brilliant at algorithm development, she struggled to translate technical capabilities into business value. The company’s AI initiatives became increasingly academic and disconnected from market needs.

Conversely, a retail organization appointed a seasoned business executive to the role. He excelled at stakeholder management but lacked the technical judgment to evaluate vendors’ increasingly outlandish AI claims, leading to several expensive missteps.

This expertise paradox creates an impossible standard that sets up even the most talented leaders for failure.


The Integration Challenge

AI doesn’t exist in isolation – it’s part of a broader technology and data ecosystem. Yet companies frequently create CAIO positions as standalone silos, disconnected from existing digital and data initiatives.

This organizational design flaw creates territorial conflicts rather than collaboration. At one financial services firm, the Chief AI Officer and Chief Data Officer independently developed competing strategies for the same business problems. The result? Duplicated efforts, inconsistent approaches, and, ultimately, wasted resources.

Successful AI implementations require seamless integration with data infrastructure, IT systems, and business processes. When the CAIO operates in isolation, this integration becomes nearly impossible.

Think of it like adding a new specialist to a surgical team without introducing them to the other doctors. No matter how skilled the newcomer is, their effectiveness depends entirely on how well they coordinate with the existing team.

The Expectation Mismatch

Perhaps the most dangerous challenge facing CAIOs is the profound disconnect between expectations and reality. Many boards anticipate immediate, transformative results from AI initiatives – the digital equivalent of demanding harvest without sowing.

AI transformation isn’t a sprint; it’s a marathon with hurdles. Meaningful implementation requires persistent investment in data infrastructure, skills development, and organizational change management. Yet CAIOs often face arbitrary deadlines that are disconnected from these realities.

One manufacturing company I worked with expected their newly appointed CAIO to deliver $50 million in AI-driven cost savings within 12 months. When those unrealistic targets weren’t met, support for the role evaporated – despite significant progress in building foundational capabilities.

This timing mismatch creates a lose-lose scenario: either the CAIO pursues quick wins that deliver limited value, or they invest in proper foundations but get replaced before those investments bear fruit. Based on my experience, the right mix of both quick wins and strategic investments is the key to success.

The Governance Gap

There are many potential risks of AI, from bias to privacy concerns, and the right level of governance is essential. CAIOs are typically tasked with ensuring responsible AI use yet frequently lack the authority to enforce guidelines across departments.

This accountability-without-authority dilemma places CAIOs in an impossible position. They’re responsible for AI ethics and risk management, but departmental leaders can ignore their guidance with minimal consequences.

One healthcare organization appointed a CAIO who developed comprehensive, responsible AI guidelines. However, when a major business unit rushed to implement an AI system without proper assessment, the CAIO couldn’t halt deployment. Six months later, when bias issues emerged, guess who received the blame?

Effective governance requires structural power, not just policy documents. Without enforcement mechanisms, CAIOs become convenient scapegoats rather than effective guardians.

The Talent Tension

Even the most brilliant strategy falters without proper execution. Many CAIOs struggle to build effective teams because they’re competing for scarce AI talent with tech giants offering extraordinary compensation packages.

This talent shortage creates a cascading problem. Without strong teams, CAIOs can’t deliver results, and without results, they can’t secure additional resources. Without resources, attracting talent becomes even harder—a vicious cycle that undermines their position.

One CAIO at an energy company described their situation as “trying to build a Formula 1 team while only being able to offer bicycle mechanic salaries.” The talent gap creates a fundamental execution barrier that no amount of strategic brilliance can overcome.

The Path To Successful AI Leadership

Despite these challenges, some organizations have developed successful CAIO roles. The difference lies in how they position, support, and integrate this critical function.

Successful CAIOs aren’t isolated AI evangelists; they’re orchestrators who align AI with broader digital and data strategies. They have clear success metrics beyond implementation, focusing on business outcomes rather than technical deployments. They work with realistic timeframes and resources to build proper foundations.

Most importantly, they have both board support and structural authority to drive cross-functional collaboration.

Building The Right Foundations

For organizations serious about AI transformation, the CAIO role requires thoughtful positioning. Rather than seeking unicorns, consider complementary leadership teams that combine technical and business expertise. Integrate the CAIO function within existing technology and data leadership instead of creating competing silos.

Establish responsible AI governance with actual enforcement mechanisms. Set realistic expectations grounded in your organization’s data maturity. And critically, focus on building sustainable talent strategies rather than relying on a single heroic leader.

The CAIO role isn’t failing because of individual shortcomings – it’s struggling because of structural flaws in how organizations approach AI leadership. By addressing these fundamental challenges, companies can transform this troubled position into a catalyst for genuine AI-powered transformation.

The success of your AI initiatives doesn’t depend on finding that mythical, perfect leader. It depends on creating the organizational conditions where AI leaders can actually succeed.

Why 54% of Customers Are Disappointed: 5 CX Mistakes Your Business Can’t Afford


Consistently delivering advances in customer experience is a business strategy that’s driven phenomenal growth in the digital age.

Digital giants like Amazon, Netflix, and Uber have perfected this approach, gathering insights from countless touchpoints to continuously enhance every stage of their customers’ journeys.

And it’s working, too. Increasingly, we are reporting that end-to-end customer experience is a deciding factor in where we spend our money.

There are lots of metrics used to define and quantify success in customer experience. But a very simple way of describing it is giving your customers the feeling that you just “get them.”

Unfortunately, not every business gets them. In fact, 54% of US consumers think that customer experience at most companies needs improvement.

There are tremendous opportunities at stake for those who get CX right, but it can be easy to slip up. So, let’s look at some common pitfalls businesses are likely to stumble into in the coming year.

Mistake 1: Not Understanding How To Use Customer Experience Data

Lack of data isn’t the issue in CX today; every touchpoint and customer interaction can be captured. This includes point-of-sale data, web browsing behavior, customer service interactions, social engagement and a lot more.

Many businesses use this data to optimize sales or marketing strategies or internal processes like inventory management and logistics. Using it specifically to meet customer experience goals is a more recent challenge, though. The phrase “data rich, insight poor” describes any business function that collects data but doesn’t have a clear plan for putting data-driven insights into action. It’s a situation many involved in delivering CX improvements are likely to be familiar with.

Among the most common mistakes to make here are not having a defined data strategy or, perhaps even worse, improperly protecting sensitive customer data.
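To make this concrete, here is a minimal sketch (the events and field names are invented for illustration) of the step that turns raw touchpoint logs into a single CX insight, which is exactly what a "data rich, insight poor" function is missing:

```python
from collections import Counter

# Hypothetical raw touchpoint events: point of sale, web, support, social.
events = [
    {"customer": "c1", "channel": "web", "action": "viewed_product"},
    {"customer": "c1", "channel": "support", "action": "complained"},
    {"customer": "c2", "channel": "pos", "action": "purchased"},
    {"customer": "c1", "channel": "support", "action": "complained"},
]

def friction_hotspots(events):
    """Turn raw interaction logs into one actionable CX insight:
    which channels generate the most complaints."""
    return Counter(e["channel"] for e in events if e["action"] == "complained")
```

A data strategy for CX is, at heart, a commitment to run this kind of aggregation continuously and to act on what it finds.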

Mistake 2: Misjudging AI And Automation Opportunities

AI and automation transform CX when done effectively but can have significant downsides if mishandled. When shiny new technologies like LLM chatbots and virtual assistants actually help us to get things done and solve problems, they’re great. But when they cause frustration by not working, make customers feel detached, or even scare customers who don’t fully understand them, it’s a problem.

One tip here: Rather than just thinking about how AI can automate routine tasks like basic customer support, think about how it can enable entirely new strategies and business opportunities, like hyper-personalization and more engaging, immersive CX.

Mistake 3: Squandering The Opportunities Of Omnichannel

Meet customers where they are – that’s the mantra of omnichannel CX. Your customer base is more fractured than ever across a multitude of social networks, messaging platforms and “everything apps”. Retail is split between online and offline. Taking advantage of omnichannel is about bridging these divides in ways that break down barriers between your customers and business.

So today, you have to be everywhere. But unless you take a holistic and coordinated approach to doing so, this can be a recipe for creating disjointed experiences. Think of having to repeat information you’ve already given when switching between chatbot and human agents – super annoying.

The key to delivering superior omnichannel CX is creating a smooth, unified experience. For example, after spotting an advert for a product on their social feed, a customer should be able to chat to a sales agent about it, arrange to inspect it at a local bricks ‘n’ mortar store, and from there have it delivered safely to their home.

Treating each stage of this journey as a siloed process without strategic insight into how they impact CX is another fatal mistake.
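One way to avoid that silo trap, sketched here with invented names rather than any real platform's API, is a shared customer context that every channel reads and writes, so a human agent picks up exactly where the chatbot left off:

```python
# A minimal sketch of a shared customer context: every channel records into
# the same store, so no channel has to re-ask what another already learned.
class CustomerContext:
    def __init__(self):
        self._store = {}  # customer_id -> accumulated facts

    def record(self, customer_id, channel, **facts):
        entry = self._store.setdefault(customer_id, {"channels": []})
        entry["channels"].append(channel)
        entry.update(facts)

    def handoff(self, customer_id):
        """Everything the next channel needs, so the customer repeats nothing."""
        return self._store.get(customer_id, {})

ctx = CustomerContext()
ctx.record("cust-42", "chatbot", product="standing desk", issue="delivery date")
ctx.record("cust-42", "human_agent", store="Main St branch")
```

The design choice that matters is that `handoff` is channel-agnostic: chatbot, human agent, and in-store staff all see one record, which is what makes the journey feel unified rather than stitched together.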

Mistake 4: Taking A Reactive Rather Than Proactive Approach To CX

Relying on fixing issues with customer service after they arise isn’t a good idea. It’s often cited that one bad experience is all it takes for a lifelong customer to stop using a brand altogether. This means that every CX strategy should be focused on anticipating opportunities to make improvements and eliminating CX pitfalls before problems emerge.

AI and advanced analytics can help us understand when we are causing problems for customers rather than helping to solve them. Updates and improvements can be developed and shipped before anyone even knows they’re needed. We can monitor customer sentiment through social media to better understand what they really want, and we can use personalized outreach to let them know how our new solutions will fit their lives.

Squandering these opportunities would be a huge mistake for any business that wants to continually drive CX improvements.

Mistake 5: Not Empowering Everyone To Deliver Customer Experience

This is one that most of us have probably experienced – a receptionist who can’t offer a room upgrade because they aren’t authorized to, or a retail assistant who can’t offer a refund without permission from their manager. Customer experience can’t be dictated from the top down; it has to be present at every customer interaction and delivered at every touchpoint.

Frontline employees who aren’t empowered to make decisions or resolve issues create terrible CX. Empowering them means giving them the training, tools, and, above all, the trust to use their initiative and always think experience-first.

This might require a culture shift for many businesses. But getting it right will foster a sense of ownership and stakeholding as well as create instant CX wins.

The CX Imperative: Turning Challenges Into Competitive Advantage

The customer experience landscape continues to evolve rapidly, and businesses that avoid these five critical mistakes will position themselves for substantial growth. As we’ve seen with industry leaders, exceptional CX isn’t just about avoiding problems—it’s about creating memorable, frictionless interactions that build lasting customer relationships. By strategically leveraging data, thoughtfully implementing AI, embracing true omnichannel integration, taking a proactive approach, and empowering every team member, companies can transform their customer experience from a potential liability into their strongest competitive advantage. The question isn’t whether your business can afford to invest in superior CX—it’s whether you can afford not to.

Why Trust Is The Missing Ingredient In Your AI Strategy


In the rush to deploy artificial intelligence, many organizations miss the crucial ingredient that determines whether AI initiatives succeed or fail. It’s not processing power, cutting-edge algorithms, or even data quality (though these certainly matter). The missing ingredient? Trust.

As I explored in a recent conversation with Leanne Allen, Partner and Head of AI at KPMG UK, the intersection between AI innovation and trust represents not merely an ethical consideration but a fundamental business imperative.

“Most of the challenges to the barriers of adoption are driven by this trust challenge,” Allen told me. “Trust can take a few different forms. The colleagues themselves, do they trust in the tools that they’re using? Some of that could be driven by their fear of their jobs. Then there’s trust with your customers and consumers. And then there’s trust with regulators.”

The Triple Trust Challenge

Organizations face a three-dimensional trust challenge when implementing AI. First, employees may resist AI tools if they fear job displacement or don’t understand how to work effectively with them. Second, customers may reject AI-enabled products or services if they doubt their reliability, fairness, or data-handling practices. Third, regulatory bodies worldwide are increasingly scrutinizing AI applications for compliance with emerging standards.

This multi-faceted trust challenge explains why many AI initiatives deliver disappointing results despite substantial investment. Without trust, adoption lags, customer engagement suffers, and regulatory pressures mount.

Why Responsible AI Is Your Competitive Advantage

KPMG’s Trusted AI framework, outlined in detail on its website, emphasizes that responsible implementation of AI isn’t just about avoiding harm—it’s about creating sustainable business value.

“The initial value is very much around productivity and efficiency gains,” Allen noted. “However, although it promises all of these amazing value and amazing gains, unless people start actually using these tools and not just using them in a very infancy side, like a bit of chat here and there, but actually using them to their full potential, you’re not going to drive that growth and promise that’s being made.”

KPMG’s approach centers on ten ethical pillars: fairness, transparency, explainability, accountability, data integrity, reliability, security, safety, privacy, and sustainability. These principles guide implementation throughout the AI lifecycle, from ideation to deployment and monitoring.


The Three Waves Of AI Transformation

Allen describes three distinct waves of AI adoption that organizations typically experience:

“The first wave is very much what we call an enabler wave. It’s retooling, giving you access to tools to help you do your job a bit better and faster. Wave two is then looking at the actual end-to-end processes themselves and effectively it’s the redesign of that process. Wave three is reimagining. That’s really thinking about even your organizational structure, going back to what is your value stream of your organization.”

These waves highlight how trust must be built into AI systems from the beginning, as each successive wave involves deeper integration of AI into business processes and organizational structures.

Values-Driven AI: Aligning Technology With Corporate Principles

One of the most compelling aspects of KPMG’s framework is its emphasis on aligning AI initiatives with existing corporate values.

“The values-driven approach does align to corporate values and most corporate values will have techniques like, or statements like integrity baked into them. They will have social responsibility baked into them,” Allen explained.

In practice, this means establishing ethics boards or councils to review AI use cases. These boards aren’t compliance teams that simply check boxes against regulations. Instead, they serve as advisors who challenge whether potential AI applications align with organizational values and consider the diversity of thought essential for responsible innovation.

“Putting in ethics boards or ethics councils in place… they’re not compliance teams, so they’re not there to do the job of saying yes or no, and tick a box against regulation, they’re there as an advisory board and sometimes a challenge, to on the ethical side, more than anything,” Allen said.

Human-Centric Design: Augmentation Over Automation

The distinction between augmenting human capabilities versus replacing them entirely represents another key aspect of building trustworthy AI.

“Anything that requires decision making is still about augmenting humans, supporting humans, providing them extra information so they can make better decisions, rather than making those decisions directly themselves,” Allen emphasized. “And I think that’s really the shape of what the workforce of the future is going to look like. It will free up time for more critical thinking, more value, more creative type work.”

Organizations should measure whether AI truly augments human capabilities through metrics like time saved and the percentage of AI-generated decisions that humans modify, indicating genuine human oversight rather than rubber-stamping.
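That second metric is simple to compute. Here is a back-of-the-envelope sketch (the log format and threshold are assumptions for illustration, not a standard):

```python
def modification_rate(decisions):
    """Share of AI-generated decisions that a human reviewer changed.
    A rate near zero can signal rubber-stamping rather than real oversight."""
    modified = sum(1 for d in decisions if d["human_modified"])
    return modified / len(decisions)

# A hypothetical review log: each entry is one AI-generated decision.
log = [
    {"id": 1, "human_modified": False},
    {"id": 2, "human_modified": True},
    {"id": 3, "human_modified": False},
    {"id": 4, "human_modified": True},
]
```

Tracked over time, a rate that collapses toward zero is a prompt to check whether reviewers are genuinely engaging with the AI's output.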

Overcoming Implementation Obstacles

When I asked Allen about the most common obstacles organizations face when implementing ethical AI principles, her answer was illuminating:

“The first one is the framework. Do you have your ethical principles clearly defined and how you communicate those? Then there’s an element of a higher-level operating model. Then it’s going to come back down to education. One of the biggest obstacles is still a lack of education and understanding.”

She also emphasized the persistent challenge of poor data infrastructure: “Fundamentally what hinders the acceleration here is the foundational elements. So, infrastructure and data, right? And the quality of the data and access to the data.”

Building A Global Consensus

Looking toward the future, Allen identified a significant collective challenge humanity needs to address to ensure AI benefits society as a whole:

“I think the first one is the global lens and we need, in my opinion, a level of consistency of standards or regulation across jurisdictions. And at the moment, I think we’re possibly going in the other direction,” Allen observed. “Data doesn’t have boundaries, right? So the challenge is AI doesn’t have boundaries. We have boundaries as countries. And I think that’s going to stifle the amount of innovation that can happen, or countries will develop AI in silos.”

The Path Forward: Trust By Design

Building trust in AI systems isn’t an afterthought—it must be designed in from the beginning. This “trust by design” approach involves embedding control points throughout the AI lifecycle to ensure systems align with both regulatory requirements and ethical principles.

Organizations that succeed in this space will avoid potential ethical crises and gain a competitive advantage through higher adoption rates, greater stakeholder confidence, and more sustainable innovation.

The promise of AI remains extraordinary, but its full potential will only be realized when paired with the human element of trust. As Allen aptly summarized: “We believe in the transformative power of AI. And that it can only reach its full potential when it is paired with human expertise and ingenuity.”

For businesses looking to implement AI successfully, this means going beyond the technical aspects to address the human dimensions of trust throughout the organization and its broader ecosystem of stakeholders. Only then will AI truly deliver on its transformative potential.

Credit: Bernard Marr

The Important Difference Between Agentic AI And AI Agents


If you’ve been reading about business technology recently, then “AI agents” and “agentic AI” are terms you’ve probably come across with increasing frequency.

They’re often used interchangeably, and I’ve noticed this can cause a little confusion as they refer to subtly different concepts.

As it’s something I’ve been asked about, I thought I’d put together a little explainer.

Remember, AI is a quickly evolving subject, and the terminology around it is evolving, too.

So here’s an overview of AI agents and agentic AI, and most importantly, a guide to telling the difference.


So What Are They And How Are They Different?

Ok, both concepts are used to refer to AI that can work through multi-step problems independently, with little guidance from humans.

AI agents are specific applications created to do just this and are already widely in use today, even if we don’t often see them.

They’re used by banks and in e-commerce to verify our identities, automate transactions and record keeping, and learn about us in order to improve their service.

Agentic AI, on the other hand, refers to the field of AI that enables machines to operate as agents. Agentic AI is concerned with researching and developing AI models capable of the type of independent, autonomous work that agents can do.

Think of AI agents as specific medicines prescribed for particular conditions, while agentic AI is the entire field of pharmaceutical science that develops all medications.

Another way of looking at it is in the context of artificial general intelligence (AGI) – the future goal of one day creating AIs that are generalists rather than specialists, capable of any task we ask them to help out with.

Today’s AI agents are not AGI – and they probably won’t be for a few years yet. Yes, they can carry out complex tasks, but still only specific tasks they were created for. They can’t really apply their learning to doing other things in the same way humans can.

Agentic AI, however, is a field of AI research and development some believe will eventually lead to AGI. It includes building AIs that are capable of interacting with external systems – both digitally by interfacing with them and physically with robotics.

So, to put it in a very straightforward way – “AI agents” refers to specific applications of agentic AI, and “agentic AI” refers to the models, algorithms and methods that make them work.
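The distinction can even be pictured in code (purely illustrative, with invented names and no real framework implied): agentic AI is the general observe–plan–act pattern, while an AI agent is one concrete loop built from that pattern for a single job:

```python
def agent_loop(goal, observe, plan, act, max_steps=10):
    """The generic agentic pattern: repeatedly observe the world, plan the
    next step toward the goal, and act, until the goal is met."""
    for _ in range(max_steps):
        state = observe()
        if state["done"]:
            return state
        act(plan(goal, state))
    return {"done": False}

# A toy identity-verification agent built from the pattern above.
checks = {"email": False, "document": False}

def observe():
    pending = [name for name, passed in checks.items() if not passed]
    return {"done": not pending, "pending": pending}

def plan(goal, state):
    return state["pending"][0]  # pick the next outstanding check

def act(step):
    checks[step] = True  # pretend the check succeeded

result = agent_loop("verify identity", observe, plan, act)
```

Swap in different `observe`, `plan`, and `act` functions and you get a different agent; the loop itself is what the field of agentic AI works to make more capable and general.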

Why Is This Important?

AI agents and agentic AI are two closely related concepts that everyone needs to understand if they’re planning on using technology to make a difference in the coming years.

As research and development into agentic AI continues, we will see increasingly sophisticated agents capable of automating many different tasks.

Truly useful and personalized digital assistants, capable of learning about what we need in detail and taking steps to help us achieve it, will just be the start.

Integrated with robotics, agents will also open the door to the automation of physical tasks, such as complex construction or engineering work.

With the massive and rapid advancements we’re seeing in AI, I think it’s likely these changes will take place at a speed that will take many of us by surprise.

Staying ahead of the curve now and understanding the latest developments in AI, robotics and automation is essential for anyone who wants to prosper in the new technological era.

Credit: Bernard Marr

AI’s Competitive Edge: Turning Data Challenges Into Business Success


The artificial intelligence revolution is no longer creeping forward—it’s sprinting. In my recent conversation with Brett Roscoe, Senior Vice President and General Manager of Data Governance and Cloud Operations at Informatica, we explored the findings from their CDO Insights 2025 report that reveal both the explosive growth in AI investment and the obstacles companies are encountering on their journey.

What struck me most during our discussion wasn’t just the sheer scale of financial commitment—with 87% of companies increasing their generative AI investments in 2025—but the widening competitive chasm between early adopters and those hesitating to embrace AI’s transformative potential.

The Investment Stampede Is Real

The enthusiasm for generative AI isn’t just talk—it’s backed by serious capital. As Roscoe explained during our conversation, “87 percent of our customers… are increasing their investments in gen AI in 2025. Now that was already on top of what they’ve done to increase it in 2024.”

This investment surge isn’t happening in isolation. Alongside their AI spending, 86% of organizations are simultaneously boosting their investment in data management practices to support these initiatives. This dual-track approach signals a growing recognition that successful AI implementation requires robust data infrastructure.

What are companies hoping to achieve? According to Roscoe, “the biggest areas were operational efficiency… and enhancing customer experiences or employee experiences.” These goals are already materializing in companies like CN Rail, which has slashed data preparation time from months to just two weeks, creating the foundation for accelerated AI project development.

The Early Adopter Advantage Is Widening

Perhaps the most compelling insight from our conversation was the growing evidence that AI early adopters are pulling ahead—and fast. As Roscoe noted, “Early adopters of AI had a big advantage. In fact, to the point where these early adopters saw something like 35 percent cost decreases and 58 percent growth attributed to their Gen AI projects.”

This isn’t incremental improvement—it’s transformative change. When McKinsey surveyed organizations already deploying generative AI, they found that, on average, these companies attributed 20% of their EBITDA to their AI initiatives. Numbers like that aren’t just competitive advantages—they’re existential threats to laggards.

I’ve observed this dynamic firsthand across industries: the gap between AI leaders and followers isn’t stable—it’s expanding daily. Every organization that doesn’t see itself as AI-enabled is falling further behind those that do. This creates a powerful incentive for companies to push ahead despite challenges.

The Hidden Obstacles To AI Success

While investment enthusiasm runs high, implementation reality is more sobering. The CDO Insights 2025 report revealed that a staggering 97% of organizations struggle to demonstrate business value from their generative AI investments—a significant roadblock to securing continued funding and executive buy-in.

Technical challenges are equally prevalent, with 92% reporting that issues like data quality, responsible AI use, and compliance concerns are delaying the progression from proof-of-concept to production. As Roscoe emphasized, companies are grappling with questions like: “How will this AI model behave? What risks are associated with putting this into production?”

In my experience working with organizations across sectors, I’ve observed additional barriers: executive teams not thinking ambitiously enough, scattered pilot projects without strategic cohesion, leadership knowledge gaps, skills shortages, cultural resistance, and data silos that prevent AI from reaching its potential.

5 Fatal GenAI Mistakes That Could Destroy Your Business In 2025


According to recent research, 67% of business leaders believe that generative AI will bring significant change to their organizations over the next two years.

But in the rush to adopt and deploy this world-changing technology, it’s pretty likely that mistakes will be made.

The downside of this enormous potential is that when things go wrong, the damage can be quite serious too, from reputational harm to harsh fines and, perhaps worst of all, loss of customer trust.

So here’s my overview of five of the most common mistakes that I believe many businesses and business leaders will make in the coming year so you can plan to avoid them.

Omitting Human Oversight

Powerful and transformative as it undoubtedly is, we can’t ignore the fact that generative AI isn’t always entirely accurate. In fact, some sources state that factual errors can be found in as many as 46 percent of AI-generated texts. And in 2023, the tech news website CNET paused the publication of AI-generated news stories after having to issue corrections for 41 out of 77 stories. What this means for businesses is that proofreading, fact-checking and keeping a human in the loop are essential if you don’t want to run the risk of making yourself look silly.
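Here's a minimal sketch of what keeping a human in the loop can look like in a content pipeline (the class and reviewer address are hypothetical, invented purely for illustration): AI drafts accumulate in a queue, and nothing goes live until a named human signs it off.

```python
# Sketch of a human-in-the-loop gate: AI-generated drafts enter a
# pending queue, and only a human reviewer's sign-off publishes them.

class ReviewQueue:
    def __init__(self):
        self.pending = []
        self.published = []

    def submit(self, draft):
        """AI-generated drafts always enter as pending, never live."""
        self.pending.append({"text": draft, "approved_by": None})

    def approve(self, index, reviewer):
        """A human signs off; only then does the draft go live."""
        draft = self.pending.pop(index)
        draft["approved_by"] = reviewer
        self.published.append(draft)

q = ReviewQueue()
q.submit("AI draft: quarterly results summary")
q.submit("AI draft: product announcement")
q.approve(0, reviewer="editor@example.com")

print(len(q.published), len(q.pending))  # 1 1
print(q.published[0]["approved_by"])     # editor@example.com
```

The design choice is simply that there is no code path from "generated" to "published" that bypasses a reviewer.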

Of course, humans make mistakes, too, and any business involved in information exchange should have robust procedures for verification regardless of whether they use generative AI or not.

Substituting GenAI For Human Creativity And Authenticity

Another mistake I am worried we will see far too frequently is becoming over-reliant on genAI as a substitute for human creativity. This is likely to have negative consequences on the authenticity of a business or a brand voice. While it’s easy to use ChatGPT or similar tools to churn out huge volumes of emails, blogs, social media posts and suchlike super-fast, this frequently leads to overly generic, uninspiring content that leaves audiences feeling disconnected or even cheated. Video game publisher Activision Blizzard, for example, was recently criticized by fans for using “AI slop” in place of human-created artwork. It’s important to remember that generative AI should be used as a tool to augment human creativity, not to replace it.

Failing To Protect Personal Data

Unless a generative AI application is run securely on-premises on your own servers, there’s often no way of knowing what will happen to the data entered into it. OpenAI and Google, for example, both state in their EULAs that data uploaded to their generative chatbots can be reviewed by humans or used to further train their algorithms. This has already caused problems for some organizations – Samsung stated that its employees had inadvertently leaked confidential company information by entering it into ChatGPT without being aware of the consequences. Incidents like this create a risk that companies will end up in breach of data protection regulations, which can lead to severe penalties. This is likely to become an increasingly common occurrence as more and more companies start using generative AI tools, and organizations – particularly those that handle personal customer data at scale – should ensure staff are thoroughly educated about these dangers.

Overlooking Intellectual Property Risks

Many commonly used generative AI tools, including ChatGPT, are trained on vast datasets scraped from the internet, and in many cases, this includes copyrighted material. Due to the lack of maturity in AI regulations, the jury is still out on whether this constitutes a breach of IP rights on the part of AI developers, with several cases currently going through the courts. The buck might not stop there, however. It’s been suggested that businesses using genAI tools could also find themselves liable at some point in the future if copyright holders manage to convince courts that their rights have been infringed. Businesses that fail to take proactive measures to assess whether AI-generated output contains copyright- or trademark-infringing material risk landing in hot water in 2025.

Not Having A Generative AI Policy In Place

If you want to minimize the chances that anyone working for your organization makes any of these mistakes, then probably the best thing to do is to tell them not to. The potential use cases for genAI are so varied, and the opportunities it creates are so vast that it’s almost certainly going to be misused at some point. Perhaps the most important single step you can take to reduce the chance of that happening is to have a clear, defined framework in place setting out how it can – and can’t – be used.

As far as I’m concerned, this is a no-brainer for every organization that stops short of a blanket ban on generative AI – which would itself be a pretty big mistake, given the opportunities the technology creates. Without such a policy in place, you can almost guarantee that it will be used without appropriate oversight, overused to the detriment of human creativity, and that it will lead to unauthorized disclosure of personal data, IP infringement, and all the other mistakes covered here.

To wrap up – in 2025, we will see organizations take huge steps forward as they become increasingly confident, creative and innovative in the way they use generative AI. We will also see mistakes. Being fearful of the transformative potential of generative AI will most likely hand the lead to our competition, but adopting a careful and cautious approach can save us from costly mistakes.

4 Game-Changing Quantum Computer Types That Could Transform Everything


Quantum computing is shaping up to be among the most transformative technologies of our era.

While still in their infancy, these powerful machines are expected to help us solve many problems by accelerating the speed at which we can process certain types of data by a factor of hundreds of millions.

But not all quantum computers are the same. Researchers are working on many different ways to apply principles of quantum mechanics to computing technology. This has led to a variety of methods, architectures and paradigms, all suited for different use cases or tasks.

So here I’ll overview some of the different categories, giving a brief explanation of what makes each one unique and what it’s hoped they will achieve.

First, What Is Quantum Computing?

Just in case you’re completely new to the topic – quantum computing refers to a new approach to computing that harnesses some of the strange and powerful properties of quantum mechanics, such as entanglement and superposition. Instead of using traditional “bits” (ones and zeros) like a classical computer, quantum computers use “qubits” that are spookily able to exist in more than one state simultaneously. This means they can potentially solve some very complex mathematical problems – such as optimization tasks or simulating complex real-world systems like molecular physics – far faster than existing computers.
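As a rough illustration (plain Python, no quantum hardware – this is just the textbook math of a single qubit, not a simulation of a real device), a qubit can be modelled as a pair of amplitudes whose squares give the probability of measuring 0 or 1:

```python
import math

# A qubit state is a pair of amplitudes (a, b) with |a|^2 + |b|^2 = 1.
# |a|^2 is the probability of measuring 0, |b|^2 of measuring 1.

def probabilities(state):
    a, b = state
    return abs(a) ** 2, abs(b) ** 2

# Classical-style bit "0": always measures 0.
zero = (1.0, 0.0)

# Equal superposition (the result of a Hadamard gate on |0>):
# a 50/50 chance of measuring 0 or 1.
plus = (1 / math.sqrt(2), 1 / math.sqrt(2))

p0, p1 = probabilities(plus)
print(round(p0, 3), round(p1, 3))  # 0.5 0.5
```

The power of real machines comes from entangling many such qubits, which classical simulation can only follow up to a few dozen qubits before the state space explodes.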

So What Are The Different “Types” Of Quantum Computers?

Several distinct quantum computing methodologies have emerged, each leveraging quantum properties in different ways, making them suitable for carrying out different types of computation. Here’s an overview of some of the most popular:

Quantum Annealing

This is a quantum computing methodology that’s particularly well-suited to solving optimization problems. These are computations that require finding the best combination of a large number of variables. It can be of use in real-world scenarios ranging from planning the most efficient route for multi-drop delivery drivers to optimizing stock portfolios. D-Wave is recognized as a leader in this field of quantum computing and has worked with companies, including Volkswagen, to create systems that use annealing methodology to optimize assembly line packaging operations and delivery logistics.
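The flavour of optimization that annealers target can be sketched classically. The toy below uses classical simulated annealing – the classical cousin of quantum annealing, not how D-Wave’s hardware actually works – to hunt for a low-cost ordering of delivery stops; the distance matrix is invented for illustration:

```python
import math
import random

# Toy delivery-route optimization via classical simulated annealing,
# shown only to illustrate the kind of problem annealers target.

def route_cost(route, dist):
    """Total distance of visiting the stops in the given order."""
    return sum(dist[a][b] for a, b in zip(route, route[1:]))

def anneal(dist, steps=5000, seed=0):
    rng = random.Random(seed)
    route = list(range(len(dist)))
    best = route[:]
    temp = 1.0
    for _ in range(steps):
        # Propose swapping two stops (keep the depot fixed at index 0).
        a, b = rng.sample(range(1, len(route)), 2)
        cand = route[:]
        cand[a], cand[b] = cand[b], cand[a]
        delta = route_cost(cand, dist) - route_cost(route, dist)
        # Accept improvements always; worse moves with falling probability.
        if delta < 0 or rng.random() < math.exp(-delta / max(temp, 1e-9)):
            route = cand
        if route_cost(route, dist) < route_cost(best, dist):
            best = route[:]
        temp *= 0.999  # cool down
    return best, route_cost(best, dist)

# Symmetric distance matrix for 5 stops (depot = stop 0).
dist = [
    [0, 2, 9, 10, 7],
    [2, 0, 6, 4, 3],
    [9, 6, 0, 8, 5],
    [10, 4, 8, 0, 6],
    [7, 3, 5, 6, 0],
]

best, cost = anneal(dist)
print(best, cost)
```

A quantum annealer attacks the same class of problem, but explores the solution landscape using quantum effects rather than random swaps, which is where the hoped-for speedups come from.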

Superconducting Quantum Computers

One of the most mature quantum computing methods involves building circuits from superconducting materials such as niobium or aluminum, cooled to near absolute zero. This allows qubits to exist in superposition states of both one and zero simultaneously, where they can be manipulated by microwaves. In simple terms, this lets them carry out computational logic operations (AND, OR, NOT, etc.) in a way that explores multiple possible solutions to a problem in parallel, rather than one at a time. Superconducting quantum computing is being pioneered by companies such as IBM and Google and has real-world applications in drug discovery, artificial intelligence, and encryption.

Trapped Ion Quantum Computers

This involves using positively charged atoms (ions) trapped and held within a 3D space in a way that entirely isolates them from the outside world. This means they can be held in their superposition states for a very long time rather than decohering into one or zero. Lasers are used to switch the ions between different states as required for calculations, as well as to retrieve the information that forms the “answer” to the question that needs to be solved. Leaders in this field of quantum computing include IonQ, which has worked with the United States Air Force to create secure quantum networking technology for communicating between drones and ground stations.

Photonic Quantum Computers

This involves harnessing photons – particles of light – and manipulating them using optical components like beam splitters, lenses and mirrors. Because photons have no mass, they are far less susceptible to thermal noise, meaning that photonic quantum computing doesn’t require super-low temperatures and a specially configured environment. Another benefit is that qubits encoded in photons can maintain their coherence over relatively long distances. Real-world applications have been found in quantum cryptography and communications, and leaders in the field include Xanadu.

Where Next For Quantum?

Although real-world use cases for quantum computing are increasing, much of the work in the field is still purely hypothetical, and various other methods are under development in labs and academic institutions.

Other research is focused on reducing the error rate of quantum computing caused by the delicate nature of qubits held in a quantum state.

It’s also worth noting that most quantum computing taking place today involves a hybrid model of quantum and classical methodologies.

As research and development continue, there’s no doubt we’ll start to see more breakthroughs in the journey towards practical, scalable and useful quantum computing.

5 AI Mistakes That Could Kill Your Business


AI promises to make businesses of all shapes and sizes more efficient, more innovative, and better prepared to deliver whatever their customers want.

However, it will also inevitably expose them to new dangers and risks.

In fact, the risks of implementing new technology often scale in line with the potential for positive transformation – and with AI, that potential is huge.

So here are some of the biggest mistakes that I believe businesses will make this year as they race to implement it:

1. Failing To Align AI Strategy With Business Strategy

One of the mistakes I see many businesses making is rushing to adopt AI simply because they’ve been told they should do so without understanding how it fits with their overall goals.

This approach – putting technology before business strategy – is probably the number one driver of failed AI initiatives and, worse still, of disillusionment and giving up on AI altogether.

It’s easy for us to get so excited by the hype around AI that we rush out and start spending money on tools, platforms and projects without aligning them with strategic goals and priorities. This inevitably leads to fragmented initiatives that fail to deliver meaningful results or ROI. To avoid this, always “start with strategy” – implementing a strategic plan that clearly shows how any project or initiative will progress your organization towards improving the metrics and hitting the targets that will define your success.

2. Underestimating The Impact Of AI On The Workforce

Implementing AI is likely to mean big and possibly frightening changes for the people who make your business tick.

While it won’t mean they all become redundant, it may mean they need to learn new skills or adapt to new ways of working. Assessing the skills and possibilities of training or reskilling, ensuring there is buy-in across the board, and addressing concerns people might have about job security are all critical.

Many businesses will make the mistake of thinking solely about the technical steps they have to take while forgetting that humans will still ultimately be responsible for success or failure – and this is a serious error.

3. Giving Up Too Quickly – Or Too Slowly

Not every AI project is going to work – in fact, recent Gartner research puts the current failure rate of AI initiatives at around 85%. But getting it wrong the first (or second, or third) time isn’t necessarily a reason to give up. I firmly believe that just about any business can benefit from AI, but that isn’t the same as believing that they will benefit right away or that all they have to do is launch an AI project and they’ll immediately be successful.

On the other hand, being slow to pull the plug on projects that aren’t working out can also be a recipe for disaster – potentially turning what should simply be a short, sharp lesson into a long-term waste of time and resources. There’s a reason that “fail fast” has become a mantra in tech circles. Projects should be designed so that their effectiveness can be quickly assessed, and if they aren’t working out, chalk it up to experience and move on to the next one.

4. Failing To Properly Assess The Cost

Like the previous trap, this one is also two-pronged – it can be easy to both overestimate and underestimate the cost of developing AI initiatives, and both can be problematic.

Make no mistake, going full-throttle on AI is expensive – hardware, software, specialist consulting expertise, compute resources, reskilling and upskilling a workforce and scaling projects from pilot to production – none of this comes cheap. The question of how much it costs to get an enterprise AI initiative off the ground is similar to “How long is a piece of string?” but most estimates go into the millions.

At the same time, lots of smaller businesses will be put off because they think AI is only for big companies with huge IT budgets, ignoring the fact that there are ways to implement AI cost-effectively while still providing real benefits.

Again, the answer is thorough preparation and being as accurate and diligent as possible in your costing before jumping headfirst into AI.

5. Letting The Competition Pip You To The Post

Of course, despite all of the precautions covered here, perhaps the biggest single mistake would be deciding it’s too difficult, risky and expensive and choosing to sit out the AI revolution entirely.

Make no mistake: AI is set to transform every industry, and those who do stick their heads in the sand and pretend it isn’t happening are going to be left behind.

AI will make businesses more efficient, meaning those that don’t implement it at all are effectively throwing away money. But it will also drive innovation, meaning competitors that do adopt it will create new products and services, redefining customer expectations and leaving laggards looking decidedly old-hat.

Don’t Fall Into These Traps

I know from my experience of working with companies of all shapes and sizes on digital innovation strategies that these hazards and pitfalls will cause many businesses to experience big problems in 2025.

Understanding the risks and having the foresight to see them before falling into them headfirst is the key to success, as well as the secret to delivering AI-driven growth and transformation.

By aligning AI projects with business goals, ensuring your people are included in everything you do, accurately assessing costs, and knowing when to press ahead and when to quit, you give yourself the best chance of avoiding making these mistakes.

This will result in AI projects that deliver better returns and, perhaps more importantly, prepare your organization for the even bigger opportunities that are yet to come.

5 Amazing Things You Can Do With ChatGPT’s New Operator Mode


ChatGPT’s new Operator mode is its first step towards becoming an AI agent – a new type of AI tool that carries out far more complex tasks without the need for human intervention.

But what exactly can it do?

Currently in preview, the latest upgrade to ChatGPT works by combining the GPT-4o model with computer vision capabilities. This lets it “see” and interact with anything on a screen.

It does this with the help of a built-in web browser – after you tell it what you want done, you can just sit back and watch as it moves the mouse, presses buttons and inputs text.

So what can it do? Well, just from playing around with it, it’s evident that it is still at an early stage of development. Nevertheless, it has some impressive tricks up its sleeve.

Here’s an overview of some of the tasks I’ve seen it perform so far, as well as a look ahead to what this could mean for the future of “agentic” AI.

How To Access ChatGPT Operator?

First off, ChatGPT Operator is only available in the US right now and only to users who’ve taken out the $200-per-month Pro subscription.

This probably won’t always be the case – ChatGPT creators OpenAI tend to roll out new features to small groups such as Pro users first before opening them up to a broader audience.

But if you are lucky enough to meet those criteria, then you should be good to go. Simply head to operator.chatgpt.com to get started.

Things To Do With Operator

Operator is designed to carry out more complex, multi-step tasks than are possible with standard ChatGPT. It’s capable of carrying out up to three of these tasks at the same time.

While it’s designed to be autonomous, there are times when it will have to hand control back to you – for example, to log into websites or to solve CAPTCHA challenges.

One really useful feature is integrations. These are instructions on using specific sites or services, such as Airbnb or OpenTable, so Operator doesn’t have to learn how to use them from scratch every time it comes across them. Since these integrations are created using natural language prompts, businesses can easily develop and share their own with customers.

Here are some of the things it can already do:

Find And Book Accommodation Through Airbnb

Tell it to head to Airbnb to find a room based on your preferences, and it will search options, check reviews, and ensure you’re happy with its choice before going ahead with the booking.

Sample prompt: “Find an Airbnb room in [destination] from [check-in date] to [check-out date] for [number of guests]. Prioritize properties with good reviews, Wi-Fi, and amenities like [list preferred amenities, e.g., kitchen, balcony, pet-friendly]. Ensure the location is close to [landmarks or areas, if important]. Confirm availability and details before booking.”

Make Restaurant Reservations

Operator integrates with OpenTable and can scan services like TripAdvisor to research and book restaurant reservations.

Sample prompt: “Find a highly-rated restaurant near [location or city name] that serves [preferred cuisine or dietary preferences, e.g., Italian, vegan-friendly]. Book a table for [number of people] on [date] at [time]. Ensure the restaurant has good reviews and confirm the reservation before finalizing.”

Book Tickets To An Event

Want to watch live music or sports or take in a show? Operator will browse event directories and integrate directly with StubHub to find the best seats at the best prices.

Sample prompt: “Search for events happening near [your location] on [specific date or date range, e.g., February 10th or next weekend]. Include concerts, theater shows, festivals, or anything unique and interesting. List options with event details, times, and ticket prices. Once I choose one, book the tickets for [number of people] and confirm the booking.”

Plan And Shop For Meals

Not only will Operator plan your menu, but it will integrate with Instacart to order the ingredients and have them delivered to your door.

Sample Prompt: “Plan meals for a family of [number of people] for a week (breakfast, lunch, dinner). Take into account the following dietary requirements: [list requirements, e.g., gluten-free, vegetarian, nut-free, low-carb]. Include a mix of healthy and family-friendly recipes. Once planned, order the ingredients online and ensure delivery is scheduled for [preferred delivery date and time]. I already have [insert staples, e.g., milk, sugar, flour], so you don’t need to order them.”

Update Or Make Changes To A Website

How about something more challenging? From uploading blog pages to changing design elements and generating entirely new content, Operator can integrate with no-code building platforms like Wix to get the job done. While building an entire site from the ground up might be a little taxing (at the moment), it can carry out routine maintenance, design tweaks and updates with relative ease.

Sample Prompt: “Edit the website [website URL or description of the page] to update the following: [describe what you want to be changed]. Log into [insert no-code service, e.g., Wix] to make the changes. Ensure the design remains consistent and user-friendly. Once edits are made, show me the final result for review.”

The Start Of Something Big?

Although it’s early days, I believe that Operator could mark the start of a new chapter in the history of AI. Possibly even a change in the fundamental relationship between humans and machines.

Agentic AI is the first step towards the creation of truly autonomous thinking machines that can act as assistants to us in all fields of life.

Some believe it’s a step towards artificial general intelligence (AGI) – the “holy grail” of AI where machines can learn just like we do in order to perform tasks beyond those they’ve been programmed for.

The arrival of ChatGPT just two short years ago marked a watershed moment for AI. Now, the arrival of agentic AI marks another – and I’m just as confident it’s going to change the world in ways that are hard to imagine today.
