
Deloitte sounds alarm as AI agent deployment outruns safety frameworks


A new report from Deloitte warns that businesses are deploying AI agents faster than their safety protocols and safeguards can keep up, raising serious concerns around security, data privacy, and accountability.

According to the survey, agentic systems are moving from pilot to production so quickly that traditional risk controls, which were designed for more human-centred operations, are struggling to meet security demands.

Just 21% of organisations have implemented stringent governance or oversight for AI agents, despite the increased rate of adoption. Whilst 23% of companies stated that they are currently using AI agents, this is expected to rise to 74% in the next two years. The share of businesses yet to adopt this technology is expected to fall from 25% to just 5% over the same period.

Poor governance is the threat

Deloitte is not highlighting AI agents as inherently dangerous; the report states the real risks stem from poor context and weak governance. When agents operate as independent entities, their decisions and actions can easily become opaque. Without robust governance, they become difficult to manage and almost impossible to insure against mistakes.

According to Ali Sarrafi, CEO & Founder of Kovant, the answer is governed autonomy. “Well-designed agents with clear boundaries, policies and definitions, managed the same way as an enterprise manages any worker, can move fast on low-risk work inside clear guardrails, but escalate to humans when actions cross defined risk thresholds.”

“With detailed action logs, observability, and human gatekeeping for high-impact decisions, agents stop being mysterious bots and become systems you can inspect, audit, and trust.”
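To make the pattern concrete, here is a minimal sketch of such a guardrail in Python. Everything in it is illustrative: the risk scores, the threshold, and the action names are assumptions for this example, not Kovant’s or Deloitte’s implementation.

```python
# Minimal sketch of "governed autonomy": low-risk actions run inside
# guardrails, high-risk actions escalate to a human. All names and
# thresholds are illustrative assumptions, not a vendor's implementation.
from dataclasses import dataclass
from datetime import datetime, timezone

RISK_THRESHOLD = 0.5  # assumed policy boundary between autonomous and gated work

@dataclass
class AgentAction:
    name: str
    risk_score: float  # e.g. scored by a policy engine on impact and blast radius

audit_log: list[dict] = []

def log_action(action: AgentAction, outcome: str) -> None:
    """Record every decision so it can be inspected and audited later."""
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action.name,
        "risk_score": action.risk_score,
        "outcome": outcome,
    })

def execute_with_guardrails(action: AgentAction) -> str:
    if action.risk_score >= RISK_THRESHOLD:
        log_action(action, "escalated")  # human gatekeeping for high-impact work
        return f"{action.name}: held for human approval"
    log_action(action, "executed")       # fast path for low-risk work
    return f"{action.name}: executed autonomously"

print(execute_with_guardrails(AgentAction("update-crm-note", 0.1)))
print(execute_with_guardrails(AgentAction("issue-refund", 0.8)))
```

Note that the log write happens on both paths, so escalations are just as auditable as the actions an agent takes on its own.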

As Deloitte’s report suggests, AI agent adoption is set to accelerate in the coming years, and the upper hand will go to the companies that deploy the technology with visibility and control, not those that deploy it quickest.

Why AI agents require robust guardrails

AI agents may perform well in controlled demos, but they struggle in real-world business settings where systems can be fragmented and data may be inconsistent.

Sarrafi commented on the unpredictable nature of AI agents in these scenarios. “When an agent is given too much context or scope at once, it becomes prone to hallucinations and unpredictable behaviour.”

“By contrast, production-grade systems limit the decision and context scope that models work with. They decompose operations into narrower, focused tasks for individual agents, making behaviour more predictable and easier to control. This structure also enables traceability and intervention, so failures can be detected early and escalated appropriately rather than causing cascading errors.”
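A hedged sketch of that decomposition idea: an orchestrator hands each narrow task only the whitelisted slice of context it needs. The task names and context keys below are invented for illustration.

```python
# Illustrative sketch: decompose one broad operation into narrow tasks,
# each agent seeing only the context slice its task needs.
# All task names and context keys are invented for this example.
FULL_CONTEXT = {
    "customer_record": "…",
    "order_history": "…",
    "refund_policy": "…",
    "warehouse_inventory": "…",
}

# Each narrow task declares the minimum context it is allowed to see.
TASK_SCOPES = {
    "verify_eligibility": ["customer_record", "refund_policy"],
    "check_stock": ["warehouse_inventory"],
}

def scoped_context(task: str) -> dict:
    """Return only the whitelisted slice of context for one task."""
    allowed = TASK_SCOPES[task]
    return {k: v for k, v in FULL_CONTEXT.items() if k in allowed}

def run_task(task: str) -> None:
    ctx = scoped_context(task)
    # A real system would hand `ctx` to a single-purpose agent here;
    # the narrow scope makes its behaviour more predictable and traceable.
    print(f"{task} runs with context keys: {sorted(ctx)}")

run_task("verify_eligibility")  # never sees inventory data
run_task("check_stock")         # never sees customer data
```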

Accountability for insurable AI

With agents taking real actions in business systems, risk and compliance have to be viewed differently. Keeping detailed action logs means every action is recorded, making agents’ activities clear and evaluable and letting organisations inspect them in detail.

Such transparency is crucial for insurers, who are reluctant to cover opaque AI systems. This level of detail helps insurers understand what agents have done and the controls involved, making it easier to assess risk. With human oversight for risk-critical actions and auditable, replayable workflows, organisations can build systems that are far more tractable for risk assessment.
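One minimal way to picture an auditable, replayable workflow is sketched below. The record format is an assumption for illustration; real systems would also capture inputs hashes, model versions, and approvals.

```python
# Hypothetical sketch of a replayable workflow log; the record shape
# is invented for illustration.
import json

workflow_log: list[dict] = []

def record_step(step: str, inputs: dict, output: str) -> None:
    """Append one entry per action the agent takes."""
    workflow_log.append({"step": step, "inputs": inputs, "output": output})

def replay(log: list[dict]) -> None:
    """Walk the recorded steps so an auditor or insurer can see exactly
    what was done, in order, with what inputs."""
    for i, entry in enumerate(log, start=1):
        print(f"step {i}: {entry['step']}")
        print(f"  inputs: {json.dumps(entry['inputs'])}")
        print(f"  output: {entry['output']}")

record_step("classify_invoice", {"invoice_id": "INV-1"}, "category=travel")
record_step("post_to_ledger", {"invoice_id": "INV-1"}, "posted")
replay(workflow_log)
```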

AAIF standards a good first step

Shared standards, like those being developed by the Agentic AI Foundation (AAIF), help businesses to integrate different agent systems, but current standardisation efforts focus on what is simplest to build, not what larger organisations need to operate agentic systems safely.

Sarrafi says enterprises require standards that support operational control, including “access permissions, approval workflows for high-impact actions, and auditable logs and observability, so teams can monitor behaviour, investigate incidents, and prove compliance.”

Identity and permissions the first line of defence

Limiting what AI agents can access and the actions they can perform is important to ensure safety in real business environments. Sarrafi said, “When agents are given broad privileges or too much context, they become unpredictable and pose security or compliance risks.”
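A least-privilege sketch of the point: each agent identity carries an explicit allow-list of tools, and anything outside it is denied before execution. The permission model below is invented for illustration.

```python
# Minimal least-privilege sketch: each agent identity has an explicit
# allow-list of tools, and calls outside it are denied up front.
# The identities, tools, and permission model are invented for illustration.
AGENT_PERMISSIONS = {
    "support-agent": {"read_ticket", "draft_reply"},
    "billing-agent": {"read_invoice", "issue_credit"},
}

class PermissionDenied(Exception):
    pass

def invoke_tool(agent_id: str, tool: str) -> str:
    allowed = AGENT_PERMISSIONS.get(agent_id, set())
    if tool not in allowed:
        # Deny-by-default keeps a compromised or confused agent contained.
        raise PermissionDenied(f"{agent_id} may not call {tool}")
    return f"{agent_id} called {tool}"

print(invoke_tool("support-agent", "draft_reply"))   # permitted
try:
    invoke_tool("support-agent", "issue_credit")     # outside its scope
except PermissionDenied as e:
    print(f"blocked: {e}")
```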

Visibility and monitoring are equally important for keeping agents inside those limits; only then can stakeholders have confidence in the technology. If every action is logged, teams can see what has happened, identify issues, and better understand why events occurred.

Sarrafi continued, “This visibility, combined with human supervision where it matters, turns AI agents from inscrutable components into systems that can be inspected, replayed and audited. It also allows rapid investigation and correction when issues arise, which boosts trust among operators, risk teams and insurers alike.”

Deloitte’s blueprint

Deloitte’s strategy for safe AI agent governance sets out defined boundaries for the decisions agentic systems can make. For instance, agents might operate with tiered autonomy: at first they can only view information or offer suggestions, then they are allowed to take limited actions with human approval, and once they have proven reliable in low-risk areas they can act automatically.
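That tiered model can be pictured as an explicit autonomy ladder. The sketch below uses invented tier names and a made-up promotion rule; Deloitte’s blueprint does not prescribe this exact schema.

```python
# Sketch of tiered autonomy as an explicit ladder. The tiers and the
# promotion rule are illustrative assumptions, not Deloitte's schema.
from enum import IntEnum

class AutonomyTier(IntEnum):
    OBSERVE = 1            # view information, surface suggestions only
    ACT_WITH_APPROVAL = 2  # limited actions, each needing human sign-off
    AUTONOMOUS = 3         # proven reliable in low-risk areas, acts directly

def handle(tier: AutonomyTier, action: str) -> str:
    if tier == AutonomyTier.OBSERVE:
        return f"suggested: {action}"
    if tier == AutonomyTier.ACT_WITH_APPROVAL:
        return f"queued for approval: {action}"
    return f"executed: {action}"

def maybe_promote(tier: AutonomyTier, clean_runs: int) -> AutonomyTier:
    """Promote one tier only after a clean track record (threshold assumed)."""
    if clean_runs >= 100 and tier < AutonomyTier.AUTONOMOUS:
        return AutonomyTier(tier + 1)
    return tier

print(handle(AutonomyTier.OBSERVE, "flag anomalous invoice"))
print(maybe_promote(AutonomyTier.OBSERVE, clean_runs=150))
```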

Deloitte’s “Cyber AI Blueprints” suggest layering governance into organisational controls, embedding policies, and building compliance capability roadmaps. Ultimately, governance structures that track AI use and risk, together with oversight embedded in daily operations, are essential for safe agentic AI use.

Readying workforces with training is another aspect of safe governance. Deloitte recommends training employees on what they shouldn’t share with AI systems, what to do if agents go off track, and how to spot unusual, potentially dangerous behaviour. If employees fail to understand how AI systems work and their potential risks, they may weaken security controls, albeit unintentionally.

Robust governance and control, alongside shared AI literacy, are fundamental to the safe deployment and operation of AI agents, enabling secure, compliant, and accountable performance in real-world environments.

(Image source: “Global Hawk, NASA’s New Remote-Controlled Plane” by NASA Goddard Photo and Video is licensed under CC BY 2.0.)




Klarna backs Google UCP to power AI agent payments


Klarna aims to address the lack of interoperability between conversational AI agents and backend payment systems by backing Google’s Universal Commerce Protocol (UCP), an open standard designed to unify how AI agents discover products and execute transactions.

The partnership, which also sees Klarna supporting Google’s Agent Payments Protocol (AP2), places the Swedish fintech firm among the early payment providers to back a standardised framework for automated shopping.

The interoperability problem with AI agent payments

Current implementations of AI commerce often function as walled gardens. An AI agent on one platform typically requires a custom integration to communicate with a merchant’s inventory system, and yet another to process payments. This integration complexity inflates development costs and limits the reach of automated shopping tools.

Google’s UCP attempts to solve this by providing a standardised interface for the entire shopping lifecycle, from discovery and purchase to post-purchase support. Rather than building unique connectors for every AI platform, merchants and payment providers can interact through a unified standard.
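To see why one shared interface beats a bespoke connector per platform, consider the deliberately generic sketch below of a commerce endpoint spanning discovery, purchase, and support. The method names and types are invented for illustration and do not reflect the actual UCP specification.

```python
# Deliberately generic sketch of a unified commerce interface covering the
# shopping lifecycle (discover -> purchase -> support). The protocol, method
# names, and types are invented; this is NOT the actual UCP specification.
from typing import Protocol

class CommerceEndpoint(Protocol):
    def discover(self, query: str) -> list[dict]: ...
    def purchase(self, product_id: str, payment_token: str) -> dict: ...
    def support(self, order_id: str, issue: str) -> dict: ...

class DemoMerchant:
    """Stand-in merchant; any implementation of the interface works unchanged."""
    def discover(self, query: str) -> list[dict]:
        return [{"id": "sku-1", "title": f"top result for {query}"}]
    def purchase(self, product_id: str, payment_token: str) -> dict:
        return {"order_id": "ord-1", "product": product_id, "status": "confirmed"}
    def support(self, order_id: str, issue: str) -> dict:
        return {"order_id": order_id, "resolution": "pending"}

def agent_checkout(merchant: CommerceEndpoint, query: str, token: str) -> dict:
    # The agent needs no per-platform connector: discovery and purchase go
    # through the same shared interface for every merchant.
    products = merchant.discover(query)
    return merchant.purchase(products[0]["id"], payment_token=token)

print(agent_checkout(DemoMerchant(), "running shoes", "tok_demo"))
```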

David Sykes, Chief Commercial Officer at Klarna, states that as AI-driven shopping evolves, the underlying infrastructure must rely on openness, trust, and transparency. “Supporting UCP is part of Klarna’s broader work with Google to help define responsible, interoperable standards that support the future of shopping,” he explains.

Standardising the transaction layer

By integrating with UCP, Klarna allows its technology – including flexible payment options and real-time decisioning – to function within these AI agent environments. This removes the need for hardcoded platform-specific payment logic. Open standards provide a framework for the industry to explore how discovery, shopping, and payments work together across AI-powered environments.

The implications extend to how transactions settle. Klarna’s support for AP2 complements the UCP integration, helping advance an ecosystem where trusted payment options work across AI-powered checkout experiences. This combination aims to reduce the friction of users handing off a purchase decision to an automated agent.

“Open standards like UCP are essential to making AI-powered commerce practical at scale,” said Ashish Gupta, VP/GM of Merchant Shopping at Google. “Klarna’s support for UCP reflects the kind of cross-industry collaboration needed to build interoperable commerce experiences that expand choice while maintaining security.”

Adoption of Google’s UCP by Klarna is part of a broader shift

For retail and fintech leaders, the adoption of UCP by players like Klarna suggests a requirement to rethink commerce architecture. The shift implies that future payments may increasingly come through sources where the buyer interface is an AI agent rather than a branded storefront.

Implementing UCP generally does not require a complete re-platforming but does demand rigorous data hygiene. Because agents rely on structured data to manage transactions, the accuracy of product feeds and inventory levels becomes an operational priority.
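That data-hygiene demand can be as simple as refusing to expose incomplete records to agents. A minimal sketch, with an assumed feed schema:

```python
# Minimal feed-hygiene check with an assumed schema: agents transacting on
# this data need complete, well-typed records, so reject anything partial.
REQUIRED_FIELDS = {"sku": str, "price": float, "stock": int, "title": str}

def validate_item(item: dict) -> list[str]:
    """Return a list of problems; an empty list means the record is usable."""
    problems = []
    for name, expected in REQUIRED_FIELDS.items():
        if name not in item:
            problems.append(f"missing {name}")
        elif not isinstance(item[name], expected):
            problems.append(f"{name} should be {expected.__name__}")
    return problems

feed = [
    {"sku": "A1", "price": 49.99, "stock": 3, "title": "Trainers"},
    {"sku": "A2", "price": "n/a", "stock": 0},  # bad price type, missing title
]
for item in feed:
    issues = validate_item(item)
    print(item.get("sku"), "OK" if not issues else issues)
```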

Furthermore, the model maintains a focus on trust. Klarna’s technology provides upfront terms designed to build trust at checkout. As agent-led commerce develops, maintaining clear decisioning logic and transparency remains a priority for risk management.

The convergence of Klarna’s payment rails with Google’s open protocols offers a practical template for reducing the friction of using AI agents for commerce. The value lies in the efficiency of a standardised integration layer that reduces the technical debt associated with maintaining multiple sales channels. Success will likely depend on the ability to expose business logic and inventory data through these open standards.

See also: How SAP is modernising HMRC’s tax infrastructure with AI



How SAP is modernising HMRC’s tax infrastructure with AI


HMRC has selected SAP to overhaul its core revenue systems and place AI at the centre of the UK’s tax administration strategy.

The contract represents a broader shift in how public sector bodies approach automation. Rather than layering AI tools over legacy infrastructure, HMRC is replacing the underlying architecture to support machine learning and automated decision-making natively.

The AI-powered modernisation effort focuses on the Enterprise Tax Management Platform (ETMP), the technological backbone that manages over £800 billion in annual tax revenue and currently supports more than 45 tax regimes. By migrating this infrastructure to a managed cloud environment via RISE with SAP, HMRC aims to simplify a complex technology landscape that tens of thousands of staff rely on daily.

Effective machine learning requires unified data sets, which are often impossible to maintain across fragmented on-premise legacy systems. As part of the deployment, HMRC will implement SAP Business Technology Platform and AI capabilities. These tools are designed to surface insights faster and automate processes across tax administration.

SAP Sovereign Cloud meets local AI adoption requirements

Deploying AI in such highly regulated sectors requires strict data governance. HMRC will host these new capabilities on SAP’s UK Sovereign Cloud, ensuring that while the tax authority adopts commercial AI tools, it adheres to localised requirements for data residency, security, and compliance.

“Large-scale public systems like those delivered by HMRC must operate reliably at national scale while adapting to changing demands,” said Leila Romane, Managing Director UKI at SAP.

“By modernising one of the UK’s most important platforms and hosting it on a UK sovereign cloud, we are helping to strengthen the resilience, security, and sustainability of critical national infrastructure.”

Using AI to modernise tax infrastructure

The modernisation ultimately aims to reduce friction in taxpayer interactions. SAP and HMRC will work together to define new AI capabilities specifically aimed at improving taxpayer experiences and enhancing decision-making.

For enterprise leaders, the lesson here is the link between data accessibility and operational value. The collaboration gives HMRC employees better access to analytical data and an improved user interface. This structure supports greater confidence in real-time analysis and reporting, allowing for more responsive and transparent experiences for taxpayers.

The SAP project illustrates that AI adoption is an infrastructure challenge as much as a software one. HMRC’s approach involves securing a sovereign cloud foundation before attempting to scale automation. For executives, this underscores the need to address technical debt and data sovereignty to enable effective AI implementation in areas as regulated as tax and finance.

See also: Accenture: Insurers betting big on AI



ThoughtSpot: On the new fleet of agents delivering modern analytics


If you are a data and analytics leader, then you know agentic AI is fuelling unprecedented speed of change right now. Knowing you need to do something and knowing what to do, however, are two different things. The good news is providers like ThoughtSpot are able to assist, with the company in its own words determined to ‘reimagin[e] analytics and BI from the ground up’.

“Certainly, agentic systems really are shifting us into very new territory,” explains Jane Smith, field chief data and AI officer at ThoughtSpot. “They’re shifting us away from passive reporting to much more active decision making.

“Traditional BI waits for you to find an insight,” adds Jane. “Agentic systems are proactively monitoring data from multiple sources 24/7; they’re diagnosing why changes happened; they’re triggering the next action automatically.

“We’re getting much more action-oriented.”

Alongside moving from passive to active, there are two other ways in which Jane sees this change taking place in BI. There is a shift towards the ‘true democratisation of data’ on one hand, but on the other is the ‘resurgence of focus’ on the semantic layer. “You cannot have an agent taking action in the way I just described when it doesn’t strictly understand business context,” says Jane. “A strong semantic layer is really the only way to make sense… of the chaos of AI.”

ThoughtSpot has a fleet of agents to take action and move the needle for customers. In December, the company launched four new BI agents, with the idea that they work as a team to deliver modern analytics.

Spotter 3, the latest iteration of an agent that debuted towards the end of 2024, is the star. It is conversant with applications like Slack and Salesforce, and can not only answer questions but assess the quality of its answers and keep trying until it gets the right result.

“It leverages the [Model Context] protocol, so you can ask your questions to your organisation’s structured data – everything in your rows, your columns, your tables – but also incorporate your unstructured data,” says Jane. “So, you can get really context-rich answers to questions, all through our agent, or if you wish, through your own LLM.”

With this power, however, comes responsibility. As ThoughtSpot’s recent eBook exploring data and AI trends for 2026 notes, the C-suite needs to work out how to design systems so every decision – be it human or AI – can be explained, improved, and trusted.

ThoughtSpot calls this emerging architecture ‘decision intelligence’ (DI). “What we’ll see a lot of, I think, will be decision supply chains,” explains Jane. “Instead of a one-off insight, I think what we’re going to see is decisions… flow through repeatable stages, data analysis, simulation, action, feedback, and these are all interactions between humans and machines that will be logged in what we can think of as a decision system of record.”

What would this look like in practice? Jane offers an example from a clinical trial in the pharma industry. “The system would log and version, really, every step of how a patient is chosen for a clinical trial; how data from a health record is used to identify a candidate; how that decision was simulated against the trial protocol; how the matching occurred; how potentially a doctor ultimately recommended this patient for the trial,” she says.

“These are processes that can be audited, they can be improved for the following trial. But the very meticulous logging of every element of the flow of this decision into what we think of as a supply chain is a way that I would visualise that.”
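A hedged sketch of what such a decision system of record might look like, using the stages Jane names; the record format and everything beyond those stage names is invented for illustration.

```python
# Sketch of a "decision system of record": every stage of one decision is
# logged and versioned so the full chain can be audited and improved.
# Stage names follow the interview; the record format is invented.
from datetime import datetime, timezone

STAGES = ["data analysis", "simulation", "action", "feedback"]

decision_record: list[dict] = []

def log_stage(decision_id: str, stage: str, detail: str, actor: str) -> None:
    assert stage in STAGES, f"unknown stage: {stage}"
    decision_record.append({
        "decision_id": decision_id,
        "version": len(decision_record) + 1,
        "stage": stage,
        "actor": actor,  # human or machine, both interactions are logged
        "detail": detail,
        "at": datetime.now(timezone.utc).isoformat(),
    })

# E.g. the clinical-trial matching flow described above:
log_stage("trial-42", "data analysis", "candidate identified from health record", "agent")
log_stage("trial-42", "simulation", "matched against trial protocol", "agent")
log_stage("trial-42", "action", "doctor recommended patient for trial", "human")

for entry in decision_record:
    print(entry["version"], entry["stage"], "by", entry["actor"])
```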

ThoughtSpot is participating at the AI & Big Data Expo Global, in London, on February 4-5. You can watch the full interview with Jane Smith below:

Photo by Steve Johnson on Unsplash
