

Franny Hsiao, Salesforce: Scaling enterprise AI


Scaling enterprise AI requires overcoming architectural oversights that often stall pilots before production, a challenge that goes far beyond model selection. While generative AI prototypes are easy to spin up, turning them into reliable business assets involves solving the difficult problems of data engineering and governance.

Ahead of AI & Big Data Global 2026 in London, Franny Hsiao, EMEA Leader of AI Architects at Salesforce, discussed why so many initiatives hit a wall and how organisations can architect systems that actually survive the real world.

The ‘pristine island’ problem of scaling enterprise AI

Most failures stem from the environment in which the AI is built. Pilots frequently begin in controlled settings that create a false sense of security, only to crumble when faced with enterprise scale.

“The single most common architectural oversight that prevents AI pilots from scaling is the failure to architect a production-grade data infrastructure with built-in end-to-end governance from the start,” Hsiao explains.

“Understandably, pilots often start on ‘pristine islands’ – using small, curated datasets and simplified workflows. But this ignores the messy reality of enterprise data: the complex integration, normalisation, and transformation required to handle real-world volume and variability.”

When companies attempt to scale these island-based pilots without addressing the underlying data mess, the systems break. Hsiao warns that “the resulting data gaps and performance issues like inference latency render the AI systems unusable – and, more importantly, untrustworthy.”

Hsiao argues that the companies successfully bridging this gap are those that “bake end-to-end observability and guardrails into the entire lifecycle.” This approach provides “visibility and control into how effective the AI systems are and how users are adopting the new technology.”

Engineering for perceived responsiveness

As enterprises deploy large reasoning models – like the ‘Atlas Reasoning Engine’ – they face a trade-off between the depth of the model’s “thinking” and the user’s patience. Heavy compute creates latency.

Salesforce addresses this by focusing on “perceived responsiveness through Agentforce Streaming,” according to Hsiao.

“This allows us to deliver AI-generated responses progressively, even while the reasoning engine performs heavy computation in the background. It’s an incredibly effective approach for reducing perceived latency, which often stalls production AI.”
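Salesforce has not published the internals of Agentforce Streaming, but the pattern Hsiao describes – surfacing partial output while heavier computation continues in the background – can be sketched in a few lines of Python. The `slow_reasoner` generator and its chunks below are illustrative assumptions, not the actual engine:

```python
import asyncio

async def slow_reasoner(prompt: str):
    """Stand-in for a reasoning engine: yields partial answer chunks
    as they become available, instead of one final blob at the end."""
    for chunk in ["Analysing your request",
                  " ... checking records",
                  " ... done. Here is the answer."]:
        await asyncio.sleep(1.0)  # simulated heavy computation
        yield chunk

async def respond(prompt: str):
    # Stream each chunk to the user the moment it exists, so the UI
    # can render progress instead of a blank wait state.
    async for chunk in slow_reasoner(prompt):
        print(chunk, end="", flush=True)
    print()

asyncio.run(respond("Why did Q3 churn rise?"))
```

The total compute time is unchanged; what improves is the user's experience of it, because something useful appears almost immediately.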

Transparency also plays a functional role in managing user expectations when scaling enterprise AI. Hsiao elaborates on using design as a trust mechanism: “By surfacing progress indicators that show the reasoning steps or the tools being used, as well as images like spinners and progress bars to depict loading states, we don’t just keep users engaged; we improve perceived responsiveness and build trust.

“This visibility, combined with strategic model selection – like choosing smaller models for fewer computations, meaning faster response times – and explicit length constraints, ensures the system feels deliberate and responsive.”

Offline intelligence at the edge

For industries with field operations, such as utilities or logistics, reliance on continuous cloud connectivity is a non-starter. “For many of our enterprise customers, the biggest practical driver is offline functionality,” states Hsiao.

Hsiao highlights the shift toward on-device intelligence, particularly in field services, where the workflow must continue regardless of signal strength.

“A technician can photograph a faulty part, error code, or serial number while offline. An on-device LLM can then identify the asset or error, and provide guided troubleshooting steps from a cached knowledge base instantly,” explains Hsiao.

Data synchronisation happens automatically once connectivity returns. “Once a connection is restored, the system handles the ‘heavy lifting’ of syncing that data back to the cloud to maintain a single source of truth. This ensures that work gets done, even in the most disconnected environments.”
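This offline-first pattern – capture locally, replay to the cloud when connectivity returns – is a common mobile design, and a minimal sketch makes the mechanics concrete. The queue file and the `upload` callback here are assumptions for illustration, not Salesforce's actual sync mechanism:

```python
import json
import os

QUEUE_PATH = "pending_actions.jsonl"  # local store that survives restarts

def record_offline(action: dict) -> None:
    """Append work done in the field to a local queue, whether or not
    the device currently has connectivity."""
    with open(QUEUE_PATH, "a") as f:
        f.write(json.dumps(action) + "\n")

def sync_when_online(upload) -> None:
    """Replay queued actions against the cloud once a connection exists,
    keeping the cloud copy as the single source of truth."""
    if not os.path.exists(QUEUE_PATH):
        return
    with open(QUEUE_PATH) as f:
        pending = [json.loads(line) for line in f]
    for action in pending:
        upload(action)  # e.g. a POST to the backing CRM API
    os.remove(QUEUE_PATH)  # queue drained; local and cloud now agree

record_offline({"asset": "pump-7", "error_code": "E42", "photo": "img_001.jpg"})
sync_when_online(upload=lambda a: print("synced:", a))
```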

Hsiao expects continued innovation in edge AI due to benefits like “ultra-low latency, enhanced privacy and data security, energy efficiency, and cost savings.”

High-stakes gateways

Autonomous agents are not set-and-forget tools. When scaling enterprise AI deployments, governance requires defining exactly when a human must verify an action. Hsiao describes this not as dependency, but as “architecting for accountability and continuous learning.”

Salesforce mandates a “human-in-the-loop” for specific areas Hsiao calls “high-stakes gateways”:

“This includes specific action categories, including any ‘CUD’ (Creating, Updating, or Deleting) actions, as well as verified contact and customer contact actions,” says Hsiao. “We also default to human confirmation for critical decision-making or any action that could be potentially exploited through prompt manipulation.”
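A policy like this can be enforced with a thin gateway in front of the agent's tool calls: every action is routed through one checkpoint, and the high-stakes categories wait for human sign-off. The sketch below is illustrative, not Agentforce's implementation; the action names simply mirror the categories in the quote:

```python
HIGH_STAKES = {"create", "update", "delete", "contact_customer"}

def execute(action: str, payload: dict, confirm) -> str:
    """Route every agent action through a gateway: high-stakes
    categories require explicit human sign-off before running."""
    if action in HIGH_STAKES and not confirm(action, payload):
        return f"blocked: '{action}' awaiting human approval"
    return f"executed '{action}' with {payload}"

# Simulated reviewer: approves customer contact, holds deletions for review.
reviewer = lambda action, payload: action != "delete"
print(execute("contact_customer", {"account": "acme"}, confirm=reviewer))
print(execute("delete", {"record": "case-1138"}, confirm=reviewer))
```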

This structure creates a feedback loop where “agents learn from human expertise,” creating a system of “collaborative intelligence” rather than unchecked automation.

Trusting an agent requires seeing its work. Salesforce has built a “Session Tracing Data Model (STDM)” to provide this visibility. It captures “turn-by-turn logs” that offer granular insight into the agent’s logic.

“This gives us granular step-by-step visibility that captures every interaction including user questions, planner steps, tool calls, inputs/outputs, retrieved chunks, responses, timing, and errors,” says Hsiao.
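The fields Hsiao lists map naturally onto a per-turn trace record. A minimal sketch of what one such log entry might contain follows; the field names are assumptions for illustration, not the actual STDM schema:

```python
import json
import time
import uuid

def log_turn(session_id, user_msg, planner_steps, tool_calls, response, error=None):
    """Capture one conversational turn with enough detail to replay the
    agent's logic later: question, plan, tool I/O, output, timing, errors."""
    turn = {
        "turn_id": str(uuid.uuid4()),
        "session_id": session_id,
        "timestamp": time.time(),
        "user_message": user_msg,
        "planner_steps": planner_steps,
        "tool_calls": tool_calls,  # each entry records inputs and outputs
        "response": response,
        "error": error,
    }
    print(json.dumps(turn, indent=2))  # in practice: write to a trace store
    return turn

log_turn(
    session_id="sess-42",
    user_msg="What is the status of order 991?",
    planner_steps=["classify intent", "look up order"],
    tool_calls=[{"tool": "order_lookup", "input": {"id": 991}, "output": "shipped"}],
    response="Your order shipped yesterday.",
)
```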

This data allows organisations to run ‘Agent Analytics’ for adoption metrics, ‘Agent Optimisation’ to drill down into performance, and ‘Health Monitoring’ for uptime and latency tracking.

“Agentforce observability is the single mission control for all your Agentforce agents for unified visibility, monitoring, and optimisation,” Hsiao summarises.

Standardising agent communication

As businesses deploy agents from different vendors, these systems need a shared protocol to collaborate. “For multi-agent orchestration to work, agents can’t exist in a vacuum; they need a common language,” argues Hsiao.

Hsiao outlines two layers of standardisation: orchestration and meaning. For orchestration, Salesforce is adopting open-source standards like MCP (Model Context Protocol) and A2A (Agent2Agent Protocol).

“We believe open source standards are non-negotiable; they prevent vendor lock-in, enable interoperability, and accelerate innovation.”

However, communication is useless if the agents interpret data differently. To solve for fragmented data, Salesforce co-founded OSI (Open Semantic Interchange) to unify semantics so an agent in one system “truly understands the intent of an agent in another.”

The future enterprise AI scaling bottleneck: agent-ready data

Looking forward, the challenge will shift from model capability to data accessibility. Many organisations still struggle with legacy, fragmented infrastructure where “searchability and reusability” remain difficult.

Hsiao predicts the next major hurdle – and solution – will be making enterprise data “‘agent-ready’ through searchable, context-aware architectures that replace traditional, rigid ETL pipelines.” This shift is necessary to enable “hyper-personalised and transformed user experience because agents can always access the right context.”
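In practice, "agent-ready" usually means the agent can retrieve relevant context at query time rather than depending on the pre-joined outputs of a nightly ETL job. The toy sketch below uses keyword overlap to stand in for that lookup; a production system would use embeddings and a vector index, and the documents are invented:

```python
def retrieve_context(query: str, documents: dict, k: int = 2) -> list:
    """Score documents by word overlap with the query and return the
    top-k, standing in for semantic search over enterprise data."""
    q_words = set(query.lower().split())
    scored = sorted(documents.items(),
                    key=lambda kv: len(q_words & set(kv[1].lower().split())),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

docs = {
    "returns_policy": "customers may return items within 30 days",
    "shipping_faq": "orders ship within 2 business days",
    "warranty": "hardware carries a 12 month warranty",
}
# The agent pulls just the context it needs at the moment it needs it,
# instead of waiting on a rigid pipeline to materialise it in advance.
print(retrieve_context("how long do customers have to return items", docs))
```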

“Ultimately, the next year isn’t about the race for bigger, newer models; it’s about building the orchestration and data infrastructure that allows production-grade agentic systems to thrive,” Hsiao concludes.

Salesforce is a key sponsor of this year’s AI & Big Data Global in London and will have a range of speakers, including Franny Hsiao, sharing their insights during the event. Be sure to swing by Salesforce’s booth at stand #163 for more from the company’s experts.

See also: Databricks: Enterprise AI adoption shifts to agentic systems


Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.



Klarna backs Google UCP to power AI agent payments


Klarna aims to address the lack of interoperability between conversational AI agents and backend payment systems by backing Google’s Universal Commerce Protocol (UCP), an open standard designed to unify how AI agents discover products and execute transactions.

The partnership, which also sees Klarna supporting Google’s Agent Payments Protocol (AP2), places the Swedish fintech firm among the early payment providers to back a standardised framework for automated shopping.

The interoperability problem with AI agent payments

Current implementations of AI commerce often function as walled gardens. An AI agent on one platform typically requires a custom integration to communicate with a merchant’s inventory system, and yet another to process payments. This integration complexity inflates development costs and limits the reach of automated shopping tools.

Google’s UCP attempts to solve this by providing a standardised interface for the entire shopping lifecycle, from discovery and purchase to post-purchase support. Rather than building unique connectors for every AI platform, merchants and payment providers can interact through a unified standard.
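The economics of a shared protocol are easy to see in code: without a standard, each agent platform needs its own connector per merchant; with one, a single adapter serves them all. The sketch below is a hypothetical illustration of that idea – none of the class or method names come from the actual UCP specification:

```python
from dataclasses import dataclass

@dataclass
class Order:
    sku: str
    quantity: int
    payment_token: str

class UnifiedCommerceAdapter:
    """One adapter implementing a shared protocol, usable by any
    agent platform, instead of N bespoke integrations."""
    def __init__(self, catalog: dict):
        self.catalog = catalog  # sku -> unit price

    def discover(self, query: str) -> list:
        # Product discovery through a standard search surface.
        return [sku for sku in self.catalog if query.lower() in sku.lower()]

    def purchase(self, order: Order) -> dict:
        # Transaction execution through a standard checkout surface.
        total = self.catalog[order.sku] * order.quantity
        return {"status": "confirmed", "total": total}

shop = UnifiedCommerceAdapter({"running-shoes": 89.0, "trail-shoes": 119.0})
print(shop.discover("shoes"))
print(shop.purchase(Order("running-shoes", 1, "tok_abc")))
```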

David Sykes, Chief Commercial Officer at Klarna, states that as AI-driven shopping evolves, the underlying infrastructure must rely on openness, trust, and transparency. “Supporting UCP is part of Klarna’s broader work with Google to help define responsible, interoperable standards that support the future of shopping,” he explains.

Standardising the transaction layer

By integrating with UCP, Klarna allows its technology – including flexible payment options and real-time decisioning – to function within these AI agent environments. This removes the need for hardcoded platform-specific payment logic. Open standards provide a framework for the industry to explore how discovery, shopping, and payments work together across AI-powered environments.

The implications extend to how transactions settle. Klarna’s support for AP2 complements the UCP integration, helping advance an ecosystem where trusted payment options work across AI-powered checkout experiences. This combination aims to reduce the friction of users handing off a purchase decision to an automated agent.

“Open standards like UCP are essential to making AI-powered commerce practical at scale,” said Ashish Gupta, VP/GM of Merchant Shopping at Google. “Klarna’s support for UCP reflects the kind of cross-industry collaboration needed to build interoperable commerce experiences that expand choice while maintaining security.”

Adoption of Google’s UCP by Klarna is part of a broader shift

For retail and fintech leaders, the adoption of UCP by players like Klarna suggests a requirement to rethink commerce architecture. The shift implies that future payments may increasingly come through sources where the buyer interface is an AI agent rather than a branded storefront.

Implementing UCP generally does not require a complete re-platforming but does demand rigorous data hygiene. Because agents rely on structured data to manage transactions, the accuracy of product feeds and inventory levels becomes an operational priority.

Furthermore, the model maintains a focus on trust. Klarna’s technology provides upfront terms designed to build trust at checkout. As agent-led commerce develops, maintaining clear decisioning logic and transparency remains a priority for risk management.

The convergence of Klarna’s payment rails with Google’s open protocols offers a practical template for reducing the friction of using AI agents for commerce. The value lies in the efficiency of a standardised integration layer that reduces the technical debt associated with maintaining multiple sales channels. Success will likely depend on the ability to expose business logic and inventory data through these open standards.

See also: How SAP is modernising HMRC’s tax infrastructure with AI




How SAP is modernising HMRC’s tax infrastructure with AI


HMRC has selected SAP to overhaul its core revenue systems and place AI at the centre of the UK’s tax administration strategy.

The contract represents a broader shift in how public sector bodies approach automation. Rather than layering AI tools over legacy infrastructure, HMRC is replacing the underlying architecture to support machine learning and automated decision-making natively.

The AI-powered modernisation effort focuses on the Enterprise Tax Management Platform (ETMP), the technological backbone that manages over £800 billion in annual tax revenue and currently supports more than 45 tax regimes. By migrating this infrastructure to a managed cloud environment via RISE with SAP, HMRC aims to simplify a complex technology landscape that tens of thousands of staff rely on daily.

Effective machine learning requires unified data sets, which are often impossible to maintain across fragmented on-premise legacy systems. As part of the deployment, HMRC will implement SAP Business Technology Platform and AI capabilities. These tools are designed to surface insights faster and automate processes across tax administration.

SAP Sovereign Cloud meets local AI adoption requirements

Deploying AI in such highly regulated sectors requires strict data governance. HMRC will host these new capabilities on SAP’s UK Sovereign Cloud. This ensures that while the tax authority adopts commercial AI tools, it adheres to localised requirements regarding data residency, security, and compliance.

“Large-scale public systems like those delivered by HMRC must operate reliably at national scale while adapting to changing demands,” said Leila Romane, Managing Director UKI at SAP.

“By modernising one of the UK’s most important platforms and hosting it on a UK sovereign cloud, we are helping to strengthen the resilience, security, and sustainability of critical national infrastructure.”

Using AI to modernise tax infrastructure

The modernisation ultimately aims to reduce friction in taxpayer interactions. SAP and HMRC will work together to define new AI capabilities specifically aimed at improving taxpayer experiences and enhancing decision-making.

For enterprise leaders, the lesson here is the link between data accessibility and operational value. The collaboration provides HMRC employees with better access to analytical data and an improved user interface. This structure supports greater confidence in real-time analysis and reporting, allowing for more responsive and transparent experiences for taxpayers.

The SAP project illustrates that AI adoption is an infrastructure challenge as much as a software one. HMRC’s approach involves securing a sovereign cloud foundation before attempting to scale automation. For executives, this underscores the need to address technical debt and data sovereignty to enable effective AI implementation in areas as regulated as tax and finance.

See also: Accenture: Insurers betting big on AI




ThoughtSpot: On the new fleet of agents delivering modern analytics


If you are a data and analytics leader, you know agentic AI is fuelling an unprecedented speed of change right now. Knowing you need to do something and knowing what to do, however, are two different things. The good news is that providers like ThoughtSpot are able to assist, with the company, in its own words, determined to ‘reimagin[e] analytics and BI from the ground up’.

“Certainly, agentic systems really are shifting us into very new territory,” explains Jane Smith, field chief data and AI officer at ThoughtSpot. “They’re shifting us away from passive reporting to much more active decision making.

“Traditional BI waits for you to find an insight,” adds Jane. “Agentic systems are proactively monitoring data from multiple sources 24/7; they’re diagnosing why changes happened; they’re triggering the next action automatically.

“We’re getting much more action-oriented.”

Alongside moving from passive to active, there are two other ways in which Jane sees this change taking place in BI. There is a shift towards the ‘true democratisation of data’ on one hand, but on the other is the ‘resurgence of focus’ on the semantic layer. “You cannot have an agent taking action in the way I just described when it doesn’t strictly understand business context,” says Jane. “A strong semantic layer is really the only way to make sense… of the chaos of AI.”
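At its simplest, a semantic layer maps business vocabulary to governed definitions, so an agent resolves "revenue" the same way the finance team does. A toy illustration of that mapping follows; the metric definitions and table names are invented for the example:

```python
SEMANTIC_LAYER = {
    # business term -> governed definition the agent must use
    "revenue": {"table": "orders", "expr": "SUM(amount)",
                "filters": ["status = 'paid'"]},
    "churn": {"table": "accounts", "expr": "COUNT(*)",
              "filters": ["cancelled_at IS NOT NULL"]},
}

def to_sql(metric: str) -> str:
    """Translate a business term into SQL via the semantic layer, so
    every agent (and human) computes the metric identically."""
    m = SEMANTIC_LAYER[metric]
    where = f" WHERE {' AND '.join(m['filters'])}" if m["filters"] else ""
    return f"SELECT {m['expr']} FROM {m['table']}{where}"

print(to_sql("revenue"))  # SELECT SUM(amount) FROM orders WHERE status = 'paid'
```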

ThoughtSpot has a fleet of agents to take action and move the needle for customers. In December, the company launched four new BI agents, with the idea that they work as a team to deliver modern analytics.

Spotter 3, the latest iteration of an agent that first debuted towards the end of 2024, is the star. It is conversant with applications like Slack and Salesforce, and can not only answer questions but also assess the quality of its answers and keep trying until it gets the right result.

“It leverages the [Model Context] protocol, so you can ask your questions to your organisation’s structured data – everything in your rows, your columns, your tables – but also incorporate your unstructured data,” says Jane. “So, you can get really context-rich answers to questions, all through our agent, or if you wish, through your own LLM.”

With this power, however, comes responsibility. As ThoughtSpot’s recent eBook exploring data and AI trends for 2026 notes, the C-suite needs to work out how to design systems so every decision – be it human or AI – can be explained, improved, and trusted.

ThoughtSpot calls this emerging architecture ‘decision intelligence’ (DI). “What we’ll see a lot of, I think, will be decision supply chains,” explains Jane. “Instead of a one-off insight, I think what we’re going to see is decisions… flow through repeatable stages, data analysis, simulation, action, feedback, and these are all interactions between humans and machines that will be logged in what we can think of as a decision system of record.”

What would this look like in practice? Jane offers an example from a clinical trial in the pharma industry. “The system would log and version, really, every step of how a patient is chosen for a clinical trial; how data from a health record is used to identify a candidate; how that decision was simulated against the trial protocol; how the matching occurred; how potentially a doctor ultimately recommended this patient for the trial,” she says.

“These are processes that can be audited, they can be improved for the following trial. But the very meticulous logging of every element of the flow of this decision into what we think of as a supply chain is a way that I would visualise that.”
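The "decision supply chain" Jane describes is, in implementation terms, versioned, append-only logging of every stage a decision passes through. A minimal sketch of such a decision system of record follows; the stage names echo her clinical-trial example, and the storage format is an assumption:

```python
import json
import time

class DecisionRecord:
    """Append-only log of one decision's journey through repeatable
    stages: data, analysis, simulation, action, feedback."""
    def __init__(self, decision_id: str):
        self.decision_id = decision_id
        self.stages = []

    def log_stage(self, stage: str, actor: str, detail: dict) -> None:
        # Every entry is versioned and timestamped, so the full flow
        # can be audited and improved for the next run.
        self.stages.append({
            "stage": stage,
            "actor": actor,  # human or machine
            "detail": detail,
            "logged_at": time.time(),
            "version": len(self.stages) + 1,
        })

record = DecisionRecord("trial-candidate-007")
record.log_stage("data", "ehr-ingest-agent", {"source": "health_record_123"})
record.log_stage("simulation", "protocol-matcher", {"protocol": "NCT-999", "fit": 0.91})
record.log_stage("action", "dr_jones", {"recommendation": "enrol"})
print(json.dumps(record.stages, indent=2))  # auditable end to end
```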

ThoughtSpot is participating in AI & Big Data Expo Global in London on February 4-5, where you can catch the full interview with Jane Smith.


