PepsiCo is using AI to rethink how factories are designed and updated
For many large companies, the most useful form of AI right now has little to do with writing emails or answering questions. At PepsiCo, AI is being tested in places where mistakes are costly and changes are hard to undo — factory layouts, production lines, and physical operations.
That shift is visible in how PepsiCo is using AI and digital twins to model and adjust its manufacturing facilities before making changes in the real world. Rather than experimenting with chat interfaces or office tools, the company is applying AI to one of its core problems: how to configure factories faster, with less risk, and fewer disruptions.
Digital twins are virtual models of physical systems. In manufacturing, they can simulate equipment placement, material flow, and production speed. When combined with AI, these models can test thousands of scenarios that would be impractical — or expensive — to try on a live production line.
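To make that concrete, the sketch below shows the pattern in miniature (an illustrative toy model, not PepsiCo's actual system): a simple throughput model of a production line is swept across candidate configurations, and the best one is picked without touching any physical equipment.

```python
import itertools

# A toy "digital twin" of a bottling line: throughput is limited by the
# slowest station, and each changeover costs fixed downtime per shift.
# All numbers are assumptions for illustration.
def simulate(line_speeds, changeovers_per_shift, shift_minutes=480):
    bottleneck = min(line_speeds)              # units per minute
    downtime = changeovers_per_shift * 15      # assume 15 min per changeover
    productive_minutes = shift_minutes - downtime
    return bottleneck * productive_minutes     # units per shift

# Candidate configurations: speeds for filler, capper, and labeller
# stations, plus how many changeovers the schedule requires.
speed_options = [(60, 55, 58), (60, 60, 58), (65, 60, 62)]
changeover_options = [1, 2, 4]

best = max(
    itertools.product(speed_options, changeover_options),
    key=lambda cfg: simulate(*cfg),
)
print("Best configuration:", best, "->", simulate(*best), "units/shift")
```

A real digital twin models material flow and equipment physics in far more detail, but the economics are the same: each virtual scenario costs seconds rather than a disrupted shift.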
PepsiCo has been working with partners to apply AI-driven digital twins to parts of its manufacturing network, with early pilots focused on improving how facilities are designed and adjusted over time.
The goal is not automation for its own sake; it is shorter cycle times. Instead of taking weeks or months to validate changes through physical trials, teams can test configurations virtually, identify problems earlier, and move faster when updates are needed.
From planning bottleneck to operational shortcut
In large consumer goods companies, factory changes tend to move slowly. Even small adjustments — a new line layout, different packaging flow, or equipment upgrade — can require long planning cycles, approvals, and staged testing. Each delay has knock-on effects on supply chains and product availability.
Digital twins offer a way around that bottleneck. By simulating production environments, teams can see how changes might affect throughput, safety, or downtime before touching the actual facility.
PepsiCo’s early pilots showed faster validation times and signs of throughput improvement at initial sites, though the company has not published detailed metrics yet. What matters more than the numbers is the pattern: AI is being used to compress decision cycles in physical operations, not to replace workers or remove human judgment.
This kind of use case fits a broader trend. Enterprises that move beyond pilot projects often focus on narrow, well-defined problems where AI can reduce friction in existing workflows. Manufacturing, logistics, and healthcare operations are showing more traction than open-ended knowledge work.
Why PepsiCo treats AI as operations engineering, not office productivity
PepsiCo’s approach also highlights a quieter shift in how AI programs are being justified inside large firms. The value is tied to operational outcomes — time saved, fewer disruptions, better planning — rather than general claims about productivity.
That distinction matters. Many enterprise AI efforts stall because they struggle to connect usage with measurable impact. Tools get deployed, but workflows stay the same.
Digital twins change that dynamic because they sit directly inside planning and engineering processes. If a simulated change cuts weeks off a factory upgrade, the benefit is visible. If it reduces downtime risk, operations teams can measure that over time.
This focus on process change, rather than tools, mirrors what is happening in other sectors. In healthcare, for example, Amazon is testing an AI assistant inside its One Medical app that uses patient history to reduce repetitive intake and support care interactions, according to comments from CEO Andy Jassy reported this week. The assistant is embedded in the care workflow, not offered as a standalone feature.
Both cases point to the same lesson: AI adoption moves faster when it fits into how work already gets done, instead of asking teams to invent new habits.
Why this matters for other enterprises
PepsiCo’s digital-twin work is unlikely to be unique for long. Large manufacturers across food, chemicals, and industrial goods face similar planning constraints and cost pressures. Many already use simulation software. AI adds speed and scale to those models.
What is more interesting is what this says about the next phase of enterprise AI adoption.
First, the centre of gravity is shifting away from broad, generic tools toward focused systems tied to specific decisions. Second, success depends less on model quality and more on data quality, process ownership, and governance. A digital twin is only as useful as the operational data feeding it.
Third, this kind of AI work tends to stay out of the spotlight. It does not generate flashy demos, but it can reshape how companies plan capital spending and manage risk.
That also explains why many firms remain cautious. Building and maintaining accurate digital twins takes time, cross-team coordination, and deep knowledge of physical systems. The payoff comes from repeated use, not one-off wins.
PepsiCo’s manufacturing AI work is a quiet signal worth watching
In AI coverage, it is easy to focus on new models, agents, or interfaces. Stories like PepsiCo’s point in a different direction. They show AI being treated as infrastructure — something that sits underneath daily decisions and gradually changes how work flows through an organisation.
For enterprise leaders, the takeaway is not to copy the technology stack. It is to look for places where planning delays, validation cycles, or operational risk slow the business down. Those friction points are where AI has the best chance of sticking.
PepsiCo’s digital-twin pilots suggest that the factory floor may be one of the most practical testing grounds for AI today — not because it is trendy, but because the impact is easier to see when time and mistakes have a clear cost.
(Photo by NIKHIL)
See also: Deloitte sounds alarm as AI agent deployment outruns safety frameworks
Klarna backs Google UCP to power AI agent payments
Klarna aims to address the lack of interoperability between conversational AI agents and backend payment systems by backing Google’s Universal Commerce Protocol (UCP), an open standard designed to unify how AI agents discover products and execute transactions.
The partnership, which also sees Klarna supporting Google’s Agent Payments Protocol (AP2), places the Swedish fintech firm among the early payment providers to back a standardised framework for automated shopping.
The interoperability problem with AI agent payments
Current implementations of AI commerce often function as walled gardens. An AI agent on one platform typically requires a custom integration to communicate with a merchant’s inventory system, and yet another to process payments. This integration complexity inflates development costs and limits the reach of automated shopping tools.
Google’s UCP attempts to solve this by providing a standardised interface for the entire shopping lifecycle, from discovery and purchase to post-purchase support. Rather than building unique connectors for every AI platform, merchants and payment providers can interact through a unified standard.
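The design pattern is easiest to see in code. The sketch below is hypothetical (it does not reproduce UCP's actual schema, field names, or methods); it only illustrates the shift from one custom connector per platform to a single shared interface covering discovery, purchase, and support.

```python
from dataclasses import dataclass
from typing import Protocol

# Hypothetical types -- NOT the real UCP schema, just the design pattern:
# every merchant exposes the same lifecycle interface, so an agent needs
# one client instead of a bespoke connector per platform.
@dataclass
class Product:
    sku: str
    name: str
    price_cents: int

class CommerceEndpoint(Protocol):
    def discover(self, query: str) -> list[Product]: ...
    def purchase(self, sku: str, payment_token: str) -> str: ...  # order id
    def support(self, order_id: str, message: str) -> str: ...

def agent_buy(endpoint: CommerceEndpoint, query: str, payment_token: str) -> str:
    """An agent can shop against ANY compliant endpoint the same way."""
    results = endpoint.discover(query)
    if not results:
        raise LookupError(f"no products match {query!r}")
    cheapest = min(results, key=lambda p: p.price_cents)
    return endpoint.purchase(cheapest.sku, payment_token)
```

Under a shared interface, adding a new merchant or agent platform means implementing the standard once, rather than maintaining a custom integration for every pairing.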
David Sykes, Chief Commercial Officer at Klarna, states that as AI-driven shopping evolves, the underlying infrastructure must rely on openness, trust, and transparency. “Supporting UCP is part of Klarna’s broader work with Google to help define responsible, interoperable standards that support the future of shopping,” he explains.
Standardising the transaction layer
By integrating with UCP, Klarna allows its technology – including flexible payment options and real-time decisioning – to function within these AI agent environments. This removes the need for hardcoded platform-specific payment logic. Open standards provide a framework for the industry to explore how discovery, shopping, and payments work together across AI-powered environments.
The implications extend to how transactions settle. Klarna’s support for AP2 complements the UCP integration, helping advance an ecosystem where trusted payment options work across AI-powered checkout experiences. This combination aims to reduce the friction of users handing off a purchase decision to an automated agent.
“Open standards like UCP are essential to making AI-powered commerce practical at scale,” said Ashish Gupta, VP/GM of Merchant Shopping at Google. “Klarna’s support for UCP reflects the kind of cross-industry collaboration needed to build interoperable commerce experiences that expand choice while maintaining security.”
Adoption of Google’s UCP by Klarna is part of a broader shift
For retail and fintech leaders, the adoption of UCP by players like Klarna suggests a need to rethink commerce architecture. The shift implies that future purchase volume may increasingly arrive through channels where the buyer interface is an AI agent rather than a branded storefront.
Implementing UCP generally does not require a complete re-platforming but does demand rigorous data hygiene. Because agents rely on structured data to manage transactions, the accuracy of product feeds and inventory levels becomes an operational priority.
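In practice, that means validating feeds before agents can transact on them. The following sketch uses a made-up record schema (sku, title, price_cents, and stock are illustrative placeholders, not UCP fields) to show the kind of hygiene check that becomes routine:

```python
# A minimal feed-hygiene check over a hypothetical schema: agents transact
# on structured data, so malformed records must be caught before publishing.
REQUIRED_FIELDS = {"sku", "title", "price_cents", "stock"}

def validate_record(record: dict) -> list[str]:
    errors = [f"missing field: {f}" for f in REQUIRED_FIELDS - record.keys()]
    if record.get("price_cents", 0) <= 0:
        errors.append("price must be positive")
    if record.get("stock", -1) < 0:
        errors.append("stock cannot be negative")
    return errors

feed = [
    {"sku": "A1", "title": "Kettle", "price_cents": 2999, "stock": 12},
    {"sku": "A2", "title": "Toaster", "price_cents": 0, "stock": 5},
]
for rec in feed:
    if problems := validate_record(rec):
        print(rec["sku"], "rejected:", problems)
```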
Furthermore, the model maintains a focus on trust. Klarna’s technology provides upfront terms designed to give shoppers confidence at checkout. As agent-led commerce develops, maintaining clear decisioning logic and transparency remains a priority for risk management.
The convergence of Klarna’s payment rails with Google’s open protocols offers a practical template for reducing the friction of using AI agents for commerce. The value lies in the efficiency of a standardised integration layer that reduces the technical debt associated with maintaining multiple sales channels. Success will likely depend on the ability to expose business logic and inventory data through these open standards.
See also: How SAP is modernising HMRC’s tax infrastructure with AI
How SAP is modernising HMRC’s tax infrastructure with AI
HMRC has selected SAP to overhaul its core revenue systems and place AI at the centre of the UK’s tax administration strategy.
The contract represents a broader shift in how public sector bodies approach automation. Rather than layering AI tools over legacy infrastructure, HMRC is replacing the underlying architecture to support machine learning and automated decision-making natively.
The AI-powered modernisation effort focuses on the Enterprise Tax Management Platform (ETMP), the technological backbone that manages over £800 billion in annual tax revenue and currently supports more than 45 tax regimes. By migrating this infrastructure to a managed cloud environment via RISE with SAP, HMRC aims to simplify a complex technology landscape that tens of thousands of staff rely on daily.
Effective machine learning requires unified data sets, which are often impossible to maintain across fragmented on-premise legacy systems. As part of the deployment, HMRC will implement SAP Business Technology Platform and AI capabilities. These tools are designed to surface insights faster and automate processes across tax administration.
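The point about unified data sets is worth illustrating. The sketch below is purely illustrative (it is not HMRC's or SAP's data model): records fragmented across per-regime systems are joined on a common key so downstream models see one view of each taxpayer.

```python
from collections import defaultdict

# Illustrative only -- not HMRC's data model. Fragmented per-regime systems
# each hold a slice of a taxpayer's record; ML needs them joined on one key.
vat_system = [{"ref": "T001", "vat_due": 12_000}]
paye_system = [{"ref": "T001", "paye_due": 8_500}, {"ref": "T002", "paye_due": 3_200}]

unified: dict[str, dict] = defaultdict(dict)
for source in (vat_system, paye_system):
    for row in source:
        unified[row["ref"]].update(row)

print(dict(unified))
# {'T001': {'ref': 'T001', 'vat_due': 12000, 'paye_due': 8500},
#  'T002': {'ref': 'T002', 'paye_due': 3200}}
```

Trivial in a toy example; across decades-old on-premise systems serving 45 tax regimes, it is precisely the consolidation work the migration is meant to enable.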
SAP Sovereign Cloud meets local AI adoption requirements
Deploying AI in such highly regulated sectors requires strict data governance. HMRC will host these new capabilities on SAP’s UK Sovereign Cloud. This ensures that while the tax authority adopts commercial AI tools, it adheres to localised requirements regarding data residency, security, and compliance.
“Large-scale public systems like those delivered by HMRC must operate reliably at national scale while adapting to changing demands,” said Leila Romane, Managing Director UKI at SAP.
“By modernising one of the UK’s most important platforms and hosting it on a UK sovereign cloud, we are helping to strengthen the resilience, security, and sustainability of critical national infrastructure.”
Using AI to modernise tax infrastructure
The modernisation ultimately aims to reduce friction in taxpayer interactions. SAP and HMRC will work together to define new AI capabilities specifically aimed at improving taxpayer experiences and enhancing decision-making.
For enterprise leaders, the lesson here is the link between data accessibility and operational value. The collaboration provides HMRC employees with better access to analytical data and an improved user interface. This structure supports greater confidence in real-time analysis and reporting, allowing for more responsive and transparent experiences for taxpayers.
The SAP project illustrates that AI adoption is an infrastructure challenge as much as a software one. HMRC’s approach involves securing a sovereign cloud foundation before attempting to scale automation. For executives, this underscores the need to address technical debt and data sovereignty to enable effective AI implementation in areas as regulated as tax and finance.
See also: Accenture: Insurers betting big on AI
ThoughtSpot: On the new fleet of agents delivering modern analytics
If you are a data and analytics leader, then you know agentic AI is fuelling unprecedented speed of change right now. Knowing you need to do something and knowing what to do, however, are two different things. The good news is providers like ThoughtSpot are able to assist, with the company in its own words determined to ‘reimagin[e] analytics and BI from the ground up’.
“Certainly, agentic systems really are shifting us into very new territory,” explains Jane Smith, field chief data and AI officer at ThoughtSpot. “They’re shifting us away from passive reporting to much more active decision making.
“Traditional BI waits for you to find an insight,” adds Jane. “Agentic systems are proactively monitoring data from multiple sources 24/7; they’re diagnosing why changes happened; they’re triggering the next action automatically.
“We’re getting much more action-oriented.”
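The loop Jane describes (monitor, diagnose, act) has a simple shape, shown below with simulated data. Nothing here is ThoughtSpot's implementation; the metric, baseline, and actions are stand-ins.

```python
import random
import time

# The monitor -> diagnose -> act loop in miniature. A real agent would read
# live metrics from multiple sources and run continuously, not five times.
def read_metric() -> float:
    return random.gauss(100, 5)            # e.g. simulated hourly orders

def diagnose(value: float, baseline: float) -> str:
    return "demand drop" if value < baseline * 0.9 else "normal"

def act(finding: str) -> None:
    if finding != "normal":
        print(f"ALERT: {finding} detected -> opening ticket, notifying channel")

baseline = 100.0
for _ in range(5):                         # a real agent monitors 24/7
    finding = diagnose(read_metric(), baseline)
    act(finding)
    time.sleep(0.1)
```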
Alongside moving from passive to active, Jane sees this change taking place in BI in two other ways: a shift towards the ‘true democratisation of data’ on one hand, and on the other a ‘resurgence of focus’ on the semantic layer. “You cannot have an agent taking action in the way I just described when it doesn’t strictly understand business context,” says Jane. “A strong semantic layer is really the only way to make sense… of the chaos of AI.”
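A semantic layer, at its smallest, is a governed mapping from business terms to vetted query logic. The toy sketch below (illustrative only, not ThoughtSpot's product) shows why it constrains an agent: the agent can only ask for metrics the business has defined.

```python
# A toy semantic layer: business terms resolve to governed SQL fragments,
# so an agent cannot improvise its own definition of "revenue".
SEMANTIC_LAYER = {
    "revenue": "SUM(order_total) FROM orders WHERE status = 'complete'",
    "active_users": "COUNT(DISTINCT user_id) FROM events "
                    "WHERE ts > now() - interval '30 days'",
}

def compile_question(metric: str) -> str:
    if metric not in SEMANTIC_LAYER:
        raise KeyError(f"'{metric}' is not a governed metric")
    return f"SELECT {SEMANTIC_LAYER[metric]}"

print(compile_question("revenue"))
# SELECT SUM(order_total) FROM orders WHERE status = 'complete'
```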
ThoughtSpot has a fleet of agents to take action and move the needle for customers. In December, the company launched four new BI agents, with the idea that they work as a team to deliver modern analytics.
Spotter 3, the latest iteration of an agent that debuted towards the end of 2024, is the star. It works inside applications like Slack and Salesforce, and can not only answer questions but also assess the quality of its answers and keep trying until it gets the right result.
“It leverages the [Model Context] protocol, so you can ask your questions to your organisation’s structured data – everything in your rows, your columns, your tables – but also incorporate your unstructured data,” says Jane. “So, you can get really context-rich answers to questions, all through our agent, or if you wish, through your own LLM.”
With this power, however, comes responsibility. As ThoughtSpot’s recent eBook exploring data and AI trends for 2026 notes, the C-suite needs to work out how to design systems so every decision – be it human or AI – can be explained, improved, and trusted.
ThoughtSpot calls this emerging architecture ‘decision intelligence’ (DI). “What we’ll see a lot of, I think, will be decision supply chains,” explains Jane. “Instead of a one-off insight, I think what we’re going to see is decisions… flow through repeatable stages: data analysis, simulation, action, feedback. And these are all interactions between humans and machines that will be logged in what we can think of as a decision system of record.”
What would this look like in practice? Jane offers an example from a clinical trial in the pharma industry. “The system would log and version, really, every step of how a patient is chosen for a clinical trial; how data from a health record is used to identify a candidate; how that decision was simulated against the trial protocol; how the matching occurred; how potentially a doctor ultimately recommended this patient for the trial,” she says.
“These are processes that can be audited, they can be improved for the following trial. But the very meticulous logging of every element of the flow of this decision into what we think of as a supply chain is a way that I would visualise that.”
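One way to picture that decision system of record is an append-only, hash-chained log in which every stage of the supply chain is versioned. The sketch below is a hypothetical illustration of the idea, not any vendor's implementation, and the stage names echo Jane's clinical-trial example.

```python
import hashlib
import json
import time

# A minimal "decision system of record": every stage of a decision is
# appended with a hash chain so the trail can be audited and replayed.
log: list[dict] = []

def record(stage: str, detail: dict) -> None:
    prev = log[-1]["hash"] if log else ""
    entry = {"stage": stage, "detail": detail, "ts": time.time(), "prev": prev}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)

record("data", {"source": "health_record", "patient": "anon-17"})
record("simulation", {"protocol": "trial-42", "eligible": True})
record("action", {"recommended_by": "doctor", "decision": "enrol"})

for e in log:
    print(e["stage"], e["hash"][:12], "<-", e["prev"][:12] or "genesis")
```

Because each entry commits to its predecessor's hash, any later change to an earlier stage breaks the chain, which is what makes the trail auditable and improvable for the following trial.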
ThoughtSpot is participating at AI & Big Data Expo Global in London on February 4-5. You can watch the full interview with Jane Smith below:
Photo by Steve Johnson on Unsplash