Artificial Intelligence
Goldman Sachs and Deutsche Bank test agentic AI for trade surveillance
Banks are testing a new type of artificial intelligence, known as agentic AI, that does more than scan for keywords or follow preset rules. Instead of relying only on static alerts, some trading desks are beginning to use systems designed to reason through patterns in real time and flag conduct that may need human review.
Bloomberg detailed how Goldman Sachs and Deutsche Bank are exploring or deploying so-called “agentic” AI tools for trading surveillance. The goal is to strengthen oversight of orders and trades by using software agents that can analyse activity as it happens and identify patterns that could suggest misconduct.
Adaptive agents
Large banks use automated surveillance systems to monitor trading activity, systems that often rely on predefined rules: if a trade exceeds a certain size, deviates from a benchmark, or fits a known risk pattern, it triggers an alert. Compliance teams then review the case manually.
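The rule-based approach described above can be sketched in a few lines of Python. The field names and thresholds here are purely illustrative assumptions, not any bank's actual surveillance rules:

```python
from dataclasses import dataclass

@dataclass
class Trade:
    size: float           # notional size of the order
    benchmark_dev: float  # deviation from a reference benchmark, in %

# Illustrative thresholds; real surveillance rulebooks are far more detailed.
MAX_SIZE = 1_000_000
MAX_DEVIATION = 5.0

def rule_based_alerts(trades):
    """Flag trades that breach any static rule, for manual review."""
    alerts = []
    for t in trades:
        reasons = []
        if t.size > MAX_SIZE:
            reasons.append("size limit exceeded")
        if abs(t.benchmark_dev) > MAX_DEVIATION:
            reasons.append("benchmark deviation")
        if reasons:
            alerts.append((t, reasons))
    return alerts
```

Each rule fires independently, which is exactly why such systems produce so many false positives: every threshold breach becomes a case for a human to close.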
The challenge is scale and complexity. Modern markets generate huge volumes of data across asset classes, time zones, and trading venues. Static rules can generate large numbers of false positives, while more subtle forms of manipulation may not match known patterns.
According to Bloomberg, the newer agentic systems aim to go beyond that approach. Rather than simply matching trades against a checklist, the AI agents are designed to examine trading behaviour across multiple signals, compare it with historical activity, and detect unusual combinations of actions.
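One simple way such a system could compare current behaviour with historical activity is to combine per-signal deviations into a single anomaly score. This is a minimal, generic sketch (the signal names are hypothetical, and real systems use far richer models) in which a combination of signals can score high even when no single rule fires:

```python
import statistics

def anomaly_score(history, current):
    """Average the per-signal z-scores of today's values against history.

    `history` maps signal name -> list of past values; `current` maps
    signal name -> today's value. A high combined score means the
    overall combination of signals is unusual.
    """
    score = 0.0
    for signal, past in history.items():
        mean = statistics.mean(past)
        stdev = statistics.pstdev(past) or 1.0  # avoid divide-by-zero
        score += abs(current[signal] - mean) / stdev
    return score / len(history)
```

A trader whose order sizes, timing, and venue mix each look mildly unusual may score higher in combination than any one threshold rule would catch.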
The tools are not described as replacing compliance officers. Instead, they appear to function as an additional layer of monitoring, surfacing cases that warrant closer human inspection.
Deutsche Bank’s work with Google Cloud
Bloomberg reported that Deutsche Bank is working with Google Cloud on developing AI agents that can monitor trading activity. The system is designed to review large sets of order and execution data and flag anomalies in near real time.
The bank has been expanding its AI initiatives over the past few years, and this surveillance effort reflects how financial institutions are applying generative and large language model technology beyond chat interfaces. In this context, the AI is not answering customer questions but analysing structured and unstructured data streams tied to trading behaviour. The AI agents can help identify “complex anomalies” in orders and trades, which suggests the system may look at relationships between trades, timing, market conditions, and trader history, rather than single events in isolation.
Human compliance staff remain responsible for reviewing flagged cases and determining whether further action is required.
Goldman Sachs’ agentic AI strategy
Goldman Sachs is also exploring the use of agentic AI for surveillance, according to Bloomberg. The bank has invested heavily in AI in its trading and risk systems in recent years, and this effort appears to extend that work into compliance.
The focus, as described in the report, is on using AI agents that can operate with a degree of independence in scanning for misconduct indicators. The system may identify patterns that do not fit a clear rule but still stand out as unusual.
For regulators, the appeal is straightforward: earlier detection can reduce market harm and reputational risk. For banks, there is also an operational dimension. Compliance departments face pressure to handle large volumes of alerts while maintaining strict oversight standards. Tools that can reduce noise without lowering scrutiny are likely to attract attention.
Why “agentic AI” matters
The term “agentic AI” refers to systems that can take goal-directed actions rather than merely respond to prompts. In practice, that can mean the software is able to decide what data to examine next, compare multiple signals, and escalate findings without constant human input. In a trading context, that might involve monitoring order flows, price movements, communications metadata, and historical behaviour to assess whether activity aligns with normal patterns.
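A toy sketch of that goal-directed loop, with hypothetical signal-fetching functions standing in for real data sources: the agent works through the signals in order, stops gathering evidence early once suspicion crosses a threshold, and escalates the case to a human together with what it found:

```python
def surveillance_agent(case, fetchers, threshold=0.7):
    """Minimal goal-directed loop for one surveillance case.

    `fetchers` maps signal name -> function returning a suspicion
    contribution in [0, 1] for this case (an assumed, illustrative API).
    """
    evidence = {}
    suspicion = 0.0
    for name, fetch in fetchers.items():
        evidence[name] = fetch(case)          # agent picks the next data source
        suspicion = max(suspicion, evidence[name])
        if suspicion >= threshold:            # enough evidence: stop and escalate
            return {"escalate": True, "evidence": evidence}
    return {"escalate": False, "evidence": evidence}
```

The key property is that the loop decides for itself how much data to examine, rather than running every check on every trade the way a static rulebook does.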
This does not mean the system makes disciplinary decisions on its own. Financial institutions operate under strict regulatory regimes, and accountability remains with human supervisors. The agent’s role is to identify and organise information more effectively than static systems can.
Part of a wider compliance shift
Automated surveillance itself is not new to banks; what appears new is the application of more advanced generative AI architectures to internal control functions.
Regulators in the US and Europe have encouraged firms to improve the monitoring of market abuse and manipulation. While rules do not mandate agentic AI, they do require firms to maintain effective systems and controls. If AI tools can help meet that standard, adoption is likely to grow.
At the same time, AI in compliance raises its own questions. Banks must ensure that models are explainable, that they do not introduce bias, and that they can withstand regulatory review. Model governance, data security, and audit trails remain central concerns.
What changes for the industry
If agentic surveillance tools prove effective, they could alter how compliance teams work. Instead of sorting through large volumes of simple alerts, staff may spend more time evaluating complex cases surfaced by AI agents.
That change would not remove the need for human judgement. It may, however, change where human effort is focused. In markets where speed and data volume continue to rise, the ability to analyse patterns in real time is becoming harder to achieve with rule-based systems alone.
(Photo by Markus Spiske)
See also: Mastercard’s AI payment demo points to agent-led commerce
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.
AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.
Poor implementation of AI may be behind workforce reduction
Many organisations are eroding the foundations of business – productivity, competitiveness, and efficiency – through poor implementation of human-AI collaboration, according to cloud data and AI consultancy Datatonic. The company says that in the next phase of enterprise AI, success will come from carefully governed and designed AI that works alongside humans in “human-in-the-loop” (HiTL) systems.
The company’s research shows that companies that fail to embed AI into their human workflows are falling behind the competition as productivity slows down. Datatonic says a hybrid human-AI approach speeds up decision-making, thus improving overall operations. Scott Eivers, CEO of Datatonic, says: “AI [is] about redesigning how work gets done. The biggest risk we see in the market is productivity leakage when AI exists in isolation from the people who actually run the business.”
After years of AI investment, pressure is mounting on businesses to show returns. However, some research shows initiatives remaining in the pilot stage due to limited trust among users. As a result, organisations are failing to use AI-powered insights to positively affect decisions and workflows, meaning efficiency gains never materialise.
According to Datatonic, HiTL models are crucial for future success, combining AI speed with human judgement and accountability. This is evident in agent-assisted software development, where AI systems transform loose prompts into working code. In this case, human teams decide what needs to be developed, inspect the requirements, and review plans before any code is written. Once the direction is clear, AI agents construct the modular components.
The trend for AI in the workplace is starting to appear in finance and operations. For instance, in back-office and finance departments, AI-powered document processing is already delivering, by some estimates, a 70% reduction in invoice-processing costs, but finance teams still approve the final outcomes.
“They’re partnership stories,” says Andrew Harding, CTO of Datatonic. “Humans create evaluation systems, validate plans, set guardrails, and make decisions. AI executes at speed and scale. That combination is where real enterprise value shows up.”
Many enterprises are failing to deploy fully autonomous agents safely, according to Datatonic, with shortfalls in security controls and governance frameworks. Autonomy can only scale when organisations introduce approval checkpoints and benchmark performance standards. Evaluation systems must also be implemented as AI models evolve, ensuring they always operate safely and as intended without violating any compliance obligations.
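An approval checkpoint of the kind described can be sketched as a simple gate: low-risk actions run autonomously, while higher-risk ones wait for human sign-off. The risk scores and callback here are illustrative assumptions, not Datatonic's implementation:

```python
def run_with_checkpoint(action, risk, approve, risk_limit=0.5):
    """Execute an agent action only if it is low-risk or a human approves.

    `action` is a zero-argument callable; `risk` is a score in [0, 1];
    `approve` is a callback standing in for a real human review queue.
    """
    if risk <= risk_limit:
        return action()                # low risk: proceed autonomously
    if approve():                      # high risk: require human sign-off
        return action()
    return None                        # rejected: do not execute
```

Raising `risk_limit` over time is one way to model the gradual delegation Harding describes: as trust builds, fewer actions need to queue for approval.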
Harding says, “As trust builds, companies can responsibly delegate more to AI. But skipping governance doesn’t build speed, it creates risk.”
Datatonic predicts a major acceleration in workloads over the next two years, with preparation and validation handled by AI agents. AI systems may also be used to test, and where necessary invalidate, decisions before teams invest resources.
Scott Eivers believes the future “looks like expert departments run by smaller, nimble teams – finance, HR, marketing – each amplified by AI. The companies that win will be those that teach people to work with AI, not around it.”
(Image source: “Waterfall” by PMillera4 is licensed under CC BY-NC-ND 2.0.)
Upgrading agentic AI for finance workflows
Improving trust in agentic AI for finance workflows remains a major priority for technology leaders today.
Over the past two years, enterprises have rushed to put automated agents into real workflows, from customer support to back-office operations. These tools excel at retrieving information, yet they often struggle to provide consistent and explainable reasoning in multi-step scenarios.
Solving the automation opacity problem
Financial institutions especially rely on massive volumes of unstructured data to inform investment memos, conduct root-cause investigations, and run compliance checks. When agents handle these tasks, any failure to trace exact logic can lead to severe regulatory fines or poor asset allocation. Technology executives often find that adding more agents creates more complexity than value without better orchestration.
Open-source AI laboratory Sentient today launched Arena, a live, production-grade stress-testing environment that allows developers to evaluate competing computational approaches against demanding cognitive problems.
Sentient’s system replicates the reality of corporate workflows, deliberately feeding agents incomplete information, ambiguous instructions, and conflicting sources. Instead of scoring whether a tool generated a correct output, the platform records the full reasoning trace to help engineering teams debug failures over time.
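Recording a full reasoning trace, rather than just the final answer, can be approximated with a small decorator that logs each step's inputs and output. This is a generic sketch of the idea, not Arena's actual mechanism:

```python
import functools

def record_trace(trace):
    """Decorator that appends each step's positional inputs and result
    to `trace`, so a failed run can be debugged from its full history."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            result = fn(*args, **kwargs)
            trace.append({"step": fn.__name__, "args": args, "result": result})
            return result
        return inner
    return wrap
```

Wrapping each reasoning step this way means the trace shows not only whether the final output was right, but which intermediate step went wrong and on what inputs.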
Building reliable agentic AI systems for finance
Evaluating these capabilities before production deployment has attracted no shortage of institutional interest. Sentient has partnered with a cohort including Founders Fund, Pantera, and asset management giant Franklin Templeton, which oversees more than $1.5 trillion. Other participants in the initial phase include alphaXiv, Fireworks, Openhands, and OpenRouter.
Julian Love, Managing Principal at Franklin Templeton Digital Assets, said: “As companies look to apply AI agents across research, operations, and client-facing workflows, the question is no longer whether these systems are powerful or if they can generate an answer, but whether they’re reliable in real workflows.
“A sandbox environment like Arena – where agents are tested on real, complex workflows, and their reasoning can be inspected – will help the ecosystem separate promising ideas from production-ready capabilities and boost confidence in how this technology is integrated and scaled.”
Himanshu Tyagi, Co-Founder of Sentient, added: “AI agents are no longer an experiment inside the enterprise; they’re being put into workflows that touch customers, money, and operational outcomes.
“That shift changes what matters. It’s not enough for a system to be impressive in a demo. Enterprises need to know whether agents can reason reliably in production, where failures are expensive, and trust is fragile.”
Organisations in sensitive industries like finance require repeatability, comparability, and a method to track reliability improvements regardless of the underlying models they use for agentic AI. Incorporating platforms like Arena allows engineering directors to build resilient data pipelines while adapting open-source agent capabilities to their private internal data.
Overcoming integration bottlenecks
Survey data highlights a gap between ambition and reality. While 85 percent of businesses want to operate as agentic enterprises – and nearly three-quarters plan to deploy autonomous agents – fewer than a quarter possess mature governance frameworks.
Advancing from pilot to full-scale deployment proves difficult for many, not least because current corporate environments run an average of twelve separate agents, frequently in silos.
Open-source development models offer a path forward by providing infrastructure that enables faster experimentation. Sentient itself acts as the architect behind frameworks like ROMA and the Dobby open-source model to assist with these coordination efforts.
Focusing on computational transparency ensures that when an automated process makes a recommendation on a portfolio, human auditors can track exactly how that conclusion was reached.
By prioritising environments that record full logic traces rather than isolated right answers, technology leaders integrating agentic AI for operations like finance can secure better ROI and maintain regulatory compliance across their business.
See also: Goldman Sachs and Deutsche Bank test agentic AI for trade surveillance
ASML's high-NA EUV tools clear the runway for next-gen AI chips
The machine that will make tomorrow’s AI chips possible has just been declared ready for mass production – and the clock for the industry’s next leap has officially started. ASML, the Dutch company that holds a global monopoly on commercial extreme ultraviolet lithography equipment, confirmed this week that its High-NA EUV tools have crossed the threshold from technically impressive to genuinely production-ready.
The announcement, made exclusively to Reuters by ASML’s chief technology officer Marco Pieters ahead of a technical conference in San Jose, marks a turning point that chipmakers and AI companies have been waiting years for.
Why this matters for AI
The timing is not incidental. Current-generation EUV machines are approaching the outer edge of what they can do for advanced AI chip production – meaning the semiconductors powering large language models and AI accelerators are bumping up against a physical ceiling.
High-NA EUV tools are designed to break through it, enabling chipmakers to print finer, denser circuit patterns in fewer steps. That translates directly into more powerful and efficient chips for AI workloads.
“I think that it’s at a critical point to look at the amount of learning cycles that have happened,” Pieters told Reuters, referring to the volume of customer testing the machines have now accumulated.
The numbers that matter
ASML’s case for readiness rests on three data points it plans to release publicly. The High-NA EUV tools have now processed 500,000 silicon wafers, achieved roughly 80% uptime – with a target of 90% by year-end – and demonstrated imaging precision capable of replacing multiple conventional patterning steps with a single High-NA pass.
Together, Pieters said, those figures signal that the tools are ready for manufacturers to begin qualification. The machines don’t come cheap. At approximately US$400 million per unit – double the cost of the previous EUV generation – they represent one of the most expensive pieces of capital equipment in industrial history.
TSMC and Intel are among the named early adopters.
A two-to-three-year runway
Technical readiness and manufacturing integration are two different things, and Pieters was careful to separate them. Despite the milestone, full integration into high-volume production lines is still expected to take two to three years as chipmakers work through qualification and process development.
“Chipmakers have all the knowledge to qualify these tools,” he said – a vote of confidence in the industry’s ability to move, even if the timeline remains measured.
For the AI sector, that means the next generation of chip performance improvements is on the horizon, not yet in hand. But with ASML now saying the starting gun has fired, the race to integrate High-NA EUV into production has formally begun.
(Photo by ASML)
See also: 2025’s AI chip wars: What enterprise leaders learned about supply chain reality