Artificial Intelligence
Basware's AI agents: From invoicing to “100% automated”
A survey conducted on behalf of Basware found that 61% of organisations had deployed AI agents as experiments, and a quarter “did not fully understand” what an AI agent looks like in practice. The implication is that adoption remains uneven and, in many cases, exploratory. Basware would like to see its customers move from experimentation to operational use. The survey drew on responses from 200 finance leaders in the US, UK, France, and Germany.
The question permeating agentic activity in financial platforms is one of governance. Finance functions will delegate tasks to AI systems only if human operators retain control over authorisation, are assured of compliance, and have access to an audit trail. The actions of Basware’s agents pass through what the company describes as a central policy engine, which applies business rules and sets compliance requirements and risk thresholds; Basware refers to these controls as autonomy ‘gates’.
Kurtz described the principle: “Autonomy without trust is just risk. Our platform is uniquely designed to ensure that every AI decision is explainable and governed through the same controls finance teams already rely on.” The company sees its agents integrating with established processes, rather than working in parallel outside governance frameworks.
Basware has several more agentic AIs in development. A Supplier Agent will manage invoice disputes and payment queries, able to contact suppliers and summarise discussions. An AP Pro Agent is intended to assist staff in resolving processing questions via a generative AI interface.
The company cites early user experiences from Billerud, a paper manufacturer. Jesper Persson from the company said there had been benefits. “Since day one, we’ve perceived the desired values from the project. The quality of invoices has improved considerably, and the AI continues to evolve and improve with each passing day. The efficiency gains we achieved translated directly into tangible cost savings.”
The company’s objective is to have finance teams delegate decisions and actions to agents, and it plans to release more AI tools in 2026. It states that AI is built into its platform rather than offered as an add-on feature.
Artificial Intelligence
Anthropic: Claude faces ‘industrial-scale’ AI model distillation
Anthropic has detailed three “industrial-scale” AI model distillation campaigns by overseas labs designed to extract abilities from Claude.
These competitors generated over 16 million exchanges using approximately 24,000 deceptive accounts. Their goal was to acquire proprietary logic to improve their competing platforms.
The extraction technique, known as distillation, involves training a weaker system on the high-quality outputs of a stronger one.
When applied legitimately, distillation helps companies build smaller and cheaper versions of their applications for customers. Yet, malicious actors weaponise this method to acquire powerful capabilities in a fraction of the time and cost required for independent development.
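The legitimate version of the technique can be sketched in a few lines: a student model is trained to match the teacher’s softened output distribution rather than hard labels. This is a minimal NumPy illustration of the standard distillation objective (temperature-scaled cross-entropy); the temperature value and the toy logits are illustrative, not anything specific to Claude or to the campaigns described here.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy of the student against the teacher's softened
    distribution -- the core objective of knowledge distillation."""
    teacher_probs = softmax(teacher_logits, temperature)
    student_log_probs = np.log(softmax(student_logits, temperature))
    # T^2 scaling keeps gradient magnitudes comparable across temperatures.
    return -(temperature ** 2) * (teacher_probs * student_log_probs).sum(axis=-1).mean()

# The more closely the student mimics the teacher, the lower the loss.
teacher = np.array([[4.0, 1.0, 0.5]])
good_student = np.array([[3.9, 1.1, 0.4]])
bad_student = np.array([[0.5, 1.0, 4.0]])
assert distillation_loss(good_student, teacher) < distillation_loss(bad_student, teacher)
```

The same mechanism that makes distillation cheap for legitimate model compression is what makes it attractive to attackers: the teacher’s outputs alone carry enough signal to train a competitor.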
Protecting intellectual property like Anthropic’s Claude
Unmitigated distillation presents a severe intellectual property challenge. Because Anthropic blocks commercial access in China for national security reasons, attackers bypass regional access restrictions by deploying commercial proxy networks.
These services run what Anthropic calls “hydra cluster” architectures, which distribute traffic across APIs and third-party cloud platforms. The massive breadth of these networks means there are no single points of failure. As Anthropic noted, “when one account is banned, a new one takes its place.”
In one identified case, a single proxy network managed more than 20,000 fraudulent accounts simultaneously. These networks mix AI model distillation traffic with standard customer requests to evade detection. This directly impacts corporate resilience and forces security teams to reconsider how they monitor cloud API traffic.
Illicitly-trained models also bypass established safety guardrails, creating severe national security risks. US developers, for example, build protections to prevent state and non-state actors from using these systems to develop bioweapons or carry out malicious cyber activities.
Cloned systems lack the safeguards implemented by systems like Anthropic’s Claude, allowing dangerous capabilities to proliferate with protections stripped out entirely. Foreign competitors can feed these unprotected capabilities into military, intelligence, and surveillance systems, enabling authoritarian governments to deploy them for offensive operations.
If these distilled versions are open-sourced, the danger further multiplies as the capabilities spread freely beyond any single government’s control.
Unlawful extraction allows foreign entities, including those under the control of the Chinese Communist Party, to close the competitive gap that export controls are intended to preserve. Without visibility into these attacks, rapid advancements by foreign developers can be misread as independent innovation that circumvents export controls.
In reality, these advancements depend heavily on extracting American intellectual property at scale, an effort that still requires access to advanced chips. Restricted chip access limits both direct model training and the scale of illicit distillation.
The playbook for AI model distillation
The perpetrators followed a similar operational playbook, utilising fraudulent accounts and proxy services to access systems at scale while evading detection. The volume, structure, and focus of their prompts were distinct from normal usage patterns, reflecting deliberate capability extraction rather than legitimate use.
Anthropic attributed the campaigns targeting Claude using IP address correlation, request metadata, and infrastructure indicators. Each operation targeted highly differentiated functions: agentic reasoning, tool use, and coding.
One campaign generated over 13 million exchanges targeting agentic coding and tool orchestration. Anthropic detected this operation while it was still active, mapping timings against the competitor’s public product roadmap. When Anthropic released a new model, the competitor pivoted within 24 hours, redirecting nearly half their traffic to extract capabilities from the latest system.
Another operation generated over 3.4 million requests focused on computer vision, data analysis, and agentic reasoning. This group utilised hundreds of varied accounts to obscure their coordinated efforts. Anthropic attributed this campaign by matching request metadata to the public profiles of senior staff at the foreign laboratory. In a later phase, this competitor attempted to extract and reconstruct the host system’s reasoning traces.
Anthropic says a third AI model distillation campaign targeting Claude extracted reasoning capabilities and rubric-based grading data through over 150,000 interactions. This group forced the targeted system to map out its internal logic step-by-step, effectively generating massive volumes of chain-of-thought training data. They also extracted censorship-safe alternatives to politically sensitive queries to train their own systems to steer conversations away from restricted topics. The perpetrators generated synchronised traffic using identical patterns and shared payment methods to enable load balancing.
Request metadata for this third campaign traced these accounts back to specific researchers at the laboratory. These requests often appear benign on their own, such as a prompt simply asking the system to act as an expert data analyst delivering insights grounded in complete reasoning. But when variations of that exact prompt arrive tens of thousands of times across hundreds of coordinated accounts targeting the same narrow capability, the extraction pattern becomes clear.
Massive volume concentrated in specific areas, highly repetitive structures, and content mapping directly to training needs are the hallmarks of a distillation attack.
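Those three hallmarks lend themselves to a simple traffic heuristic: collapse near-identical prompts onto a template and flag templates that arrive in bulk from many coordinated accounts. This is a toy sketch of the idea, not Anthropic’s actual classifier; the normalisation rule and the thresholds are illustrative assumptions.

```python
from collections import Counter, defaultdict
import re

def normalise(prompt: str) -> str:
    """Crude templating: lowercase and replace digits so near-identical
    prompt variants collapse onto one template."""
    return re.sub(r"\d+", "<n>", prompt.lower()).strip()

def flag_distillation(requests, min_accounts=3, min_volume=100):
    """Flag prompt templates showing the signature described above:
    high volume, high repetition, spread across coordinated accounts.

    `requests` is an iterable of (account_id, prompt) pairs.
    Thresholds are illustrative, not calibrated production values.
    """
    volume = Counter()
    accounts = defaultdict(set)
    for account_id, prompt in requests:
        template = normalise(prompt)
        volume[template] += 1
        accounts[template].add(account_id)
    return [t for t in volume
            if volume[t] >= min_volume and len(accounts[t]) >= min_accounts]

# Simulated traffic: one template hammered by ten accounts, plus noise.
traffic = [(f"acct{i % 10}", f"Act as an expert data analyst on dataset {i}")
           for i in range(500)]
traffic += [("acct99", "What's the weather like?")]
flagged = flag_distillation(traffic)
assert len(flagged) == 1
```

A production system would fingerprint behaviour far more robustly (embeddings rather than string templates, payment and infrastructure signals, timing correlation), but the detection logic rests on the same three hallmarks.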
Implementing actionable defences
Protecting enterprise environments requires adopting multi-layered defences to make such extraction efforts harder to execute and easier to identify. Anthropic advises implementing behavioural fingerprinting and traffic classifiers designed to identify AI model distillation patterns in API traffic.
IT leaders must also strengthen verification processes for common vulnerability pathways, such as educational accounts, security research programmes, and startup organisations.
Companies should integrate product-level and API-level safeguards designed to reduce the efficacy of model outputs for illicit distillation. This must be done without degrading the experience for legitimate, paying customers.
Detecting coordinated activity across large numbers of accounts is an absolute necessity. This includes specifically monitoring for the continuous elicitation of chain-of-thought outputs used to construct reasoning training data.
Cross-industry collaboration also remains essential, as these attacks are growing in intensity and sophistication. This requires rapid and coordinated intelligence sharing across AI laboratories, cloud providers, and policymakers.
Anthropic has published its findings about Claude being targeted by AI model distillation campaigns to provide a more holistic picture of the landscape and make the evidence available to all stakeholders. By protecting AI architectures with rigorous access controls, technology officers can secure their competitive edge while ensuring ongoing governance.
See also: How disconnected clouds improve AI data governance
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.
AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.
Artificial Intelligence
How disconnected clouds improve AI data governance
Disconnected clouds aim to improve AI data governance as businesses rethink their infrastructure under tighter regulatory expectations.
Ensuring operational continuity in isolated environments has become increasingly vital for businesses. Facilities lacking continuous internet access face unique constraints where external dependencies become unacceptable.
Microsoft recently expanded its capabilities to allow regulated industries and public sectors to participate independently in the digital economy. Trust in these systems stems from confidence that data remains protected, controls are enforceable, and operations proceed regardless of external conditions.
The company now offers full stack options across connected, intermittently connected, and fully disconnected modes. This architecture unifies Azure Local, Microsoft 365 Local, and Foundry Local into a single sovereign private cloud.
Bringing these elements together provides a localised experience resilient to any connectivity condition. By standardising governance across all deployments, it helps enterprises to prevent fragmented architectures.
Azure Local disconnected operations enable organisations to run vital infrastructure using familiar Azure governance and policy controls completely offline. Execution, management, and policy enforcement stay entirely within customer-operated facilities.
This approach allows companies to maintain uninterrupted operations and keep identities protected within their established boundaries. Implementations scale from small deployments to demanding, data-intensive workloads.
Improving resilience and AI data governance in tandem
Deploying AI in sovereign environments introduces high compute requirements. Foundry Local enables enterprises to run multimodal large models completely offline.
Utilising modern hardware from partners like NVIDIA, customers deploy AI inferencing on their own physical servers. This ensures data and application programming interfaces operate strictly within customer-controlled boundaries. Customers maintain complete authority over their hardware even as AI inferencing demands increase over time.
Gerard Hoffmann, CEO of Proximus Luxembourg, said: “The availability of Azure Local disconnected operations represents a breakthrough for organisations that need control over their data without sacrificing the power of the Microsoft Cloud.
“For Luxembourg, where digital sovereignty is not just a principle but a strategic necessity, this model offers the resilience, autonomy and trust our market expects. By combining Microsoft’s technological leadership with Proximus NXT’s sovereign cloud expertise, we are enabling our customers to innovate confidently—even in fully-disconnected mode.”
CIOs planning offline deployments must map workloads to the correct control posture based on risk, regulation, and specific mission requirements. Since disconnected environments are not one-size-fits-all, businesses can start fast with smaller deployments and expand their capabilities over time.
Implementing a disconnected private cloud with AI support answers a business requirement for highly-regulated sectors, enabling secure data governance even when external connectivity is absent.
See also: Deploying agentic finance AI for immediate business ROI
Artificial Intelligence
Deploying agentic finance AI for immediate business ROI
Agentic finance AI improves business efficiency and ROI only when deployed with strict governance and clear return on investment targets.
A recent FT Longitude survey of 200 finance leaders across the US, UK, France, and Germany showed 61 percent have deployed AI agents merely as experiments. Meanwhile, one in four executives admit they do not fully grasp what these agents look like in practice.
Advancing agentic finance AI beyond experiments
Finance departments need governed systems that combine language processing with business logic to deliver actual value.
Providers of Invoice Lifecycle Management platforms are introducing new agents designed to accelerate invoice processing and push accounts payable toward greater autonomy. Recent market solutions use generative AI, deep learning, and natural language processing to manage the entire workflow, from initial data ingestion through to final reconciliation.
These digital teammates handle task execution rather than replacing human employees entirely, allowing staff to focus on higher-level business planning.
Within these ecosystems, specialised business agents provide contextual and real-time guidance regarding the next best actions for handling invoices. Data agents allow staff to query system information using natural language, easily finding answers about awaiting approvals in specific regions or identifying suppliers offering early payment discounts.
Governing autonomous finance workflows
Finance teams will only hand over tasks to agentic AI if they retain control. Finance departments require verifiable audit trails and explainable logic for every action, avoiding networks of disconnected bots.
Industry leaders note that autonomy without trust isn’t acceptable, especially in sensitive industries like finance. Platforms must ensure every AI decision is explainable, auditable, and governed through existing finance controls. This approach helps safely delegate workloads to algorithms while remaining fully compliant and protected.
To enable this trust, every action performed by an AI agent routes through a central policy engine. Before executing any task, the system passes the proposed action through specific autonomy gates that enforce the customer’s business rules, risk thresholds, and compliance requirements. This architecture ensures algorithms manage the bulk of the workload while finance personnel retain total visibility and a complete audit trail.
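The gate-and-audit pattern described above can be sketched simply: every proposed action is checked against configured rules before execution, and every decision is logged either way. This is a minimal illustration, not any vendor’s implementation; the `Invoice` fields, rule set, and threshold are hypothetical stand-ins for a customer’s configured business rules, risk thresholds, and compliance requirements.

```python
from dataclasses import dataclass

@dataclass
class Invoice:
    supplier: str
    amount: float
    approved_supplier: bool

def autonomy_gate(action: str, invoice: Invoice, max_auto_amount: float = 10_000.0):
    """Route a proposed agent action through policy checks before execution.
    Returns a decision plus an audit-trail record for every invocation."""
    audit = {"action": action, "supplier": invoice.supplier, "amount": invoice.amount}
    if not invoice.approved_supplier:
        return "escalate_to_human", audit   # compliance gate: unknown supplier
    if invoice.amount > max_auto_amount:
        return "escalate_to_human", audit   # risk-threshold gate: amount too high
    return "execute", audit                 # within configured autonomy limits

decision, trail = autonomy_gate("pay_invoice", Invoice("Acme", 2_500.0, True))
assert decision == "execute"
decision, _ = autonomy_gate("pay_invoice", Invoice("Unknown Co", 2_500.0, False))
assert decision == "escalate_to_human"
```

The design point is that the gate, not the agent, is the unit of trust: rules live in one policy engine, so auditors review a single control surface rather than the behaviour of each agent.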
Building automated procurement operations
Future agentic finance AI capabilities will automate issue resolution and connect data across systems for faster decision-making.
Capabilities planned for 2026 include supplier agents designed to manage invoice disputes and payment queries. These agents will automatically telephone suppliers to explain discrepancies, summarise the conversation, and outline subsequent steps to achieve faster resolutions. Professional agents, meanwhile, will assist clerks in resolving real-time processing questions using natural language to cut manual effort and delays.
AI must operate as an integral business component rather than a bonus feature, requiring intelligent, secure, and ethical application to drive cost efficiencies and enhance operations. By centralising control and ensuring every automated decision from agentic AI passes through established compliance checks, organisations can safely elevate their finance operations to fully autonomous execution.
See also: Mastercard’s AI payment demo points to agent-led commerce
