Artificial Intelligence

COBOL modernisation just got an AI shortcut – and the market noticed

It’s an open secret, albeit one surprisingly few people seem to know: the institutions keeping the global financial system turning over run code that is ancient, barely understood, and frighteningly hard to replace. Now AI is finally making that problem solvable – and the market has responded with a reality check for one of technology’s oldest names.

IBM shares recorded their worst single-day drop in more than 25 years earlier this week, plunging 13% after AI startup Anthropic said its Claude Code tool can accelerate COBOL modernisation – the kind of painstaking, expensive legacy work that has underpinned a portion of IBM’s consulting revenue for years.

An Anthropic blog stated that “modernising a COBOL system once required armies of consultants spending years mapping workflows,” and argued that tools like Claude Code can now automate the exploration and analysis phases that consume most of the effort in COBOL modernisation. That single claim was enough to send investors reaching for the sell button.

COBOL is bigger than most realise

To understand why the reaction was so sharp, it helps to understand just how entrenched COBOL remains. Hundreds of billions of lines of COBOL code run in production daily, powering critical systems in finance and government sectors. The language handles an estimated 95% of ATM transactions in the US alone.

The deeper problem isn’t the code itself – it’s the people. The pool of developers who understand COBOL continues to shrink as the workforce that built these systems retires. That talent scarcity is precisely what made COBOL modernisation so expensive for so long, and what made large consulting engagements – the kind IBM and rivals like Accenture and Cognizant built profitable practices around – essentially unavoidable.

Anthropic argues that AI flips this equation entirely. Claude Code works by mapping dependencies in thousands of lines of code, documenting workflows, identifying risks faster than human analysts, and providing teams with deep insights for informed decision-making. The company says teams can now modernise COBOL codebases in quarters not years.
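To make that exploration step concrete, here is a toy sketch – not Anthropic’s or IBM’s actual tooling, and a deliberate simplification of real COBOL parsing – of the dependency-mapping idea: statically scan source text for CALL and COPY statements and build a program-to-dependency graph.

```python
import re
from collections import defaultdict

# Two of the most common static dependencies in COBOL source:
# CALL 'SUBPROG' (program calls) and COPY MEMBER (copybook includes).
CALL_RE = re.compile(r"\bCALL\s+'([A-Z0-9-]+)'", re.IGNORECASE)
COPY_RE = re.compile(r"\bCOPY\s+([A-Z0-9-]+)", re.IGNORECASE)

def map_dependencies(sources: dict[str, str]) -> dict[str, set[str]]:
    """Build a program -> {dependencies} graph from raw COBOL text."""
    graph = defaultdict(set)
    for program, text in sources.items():
        for match in CALL_RE.finditer(text):
            graph[program].add(match.group(1).upper())
        for match in COPY_RE.finditer(text):
            graph[program].add(match.group(1).upper())
    return dict(graph)

# Hypothetical two-program codebase for illustration.
sources = {
    "PAYROLL": "PROCEDURE DIVISION.\n    CALL 'TAXCALC' USING WS-REC.\n    COPY EMPREC.",
    "TAXCALC": "PROCEDURE DIVISION.\n    COPY TAXTABLE.",
}
print({p: sorted(d) for p, d in map_dependencies(sources).items()})
# {'PAYROLL': ['EMPREC', 'TAXCALC'], 'TAXCALC': ['TAXTABLE']}
```

Real engagements scale this kind of analysis to millions of lines and feed the resulting graph into documentation and risk assessment – the phase Anthropic claims AI now compresses.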

IBM was already here

What the market’s reaction may be overlooking is that IBM has been making this argument itself for some time. Anthropic’s post comes about three years after IBM suggested using AI to rewrite COBOL as Java and created a product called “watsonx Code Assistant for Z” to do it. IBM CEO Arvind Krishna said as recently as July 2025 that the company’s AI coding assistant for mainframes “has got very good adoption,” with the majority of customers using it to understand their COBOL codebase and decide what to modernise.

IBM defended its position on Monday, saying its mainframe platform delivers the same quality of performance and security regardless of programming language – COBOL or otherwise. And analysts were quick to add nuance to the panic.

Evercore ISI analyst Amit Daryanani noted that “clients already had the option to migrate from the mainframe, yet they are sticking with the platform,” suggesting the fear of displacement may be outrunning the reality.

The broader pattern

IBM wasn’t alone in taking a hit. Accenture and Cognizant also declined following the news – a sign that investors are looking at the entire consulting model around legacy modernisation, not IBM’s mainframe hardware business. Just last week, cybersecurity stocks sold off sharply after Anthropic announced Claude Code Security, a tool that scans codebases for vulnerabilities.

The pattern is becoming familiar: each new AI capability announcement triggers a reassessment of which existing revenue streams might be compressed, and the market prices in the fear immediately.

IBM didn’t stay quiet. Rob Thomas, the company’s Senior Vice President and Chief Commercial Officer, pushed back directly in a blog post, drawing a line the market appeared to have missed: “Translating code is one thing. Modernising a platform is something else entirely. The two are not the same, and the gap between them is where most enterprises run into trouble.”

His argument is worth sitting with. The value IBM’s mainframe delivers, Thomas contends, has nothing to do with COBOL as a language – it lives in the vertically integrated stack underneath it: z/OS, transaction processing architecture, quantum-safe encryption, and decades of hardware-software optimisation that no code translation tool touches.

Anthropic’s Claude Code, in his reading, is solving a real problem – just not the one that matters most for enterprises running IBM Z. He also raised a point that complicates the headline narrative further: roughly 40% of COBOL actually runs on Windows, Linux, and other distributed platforms – not mainframes at all.

Much of what’s being framed as an IBM mainframe story is, at least in part, a distributed systems problem folded into a mainframe headline. And IBM’s own clients are already making the case.

Royal Bank of Canada has used IBM’s watsonx Code Assistant for Z to map dependencies and build modernisation blueprints for core applications. The National Organisation for Social Insurance reported a 94% reduction in time to analyse legacy COBOL code using the same tool – cutting an eight-hour task to roughly 30 minutes.

Whether Monday’s selloff was a fair verdict or a reflexive one, the underlying change is real: AI is making COBOL modernisation economically viable for the first time in decades. The question IBM is asking – and the market hasn’t fully answered – is whether that’s a threat to its business or an acceleration of the transformation it’s already leading.

See also: Hitachi bets on industrial expertise to win the physical AI race

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.



Anthropic: Claude faces ‘industrial-scale’ AI model distillation


Anthropic has detailed three “industrial-scale” AI model distillation campaigns by overseas labs designed to extract capabilities from Claude.

These competitors generated over 16 million exchanges using approximately 24,000 deceptive accounts. Their goal was to acquire proprietary logic to improve their competing platforms.

The extraction technique, known as distillation, involves training a weaker system on the high-quality outputs of a stronger one.

When applied legitimately, distillation helps companies build smaller and cheaper versions of their models for customers. Yet malicious actors weaponise the method to acquire powerful capabilities in a fraction of the time and cost required for independent development.
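The legitimate version of the technique is easy to picture: harvest a supervised fine-tuning set from a stronger model’s completions. Here is a minimal sketch, with a stub standing in for a real teacher API – all names are hypothetical.

```python
# Toy illustration of sequence-level distillation: collect a supervised
# fine-tuning dataset for a small "student" model from a stronger
# "teacher" model's outputs. The teacher here is a stub, not a real API.

def teacher_model(prompt: str) -> str:
    """Stand-in for a call to the stronger model (hypothetical)."""
    return f"High-quality answer to: {prompt}"

def build_distillation_dataset(prompts: list[str]) -> list[dict]:
    """Each (prompt, teacher completion) pair becomes one supervised
    training example for the student model."""
    return [{"prompt": p, "completion": teacher_model(p)} for p in prompts]

examples = build_distillation_dataset([
    "Summarise this quarterly report",
    "Plan a zero-downtime database migration",
])
print(len(examples))  # 2
```

The illicit variant Anthropic describes differs chiefly in scale and intent: the same harvesting loop, run across millions of prompts through thousands of fraudulent accounts.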

Protecting intellectual property like Anthropic’s Claude

Unmitigated distillation presents a severe intellectual property challenge. Because Anthropic blocks commercial access in China for national security reasons, attackers bypass regional access restrictions by deploying commercial proxy networks.

These services run what Anthropic calls “hydra cluster” architectures, which distribute traffic across APIs and third-party cloud platforms. The massive breadth of these networks means there are no single points of failure. As Anthropic noted, “when one account is banned, a new one takes its place.”

In one identified case, a single proxy network managed more than 20,000 fraudulent accounts simultaneously. These networks mix AI model distillation traffic with standard customer requests to evade detection. This directly impacts corporate resilience and forces security teams to reconsider how they monitor cloud API traffic.

Illicitly-trained models also bypass established safety guardrails, creating severe national security risks. US developers, for example, build protections to prevent state and non-state actors from using these systems to develop bioweapons or carry out malicious cyber activities.

Cloned systems lack the safeguards implemented by systems like Anthropic’s Claude, allowing dangerous capabilities to proliferate with protections stripped out entirely. Foreign competitors can feed these unprotected capabilities into military, intelligence, and surveillance systems, enabling authoritarian governments to deploy them for offensive operations.

If these distilled versions are open-sourced, the danger further multiplies as the capabilities spread freely beyond any single government’s control.

Unlawful extraction allows foreign entities, including those under the control of the Chinese Communist Party, to close the competitive gap protected by export controls. Without visibility into these attacks, rapid advances by foreign developers can be mistaken for homegrown innovation that circumvents export controls.

In reality, these advancements depend heavily on extracting American intellectual property at scale, an effort that still requires access to advanced chips. Restricted chip access limits both direct model training and the scale of illicit distillation.

The playbook for AI model distillation

The perpetrators followed a similar operational playbook, utilising fraudulent accounts and proxy services to access systems at scale while evading detection. The volume, structure, and focus of their prompts were distinct from normal usage patterns, reflecting deliberate capability extraction rather than legitimate use. 

Anthropic attributed the campaigns using IP address correlation, request metadata, and infrastructure indicators. Each operation targeted a different highly differentiated capability: agentic reasoning, tool use, or coding.

One campaign generated over 13 million exchanges targeting agentic coding and tool orchestration. Anthropic detected this operation while it was still active, mapping timings against the competitor’s public product roadmap. When Anthropic released a new model, the competitor pivoted within 24 hours, redirecting nearly half their traffic to extract capabilities from the latest system.

Another operation generated over 3.4 million requests focused on computer vision, data analysis, and agentic reasoning. This group utilised hundreds of varied accounts to obscure their coordinated efforts. Anthropic attributed this campaign by matching request metadata to the public profiles of senior staff at the foreign laboratory. In a later phase, this competitor attempted to extract and reconstruct the host system’s reasoning traces.

Anthropic says a third AI model distillation campaign targeting Claude extracted reasoning capabilities and rubric-based grading data through over 150,000 interactions. This group forced the targeted system to map out its internal logic step-by-step, effectively generating massive volumes of chain-of-thought training data. They also extracted censorship-safe alternatives to politically sensitive queries to train their own systems to steer conversations away from restricted topics. The perpetrators generated synchronised traffic using identical patterns and shared payment methods to enable load balancing. 

Request metadata for this third campaign traced these accounts back to specific researchers at the laboratory. These requests often appear benign on their own, such as a prompt simply asking the system to act as an expert data analyst delivering insights grounded in complete reasoning. But when variations of that exact prompt arrive tens of thousands of times across hundreds of coordinated accounts targeting the same narrow capability, the extraction pattern becomes clear.

Massive volume concentrated in specific areas, highly repetitive structures, and content mapping directly to training needs are the hallmarks of a distillation attack.

Implementing actionable defences

Protecting enterprise environments requires adopting multi-layered defences to make such extraction efforts harder to execute and easier to identify. Anthropic advises implementing behavioural fingerprinting and traffic classifiers designed to identify AI model distillation patterns in API traffic.
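As an illustration only – Anthropic has not published its classifiers, and every name and threshold below is a hypothetical placeholder – the fingerprinting idea can be sketched as collapsing near-duplicate prompts into a structural template and flagging templates that show high volume across many accounts.

```python
import hashlib
import re
from collections import Counter

def template_fingerprint(prompt: str) -> str:
    """Collapse near-duplicate prompt variations (different numbers,
    quoted entities, spacing) into one structural template hash."""
    t = prompt.lower()
    t = re.sub(r"\d+", "<NUM>", t)
    t = re.sub(r'"[^"]*"', "<QUOTED>", t)
    t = re.sub(r"\s+", " ", t).strip()
    return hashlib.sha256(t.encode()).hexdigest()[:16]

def flag_distillation_patterns(requests, min_volume=30, min_accounts=3):
    """requests: iterable of (account_id, prompt) pairs. Flag templates
    showing the hallmarks described above: massive volume on a narrow
    capability, repeated across many coordinated accounts."""
    volume = Counter()
    accounts = {}
    for account_id, prompt in requests:
        fp = template_fingerprint(prompt)
        volume[fp] += 1
        accounts.setdefault(fp, set()).add(account_id)
    return [fp for fp, n in volume.items()
            if n >= min_volume and len(accounts[fp]) >= min_accounts]

# Simulated traffic: one coordinated extraction pattern plus normal noise.
traffic = [(f"acct-{i % 4}",
            f"Act as an expert data analyst and explain dataset {i} step by step")
           for i in range(50)]
traffic.append(("acct-99", "What is the capital of France?"))
print(len(flag_distillation_patterns(traffic)))  # 1
```

A production classifier would weigh far more signals – timing correlation, payment-method overlap, infrastructure indicators – but the core pattern is the same: individually benign requests become suspicious in aggregate.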

IT leaders must also strengthen verification processes for common vulnerability pathways, such as educational accounts, security research programmes, and startup organisations.

Companies should integrate product-level and API-level safeguards designed to reduce the efficacy of model outputs for illicit distillation. This must be done without degrading the experience for legitimate, paying customers.

Detecting coordinated activity across large numbers of accounts is an absolute necessity. This includes specifically monitoring for the continuous elicitation of chain-of-thought outputs used to construct reasoning training data.

Cross-industry collaboration also remains essential, as these attacks are growing in intensity and sophistication. This requires rapid and coordinated intelligence sharing across AI laboratories, cloud providers, and policymakers.

Anthropic has published its findings about Claude being targeted by AI model distillation campaigns to provide a more holistic picture of the landscape and make the evidence available to all stakeholders. By protecting AI systems with rigorous access controls, technology officers can secure their competitive edge while ensuring ongoing governance.

See also: How disconnected clouds improve AI data governance



How disconnected clouds improve AI data governance


Disconnected clouds aim to improve AI data governance as businesses rethink their infrastructure under tighter regulatory expectations.

Ensuring operational continuity in isolated environments has become increasingly vital for businesses. Facilities lacking continuous internet access face unique constraints where external dependencies become unacceptable.

Microsoft recently expanded its capabilities to allow regulated industries and the public sector to participate independently in the digital economy. Trust in these systems stems from confidence that data remains protected, controls are enforceable, and operations proceed regardless of external conditions.

The company now offers full-stack options across connected, intermittently connected, and fully disconnected modes. This architecture unifies Azure Local, Microsoft 365 Local, and Foundry Local into a single sovereign private cloud.

Bringing these elements together provides a localised experience resilient to any connectivity condition. By standardising governance across all deployments, it helps enterprises to prevent fragmented architectures.

Azure Local disconnected operations enable organisations to run vital infrastructure using familiar Azure governance and policy controls completely offline. Execution, management, and policy enforcement stay entirely within customer-operated facilities. 

This approach allows companies to maintain uninterrupted operations and keep identities protected within their established boundaries. Implementations scale from small deployments to demanding, data-intensive workloads.

Improving resilience and AI data governance in tandem

Deploying AI in sovereign environments introduces high compute requirements. Foundry Local enables enterprises to run multimodal large models completely offline.

Utilising modern hardware from partners like NVIDIA, customers deploy AI inferencing on their own physical servers. This ensures data and application programming interfaces operate strictly within customer-controlled boundaries. Customers maintain complete authority over their hardware even as AI inferencing demands increase over time.

Gerard Hoffmann, CEO of Proximus Luxembourg, said: “The availability of Azure Local disconnected operations represents a breakthrough for organisations that need control over their data without sacrificing the power of the Microsoft Cloud.

“For Luxembourg, where digital sovereignty is not just a principle but a strategic necessity, this model offers the resilience, autonomy and trust our market expects. By combining Microsoft’s technological leadership with Proximus NXT’s sovereign cloud expertise, we are enabling our customers to innovate confidently—even in fully-disconnected mode.”

CIOs planning offline deployments must map workloads to the correct control posture based on risk, regulation, and specific mission requirements. Since disconnected environments are not one-size-fits-all, businesses can start fast with smaller deployments and expand their capabilities over time.

Implementing a disconnected private cloud with AI support answers a business requirement for highly-regulated sectors, enabling secure data governance even when external connectivity is absent.

See also: Deploying agentic finance AI for immediate business ROI



Deploying agentic finance AI for immediate business ROI


Agentic finance AI improves business efficiency and ROI only when deployed with strict governance and clear return on investment targets.

A recent FT Longitude survey of 200 finance leaders across the US, UK, France, and Germany showed 61 percent have deployed AI agents merely as experiments. Meanwhile, one in four executives admit they do not fully grasp what these agents look like in practice.

Advancing agentic finance AI beyond experiments

Finance departments need governed systems that combine language processing with business logic to deliver actual value.

Providers of Invoice Lifecycle Management platforms are introducing new agents designed to accelerate invoice processing and push accounts payable toward greater autonomy. Recent market solutions use generative AI, deep learning, and natural language processing to manage the entire workflow, from initial data ingestion through to final reconciliation.

These digital teammates handle task execution rather than replacing human employees entirely, freeing staff to focus on higher-level business planning.

Within these ecosystems, specialised business agents provide contextual and real-time guidance regarding the next best actions for handling invoices. Data agents allow staff to query system information using natural language, easily finding answers about awaiting approvals in specific regions or identifying suppliers offering early payment discounts.

Governing autonomous finance workflows

Finance teams will only hand over tasks to agentic AI if they retain control. Finance departments require verifiable audit trails and explainable logic for every action, avoiding networks of disconnected bots.

Industry leaders note that autonomy without trust isn’t acceptable, especially in sensitive industries like finance. Platforms must ensure every AI decision is explainable, auditable, and governed through existing finance controls. This approach helps safely delegate workloads to algorithms while remaining fully compliant and protected.

To enable this trust, every action performed by an AI agent routes through a central policy engine. Before executing any task, the system passes the proposed action through specific autonomy gates that enforce the customer’s business rules, risk thresholds, and compliance requirements. This architecture ensures algorithms manage the bulk of the workload while finance personnel retain total visibility and a complete audit trail.
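In code, an autonomy gate of this kind reduces to a small, auditable decision function. The sketch below is illustrative only – the rules, thresholds, and field names are hypothetical placeholders, not any vendor’s actual policy engine.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    kind: str               # e.g. "pay_invoice", "dispute_invoice"
    amount: float
    supplier_verified: bool

# Hypothetical business rule: a real deployment would load thresholds
# from the customer's compliance configuration.
MAX_AUTONOMOUS_PAYMENT = 10_000.00

def autonomy_gate(action: ProposedAction) -> tuple[bool, str]:
    """Return (approved, reason). Anything not approved is escalated
    to a human; every decision is recorded for the audit trail."""
    if not action.supplier_verified:
        return False, "supplier not verified: escalate to human review"
    if action.kind == "pay_invoice" and action.amount > MAX_AUTONOMOUS_PAYMENT:
        return False, "amount exceeds autonomy threshold: require approval"
    return True, "within policy: execute autonomously"

# Every agent-proposed action passes through the gate, and the outcome
# (approved or escalated) is appended to an audit log.
audit_log = []
for action in [ProposedAction("pay_invoice", 1_250.00, True),
               ProposedAction("pay_invoice", 48_000.00, True)]:
    approved, reason = autonomy_gate(action)
    audit_log.append((action.kind, action.amount, approved, reason))

print(audit_log[0][2], audit_log[1][2])  # True False
```

The design choice is the point: the agent proposes, the policy engine disposes, and the log – not the model – is the source of truth for compliance.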

Building automated procurement operations

Future agentic finance AI capabilities will automate issue resolution and connect data across systems for faster decision-making.

Capabilities planned for 2026 include supplier agents designed to manage invoice disputes and payment queries. These agents will automatically telephone suppliers to explain discrepancies, summarise the conversation, and outline subsequent steps to achieve faster resolutions. Professional agents, meanwhile, will assist clerks with real-time processing questions using natural language, cutting manual effort and delays.

AI must operate as an integral business component rather than a bonus feature, requiring intelligent, secure, and ethical application to drive cost efficiencies and enhance operations. By centralising control and ensuring every automated decision from agentic AI passes through established compliance checks, organisations can safely elevate their finance operations to fully autonomous execution.

See also: Mastercard’s AI payment demo points to agent-led commerce
