State-sponsored hackers exploit AI for advanced cyberattacks

State-sponsored hackers are exploiting AI to accelerate cyberattacks, with threat actors from Iran, North Korea, China, and Russia weaponising models like Google’s Gemini to craft sophisticated phishing campaigns and develop malware, according to a new report from Google’s Threat Intelligence Group (GTIG).

The quarterly AI Threat Tracker report, released today, reveals how government-backed attackers integrated artificial intelligence throughout the attack lifecycle in the final quarter of 2025, achieving productivity gains in reconnaissance, social engineering, and malware development.

“For government-backed threat actors, large language models have become essential tools for technical research, targeting, and the rapid generation of nuanced phishing lures,” GTIG researchers stated in the report.

AI-powered reconnaissance by state-sponsored hackers targets the defence sector

Iranian threat actor APT42 used Gemini to augment reconnaissance and targeted social engineering operations. The group misused the AI model to enumerate official email addresses for specific entities and conduct research to establish credible pretexts for approaching targets.

By feeding Gemini a target’s biography, APT42 crafted personas and scenarios designed to elicit engagement. The group also used the AI to translate between languages and better understand non-native phrases – abilities that help state-sponsored hackers bypass traditional phishing red flags like poor grammar or awkward syntax.

North Korean government-backed actor UNC2970, which focuses on defence targeting and impersonating corporate recruiters, used Gemini to synthesise open-source intelligence and profile high-value targets. The group’s reconnaissance included searching for information on major cybersecurity and defence companies, mapping specific technical job roles, and gathering salary information.

“This activity blurs the distinction between routine professional research and malicious reconnaissance, as the actor gathers the necessary components to create tailored, high-fidelity phishing personas,” GTIG noted.

Model extraction attacks surge

Beyond operational misuse, Google DeepMind and GTIG identified an increase in model extraction attempts – also known as “distillation attacks” – aimed at stealing intellectual property from AI models.

One campaign targeting Gemini’s reasoning abilities involved over 100,000 prompts designed to coerce the model into outputting its full reasoning process. The breadth of questions suggested an attempt to replicate Gemini’s reasoning abilities in non-English target languages across a wide range of tasks.

How model extraction attacks work to steal AI intellectual property. (Image: Google GTIG)
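Mechanically, these attacks piggyback on knowledge distillation, a standard and legitimate machine learning technique in which a small “student” model is trained to imitate a larger “teacher”. The sketch below shows the benign form in PyTorch; an extraction attack applies the same idea to someone else’s model, substituting harvested API responses for direct access to the teacher. It is a minimal illustration, not a reproduction of any attacker tooling.

```python
# Minimal sketch of knowledge distillation (Hinton-style), the technique
# that "distillation attacks" repurpose against proprietary models.
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, batch, optimizer, T=2.0):
    """One training step: nudge the student's softened output
    distribution towards the (frozen) teacher's."""
    with torch.no_grad():
        teacher_logits = teacher(batch)  # in an attack, harvested API responses stand in here
    student_logits = student(batch)
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # standard temperature scaling
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```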

While GTIG observed no direct attacks on frontier models by advanced persistent threat actors, the team identified and disrupted frequent model extraction attempts from private sector entities and researchers worldwide seeking to clone proprietary logic.

Google’s systems recognised these attacks in real-time and deployed defences to protect internal reasoning traces.
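Google has not published its detection logic. Purely as a toy illustration of the signal involved: bulk reasoning-trace harvesting tends to combine unusually high query volume with prompts that explicitly elicit step-by-step reasoning, both of which are cheap to measure. The thresholds and cue phrases below are invented for the example.

```python
# Toy heuristic for spotting extraction-style query patterns.
# Not Google's defence; thresholds and cues are illustrative only.
from collections import Counter

REASONING_CUES = ("step by step", "show your reasoning", "explain each step")

def flag_extraction_suspects(request_log, min_volume=10_000, min_cue_ratio=0.5):
    """request_log: iterable of (account_id, prompt) pairs.
    Returns accounts combining high volume with a high share of
    reasoning-eliciting prompts."""
    totals, cued = Counter(), Counter()
    for account_id, prompt in request_log:
        totals[account_id] += 1
        if any(cue in prompt.lower() for cue in REASONING_CUES):
            cued[account_id] += 1
    return [acct for acct, n in totals.items()
            if n >= min_volume and cued[acct] / n >= min_cue_ratio]
```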

AI-integrated malware emerges

GTIG observed malware samples, tracked as HONESTCUE, that use Gemini’s API to outsource functionality generation. The malware is designed to undermine traditional network-based detection and static analysis through a multi-layered obfuscation approach.

HONESTCUE functions as a downloader and launcher framework that sends prompts via Gemini’s API and receives C# source code as responses. The fileless secondary stage compiles and executes payloads directly in memory, leaving no artefacts on disk.

HONESTCUE malware’s two-stage attack process using Gemini’s API. (Image: Google GTIG)
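The report does not describe specific countermeasures, but the design implies one inexpensive hunting signal: because HONESTCUE must call a generative AI API at runtime, outbound traffic to LLM endpoints from hosts with no sanctioned AI use is worth triaging. A toy sketch follows, with a hypothetical allowlist and a simplified “host domain” log format.

```python
# Defensive sketch (not GTIG tooling): flag LLM API egress from
# hosts that have no business talking to a generative AI service.
LLM_API_DOMAINS = {
    "generativelanguage.googleapis.com",  # Gemini API
    "api.openai.com",
}
SANCTIONED_HOSTS = {"dev-workstation-01", "ml-build-server"}  # hypothetical allowlist

def hunt_llm_egress(proxy_log_lines):
    """proxy_log_lines: iterable of 'host domain' strings.
    Yields (host, domain) pairs worth a closer look."""
    for line in proxy_log_lines:
        host, _, domain = line.strip().partition(" ")
        if domain in LLM_API_DOMAINS and host not in SANCTIONED_HOSTS:
            yield host, domain
```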

Separately, GTIG identified COINBAIT, a phishing kit whose construction was likely accelerated by AI code generation tools. The kit, which masquerades as a major cryptocurrency exchange for credential harvesting, was built using the AI-powered platform Lovable AI.

ClickFix campaigns abuse AI chat platforms

In a novel social engineering campaign first observed in December 2025, Google saw threat actors abuse the public sharing features of generative AI services – including Gemini, ChatGPT, Copilot, DeepSeek, and Grok – to host deceptive content distributing ATOMIC malware targeting macOS systems.

Attackers manipulated AI models to create realistic-looking instructions for common computer tasks, embedding malicious command-line scripts as the “solution.” By creating shareable links to these AI chat transcripts, threat actors used trusted domains to host their initial attack stage.

The three-stage ClickFix attack chain exploiting AI chat platforms. (Image: Google GTIG)
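ClickFix lures share a structural tell: the “solution” ends with a command the victim is told to paste, and that command typically fetches and executes code. As a toy illustration only (the pattern list below is far from comprehensive, and is not Google’s filter), a transcript scanner might look for such fetch-and-run idioms.

```python
# Toy content filter for ClickFix-style lures; patterns are illustrative.
import re

FETCH_AND_RUN = [
    r"curl\s+[^|]+\|\s*(ba)?sh",   # curl ... | sh / bash
    r"base64\s+(-d|--decode)",     # decode-then-execute staging
    r"osascript\s+-e",             # macOS script execution (ATOMIC targets macOS)
]

def looks_like_clickfix(transcript: str) -> bool:
    """Flag transcripts whose 'instructions' contain fetch-and-run shell idioms."""
    return any(re.search(pattern, transcript) for pattern in FETCH_AND_RUN)
```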

Underground marketplace thrives on stolen API keys

GTIG’s observations of English- and Russian-language underground forums indicate persistent demand for AI-enabled tools and services. However, state-sponsored hackers and cybercriminals struggle to develop custom AI models, relying instead on mature commercial products accessed through stolen credentials.

One toolkit, “Xanthorox,” advertised itself as a custom AI for autonomous malware generation and phishing campaign development. GTIG’s investigation revealed Xanthorox was not a bespoke model but actually powered by several commercial AI products, including Gemini, accessed through stolen API keys.

Google’s response and mitigations

Google has taken action against identified threat actors by disabling accounts and assets associated with malicious activity. The company has also used this intelligence to strengthen both its classifiers and its models, enabling them to refuse assistance with similar attacks in future.

“We are committed to developing AI boldly and responsibly, which means taking proactive steps to disrupt malicious activity by disabling the projects and accounts associated with bad actors, while continuously improving our models to make them less susceptible to misuse,” the report stated.

GTIG emphasised that despite these developments, no APT or information operations actors have achieved breakthrough abilities that fundamentally alter the threat landscape.

The findings underscore the evolving role of AI in cybersecurity, as both defenders and attackers race to harness the technology’s capabilities.

For enterprise security teams, particularly in the Asia-Pacific region where Chinese and North Korean state-sponsored hackers remain active, the report serves as an important reminder to enhance defences against AI-augmented social engineering and reconnaissance operations.

(Photo by SCARECROW artworks)

See also: Anthropic just revealed how AI-orchestrated cyberattacks actually work – Here’s what enterprises need to know

Red Hat unifies AI and tactical edge deployment for UK MOD

The UK Ministry of Defence (MOD) has selected Red Hat to architect a unified AI and hybrid cloud backbone across its entire estate. Announced today, the agreement is designed to break down data silos and accelerate the deployment of AI models from the data centre to the tactical edge.

For CIOs, it’s part of a broader move away from fragmented, project-specific AI pilots toward a platform engineering approach. By standardising on Red Hat’s infrastructure, the MOD aims to decouple its AI capabilities from the underlying hardware, allowing algorithms to be developed once and deployed anywhere, whether on-premise, in the cloud, or on disconnected field devices.

Red Hat industrialises the AI lifecycle for the MOD

The agreement focuses on the Defence Digital Foundry, the MOD’s central software delivery hub. The Foundry will now provide a consistent MLOps environment to all service branches, including the Royal Navy, British Army, and Royal Air Force.

At the core of this initiative is Red Hat AI, a suite that includes Red Hat OpenShift AI. This platform addresses a familiar bottleneck in enterprise AI: the “inference gap” between data science teams and operational infrastructure.

The new agreement will allow MOD developers to collaborate on a single platform, choosing the most appropriate AI models and hardware accelerators for their specific mission requirements without being locked into a single vendor’s ecosystem.

This standardisation is vital for “enabling AI at scale,” according to Red Hat. By unifying disparate efforts, the MOD intends to reduce the duplication that often plagues large government IT programmes. The platform supports optimised inference, ensuring that AI models can run efficiently on the restricted hardware footprints often found in military environments.

Mivy James, CTO at the UK MOD, said: “Easing access to Red Hat platforms becomes all the more important for the UK Ministry of Defence in the era of AI, where rapid adoption, replicating good practice, and the ability to scale are critical to strategic advantage.”

Bridging legacy and autonomous systems

A major hurdle for defence modernisation is the coexistence of legacy virtualised workloads with modern, containerised AI applications. The agreement includes Red Hat OpenShift Virtualization, which provides a “well-lit migration path” for existing systems. This allows the MOD to manage traditional virtual machines alongside containerised AI workloads on the same control plane, reducing operational complexity and cost.

The MOD deal also incorporates Red Hat Ansible Automation Platform to drive enterprise-wide AI automation. In an AI context, automation is the enforcement mechanism for governance. It ensures that as models are retrained and redeployed, the underlying configuration management, security orchestration, and service provisioning remain compliant with rigorous defence standards.

Security and ecosystem alignment

Deploying AI in defence naturally requires a “consistent security footprint” that can withstand sophisticated cyber threats.

The Red Hat platform enables DevSecOps practices, integrating security gates directly into the software supply chain. This is particularly relevant for maintaining a trusted software pedigree when integrating code from approved third-party providers, who can now align their deliverables with the MOD’s standardised Red Hat environment.

Joanna Hodgson, Regional Manager for the UK and Ireland at Red Hat, commented: “Red Hat offers flexibility and scalability to deploy any application or any AI model on their choice of hardware – whether on premise, in any cloud, or at the edge – helping the UK Ministry of Defence to harness the latest technologies, including AI.”

The deployment shows that AI maturity is moving beyond the model itself to the infrastructure that supports it. Success in high-stakes environments like defence depends less on individual algorithm performance and more on the ability to reliably deliver, update, and govern those models at scale.

See also: Chinese hyperscalers and industry-specific agentic AI

How insurance leaders use agentic AI to cut operational costs

Agentic AI offers insurance leaders a path to scalable efficiency as the sector confronts a tough digital transformation.

Insurers hold deep data reserves and employ a workforce skilled in analytic decision-making. Despite these advantages, the industry has largely failed to advance beyond pilot programmes. Research suggests only seven percent of insurers have scaled these initiatives effectively across their organisations.

The barrier is rarely a lack of interest. Instead, legacy infrastructure and fragmented data architectures often stop integration before it starts. Financial pressure compounds the technical debt. The sector has absorbed losses exceeding $100 billion annually for six consecutive years. High-frequency property losses are now a structural issue that standard operational tweaks cannot fix.

Automating complex insurance workflows with agentic AI

Intelligent agents provide a way to bypass these bottlenecks. Unlike passive analytical tools, these systems support autonomous tasks and help make decisions under human supervision. Embedding these agents into workflows allows companies to navigate legacy constraints and talent shortages.

Workforce augmentation is a primary application. Sedgwick, in collaboration with Microsoft, deployed the Sidekick Agent to assist claims professionals. The system improved claims processing efficiency by more than 30 percent through real-time guidance.

Operational gains extend to customer support. Standard chatbots usually answer a query or transfer the user to a queue. An agentic solution manages the process end to end: capturing the first notice of loss, requesting missing documentation, updating policy and billing systems, and proactively notifying customers of next steps.
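As a minimal sketch of that pattern (the step functions and fixed plan below are hypothetical stand-ins; a production agent would plan dynamically and keep a human reviewer in the loop), the key difference from a routing chatbot is that every step is executed rather than handed off.

```python
# Minimal sketch of an end-to-end claims agent; all tools are hypothetical.
from typing import Callable

def file_fnol(claim: dict) -> str:
    return f"FNOL recorded for policy {claim['policy_id']}"

def request_documents(claim: dict) -> str:
    return "Requested missing photos and repair estimate"

def update_billing(claim: dict) -> str:
    return "Policy and billing systems updated"

def notify_customer(claim: dict) -> str:
    return "Customer notified of next steps"

# The agent resolves the claim end to end instead of routing it to a queue.
PLAN: list[Callable[[dict], str]] = [
    file_fnol, request_documents, update_billing, notify_customer,
]

def handle_claim(claim: dict) -> list[str]:
    return [step(claim) for step in PLAN]

print(handle_claim({"policy_id": "P-12345"}))
```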

This “resolve, not route” approach has produced results in live environments. One major insurer implemented over 80 models in its claims domain. The rollout cut complex-case liability assessment time by 23 days and improved routing accuracy by 30 percent. Customer complaints fell by 65 percent during the same period.

Such promising metrics indicate that agentic AI can compress cycle times and control loss-adjustment expenses for the insurance industry, all while maintaining necessary oversight.

Navigating internal friction

Adoption requires navigating internal resistance. Siloed teams and unclear priorities often slow deployment speed. A shortage of talent in specialised roles, such as actuarial analysis and underwriting, also limits how effectively companies use their data. Agentic AI can target these areas to augment roles that are hard to fill.

Success relies on aligning technology with specific business goals. Establishing an ‘AI Centre of Excellence’ provides the governance and technical expertise needed to prevent fragmented adoption. Teams should start with high-volume, repeatable tasks and refine models through feedback loops.

Industry accelerators can also speed up the process. Many platforms are now available with prebuilt frameworks that can support the full lifecycle of agent deployment. This approach reduces implementation time and aids compliance efforts.

Of course, technology matters less than organisational readiness. About 70 percent of scaling challenges are organisational rather than technical. Insurers must build a culture of accountability to see returns on these tools.

Agentic AI is a necessity for insurance leaders trying to survive in a market defined by financial pressure and legacy complexity. Addressing structural challenges improves efficiency and resilience. Executives who invest in scalable frameworks will position themselves to lead the next era of innovation.

See also: Chinese hyperscalers and industry-specific agentic AI

Barclays bets on AI to cut costs and boost returns

Barclays recorded a 12% jump in annual profit for 2025, reporting £9.1 billion in earnings before tax, up from £8.1 billion a year earlier. The bank also raised its performance targets through 2028, aiming for a return on tangible equity (RoTE) of more than 14%, up from a previous goal of above 12% by 2026. A growing US business and cost reductions underpinned this outcome, with Barclays citing AI as a key driver of those efficiency gains.

At a time when many large companies are still experimenting with AI pilots, Barclays is tying the technology directly to its cost structure and profit outlook. In public statements and investor filings, leadership positions AI as one of the levers that can help the bank sustain lower costs and improved returns, especially as macroeconomic conditions shift.

Barclays’ 12% profit rise this week matters not just for its shareholders, but because it reflects a broader trend: traditional, highly regulated firms are now positioning AI as a core part of running the business rather than something kept in separate innovation labs. For companies outside tech, linking AI to measurable results such as profit and efficiency marks a shift toward operational use over hype.

Why AI matters for cost discipline

Barclays has said that technology such as AI is part of its plan to cut costs and make its operations more efficient. That includes trimming parts of the legacy technology stack and rethinking where and how work happens. Investment in AI tools complements broader cost savings goals that stretch back multiple years.

For many large companies, labour and legacy systems still make up a large chunk of operating expenses. Using AI to automate repetitive tasks or streamline data processing can reduce that burden. In Barclays’ case, these efficiencies are part of the bank’s rationale for setting higher performance targets, even though margins remain under pressure in parts of its business.

It’s important to be specific about what these efficiencies mean in practice. AI technologies such as models that assist with risk analysis, customer service workflows, and internal reporting can reduce the hours staff spend on manual work. That doesn’t always mean cutting jobs outright, but it can lower the overall cost base, especially in functions that are routine or transaction-driven.

From investment to impact

Investments in AI don’t translate into results overnight. Barclays’ approach combines these tools with structural cost reduction programmes, helping the bank manage expenses at a time when revenue growth alone isn’t enough to lift returns to desired levels.

Barclays’ performance targets for 2028 reflect this dual focus. The bank’s leadership has said that its plans include returning more than £15 billion to shareholders between 2026 and 2028, supported by improved efficiency and profit strength.

Often, companies talk about technology investment in vague terms. Barclays’ latest figures make the link between tech and profit more concrete: the 12% profit rise was reported in the same breath as the role of technology in trimming costs. It’s not the only factor; improved market conditions and growth in the US also helped, but it’s clearly part of the narrative that management is presenting to investors.

This emphasis on cost discipline and profit impact sets Barclays apart from firms that treat AI as a long-term bet or a future project. Here, AI is integrated into ongoing cost management and financial planning, giving the bank a plausible pathway to stronger returns in the years ahead.

What this means for legacy firms

Barclays is far from unique in exploring AI for cost savings and efficiency. Other banks have also flagged technology investments as part of broader restructuring efforts. But what makes Barclays’ case noteworthy is the scale of the strategy and the way it is tied to measured performance targets, not just experimentation or small-scale pilots.

In traditional industries, especially ones as regulated as banking, adopting AI is harder than in tech startups. Firms must navigate compliance, risk, customer privacy, and legacy systems that weren’t designed for automation. Yet Barclays’ public comments suggest that the bank is now comfortable enough with these tools to anchor part of its financial forecast on them. That signals a degree of maturity in how the institution operationalises AI.

Barclays isn’t simply building isolated AI projects; leadership is weaving technology into cost discipline, modernisation of systems, and long-term planning. That shift matters because it shows how legacy firms, even those with large, complex operations, can start to move beyond pilots and into business-wide use cases that affect the bottom line.

For other end-user companies evaluating AI investments, Barclays offers a working example: a large, regulated company can use technology to help hit cost and profitability targets, not just to explore new capabilities.

(Photo by Jose Marroquin)

See also: Goldman Sachs tests autonomous AI agents for process-heavy work
