Artificial Intelligence
Insurance giant AIG deploys agentic AI with orchestration layer
American International Group (AIG) has reported faster-than-expected gains from its use of generative AI, with implications for underwriting capacity, operating cost, and portfolio integration. The company’s recent disclosures at an Investor Day merit attention from AI decision-makers because they contain claims about measurable throughput gains and workflow redesign.
AIG has previously outlined potential benefits from generative AI, and chief executive Peter Zaffino described the company’s early projections as “aspirational.” Yet on a fourth-quarter earnings call he stated that “we see the abilities are much greater.” The change in tone suggests positive internal results. According to Zaffino, “We’re seeing a massive change in our ability to process a submission flow way […] without additional human capital resources. That has been the biggest surprise.”
The company claims that generative AI has increased submission-processing capacity, and if accurate, the economic impact is direct. AIG reports that in 2025 it “made progress embedding generative AI in our core underwriting and claims processes, and expanding it.” The company’s internal tool, AIG Assist, is deployed across most commercial lines of business.
Lexington Insurance, AIG’s excess and surplus unit, has targeted 500,000 submissions by 2030, and Zaffino reports that it has already surpassed 370,000 submissions in 2025. AIG uses generative models to extract and summarise incoming data, and has developed an orchestration layer in its technology stack “to coordinate AI agents to drive better decision-making and reduce costs in the organisation.” At previous Investor Days, this level of orchestration was not a focus.
The chief executive describes AI agents as “companions that operate with our teams” that provide real-time information, draw on historical cases, and challenge underwriting decisions. The company relies on its ability to manage incoming data “at a fraction of the time” and to orchestrate agents so they can “scale and be able to analyse that information that’s not biased in any way; that’s through the entire workflow.”
AIG links orchestration to compression of what it terms a “front-to-back workflow”: tighter integration between intake, risk assessment, and claims handling. The company states that multiple agents, coordinated through an orchestration layer, streamline repetitive and previously lengthy processes.
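AIG has not published how its orchestration layer is built. As a rough illustration of the pattern described above, the following Python sketch routes a submission through a sequence of specialised agents and records each step; the class and function names (Orchestrator, extraction_agent, appetite_agent) are hypothetical, not AIG’s.

```python
# Minimal sketch of an agent-orchestration pattern for submission intake.
# All class and function names are hypothetical; AIG has not published its design.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Submission:
    """A broker submission plus the artefacts agents attach as it moves through the workflow."""
    raw_text: str
    artefacts: Dict[str, object] = field(default_factory=dict)


# An "agent" here is any callable that reads a submission and returns named output.
Agent = Callable[[Submission], Dict[str, object]]


def extraction_agent(sub: Submission) -> Dict[str, object]:
    # Placeholder for an LLM call that pulls insured name, limits, exposures, etc.
    return {"extracted_fields": {"insured": "...", "limit_requested": "..."}}


def appetite_agent(sub: Submission) -> Dict[str, object]:
    # Placeholder for a check of the extracted risk against underwriting appetite rules.
    fields = sub.artefacts.get("extraction_agent", {})
    return {"in_appetite": bool(fields)}


class Orchestrator:
    """Runs agents in order and records every step, giving underwriters a single audit trail."""

    def __init__(self, agents: List[Agent]):
        self.agents = agents

    def run(self, sub: Submission) -> Submission:
        for agent in self.agents:
            sub.artefacts[agent.__name__] = agent(sub)
        return sub


if __name__ == "__main__":
    pipeline = Orchestrator([extraction_agent, appetite_agent])
    result = pipeline.run(Submission(raw_text="Broker email and attached schedules..."))
    print(result.artefacts)
```

The design choice worth noting is that the orchestrator, not the individual agents, owns sequencing and record-keeping, which is what makes a “front-to-back” view of the workflow possible.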
AIG has applied its generative AI stack in specific transactions. During the conversion of Everest’s retail commercial business, the company reports that accounts were prioritised for renewal “in a fraction of the time.” Management states that it built an ontology of Everest’s portfolio and combined it with its own, which “allowed [the company] to prioritise how the portfolios could blend together.” Ontological alignment is technically demanding, and its costs are often underestimated.
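To see why that blending work is demanding, consider a toy example of aligning two portfolio taxonomies. The crosswalk, class names and review bucket below are invented for illustration and are not drawn from AIG’s or Everest’s actual ontologies.

```python
# Toy illustration of why aligning two portfolio ontologies is harder than it looks
# (not AIG's actual approach): class names rarely map one-to-one, so unmapped and
# ambiguous categories must be surfaced for human review rather than silently merged.
CROSSWALK = {
    # acquirer's class        -> acquired book's class(es)
    "commercial_property": ["property_all_risk"],
    "general_liability": ["casualty_primary", "casualty_umbrella"],  # one-to-many
    "cyber": [],                                                     # no counterpart yet
}


def align(acquired_classes: list[str]) -> dict[str, list[str]]:
    """Group the acquired book's classes under the acquirer's ontology, flagging leftovers."""
    mapped = {target: [c for c in acquired_classes if c in sources]
              for target, sources in CROSSWALK.items()}
    covered = {c for sources in CROSSWALK.values() for c in sources}
    mapped["UNMAPPED_FOR_REVIEW"] = [c for c in acquired_classes if c not in covered]
    return mapped


print(align(["property_all_risk", "casualty_umbrella", "marine_cargo"]))
# marine_cargo lands in UNMAPPED_FOR_REVIEW; that manual review is where hidden cost sits.
```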
The launch of Lloyd’s Syndicate 2479, in partnership with Amwins and Blackstone, extended the ontological approach to a special purpose vehicle. In conjunction with Palantir, AIG used LLMs to assess whether Amwins’ programme portfolio aligned with the syndicate’s stated risk appetite. Zaffino stated that AIG has a “strong pipeline of SPV opportunities.”
For AI decision-makers, the case illustrates the value that orchestration and workflow integration can provide when generative models are embedded in core processes, and the degree to which economic impact depends on measurable changes in capacity and cycle time.
(Image source: “Nagasaki, AIG (Insurance company) building” by Admanchester is licensed under CC BY-NC-ND 2.0. )
Artificial Intelligence
SS&C Blue Prism: On the journey from RPA to agentic automation
For organizations still wedded to the rules and structures of robotic process automation (RPA), agentic AI as the next step for automation may seem faintly terrifying. SS&C Blue Prism, however, is here to help, taking customers on the journey from RPA to agentic automation at a pace with which they’re comfortable.
Big as it may be, this move is a necessary one. Modern workflows have reached a level of complexity that outstrips what traditional RPA was designed to do, according to Steven Colquitt, VP Software Engineering, SS&C Blue Prism. Unstructured data arrives from various sources, and workflows increasingly resemble non-deterministic, real-world interactions. “Inputs can vary, outcomes can shift and decisions depend on context in real-time,” notes Colquitt.
Brian Halpin, Managing Director, Automation, SS&C Blue Prism, gives the example of a credit agreement from which you might need to extract 30 or 40 answers. He uses the word “answers” deliberately, as opposed to data points, to reflect the level of reasoning that a large language model (LLM) performs.
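SS&C Blue Prism has not detailed how its tooling frames such a request, but the general pattern, asking a model for a fixed set of named answers and then validating the shape of the reply, can be sketched as follows; call_llm, the question list and the JSON contract are placeholders rather than the vendor’s actual design.

```python
# Sketch of pulling a fixed set of "answers" from a credit agreement with an LLM.
# call_llm is a placeholder for whichever model endpoint an organisation uses;
# the questions and the JSON contract are illustrative only.
import json

QUESTIONS = [
    "Who is the borrower?",
    "What is the facility amount and currency?",
    "What is the maturity date?",
    "Which financial covenants apply?",
]


def call_llm(prompt: str) -> str:
    """Placeholder: send the prompt to an LLM and return its raw text response."""
    raise NotImplementedError("Wire this to your model provider of choice.")


def extract_answers(agreement_text: str) -> dict:
    prompt = (
        "Read the credit agreement below and answer each question. "
        "Respond only with a JSON object keyed by question.\n\n"
        f"Questions: {json.dumps(QUESTIONS)}\n\n"
        f"Agreement:\n{agreement_text}"
    )
    raw = call_llm(prompt)
    answers = json.loads(raw)          # fail fast if the model drifts from the contract
    missing = [q for q in QUESTIONS if q not in answers]
    if missing:
        raise ValueError(f"Model omitted answers for: {missing}")
    return answers
```

The validation step at the end is the point: it is what lets an "answers, not data points" approach still feed a deterministic downstream process.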
The element of this being a journey continues to resonate, however. “We’re now saying we’re giving an AI agent the outcome that we want, but we’re not giving it the instructions on how to complete,” says Halpin. “We’re not saying, ‘follow step one, two, three, four, five.’ We’re saying, ‘I want this loan reviewed’ or ‘I want this customer onboarded.’
“Ultimately, I think that’s where the market will go,” adds Halpin. “Is it ready for that? No. Why? Because there’s trust, there’s regulations, there’s auditability […] stability, security. We know LLMs are prone to hallucinations, we know they drift, and [if] you change the underlying model, things change and responses get different.
“There’s an awful lot of learning to happen before I think companies go fully autonomous and real agentic workflows [are] driven from that sort of non-deterministic perspective,” says Halpin. “But then, there will be something else, right? There will be another model. So really, it is all a journey right now.”
SS&C Blue Prism has thousands of customers with automated processes in place, from centers of excellence (CoEs) to digital workers running in their operations, whom it hopes to bring into the “world of AI”, as Halpin puts it. Sometimes it’s about connecting two separate areas.
“It’s been interesting,” Halpin notes. “As I talk to [our] customers, I see a common thread among companies right now where, in a lot of cases, AI has been established as a separate unit in a company. You go over to the process automation team, and they’re maybe not even allowed to use the AI.
“So, it’s about, ‘How do you help them get that capability and blend it into their process efficiency and allow them to get to the next 20%, 30% of automation, in terms of the end-to-end process?’”
As part of this, SS&C Blue Prism is soon to launch new technology that helps organizations build and embed AI agents within workflows, as well as assist with orchestration. Those who attended TechEx Global on February 4-5, where SS&C Blue Prism participated in the Intelligent Automation conference, got the full story, as well as an understanding of the company’s ongoing path.
“[SS&C Technologies] are one of the biggest users of RPA in the world,” adds Halpin. “We have over three and a half thousand digital workers deployed [across the SS&C estate]. We’re saving hundreds of millions in run-rate benefit. We’ve about 35 AI agents in production attached to those digital workers doing […] complex tasks, and really, we just want to share that journey.”
Photo by Patrick Tomasso on Unsplash
Artificial Intelligence
Alibaba Qwen is challenging proprietary AI model economics
The release of Alibaba’s latest Qwen model challenges proprietary AI model economics with comparable performance on commodity hardware.
While US-based labs have historically held the performance advantage, open-source alternatives like the Qwen 3.5 series are closing the gap with frontier models. This offers enterprises a potential reduction in inference costs and increased flexibility in deployment architecture.
The central narrative of the Qwen 3.5 release is this technical alignment with leading proprietary systems. Alibaba is explicitly targeting benchmarks established by high-performance US models, including GPT-5.2 and Claude 4.5. This positioning indicates an intent to compete directly on output quality rather than just price or accessibility.
Technology expert Anton P. states that the model is “trading blows with Claude Opus 4.5 and GPT-5.2 across the board.” He adds that the model “beats frontier models on browsing, reasoning, instruction following.”
Alibaba Qwen’s performance convergence with closed models
For enterprises, this performance parity suggests that open-weight models are no longer solely for low-stakes or experimental use cases. They are becoming viable candidates for core business logic and complex reasoning tasks.
The flagship Alibaba Qwen model contains 397 billion parameters but utilises a more efficient architecture with only 17 billion active parameters. This sparse activation method, often associated with Mixture-of-Experts (MoE) architectures, allows for high performance without the computational penalty of activating every parameter for every token.
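Qwen 3.5’s routing code has not been published, but the principle behind sparse activation can be illustrated with a generic top-k Mixture-of-Experts layer. The dimensions, expert count and NumPy implementation below are illustrative only and are far smaller than anything in a production model.

```python
# Generic illustration of top-k Mixture-of-Experts routing (not Qwen 3.5's actual code).
# Only the k experts chosen by the router run for each token, so the parameters that
# are "active" per token are a small slice of the full parameter count.
import numpy as np

rng = np.random.default_rng(0)

d_model, n_experts, top_k = 64, 16, 2
router_w = rng.normal(size=(d_model, n_experts))                            # router projection
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]   # toy expert weights


def moe_layer(token: np.ndarray) -> np.ndarray:
    logits = token @ router_w                               # score every expert
    chosen = np.argsort(logits)[-top_k:]                    # keep only the top-k experts
    gates = np.exp(logits[chosen]) / np.exp(logits[chosen]).sum()  # softmax over the chosen
    # Weighted sum of the chosen experts' outputs; the other 14 experts never run.
    return sum(g * (token @ experts[i]) for g, i in zip(gates, chosen))


out = moe_layer(rng.normal(size=d_model))
print(out.shape)  # (64,)
```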
This architectural choice results in speed improvements. Shreyasee Majumder, a Social Media Analyst at GlobalData, highlights a “massive improvement in decoding speed, which is up to nineteen times faster than the previous flagship version.”
Faster decoding translates directly into lower latency in user-facing applications and reduced compute time for batch processing.
The release operates under an Apache 2.0 license. This licensing model allows enterprises to run the model on their own infrastructure, mitigating data privacy risks associated with sending sensitive information to external APIs.
The hardware requirements for Qwen 3.5 are relatively accessible compared to previous generations of large models. The efficient architecture allows developers to run the model on personal hardware, such as Mac Ultras.
David Hendrickson, CEO at GenerAIte Solutions, observes that the model is available on OpenRouter for “$3.6/1M tokens”, pricing that he calls “a steal.”
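For teams weighing that rate, a minimal sketch follows: a back-of-envelope cost estimate plus a call through OpenRouter’s OpenAI-compatible endpoint. The model slug and the 250-million-token monthly workload are assumptions, so check OpenRouter’s catalogue and current pricing before relying on either.

```python
# Rough cost check plus a generic OpenRouter call. OpenRouter's API is OpenAI-compatible;
# the model slug below is a placeholder, so confirm the exact Qwen 3.5 identifier on
# openrouter.ai, and treat the quoted $3.6/1M-token rate as illustrative.
import os
from openai import OpenAI

PRICE_PER_M_TOKENS = 3.6                      # USD, as quoted; may vary by provider/route
monthly_tokens = 250_000_000                  # hypothetical workload: 250M tokens/month
print(f"Estimated spend: ${monthly_tokens / 1_000_000 * PRICE_PER_M_TOKENS:,.0f}/month")

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)
MODEL = "qwen/qwen3.5"                        # placeholder slug; check OpenRouter's catalogue

resp = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": "Summarise our refund policy in three bullets."}],
)
print(resp.choices[0].message.content)
```

At the assumed volume the arithmetic comes to roughly $900 a month, which is the kind of number that makes the comparison with proprietary API pricing concrete.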
Alibaba’s Qwen 3.5 series introduces native multimodal capabilities. This allows the model to process and reason across different data types without relying on separate, bolted-on modules. Majumder points to the “ability to navigate applications autonomously through visual agentic capabilities.”
Qwen 3.5 also supports a context window of one million tokens in its hosted version. Large context windows enable the processing of extensive documents, codebases, or financial records in a single prompt.
If that wasn’t enough, the model also includes native support for 201 languages. This broad linguistic coverage helps multinational enterprises deploy consistent AI solutions across diverse regional markets.
Considerations for implementation
While the technical specifications are promising, integration requires due diligence. TP Huang notes that he has “found larger Qwen models to not be all that great” in the past, though Alibaba’s new release looks “reasonably better.”
Anton P. provides a necessary caution for enterprise adopters: “Benchmarks are benchmarks. The real test is production.”
Leaders must also consider the geopolitical origin of the technology. As the model comes from Alibaba, governance teams will need to assess compliance requirements regarding software supply chains. However, the open-weight nature of the release allows for code inspection and local hosting, which mitigates some data sovereignty concerns compared to closed APIs.
Alibaba’s release of Qwen 3.5 forces a decision point. Anton P. asserts that open-weight models “went from ‘catching up’ to ‘leading’ faster than anyone predicted.”
For the enterprise, the decision is whether to continue paying premiums for proprietary US-hosted models or to invest in the engineering resources required to leverage capable yet lower-cost open-source alternatives.
See also: Alibaba enters physical AI race with open-source robot model RynnBrain
Artificial Intelligence
Goldman Sachs deploys Anthropic systems with success
Goldman’s prior experience with Claude models, used internally for software development, informed its decision to extend AI to other areas of its operations. Developers use a version of Claude with Cognition’s Devin agent to aid them with programming: human developers set specifications and regulatory parameters, the agent produces code, and humans review the outputs. The agent is also used to run code tests and validations. The bank describes this as a change to developers’ workflows, with agents operating according to defined instructions. The benefit is increased developer productivity and the faster completion of projects.
For trade accounting and client onboarding, Goldman and Anthropic AI project owners observed existing workflows with domain experts to identify work bottlenecks. The implemented agents review documents, extract entities, determine whether additional documentation is required, assess ownership structures, and can trigger further compliance checks. Tasks automated in this way tend to be document-heavy and require individual judgement. By automating extraction and preliminary assessment, the agents reduce the time analysts spend on comparison work.
Indranil Bandyopadhyay, principal analyst at Forrester, says that reconciliation in trade accounting requires comparing fragmented data across internal ledgers, counterparty confirmations, and bank statements, and that a typical workflow depends on accurate extraction and matching of figures and text against existing documents. Claude’s ability to process large context windows and follow instructions, he says, makes it suited to such workflows. The labour involved in client onboarding, such as parsing passports and corporate registration documents and cross-referencing all the sources, means AI’s ability to extract structured data and flag inconsistencies makes the technology a good fit, reducing overall workloads.
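Neither Goldman nor Anthropic has published the agents’ code. The sketch below illustrates the extraction-and-matching step Bandyopadhyay describes: extract_trade_fields stands in for a model call that parses a confirmation into structured fields, and the matching tolerance is chosen purely for illustration.

```python
# Sketch of the extraction-and-matching step in trade reconciliation.
# extract_trade_fields stands in for an LLM call that turns a counterparty confirmation
# or bank statement into structured fields; the matching rule is illustrative only.
from dataclasses import dataclass


@dataclass
class TradeRecord:
    trade_id: str
    notional: float
    currency: str


def extract_trade_fields(document_text: str) -> TradeRecord:
    """Placeholder for a model call that parses a confirmation into structured fields."""
    raise NotImplementedError("Wire this to the model and prompt your firm has approved.")


def reconcile(ledger: TradeRecord, confirmation: TradeRecord,
              tolerance: float = 0.01) -> list[str]:
    """Return a list of discrepancies for a human analyst; an empty list means a match."""
    issues = []
    if ledger.currency != confirmation.currency:
        issues.append(f"currency mismatch: {ledger.currency} vs {confirmation.currency}")
    if abs(ledger.notional - confirmation.notional) > tolerance:
        issues.append(f"notional mismatch: {ledger.notional} vs {confirmation.notional}")
    return issues


# Matching records pass silently; anything else is routed to an analyst as an exception.
print(reconcile(TradeRecord("T1", 5_000_000.0, "USD"),
                TradeRecord("T1", 5_000_000.0, "USD")))   # -> []
```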
Bandyopadhyay stresses that accounting and compliance platforms remain the canonical systems of record. Claude operates in the workflow layer, handling extraction and comparison so that human analysts can focus on exceptions. In his assessment, the operational value in a regulated environment like banking lies in exactly this division of labour.
Jonathan Pelosi, head of financial services at Anthropic, says Claude is trained to surface uncertainty and to provide source attribution, creating an audit trail that reduces the effect of hallucinations. Bandyopadhyay also notes the importance of human oversight and validation, saying institutions should design systems so that errors are detected early.
Goldman’s Marco Argenti rejects the view that AI systems are inherently easier to deceive than people, arguing that social engineering exploits human vulnerabilities and that AI can detect subtle anomalies at scale, and he reiterates the need to combine human judgement with automated scrutiny in teams. His claim implies an increase in operational capacity without a proportional increase in staff, even with the issues known to affect AI rollouts.
AI in banking operations
In the banking sector, generative AI is a tool that improves operational performance by accelerating document processing, reducing exception-handling time, and increasing throughput in high-volume workflows. But the need to retain human oversight to catch AI’s errors means that existing systems of record continue to be retained and relied upon.
(Image source: “Dreams…” by noahwesley is licensed under CC BY-NC-SA 2.0)