Agentic AI: What It Will Take to Move From Rhetoric to Reality

Most chief financial officers (CFOs) say they understand the concept of agentic artificial intelligence — systems that can plan, reason and take actions with minimal or no human input — but few are ready to implement the technology.

According to a July 2025 PYMNTS Intelligence report, nearly all CFOs polled were aware of agentic AI, yet only 15% expressed interest in deploying it within their organizations.

The gap reflects lingering skepticism among business leaders about the maturity and business value of artificial intelligence agents in their current form. While the technology holds promise for automating complex workflows and improving decision-making, many remain cautious amid concerns about implementation risks, oversight challenges and unproven ROI.

“A lot of companies are excited about what agentic AI can do, but not enough are thinking about what it takes to use it safely,” James Prolizo, chief information security officer at Sovos, told PYMNTS. “These tools are starting to make real decisions, not just automate tasks, and that changes the game.”

A key roadblock is a lack of trust in agentic systems. According to the PYMNTS report’s supplemental data, building trust depends on the following:

  • User-friendly traceability: The system must provide clear reports and visualizations that explain the reasons behind an AI agent’s actions, with the ability to trace outputs back to the underlying input data and logic.
  • Human-in-the-loop safeguards: Controls must provide ongoing human supervision and enable intervention when critical decisions are being made (a simplified sketch of such a gate follows this list).
  • Built-in bias monitoring: Mechanisms must identify and minimize bias in AI-generated content and analyses to ensure fairness and accuracy.
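
To make these requirements concrete, here is a minimal, illustrative sketch in Python of a human-in-the-loop approval gate that records a traceable audit entry for every proposed agent action. The class names, the $10,000 auto-approval threshold and the log fields are hypothetical examples, not taken from the PYMNTS report or any vendor’s product.

```python
# Illustrative sketch only: a human-in-the-loop gate with a traceable audit log.
# All names and thresholds below are hypothetical, not any vendor's API.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AgentAction:
    """One proposed action, with the inputs and reasoning that produced it."""
    description: str
    amount_usd: float
    source_inputs: dict   # data the agent relied on, kept for traceability
    rationale: str        # plain-language explanation of the decision


@dataclass
class ApprovalGate:
    """Routes high-impact actions to a human reviewer before they execute."""
    auto_approve_limit_usd: float = 10_000.0
    audit_log: list = field(default_factory=list)

    def review(self, action: AgentAction, human_approver=None) -> bool:
        needs_human = action.amount_usd > self.auto_approve_limit_usd
        approved = True if not needs_human else bool(
            human_approver and human_approver(action)
        )
        # Every decision is logged with its inputs and rationale, so outputs
        # can be traced back to the underlying data and logic.
        self.audit_log.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "action": action.description,
            "amount_usd": action.amount_usd,
            "needs_human": needs_human,
            "approved": approved,
            "rationale": action.rationale,
            "inputs": action.source_inputs,
        })
        return approved


# Example: a $25,000 payment is held for human sign-off before it executes.
gate = ApprovalGate()
payment = AgentAction(
    description="Pay invoice INV-1042",
    amount_usd=25_000.0,
    source_inputs={"invoice_id": "INV-1042", "vendor": "Acme Corp"},
    rationale="Invoice matches the approved purchase order and delivery record.",
)
approved = gate.review(payment, human_approver=lambda a: True)  # stand-in for a real reviewer
print(gate.audit_log[-1])
```

In practice, a gate like this would sit between the agent’s planning layer and any system that can move money or change records, so every action is either auto-approved under a policy limit or held for a person, with both outcomes logged.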

Without these elements, CFOs are unlikely to delegate decision-making authority to software agents, according to experts.

“Finance leaders need to implicitly trust their systems to be accurate and predictable,” Justin Etkin, co-founder and COO of Tropic, told PYMNTS. “Mainstream adoption will only occur when CFOs can see the concrete value and have confidence that these systems won’t go rogue or produce the unpredictable results that currently tarnish AI’s reputation in financial contexts.”

Etkin added that CFOs are looking for AI solutions that go beyond basic automation by offering paper trails, step-by-step actions with “undo” capabilities, and measurable time savings.

The hesitation among finance executives stands in contrast to a more optimistic outlook in other parts of the enterprise.

Raju Malhotra, chief product and technology officer at Certinia, pointed to strong momentum in the professional services sector. “Agentic AI is the most transformative addition. Autonomous digital workers can create a hybrid workforce that blends AI with human teams,” he said.

Malhotra cited Certinia’s own research showing that 83% of professional services firms are already deploying or planning to deploy agentic artificial intelligence in professional services automation within the next year.

Yet Malhotra acknowledged that even among early adopters, results are uneven. “Many organizations still struggle to see returns, with 29% stating current AI solutions fall short of expectations. The reasons are clear: a lack of internal skills and fragmented data,” he said.

See also: The Two Faces of AI: Gen AI’s Triumph Meets Agentic AI’s Caution

Agents Must Connect to Legacy Systems

In financial settings, where decisions must often comply with regulatory mandates and pass audits, the technical and cultural barriers are even higher. Integration is a major issue.

Agentic systems must connect with a wide array of internal platforms — from enterprise resource planning (ERP) software to forecasting models to compliance tools. According to Chaim Mazal, chief security officer at Gigamon, legacy IT infrastructure can become a bottleneck, especially when it comes to visibility and security.

“As AI workloads drive a surge in network traffic — doubling it in one in three organizations — traditional monitoring tools are increasingly overwhelmed,” Mazal told PYMNTS. “This makes it harder to detect when agentic AI is acting outside intended parameters, sharing sensitive data, or accessing unauthorized systems.”

Mazal said encrypted traffic and siloed authentication systems make it difficult to monitor agentic AI activity in real time. “Until companies integrate AI-aware telemetry and achieve deep, real-time observability into all data in motion, they won’t be equipped to securely govern agentic AI. Simply put: if you can’t see it, you can’t secure it.”
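
As a rough illustration of what “AI-aware telemetry” could look like, the sketch below logs each agent tool call as a structured event and flags destinations outside an allow-list. The event schema, agent names and allow-list are hypothetical assumptions for illustration; they do not represent Gigamon’s approach or any specific product.

```python
# Illustrative sketch only: structured telemetry for agent tool calls, with
# out-of-policy destinations flagged in real time. The schema and allow-list
# are hypothetical, not any vendor's format.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("agent-telemetry")

# Systems the agent is allowed to talk to (hypothetical internal hosts).
ALLOWED_DESTINATIONS = {"erp.internal", "forecasting.internal", "compliance.internal"}


def record_tool_call(agent_id: str, tool: str, destination: str, payload_bytes: int) -> dict:
    """Log every agent tool call as a structured event and flag policy violations."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "tool": tool,
        "destination": destination,
        "payload_bytes": payload_bytes,
        "within_policy": destination in ALLOWED_DESTINATIONS,
    }
    if event["within_policy"]:
        logger.info(json.dumps(event))
    else:
        # An unapproved destination is surfaced immediately instead of being
        # buried in traffic that traditional monitoring tools cannot inspect.
        logger.warning(json.dumps(event))
    return event


# Example: a query to the ERP is logged normally; a call to an unknown host is flagged.
record_tool_call("finance-agent-01", "query_ledger", "erp.internal", 2_048)
record_tool_call("finance-agent-01", "http_post", "unknown-api.example.com", 14_336)
```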

Beyond the technical obstacles, agentic AI also poses human and cultural challenges.

Eric Karofsky, founder of VectorHX, emphasized the importance of trust and usability. “Regardless of how sophisticated the technology or streamlined the automation, adoption comes down to whether real people can actually trust and use these systems without frustration,” he told PYMNTS. “When customers experience agentic AI that feels like a black box making unexplained decisions, their trust disappears faster than any efficiency gains can make up.”

Even in markets like payments, where automation has long played a role, agentic AI raises fresh concerns.

Edwin Loredo, partner at Core Innovation Capital, pointed to the risks of fraud, misuse of permissions and the need for stronger authentication and oversight mechanisms. “Fraud is very likely to increase. Agents can be manipulated, permissions can be exploited and bad actors will look for the edge cases in these automated systems,” he told PYMNTS.

In Loredo’s view, agentic AI will require domain-specific interfaces and rigorous logic tailored to the unique needs of industries or sectors such as finance, insurance and healthcare. “It’s unlikely a single general-purpose agent can serve all use cases well,” he said.

For now, CFOs are staying cautious. While the potential of agentic AI is widely acknowledged, few finance leaders are willing to deploy systems that operate beyond their control. As the technology matures, vendors and internal teams will need to meet higher standards for transparency, security and ROI to make adoption viable. Until then, agentic AI may remain mostly a buzzword in the boardroom.

Read more:

AI at the Crossroads: Agentic Ambitions Meet Operational Realities

The Agentic Trust Gap: Enterprise CFOs Push Pause on Agentic AI

Payments Execs on Agentic AI: ‘The Back Office Will Never Be the Same’


SEC Forms Task Force Promoting ‘Responsible AI Integration’

The Securities and Exchange Commission (SEC) has formed a task force focused on artificial intelligence. The initiative, announced Monday (Aug. 4), is designed to promote responsible use of AI while enhancing innovation and efficiency in the agency’s operations. Valerie Szczepanik, who has been named the SEC’s chief AI officer, will head the task force.

“Recognizing the transformative potential of AI, the SEC’s AI Task Force will accelerate AI integration to bolster the SEC’s mission,” the regulator said in a news release.

“It will centralize the agency’s efforts and enable internal cross-agency and cross-disciplinary collaboration to navigate the AI lifecycle, remove barriers to progress, focus on AI applications that maximize benefits, and maintain governance. The task force will support innovation from the SEC’s divisions and offices and facilitate responsible AI integration across the agency.”

Before being named chief AI officer, Szczepanik directed the SEC’s Strategic Hub for Innovation and Financial Technology. She has also served as an associate director in the SEC’s Division of Corporation Finance and as a Special Assistant United States Attorney at the United States Attorney’s Office for the Eastern District of New York, according to the release.

The announcement comes two weeks after the White House released a policy roadmap outlining President Trump’s push to keep America in the lead in the global AI race.

“America’s AI Action Plan” follows an executive order Trump signed in January directing federal agencies to overturn AI regulations put in place by the Biden administration, which focused on oversight and risk mitigation.

“As our global competitors race to exploit these technologies, it is a national security imperative for the United States to achieve and maintain unquestioned and unchallenged global technological dominance,” Trump said in the opening of the AI action plan.

In other AI news, recent research by PYMNTS Intelligence finds that almost all chief product officers (CPOs) expect generative AI to reshape the way they work.

That research showed that nearly all product leaders say AI will streamline workflows within three years, compared to 70% last year. And more than 80% anticipate improvements in data security, compared to half of the CPOs surveyed last year.

“The shift over the past year among CPOs reflects a deeper change in institutional mindset. Gen AI is no longer experimental — it’s strategic,” PYMNTS wrote. “The pressure to deliver more with fewer resources has pushed firms to scale automation of routine, labor-intensive tasks, not just explore how that can be done.”

Experian Unveils New AI Tool for Managing Credit and Risk Models

Experian Assistant for Model Risk Management is designed to help financial institutions better manage the complex credit and risk models they use to decide who gets a loan or how much credit someone should receive. The tool validates models faster and improves their auditability and transparency, according to a Thursday (July 31) press release.

The tool speeds up the review process by using automation to create documents, check for errors and monitor model performance, helping organizations reduce mistakes and avoid regulatory fines. It can cut internal approval times by up to 70% by streamlining model documentation, the release said.

It is the latest tool to be integrated into Experian’s Ascend platform, which unifies data, analytics and decision tools in one place. Ascend combines Experian’s data with clients’ data to deliver AI-powered insights across the credit lifecycle, supporting uses such as fraud detection.

Last month, Experian added Mastercard’s identity verification and fraud prevention technology to Ascend, bolstering those capabilities for the more than 1,800 customers that use the platform to combat fraud and cybercrime.

The tool is also Experian’s latest AI initiative after it launched its AI assistant in October. The assistant provides a deeper understanding of credit and fraud data at an accelerated pace while optimizing analytical models. It can compress months of work into days and, in some cases, hours.

Experian said in the Thursday press release that the model risk management tool may help reduce regulatory risks since it will help companies comply with regulations in the United States and the United Kingdom, a process that normally requires a lot of internal paperwork, testing and reviews.

As financial institutions embrace generative AI, the risk management of their credit and risk models must meet regulatory guidelines such as SR 11-7 in the U.S. and SS1/23 in the U.K., the release said. Both aim to ensure models are accurate, well-documented and used responsibly.

SR 11-7 is guidance from the Federal Reserve that outlines expectations for how banks should manage the risks of using models in decision making, including model development, validation and oversight.

Similarly, SS1/23 is the U.K. Prudential Regulation Authority’s supervisory statement that sets out expectations for how U.K. banks and insurers should govern and manage model risk, especially in light of increasing use of AI and machine learning.

Experian’s model risk management tool offers customizable, pre-defined templates, centralized model repositories and transparent internal workflow approvals to help financial institutions meet regulatory requirements, per the release.

“Manual documentation, siloed validations and limited performance model monitoring can increase risk and slow down model deployment,” Vijay Mehta, executive vice president of global solutions and analytics at Experian, said in the release. With this new tool, companies can “create, review and validate documentation quickly and at scale,” giving them a strategic advantage.


Read more:

Experian and Plaid Partner on Cash Flow Data for Lenders

Experian Targets ‘Credit Invisible’ Borrowers With Cashflow Score

CFPB Sues Experian, Alleging Improper Investigations of Consumer Complaints

Anthropologie Elevates Maeve in Rare Retail Brand Launch

Anthropologie is spinning off its Maeve product line as a standalone brand, a rare move in a retail sector where brand extensions have become less common.

The decision reflects shifting strategies among specialty retailers as they work to adapt to changes in women’s fast fashion and evolving consumer behavior.

Maeve, known for its blend of classic silhouettes and modern flourishes, will now operate independently with dedicated storefronts and separate digital channels, including new social media accounts and editorial content platforms, according to a Monday (Aug. 4) press release. The brand is inclusive, spanning plus, petite, tall and adaptive options, which broaden its reach as the industry contends with demands for representation.

Maeve has nearly 2 million customers and was the most-searched brand on the Anthropologie website over the past year, the release said. It is also a driver of TikTok engagement. Several of the company’s most “hearted” items online are already from the Maeve label.

“Maeve has emerged as a true driver of growth within Anthropologie’s portfolio,” Anu Narayanan, president of women’s and home at Anthropologie Group, said in the release. “Its consistent performance, combined with our customers’ emotional connection to the brand, made this the right moment to evolve Maeve into a standalone identity.”

While many retailers have retreated from new brand creation, opting instead to consolidate or focus on core labels, Anthropologie’s move suggests confidence in cultivating sizable, engaged consumer communities around sub-brands.

Anthropologie is backing Maeve’s standalone debut with a comprehensive marketing campaign, including influencer-driven content, a new Substack, a launch event in New York, and a charitable partnership, per the release. The first Maeve brick-and-mortar store is set to open in Raleigh, North Carolina, in the fall.

The move comes as the apparel sector in the United States sees shoppers valuing not just price and selection, but brand story, inclusivity and digital experience. While the outcome remains to be seen, Anthropologie’s gamble on Maeve reflects a belief that consumers remain eager to embrace distinctive, thoughtfully curated fashion.
