Fintech

Not Just for Big Tech: SMBs Must Heed EU AI Law, Too | PYMNTS.com

Small and medium-sized businesses (SMBs) developing or using artificial intelligence (AI) systems must comply with the European Union’s AI Act, even if they are not based in the bloc and have no physical presence there.

Experts say that, given the Act’s broad scope, many U.S.-based SMBs could be affected and should evaluate their exposure now. That exposure includes using AI to generate content that is accessed by EU citizens.

A key deadline is Aug. 2, which marks the start of enforcement of the governance rules for general-purpose AI models. The first deadline, Feb. 2, covered the ban on “unacceptable-risk” AI practices, and the final deadline, Aug. 2, 2026, is when most of the remaining rules, including those for high-risk systems, become enforceable.

The AI Act, formally adopted in 2024, is the world’s first comprehensive AI regulation. It introduces a risk-based framework for AI systems placed on the EU market or whose outputs are used in the EU. Failure to comply brings fines of up to 35 million euros ($40 million) or 7% of global annual revenue, whichever is higher, according to the EU.

“The EU AI Act will require compliance by U.S. companies if they do business in the EU — otherwise they risk massive fines,” Robert Harrison, a Europe-based patent lawyer at Sonnenberg Harrison, told PYMNTS. “SMBs cannot simply ignore regulations because the U.S. federal government has different ideas on AI regulation.”

Unlike laws that exempt small businesses, the AI Act bases its scope on the nature of the technology, not company size. “It is a risk-based framework that applies to any company that places, makes available, or uses an AI system in the EU or whose outputs are used in the EU,” Scott Bickley, advisory fellow at Info-Tech Research Group, told PYMNTS.

That means “if you offer an AI product to the EU market, or even if the output of your AI is used in the EU, you’re on the hook,” Wyatt Mayham, founder of Northwest AI Consulting, told PYMNTS.

For example, a U.S.-based marketing firm using artificial intelligence to generate ad copy for a client’s campaign in Germany has to comply, Mayham said.

A simple way to think about it is this: “If you’re building AI that could affect people’s jobs, health or finances, you’ll have to follow tighter rules. But if you’re making tools like chatbots or smart assistants, it’s mostly about being transparent — letting people know they’re talking to AI,” Shay Boloor, chief market strategist at Futurum Equities, told PYMNTS.

See also: European Commission Says It Won’t Delay Implementation of AI Act

How to Comply With the EU AI Act

Mayham said the exact compliance requirements will depend on where SMBs fall in the AI Act’s four risk levels:

  • Unacceptable risk: AI practices that are banned outright, such as social scoring or untargeted facial recognition.
  • High-risk systems: These include AI used in hiring, education, credit scoring or infrastructure. They must comply with the Act’s rules for risk management, data governance, transparency and registration.
  • Limited-risk systems: SMBs using an AI chatbot like ChatGPT or Perplexity to create content, or generating deepfakes, must disclose the use of AI, and the content must be labeled as AI-generated.
  • Minimal-risk systems: These include AI in spam filters or video games. They face no binding rules, but users are encouraged to follow voluntary codes of practice.

Andrew Gamino-Cheong, CTO and co-founder of Trustible, a company that helps enterprises comply with regulations, said SMBs that don’t believe they are building a “high risk” tool should double-check.

“Any SMB building a tool off OpenAI [models] or Claude can still end up being considered a ‘provider’ of a high-risk system and get subjected to its requirements,” Gamino-Cheong told PYMNTS.

Bickley said the EU does provide some relief for smaller businesses:

  • Access to free regulatory sandboxes where SMBs can test AI under supervision without risking full liability.
  • Simplified technical documentation templates for high-risk systems.
  • Reduced conformity assessment fees for smaller companies.
  • Dedicated helplines and training from national supervisory authorities.

Still, “the core requirements are not waived and apply equally to all applicable organizations,” Bickley said.

To get started, Mayham and Bickley recommend the following steps:

  • Audit and classify: SMBs can’t comply if they don’t know what they have. Create an inventory of every artificial intelligence system being used or built and classify its risk level under the Act. There are even free online checkers from groups like the European Digital SME Alliance.
  • Address high-risk systems first: Start building the compliance framework now and document data sources, establish a risk management process, and ensure meaningful human oversight. High-risk obligations start to take effect in 2026.
  • Perform a compliance gap analysis, design a compliance process, implement a quality management system suitable for SMBs (for example, ISO 42001 or NIST AI RMF) and ensure transparency by disclosing the use of AI in chatbots, deepfakes and gen AI outputs.
  • Perform vendor due diligence, including requiring them to provide proof that they comply with the AI Act. Monitor standards and codes of practice in an ongoing manner in case of EU changes.
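The audit-and-classify step above could be sketched as a simple inventory script. This is an illustrative sketch only: the `AISystem` class and the `RISK_TIERS` mapping are hypothetical examples loosely based on the four tiers described in this article, and real classification requires legal review of the Act’s annexes.

```python
# Minimal sketch of an AI-system inventory tagged with risk tiers.
# The use-case-to-tier mapping is an assumption for demonstration only.
from dataclasses import dataclass

# Hypothetical mapping from use case to the Act's four risk tiers
RISK_TIERS = {
    "social_scoring": "unacceptable",
    "hiring": "high",
    "credit_scoring": "high",
    "chatbot": "limited",
    "spam_filter": "minimal",
}

@dataclass
class AISystem:
    name: str
    use_case: str
    vendor: str

    @property
    def risk_tier(self) -> str:
        # Anything not in the mapping needs manual review
        return RISK_TIERS.get(self.use_case, "unclassified")

inventory = [
    AISystem("resume-screener", "hiring", "third-party"),
    AISystem("support-bot", "chatbot", "third-party"),
    AISystem("mail-filter", "spam_filter", "in-house"),
]

# Surface high-risk systems first so compliance work is prioritized
for system in sorted(inventory, key=lambda s: s.risk_tier != "high"):
    print(f"{system.name}: {system.risk_tier} (vendor: {system.vendor})")
```

Even a spreadsheet serves the same purpose; the point is a single, classified register of every AI system in use, which the later gap-analysis and vendor-due-diligence steps depend on.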

Boloor urged SMBs not to view compliance as a burden, but as an opportunity for growth.

“I don’t see the EU AI Act as a death blow for SMBs — it’s more of a filter. You’re not off the hook, but you’re not being crushed either,” Boloor said. “The earlier you learn to play by the rules, the faster you can grow. Big companies want compliant, trusted partners — and if you get ahead of this, that can be you.”

Read more:

OpenAI CEO Sam Altman: EU Regulations Could Limit Access to AI

Homeland Security Head Criticizes EU’s ‘Adversarial’ AI Approach

OpenAI, Australia and EU Each Push Own AI Regulations


SEC Forms Task Force Promoting ‘Responsible AI Integration’

The initiative, announced Monday (Aug. 4), is designed to promote the responsible use of AI while enhancing innovation and efficiency in the Securities and Exchange Commission’s (SEC) operations. Valerie Szczepanik, who has been named the SEC’s chief AI officer, will head the task force.

“Recognizing the transformative potential of AI, the SEC’s AI Task Force will accelerate AI integration to bolster the SEC’s mission,” the regulator said in a news release.

“It will centralize the agency’s efforts and enable internal cross-agency and cross-disciplinary collaboration to navigate the AI lifecycle, remove barriers to progress, focus on AI applications that maximize benefits, and maintain governance. The task force will support innovation from the SEC’s divisions and offices and facilitate responsible AI integration across the agency.”

Before being named the chief AI officer, Szczepanik directed the SEC’s Strategic Hub for Innovation and Financial Technology. She has also served as associate director in the SEC’s Division of Corporation Finance and as a Special Assistant United States Attorney at the United States Attorney’s Office for the Eastern District of New York, according to the release.

The announcement comes two weeks after the White House released a policy roadmap outlining President Trump’s push to keep America in the lead in the global AI race.

“America’s AI Action Plan” follows Trump’s executive order in January that ordered federal agencies to overturn AI regulations put in place by the Biden administration, which focused on oversight and risk mitigation.

“As our global competitors race to exploit these technologies, it is a national security imperative for the United States to achieve and maintain unquestioned and unchallenged global technological dominance,” Trump said in the opening of the AI action plan.

In other AI news, recent research by PYMNTS Intelligence finds that almost all chief product officers (CPOs) expect generative AI to reshape the way they work.

That research showed that nearly all product leaders say AI will streamline workflows within three years, compared to 70% last year. And more than 80% anticipate improvements in data security, compared to half of the CPOs surveyed last year.

“The shift over the past year among CPOs reflects a deeper change in institutional mindset. Gen AI is no longer experimental — it’s strategic,” PYMNTS wrote. “The pressure to deliver more with fewer resources has pushed firms to scale automation of routine, labor-intensive tasks, not just explore how that can be done.”


Experian Unveils New AI Tool for Managing Credit and Risk Models

Experian Assistant for Model Risk Management is designed to help financial institutions better manage the complex credit and risk models they use to decide who gets a loan or how much credit someone should receive. The tool validates models faster and improves their auditability and transparency, according to a Thursday (July 31) press release.

The tool helps speed up the review process by using automation to create documents, check for errors and monitor model performance, helping organizations reduce mistakes and avoid regulatory fines. It can cut internal approval times by up to 70% by streamlining model documentation, the release said.

It is the latest tool to be integrated into Experian’s Ascend platform, which unifies data, analytics and decision tools in one place. Ascend combines Experian’s data with clients’ data to deliver AI-powered insights across the credit lifecycle to do things like fraud detection.

Last month, Experian added Mastercard’s identity verification and fraud prevention technology to the Ascend platform to bolster identity verification services for more than 1,800 Experian customers using Ascend to help them prevent fraud and cybercrime.

The tool is also Experian’s latest AI initiative after it launched its AI assistant in October. The assistant provides a deeper understanding of credit and fraud data at an accelerated pace while optimizing analytical models. It can reduce months of work to days and, in some cases, hours.

Experian said in the Thursday press release that the model risk management tool may help reduce regulatory risks since it will help companies comply with regulations in the United States and the United Kingdom, a process that normally requires a lot of internal paperwork, testing and reviews.

As financial institutions embrace generative AI, the risk management of their credit and risk models must meet regulatory guidelines such as SR 11-7 in the U.S. and SS1/23 in the U.K., the release said. Both aim to ensure models are accurate, well-documented and used responsibly.

SR 11-7 is guidance from the Federal Reserve that outlines expectations for how banks should manage the risks of using models in decision making, including model development, validation and oversight.

Similarly, SS1/23 is the U.K. Prudential Regulation Authority’s supervisory statement that sets out expectations for how U.K. banks and insurers should govern and manage model risk, especially in light of increasing use of AI and machine learning.

Experian’s model risk management tool offers customizable, pre-defined templates, centralized model repositories and transparent internal workflow approvals to help financial institutions meet regulatory requirements, per the release.

“Manual documentation, siloed validations and limited performance model monitoring can increase risk and slow down model deployment,” Vijay Mehta, executive vice president of global solutions and analytics at Experian, said in the release. With this new tool, companies can “create, review and validate documentation quickly and at scale,” giving them a strategic advantage.


Read more:

Experian and Plaid Partner on Cash Flow Data for Lenders

Experian Targets ‘Credit Invisible’ Borrowers With Cashflow Score

CFPB Sues Experian, Alleging Improper Investigations of Consumer Complaints


Anthropologie Elevates Maeve in Rare Retail Brand Launch

Anthropologie is spinning off its Maeve product line as a standalone brand, a rare move in a retail sector where brand extensions have become less common.

The decision reflects shifting strategies among specialty retailers as they work to adapt to changes in women’s fast-fashion and evolving consumer behavior.

Maeve, known for its blend of classic silhouettes and modern flourishes, will now operate independently with dedicated storefronts and separate digital channels, including new social media accounts and editorial content platforms, according to a Monday (Aug. 4) press release. The brand is inclusive, spanning plus, petite, tall and adaptive options, which broaden its reach as the industry contends with demands for representation.

Maeve has nearly 2 million customers and was the most-searched brand on the Anthropologie website over the past year, the release said. It is also a driver of TikTok engagement. Several of the company’s most “hearted” items online are already from the Maeve label.

“Maeve has emerged as a true driver of growth within Anthropologie’s portfolio,” Anu Narayanan, president of women’s and home at Anthropologie Group, said in the release. “Its consistent performance, combined with our customers’ emotional connection to the brand, made this the right moment to evolve Maeve into a standalone identity.”

While many retailers have retreated from new brand creation, opting instead to consolidate or focus on core labels, Anthropologie’s move suggests confidence in cultivating sizable, engaged consumer communities around sub-brands.

Anthropologie is backing Maeve’s standalone debut with a comprehensive marketing campaign, including influencer-driven content, a new Substack, a launch event in New York, and a charitable partnership, per the release. The first Maeve brick-and-mortar store is set to open in Raleigh, North Carolina, in the fall.

The move comes as the apparel sector in the United States sees shoppers valuing not just price and selection, but brand story, inclusivity and digital experience. While the outcome remains to be seen, Anthropologie’s gamble on Maeve reflects a belief that consumers remain eager to embrace distinctive, thoughtfully curated fashion.
