Meet the new tech laws of 2026

As usual, 2025 was a year of deep congressional dysfunction in the US. But state legislatures were busy passing laws that govern everything from AI to social media to the right to repair. Many of these laws, alongside rules passed in previous years, take effect in 2026 — either right now or in the coming months.
As of January 1st, Americans should have the right to crypto ATM refunds in Colorado, wide-ranging electronics repairs in Colorado and Washington, and AI system transparency in California, among other things. But a last-minute court ruling offered a reprieve from one high-profile state law: Texas’ App Store-based age verification rule.
For a longer rundown of tech-related regulations that go into force in 2026 — including a major piece of one federal law, the Take It Down Act — read on.
California: AI transparency, chatbots, and more
California passed a package of AI-related rules last year. The most prominent is SB 53: a transparency law that requires major AI companies to publish safety and security details and protects whistleblowers. It’s a revised version of SB 1047, which Gov. Gavin Newsom vetoed after a heated fight in 2024, and it goes into effect on January 1st, 2026.
Several other bills deal with more specific implementations of AI. Among them is SB 243, one of the first regulations on so-called companion chatbots, requiring them to maintain protocols for preventing suicidal ideation and self-harm, as well as remind known underage users every few hours that the system isn’t human. SB 524, another of the bills, requires law enforcement agencies to “conspicuously” disclose how they use AI.
All this has set up California as a test case for how far state AI laws can push, especially as Donald Trump’s administration aims to ban them altogether. That fight, too, is poised to play out in 2026.
Colorado: Right to repair and crypto ATMs
Colorado passed one of the country’s most comprehensive right-to-repair rules in 2024, requiring manufacturers to facilitate repairs on a large swath of electronic devices. That law, HB24-1121, will finally kick in this year. The state is also adding consumer protections to a major fraud vector: cryptocurrency ATMs, which — because they let users convert fiat money into crypto and send it to an anonymous wallet — reportedly helped scammers extract hundreds of millions of dollars from victims this year. SB25-079 requires daily transaction limits for new and existing customers, plus refund options for first-time users who transfer money outside the US — a major signal that they may have been duped by a scam.
Idaho: Speech protections
Idaho joins the long list of states with laws combating strategic lawsuits against public participation, or anti-SLAPP laws, with SB 1001. While this isn’t technically a tech law, SLAPP suits have been a key weapon of tech billionaires like Elon Musk, and limiting them helps prevent what can amount to online censorship. A much-needed federal law remains nowhere to be seen.
Illinois: Public officials’ privacy
Starting this year, Illinois will restrict sharing personal information of public officials at their request. HB 576 covers general assembly members and former members, public defenders, and county clerks, among others, and the covered information includes home addresses, home phone numbers, personal email addresses, and the identity of children under 18. The goal is preventing harassment — an increasingly prominent issue — as officials “administer their public duties.”
Indiana: Data privacy
Data privacy is another area long neglected by Congress but taken up by states, with highly mixed results. Indiana’s Consumer Data Protection Act aims to provide a “data consumer bill of rights” that includes obtaining, correcting, and deleting personal information a company holds about you. But data privacy and consumer protection groups have denounced the law as toothless — a 2025 privacy report card by PIRG and the Electronic Privacy Information Center (EPIC) gave it an F.
Kentucky: Data privacy
HB 15 is another data privacy framework that failed the 2025 PIRG/EPIC evaluation. Kentucky and Indiana both fall under what that report dubs the “Virginia model”: a framework they allege lets companies “continue collecting whatever data they wanted as long [as] they disclosed it somewhere in a privacy policy,” while making opt-outs onerous.
Maine: Subscription cancellations
As with so many regulations, the federal rule banning difficult-to-cancel subscriptions is in legal hell, but some states have been stepping up. Maine is joining them with LD 1642, a rule modeled on the FTC standard — which means, among other things, making companies disclose the terms of subscriptions and offer a cancellation method as simple as the system for signing up.
Nebraska: Age-appropriate design
LB 504 is one of multiple state-level “age-appropriate design” rules — it restricts app features like notifications, in-game purchases, and infinite scrolling for children, aiming to combat compulsive use by stopping “dark patterns” that keep kids online. A similar code was blocked in California, however, so a legal challenge could materialize later this year.
Nevada: AI election ads
With AB 73, Nevada joins the slew of states trying to curb undisclosed AI-powered electioneering. Its disclosure rules include letting candidates sue if they find themselves starring in unwelcome, unlabeled AI-generated ads.
Oklahoma: Data breach notifications
Oklahoma is broadening the scope of its data breach notification rules with SB 626, including by expanding them to cover biometric data and offering some new safe harbors for avoiding legal damages.
Oregon: Deepfakes, data privacy, and ticket scalpers
HB 2299 adds AI-generated (or otherwise digitally manipulated) imagery to Oregon’s ban on nonconsensual sexual imagery — a move seen in nearly every state since 2019. HB 2008 bans data collectors from selling personal information and targeting ads using data from users they know are under 16, while adding a similar all-ages ban for precise geolocation data. And HB 3167 bans the sale of software designed to facilitate ticket-scalping bots, addressing a maddening problem the Federal Trade Commission focused on in 2025 as well.
Rhode Island: Data privacy
Rhode Island’s HB 7787, the Rhode Island Data Transparency and Privacy Protection Act, includes rules that require disclosure of how personal information is collected and sold. It rounds out the trifecta of “Virginia model” rules that failed a privacy evaluation and take effect this year.
Texas: AI rules — but not App Store age verification (yet)
Mere weeks ago, Texas was set to implement a new form of online age-gating: requiring app stores to check users’ ages and pass that information to app developers. But a district court granted a preliminary injunction blocking SB 2420. The law remains worth watching, however, because Texas will likely appeal to the Fifth Circuit — which is notorious for reversing lower court decisions on internet regulation.
Texas is enacting HB 149, an AI regulatory framework that prohibits using the technology to incite harm, capture biometric identifiers, or discriminate based on characteristics like race and gender or “political viewpoint.” That’s going to be another test of the Trump administration’s plan to repeal state-level AI laws, highlighting a split between state and federal Republicans on AI.
Virginia: Social media time limits
If you’re under 16 years old in Virginia, your screen time may have just been drastically reduced. SB 854 requires social media companies to verify users’ ages and limit younger teens to one hour of use per app per day. A parent can choose to increase or decrease that limit. Like many internet regulations, this one is being challenged in court, so its ultimate fate remains undecided.
Washington: Right to repair
Washington passed a pair of right-to-repair laws, HB 1483 and SB 5680, in 2025. As iFixit explains, they require companies to make repair materials available for most consumer electronics, block parts pairing, and provide specific protections for wheelchair users.
New York: The RAISE Act
The RAISE Act has been touted as a landmark AI law that would require large model developers to follow new safety and transparency rules. But it was significantly stripped down at the last minute, lessening its likely impact. Regardless, it’ll take effect on March 19th, 90 days after being signed late last year.
Michigan: Anti-SLAPP and Taylor Swift
Michigan is another state getting a new anti-SLAPP law — HB 4045 — as of March 24th. On the same date, it’s effectuating a package of rules known as the “Taylor Swift” bills, targeting ticket bots and modeled on the federal BOTS Act.
Federal: The Take It Down Act
The Take It Down Act criminalized AI-generated nonconsensual intimate imagery distribution at a federal level in 2025, a change groups like the Cyber Civil Rights Initiative (CCRI) called long overdue. But in the words of CCRI president Mary Anne Franks, it included a “poison pill” with a broad, ambiguous requirement that online platforms remove such images rapidly, raising concerns about censorship and enforcement. That platform takedown provision came with a one-year enforcement delay that will expire on May 19th — so we’ll soon figure out how effective (or disruptive) it actually is.
Utah: App Store age verification
Utah’s App Store Accountability Act, SB 142, technically took effect last year. But app stores were given until May 6th of 2026 to start verifying users’ ages with “commercially available methods” and require parental consent if they detect minors. One final piece — letting minors or their parents sue for damages if app stores don’t comply — will take effect on December 31st.
Colorado: AI discrimination
Colorado’s SB 24-205 is a named target of the Trump administration’s war on state AI laws. It requires AI companies to disclose information about high-risk systems, and more specifically, take “reasonable care to protect consumers” from algorithmic discrimination. Originally slated for February, it’s now set to take effect June 30th instead.
Arkansas: Children’s privacy
HB 1717 is a children’s data privacy rule similar to the federal COPPA law and the proposed COPPA 2.0, barring online services from collecting unnecessary personal data if they’re aimed at minors or know a user is underage. It takes effect July 1st.
Utah: Social media data portability
Utah’s HB 418, dubbed the Digital Choice Act, aims to make social media networks less sticky by letting you move data between them. A writeup from Harvard’s Ash Center explains the nuances, but broadly, it requires social media companies to implement open protocols that allow users to share personal data across different services. Europe has mandated data portability for years and the results haven’t been revolutionary, but there’s still a chance it could promote more competition on a centralized web. Its enforcement date is also July 1st.
California: AI detection tools
Did you think we were done with California AI laws? Well, a delay pushed back the original January goalpost for SB 942, which requires the government to develop standards for AI detection systems and requires covered providers to make such tools available. Now its first provisions kick in on August 2nd, with additional requirements for companies taking effect in 2027 and 2028. It’s taking on a serious issue, but also an incredibly messy one — and like other rules, it depends on preserving the right to state-level AI laws.
Poor implementation of AI may be behind workforce reduction
Many organisations are eroding the foundations of business – productivity, competitiveness, and efficiency – through poor implementation of human-AI collaboration, according to cloud data and AI consultancy Datatonic. The company says that in the next phase of enterprise AI, success will come from carefully governed and designed AI that works alongside humans in “human-in-the-loop” (HiTL) systems.
The company’s research shows that companies that fail to embed AI into their human workflows are falling behind the competition as productivity slows down. Datatonic says a hybrid human-AI approach speeds up decision-making, thus improving overall operations. Scott Eivers, CEO of Datatonic, says: “AI [is] about redesigning how work gets done. The biggest risk we see in the market is productivity leakage when AI exists in isolation from the people who actually run the business.”
After years of AI investment, pressure is mounting on businesses to show returns. However, some research shows initiatives remaining in the pilot stage due to limited trust among users. As a result, organisations are failing to use AI-powered insights to positively affect decisions and workflows, meaning efficiency gains never materialise.
According to Datatonic, HiTL models are crucial for future success, combining AI speed with human judgement and accountability. This is evident in agent-assisted software development, where AI systems transform loose prompts into working code. In this case, human teams decide what needs to be developed, inspect all requirements, and review plans before anything is built. Once this direction is clear, AI agents construct modular components.
The trend for AI in the workplace is starting to appear in finance and operations. For instance, in back-office and finance departments, AI-powered document processing is already delivering a 70% reduction in invoice-processing costs according to some estimates, but finance teams still approve the final outcomes.
“They’re partnership stories,” says Andrew Harding, CTO of Datatonic. “Humans create evaluation systems, validate plans, set guardrails, and make decisions. AI executes at speed and scale. That combination is where real enterprise value shows up.”
Many enterprises are failing to deploy fully autonomous agents safely, according to Datatonic, with shortfalls in security controls and governance frameworks. Autonomy can only scale when organisations introduce approval checkpoints and benchmark performance standards. Evaluation systems must also be implemented as AI models evolve, ensuring they always operate safely and as intended without violating any compliance obligations.
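The approval-checkpoint pattern described above can be sketched as a simple gate: an agent drafts work, a governance policy assigns a risk level, and high-risk output waits for a person. The names and the toy risk policy below are illustrative assumptions, not Datatonic's implementation.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    """An action an AI agent wants to take, e.g. generated code or a plan."""
    description: str
    risk: str  # "low" or "high", assigned by a governance policy

def agent_propose(task: str) -> Proposal:
    # Stand-in for an AI agent drafting work from a task description.
    # Toy policy: anything touching payments is treated as high risk.
    risk = "high" if "payment" in task else "low"
    return Proposal(description=f"Draft for: {task}", risk=risk)

def human_review(p: Proposal) -> bool:
    # Stand-in for a human checkpoint; a real system would queue this
    # for an actual reviewer rather than auto-deciding.
    return p.risk == "low"

def run_with_checkpoint(task: str) -> str:
    p = agent_propose(task)
    if p.risk == "high" and not human_review(p):
        return "escalated"  # blocked until a person approves
    return "executed"
```

As trust and evaluation data accumulate, the toy policy in `agent_propose` is the piece an organisation would tighten or relax.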
Harding says, “As trust builds, companies can responsibly delegate more to AI. But skipping governance doesn’t build speed, it creates risk.”
Datatonic predicts major acceleration in workloads in the next two years, with preparation and validation handled by AI agents. AI systems may also be implemented to test and invalidate decisions before teams invest resources.
Scott Eivers believes the future “looks like expert departments run by smaller, nimble teams – finance, HR, marketing – each amplified by AI. The companies that win will be those that teach people to work with AI — not around it.”
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.
AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.
Upgrading agentic AI for finance workflows
Improving trust in agentic AI for finance workflows remains a major priority for technology leaders today.
Over the past two years, enterprises have rushed to put automated agents into real workflows, spanning customer support and back-office operations. These tools excel at retrieving information, yet they often struggle to provide consistent and explainable reasoning during multi-step scenarios.
Solving the automation opacity problem
Financial institutions especially rely on massive volumes of unstructured data to inform investment memos, conduct root-cause investigations, and run compliance checks. When agents handle these tasks, any failure to trace exact logic can lead to severe regulatory fines or poor asset allocation. Technology executives often find that adding more agents creates more complexity than value without better orchestration.
Open-source AI laboratory Sentient launched Arena today: a live, production-grade stress-testing environment that lets developers evaluate competing agent approaches against demanding reasoning problems.
Sentient’s system replicates the reality of corporate workflows, deliberately feeding agents incomplete information, ambiguous instructions, and conflicting sources. Instead of scoring whether a tool generated a correct output, the platform records the full reasoning trace to help engineering teams debug failures over time.
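Recording a full reasoning trace, as opposed to scoring only final answers, can be approximated with a small wrapper that logs each step's inputs and outputs for later replay. This is a generic sketch of the idea, not Sentient's actual API.

```python
import time

def traced(step_name, fn, trace):
    # Wrap one agent step so its inputs and output land in a replayable log.
    def wrapper(*args):
        out = fn(*args)
        trace.append({"step": step_name, "inputs": args,
                      "output": out, "ts": time.time()})
        return out
    return wrapper

# Two toy agent steps: retrieval, then summarisation.
trace = []
retrieve = traced("retrieve", lambda q: f"docs for {q}", trace)
summarise = traced("summarise", lambda d: d.upper(), trace)
summarise(retrieve("rate risk"))
# `trace` now shows which step ran, with what input, producing what output.
```

When a multi-step run fails, a log like this is what lets an engineer see where the reasoning went wrong instead of only seeing a bad final answer.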
Building reliable agentic AI systems for finance
The push to evaluate these capabilities before production deployment has attracted no shortage of institutional interest. Sentient has partnered with a cohort including Founders Fund, Pantera, and asset management giant Franklin Templeton, which oversees more than $1.5 trillion. Other participants in the initial phase include alphaXiv, Fireworks, Openhands, and OpenRouter.
Julian Love, Managing Principal at Franklin Templeton Digital Assets, said: “As companies look to apply AI agents across research, operations, and client-facing workflows, the question is no longer whether these systems are powerful or if they can generate an answer, but whether they’re reliable in real workflows.
“A sandbox environment like Arena – where agents are tested on real, complex workflows, and their reasoning can be inspected – will help the ecosystem separate promising ideas from production-ready capabilities and boost confidence in how this technology is integrated and scaled.”
Himanshu Tyagi, Co-Founder of Sentient, added: “AI agents are no longer an experiment inside the enterprise; they’re being put into workflows that touch customers, money, and operational outcomes.
“That shift changes what matters. It’s not enough for a system to be impressive in a demo. Enterprises need to know whether agents can reason reliably in production, where failures are expensive, and trust is fragile.”
Organisations in sensitive industries like finance require repeatability, comparability, and a method to track reliability improvements regardless of the underlying models they use for agentic AI. Incorporating platforms like Arena allows engineering directors to build resilient data pipelines while adapting open-source agent capabilities to their private internal data.
Overcoming integration bottlenecks
Survey data highlights a gap between ambition and reality. While 85 percent of businesses want to operate as agentic enterprises – and nearly three-quarters plan to deploy autonomous agents – fewer than a quarter possess mature governance frameworks.
Advancing from a pilot phase to full scale proves difficult for many, in part because current corporate environments run an average of twelve separate agents, frequently in silos.
Open-source development models offer a path forward by providing infrastructure that enables faster experimentation. Sentient itself acts as the architect behind frameworks like ROMA and the Dobby open-source model to assist with these coordination efforts.
Focusing on computational transparency ensures that when an automated process makes a recommendation on a portfolio, human auditors can track exactly how that conclusion was reached.
By prioritising environments that record full logic traces rather than isolated right answers, technology leaders integrating agentic AI for operations like finance can secure better ROI and maintain regulatory compliance across their business.
See also: Goldman Sachs and Deutsche Bank test agentic AI for trade surveillance
Goldman Sachs and Deutsche Bank test agentic AI for trade surveillance
Banks are testing a new type of artificial intelligence, known as agentic AI, that does more than scan for keywords or follow preset rules. Instead of relying only on static alerts, some trading desks are beginning to use systems designed to reason through patterns in real time and flag conduct that may need human review.
Bloomberg detailed how Goldman Sachs and Deutsche Bank are exploring or deploying so-called “agentic” AI tools for trading surveillance. The goal is to strengthen oversight of orders and trades by using software agents that can analyse activity as it happens and identify patterns that could suggest misconduct.
Adaptive agents
Large banks use automated surveillance systems to monitor trading activity, systems that often rely on predefined rules: if a trade exceeds a certain size, deviates from a benchmark, or fits a known risk pattern, it triggers an alert. Compliance teams then review the case manually.
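A static rule engine of that kind reduces to a handful of threshold checks. The thresholds and field names below are invented for illustration; real surveillance rules are far more numerous and calibrated.

```python
def rule_based_alerts(trade: dict) -> list[str]:
    # Flag a trade against fixed rules, as traditional surveillance does.
    alerts = []
    if trade["size"] > 1_000_000:  # hypothetical size threshold
        alerts.append("large-trade")
    deviation = abs(trade["price"] - trade["benchmark"]) / trade["benchmark"]
    if deviation > 0.05:  # hypothetical 5% benchmark-deviation rule
        alerts.append("benchmark-deviation")
    if trade.get("pattern") in {"wash", "spoofing"}:  # known risk patterns
        alerts.append("known-risk-pattern")
    return alerts  # compliance staff review anything returned here
```

Anything these fixed checks miss never surfaces at all, which is the gap more adaptive systems are meant to close.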
The challenge is scale and complexity. Modern markets generate huge volumes of data across asset classes, time zones, and trading venues. Static rules can generate large numbers of false positives, while more subtle forms of manipulation may not match known patterns.
According to Bloomberg, the newer agentic systems aim to go beyond that approach. Rather than simply matching trades against a checklist, the AI agents are designed to examine trading behaviour across multiple signals, compare it with historical activity, and detect unusual combinations of actions.
The tools are not described as replacing compliance officers. Instead, they appear to function as an additional layer of monitoring, surfacing cases that warrant closer human inspection.
Deutsche Bank’s work with Google Cloud
Bloomberg reported that Deutsche Bank is working with Google Cloud on developing AI agents that can monitor trading activity. The system is designed to review large sets of order and execution data and flag anomalies in near real time.
The bank has been expanding its AI initiatives over the past few years, and this surveillance effort reflects how financial institutions are applying generative and large language model technology beyond chat interfaces. In this context, the AI is not answering customer questions but analysing structured and unstructured data streams tied to trading behaviour. The AI agents can help identify “complex anomalies” in orders and trades. That suggests the system may look at relationships between trades, timing, market conditions, and trader history, rather than single events in isolation.
Human compliance staff remain responsible for reviewing flagged cases and determining whether further action is required.
Goldman Sachs’ agentic AI strategy
Goldman Sachs is also exploring the use of agentic AI for surveillance, according to Bloomberg. The bank has invested heavily in AI in its trading and risk systems in recent years, and this effort appears to extend that work into compliance.
The focus, as described in the report, is on using AI agents that can operate with a degree of independence in scanning for misconduct indicators. The system may identify patterns that do not fit a clear rule but still stand out as unusual.
For regulators, the appeal is straightforward: earlier detection can reduce market harm and reputational risk. For banks, there is also an operational dimension. Compliance departments face pressure to handle large volumes of alerts while maintaining strict oversight standards. Tools that can reduce noise without lowering scrutiny are likely to attract attention.
Why “agentic AI” matters
The term “agentic AI” refers to systems that can take goal-directed actions rather than merely respond to prompts. In practice, that can mean the software is able to decide what data to examine next, compare multiple signals, and escalate findings without constant human input. In a trading context, that might involve monitoring order flows, price movements, communications metadata, and historical behaviour to assess whether activity aligns with normal patterns.
This does not mean the system makes disciplinary decisions on its own. Financial institutions operate under strict regulatory regimes, and accountability remains with human supervisors. The agent’s role is to identify and organise information more effectively than static systems can.
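That division of labour, in which the agent chooses what to examine next and escalates on its own while humans keep the final call, can be sketched as a minimal loop. Every name and signal here is hypothetical.

```python
def agentic_review(order_id, signals, fetch, is_unusual):
    # Goal-directed loop: examine signals in turn and escalate on its own,
    # but only surface evidence for a human reviewer; it takes no further action.
    findings = {}
    for signal in signals:  # e.g. order flow, price moves, trader history
        findings[signal] = fetch(order_id, signal)
        if is_unusual(signal, findings[signal]):
            return {"action": "escalate", "evidence": findings}
    return {"action": "no-action", "evidence": findings}
```

The key design choice is that `escalate` returns the accumulated evidence rather than a verdict, keeping accountability with the human supervisor.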
Part of a wider compliance shift
Automated surveillance itself is not novel; banks have relied on rule-based monitoring for years. What appears new is the application of more advanced generative AI architectures to internal control functions.
Regulators in the US and Europe have encouraged firms to improve the monitoring of market abuse and manipulation. While rules do not mandate agentic AI, they do require firms to maintain effective systems and controls. If AI tools can help meet that standard, adoption is likely to grow.
At the same time, AI in compliance raises its own questions. Banks must ensure that models are explainable, that they do not introduce bias, and that they can withstand regulatory review. Model governance, data security, and audit trails remain central concerns.
What changes for the industry
If agentic surveillance tools prove effective, they could alter how compliance teams work. Instead of sorting through large volumes of simple alerts, staff may spend more time evaluating complex cases surfaced by AI agents.
That change would not remove the need for human judgement. It may, however, change where human effort is focused. In markets where speed and data volume continue to rise, the ability to analyse patterns in real time is becoming harder to achieve with rule-based systems alone.
See also: Mastercard’s AI payment demo points to agent-led commerce