Hello and welcome back to Regulator. It’s been a very long two weeks away from your inboxes, but luckily for us, Big Tech and Big Government did not stop fighting. In fact, it’s gotten even spicier. Let’s get into it.
What the leaked AI executive order tells us about the Big Tech power grab
Last week, I was following up on several rumors that Donald Trump would sign an executive order that would fulfill a longstanding goal of the AI industry: legal preemption that would prevent states from passing their own AI laws. Mostly, I was calling sources trying to get a sense of how the Trump administration planned to approach it: Which agency would be spearheading it? What legal arguments would they use? How would it interact with Congress, which was trying to pass a similar moratorium in the National Defense Authorization Act?
And then I got a copy of the draft order itself — possibly a sign that someone in the administration deeply, deeply loathes David Sacks, Trump’s Special Advisor for AI and Crypto. Even though he’s not a permanent government employee — he is, in fact, a billionaire tech venture capitalist with a provisional employment status similar to the one Elon Musk previously held — Sacks has become deeply influential in setting the administration’s AI and crypto policies. (Just look at Trump’s recent statements about federal AI preemption.)
Leaks rarely come out of Donald Trump’s White House these days, especially compared to his previous term. Back then, everyone in the administration was trying to undermine everyone else, dropping juicy, scandalous tidbits to their favored reporters on a near-hourly basis. In that first term, the career, norm-following officials of the federal bureaucracy were anxiously texting journalists about the chaos they’d been preventing. Conservative media figures were eagerly telling reporters about their late-night phone calls with Trump. And the Fifth Avenue New Yorkers were fighting the Steve Bannon–led populists, who were fighting the establishment Republicans, who were fighting the Democrats, who were fighting each other.
There are fewer leaks this time around, primarily because the administration is built around blind loyalty to the president. DOGE gutted the bureaucracy and Trump hired people who would always tell him yes. But on that rare occasion that an actual document leaks out of the administration, it’s a sign that someone has carefully weighed the risks of undermining their enemy, considered the cost of doing so, and decided they hated them enough to do it anyway.
Which brings us to this draft executive order. If it had been signed, it might not have been an outright ban on state AI laws, but it would have given the executive branch the power to strongly dissuade states from writing or enforcing their own. Below the break, I walk through the policy implications of the order with Charlie Bullock, a senior research fellow at the Institute for Law and AI, who pointed out all the ways the government would be able to punish states for trying to regulate AI: suing them, withholding billions of dollars in federal funding, and hitting them with FTC fines.
While the order itself might eventually be found illegal, he noted, the order would make it painful for a state to fight back: “A state that really needs broadband funding, for example, could say, It might take a long time for us to get our funding. Even if it can win a court case to make them give us that funding eventually, it would take a long time. States might be incentivized not to pass legislation contrary to the policy of the order.”
But politically, the second Trump administration takes a shoot-first, ask-questions-about-legality-later approach to executive orders. Other equally overbroad orders haven’t leaked prior to their signing, and there was nothing spiritually different about this one, save for who had been empowered most: every directive included language requiring the government to consult the Special Advisor for AI and Crypto, who happens to be a certain tech billionaire from outside the political world. And that power play was clearly enough for someone to break Trumpworld omerta.
We’re going to go into the parameters of the power grab below, but before we do, here’s the latest from The Verge:
- “The new silicon valley (literally)”, Justine Calma: Is the promise of jobs worth all the water and chemicals it takes to manufacture chips in the Arizona desert?
- “Google’s Nano Banana Pro generates excellent conspiracy fuel”, Robert Hart and Thomas Ricker: We easily created images related to the JFK assassination, 9/11, and Mickey Mouse.
- “The music industry is all in on AI”, Mia Sato: After months of fighting about whether AI has a place in music, major labels have settled lawsuits and struck deals with startups.
- “The FCC is rolling back steps meant to stop a repeat of a massive telecom hack”, Lauren Feiner: The agency is set to vote to undo the actions it took after the Salt Typhoon breach.
- “UN climate negotiations burned up and then fizzled out”, Justine Calma: The wildest UN climate conference in years went out with a whimper.
- “I looked into CoreWeave and the abyss gazed back”, Elizabeth Lopatto: Meet the company Nvidia is propping up.
“I suspect that if it’s effective, it’ll probably be by having a chilling effect on state legislation”
This interview has been edited for clarity.
Tina Nguyen: This order is supposed to, somehow, implement a moratorium on state AI regulation. We’ve seen attempts to do that via legislation that have clearly failed, and will probably fail again. How effective would this order, as it stands, be in actually making a moratorium happen?
Charlie Bullock: In my opinion, this order cannot make a moratorium happen — literally. It’s an executive order. An executive order is not congressional legislation. Executive orders can do a number of things, but mostly what they do is they announce the policy, goals, and opinions of the executive, of the president, and direct executive branch agencies to take actions.
So the way that this executive order tries to implement preemption, you could say, is that it tells the Department of Justice to establish a task force to sue states over their AI laws. An executive order could do that, because the Department of Justice is within the executive branch.
What it cannot do is announce, Okay, state AI laws are now preempted. An executive order cannot unilaterally override state laws. It can announce executive actions that might, in effect, stop states from promulgating new AI legislation or, in theory, invalidate existing AI legislation, but it cannot simply override it.
Now, as for how effective it will be in what it tries to do, it’s difficult to predict how the future will go, especially in situations that are legally complex. I suspect that if it’s effective, it’ll probably be by having a chilling effect on state legislation. For example, section 5 of the order regards restrictions on state funding. They’re announcing that they’re going to attempt to withhold various federal grant funds for the states that would otherwise get them, if those states have so-called “onerous AI laws” — laws that are contrary to the policy positions expressed in the order. That does not directly preempt state laws or get rid of any state laws. But a state that really needs broadband funding, for example, could say, It might take a long time for us to get our funding. Even if it can win a court case to make them give us that funding eventually, it would take a long time. States might be incentivized not to pass legislation contrary to the policy of the order.
So framing it more as a chilling effect rather than “no more state AI laws” is the correct way of looking at this.
Yes. In theory, this AI litigation task force could find some great legal argument, go to court, and get a lot of state AI laws knocked out. But the arguments specifically mentioned in the executive order are pretty weak, in my opinion, and unlikely to succeed in court if states fight back. States like California have a strong interest in regulating AI and political incentives to oppose the Trump administration in various ways. If they fight it in court, I think that, at least on the legal theories mentioned in the executive order, the task force would not succeed.
The FCC’s inclusion seems to have sparked a new discussion of whether telecom policy has any influence on AI law.
So section 6 says that the chairman of the FCC shall, in consultation with David Sacks, initiate a proceeding to determine whether to adopt federal reporting and disclosure standards that preempt conflicting state laws. Now, there’s nothing illegal about that, because all you’re doing is initiating a proceeding to determine whether to do something. But if the FCC actually tried to promulgate a federal reporting and disclosure standard for AI models that preempted existing state laws, it would have to have some legal authority, probably congressionally granted, to do that.
As far as I know, and as far as the people with experience in telecom law who have written about this know, the FCC just does not have the authority to regulate AI. It’s not clear exactly what legal theory they’re relying on there. We have some past statements by the FCC chair and various people involved in the FCC that sort of indicate they think they have some regulatory authority over AI, but as far as I know, they do not. So they’re not legally going to be able to promulgate that standard and have it be effective, or use it to preempt state laws that conflict with it.
Can we talk about what the FTC is being empowered to do here? It sounds like they’ve been granted an ideological, anti-woke enforcement mechanism, but you’re able to describe it in a much more informed manner.
The FTC Act prohibits unfair or deceptive acts or practices concerning commerce; it’s a consumer protection provision, essentially. The FTC can do enforcement to prevent things like scams and false advertising. I’m really not an expert on FTC Act stuff or on algorithmic discrimination laws, but as far as I know, this is the first time anyone’s attempted anything like this to preempt state laws for this kind of reason. They’re basically saying: algorithmic discrimination laws like Colorado’s, and presumably other so-called woke AI laws, are deceptive because they require models to produce untruthful outputs in some way, or something like that. And because they require alterations that make AI models produce untruthful outputs, they’re preempted by the FTC Act’s prohibition on engaging in deceptive acts concerning commerce.
Again, I’m not an expert on this area, so I don’t know how plausible that argument is. I’m not gonna speculate on whether it’ll succeed in court, if it’s challenged or something. But I’ve never seen anything like it before.
The most power in this executive order is given to the Secretary of Commerce. And it looks like there are two things he’s being directed to do: in section 4, they want to evaluate any state laws that are inconsistent with what Trump wants. And then section 5 is determining which states could get their BEAD [Broadband Equity, Access, and Deployment program] funding pulled. Is that a correct way of describing it, and how easily could those two be stitched together into something quasi-legal?
So section 4 has no real, substantial legal effect on states on its own, but it’s asking the Commerce Department, in consultation with a bunch of other figures in the White House, to publish a list identifying the bad laws, the “onerous laws,” the laws that conflict with the policy set out in the EO.
That list is going to inform a lot of the other substantive parts of the order. Section 5’s restrictions on state funding: which states have laws that are on the bad list? Section 3: okay, we’re going to sue states for having bad laws, so which laws are bad? The task force is going to decide, but they can also look at the bad list and that can tell them. Likewise, there’s no requirement that the FCC limit itself to the laws from section 4; it might also try to preempt other laws.
Is there one detail in this EO that’s been overlooked and that you’d like to highlight?
So you know how the section 5 restrictions on state funding are split into two subsections? Section 5(a) instructs Commerce to withhold all the non-deployment funding from BEAD from states with laws identified as onerous. That’s about $42.45 billion that the states are supposed to get. And then section 5(b) is really interesting. It instructs all other agencies to review all of their discretionary grants and see if any of them can be withheld from states with “onerous AI laws.”
So not just BEAD — anything could fall into this bucket?
All federal discretionary grant funds, and that’s a ton of money. Hundreds of billions of dollars. There was recently a different legal fight over highway funding: the way that the federal government pays for interstate maintenance is that it gives grant money to states, and then the states use it to fix roads or whatever. There was recently an attempt by the Trump administration to add a condition to all the Department of Transportation grants, saying: you have to help the federal government enforce immigration law if you take this money. And states sued, and they were successful, at least at the district court level, the lowest court level. We’ll see what happens on appeal, but they were successful in saying, this is unconstitutional, and it’s unlawful. You can’t impose this requirement on us.
Essentially what the [AI draft order] is saying is, Those highway grants, any education grants, all discretionary funding, the ton of areas where the federal government gives money to the states: we’re gonna look at all of them and see if we can legally withhold any of them from you if you have bad AI laws. So there’s potentially tons of money that a lot of states want, and even if they succeed in suing, it could take a while for them to get that money. Even the delay in receiving funding could be impactful.
I may have been out for two weeks, but I wasn’t living in a news cave. In fact, I was eagerly mainlining two huge stories: New York City mayor-elect Zohran Mamdani meeting Trump in the Oval Office, and Rep. Marjorie Taylor Greene (R-GA) breaking from Trump and announcing she would leave Congress in January.
I also like vintage memes.
Ronnie Sheth, CEO, SENEN Group: Why now is the time for enterprise AI to ‘get practical’
Before you set sail on your AI journey, always check the state of your data – because if there is one thing likely to sink your ship, it is data quality.
Gartner estimates that poor data quality costs organisations an average of $12.9 million each year in wasted resources and lost opportunities. That’s the bad news. The good news is that organisations are increasingly understanding the importance of their data quality – and less likely to fall into this trap.
That’s the view of Ronnie Sheth, CEO of AI strategy, execution and governance firm SENEN Group. The company focuses on data and AI advisory, operationalisation and literacy, and Sheth notes she has been in the data and AI space ‘ever since [she] was a corporate baby’, so there is plenty of real-world experience behind the viewpoint. There is also plenty of success; Sheth notes that her company has a 99.99% client repeat rate.
“If I were to be very practical, the one thing I’ve noticed is companies jump into adopting AI before they’re ready,” says Sheth. Companies, she notes, will have an executive direction insisting they adopt AI, but without a blueprint or roadmap to accompany it. The result may be impressive user numbers, but with no measurable outcome to back anything up.
Even as recently as 2024, Sheth saw many organisations struggling because their data was ‘nowhere where it needed to be.’ “Not even close,” she adds. Now, the conversation has turned more practical and strategic. Companies are realising this, and coming to SENEN Group initially to get help with their data, rather than wanting to adopt AI immediately.
“When companies like that come to us, the first course of order is really fixing their data,” says Sheth. “The next course of order is getting to their AI model. They are building a strong foundation for any AI initiative that comes after that.
“Once they fix their data, they can build as many AI models as they want, and they can have as many AI solutions as they want, and they will get accurate outputs because now they have a strong foundation,” Sheth adds.
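To make that concrete, here is a minimal sketch of the kind of data-quality baseline an organisation might run before any AI initiative: completeness, duplicates, and freshness. The dataframe and column names are invented for illustration, not SENEN Group’s actual methodology.

```python
# Hypothetical pre-AI data-quality baseline: completeness, duplicates, freshness.
import pandas as pd

df = pd.DataFrame({
    "customer_id": [1, 2, 2, 4, None],
    "signup_date": pd.to_datetime(
        ["2024-01-05", "2023-07-19", "2023-07-19", None, "2022-03-30"]),
})

null_rate = df.isna().mean()                 # share of missing values per column
duplicate_rows = int(df.duplicated().sum())  # exact duplicate records
age_days = (pd.Timestamp.today() - df["signup_date"]).dt.days

print("Null rate per column:", null_rate.to_dict())
print("Duplicate rows:", duplicate_rows)
print("Oldest record (days):", int(age_days.max()))
```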
With breadth and depth in expertise, SENEN Group allows organisations to right their course. Sheth notes the example of one customer who came to them wanting a data governance initiative. Ultimately, it was the data strategy which was needed – the why and how, the outcomes of what they were trying to do with their data – before adding in governance and providing a roadmap for an operating model. “They’ve moved from raw data to descriptive analytics, moving into predictive analytics, and now we’re actually setting up an AI strategy for them,” says Sheth.
It is this attitude and requirement for practical initiatives which will be the cornerstone of Sheth’s discussion at AI & Big Data Expo Global in London this week. “Now would be the time to get practical with AI, especially enterprise AI adoption, and not think about ‘look, we’re going to innovate, we’re going to do pilots, we’re going to experiment,’” says Sheth. “Now is not the time to do that. Now is the time to get practical, to get AI to value. This is the year to do that in the enterprise.”
Apptio: Why scaling intelligent automation requires financial rigour
Greg Holmes, Field CTO for EMEA at Apptio, an IBM company, argues that successfully scaling intelligent automation requires financial rigour.
The “build it and they will come” model of technology adoption often leaves a hole in the budget when applied to automation. Executives frequently find that successful pilot programmes do not translate into sustainable enterprise-wide deployments because initial financial modelling ignored the realities of production scaling.
“When we integrate FinOps capabilities with automation, we’re looking at a change from being very reactive on cost management to being very proactive around value engineering,” says Holmes.
This shifts the assessment criteria for technical leaders. Rather than waiting “months or years to assess whether things are getting value,” engineering teams can track resource consumption – such as cost per transaction or API call – “straight from the beginning.”
The unit economics of scaling intelligent automation
Innovation projects face a high mortality rate. Holmes notes that around 80 percent of new innovation projects fail, often because financial opacity during the pilot phase masks future liabilities.
“If a pilot demonstrates that automating a process saves, say, 100 hours a month, leadership thinks that’s really successful,” says Holmes. “But what it fails to track is that the pilot sometimes is running on over-provisioned infrastructure, so it looks like it performs really well. But you wouldn’t over-provision to that degree during a real production rollout.”
Moving that workload to production changes the calculus. The requirements for compute, storage, and data transfer increase. “API calls can multiply, exceptions and edge cases appear at volume that might have been out of scope for the pilot phase, and then support overheads just grow as well,” he adds.
To prevent this, organisations must track the marginal cost at scale. This involves monitoring unit economics, such as the cost per customer served or cost per transaction. If the cost per customer increases as the customer base grows, the business model is flawed.
Conversely, effective scaling should see these unit costs decrease. Holmes cites a case study from Liberty Mutual where the insurer was able to find around $2.5 million of savings by bringing in consumption metrics and “not just looking at labour hours that they were saving.”
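As a rough illustration of that unit-economics check, the sketch below tracks cost per transaction across scaling stages and flags when the marginal cost moves the wrong way; all volumes and costs are hypothetical, not Apptio or Liberty Mutual figures.

```python
# Toy unit-economics check: cost per transaction should fall as volume grows.
stages = [
    # (stage, monthly_transactions, monthly_cost_usd): hypothetical figures
    ("pilot", 10_000, 4_000),
    ("rollout", 100_000, 25_000),
    ("scale", 1_000_000, 180_000),
]

previous = None
for stage, transactions, cost in stages:
    unit_cost = cost / transactions
    trend = ""
    if previous is not None:
        trend = "improving" if unit_cost < previous else "WARNING: rising unit cost"
    print(f"{stage:>8}: ${unit_cost:.4f}/transaction {trend}")
    previous = unit_cost
```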
However, financial accountability cannot sit solely with the finance department. Holmes advocates for putting governance “back in the hands of the developers into their development tools and workloads.”
Integration with infrastructure-as-code tools like HashiCorp Terraform and GitHub allows organisations to enforce policies during deployment. Teams can spin up resources programmatically with immediate cost estimates.
“Rather than deploying things and then fixing them up, which gets into the whole whack-a-mole kind of problem,” Holmes explains, companies can verify they are “deploying the right things at the right time.”
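A minimal sketch of what such a pre-deployment gate might look like follows, assuming a hypothetical plan file in which each planned resource already carries a monthly cost estimate produced upstream by whatever estimator a team uses; this is not a real Apptio, Terraform, or GitHub API.

```python
# Hypothetical pre-deployment cost gate for a CI pipeline.
import json
import sys

MONTHLY_BUDGET_USD = 5_000  # hypothetical team-level policy

def check_plan(plan_path: str) -> int:
    """Fail the pipeline if the planned resources exceed the budget."""
    with open(plan_path) as f:
        plan = json.load(f)

    # Assumed (invented) file shape:
    # {"resources": [{"address": "...", "estimated_monthly_usd": 123.0}, ...]}
    total = sum(r["estimated_monthly_usd"] for r in plan["resources"])

    if total > MONTHLY_BUDGET_USD:
        print(f"BLOCKED: estimated ${total:,.0f}/month exceeds ${MONTHLY_BUDGET_USD:,}/month budget")
        return 1
    print(f"OK: estimated ${total:,.0f}/month is within budget")
    return 0

if __name__ == "__main__":
    sys.exit(check_plan(sys.argv[1]))
```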
When scaling intelligent automation, tension often simmers between the CFO, who focuses on return on investment, and the Head of Automation, who tracks operational metrics like hours saved.
“This translation challenge is precisely what TBM (Technology Business Management) and Apptio are designed to solve,” says Holmes. “It’s having a common language between technology and finance and with the business.”
The TBM taxonomy provides a standardised framework to reconcile these views. It maps technical resources (such as compute, storage, and labour) into IT towers and further up to business capabilities. This structure translates technical inputs into business outputs.
“I don’t necessarily know what goes into all the IT layers underneath it,” Holmes says, describing the business user’s perspective. “But because we’ve got this taxonomy, I can get a detailed bill that tells me about my service consumption and precisely which costs are driving it to be more expensive as I consume more.”
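A toy version of that roll-up, with invented resources, mappings, and figures, might look like this:

```python
# Simplified TBM-style cost roll-up: resource -> IT tower -> business capability.
from collections import defaultdict

# (resource, monthly_cost_usd, it_tower, business_capability): all invented
cost_lines = [
    ("vm-fleet-a", 12_000, "Compute", "Claims processing"),
    ("object-store", 3_500, "Storage", "Claims processing"),
    ("rpa-licences", 8_000, "Application", "Customer onboarding"),
    ("support-labour", 6_000, "Service desk", "Customer onboarding"),
]

by_tower = defaultdict(float)        # the IT view of spend
by_capability = defaultdict(float)   # the "bill" a business user sees
for _, cost, tower, capability in cost_lines:
    by_tower[tower] += cost
    by_capability[capability] += cost

print(dict(by_tower))
print(dict(by_capability))
```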
Addressing legacy debt and budgeting for the long term
Organisations burdened by legacy ERP systems face a binary choice: automation as a patch, or as a bridge to modernisation. Holmes warns that if a company is “just trying to mask inefficient processes and not redesign them,” they are merely “building up more technical debt.”
A total cost of ownership (TCO) approach helps determine the correct strategy. The Commonwealth Bank of Australia utilised a TCO model across 2,000 different applications – of various maturity stages – to assess their full lifecycle costs. This analysis included hidden costs such as infrastructure, labour, and the engineering time required to keep automation running.
“Just because something’s legacy doesn’t mean you have to retire it,” says Holmes. “Some of those legacy systems are worth maintaining just because the value is so good.”
In other cases, calculating the cost of the automation wrappers required to keep an old system functional reveals a different reality. “Sometimes when you add up the TCO approach, and you’re including all these automation layers around it, you suddenly realise, the real cost of keeping that old system alive is not just the old system, it’s those extra layers,” Holmes argues.
Avoiding sticker shock requires a budgeting strategy that balances variable costs with long-term commitments. While variable costs (OPEX) offer flexibility, they can fluctuate wildly based on demand and engineering efficiency.
Holmes advises that longer-term visibility enables better investment decisions. Committing to specific technologies or platforms over a multi-year horizon allows organisations to negotiate economies of scale and standardise architecture.
“Because you’ve made those longer term commitments and you’ve standardised on different platforms and things like that, it makes it easier to build the right thing out for the long term,” Holmes says.
Combining tight management of variable costs with strategic commitments supports enterprises in scaling intelligent automation without the volatility that often derails transformation.
IBM is a key sponsor of this year’s Intelligent Automation Conference Global in London on 4-5 February 2026. Greg Holmes and other experts will be sharing their insights during the event. Be sure to check out the day one panel session, Scaling Intelligent Automation Successfully: Frameworks, Risks, and Real-World Lessons, to hear more from Holmes and swing by IBM’s booth at stand #362.
See also: Klarna backs Google UCP to power AI agent payments

FedEx tests how far AI can go in tracking and returns management
FedEx is using AI to change how package tracking and returns work for large enterprise shippers. For companies moving high volumes of goods, tracking no longer ends when a package leaves the warehouse. Customers expect real-time updates, flexible delivery options, and returns that do not turn into support tickets or delays.
That pressure is pushing logistics firms to rethink how tracking and returns operate at scale, especially across complex supply chains.
This is where artificial intelligence is starting to move from pilot projects into daily operations.
FedEx plans to roll out AI-powered tracking and returns tools designed for enterprise shippers, according to a report by PYMNTS. The tools are aimed at automating routine customer service tasks, improving visibility into shipments, and reducing friction when packages need to be rerouted or sent back.
Rather than focusing on consumer-facing chatbots, the effort centres on operational workflows that sit behind the scenes. These are the systems enterprise customers rely on to manage exceptions, returns, and delivery changes without manual intervention.
How FedEx is applying AI to package tracking
Traditional tracking systems tell customers where a package is and when it might arrive. AI-powered tracking goes a step further, using historical delivery data, traffic patterns, weather conditions, and network constraints to flag potential delays before they happen.
According to the PYMNTS report, FedEx’s AI tools are designed to help enterprise shippers anticipate issues earlier in the delivery process. Instead of reacting to missed delivery windows, shippers may be able to reroute packages or notify customers ahead of time.
For businesses that ship thousands of parcels per day, that shift matters. Small improvements in prediction accuracy can reduce support calls, lower refund rates, and improve customer trust, particularly in retail, healthcare, and manufacturing supply chains.
This approach also reflects a broader trend in enterprise software, in which AI is being embedded into existing systems rather than introduced as standalone tools. The goal is not to replace logistics teams, but to minimise the number of manual decisions they need to make.
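For illustration only, here is a minimal sketch of the kind of delay-risk model described above, trained on synthetic data with invented features; it is not FedEx’s system and says nothing about how its tools are actually built.

```python
# Toy delay-risk classifier on synthetic shipment data (features invented).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Hypothetical features per shipment:
# [distance_km, hub_congestion_0_to_1, storm_risk_0_to_1, lane_delay_rate]
X = rng.random((1_000, 4)) * np.array([2_000.0, 1.0, 1.0, 0.5])
# Synthetic label: delayed when congestion plus storm risk runs high
y = ((X[:, 1] + X[:, 2]) > 1.2).astype(int)

model = GradientBoostingClassifier().fit(X, y)

shipment = np.array([[850.0, 0.9, 0.7, 0.3]])  # one at-risk shipment
risk = model.predict_proba(shipment)[0, 1]
if risk > 0.5:
    print(f"Flag for proactive rerouting (delay risk {risk:.0%})")
```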
Returns as an operational problem, not a customer issue
Returns are one of the most expensive parts of logistics. For enterprise shippers, particularly those in e-commerce, returns affect warehouse capacity, inventory planning, and transportation costs.
According to PYMNTS, FedEx’s AI-enabled returns tools aim to automate parts of the returns process, including label generation, routing decisions, and status updates. Companies that use AI to determine the most efficient return path may be able to reduce delays and avoid routing items to the wrong facility.
This is less about convenience and more about operational discipline. Returns that sit idle or move through the wrong channel create cost and uncertainty across the supply chain. AI systems trained on past return patterns can help standardise decisions that were previously handled case by case.
For enterprise customers, this type of automation supports scale. As return volumes fluctuate, especially during peak seasons, systems that adjust automatically reduce the need for temporary staffing or manual overrides.
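As a toy illustration of that kind of routing decision, the sketch below picks the cheapest eligible facility with spare capacity; the facilities, costs, and rule are hypothetical, not FedEx’s actual logic.

```python
# Toy return-routing rule: cheapest eligible facility with spare capacity.
from dataclasses import dataclass

@dataclass
class Facility:
    name: str
    shipping_cost_usd: float
    handles: set
    spare_capacity: int

facilities = [
    Facility("Memphis hub", 6.10, {"electronics", "apparel"}, 120),
    Facility("Dallas returns center", 4.80, {"apparel"}, 0),   # currently full
    Facility("Atlanta returns center", 5.20, {"apparel"}, 45),
]

def best_return_path(category: str) -> Facility:
    eligible = [f for f in facilities
                if category in f.handles and f.spare_capacity > 0]
    if not eligible:
        raise ValueError(f"No facility can accept a {category} return")
    return min(eligible, key=lambda f: f.shipping_cost_usd)

print(best_return_path("apparel").name)  # -> Atlanta returns center
```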
What FedEx’s AI tracking approach says about enterprise adoption
What stands out in FedEx’s approach is how narrowly focused the AI use case is. There are no broad claims about transformation or reinvention. The emphasis is on reducing friction in processes that already exist.
This mirrors how other large organisations are adopting AI internally. In a separate context, Microsoft has described a similar pattern, outlining how its AI tools were rolled out gradually, with clear limits, governance rules, and feedback loops.
While Microsoft’s case focused on knowledge work and FedEx’s on logistics operations, the underlying lesson is the same. AI adoption tends to work best when applied to specific activities with measurable results rather than broad promises of efficiency.
For logistics firms, those advantages include fewer delivery exceptions, lower return handling costs, and better coordination between shipping partners and enterprise clients.
What this signals for enterprise customers
For end-user companies, FedEx’s move signals that logistics providers are investing in AI as a way to support more complex shipping demands. As supply chains become more distributed, visibility and predictability become harder to maintain without automation.
AI-driven tracking and returns could also change how businesses measure logistics performance. Companies may focus less on delivery speed and more on how quickly issues are recognised and resolved.
That shift could influence procurement decisions, contract structures, and service-level agreements. Enterprise customers may start asking not just where a shipment is, but how well a provider anticipates problems.
FedEx’s plans reflect a quieter phase of enterprise AI adoption. The focus is less on experimentation and more on integration. These systems are not designed to draw attention but to reduce noise in operations that customers only notice when something goes wrong.
See also: PepsiCo is using AI to rethink how factories are designed and updated