At an AI and fossil fuel lovefest in Pittsburgh, Pennsylvania last week, President Donald Trump — flanked by cabinet members and executives from major tech and energy giants like Google and ExxonMobil — said that “the most important man of the day” was Environmental Protection Agency head Lee Zeldin. “He’s gonna get you a permit for the largest electric producing plant in the world in about a week, would you say?” Trump said to chuckles in the audience. Later that week, the Trump administration exempted coal-fired power plants, facilities that make chemicals for semiconductor manufacturing, and certain other industrial sites from Biden-era air pollution regulations.
Artificial Intelligence
How Trump’s war on clean energy is making AI a bigger polluter
If Trump has his way, the next generation of data centers will run dirtier than the last. It isn’t enough to kill renewables and pave the way for more coal and gas plants to power energy-hungry AI data centers. Trump is also obsessed with tossing out environmental protections.
“It costs much more to do things environmentally clean,” Trump claimed in an interview with Joe Rogan in October 2024. Upon his appointment to head the EPA (or, rather, run it into the ground), Zeldin said that he would be focused on “unleash[ing] US energy dominance” and “mak[ing] America the AI capital of the world.” The EPA announced thousands of layoffs on July 18th, gutting its research and development arm.
“It costs much more to do things environmentally clean.”
At the Pennsylvania Energy and Innovation Summit, Trump attempted to take credit for private investments totaling around $36 billion for data center projects and $56 billion for new energy infrastructure. The ceremony itself was mostly pomp and circumstance, but it’s telling that the Trump administration says it wants to make Pennsylvania a new hub for AI data centers. It’s a swing state that Republicans are eager to move into their column, but it’s also a major coal and gas producer. Pennsylvania sits atop a major gas reserve, and fracking there (as well as in Texas) helped usher in the “shale revolution” of the 2000s that made the US the world’s leading gas producer.
That was supposed to start changing under former President Joe Biden’s direction. He set a goal for the US to get all its electricity from carbon pollution-free sources by 2035. And in 2022, he signed the Inflation Reduction Act, which was full of tax incentives to make it cheaper to build out new solar and wind farms, as well as other carbon-free energy sources. If it had stayed intact, the law was expected to reduce US greenhouse gas emissions by around 40 percent this decade.
The law came at a crucial time for tech companies, which were expanding data centers as the AI arms race picked up steam. Electricity demand in the US is rising for the first time in more than a decade, thanks in large part to energy-hungry data centers. Google, Amazon, Microsoft, Meta, and other tech giants all have their own climate goals, pledging to shrink their carbon footprints by supporting renewable energy projects.
But Trump is making it harder to build those projects in the US. Republicans voted to wind down Biden-era tax incentives for solar and wind energy in the big spending bill they passed this month. The bill will likely decrease electricity generation capacity in 2035 by 340 GW, according to one analysis, with the vast majority of losses coming from solar and wind farms that will no longer get built.
All these new data centers still need to get their electricity from somewhere. “They won’t be powered by wind,” Trump said during the summit, repeating misleading talking points about renewable energy that have become a cornerstone of new climate denial. He signed an executive order in April, directing the Commerce, Energy, and Interior Departments to study “where coal-powered infrastructure is available and suitable for supporting AI data centers.” Trump, backed by fossil fuel donors, campaigned on a promise to “drill, baby, drill” — a slogan that he doubled down on again at the event. He also referenced the Homer City Generating Station, an old coal plant that’s reopening as a gas plant that will power a new data center.
The deals announced at the summit include Enbridge investing $1 billion to expand its gas pipelines into Pennsylvania and Equinor spending $1.6 billion to “boost natural gas production at Equinor’s Pennsylvania facilities and explore opportunities to link gas to flexible power generation for data centers.”
“They won’t be powered by wind.”
Data centers are a “main driver” for a boom in new gas pipelines and power plants in the Southeast, according to a January report from the Institute for Energy Economics and Financial Analysis (IEEFA). The Southeast is home to “data center alley,” a hub in Virginia through which around 70 percent of the world’s internet traffic flows. Even if AI models become more efficient over time, the amount of electricity they’re currently projected to demand could lock communities across the US into prolonged reliance on fossil fuels as utilities build out new gas infrastructure.
Zeldin’s job now is essentially to remove any regulatory hurdles that might slow down that growth. From his first day in office, “it was clear that EPA would have a major hand in permitting reform to cut down barriers that have acted as a roadblock so we can bolster the growth of AI,” as Zeldin wrote in a Fox News op-ed last week. “A company looking to build an industrial facility or a power plant should be able to build what it can before obtaining an emissions permit,” he added. And after moving to roll back pollution regulations for power plants, the Trump administration is now reportedly working on a rule that would undo the 2009 “endangerment finding” that allows the EPA to regulate greenhouse gas emissions under the Clean Air Act.
Zeldin also writes that when it comes to Clean Air Act permits for polluters it considers “minor emitters,” the EPA will only meet “minimum requirements for public participation.” An AI Action Plan that the White House dropped on July 23rd proposes creating new categorical exclusions for data center-related projects from the National Environmental Policy Act (NEPA), a sunshine law that mandates input from local communities on major federal projects. The plan directs agencies to identify federal lands for the “large-scale development” of data centers and power generation.
There are other factors at play that could derail Trump’s fossil-fueled agenda, including a backlog for gas turbines in high demand. Solar and wind farms are still generally faster to build and a more affordable source of new electricity than coal or gas, and we could see some developers rush to complete projects before Biden-era tax credits fully disappear. One early bright spot for renewables was the fact that data centers used to train AI are theoretically easier to build close to far-flung wind and solar projects. Unlike other data centers, they don’t need to be built near population centers to reduce latency. They could also theoretically time their operations to match the ebb and flow of electricity generation when the sun shines and winds blow.
But so far, things are shaping up differently in the real world. “It’s just a race to get connected as quickly as possible,” says Nathalie Limandibhratha, a senior associate covering US power at BloombergNEF.
Data center developers are also concerned that if they build facilities specifically to train AI closer to renewable energy, they could be left with stranded assets down the road. They’d rather keep building data centers close to population centers where they can repurpose the facility for other uses if needed. They also get more bang for their buck running 24/7, so data centers are leaning toward around-the-clock electricity generation from gas and nuclear energy (and nuclear energy has more bipartisan support than other sources of carbon-free energy).
“There’s no question right now that AI is driving greater fossil fuel use in the United States and really setting us back in terms of climate change,” says Cathy Kunkel, an energy consultant at IEEFA. Tech giants Google and Amazon made announcements coinciding with the Pennsylvania summit committing to purchasing hydropower and nuclear energy, respectively. But their most recent sustainability reports show that their greenhouse gas pollution is still growing, taking them further away from their climate goals of reaching net zero emissions.
“If [tech companies] wanted to meet their sustainability goals, they could do so,” Kunkel says. “They’re getting a free pass, obviously, from the Trump administration.”
Artificial Intelligence
Poor implementation of AI may be behind workforce reduction
Many organisations are eroding the foundations of business – productivity, competitiveness, and efficiency. This is happening due to poor implementation of human-AI collaboration, according to cloud data and AI consultancy Datatonic. The company says in the next phase of enterprise AI, success will come from carefully governed and designed AI that works alongside humans in “human-in-the-loop” (HiTL) systems.
The company’s research shows that companies that fail to embed AI into their human workflows are falling behind the competition as productivity slows down. Datatonic says a hybrid human-AI approach speeds up decision-making, thus improving overall operations. Scott Eivers, CEO of Datatonic, says: “AI [is] about redesigning how work gets done. The biggest risk we see in the market is productivity leakage when AI exists in isolation from the people who actually run the business.”
After years of AI investment, pressure is mounting on businesses to show returns. However, some research shows initiatives stalling in their pilot stage due to limited trust among users. As a result, organisations are failing to use AI-powered insights to positively affect decisions and workflows, meaning efficiency gains never materialise.
According to Datatonic, HiTL models are crucial for future success, providing a combination of AI speed with human judgement and accountability. This is evident in agent-assisted software development, where AI systems transform loose prompts into working code. In this case, human teams decide what needs to be developed, inspect all requirements, and review plans before anything is built. Once this direction is clear, AI agents construct modular components.
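The approval checkpoint at the heart of a HiTL workflow can be sketched in a few lines. This is a minimal illustration, not any vendor's actual system: the agent, reviewer, and plan structure are all hypothetical, standing in for "AI proposes, human approves, only then does anything execute."

```python
# Minimal sketch of a human-in-the-loop (HiTL) checkpoint: an AI agent
# proposes a plan, but nothing executes until a human reviewer approves it.
# The agent, reviewer, and plan structure here are all hypothetical.

from dataclasses import dataclass, field

@dataclass
class Plan:
    goal: str
    steps: list[str]
    approved: bool = False

def agent_propose(goal: str) -> Plan:
    # Stand-in for an AI agent drafting modular work items from a loose prompt.
    return Plan(goal=goal, steps=[f"draft component for: {goal}", "write tests"])

def human_review(plan: Plan, approve: bool) -> Plan:
    # The human gate: inspect requirements and either approve or reject.
    plan.approved = approve
    return plan

def execute(plan: Plan) -> str:
    if not plan.approved:
        return "blocked: awaiting human approval"
    return f"executing {len(plan.steps)} steps for '{plan.goal}'"

plan = agent_propose("invoice ingestion module")
print(execute(plan))                      # blocked until a human signs off
print(execute(human_review(plan, True)))  # runs once approved
```

The point of the design is that the execution path is unreachable without an explicit human decision, which is what separates governed delegation from unattended autonomy.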
The trend for AI in the workplace is starting to appear in finance and operations. For instance, in back-office and finance departments, AI-powered document processing is already delivering a 70% reduction in invoice-processing costs in some deployments, but finance teams still approve the final outcomes.
“They’re partnership stories,” says Andrew Harding, CTO of Datatonic. “Humans create evaluation systems, validate plans, set guardrails, and make decisions. AI executes at speed and scale. That combination is where real enterprise value shows up.”
Many enterprises are failing to deploy fully autonomous agents safely, according to Datatonic, with shortfalls in security controls and governance frameworks. Autonomy can only scale when organisations introduce approval checkpoints and benchmark performance standards. Evaluation systems must also be implemented as AI models evolve, ensuring they always operate safely and as intended without violating any compliance obligations.
Harding says, “As trust builds, companies can responsibly delegate more to AI. But skipping governance doesn’t build speed, it creates risk.”
Datatonic predicts major acceleration in workloads in the next two years, with preparation and validation handled by AI agents. AI systems may also be implemented to test and invalidate decisions before teams invest resources.
Scott Eivers believes the future “looks like expert departments run by smaller, nimble teams – finance, HR, marketing – each amplified by AI. The companies that win will be those that teach people to work with AI — not around it.”
(Image source: “Waterfall” by PMillera4 is licensed under CC BY-NC-ND 2.0.)
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.
AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.
Artificial Intelligence
Upgrading agentic AI for finance workflows
Improving trust in agentic AI for finance workflows remains a major priority for technology leaders today.
Over the past two years, enterprises have rushed to put automated agents into real workflows, spanning customer support and back-office operations. These tools excel at retrieving information, yet they often struggle to provide consistent and explainable reasoning during multi-step scenarios.
Solving the automation opacity problem
Financial institutions especially rely on massive volumes of unstructured data to inform investment memos, conduct root-cause investigations, and run compliance checks. When agents handle these tasks, any failure to trace exact logic can lead to severe regulatory fines or poor asset allocation. Technology executives often find that adding more agents creates more complexity than value without better orchestration.
Open-source AI laboratory Sentient launched Arena today, a live, production-grade stress-testing environment that lets developers evaluate competing computational approaches against demanding cognitive problems.
Sentient’s system replicates the reality of corporate workflows, deliberately feeding agents incomplete information, ambiguous instructions, and conflicting sources. Instead of scoring whether a tool generated a correct output, the platform records the full reasoning trace to help engineering teams debug failures over time.
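The trace-first evaluation idea described above can be illustrated with a small sketch. This is not Arena's actual API — the class and method names are invented — but it shows the principle: every intermediate step is recorded as it happens, so the artefact of a run is the full reasoning trace rather than just the final answer.

```python
# Sketch of trace-first evaluation: instead of scoring only the final
# output, record every intermediate action/observation pair so failures
# can be debugged afterwards. All names here are illustrative, not Arena's.

import json

class TracedAgent:
    def __init__(self):
        self.trace = []

    def step(self, action: str, observation: str) -> str:
        # Append each reasoning step to the trace as it happens.
        self.trace.append({"action": action, "observation": observation})
        return observation

    def run(self, task: str) -> str:
        self.step("read_task", task)
        self.step("retrieve", "found 2 conflicting sources")
        # With conflicting inputs, a well-behaved agent defers to a human.
        return self.step("answer", "flag conflict for human review")

agent = TracedAgent()
result = agent.run("summarise exposure from incomplete filings")
# The full trace, not just `result`, is what an engineer inspects on failure.
print(json.dumps(agent.trace, indent=2))
```

Because the trace is ordinary structured data, it can be diffed across model versions — which is what makes reliability improvements trackable over time.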
Building reliable agentic AI systems for finance
Evaluating these capabilities before production deployment has attracted no shortage of institutional interest. Sentient has partnered with a cohort including Founders Fund, Pantera, and asset management giant Franklin Templeton, which oversees more than $1.5 trillion. Other participants in the initial phase include alphaXiv, Fireworks, Openhands, and OpenRouter.
Julian Love, Managing Principal at Franklin Templeton Digital Assets, said: “As companies look to apply AI agents across research, operations, and client-facing workflows, the question is no longer whether these systems are powerful or if they can generate an answer, but whether they’re reliable in real workflows.
“A sandbox environment like Arena – where agents are tested on real, complex workflows, and their reasoning can be inspected – will help the ecosystem separate promising ideas from production-ready capabilities and boost confidence in how this technology is integrated and scaled.”
Himanshu Tyagi, Co-Founder of Sentient, added: “AI agents are no longer an experiment inside the enterprise; they’re being put into workflows that touch customers, money, and operational outcomes.
“That shift changes what matters. It’s not enough for a system to be impressive in a demo. Enterprises need to know whether agents can reason reliably in production, where failures are expensive, and trust is fragile.”
Organisations in sensitive industries like finance require repeatability, comparability, and a method to track reliability improvements regardless of the underlying models they use for agentic AI. Incorporating platforms like Arena allows engineering directors to build resilient data pipelines while adapting open-source agent capabilities to their private internal data.
Overcoming integration bottlenecks
Survey data highlights a gap between ambition and reality. While 85 percent of businesses want to operate as agentic enterprises – and nearly three-quarters plan to deploy autonomous agents – fewer than a quarter possess mature governance frameworks.
Advancing from a pilot phase to full scale proves difficult for many. This happens because current corporate environments run an average of twelve separate agents, frequently in silos.
Open-source development models offer a path forward by providing infrastructure that enables faster experimentation. Sentient itself acts as the architect behind frameworks like ROMA and the Dobby open-source model to assist with these coordination efforts.
Focusing on computational transparency ensures that when an automated process makes a recommendation on a portfolio, human auditors can track exactly how that conclusion was reached.
By prioritising environments that record full logic traces rather than isolated right answers, technology leaders integrating agentic AI for operations like finance can secure better ROI and maintain regulatory compliance across their business.
See also: Goldman Sachs and Deutsche Bank test agentic AI for trade surveillance
Artificial Intelligence
Goldman Sachs and Deutsche Bank test agentic AI for trade surveillance
Banks are testing a new type of artificial intelligence, known as agentic AI, that does more than scan for keywords or follow preset rules. Instead of relying only on static alerts, some trading desks are beginning to use systems designed to reason through patterns in real time and flag conduct that may need human review.
Bloomberg detailed how Goldman Sachs and Deutsche Bank are exploring or deploying so-called “agentic” AI tools for trading surveillance. The goal is to strengthen oversight of orders and trades by using software agents that can analyse activity as it happens and identify patterns that could suggest misconduct.
Adaptive agents
Large banks use automated surveillance systems to monitor trading activity, systems that often rely on predefined rules: if a trade exceeds a certain size, deviates from a benchmark, or fits a known risk pattern, it triggers an alert. Compliance teams then review the case manually.
The challenge is scale and complexity. Modern markets generate huge volumes of data across asset classes, time zones, and trading venues. Static rules can generate large numbers of false positives, while more subtle forms of manipulation may not match known patterns.
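The static, rule-based surveillance described above amounts to fixed threshold checks. The sketch below is purely illustrative — field names and thresholds are invented — but it shows both the mechanism and why it produces noise: a large but legitimate trade trips a rule just as readily as a suspicious one.

```python
# Minimal sketch of rule-based trade surveillance: each trade is checked
# against fixed thresholds, and any match fires an alert for manual review.
# Field names and thresholds are illustrative, not any bank's real rules.

MAX_NOTIONAL = 1_000_000      # alert if trade size exceeds this
MAX_BENCHMARK_DEV = 0.05      # alert if price deviates >5% from benchmark

def rule_based_alerts(trade: dict) -> list[str]:
    alerts = []
    if trade["notional"] > MAX_NOTIONAL:
        alerts.append("size-limit")
    deviation = abs(trade["price"] - trade["benchmark"]) / trade["benchmark"]
    if deviation > MAX_BENCHMARK_DEV:
        alerts.append("benchmark-deviation")
    return alerts

# A large but perfectly legitimate block trade still trips the size rule —
# the kind of false positive that swamps compliance teams at scale.
trade = {"notional": 2_500_000, "price": 101.0, "benchmark": 100.0}
print(rule_based_alerts(trade))  # ['size-limit']
```

Every rule is evaluated against a single trade in isolation, which is exactly the limitation the agentic systems in the next paragraphs are meant to address.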
According to Bloomberg, the newer agentic systems aim to go beyond that approach. Rather than simply matching trades against a checklist, the AI agents are designed to examine trading behaviour across multiple signals, compare it with historical activity, and detect unusual combinations of actions.
The tools are not described as replacing compliance officers. Instead, they appear to function as an additional layer of monitoring, surfacing cases that warrant closer human inspection.
Deutsche Bank’s work with Google Cloud
Bloomberg reported that Deutsche Bank is working with Google Cloud on developing AI agents that can monitor trading activity. The system is designed to review large sets of order and execution data and flag anomalies in near real time.
The bank has been expanding its AI initiatives over the past few years, and this surveillance effort reflects how financial institutions are applying generative and large language model technology beyond chat interfaces. In this context, the AI is not answering customer questions but analysing structured and unstructured data streams tied to trading behaviour. The AI agents can help identify “complex anomalies” in orders and trades. That suggests the system may look at relationships between trades, timing, market conditions, and trader history, not single events in isolation.
Human compliance staff remain responsible for reviewing flagged cases and determining whether further action is required.
Goldman Sachs’ agentic AI strategy
Goldman Sachs is also exploring the use of agentic AI for surveillance, according to Bloomberg. The bank has invested heavily in AI in its trading and risk systems in recent years, and this effort appears to extend that work into compliance.
The focus, as described in the report, is on using AI agents that can operate with a degree of independence in scanning for misconduct indicators. The system may identify patterns that do not fit a clear rule but still stand out as unusual.
For regulators, the appeal is straightforward: earlier detection can reduce market harm and reputational risk. For banks, there is also an operational dimension. Compliance departments face pressure to handle large volumes of alerts while maintaining strict oversight standards. Tools that can reduce noise without lowering scrutiny are likely to attract attention.
Why “agentic AI” matters
The term “agentic AI” refers to systems that can take goal-directed actions rather than merely respond to prompts. In practice, that can mean the software is able to decide what data to examine next, compare multiple signals, and escalate findings without constant human input. In a trading context, that might involve monitoring order flows, price movements, communications metadata, and historical behaviour to assess whether activity aligns with normal patterns.
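The goal-directed loop described above can be sketched as follows. Everything here is invented for illustration — the signals, scores, and escalation threshold — but the shape is the point: the agent chooses its own examination order, accumulates evidence across signals, and escalates to a human only when the combination looks anomalous.

```python
# Hedged sketch of a goal-directed surveillance loop: the agent picks which
# signal to examine next, accumulates evidence, and escalates only when the
# combined signals look anomalous. Signals, scores, and the threshold are
# all invented for illustration.

SIGNALS = {
    "order_flow": 0.4,        # per-signal anomaly scores, 0..1
    "price_moves": 0.7,
    "comms_metadata": 0.6,
}
ESCALATE_AT = 1.5             # combined-evidence threshold

def agentic_scan(signals: dict[str, float]) -> dict:
    evidence, examined = 0.0, []
    # Examine the most anomalous signals first — the agent chooses its order.
    for name, score in sorted(signals.items(), key=lambda kv: -kv[1]):
        examined.append(name)
        evidence += score
        if evidence >= ESCALATE_AT:
            # Escalate to a human reviewer; the agent itself never takes
            # disciplinary action, matching how the article describes it.
            return {"escalate": True, "examined": examined}
    return {"escalate": False, "examined": examined}

print(agentic_scan(SIGNALS))
```

Unlike the single-trade rule check, no individual signal here crosses a threshold on its own — it is the combination that triggers escalation, which is the qualitative difference the article is describing.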
This does not mean the system makes disciplinary decisions on its own. Financial institutions operate under strict regulatory regimes, and accountability remains with human supervisors. The agent’s role is to identify and organise information more effectively than static systems can.
Part of a wider compliance shift
What appears new is the application of more advanced generative AI architectures to internal control functions.
Regulators in the US and Europe have encouraged firms to improve the monitoring of market abuse and manipulation. While rules do not mandate agentic AI, they do require firms to maintain effective systems and controls. If AI tools can help meet that standard, adoption is likely to grow.
At the same time, AI in compliance raises its own questions. Banks must ensure that models are explainable, that they do not introduce bias, and that they can withstand regulatory review. Model governance, data security, and audit trails remain central concerns.
What changes for the industry
If agentic surveillance tools prove effective, they could alter how compliance teams work. Instead of sorting through large volumes of simple alerts, staff may spend more time evaluating complex cases surfaced by AI agents.
That change would not remove the need for human judgement. It may, however, change where human effort is focused. In markets where speed and data volume continue to rise, the ability to analyse patterns in real time is becoming harder to achieve with rule-based systems alone.
(Photo by Markus Spiske)
See also: Mastercard’s AI payment demo points to agent-led commerce
