How Trump let Boeing off the hook for the 737 MAX crashes


On July 18th, a federal judge in Texas scheduled what will likely be the final hearing in the case of United States v. The Boeing Company. After five years of litigation, the end result can only be described as a victory for Boeing — and a permanent setback for those who hoped that the company would be held accountable for a decade of safety violations.

Last year, Boeing’s prospects looked far bleaker. In 2021, the Department of Justice charged the company with conspiracy to defraud the government about the Maneuvering Characteristics Augmentation System (MCAS) software on the 737 MAX, which has been linked to the deaths of 346 people in the crashes of Lion Air 610 and Ethiopian Airlines 302. (The Verge first covered this story in 2019.)

After years of legal maneuvering, the company agreed to plead guilty to the conspiracy charge in July 2024 in order to avoid a criminal trial. Under the plea bargain’s terms, Boeing would pay nearly $2.5 billion to airlines, families of crash victims, and the government, plus accept three years of monitoring from an independent safety consultant. That agreement was thrown out by a federal judge in December, and a trial date was set for June 2025.

If convicted, Boeing would not be able to simply pay its way out of trouble. As a corporate felon, the company would have to permanently accept increased government scrutiny over every part of its business — a return to a regulatory model that Congress repealed in 2005, after significant lobbying by the aviation and defense industries. According to one legal think tank, United States v. Boeing had the potential to be one of the most significant corporate compliance judgments in decades.

Photo: Olivier Douliery/AFP via Getty Images

But then Donald Trump returned to the White House. Many of Trump’s strongest political allies have benefited from significant changes in policy under the new administration: the crypto industry, industrial polluters, and Elon Musk, to name a few. Boeing has spent a considerable amount of money building a relationship with Trump, too. It donated $1 million to his inauguration fund, and its CEO accompanied Trump on his recent trip to Qatar.

Its payoff came last May, when the head of the DOJ’s Criminal Division, Matthew Galeotti, announced a change of enforcement strategy. Galeotti directed his division to no longer pursue “overbroad and unchecked corporate and white-collar enforcement [that] burdens U.S. businesses and harms U.S. interests.” Instead, he wanted it to focus on a narrower set of crimes, including terrorism, tariff-dodging, drug trafficking, and “Chinese Money Laundering Organizations.”

“Not all corporate misconduct warrants federal criminal prosecution,” the memo stated. “It is critical to American prosperity to acknowledge …companies that are willing to learn from their mistakes.”


Two weeks later, the DOJ agreed to drop the charges against Boeing completely. Instead of pleading guilty, Boeing would now just be liable for a reduced monetary penalty of around $1.2 billion: $235 million in new fines, plus $445 million into a fund for the families of the 737 MAX crash victims. It would also have to invest $455 million to enhance its “compliance and safety programs,” part of which would pay for an “independent compliance consultant” for two years of oversight. It avoided a felony charge, and more importantly, it was allowed to continue self-auditing its own products.

The DOJ’s rationale for the change was that it expects companies to be “willing to learn from [their] mistakes.” This is not a skill that Boeing seems to possess.

The company makes plenty of mistakes. Its 737 MAX has been plagued by computer errors that go far beyond MCAS. Its strategy of outsourcing production to third-party suppliers has been a consistent source of manufacturing errors and delays for almost a decade. Its lack of investment in quality control in its factories has caused new airplanes to be delivered with a variety of severe defects: excessive gaps in airplane fuselages, metal debris near critical wiring bundles or inside fuel tanks, and door plugs installed without their securing bolts. The latter issue led to the explosive decompression of Alaska Airlines 1282 in January 2024, an incident that went viral thanks to the dramatic passenger video taken from inside the cabin.

But Boeing does not seem to be able to learn from its mistakes. According to the DOJ, Boeing has known all of this and has still “fail[ed] to design, implement, and enforce a compliance and ethics program.” Although the company has brought on two new CEOs in the last six years, each of whom promised to clean things up, Boeing’s core culture still remains — which is the root cause of all of its technical problems.


As I wrote in my book about the 737 MAX crashes, Boeing is so large and so firmly entrenched as one of the world’s two major commercial airplane makers that it is functionally immune from the market’s invisible hand. It is so strategically and economically important that it will always get bailed out, even in the face of a global crisis such as the COVID-19 pandemic. And it makes so much money every year that even the multibillion-dollar fines that the DOJ is willing to impose amount to just a small portion of its annual revenues.

“Boeing became too big to fail,” former FTC chair Lina Khan said in a 2024 speech. “Worse quality is one of the harms that most economists expect from monopolization, because firms that face little competition have limited incentive to improve their products.”

If regulators won’t step in and force Boeing to change, then it will continue to prioritize profits over safety — the only rational choice in a consequence-free environment. This might be a good bargain for its shareholders, but not for passengers.



Poor implementation of AI may be behind workforce reduction


Many organisations are eroding the foundations of business – productivity, competitiveness, and efficiency – through poor implementation of human-AI collaboration, according to cloud data and AI consultancy Datatonic. The company says that in the next phase of enterprise AI, success will come from carefully governed and designed AI that works alongside humans in “human-in-the-loop” (HiTL) systems.

The company’s research shows that companies that fail to embed AI into their human workflows are falling behind the competition as productivity slows. Datatonic says a hybrid human-AI approach speeds up decision-making, improving overall operations. Scott Eivers, CEO of Datatonic, says: “AI [is] about redesigning how work gets done. The biggest risk we see in the market is productivity leakage when AI exists in isolation from the people who actually run the business.”

After years of AI investment, pressure is mounting on businesses to show returns. However, some research shows initiatives stalling in the pilot stage because of limited trust among users. As a result, organisations are failing to use AI-powered insights to improve decisions and workflows, meaning efficiency gains never materialise.

According to Datatonic, HiTL models are crucial for future success, combining AI speed with human judgement and accountability. This is evident in agent-assisted software development, where AI systems transform loose prompts into working code. Human teams decide what needs to be developed, inspect the requirements, and review plans before anything is built. Once the direction is clear, AI agents construct the modular components.
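The workflow described above can be pictured as a simple gate: an agent-drafted plan only becomes components after a human sign-off. Below is a minimal illustrative sketch (all names here, such as `Plan`, `human_review`, and `build_components`, are hypothetical and not Datatonic's API):

```python
from dataclasses import dataclass

@dataclass
class Plan:
    """A development plan drafted by an AI agent from a loose prompt."""
    prompt: str
    steps: list[str]
    approved: bool = False

def human_review(plan: Plan, approve: bool) -> Plan:
    # A human inspects requirements and signs off before anything is built.
    plan.approved = approve
    return plan

def build_components(plan: Plan) -> list[str]:
    # Agents construct modular components only once direction is clear.
    if not plan.approved:
        raise PermissionError("plan rejected: human sign-off required")
    return [f"component:{step}" for step in plan.steps]

plan = Plan(prompt="add invoice export", steps=["parse invoices", "emit CSV"])
plan = human_review(plan, approve=True)
print(build_components(plan))
```

The point of the gate is structural: the agent cannot reach the build step except through the human checkpoint.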

The trend for AI in the workplace is starting to appear in finance and operations. In back-office and finance departments, for instance, AI-powered document processing is already delivering a 70% reduction in invoice-processing costs by some estimates, but finance teams still approve the final outcomes.

“They’re partnership stories,” says Andrew Harding, CTO of Datatonic. “Humans create evaluation systems, validate plans, set guardrails, and make decisions. AI executes at speed and scale. That combination is where real enterprise value shows up.”

Many enterprises are failing to deploy fully autonomous agents safely, according to Datatonic, with shortfalls in security controls and governance frameworks. Autonomy can only scale when organisations introduce approval checkpoints and benchmark performance standards. Evaluation systems must also be implemented as AI models evolve, ensuring they always operate safely and as intended without violating any compliance obligations.
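One way to picture those approval checkpoints and performance benchmarks is a gate that widens an agent's autonomy only as its measured reliability improves. This is a hedged sketch with invented thresholds, not a description of any vendor's governance framework:

```python
from typing import Callable

def benchmark(agent_fn: Callable[[str], str], cases: list[tuple[str, str]]) -> float:
    """Score an agent against labelled cases before widening its autonomy."""
    passed = sum(1 for inp, expected in cases if agent_fn(inp) == expected)
    return passed / len(cases)

def autonomy_level(score: float) -> str:
    # Approval checkpoints: autonomy only scales with demonstrated reliability.
    if score >= 0.95:
        return "auto-approve"
    if score >= 0.80:
        return "human-review"
    return "blocked"

def toy_agent(text: str) -> str:
    return text.upper()

cases = [("ok", "OK"), ("go", "GO"), ("no", "NO")]
score = benchmark(toy_agent, cases)
print(autonomy_level(score))  # all three cases pass, so autonomy is granted
```

Re-running the benchmark as models evolve is what keeps the checkpoint meaningful over time.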

Harding says, “As trust builds, companies can responsibly delegate more to AI. But skipping governance doesn’t build speed, it creates risk.”

Datatonic predicts major acceleration in workloads in the next two years, with preparation and validation handled by AI agents. AI systems may also be implemented to test and invalidate decisions before teams invest resources.

Eivers believes the future “looks like expert departments run by smaller, nimble teams – finance, HR, marketing – each amplified by AI. The companies that win will be those that teach people to work with AI — not around it.”

(Image source: “Waterfall” by PMillera4 is licensed under CC BY-NC-ND 2.0.)

 

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.



Upgrading agentic AI for finance workflows


Improving trust in agentic AI for finance workflows remains a major priority for technology leaders today.

Over the past two years, enterprises have rushed to put automated agents into real workflows, spanning customer support and back-office operations. These tools excel at retrieving information, yet they often struggle to provide consistent and explainable reasoning during multi-step scenarios.

Solving the automation opacity problem

Financial institutions especially rely on massive volumes of unstructured data to inform investment memos, conduct root-cause investigations, and run compliance checks. When agents handle these tasks, any failure to trace their exact logic can lead to severe regulatory fines or poor asset allocation. Technology executives often find that, without better orchestration, adding more agents creates more complexity than value.

Open-source AI laboratory Sentient launched Arena today, which is designed as a live and production-grade stress-testing environment that allows developers to evaluate competing computational approaches against demanding cognitive problems.

Sentient’s system replicates the reality of corporate workflows, deliberately feeding agents incomplete information, ambiguous instructions, and conflicting sources. Instead of scoring whether a tool generated a correct output, the platform records the full reasoning trace to help engineering teams debug failures over time.
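A reasoning trace of this kind can, in miniature, look like wrapping each agent step so its inputs and outputs are logged alongside the final answer. The following is an illustrative sketch under that assumption, not Sentient's actual implementation:

```python
import json
from typing import Callable

def traced(step_name: str, fn: Callable, trace: list) -> Callable:
    """Wrap an agent step so its inputs and outputs land in a full trace."""
    def wrapper(*args):
        result = fn(*args)
        trace.append({"step": step_name, "args": list(args), "result": result})
        return result
    return wrapper

trace: list[dict] = []
# Two toy "agent steps": retrieval and summarisation, both instrumented.
retrieve = traced("retrieve", lambda q: f"docs for {q}", trace)
summarise = traced("summarise", lambda d: d.upper(), trace)

answer = summarise(retrieve("credit risk"))
# Engineers debug the full trace, not just the final answer.
print(json.dumps(trace, indent=2))
```

Because the trace is an ordinary data structure, it can be stored, diffed across model versions, and inspected by auditors after the fact.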

Building reliable agentic AI systems for finance

Evaluating these capabilities before production deployment has attracted no shortage of institutional interest. Sentient has partnered with a cohort including Founders Fund, Pantera, and asset management giant Franklin Templeton, which oversees more than $1.5 trillion. Other participants in the initial phase include alphaXiv, Fireworks, Openhands, and OpenRouter.

Julian Love, Managing Principal at Franklin Templeton Digital Assets, said: “As companies look to apply AI agents across research, operations, and client-facing workflows, the question is no longer whether these systems are powerful or if they can generate an answer, but whether they’re reliable in real workflows.

“A sandbox environment like Arena – where agents are tested on real, complex workflows, and their reasoning can be inspected – will help the ecosystem separate promising ideas from production-ready capabilities and boost confidence in how this technology is integrated and scaled.”

Himanshu Tyagi, Co-Founder of Sentient, added: “AI agents are no longer an experiment inside the enterprise; they’re being put into workflows that touch customers, money, and operational outcomes.

“That shift changes what matters. It’s not enough for a system to be impressive in a demo. Enterprises need to know whether agents can reason reliably in production, where failures are expensive, and trust is fragile.”

Organisations in sensitive industries like finance require repeatability, comparability, and a method to track reliability improvements regardless of the underlying models they use for agentic AI. Incorporating platforms like Arena allows engineering directors to build resilient data pipelines while adapting open-source agent capabilities to their private internal data.

Overcoming integration bottlenecks

Survey data highlights a gap between ambition and reality. While 85 percent of businesses want to operate as agentic enterprises – and nearly three-quarters plan to deploy autonomous agents – fewer than a quarter possess mature governance frameworks.

Advancing from a pilot phase to full scale proves difficult for many. This happens because current corporate environments run an average of twelve separate agents, frequently in silos.

Open-source development models offer a path forward by providing infrastructure that enables faster experimentation. Sentient itself acts as the architect behind frameworks like ROMA and the Dobby open-source model to assist with these coordination efforts.

Focusing on computational transparency ensures that when an automated process makes a recommendation on a portfolio, human auditors can track exactly how that conclusion was reached. 

By prioritising environments that record full logic traces rather than isolated right answers, technology leaders integrating agentic AI for operations like finance can secure better ROI and maintain regulatory compliance across their business.

See also: Goldman Sachs and Deutsche Bank test agentic AI for trade surveillance




Goldman Sachs and Deutsche Bank test agentic AI for trade surveillance


Banks are testing a new type of artificial intelligence, known as agentic AI, that does more than scan for keywords or follow preset rules. Instead of relying only on static alerts, some trading desks are beginning to use systems designed to reason through patterns in real time and flag conduct that may need human review.

Bloomberg detailed how Goldman Sachs and Deutsche Bank are exploring or deploying so-called “agentic” AI tools for trading surveillance. The goal is to strengthen oversight of orders and trades by using software agents that can analyse activity as it happens and identify patterns that could suggest misconduct.

Adaptive agents

Large banks use automated surveillance systems to monitor trading activity, systems that often rely on predefined rules: if a trade exceeds a certain size, deviates from a benchmark, or fits a known risk pattern, it triggers an alert. Compliance teams then review the case manually.

The challenge is scale and complexity. Modern markets generate huge volumes of data across asset classes, time zones, and trading venues. Static rules can generate large numbers of false positives, while more subtle forms of manipulation may not match known patterns.
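Rules of this kind are easy to picture: a trade either trips a threshold or it does not. A toy sketch follows, with thresholds invented for illustration rather than taken from any bank's system:

```python
from dataclasses import dataclass

@dataclass
class Trade:
    size: float       # notional size of the order
    price: float      # executed price
    benchmark: float  # reference price for the instrument

def static_alerts(trade: Trade, max_size: float = 1e6,
                  max_dev: float = 0.05) -> list[str]:
    """Predefined rules of the kind legacy surveillance systems apply."""
    alerts = []
    if trade.size > max_size:
        alerts.append("size-limit")
    if abs(trade.price - trade.benchmark) / trade.benchmark > max_dev:
        alerts.append("benchmark-deviation")
    return alerts

# A 2% price deviation stays under the 5% threshold, so only size fires.
print(static_alerts(Trade(size=2e6, price=102.0, benchmark=100.0)))
```

The brittleness is visible in the sketch: anything below both thresholds produces no alert, however unusual the combination of behaviours.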

According to Bloomberg, the newer agentic systems aim to go beyond that approach. Rather than simply matching trades against a checklist, the AI agents are designed to examine trading behaviour across multiple signals, compare it with historical activity, and detect unusual combinations of actions.

The tools are not described as replacing compliance officers. Instead, they appear to function as an additional layer of monitoring, surfacing cases that warrant closer human inspection.

Deutsche Bank’s work with Google Cloud

Bloomberg reported that Deutsche Bank is working with Google Cloud on developing AI agents that can monitor trading activity. The system is designed to review large sets of order and execution data and flag anomalies in near real time.

The bank has been expanding its AI initiatives over the past few years, and this surveillance effort reflects how financial institutions are applying generative and large language model technology beyond chat interfaces. In this context, the AI is not answering customer questions but analysing structured and unstructured data streams tied to trading behaviour. The AI agents can help identify “complex anomalies” in orders and trades, which suggests the system may look at relationships between trades, timing, market conditions, and trader history, rather than single events in isolation.

Human compliance staff remain responsible for reviewing flagged cases and determining whether further action is required.

Goldman Sachs’ agentic AI strategy

Goldman Sachs is also exploring the use of agentic AI for surveillance, according to Bloomberg. The bank has invested heavily in AI in its trading and risk systems in recent years, and this effort appears to extend that work into compliance.

The focus, as described in the report, is on using AI agents that can operate with a degree of independence in scanning for misconduct indicators. The system may identify patterns that do not fit a clear rule but still stand out as unusual.

For regulators, the appeal is straightforward: earlier detection can reduce market harm and reputational risk. For banks, there is also an operational dimension. Compliance departments face pressure to handle large volumes of alerts while maintaining strict oversight standards. Tools that can reduce noise without lowering scrutiny are likely to attract attention.

Why “agentic AI” matters

The term “agentic AI” refers to systems that can take goal-directed actions rather than merely respond to prompts. In practice, that can mean the software is able to decide what data to examine next, compare multiple signals, and escalate findings without constant human input. In a trading context, that might involve monitoring order flows, price movements, communications metadata, and historical behaviour to assess whether activity aligns with normal patterns.
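In miniature, such a loop might score each signal against its own history and escalate only when several deviate at once. This is a hedged sketch of the idea, with made-up signals and thresholds, not any bank's actual system:

```python
def zscore(value: float, history: list[float]) -> float:
    """How far a signal sits from its historical mean, in std deviations."""
    mean = sum(history) / len(history)
    var = sum((x - mean) ** 2 for x in history) / len(history)
    return (value - mean) / (var ** 0.5 or 1.0)  # avoid divide-by-zero

def agentic_review(signals: dict, history: dict) -> str:
    """Goal-directed loop: inspect each signal, combine evidence, escalate."""
    evidence = []
    for name, value in signals.items():   # the agent chooses what to examine
        if abs(zscore(value, history[name])) > 2.0:
            evidence.append(name)
    if len(evidence) >= 2:                # an unusual combination of actions
        return f"escalate:{'+'.join(sorted(evidence))}"
    return "normal"

history = {"order_rate": [10, 11, 9, 10],
           "cancel_ratio": [0.1, 0.12, 0.11, 0.09]}
signals = {"order_rate": 30, "cancel_ratio": 0.5}  # both far above baseline
print(agentic_review(signals, history))
```

Note that even in this sketch the output is an escalation for human review, not a disciplinary decision, matching the division of labour described above.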

This does not mean the system makes disciplinary decisions on its own. Financial institutions operate under strict regulatory regimes, and accountability remains with human supervisors. The agent’s role is to identify and organise information more effectively than static systems can.

Part of a wider compliance shift

What appears new is the application of more advanced generative AI architectures to internal control functions.

Regulators in the US and Europe have encouraged firms to improve the monitoring of market abuse and manipulation. While rules do not mandate agentic AI, they do require firms to maintain effective systems and controls. If AI tools can help meet that standard, adoption is likely to grow.

At the same time, AI in compliance raises its own questions. Banks must ensure that models are explainable, that they do not introduce bias, and that they can withstand regulatory review. Model governance, data security, and audit trails remain central concerns.

What changes for the industry

If agentic surveillance tools prove effective, they could alter how compliance teams work. Instead of sorting through large volumes of simple alerts, staff may spend more time evaluating complex cases surfaced by AI agents.

That change would not remove the need for human judgement. It may, however, change where human effort is focused. In markets where speed and data volume continue to rise, the ability to analyse patterns in real time is becoming harder to achieve with rule-based systems alone.

(Photo by Markus Spiske)

See also: Mastercard’s AI payment demo points to agent-led commerce

