Artificial Intelligence
OpenAI's enterprise push: The hidden story behind AI's sales arms race
As OpenAI races toward its ambitious US$100 billion revenue target by 2027, the ChatGPT maker is reportedly building an army of AI consultants to bridge the gap between cutting-edge technology and enterprise boardrooms—a move that signals a fundamental shift in how AI companies are approaching the notoriously difficult challenge of enterprise adoption.
According to industry data and recent hiring patterns, OpenAI is significantly expanding its go-to-market teams at a time when the company’s enterprise business is exploding. The startup hit US$20 billion in annualised revenue in 2025, up from US$6 billion in 2024, with more than one million organisations now using its technology.
The enterprise adoption challenge
The aggressive hiring strategy reflects a broader truth about enterprise AI: the technology sells itself in demos, but implementing it at scale requires an entirely different skill set. Recent research from Second Talent shows that while 87% of large enterprises are implementing AI solutions, only 31% of AI use cases reach full production, with the gap between pilot projects and enterprise-wide deployment remaining stubbornly wide.
“The real story isn’t just about hiring consultants—it’s about what this reveals about enterprise AI’s maturation,” said one industry analyst who requested anonymity. “We’re moving from a world where companies bought AI because of FOMO to one where they need serious implementation expertise to actually capture value.”
The challenge is multifaceted. According to multiple industry surveys, the top enterprise AI adoption challenges in 2025 include integration complexity at 64%, data privacy risks at 67%, and reliability concerns at 60%. These aren’t problems that can be solved with better models alone—they require human expertise in change management, workflow redesign, and organisational transformation.
The competitive landscape
OpenAI isn’t alone in recognising the enterprise implementation gap. Anthropic, which is on track to meet a goal of US$9 billion in annualised revenue by the end of 2025, with targets of US$20 billion to US$26 billion for 2026, has taken a different approach by focusing on large-scale partnerships.
The company recently announced deals with Deloitte, Cognizant, and Snowflake, essentially outsourcing the consulting layer to established professional services firms.
“Anthropic is positioning Claude as the enterprise-friendly alternative—essentially ‘OpenAI for companies that don’t want to rely on OpenAI,’” according to industry research firm Sacra.
Microsoft, meanwhile, leverages its existing enterprise relationships and consulting partnerships, while Google is bundling AI capabilities into its Workspace and Cloud ecosystem. Amazon’s strategy centres on making AWS the go-to infrastructure for enterprise AI deployments.
What OpenAI’s hiring reveals
The reported consultant hiring wave suggests OpenAI is betting that direct customer engagement will prove more effective than pure partnership models. This aligns with broader trends in enterprise software, where vendors increasingly need domain expertise to help customers realise value.
Job postings analysed across multiple platforms show OpenAI recruiting for roles spanning enterprise account directors, AI deployment managers, and solutions architects—all focused on helping organisations move from proof-of-concept to production deployment.
The timing is critical. With OpenAI’s share of the enterprise foundation-model market dropping from 50% to 34% while Anthropic doubled its presence from 12% to 24%, the company needs to prove it can not only build the best technology but also help enterprises successfully deploy it.
The implementation reality
For enterprise IT leaders, the wave of AI consultant hiring by vendors represents both an opportunity and a warning. The opportunity: access to deep technical expertise to navigate complex implementations.
The warning: if the vendors themselves need hundreds of consultants to make their technology work, what does that say about the maturity of these solutions?
“Most organisations treat AI as a tactical enhancement rather than a strategic enabler, resulting in fragmented execution,” according to a recent industry report. Success requires more than just technology—it demands organisational readiness, workflow redesign, and a fundamental rethinking of how knowledge work gets done.
The real question isn’t whether OpenAI or its competitors can hire enough consultants. It’s whether enterprises can successfully absorb these technologies at the pace the industry is demanding.
With 42% of C-suite executives reporting that AI adoption is ‘tearing their company apart’ due to power struggles, conflicts, and organisational silos, the human challenge may prove harder to solve than the technical one.
As the AI sales arms race intensifies, one thing is clear: the winners won’t just be the companies with the best models, but those who can successfully guide enterprises through the messy, difficult work of organisational transformation.
OpenAI’s consultant hiring spree suggests it’s learning this lesson—the hard way.
(Photo by Andrew Neel)
See also: AI Expo 2026 Day 1: Governance and data readiness enable the agentic enterprise
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events. Click here for more information.
AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.
How separating logic and search boosts AI agent scalability
Separating logic from inference improves AI agent scalability by decoupling core workflows from execution strategies.
The transition from generative AI prototypes to production-grade agents introduces a specific engineering hurdle: reliability. LLMs are stochastic by nature. A prompt that works once may fail on the second attempt. To mitigate this, development teams often wrap core business logic in complex error-handling loops, retries, and branching paths.
This approach creates a maintenance problem. The code defining what an agent should do becomes inextricably mixed with the code defining how to handle the model’s unpredictability. A new framework proposed by researchers from Asari AI, MIT CSAIL, and Caltech suggests a different architectural standard is required to scale agentic workflows in the enterprise.
The research introduces a programming model called Probabilistic Angelic Nondeterminism (PAN) and a Python implementation named ENCOMPASS. This method allows developers to write the “happy path” of an agent’s workflow while relegating inference-time strategies (e.g. beam search or backtracking) to a separate runtime engine. This separation of concerns offers a potential route to reduce technical debt while improving the performance of automated tasks.
The entanglement problem in agent design
Current approaches to agent programming often conflate two distinct design aspects. The first is the core workflow logic, or the sequence of steps required to complete a business task. The second is the inference-time strategy, which dictates how the system navigates uncertainty, such as generating multiple drafts or verifying outputs against a rubric.
When these are combined, the resulting codebase becomes brittle. Implementing a strategy like “best-of-N” sampling requires wrapping the entire agent function in a loop. Moving to a more complex strategy, such as tree search or refinement, typically requires a complete structural rewrite of the agent’s code.
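The entanglement is easy to see in a short sketch. The snippet below is illustrative only: `call_llm` is a stand-in for a stochastic model call, and the best-of-N loop is hard-wired around the task logic, so moving to beam search would mean restructuring the function itself.

```python
import random

def call_llm(prompt: str) -> str:
    # Stand-in for a stochastic model call: returns a usable
    # answer roughly half the time and an empty string otherwise.
    return prompt.upper() if random.random() < 0.5 else ""

def translate_entangled(source: str, n: int = 5) -> str:
    # Best-of-N sampling hard-wired around the business logic.
    # The retry strategy (the loop) and the task itself (the model
    # call plus scoring) live in one function; switching to beam
    # search or tree search forces a structural rewrite.
    best = ""
    for _ in range(n):
        candidate = call_llm(source)
        if len(candidate) > len(best):  # crude scoring heuristic
            best = candidate
    return best
```

Every new strategy multiplies this kind of scaffolding across every agent function that needs it.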
The researchers argue that this entanglement limits experimentation. If a development team wants to switch from simple sampling to a beam search strategy to improve accuracy, they often must re-engineer the application’s control flow. This high cost of experimentation means teams frequently settle for suboptimal reliability strategies to avoid engineering overhead.
Decoupling logic from search to boost AI agent scalability
The ENCOMPASS framework addresses this by allowing programmers to mark “locations of unreliability” within their code using a primitive called branchpoint().
These markers indicate where an LLM call occurs and where execution might diverge. The developer writes the code as if the operation will succeed. At runtime, the framework interprets these branch points to construct a search tree of possible execution paths.
This architecture enables what the authors term “program-in-control” agents. Unlike “LLM-in-control” systems, where the model decides the entire sequence of operations, program-in-control agents operate within a workflow defined by code. The LLM is invoked only to perform specific subtasks. This structure is generally preferred in enterprise environments for its higher predictability and auditability compared to fully autonomous agents.
By treating inference strategies as a search over execution paths, the framework allows developers to apply different algorithms – such as depth-first search, beam search, or Monte Carlo tree search – without altering the underlying business logic.
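A minimal sketch of the decoupled style follows. The real ENCOMPASS API almost certainly differs: here `branchpoint()` is a no-op marker and the "engine" simply re-runs the whole workflow, whereas the actual framework forks execution state at each marker. The division of labour is the point, though — the agent stays linear, and the strategy lives in the runner.

```python
import random

def branchpoint():
    # Marker for a location of unreliability. A no-op in this toy
    # sketch; a real engine would fork execution paths here.
    pass

def call_llm(prompt: str) -> str:
    # Stand-in for a stochastic model call.
    return prompt.upper() if random.random() < 0.5 else ""

def translate_agent(source: str) -> str:
    # The happy path: linear, readable business logic that assumes
    # the model call succeeds.
    branchpoint()  # the call below may need retrying or forking
    return call_llm(source)

def run_best_of_n(workflow, arg, score, n: int = 5):
    # The inference strategy lives here, outside the agent.
    # Swapping best-of-N for beam search means replacing this
    # runner only; translate_agent is untouched.
    return max((workflow(arg) for _ in range(n)), key=score)
```

Calling `run_best_of_n(translate_agent, "hello", len)` samples five paths and keeps the highest-scoring one; a different runner could apply depth-first or beam search against the same unchanged agent.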
Impact on legacy migration and code translation
The utility of this approach is evident in complex workflows such as legacy code migration. The researchers applied the framework to a Java-to-Python translation agent. The workflow involved translating a repository file-by-file, generating inputs, and validating the output through execution.
In a standard Python implementation, adding search logic to this workflow required defining a state machine. This process obscured the business logic and made the code difficult to read or lint. Implementing beam search required the programmer to break the workflow into individual steps and explicitly manage state across a dictionary of variables.
Using the proposed framework to boost AI agent scalability, the team implemented the same search strategies by inserting branchpoint() statements before LLM calls. The core logic remained linear and readable. The study found that applying beam search at both the file and method level outperformed simpler sampling strategies.
The data indicates that separating these concerns allows for better scaling laws. Performance improved linearly with the logarithm of the inference cost. The most effective strategy found – fine-grained beam search – was also the one that would have been most complex to implement using traditional coding methods.
Cost efficiency and performance scaling
Controlling the cost of inference is a primary concern for data officers managing P&L for AI projects. The research demonstrates that sophisticated search algorithms can yield better results at a lower cost compared to simply increasing the number of feedback loops.
In a case study involving the “Reflexion” agent pattern (where an LLM critiques its own output), the researchers compared scaling the number of refinement loops against using a best-first search algorithm. The search-based approach achieved comparable performance to the standard refinement method but at a reduced cost per task.
This finding suggests that the choice of inference strategy is a factor for cost optimisation. By externalising this strategy, teams can tune the balance between compute budget and required accuracy without rewriting the application. A low-stakes internal tool might use a cheap and greedy search strategy, while a customer-facing application could use a more expensive and exhaustive search, all running on the same codebase.
Adopting this architecture requires a change in how development teams view agent construction. The framework is designed to work in conjunction with existing libraries such as LangChain, rather than replacing them. It sits at a different layer of the stack, managing control flow rather than prompt engineering or tool interfaces.
However, the approach is not without engineering challenges. The framework reduces the code required to implement search, but it does not automate the design of the agent itself. Engineers must still identify the correct locations for branch points and define verifiable success metrics.
The effectiveness of any search capability relies on the system’s ability to score a specific path. In the code translation example, the system could run unit tests to verify correctness. In more subjective domains, such as summarisation or creative generation, defining a reliable scoring function remains a bottleneck.
Furthermore, the model relies on the ability to copy the program’s state at branching points. While the framework handles variable scoping and memory management, developers must ensure that external side effects – such as database writes or API calls – are managed correctly to prevent duplicate actions during the search process.
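One common mitigation, shown here as a hedged sketch rather than anything prescribed by the paper, is to record side effects during speculative paths and commit only the winning path's log, so a write can never happen more than once:

```python
class EffectLog:
    # Records side effects during speculative execution instead of
    # performing them, so that searching over paths cannot trigger
    # duplicate writes.
    def __init__(self):
        self.pending = []

    def write(self, record: str):
        self.pending.append(record)  # deferred, not yet executed

    def commit(self, sink: list):
        sink.extend(self.pending)    # the real write happens once

def summarise(effects: EffectLog, draft: str) -> int:
    # Stand-in agent step: "saves" its draft as a side effect and
    # returns a score (here, simply the draft length).
    effects.write(f"save:{draft}")
    return len(draft)

def search_best(drafts: list) -> list:
    # Explore every candidate path speculatively, then commit the
    # side effects of the best-scoring path exactly once.
    database: list = []
    best_log, best_score = None, float("-inf")
    for draft in drafts:
        log = EffectLog()
        score = summarise(log, draft)
        if score > best_score:
            best_log, best_score = log, score
    best_log.commit(database)
    return database
```

The same discipline applies to API calls: wrap them behind an interface that defers execution until a path is selected, or make them idempotent.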
Implications for AI agent scalability
The change represented by PAN and ENCOMPASS aligns with broader software engineering principles of modularity. As agentic workflows become core to operations, maintaining them will require the same rigour applied to traditional software.
Hard-coding probabilistic logic into business applications creates technical debt. It makes systems difficult to test, difficult to audit, and difficult to upgrade. Decoupling the inference strategy from the workflow logic allows for independent optimisation of both.
This separation also facilitates better governance. If a specific search strategy yields hallucinations or errors, it can be adjusted globally without assessing every individual agent’s codebase. It simplifies the versioning of AI behaviours, a requirement for regulated industries where the “how” of a decision is as important as the outcome.
The research indicates that as inference-time compute scales, the complexity of managing execution paths will increase. Enterprise architectures that isolate this complexity will likely prove more durable than those that permit it to permeate the application layer.
See also: Intuit, Uber, and State Farm trial AI agents inside enterprise workflows
Intuit, Uber, and State Farm trial AI agents inside enterprise workflows
The way large companies use artificial intelligence is changing. For years, AI in business meant experimenting with tools that could answer questions or help with small tasks. Now, some big enterprises are moving beyond tools and into AI agents that can actually do work across systems and workflows, not just answer prompts.
This week, OpenAI introduced a new platform designed to help companies build, run, and manage those kinds of AI agents at scale. The announcement has drawn attention because a handful of large corporations in finance, insurance, mobility, and life sciences are among the first to start using it. That signals a shift: AI may be ready to move from pilots and proofs of concept into real operational roles.
From tools to agents
The new platform, called Frontier, is meant to help companies deploy what are sometimes described as AI coworkers. These are software agents that can connect to corporate systems like data warehouses, customer relationship tools, ticketing systems, and internal apps, and then carry out tasks inside them. The idea is to give the AI agents a shared understanding of how work happens in a company, so they can perform meaningful work reliably over time.
Rather than treating every task as a separate, isolated use case, Frontier is built so that AI agents can function across an organisation’s systems with a common context. In OpenAI’s words, the platform provides the same kinds of basics that people need at work: access to shared business context, onboarding, ways to learn from feedback, and clear permissions and boundaries.
Frontier also includes tools for security, auditing, and ongoing evaluation, so companies can monitor how agents perform and ensure they follow internal rules.
Who’s using this now
What makes this shift newsworthy is not just the technology itself, but who is said to be using it early.
According to multiple reports and OpenAI’s own posts, early adopters include Intuit, Uber, State Farm Insurance, Thermo Fisher Scientific, HP, and Oracle. Larger pilot programs are also said to be underway with companies such as Cisco, T-Mobile, and Banco Bilbao Vizcaya Argentaria.
Having real companies in different sectors test or adopt a new platform this early shows a move toward real-world application, not just research or internal experimentation. These are firms with complex operations, heavy regulatory needs, or large customer bases: environments where AI tools must work reliably and safely if they are to be adopted beyond experimental teams.
What executives are saying
Direct quotes from executives and leaders involved in these moves give a sense of how companies view the shift.
On LinkedIn, a senior executive from Intuit commented on the early adoption:
“AI is moving from ‘tools that help’ to ‘agents that do.’ Proud Intuit is an early adopter of OpenAI Frontier as we build intelligent systems that remove friction, expand what people and small businesses can accomplish, and unlock new opportunities.”
That comment reflects a belief among some enterprise leaders that AI agents could reduce manual steps and expand what teams can accomplish.
OpenAI’s message to business customers emphasises that the company believes agents need more than raw model power; they need governance, context, and ways to operate inside real business environments. As one commenter on social media put it, the challenge isn’t the ability of the AI models anymore: it is the ability to integrate and manage them at scale.
Why this matters for enterprises
For end-user companies considering or already investing in AI, this moment points to a broader shift in how they might use the technology.
In the past few years, most enterprise AI work has focused on narrow tasks: auto-tagging tickets, summarising documents, or generating content. These applications were useful, but often limited in scope. They didn’t connect to the workflows and systems that run a business’s core processes.
AI agents are meant to close that gap. In principle, an agent can pull together data from multiple systems, reason about it, and act, whether that means updating records, running analyses, or triggering actions across tools.
This means AI could start to handle real workflow steps rather than just provide assistance. For example, instead of an AI drafting a reply to a customer complaint, it could open the ticket, gather relevant account data, propose a resolution, and even update the customer record, all while respecting internal permissions and audit rules.
That is a different kind of value proposition. It is no longer about saving time on a task; it is about letting software take on pieces of the work itself.
Real adoption has practical requirements
The companies testing Frontier are not using it lightly. These are organisations with compliance needs, strict data controls, and complex technology stacks. For an AI agent to function there, it has to be integrated with internal systems in a way that respects access rules and keeps human teams in the loop.
That kind of integration, connecting to CRM, ERP, data warehouses, and ticketing systems, is a long-standing challenge in enterprise IT. The promise of AI agents is that they can bridge these systems with a shared understanding of process and context. Whether that works in practice at scale will depend on how well companies can govern and monitor these systems over time.
The early signs are that enterprises see enough potential to begin serious trials. That itself is news: for AI deployments to move beyond isolated pilots and become part of broader operations is a visible step in technology adoption.
What comes next
If these early experiments succeed and spread, the next phase for enterprise AI could look very different from earlier years of tooling and automation. Instead of using AI to generate outputs for people to act on, companies could start relying on AI to carry out work directly under defined rules and boundaries.
That will raise questions for leaders in operations, IT, security, and compliance. It will also create new roles: not just data scientists and AI engineers, but governance specialists and execution leads who can take responsibility for agent performance over time.
The shift points to a future where AI agents become part of the everyday workflow for large organisations, not as assistants, but as active participants in how work gets done.
(Photo by Growtika)
See also: OpenAI’s enterprise push: The hidden story behind AI’s sales arms race
AI Expo 2026 Day 2: Moving experimental pilots to AI production
The second day of the co-located AI & Big Data Expo and Digital Transformation Week in London showed a market in a clear transition.
Early excitement over generative models is fading. Enterprise leaders now face the friction of fitting these tools into current stacks. Day two sessions focused less on large language models and more on the infrastructure needed to run them: data lineage, observability, and compliance.
Data maturity determines deployment success
AI reliability depends on data quality. DP Indetkar from Northern Trust warned against allowing AI to become a “B-movie robot.” This scenario occurs when algorithms fail because of poor inputs. Indetkar noted that analytics maturity must come before AI adoption. Automated decision-making amplifies errors rather than reducing them if the data strategy is unverified.
Eric Bobek of Just Eat supported this view. He explained how data and machine learning guide decisions at the global enterprise level. Investments in AI layers are wasted if the data foundation remains fragmented.
Mohsen Ghasempour from Kingfisher also noted the need to turn raw data into real-time actionable intelligence. Retail and logistics firms must cut the latency between data collection and insight generation to see a return.
Scaling in regulated environments
The finance, healthcare, and legal sectors have near-zero tolerance for error. Pascal Hetzscholdt from Wiley addressed these sectors directly.
Hetzscholdt stated that responsible AI in science, finance, and law relies on accuracy, attribution, and integrity. Enterprise systems in these fields need audit trails. Reputational damage or regulatory fines make “black box” implementations impossible.
Konstantina Kapetanidi of Visa outlined the difficulties in building multilingual, tool-using, scalable generative AI applications. Models are becoming active agents that execute tasks rather than just generating text. Allowing a model to use tools – like querying a database – creates security vectors that need serious testing.
Parinita Kothari from Lloyds Banking Group detailed the requirements for deploying, scaling, monitoring, and maintaining AI systems. Kothari challenged the “deploy-and-forget” mentality. AI models need continuous oversight, similar to traditional software infrastructure.
The change in developer workflows
Of course, AI is fundamentally changing how code is written. A panel with speakers from Valae, Charles River Labs, and Knight Frank examined how AI copilots reshape software creation. While these tools speed up code generation, they also force developers to focus more on review and architecture.
This change requires new skills. A panel with representatives from Microsoft, Lloyds, and Mastercard discussed the tools and mindsets needed for future AI developers. A gap exists between current workforce capabilities and the needs of an AI-augmented environment. Executives must plan training programmes that ensure developers sufficiently validate AI-generated code.
Dr Gurpinder Dhillon from Senzing and Alexis Ego from Retool presented low-code and no-code strategies. Ego described using AI with low-code platforms to make production-ready internal apps. This method aims to cut the backlog of internal tooling requests.
Dhillon argued that these strategies speed up development without dropping quality. For the C-suite, this suggests cheaper internal software delivery if governance protocols stay in place.
Workforce capability and specific utility
The broader workforce is starting to work with “digital colleagues.” Austin Braham from EverWorker explained how agents reshape workforce models. This terminology implies a move from passive software to active participants. Business leaders must re-evaluate human-machine interaction protocols.
Paul Airey from Anthony Nolan gave an example of AI delivering literally life-changing value. He detailed how automation improves donor matching and transplant timelines for stem cell transplants. The utility of these technologies extends to life-saving logistics.
A recurring theme throughout the event was that effective applications often solve very specific, high-friction problems rather than attempting to be general-purpose solutions.
Managing the transition
The day two sessions from the co-located events show that enterprise focus has now moved to integration. The initial novelty is gone and has been replaced by demands for uptime, security, and compliance. Innovation heads should assess which projects have the data infrastructure to survive contact with the real world.
Organisations must prioritise the basic aspects of AI: cleaning data warehouses, establishing legal guardrails, and training staff to supervise automated agents. The difference between a successful deployment and a stalled pilot lies in these details.
Executives, for their part, should direct resources toward data engineering and governance frameworks. Without them, advanced models will fail to deliver value.
See also: AI Expo 2026 Day 1: Governance and data readiness enable the agentic enterprise