
Rivian’s AI pivot is about more than chasing Tesla


RJ Scaringe is sitting in Rivian’s Palo Alto offices, explaining why the adventure-themed EV company suddenly decided to build its own self-driving cars, when an unexpected guest glides by the window outside: Waymo.

A robotaxi from the Alphabet-owned company pulls up outside the office. The passenger, an analyst from Goldman Sachs, briefly takes a selfie before climbing inside. The Rivian founder and CEO chuckles at the scene.

“That’s amazing,” he laughs. “So perfect.”

The arrival of the Waymo helps clarify the challenge that lies ahead for Rivian. A few hours earlier, Scaringe was onstage in front of an audience of hundreds of investors, reporters, and influencers gathered for the company’s announcement of a huge, expensive, and undeniably risky bet on autonomy and AI. The goal, he said, is for Rivian to design its own AI chips that can help power higher levels of autonomy, eventually leading to Level 4 — no human supervision required within certain limits. Could Rivian achieve what has taken Waymo decades to accomplish, but on a much shorter timeline?

More importantly, can it do it safer and more effectively than Tesla, the other EV-only company that is attempting its own messy transformation into an AI and robotics company? Rivian was founded as the outdoor lover’s answer to Elon Musk’s company: rugged, off-road worthy, and fully electric. Now it seems like it’s trying to chase the mercurial billionaire down an AI rabbit hole.

Scaringe insists the move isn’t motivated by Tesla, but by a recognition that the auto industry is staring up at its own cliff. Advances in transformer-based encoding and large-parameter models prompted a fundamental shift in Rivian’s thinking about the “physical AI” of autonomous driving, he says. As a result, Rivian began a clean-sheet redesign of its autonomy platform in early 2022. With the introduction of its Gen 2 platform, the company began building a “data flywheel” to train a large driving model (sort of like an LLM, but for driving) using real-world driving data from its fleet. And because the model is trained end-to-end, Rivian says that improvements in sensors or compute directly enhance its capabilities, allowing the system to improve continuously as hardware advances.

Either Rivian pursues its own self-produced, vertically integrated autonomy strategy, or it risks being left in the dust by Waymo, Tesla, and others. Several times during the presentation, Scaringe notes that this isn’t some bandwagon chasing, but rather a move born out of years of hard work and thoughtful design.

“I spent some time last night with the team talking about this right before we’re about to show it today,” Scaringe says onstage. “And one of the lead engineers looked at me and said, ‘Boy, we’ve been working on this for years, and I haven’t been able to talk about it. It’s so cool. Tomorrow I can start to talk about what I do every day, all day long.’”

Universal Hands-Free will work on 3.5 million miles of road in North America.
Image: Rivian

Hands-free, eyes-off, and beyond

Rivian certainly isn’t shy about sharing details now. The company’s “AI and Autonomy Day” is set up like a Silicon Valley high school science fair. Tours take us throughout the Rivian office to see various projects: the new chip that will help power its AI ambitions, complete with microscopes to inspect the silicon in extreme close-up; its new in-car, AI-powered voice assistant that can navigate you to your favorite winery or pick the right song by Jelly Roll; and the lidar sensor that will help its vehicles drive themselves by creating a 3D picture of the world around it. There’s even an R2 wrapped to look like R2-D2 from Star Wars.

The most notable thing I get to experience is a test drive with an early preview of Rivian’s new hands-free point-to-point capability. A software update, released soon after the event, allows Rivian owners to operate “hands-free” on 3.5 million miles of road in the US and Canada. The point-to-point capability will come at a later date and will unlock even more partial autonomous driving. Think of it as Rivian’s answer to Tesla’s Full Self-Driving feature.

Rivian says that starting next year, if a road has clear, painted lane lines, drivers will be able to use hands-free driving. It’ll be part of a paid package called Autonomy Plus, offered either as a one-time purchase of $2,900 or a monthly $49.99 subscription. (By comparison, Tesla offers its premium FSD option for $8,000 upfront or a $99-per-month fee.) Rivian’s Autonomy Plus package will be available free to all customers until March 2026.


I get to experience about 45 minutes of this feature from the passenger seat of an R1S as it drives through a series of tricky scenarios around Palo Alto. A Rivian engineer sits in the driver’s seat, ready to take control if anything goes wrong, but he mostly keeps his hands in his lap and his feet off the pedals as the R1S navigates intersections, traffic signals, and a bevy of pedestrians, cyclists, and other vulnerable road users.

The car seems to handle itself just fine; the engineer takes control only once, to avoid getting stuck at a red light. At one point, we thread a narrow path between a passing truck on one side and a cyclist in a bike lane on the other — and while I think we come a little too close to the cyclist, the car doesn’t slow down or wobble in the lane. Days after the event, I learn of other test rides with more disengagements than mine.

There’s none of the jerkiness or hesitancy that I typically associate with a first-gen driver-assist system. Maybe that’s because in addition to 11 cameras, the system is also receiving data from five radar sensors, helping it form a more complete view of the world around it. Tesla’s FSD system, by comparison, is camera-only.

With more cars equipped with partial automation hitting the road, safety researchers are growing increasingly worried about driver attention and the potential for these systems to cause crashes. But Rivian subscribes to the theory that drivers should work with the hands-free system by adjusting the steering or pushing the accelerator without disengaging it. Rivian can switch between two different types of steering controllers that allow the driver to input their own steering when they want, Nick Nguyen, director of product and programs for autonomy, tells me.

“We’re big believers in collaboration,” he says. “We want you to interact with it as much or as little as you want.”

The company’s new lidar sensor will come with the R2.
Image: Rivian

The real test is still to come. Rivian’s Gen 3 system will include the company’s proprietary silicon chip and the same sensor stack as Gen 2, with one crucial addition: lidar. The laser sensor is rarely found in privately owned vehicles, featuring more prominently in robotaxis like Waymo’s. Rivian is planning to add it to its upcoming, more affordable model, the R2 — and taking a huge risk in the process.

Lidar is pricey, though those costs are coming down. (Rivian wouldn’t disclose its supplier.) Whatever the price, the difference between a camera-only system like Tesla’s and one with multiple sensors is stark. During the presentation, James Philbin, Rivian’s vice president for autonomy and AI, shows a side-by-side visual comparison of Rivian’s autonomy software identifying and interpreting objects ahead using only its cameras; cameras and radar; and the trifecta of cameras, radar, and lidar. The last setup is the clear victor, spotting several hidden objects and even pedestrians that the camera-only and camera-plus-radar approaches could not.

In our interview, Scaringe clarifies that the R2 will begin production next year without lidar or the Gen 3 autonomy computer; neither will be added until the end of 2026. On Reddit, Rivian fans are upset that the company couldn’t time the rollout so that early adopters aren’t forced to choose between a less capable R2 now and a more capable one later. But Scaringe insists that demand for R2 is already so high that Rivian doesn’t expect a significant sales impact.

What could impact sales is a janky, unreliable system. But Scaringe assures me that Gen 3 is anything but. The AI computer will feature a dual-chip setup capable of 1,600 trillion operations a second (TOPS), a figure that he claims would have been unimaginable a few years ago.

But what actually matters for camera-based robotics is how fast the computer processes raw information: pixels. Rivian says its Gen 3 system will be able to process 5 billion pixels a second, not a specification you commonly hear tossed around in the AI world, but one Scaringe believes should impress people even more than the TOPS measure.


“Tesla recently talked about theirs,” he says. “I’ll let you Google it, but it’s a lot less.” (It’s 1 million pixels per millisecond.)
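For scale, the two quoted figures can be compared directly with a quick unit conversion (a back-of-the-envelope sketch using only the numbers above):

```python
# Put both quoted pixel-throughput claims in the same unit (pixels per second).
rivian_pixels_per_sec = 5_000_000_000               # Rivian: 5 billion pixels/s
tesla_pixels_per_ms = 1_000_000                     # Tesla: 1 million pixels/ms
tesla_pixels_per_sec = tesla_pixels_per_ms * 1_000  # 1,000 ms in a second

ratio = rivian_pixels_per_sec / tesla_pixels_per_sec
print(tesla_pixels_per_sec, ratio)  # 1000000000 5.0
```

In other words, 1 million pixels per millisecond works out to 1 billion pixels per second — a fifth of Rivian’s claimed rate, which is presumably the comparison Scaringe is inviting.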

The next major milestone is “eyes-off” driving, where users no longer need to watch the road and, according to Scaringe, can fully reclaim their time to read, use their phone, or relax. Beyond that lies personal Level 4 capability, where the vehicle operates entirely on its own on certain roads, no human supervision required.

Whether Rivian ultimately gets there will depend on whether the company can create trust around its products. And that will require a transparent approach to legal liability that most automakers, Rivian included, have yet to fully embrace.

Scaringe notes that as autonomy improves, human driving will shrink from roughly 80–90 percent of miles today to 10–20 percent within a few years, and eventually to zero. At that point, a driver’s personal skill level becomes irrelevant to risk assessment, and insurance must shift accordingly, he says. Rivian has a partnership with Nationwide for its insurance claims, but has yet to work out how it will accept liability for crashes that occur in its autonomous vehicles.

“This is like the ticky-tacky of the real work streams to make this all real,” Scaringe says. “Beyond the technology, but actually, like, the business systems that need to be designed.”

Rivian’s RAP1 chip, or the first-generation Rivian Autonomy Processor.
Image: Bloomberg via Getty Images

In Palo Alto, the reactions to Rivian’s announcements are enthusiastic. All the influencers and analysts I talk to are impressed with what they’re being shown. But despite this, the company’s stock falls 6 percent the day of the event to close at $16.43 per share. Tesla, meanwhile, continues to trade at over $460 per share. Rivian’s stock has since bounced back slightly, and while the company has eked out a gross profit in the past, it’s still facing a very tough road ahead with the elimination of the EV tax credit and the enormous capital costs that will be needed to realize its AI dreams.

And there’s still the issue of why Rivian is doing this in the first place. While fans may be left scratching their heads over why their favorite outdoor brand is chasing after Tesla with its AI strategy, the company is doing the necessary work to keep itself relevant in a new era, says mobility investor Reilly Brennan.

“Having an AI strategy is growing into a necessary component for public auto companies in this era — at least the ones that want public market comps on the high end of the continuum, a la Tesla,” Brennan tells me. “Consider it like those overachieving high school students trying to get into Harvard: It’s not a requirement that they both started a charity and play the contrabassoon, but it sure seems like everyone else who gets in is doing it.”

There’s peer pressure, and then there’s the necessity of finding new ways to earn money off your customers. A $50-a-month autonomy subscription could be a lucrative new revenue stream for Rivian, which needs to become profitable to survive.

Scaringe sounds like a true convert. We’re rapidly approaching a time in which AI will be as accessible as “running water and electricity,” he says during the presentation. No mention is made of how much actual running water and electricity AI consumes, which, for a company that purports to care about the environment, could quickly become a problem.

And Rivian is not fully committed to following Musk’s ceaseless, clout-chasing pursuit of robotaxis and humanoid robots — though Scaringe admits that both remain a possibility. The company spun off its own robotics division as Mind Robotics earlier this year, after all.

“The part of the problem that’s not solved is the Level 4,” he tells me, standing up because he has several more interviews ahead, and my time is up. “And so that’s our focus, on the tech.”



Ronnie Sheth, CEO, SENEN Group: Why now is the time for enterprise AI to ‘get practical'


Before you set sail on your AI journey, always check the state of your data – because if there is one thing likely to sink your ship, it is data quality.

Gartner estimates that poor data quality costs organisations an average of $12.9 million each year in wasted resources and lost opportunities. That’s the bad news. The good news is that organisations are increasingly understanding the importance of their data quality – and less likely to fall into this trap.

That’s the view of Ronnie Sheth, CEO of AI strategy, execution and governance firm SENEN Group. The company focuses on data and AI advisory, operationalisation and literacy, and Sheth notes she has been in the data and AI space ‘ever since [she] was a corporate baby’, so there is plenty of real-world experience behind the viewpoint. There is also plenty of success; Sheth notes that her company has a 99.99% client repeat rate.

“If I were to be very practical, the one thing I’ve noticed is companies jump into adopting AI before they’re ready,” says Sheth. Companies, she notes, will have an executive direction insisting they adopt AI, but without a blueprint or roadmap to accompany it. The result may be impressive user numbers, but with no measurable outcome to back anything up.

Even as recently as 2024, Sheth saw many organisations struggling because their data was ‘nowhere where it needed to be.’ “Not even close,” she adds. Now, the conversation has turned more practical and strategic. Companies are realising this, and coming to SENEN Group initially to get help with their data, rather than wanting to adopt AI immediately.

“When companies like that come to us, the first course of order is really fixing their data,” says Sheth. “The next course of order is getting to their AI model. They are building a strong foundation for any AI initiative that comes after that.

“Once they fix their data, they can build as many AI models as they want, and they can have as many AI solutions as they want, and they will get accurate outputs because now they have a strong foundation,” Sheth adds.

With breadth and depth of expertise, SENEN Group helps organisations right their course. Sheth cites the example of one customer who came to them wanting a data governance initiative. What they actually needed first was a data strategy – the why and how, the outcomes they were trying to achieve with their data – before adding in governance and providing a roadmap for an operating model. “They’ve moved from raw data to descriptive analytics, moving into predictive analytics, and now we’re actually setting up an AI strategy for them,” says Sheth.

It is this attitude and requirement for practical initiatives which will be the cornerstone of Sheth’s discussion at AI & Big Data Expo Global in London this week. “Now would be the time to get practical with AI, especially enterprise AI adoption, and not think about ‘look, we’re going to innovate, we’re going to do pilots, we’re going to experiment,’” says Sheth. “Now is not the time to do that. Now is the time to get practical, to get AI to value. This is the year to do that in the enterprise.”




Apptio: Why scaling intelligent automation requires financial rigour


Greg Holmes, Field CTO for EMEA at Apptio, an IBM company, argues that successfully scaling intelligent automation requires financial rigour.

The “build it and they will come” model of technology adoption often leaves a hole in the budget when applied to automation. Executives frequently find that successful pilot programmes do not translate into sustainable enterprise-wide deployments because initial financial modelling ignored the realities of production scaling.

“When we integrate FinOps capabilities with automation, we’re looking at a change from being very reactive on cost management to being very proactive around value engineering,” says Holmes.

This shifts the assessment criteria for technical leaders. Rather than waiting “months or years to assess whether things are getting value,” engineering teams can track resource consumption – such as cost per transaction or API call – “straight from the beginning.”

The unit economics of scaling intelligent automation

Innovation projects face a high mortality rate. Holmes notes that around 80 percent of new innovation projects fail, often because financial opacity during the pilot phase masks future liabilities.

“If a pilot demonstrates that automating a process saves, say, 100 hours a month, leadership thinks that’s really successful,” says Holmes. “But what it fails to track is that the pilot sometimes is running on over-provisioned infrastructure, so it looks like it performs really well. But you wouldn’t over-provision to that degree during a real production rollout.”

Moving that workload to production changes the calculus. The requirements for compute, storage, and data transfer increase. “API calls can multiply, exceptions and edge cases appear at volume that might have been out of scope for the pilot phase, and then support overheads just grow as well,” he adds.

To prevent this, organisations must track the marginal cost at scale. This involves monitoring unit economics, such as the cost per customer served or cost per transaction. If the cost per customer increases as the customer base grows, the business model is flawed.
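That unit-economics check is simple enough to sketch in a few lines. The figures below are made up for illustration – the article gives no actual numbers, and this is not Apptio’s methodology, just the arithmetic behind the principle:

```python
def cost_per_customer(total_cost: float, customers: int) -> float:
    """Unit cost: total spend divided by customers served."""
    return total_cost / customers

# Hypothetical quarterly figures as an automation rolls out:
# (total cost in dollars, customers served).
quarters = [
    (120_000.0, 1_000),   # pilot
    (400_000.0, 5_000),   # early scale
    (700_000.0, 12_000),  # production
]

unit_costs = [cost_per_customer(cost, n) for cost, n in quarters]

# Healthy scaling: unit cost falls as the customer base grows.
scaling_is_healthy = all(a > b for a, b in zip(unit_costs, unit_costs[1:]))
print([round(u, 2) for u in unit_costs], scaling_is_healthy)
# [120.0, 80.0, 58.33] True
```

If that trend ran the other way – unit cost rising with volume – the model flags the business case as flawed long before the annual review does.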

Conversely, effective scaling should see these unit costs decrease. Holmes cites a case study from Liberty Mutual where the insurer was able to find around $2.5 million of savings by bringing in consumption metrics and “not just looking at labour hours that they were saving.”

However, financial accountability cannot sit solely with the finance department. Holmes advocates for putting governance “back in the hands of the developers into their development tools and workloads.”

Integration with infrastructure-as-code tools like HashiCorp Terraform and GitHub allows organisations to enforce policies during deployment. Teams can spin up resources programmatically with immediate cost estimates.

“Rather than deploying things and then fixing them up, which gets into the whole whack-a-mole kind of problem,” Holmes explains, companies can verify they are “deploying the right things at the right time.”

When scaling intelligent automation, tension often simmers between the CFO, who focuses on return on investment, and the Head of Automation, who tracks operational metrics like hours saved.

“This translation challenge is precisely what TBM (Technology Business Management) and Apptio are designed to solve,” says Holmes. “It’s having a common language between technology and finance and with the business.”

The TBM taxonomy provides a standardised framework to reconcile these views. It maps technical resources (such as compute, storage, and labour) into IT towers and further up to business capabilities. This structure translates technical inputs into business outputs.

“I don’t necessarily know what goes into all the IT layers underneath it,” Holmes says, describing the business user’s perspective. “But because we’ve got this taxonomy, I can get a detailed bill that tells me about my service consumption and precisely which costs are driving it to be more expensive as I consume more.”

Addressing legacy debt and budgeting for the long-term

Organisations burdened by legacy ERP systems face a binary choice: automation as a patch, or as a bridge to modernisation. Holmes warns that if a company is “just trying to mask inefficient processes and not redesign them,” they are merely “building up more technical debt.”

A total cost of ownership (TCO) approach helps determine the correct strategy. The Commonwealth Bank of Australia utilised a TCO model across 2,000 different applications – of various maturity stages – to assess their full lifecycle costs. This analysis included hidden costs such as infrastructure, labour, and the engineering time required to keep automation running.

“Just because something’s legacy doesn’t mean you have to retire it,” says Holmes. “Some of those legacy systems are worth maintaining just because the value is so good.”

In other cases, calculating the cost of the automation wrappers required to keep an old system functional reveals a different reality. “Sometimes when you add up the TCO approach, and you’re including all these automation layers around it, you suddenly realise, the real cost of keeping that old system alive is not just the old system, it’s those extra layers,” Holmes argues.
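Holmes’s point reduces to simple addition: the naive view counts only the legacy system itself, while the TCO view adds every wrapper around it. The numbers here are purely illustrative, since the article gives no figures:

```python
# Hypothetical annual costs of keeping a legacy system alive.
legacy_system = 250_000                          # licences, hosting, support
automation_wrappers = [90_000, 60_000, 45_000]   # RPA bots, integration glue, monitoring

naive_view = legacy_system
true_tco = legacy_system + sum(automation_wrappers)
uplift = true_tco / naive_view - 1

print(true_tco, round(uplift, 2))  # 445000 0.78
```

In this sketch the wrappers add 78 percent to the apparent cost of the system – the kind of gap that can flip a “keep it” decision into a modernisation case.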

Avoiding sticker shock requires a budgeting strategy that balances variable costs with long-term commitments. While variable costs (OPEX) offer flexibility, they can fluctuate wildly based on demand and engineering efficiency.

Holmes advises that longer-term visibility enables better investment decisions. Committing to specific technologies or platforms over a multi-year horizon allows organisations to negotiate economies of scale and standardise architecture.

“Because you’ve made those longer term commitments and you’ve standardised on different platforms and things like that, it makes it easier to build the right thing out for the long term,” Holmes says.

Combining tight management of variable costs with strategic commitments supports enterprises in scaling intelligent automation without the volatility that often derails transformation.

IBM is a key sponsor of this year’s Intelligent Automation Conference Global in London on 4-5 February 2026. Greg Holmes and other experts will be sharing their insights during the event. Be sure to check out the day one panel session, Scaling Intelligent Automation Successfully: Frameworks, Risks, and Real-World Lessons, to hear more from Holmes and swing by IBM’s booth at stand #362.

See also: Klarna backs Google UCP to power AI agent payments


Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.



FedEx tests how far AI can go in tracking and returns management


FedEx is using AI to change how package tracking and returns work for large enterprise shippers. For companies moving high volumes of goods, tracking no longer ends when a package leaves the warehouse. Customers expect real-time updates, flexible delivery options, and returns that do not turn into support tickets or delays.

That pressure is pushing logistics firms to rethink how tracking and returns operate at scale, especially across complex supply chains.

This is where artificial intelligence is starting to move from pilot projects into daily operations.

FedEx plans to roll out AI-powered tracking and returns tools designed for enterprise shippers, according to a report by PYMNTS. The tools are aimed at automating routine customer service tasks, improving visibility into shipments, and reducing friction when packages need to be rerouted or sent back.

Rather than focusing on consumer-facing chatbots, the effort centres on operational workflows that sit behind the scenes. These are the systems enterprise customers rely on to manage exceptions, returns, and delivery changes without manual intervention.

How FedEx is applying AI to package tracking

Traditional tracking systems tell customers where a package is and when it might arrive. AI-powered tracking takes a step further by utilising historical delivery data, traffic patterns, weather conditions, and network constraints to flag potential delays before they happen.

According to the PYMNTS report, FedEx’s AI tools are designed to help enterprise shippers anticipate issues earlier in the delivery process. Instead of reacting to missed delivery windows, shippers may be able to reroute packages or notify customers ahead of time.

For businesses that ship thousands of parcels per day, that shift matters. Small improvements in prediction accuracy can reduce support calls, lower refund rates, and improve customer trust, particularly in retail, healthcare, and manufacturing supply chains.

This approach also reflects a broader trend in enterprise software, in which AI is being embedded into existing systems rather than introduced as standalone tools. The goal is not to replace logistics teams, but to minimise the number of manual decisions they need to make.

Returns as an operational problem, not a customer issue

Returns are one of the most expensive parts of logistics. For enterprise shippers, particularly those in e-commerce, returns affect warehouse capacity, inventory planning, and transportation costs.

According to PYMNTS, FedEx’s AI-enabled returns tools aim to automate parts of the returns process, including label generation, routing decisions, and status updates. Companies that use AI to determine the most efficient return path may be able to reduce delays and avoid returning things to the wrong facility.

This is less about convenience and more about operational discipline. Returns that sit idle or move through the wrong channel create cost and uncertainty across the supply chain. AI systems trained on past return patterns can help standardise decisions that were previously handled case by case.

For enterprise customers, this type of automation supports scale. As return volumes fluctuate, especially during peak seasons, systems that adjust automatically reduce the need for temporary staffing or manual overrides.

What FedEx’s AI tracking approach says about enterprise adoption

What stands out in FedEx’s approach is how narrowly focused the AI use case is. There are no broad claims about transformation or reinvention. The emphasis is on reducing friction in processes that already exist.

This mirrors how other large organisations are adopting AI internally. In a separate context, Microsoft has described a similar pattern in its own internal rollout: AI tools introduced gradually, with clear limits, governance rules, and feedback loops.

While Microsoft’s case focused on knowledge work and FedEx’s on logistics operations, the underlying lesson is the same. AI adoption tends to work best when applied to specific activities with measurable results rather than broad promises of efficiency.

For logistics firms, those advantages include fewer delivery exceptions, lower return handling costs, and better coordination between shipping partners and enterprise clients.

What this signals for enterprise customers

For end-user companies, FedEx’s move signals that logistics providers are investing in AI as a way to support more complex shipping demands. As supply chains become more distributed, visibility and predictability become harder to maintain without automation.

AI-driven tracking and returns could also change how businesses measure logistics performance. Companies may focus less on delivery speed and more on how quickly issues are recognised and resolved.

That shift could influence procurement decisions, contract structures, and service-level agreements. Enterprise customers may start asking not just where a shipment is, but how well a provider anticipates problems.

FedEx’s plans reflect a quieter phase of enterprise AI adoption. The focus is less on experimentation and more on integration. These systems are not designed to draw attention but to reduce noise in operations that customers only notice when something goes wrong.

(Photo by Liam Kevan)

See also: PepsiCo is using AI to rethink how factories are designed and updated

