How to design an actually good flash flood alert system

Flash floods have wrought more havoc in the US this week, from the Northeast to the Midwest, just weeks after swollen rivers took more than 130 lives across central Texas earlier this month. Frustrations have grown in the aftermath of that catastrophe over why more wasn’t done to warn people in advance.
Local officials face mounting questions over whether they sent too many or too few mobile phone alerts. Some Texans have accused the state of sending too many alerts about injured police officers in the months leading up to the floods, which may have led residents to opt out of receiving warnings. And hard-hit Kerr County, where more than 100 people died, lacked sirens along riverbanks to warn people of rising waters.
These are all important questions to answer, and answering them can help keep history from repeating itself in another disaster. Failing to translate flood forecasts into timely messages that tell people what they need to do to stay safe can have tragic consequences. In Texas and elsewhere, the solution is more wide-ranging than fixing any single channel of communication. The Verge spoke with experts about what it would take to design an ideal disaster warning system.
When you have a matter of hours or maybe even minutes to send a lifesaving message, you need to use every tool at your disposal. That communication needs to start long before the storm rolls in, and involves everyone from forecasters to disaster managers and local officials. Even community members will need to reach out to each other when no one else may be able to get to them.
By definition, flash floods are difficult to forecast with specificity or much lead time. But forecasts are only one part of the process. There are more hurdles when it comes to getting those forecasts out to people, an issue experts describe as getting past “the last mile.” Doing so starts with a shift in thinking from “‘what will the weather be’ to ‘what will the weather do,’” explains Olufemi Osidele, CEO of Hydrologic Research Center (HRC), which oversees a global flash flood guidance program. The technical term is “impact-based forecasting,” and the goal is to relay messages that help people understand what actions to take to keep themselves safe.
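To make that idea concrete, here is a minimal sketch — not HRC’s actual system — of how flash flood guidance is typically applied: forecast rainfall for a small basin is compared against a threshold amount likely to trigger flash flooding, and any basin that exceeds its threshold is flagged for emergency managers. The basin names and numbers below are hypothetical.

```python
# Minimal sketch of the "impact-based" idea behind flash flood guidance:
# compare forecast rainfall against the rainfall threshold (guidance value)
# that a small basin can absorb before flash flooding is likely.
# All numbers and basin names are hypothetical.

from dataclasses import dataclass

@dataclass
class Basin:
    name: str
    guidance_mm: float      # rainfall over 3 hours likely to trigger flash flooding
    forecast_mm: float      # forecast rainfall over the same 3-hour window

def at_risk(basins: list[Basin]) -> list[str]:
    """Return basins where forecast rainfall meets or exceeds guidance."""
    return [b.name for b in basins if b.forecast_mm >= b.guidance_mm]

basins = [
    Basin("Upper Guadalupe", guidance_mm=60.0, forecast_mm=95.0),
    Basin("Neighboring basin", guidance_mm=80.0, forecast_mm=40.0),
]

print(at_risk(basins))  # ['Upper Guadalupe'] -> hand off to emergency managers
```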
In the hours leading up to devastating floods in central Texas, the National Weather Service sent out escalating alerts about the growing risk of flash floods. But not everyone received alerts on their phones with safety instructions from Kerr County officials during crucial hours, according to records obtained by NBC News. While meteorologists can say there’s a life-threatening storm approaching, it typically falls to local authorities to determine what guidance to give to specific communities on how and when to evacuate or take shelter.
“Emergency responders need to know what are the appropriate actions to take or what’s needed in the case of a flash flood before an event happens so that they can react quickly, because the time to respond to that event is likely very short,” says Theresa Modrick Hansen, chief operating officer at HRC. “Time is really the critical issue for disaster managers.”
Without prior planning, local alerting authorities might be stuck staring at a blank screen when deciding what warning to send to people in the heat of the moment. Many alerting platforms don’t include instructions on how to write that message, according to Jeannette Sutton, an associate professor in the College of Emergency Preparedness, Homeland Security and Cybersecurity at the University at Albany, SUNY. Sutton is also the founder of The Warn Room and consults with local organizations on how to improve their warning systems.
“When you sit down at the keyboard, you have a blank box that you have to fill in with the information that’s going to be useful to the public,” Sutton says. “And when you are in a highly volatile, emotional, chaotic situation, and you all of a sudden have to create [a] message very quickly that is really clear and complete and directed to the right people at the right time, it’s really hard to think of all of that in the moment.”
There aren’t national standards for how a flood alert system should work in the US, so practices vary from place to place. Sutton recommends an end-to-end warning system that connects each step of the process and the people along the way. It includes forecasters and hydrologists who collect data and run it through predictive models to understand the potential impact on communities — identifying which specific populations or infrastructure are most vulnerable. They need to get that information quickly to disaster managers who can then reach people most at risk with safety instructions using channels of communication they’ve thought through in advance.
Ideally, those alerts are tailored to specific locations and give people clear instructions — telling them who should evacuate, when, and where, for instance. A strong message should include five things, according to Sutton: who the message is from, what the hazard is doing, the location of the threat, its timing, and what actions to take to protect yourself.
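As a rough illustration of how those five elements come together — not any particular alerting platform’s format — a pre-planned template could simply slot them into one message. Everything in this sketch is hypothetical example text.

```python
# A sketch of assembling the five elements Sutton lists into one alert.
# Field names and the example text are hypothetical, not a real alerting API.

def build_warning(source, hazard, location, timing, guidance) -> str:
    return f"{source}: {hazard} {location} {timing}. {guidance}"

msg = build_warning(
    source="Kerr County Emergency Management",
    hazard="Life-threatening flash flooding is occurring along",
    location="the Guadalupe River near Hunt and Ingram",
    timing="through 7:00 AM",
    guidance="Move to higher ground now. Do not drive through flooded roads.",
)
print(msg)
```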
“If you are receiving a warning that’s statewide or county wide, it can be difficult for some people to understand if they should act or evacuate,” says Juliette Murphy, CEO and co-founder of the flood forecasting company FloodMapp. “Or if a warning states that a river will reach 30 feet, that might not mean much to some people if they don’t have a hydrology understanding.”
Murphy’s company is now using its mapping tools to help state and federal agencies find dozens of people still missing since the July 4th floods. FloodMapp hadn’t worked with counties affected by the floods prior to this disaster, but Murphy says she’d like to work with local agencies in the future that want to improve their warning systems.
Kerr County is under scrutiny for lacking flood sirens, even though county commissioners had been talking about the need to upgrade its flood systems — including adding sirens — since at least 2016. The county sits in an area known as “flash flood alley” because of the way the hilly topography of the area heightens flood risk during storms. Sirens in neighboring communities have been credited with saving lives.
“If I were to envision a really good, robust warning system in flash flood alley, I would say that there would be sirens in these very rural, remote areas,” Sutton says.
Sirens can be critical for reaching people outdoors who may not have cell service and are otherwise hard to reach. Even so, sirens are no silver bullet. The sound doesn’t necessarily reach people indoors who are further from the riverbanks but still in harm’s way, and it doesn’t tell them what actions to take.
Along with sirens, Sutton says she’d recommend making sure communities are prepared with “call trees” in advance. That means people are physically picking up the phone; each person is responsible for calling three more people, and so on. “It’s the human touch,” Sutton says. In worst-case scenarios, that might include going out to pound on neighbors’ doors. And that human touch can be especially important for reaching someone who might be skeptical of a government agency sending an alert but might trust a friend or fellow church member, for example, or for those who speak a different language than what officials use.
Wireless emergency alerts are also critical; Sutton considers them the most powerful alerting system in the US because they don’t require people to opt in to get a message. There are also warning systems that people can opt in to, including CodeRed weather warnings. Kerr County used CodeRed to send warnings to people subscribed to that system, and audio recordings from disaster responders on July 4th have raised more questions about whether those messages were too delayed to keep people out of danger.
In an email to The Verge, a Kerr County spokesperson said the county is committed to “transparency” and a “full review” of the disaster response. State lawmakers start a special session next week and are expected to consider legislation to bolster flood warning systems and emergency communications. One Senate bill would let municipalities gather residents’ contact information to enroll them in text alerts that they could opt out of if they don’t want to receive them.
Disaster fatigue and Swiss cheese
Another concern is people opting out of notifications — particularly after a deluge of “Blue alerts,” which are sent after a law enforcement officer has been injured or killed. Frustrations flared on social media this month over a statewide Blue alert issued for someone suspected of involvement in the “serious injury” of a police officer at an Immigration and Customs Enforcement (ICE) detention facility in Alvarado, Texas. “Texas can’t adequately warn people about deadly floods, but it can immediately let me know that a cop got hurt 250 miles away from me,” one post with more than 20,000 likes on Bluesky says. The FCC has received thousands of complaints about the Blue alert system in Texas, CBS News reported in October of last year.
“Alert fatigue” is a concern if it pushes people to ignore warnings or opt out of receiving them altogether. That can be an issue during extreme weather if authorities include Blue alerts and extreme weather warnings in the same “imminent threat” category of wireless emergency alerts. Again, this can vary from locality to locality. “It’s really frustrating when they choose to send a Blue alert through an imminent threat channel,” Sutton says. To stop getting those pings about police officers, someone might opt out of the imminent threat category of wireless emergency alerts — but that means they would also stop getting other alerts in the same channel for weather emergencies.
“This is exactly what we don’t want to have happen, because when you turn it off you’re not going to get the message for that flash flood. So it’s really dangerous,” Sutton says.
Even so, we still don’t have data on who might have missed a lifesaving alert because of frustration with Blue alerts. Nor do we know the extent to which people are just ignoring notifications, or why. The number of public safety alerts sent in Texas has doubled since 2018 for a wide range of warnings, including Blue alerts, Silver alerts for missing elderly adults, Amber alerts for missing children, and more, the Houston Chronicle reports.
And when it comes to warning people about flash floods in particular, experts still stress the need to get warnings to people via every means possible. If someone misses a wireless emergency alert, there should be another way to reach them. There are likely going to be gaps when it comes to any single strategy for alerting people, as well as other complications that can impede the message getting out. (On July 4th, floodwaters rose in the dead of night — making it even harder to notify people as they slept.)
That’s why a “Swiss cheese” approach to warning people can be most effective in overcoming that last mile, explains Chris Vagasky, a meteorologist and manager of the Wisconsin Environmental Mesonet at the University of Wisconsin-Madison. (It’s similar to a model used to prevent the spread of disease.)
“You know you got slices of Swiss cheese and they’ve got holes in them. Nothing is ever perfect. But if you layer enough pieces of cheese, it reduces the risk because something might go through one hole, but then it gets blocked,” Vagasky says. “We always want people to have multiple ways of receiving warnings.”
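A back-of-envelope version of that layering argument, assuming (unrealistically) that channels miss people independently, shows why stacking channels helps. The miss rates below are invented purely for illustration.

```python
# Back-of-envelope version of the "Swiss cheese" argument: if each channel
# independently misses some fraction of people, the chance that *every*
# channel misses the same person is the product of the miss rates.
# The miss rates below are made up for illustration.

from math import prod

miss_rates = {
    "wireless emergency alert": 0.30,  # phone off, no signal, opted out
    "outdoor siren": 0.50,             # indoors or out of earshot
    "call tree / door knock": 0.40,    # no one reaches them in time
}

p_missed_by_all = prod(miss_rates.values())
print(f"Chance one person is missed by every channel: {p_missed_by_all:.0%}")  # 6%
```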
Ronnie Sheth, CEO, SENEN Group: Why now is the time for enterprise AI to ‘get practical’
Before you set sail on your AI journey, always check the state of your data – because if there is one thing likely to sink your ship, it is data quality.
Gartner estimates that poor data quality costs organisations an average of $12.9 million each year in wasted resources and lost opportunities. That’s the bad news. The good news is that organisations are increasingly understanding the importance of their data quality – and less likely to fall into this trap.
That’s the view of Ronnie Sheth, CEO of AI strategy, execution and governance firm SENEN Group. The company focuses on data and AI advisory, operationalisation and literacy, and Sheth notes she has been in the data and AI space ‘ever since [she] was a corporate baby’, so there is plenty of real-world experience behind the viewpoint. There is also plenty of success; Sheth notes that her company has a 99.99% client repeat rate.
“If I were to be very practical, the one thing I’ve noticed is companies jump into adopting AI before they’re ready,” says Sheth. Companies, she notes, will have an executive direction insisting they adopt AI, but without a blueprint or roadmap to accompany it. The result may be impressive user numbers, but with no measurable outcome to back anything up.
Even as recently as 2024, Sheth saw many organisations struggling because their data was ‘nowhere where it needed to be.’ “Not even close,” she adds. Now, the conversation has turned more practical and strategic. Companies are realising this, and coming to SENEN Group initially to get help with their data, rather than wanting to adopt AI immediately.
“When companies like that come to us, the first course of order is really fixing their data,” says Sheth. “The next course of order is getting to their AI model. They are building a strong foundation for any AI initiative that comes after that.
“Once they fix their data, they can build as many AI models as they want, and they can have as many AI solutions as they want, and they will get accurate outputs because now they have a strong foundation,” Sheth adds.
With breadth and depth of expertise, SENEN Group helps organisations right their course. Sheth cites the example of one customer who came to the firm wanting a data governance initiative. Ultimately, what was needed first was a data strategy – the why and how, the outcomes they were trying to achieve with their data – before adding governance and a roadmap for an operating model. “They’ve moved from raw data to descriptive analytics, moving into predictive analytics, and now we’re actually setting up an AI strategy for them,” says Sheth.
It is this emphasis on practical initiatives that will be the cornerstone of Sheth’s discussion at AI & Big Data Expo Global in London this week. “Now would be the time to get practical with AI, especially enterprise AI adoption, and not think about ‘look, we’re going to innovate, we’re going to do pilots, we’re going to experiment,’” says Sheth. “Now is not the time to do that. Now is the time to get practical, to get AI to value. This is the year to do that in the enterprise.”
Apptio: Why scaling intelligent automation requires financial rigour
Greg Holmes, Field CTO for EMEA at Apptio, an IBM company, argues that successfully scaling intelligent automation requires financial rigour.
The “build it and they will come” model of technology adoption often leaves a hole in the budget when applied to automation. Executives frequently find that successful pilot programmes do not translate into sustainable enterprise-wide deployments because initial financial modelling ignored the realities of production scaling.
“When we integrate FinOps capabilities with automation, we’re looking at a change from being very reactive on cost management to being very proactive around value engineering,” says Holmes.
This shifts the assessment criteria for technical leaders. Rather than waiting “months or years to assess whether things are getting value,” engineering teams can track resource consumption – such as cost per transaction or API call – “straight from the beginning.”
The unit economics of scaling intelligent automation
Innovation projects face a high mortality rate. Holmes notes that around 80 percent of new innovation projects fail, often because financial opacity during the pilot phase masks future liabilities.
“If a pilot demonstrates that automating a process saves, say, 100 hours a month, leadership thinks that’s really successful,” says Holmes. “But what it fails to track is that the pilot sometimes is running on over-provisioned infrastructure, so it looks like it performs really well. But you wouldn’t over-provision to that degree during a real production rollout.”
Moving that workload to production changes the calculus. The requirements for compute, storage, and data transfer increase. “API calls can multiply, exceptions and edge cases appear at volume that might have been out of scope for the pilot phase, and then support overheads just grow as well,” he adds.
To prevent this, organisations must track the marginal cost at scale. This involves monitoring unit economics, such as the cost per customer served or cost per transaction. If the cost per customer increases as the customer base grows, the business model is flawed.
Conversely, effective scaling should see these unit costs decrease. Holmes cites a case study from Liberty Mutual where the insurer was able to find around $2.5 million of savings by bringing in consumption metrics and “not just looking at labour hours that they were saving.”
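As a simple sketch of the unit-economics check Holmes describes — with entirely invented figures — the signal to watch is whether cost per transaction rises rather than falls as volume grows.

```python
# Sketch of the unit-economics check described above: cost per transaction
# should fall (or at least hold flat) as volume grows. Figures are invented.

monthly = [
    # (month, total_cost_usd, transactions)
    ("Jan", 12_000,  40_000),
    ("Feb", 18_000,  75_000),
    ("Mar", 30_000, 100_000),
]

prev_unit_cost = None
for month, cost, txns in monthly:
    unit_cost = cost / txns
    trend = ""
    if prev_unit_cost is not None and unit_cost > prev_unit_cost:
        trend = "  <- unit cost rising: scaling economics look flawed"
    print(f"{month}: ${unit_cost:.3f} per transaction{trend}")
    prev_unit_cost = unit_cost
```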
However, financial accountability cannot sit solely with the finance department. Holmes advocates for putting governance “back in the hands of the developers into their development tools and workloads.”
Integration with infrastructure-as-code tools like HashiCorp Terraform and GitHub allows organisations to enforce policies during deployment. Teams can spin up resources programmatically with immediate cost estimates.
“Rather than deploying things and then fixing them up, which gets into the whole whack-a-mole kind of problem,” Holmes explains, companies can verify they are “deploying the right things at the right time.”
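A generic sketch of that kind of deploy-time gate might look like the following; the resource types, prices, and budget are assumptions, and this is not Apptio’s or Terraform’s actual interface.

```python
# Generic sketch of a deploy-time cost gate: estimate the monthly cost of
# planned resources before they are created and fail the pipeline if the
# estimate exceeds a budget. Resource names, prices, and the budget are
# invented for illustration.

import sys

PRICE_PER_MONTH = {            # assumed unit prices
    "vm.small": 35.0,
    "vm.large": 280.0,
    "db.standard": 150.0,
}

planned_resources = [          # what the pipeline intends to create
    ("worker-1", "vm.small"),
    ("worker-2", "vm.small"),
    ("analytics-db", "db.standard"),
]

BUDGET = 400.0
estimate = sum(PRICE_PER_MONTH[kind] for _, kind in planned_resources)
print(f"Estimated monthly cost: ${estimate:.2f} (budget ${BUDGET:.2f})")

if estimate > BUDGET:
    sys.exit("Deployment blocked: estimated cost exceeds budget")
```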
When scaling intelligent automation, tension often simmers between the CFO, who focuses on return on investment, and the Head of Automation, who tracks operational metrics like hours saved.
“This translation challenge is precisely what TBM (Technology Business Management) and Apptio are designed to solve,” says Holmes. “It’s having a common language between technology and finance and with the business.”
The TBM taxonomy provides a standardised framework to reconcile these views. It maps technical resources (such as compute, storage, and labour) into IT towers and further up to business capabilities. This structure translates technical inputs into business outputs.
“I don’t necessarily know what goes into all the IT layers underneath it,” Holmes says, describing the business user’s perspective. “But because we’ve got this taxonomy, I can get a detailed bill that tells me about my service consumption and precisely which costs are driving it to be more expensive as I consume more.”
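A toy version of that roll-up — with invented labels and amounts — shows the mechanics: each raw cost item carries both an IT tower and a business capability tag, and totals are summed up for each view.

```python
# Sketch of the TBM-style roll-up: raw cost items are tagged with an IT
# tower and a business capability, then summed upward so the business sees
# a bill per capability rather than per server. Labels and amounts invented.

from collections import defaultdict

cost_items = [
    # (amount_usd, it_tower, business_capability)
    (4_000, "Compute",   "Claims processing"),
    (1_500, "Storage",   "Claims processing"),
    (2_200, "Compute",   "Customer onboarding"),
    (  800, "IT labour", "Customer onboarding"),
]

by_capability = defaultdict(float)
by_tower = defaultdict(float)
for amount, tower, capability in cost_items:
    by_tower[tower] += amount
    by_capability[capability] += amount

print(dict(by_tower))        # the IT view
print(dict(by_capability))   # the bill the business user sees
```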
Addressing legacy debt and budgeting for the long-term
Organisations burdened by legacy ERP systems face a binary choice: automation as a patch, or as a bridge to modernisation. Holmes warns that if a company is “just trying to mask inefficient processes and not redesign them,” they are merely “building up more technical debt.”
A total cost of ownership (TCO) approach helps determine the correct strategy. The Commonwealth Bank of Australia utilised a TCO model across 2,000 different applications – of various maturity stages – to assess their full lifecycle costs. This analysis included hidden costs such as infrastructure, labour, and the engineering time required to keep automation running.
“Just because something’s legacy doesn’t mean you have to retire it,” says Holmes. “Some of those legacy systems are worth maintaining just because the value is so good.”
In other cases, calculating the cost of the automation wrappers required to keep an old system functional reveals a different reality. “Sometimes when you add up the TCO approach, and you’re including all these automation layers around it, you suddenly realise, the real cost of keeping that old system alive is not just the old system, it’s those extra layers,” Holmes argues.
Avoiding sticker shock requires a budgeting strategy that balances variable costs with long-term commitments. While variable costs (OPEX) offer flexibility, they can fluctuate wildly based on demand and engineering efficiency.
Holmes advises that longer-term visibility enables better investment decisions. Committing to specific technologies or platforms over a multi-year horizon allows organisations to negotiate economies of scale and standardise architecture.
“Because you’ve made those longer term commitments and you’ve standardised on different platforms and things like that, it makes it easier to build the right thing out for the long term,” Holmes says.
Combining tight management of variable costs with strategic commitments supports enterprises in scaling intelligent automation without the volatility that often derails transformation.
IBM is a key sponsor of this year’s Intelligent Automation Conference Global in London on 4-5 February 2026. Greg Holmes and other experts will be sharing their insights during the event. Be sure to check out the day one panel session, Scaling Intelligent Automation Successfully: Frameworks, Risks, and Real-World Lessons, to hear more from Holmes and swing by IBM’s booth at stand #362.
See also: Klarna backs Google UCP to power AI agent payments

FedEx tests how far AI can go in tracking and returns management
FedEx is using AI to change how package tracking and returns work for large enterprise shippers. For companies moving high volumes of goods, tracking no longer ends when a package leaves the warehouse. Customers expect real-time updates, flexible delivery options, and returns that do not turn into support tickets or delays.
That pressure is pushing logistics firms to rethink how tracking and returns operate at scale, especially across complex supply chains.
This is where artificial intelligence is starting to move from pilot projects into daily operations.
FedEx plans to roll out AI-powered tracking and returns tools designed for enterprise shippers, according to a report by PYMNTS. The tools are aimed at automating routine customer service tasks, improving visibility into shipments, and reducing friction when packages need to be rerouted or sent back.
Rather than focusing on consumer-facing chatbots, the effort centres on operational workflows that sit behind the scenes. These are the systems enterprise customers rely on to manage exceptions, returns, and delivery changes without manual intervention.
How FedEx is applying AI to package tracking
Traditional tracking systems tell customers where a package is and when it might arrive. AI-powered tracking goes a step further, using historical delivery data, traffic patterns, weather conditions, and network constraints to flag potential delays before they happen.
According to the PYMNTS report, FedEx’s AI tools are designed to help enterprise shippers anticipate issues earlier in the delivery process. Instead of reacting to missed delivery windows, shippers may be able to reroute packages or notify customers ahead of time.
For businesses that ship thousands of parcels per day, that shift matters. Small improvements in prediction accuracy can reduce support calls, lower refund rates, and improve customer trust, particularly in retail, healthcare, and manufacturing supply chains.
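Nothing has been published about how FedEx scores shipments, but a toy rule-based version of the idea — with invented features, weights, and threshold — illustrates the kind of early flagging described above.

```python
# Toy sketch of flagging at-risk shipments before a delivery window is
# missed, by scoring a few signals the article mentions (history, weather,
# network load). The features, weights, and threshold are invented; this is
# not FedEx's model.

def delay_risk(lane_late_rate: float, severe_weather: bool, hub_load: float) -> float:
    """Return a 0-1 risk score from simple weighted signals."""
    score = 0.5 * lane_late_rate + 0.3 * (1.0 if severe_weather else 0.0) + 0.2 * hub_load
    return min(score, 1.0)

shipments = {
    "PKG-001": delay_risk(lane_late_rate=0.10, severe_weather=False, hub_load=0.40),
    "PKG-002": delay_risk(lane_late_rate=0.35, severe_weather=True,  hub_load=0.90),
}

for pkg, risk in shipments.items():
    if risk >= 0.5:
        print(f"{pkg}: high delay risk ({risk:.2f}) -> reroute or notify customer early")
```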
This approach also reflects a broader trend in enterprise software, in which AI is being embedded into existing systems rather than introduced as standalone tools. The goal is not to replace logistics teams, but to minimise the number of manual decisions they need to make.
Returns as an operational problem, not a customer issue
Returns are one of the most expensive parts of logistics. For enterprise shippers, particularly those in e-commerce, returns affect warehouse capacity, inventory planning, and transportation costs.
According to PYMNTS, FedEx’s AI-enabled returns tools aim to automate parts of the returns process, including label generation, routing decisions, and status updates. Companies that use AI to determine the most efficient return path may be able to reduce delays and avoid sending items back to the wrong facility.
This is less about convenience and more about operational discipline. Returns that sit idle or move through the wrong channel create cost and uncertainty across the supply chain. AI systems trained on past return patterns can help standardise decisions that were previously handled case by case.
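As a hypothetical sketch of that routing decision — the facilities, categories, and costs are invented — the system simply picks the cheapest facility that can accept a given category of return.

```python
# Sketch of the routing decision described above: send a return to the
# cheapest facility that can actually accept that category of item.
# Facility names, categories, and costs are invented.

facilities = [
    # (name, accepted_categories, shipping_cost_usd)
    ("Dallas returns hub",    {"apparel", "electronics"}, 6.80),
    ("Memphis refurb centre", {"electronics"},            9.20),
    ("Chicago liquidation",   {"apparel"},                 5.40),
]

def best_return_path(item_category: str):
    options = [(cost, name) for name, cats, cost in facilities if item_category in cats]
    if not options:
        return None  # fall back to manual handling
    cost, name = min(options)
    return name, cost

print(best_return_path("electronics"))  # ('Dallas returns hub', 6.8)
```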
For enterprise customers, this type of automation supports scale. As return volumes fluctuate, especially during peak seasons, systems that adjust automatically reduce the need for temporary staffing or manual overrides.
What FedEx’s AI tracking approach says about enterprise adoption
What stands out in FedEx’s approach is how narrowly focused the AI use case is. There are no broad claims about transformation or reinvention. The emphasis is on reducing friction in processes that already exist.
This mirrors how other large organisations are adopting AI internally. In a separate context, Microsoft has described a similar pattern in its own rollout, with AI tools introduced gradually and accompanied by clear limits, governance rules, and feedback loops.
While Microsoft’s case focused on knowledge work and FedEx’s on logistics operations, the underlying lesson is the same. AI adoption tends to work best when applied to specific activities with measurable results rather than broad promises of efficiency.
For logistics firms, those advantages include fewer delivery exceptions, lower return handling costs, and better coordination between shipping partners and enterprise clients.
What this signals for enterprise customers
For end-user companies, FedEx’s move signals that logistics providers are investing in AI as a way to support more complex shipping demands. As supply chains become more distributed, visibility and predictability become harder to maintain without automation.
AI-driven tracking and returns could also change how businesses measure logistics performance. Companies may focus less on delivery speed and more on how quickly issues are recognised and resolved.
That shift could influence procurement decisions, contract structures, and service-level agreements. Enterprise customers may start asking not just where a shipment is, but how well a provider anticipates problems.
FedEx’s plans reflect a quieter phase of enterprise AI adoption. The focus is less on experimentation and more on integration. These systems are not designed to draw attention but to reduce noise in operations that customers only notice when something goes wrong.
(Photo by Liam Kevan)
See also: PepsiCo is using AI to rethink how factories are designed and updated