Google’s healthcare AI made up a body part — what happens when doctors don’t notice?

Scenario: A radiologist is looking at your brain scan and flags an abnormality in the basal ganglia. It’s an area of the brain that helps you with motor control, learning, and emotional processing. The name sounds a bit like another part of the brain, the basilar artery, which supplies blood to your brainstem — but the radiologist knows not to confuse them. A stroke or abnormality in one is typically treated in a very different way than in the other.
Now imagine your doctor is using an AI model to do the reading. The model says you have a problem with your “basilar ganglia,” conflating the two names into an area of the brain that does not exist. You’d hope your doctor would catch the mistake and double-check the scan. But there’s a chance they don’t.
Though not in a hospital setting, the “basilar ganglia” is a real error that was served up by Google’s healthcare AI model, Med-Gemini. A 2024 research paper introducing Med-Gemini included the hallucination in a section on head CT scans, and nobody at Google caught it, in either that paper or a blog post announcing it. When Bryan Moore, a board-certified neurologist and researcher with expertise in AI, flagged the mistake, he tells The Verge, the company quietly edited the blog post to fix the error with no public acknowledgement — and the paper remained unchanged. Google calls the incident a simple misspelling of “basal ganglia.” Some medical professionals say it’s a dangerous error and an example of the limitations of healthcare AI.
Med-Gemini is a collection of AI models that can summarize health data, create radiology reports, analyze electronic health records, and more. The pre-print research paper, meant to demonstrate its value to doctors, highlighted a series of abnormalities in scans that radiologists “missed” but AI caught. One of its examples was that Med-Gemini diagnosed an “old left basilar ganglia infarct.” But as established, there’s no such thing.
Fast-forward about a year, and Med-Gemini’s trusted tester program is no longer accepting new entrants — likely a sign that the models are being tested in real-life medical scenarios on a pilot basis. It’s still an early trial, but the stakes of AI errors are getting higher. Med-Gemini isn’t the only model making them. And it’s not clear how doctors should respond.
“What you’re talking about is super dangerous,” Maulin Shah, chief medical information officer at Providence, a healthcare system serving 51 hospitals and more than 1,000 clinics, tells The Verge. He added, “Two letters, but it’s a big deal.”
In a statement, Google spokesperson Jason Freidenfelds told The Verge that the company partners with the medical community to test its models and that Google is transparent about their limitations.
“Though the system did spot a missed pathology, it used an incorrect term to describe it (basilar instead of basal). That’s why we clarified in the blog post,” Freidenfelds said. He added, “We’re continually working to improve our models, rigorously examining an extensive range of performance attributes — see our training and deployment practices for a detailed view into our process.”
A ‘common mis-transcription’
On May 6th, 2024, Google debuted its newest suite of healthcare AI models with fanfare. It billed “Med-Gemini” as a “leap forward” with “substantial potential in medicine,” touting its real-world applications in radiology, pathology, dermatology, ophthalmology, and genomics.
The models trained on medical images, like chest X-rays, CT slices, pathology slides, and more, using de-identified medical data with text labels, according to a Google blog post. The company said the AI models could “interpret complex 3D scans, answer clinical questions, and generate state-of-the-art radiology reports” — even going as far as to say they could help predict disease risk via genomic information.
Moore saw the authors’ promotions of the paper early on and took a look. He caught the mistake and was alarmed, flagging the error to Google on LinkedIn and contacting authors directly to let them know.
The company, he saw, quietly switched out evidence of the AI model’s error. It updated the debut blog post phrasing from “basilar ganglia” to “basal ganglia” with no other differences and no change to the paper itself. In communication viewed by The Verge, Google Health employees responded to Moore, calling the mistake a typo.
In response, Moore publicly called out Google for the quiet edit. This time the company restored the original model output in the blog post and added a clarifying caption, writing that “‘basilar’ is a common mis-transcription of ‘basal’ that Med-Gemini has learned from the training data, though the meaning of the report is unchanged.”
Google acknowledged the edit in a public LinkedIn comment, again downplaying the error as a “misspelling.”
“Thank you for noting this!” the company said. “We’ve updated the blog post figure to show the original model output, and agree it is important to showcase how the model actually operates.”
As of this article’s publication, the research paper itself still contains the error with no updates or acknowledgement.
Whether it’s a typo, a hallucination, or both, errors like these raise much larger questions about the standards healthcare AI should be held to, and when it will be ready to be released into public-facing use cases.
“The problem with these typos or other hallucinations is I don’t trust our humans to review them”
“The problem with these typos or other hallucinations is I don’t trust our humans to review them, or certainly not at every level,” Shah tells The Verge. “These things propagate. We found in one of our analyses of a tool that somebody had written a note with an incorrect pathologic assessment — pathology was positive for cancer, they put negative (inadvertently) … But now the AI is reading all those notes and propagating it, and propagating it, and making decisions off that bad data.”
Errors with Google’s healthcare models have persisted. Two months ago, Google debuted MedGemma, a newer and more advanced healthcare model that specializes in AI-based radiology results, and medical professionals found that phrasing questions differently could change its answers and, in some cases, lead to inaccurate outputs.
In one example, Dr. Judy Gichoya, an associate professor in the department of radiology and informatics at Emory University School of Medicine, asked MedGemma about a problem with a patient’s rib X-ray with a lot of specifics — “Here is an X-ray of a patient [age] [gender]. What do you see in the X-ray?” — and the model correctly diagnosed the issue. When the system was shown the same image but with a simpler question — “What do you see in the X-ray?” — the AI said there weren’t any issues at all. “The X-ray shows a normal adult chest,” MedGemma wrote.
In another example, Gichoya asked MedGemma about an X-ray showing pneumoperitoneum, or gas under the diaphragm. The first time, the system answered correctly. But with slightly different query wording, the AI hallucinated multiple types of diagnoses.
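Gichoya’s examples amount to a prompt-sensitivity test: same image, different wording, different answer. Here is a minimal sketch in Python of how such a check might be automated before a result reaches a clinician; the `query_model` callable is a hypothetical stand-in for whatever inference interface is in use, not MedGemma’s actual API.

```python
# Sketch: probe an image QA model for prompt-phrasing sensitivity.
# `query_model` is a hypothetical stand-in for the real inference call.

from typing import Callable

def phrasing_sensitivity(
    query_model: Callable[[bytes, str], str],
    image: bytes,
    prompts: list[str],
) -> dict[str, str]:
    """Ask the same question several ways and collect the answers side by side."""
    return {prompt: query_model(image, prompt) for prompt in prompts}

def answers_disagree(answers: dict[str, str]) -> bool:
    """Crude disagreement check: flag if the answers are not all identical.
    A real review would compare extracted findings, not raw strings."""
    return len({a.strip().lower() for a in answers.values()}) > 1

# Example usage with the two phrasings described above:
# answers = phrasing_sensitivity(query_model, xray_bytes, [
#     "Here is an X-ray of a patient [age] [gender]. What do you see in the X-ray?",
#     "What do you see in the X-ray?",
# ])
# if answers_disagree(answers):
#     print("Answer changed with phrasing -- route to a radiologist for review.")
```

If the answers diverge the way Gichoya saw, that alone is a signal the output needs human review rather than direct use.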
“The question is, are we going to actually question the AI or not?” Shah says. Even if an AI system is just listening to a doctor-patient conversation to generate clinical notes, or translating a doctor’s own shorthand, he says, those tasks carry hallucination risks that could be even more dangerous, because medical professionals may be less likely to double-check AI-generated text, especially when it’s usually accurate.
“If I write ‘ASA 325 mg qd,’ it should change it to ‘Take an aspirin every day, 325 milligrams,’ or something that a patient can understand,” Shah says. “You do that enough times, you stop reading the patient part. So if it now hallucinates — if it thinks the ASA is the anesthesia standard assessment … you’re not going to catch it.”
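Shah’s aspirin example is, at its core, an abbreviation-expansion step, and the failure he worries about is the system silently picking the wrong expansion. A minimal sketch of a more defensive version, with a deliberately tiny and hypothetical abbreviation table, would expand only unambiguous shorthand and flag everything else for a human:

```python
# Sketch: expand clinical shorthand only when the expansion is unambiguous.
# The table below is a tiny hypothetical illustration, not a real clinical vocabulary.

EXPANSIONS = {
    "asa": ["aspirin"],                                 # unambiguous in this toy table
    "qd": ["every day"],
    "ra": ["rheumatoid arthritis", "right atrium"],     # ambiguous: needs review
}

def expand_token(token: str) -> tuple[str, bool]:
    """Return (expansion, needs_review). Ambiguous or unknown tokens are
    passed through unchanged instead of being guessed."""
    options = EXPANSIONS.get(token.lower())
    if options is None:
        return token, False        # not shorthand we know; leave as-is
    if len(options) == 1:
        return options[0], False   # safe to expand automatically
    return token, True             # ambiguous: flag for a human

def expand_order(text: str) -> tuple[str, list[str]]:
    words, flagged = [], []
    for token in text.split():
        expansion, needs_review = expand_token(token)
        words.append(expansion)
        if needs_review:
            flagged.append(token)
    return " ".join(words), flagged

# expand_order("ASA 325 mg qd") -> ("aspirin 325 mg every day", [])
```

The point is not the table itself but the default: when the mapping is uncertain, surface the original shorthand for review instead of making a confident guess that nobody double-checks.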
Shah says he’s hoping the industry moves toward augmentation of healthcare professionals instead of replacing clinical aspects. He’s also looking to see real-time hallucination detection in the AI industry — for instance, one AI model checking another for hallucination risk and either not showing those parts to the end user or flagging them with a warning.
“In healthcare, ‘confabulation’ happens in dementia and in alcoholism where you just make stuff up that sounds really accurate — so you don’t realize someone has dementia because they’re making it up and it sounds right, and then you really listen and you’re like, ‘Wait, that’s not right’ — that’s exactly what these things are doing,” Shah says. “So we have these confabulation alerts in our system that we put in where we’re using AI.”
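Shah doesn’t detail how Providence’s confabulation alerts work, but even a simple second pass that screens a generated report against a reference vocabulary would have tripped over “basilar ganglia.” The sketch below is a toy illustration of that idea using Python’s standard library; the term list is a tiny hypothetical sample, and a real system would use a curated clinical terminology.

```python
# Sketch: a toy "second reader" that flags report phrases which are close to,
# but not exactly, a known term. The vocabulary is a tiny hypothetical sample.

import difflib
import re

KNOWN_TERMS = ["basal ganglia", "basilar artery", "brainstem", "infarct"]

def flag_suspect_phrases(report: str, vocabulary: list[str]) -> list[tuple[str, str]]:
    """Return (suspect phrase, closest known term) pairs for human review."""
    words = re.findall(r"[a-z]+", report.lower())
    flags = []
    for phrase in (" ".join(pair) for pair in zip(words, words[1:])):
        if phrase in vocabulary:
            continue  # exact known term: nothing to flag
        close = difflib.get_close_matches(phrase, vocabulary, n=1, cutoff=0.8)
        if close:
            flags.append((phrase, close[0]))
    return flags

# flag_suspect_phrases("old left basilar ganglia infarct", KNOWN_TERMS)
# -> [("basilar ganglia", "basal ganglia")]   flagged instead of shown as-is
```

A check like this would not catch every hallucination, which is why Shah also wants model-level detection, but it shows how little it takes to keep a nonexistent body part from reaching a report unflagged.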
Gichoya, who leads Emory’s Healthcare AI Innovation and Translational Informatics lab, says she’s seen newer versions of Med-Gemini hallucinate in research environments, just like most large-scale AI healthcare models.
“Their nature is that [they] tend to make up things, and it doesn’t say ‘I don’t know,’ which is a big, big problem for high-stakes domains like medicine,” Gichoya says.
She added, “People are trying to change the workflow of radiologists to come back and say, ‘AI will generate the report, then you read the report,’ but that report has so many hallucinations, and most of us radiologists would not be able to work like that. And so I see the bar for adoption being much higher, even if people don’t realize it.”
Dr. Jonathan Chen, associate professor at the Stanford School of Medicine and the director for medical education in AI, searched for the right adjective — trying out “treacherous,” “dangerous,” and “precarious” — before settling on how to describe this moment in healthcare AI. “It’s a very weird threshold moment where a lot of these things are being adopted too fast into clinical care,” he says. “They’re really not mature.”
On the “basilar ganglia” issue, he says, “Maybe it’s a typo, maybe it’s a meaningful difference — all of those are very real issues that need to be unpacked.”
Some parts of the healthcare industry are desperate for help from AI tools, but the industry needs to have appropriate skepticism before adopting them, Chen says. Perhaps the biggest danger is not that these systems are sometimes wrong — it’s how credible and trustworthy they sound when they tell you an obstruction in the “basilar ganglia” is a real thing, he says. Plenty of errors slip into human medical notes, but AI can actually exacerbate the problem, thanks to a well-documented phenomenon known as automation bias, where complacency leads people to miss errors in a system that’s right most of the time. Even AI checking an AI’s work is still imperfect, he says. “When we deal with medical care, imperfect can feel intolerable.”
“Maybe other people are like, ‘If we can get as high as a human, we’re good enough.’ I don’t buy that for a second”
“You know the driverless car analogy, ‘Hey, it’s driven me so well so many times, I’m going to go to sleep at the wheel.’ It’s like, ‘Whoa, whoa, wait a minute, when your or somebody else’s life is on the line, maybe that’s not the right way to do this,’” Chen says, adding, “I think there’s a lot of help and benefit we get, but also very obvious mistakes will happen that don’t need to happen if we approach this in a more deliberate way.”
Requiring AI to work perfectly without human intervention, Chen says, could mean “we’ll never get the benefits out of it that we can use right now. On the other hand, we should hold it to as high a bar as it can achieve. And I think there’s still a higher bar it can and should reach for.” Getting second opinions from multiple, real people remains vital.
That said, Google’s paper had more than 50 authors, and it was reviewed by medical professionals before publication. It’s not clear exactly why none of them caught the error; Google did not directly answer a question about why it slipped through.
Dr. Michael Pencina, chief data scientist at Duke Health, tells The Verge he’s “much more likely to believe” the Med-Gemini error is a hallucination than a typo, adding, “The question is, again, what are the consequences of it?” The answer, to him, rests in the stakes of making an error — and with healthcare, those stakes are serious. “The higher-risk the application is and the more autonomous the system is … the higher the bar for evidence needs to be,” he says. “And unfortunately we are at a stage in the development of AI that is still very much what I would call the Wild West.”
“In my mind, AI has to have a way higher bar of error than a human,” Providence’s Shah says. “Maybe other people are like, ‘If we can get as high as a human, we’re good enough.’ I don’t buy that for a second. Otherwise, I’ll just keep my humans doing the work. With humans I know how to go and talk to them and say, ‘Hey, let’s look at this case together. How could we have done it differently?’ What are you going to do when the AI does that?”
Valve founder Gabe Newell just purchased a superyacht company

In a post about the acquisition, Oceanco says Newell’s interest in the brand comes from a “lifelong fascination with the sea” and “a deep respect for the people who live and work on it.” Oceanco is based in the Netherlands and has changed leadership a few times since its founding in 1987, with private investor Mohammed Al Barwani helming the company for the past 15 years before Newell came along.
As for what Newell plans to do now that he’s the head of a big superyacht builder, Oceanco puts it pretty simply: “His first decision? Leave the team alone. Seriously. Oceanco has vision and integrity, and a culture that actually works. Gabe doesn’t want to fix it, he wants to fuel it.”
Sengled’s downfall shows the peril of relying on cloud connections for smart home control

After repeatedly leaving customers without smart control of their lights, Sengled has been booted from Amazon’s Works With Alexa program. As first reported by TechHive, beginning August 1st, Sengled’s Alexa skill for controlling its line of LED lights, plugs, switches, and sensors with your voice and routines is no longer available.
In a statement to The Verge explaining the decision, Amazon spokesperson Lauren Raemhild said, “We hold a high bar for the Alexa experience. Sengled has experienced a series of prolonged outages over the past few months that have not been resolved, preventing customers from being able to use Sengled’s Alexa skill to control their light bulbs.”
The future doesn’t look bright for Sengled, which has been silent since the problems started appearing earlier this summer. There appears to have been no communication to customers from the company (Amazon did reach out to its customers about the outages), and no indication of these issues on its website. Repeated attempts by The Verge to contact Sengled have been met with no response.
If your Sengled bulbs were Wi-Fi, you’re out of luck.
There is some good news. If you own Sengled bulbs that use Zigbee, BLE Mesh, or Matter, rather than Wi-Fi, they can still work with Alexa by bypassing Sengled’s spotty servers and connecting to a compatible Echo speaker or Eero Wi-Fi router (this may require setting them up again). Another option is to connect Zigbee bulbs to third-party platform hubs that support the protocol, such as Home Assistant, Hubitat, or the Aeotec SmartThings hub.
But if your bulbs were Sengled’s Wi-Fi ones, you’re out of luck. These won’t connect to Alexa, although they will still work with Sengled’s app, for as long as Sengled’s servers are still running. Users have started reporting problems there, too. All of which goes to show that relying on cloud services to turn your lights on is a fragile solution.
This is a story we’ve seen too often in the smart home. Just last month, Belkin shuttered its WeMo smart home business, and the smart home graveyard is littered with other examples: iHome, Revolv, Staples Connect, Lowes’ Iris, Best Buy’s Insignia, and more.
A common thread with these shutdowns is that the products relied on cloud servers. At one time, it was easier and less expensive for a company to develop a cloud-based controller than a local system, since cloud control doesn’t require a hub or bridge and can be simpler to set up and use.
However, companies have to maintain those servers, as well as API connections to smart home platforms and voice assistants like Alexa and Google Assistant, which can be costly and resource-intensive. When the business model no longer pans out, history shows that companies simply shut the service down if they can’t sell it.
This brings me to my best piece of advice to anyone buying a smart home device today, especially something as integral as lighting: make sure it has the option of local control. That way, if the company goes under or stops providing the service you signed up for, then your device will still keep working (in some fashion). Plus, locally controlled devices tend to be faster, as they don’t have to wait for a response from a server.
Relying on cloud services to turn your lights on is a fragile solution.
As noted, some Sengled bulbs don’t rely on a cloud connection and instead can work locally in your home. Thanks to a connection via local protocols like Apple’s HomeKit or Zigbee, some products from those companies listed above also still work, even though their servers are gone.
That’s one of the reasons why the new Matter standard is so crucial to the smart home. While it has had its problems, Matter is built on the foundations of HomeKit, Zigbee, and other technologies. It’s an entirely local protocol, communicating to a Matter controller (hub) in your home, not to a company’s cloud.
While Matter ecosystems such as Apple Home, Google Home, Amazon Alexa, and Home Assistant can connect to the cloud to give control when you’re away from home and enable other features like voice assistants, that’s a layer on top of Matter. If a device like a smart bulb supports Matter, either over Wi-Fi or Thread, you don’t need the internet to turn on the lights. And, if the manufacturer’s server dies, your device won’t.
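The difference is easy to see in code. The short Python sketch below illustrates the fallback order the article describes, local first, cloud as an optional layer; the device address, endpoints, and payloads are hypothetical placeholders for illustration, not any vendor’s or Matter’s actual API.

```python
# Sketch: prefer local control; treat the vendor cloud as an optional extra.
# The address, endpoints, and payload below are hypothetical placeholders.

import requests

LOCAL_DEVICE = "http://192.168.1.42"                      # example LAN address for the bulb
VENDOR_CLOUD = "https://cloud.example.com/api/bulbs/42"   # hypothetical cloud endpoint

def turn_on_light() -> str:
    # 1. Local path: keeps working even if the manufacturer's servers disappear.
    try:
        requests.post(f"{LOCAL_DEVICE}/state", json={"on": True}, timeout=2)
        return "switched on locally"
    except requests.RequestException:
        pass
    # 2. Cloud path: useful for remote access and voice assistants,
    #    but basic control shouldn't depend on it.
    try:
        requests.post(VENDOR_CLOUD, json={"on": True}, timeout=5)
        return "switched on via vendor cloud"
    except requests.RequestException:
        return "unreachable: no local interface and the cloud is down"
```

A Sengled Wi-Fi bulb effectively only has the second branch, which is why it stops responding the moment the company’s servers falter.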
Matter isn’t the only option here. Devices that work over Zigbee, Z-Wave, BLE and BLE Mesh, and local Wi-Fi also offer local control. But Matter’s standardization, wide industry support, and use of non-proprietary, IP-based networking over Wi-Fi, Ethernet, and Thread broaden its overall compatibility and should make it more futureproof.
The situation with Sengled is just the latest reminder that for a truly reliable smart home, look for local control. While the cloud offers benefits, it should be part of your solution for a smarter home, not the only one.
Developed by Apple, Amazon, Google, Samsung, and others, Matter is an open-source, IP-based connectivity software layer for smart home devices. It works over Wi-Fi, Ethernet, and Thread.
Thread is a low-power wireless mesh protocol. It operates on the same 2.4GHz spectrum as Zigbee and is designed for low-power devices such as sensors, light bulbs, plugs, and shades. Because Thread is IP-based, Thread devices can communicate directly with each other and, through a Thread Border Router, with the internet and other networks.
Today, Matter supports most of the main device types in the home, including lighting, thermostats, locks, robot vacuums, refrigerators, dishwashers, dryers, ovens, smoke alarms, air quality monitors, EV chargers, and more.
A smart home gadget with the Matter logo can be set up and used with any Matter-compatible ecosystem via a Matter controller and controlled by more than one ecosystem with a feature called multi-admin.
Amazon Alexa, Google Home, Samsung SmartThings, Apple Home, Home Assistant, Ikea, and Aqara are among the well-known smart home companies supporting Matter, along with hundreds of device manufacturers.
IGN hit by layoffs as parent company Ziff Davis cuts costs

“The company has told us that the reason for this layoff stems from a Ziff Davis mandate to cut costs despite several quarters in a row of year-over-year revenue increases, to which IGN Entertainment responded by coming for our members’ jobs,” the guild says. “This is perplexing to us, as we are told again and again that IGN Entertainment has had a tremendously successful year thus far thanks to their hard work.”
An employee in the engineering department was also laid off, Hunter Paniagua, a representative of the Pacific Media Workers Guild, which represents the IGN Creators Guild, tells The Verge.
Ziff Davis didn’t immediately reply to a request for comment. Last year, across all of Ziff Davis, the company employed approximately 3,800 people, according to an SEC filing.
Last year, IGN acquired the website portfolio of Gamer Network, which includes publications like Eurogamer, Gamesindustry.biz, and Rock Paper Shotgun. Terms of that deal weren’t disclosed at the time.
“The company has not responded to the union’s questions about whether its budget for future acquisitions is being reconsidered as a cost-saving measure alongside these other apparently necessary personnel cuts,” the IGN Creators Guild says.