The 2025 TV Shootout went down over the weekend, and the results are shocking: yes, the Sony Bravia 8 II won the overall competition and my personal award for silliest name, but the LG G5 came in last place by a huge margin. I was one of the judges, and I think I have a sense of what’s going on.
Inside the LG G5’s shocking last-place finish at the 2025 TV Shootout
If you’re not familiar, the TV Shootout is an annual event hosted by Value Electronics, a boutique and high-end home theater store started by Robert and Wendy Zohn in 1998. They’ve been holding the event for 21 years now, and Robert proudly begins the occasion by holding up his framed registered trademarks for “TV Shootout” and “King of TV,” which is the title bestowed on the winner. I’ve been following the results for years, so it was a real thrill when Robert asked me to judge last year and equally exciting when he asked me back again this year.
(As Vergecast and Decoder listeners know, I’m out on parental leave for a few months, but Value Electronics is 15 minutes away from my house and staring at TVs in a dark room for several hours with other display nerds is my personal heaven, so I made a tiny exception.)
The event is pretty straightforward: the flagship 65-inch OLED TVs from Sony, LG, Panasonic, and Samsung were each professionally calibrated as closely as possible to reference standards by Dwayne Davis, a professional ISF calibrator familiar to AV forum nerds as D-Nice. The TVs (and MSRP) this year were:
- LG OLED65G5WUA: $3,399.99
- Panasonic TV65Z95BP: $3,199.99
- Samsung QN65S95FAFXZA: $3,299.99
- Sony K-65XR80M2: $3,499.99
Robert had asked many more manufacturers to participate, and most declined, knowing they could not compete. He also excluded mini LED TVs this year after they didn’t stack up to the OLEDs last year; he plans to have a separate shootout for those later.
The Shootout judges were all professional display experts who work in and around the film industry. Many of them have been judging the Shootout for years now. They were:
- Ilya Akiyoshi, a cinematographer who’s worked on The White Lotus and Captain America: Civil War
- Todd Anderson, a certified THX calibrator and host of the Home Theater News Review podcast
- Chris Boylan, an ISF-certified calibrator and editor-at-large of eCoustics
- Jason Dustal, an ISF calibration instructor and co-chair of the CEDIA standards committee
- Jeffrey Hagerman, a cinematographer and colorist
- Cecil Meade, an ISF-certified calibrator known as ClassyTech on the AV forums
- John Reformato, an ISF-certified calibrator
- Mike Renna, an ISF-certified calibrator
- Richard Drutman, a filmmaker
- David Mackenzie, CEO of Fidelity in Motion, a compression and mastering company
- And, of course, me
The rest of the room was filled with engineers and marketing folks from Sony, LG, and Samsung, several YouTubers, and various other display nerds, all paying close attention to the judging and the differences between the displays.
The judges were asked to objectively evaluate how closely the images on each set matched a pair of $43,000 Sony BVM-HX3110 professional reference monitors across a number of categories in a very dark room, using both test patterns and real content delivered from a Panasonic Blu-ray player, a Kaleidescape streaming box, and an Apple TV, all switched by an AVPro Edge 8×8 HDMI matrix and delivered over Bullet Train optical HDMI cables.
The closer the image was to those BVM reference displays, the higher the score, and the further from the reference, the lower the score. There were categories in which some TVs might have looked subjectively better than the reference displays, particularly in dark scenes where all the TVs tended to boost shadow detail to be more visible. But the judges were instructed to give lower scores for deviating from the reference in either direction. We were also instructed not to compare the TVs to one another, only to the reference monitors.
It was only the final category, “bright room out of the box,” that was totally subjective, and in which we were allowed to compare the TVs to each other. As the name suggests, the shades were opened in the room, and the TVs were set to uncalibrated filmmaker modes with energy-saving features turned off. More on this in a moment.
As ever, this means the Shootout ultimately delivers a very specific kind of winner: the TV that can be most closely calibrated to match an expensive professional reference display when viewed in a dark room. We didn’t look at anything else at all: not gaming features, number of HDMI inputs, operating systems, or even Dolby Vision support (which the Samsung does not have). This whole thing was about the limits of picture quality and picture quality alone. There are a lot of reasons you might pick any of these TVs that have nothing to do with how closely they can be calibrated to match a reference display, but that’s not what the Shootout is about.
It’s a big upgrade year for OLED TVs: Panasonic is back in the US market with the Z95B, and there are new panel technologies in the mix. LG and Panasonic are using tandem OLED panels for the first time, while Sony and Samsung are using new, brighter QD-OLED panels. (You can pretty easily surmise that Samsung is providing the QD-OLEDs and LG is behind the tandems, but none of the manufacturers will confirm anything.)
The underlying commonality of the panels means the Shootout really stresses the image processing differences between the manufacturers, and the results were fascinating. Panasonic had an incredibly strong showing, coming in first on the HDR tests and third overall by only a hair. Sony won the King of TV title for the seventh year in a row, which will do nothing to quell critics who say that measuring how close everything can come to a Sony reference display means Sony will always win. But the Samsung was a very close second, and to my eye, it only really fell behind because Samsung cannot help itself when it comes to colors — everything was generally a little more saturated and vibrant than the reference display.
SDR Voting Categories
| Manufacturer | Contrast / Grayscale | Color | Processing | Bright Living Room | Overall Average |
|---|---|---|---|---|---|
| LG OLED65G5WUA | 3.69 | 3.84 | 3.31 | 4.06 | 3.68 |
| Panasonic TV65Z95BP | 3.84 | 3.97 | 3.78 | 4.25 | 3.92 |
| Samsung QN65S95FAFXZA | 4.38 | 3.88 | 3.66 | 4.19 | 4.00 |
| Sony K-65XR80M2 | 4.41 | 3.84 | 4.22 | 4.19 | 4.16 |
HDR Voting Categories
| Manufacturer | Dynamic Range / EOTF Accuracy | Color | Processing | Bright Living Room | Overall Average |
|---|---|---|---|---|---|
| LG OLED65G5WUA | 3.41 | 2.84 | 3.34 | 3.94 | 3.30 |
| Panasonic TV65Z95BP | 4.03 | 4.00 | 3.97 | 3.88 | 3.98 |
| Samsung QN65S95FAFXZA | 3.88 | 4.13 | 3.72 | 4.38 | 3.97 |
| Sony K-65XR80M2 | 3.94 | 4.03 | 3.53 | 4.19 | 3.88 |
The shocker was the dismal showing by the LG G5, a hotly anticipated set because of that new tandem OLED panel. There’s no other way to say it: the G5 basically failed several of the tests, showing the wrong colors on some of the linearity test patterns, big posterization artifacts in dark scenes, a slight green cast that kept reappearing, and an overall tendency to push color and brightness in dark scenes in ways that didn’t take a display nerd to notice. The LG made Sansa Stark look like she had a blocky red rash during a particularly dim Game of Thrones scene that the Sony and Samsung handled nearly perfectly. “There are lots of problems with the LG this year,” said judge Cecil Meade. I heard other judges say, “Have you seen what the LG is doing?” more than once. Indeed, the G5 was so far off on some of the test patterns that Dwayne reminded the judges that the lowest possible score was 1, not 0. This is generally a bad sign.
If I had to explain why the LG did so poorly while the Panasonic did so well using the same panel, I’d put it down to confidence, bordering on cockiness. The test patterns tended to reveal that Panasonic’s image processing is strictly by the book — the new kid in school playing exactly by the rules, while the other manufacturers have all learned where they want to push things or make their own choices.
A simple example is HDR detail: the Panasonic dutifully accepts the metadata of the HDR content it’s presented and doesn’t display any detail beyond the listed brightness, while all the other manufacturers have learned that HDR metadata is often inaccurate, so they analyze the content directly to figure out how best to display it, which often results in additional detail being shown. This might result in a lower technical Shootout score, since it’s a deviation from the strict reference image, but TV makers are all doing it because they’ve learned that consumers will reliably complain about losing detail in the highlights and shadows, not about having too much.
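The difference between the two approaches can be sketched in a few lines of Python. This is a hypothetical knee-and-shoulder tone-mapping curve, not any manufacturer’s actual algorithm: both strategies compress highlights above a knee point, but they differ in the source peak they compress from — the brightness the metadata declares versus the brightness actually measured in the content.

```python
def tone_map(nits, display_peak=1000.0, source_peak=4000.0):
    """Map a source luminance (in nits) into a display's range using a
    linear segment plus a soft shoulder. Illustrative sketch only."""
    knee = 0.75 * display_peak          # start compressing at 75% of peak
    if nits <= knee:
        return nits                     # shadows and midtones pass through
    # Compress [knee, source_peak] into [knee, display_peak] with a
    # curve whose slope flattens to zero at source_peak.
    t = (nits - knee) / (source_peak - knee)
    return knee + (display_peak - knee) * t * (2.0 - t)

# A 1,000-nit highlight on a 1,000-nit display:
trusting_metadata = tone_map(1000, source_peak=4000)  # MaxCLL claims 4,000 nits
measuring_content = tone_map(1000, source_peak=1200)  # frame analysis finds ~1,200
# The content-measuring TV renders the highlight much closer to its true
# level, preserving visible detail near peak brightness.
```

When the metadata overstates the content’s real peak, a metadata-trusting curve compresses highlights far more aggressively than necessary, which is exactly the “lost highlight detail” consumers complain about.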
These little tricks and tactics are both the result of experience building these displays and what feels like obvious attempts to differentiate in the market. Sony prides itself on reference-level restraint, and it tends to get that result, while Samsung uses the same panel to deliver punched-up Samsung-style colors. And I would say, based on LG’s third-place showing in the Shootout last year, that LG has learned a vivid, contrast-y OLED look sells way more TVs than the ability to calibrate closely to a reference display.
Everything came to a head in the “bright room out of box” test, which was fairly controversial in the room. It’s a totally subjective test with no real standard to measure against, and all the manufacturers spend almost all their engineering time making sure they look great this way because, well, most people put their TVs in a bright room and never change the settings. There’s no way to really rate TVs of this caliber against each other on this test — it really comes down to personal preference. “They’re all fives — they’re all bright, they’re all colorful. What else is there to say?” said David Mackenzie, a judge on the panel who also helped author the UHD specifications. You can see it in the scores, where the LG managed to pull itself back into contention and the saturated colors of the Samsung pushed it into a commanding lead in the HDR test. I would go so far as to argue the bright room scores are important but should be taken out of the averages that determine the winners, because they’re essentially a wild card.
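To see how much that wild card moves the needle, here’s a quick sketch that recomputes the HDR standings as a simple mean of the three dark-room categories, dropping bright room entirely. (The published overall averages don’t appear to be a plain mean of the listed columns, so this is illustrative, not a reproduction of the official scoring.)

```python
# HDR dark-room scores from the table above:
# (dynamic range / EOTF accuracy, color, processing)
hdr_scores = {
    "LG":        (3.41, 2.84, 3.34),
    "Panasonic": (4.03, 4.00, 3.97),
    "Samsung":   (3.88, 4.13, 3.72),
    "Sony":      (3.94, 4.03, 3.53),
}

# Mean of the three categories, bright room excluded.
dark_room_avg = {tv: round(sum(s) / len(s), 2) for tv, s in hdr_scores.items()}
ranking = sorted(dark_room_avg, key=dark_room_avg.get, reverse=True)
# Panasonic's HDR lead over Samsung grows once the subjective
# bright-room scores are taken out of the mix, and the LG falls
# even further behind the pack.
```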
And it’s true: the fine differences between these sets take a dark room and a lot of time and calibration to see. Anyone just putting one on the wall will undoubtedly be happy with their purchase, especially if they factor things like HDMI ports and Dolby Vision into the decision. I have both Sony and LG OLED TVs that reliably wow everyone who looks at them, and a lot of people love the contrast-y LG OLED look — and LG’s cheaper price tags.
But if you’re chasing reference-level image perfection, it’s another year for Sony, while it feels like LG has all but abandoned this particular game. And I’d guess Panasonic is going to put up an even bigger fight next time around.
How Cisco builds smart systems for the AI era
Among the big players in technology, Cisco is one of the sector’s leaders in advancing operational AI, both internally in its own operations and in the tools it sells to customers around the world. As a large company, its activities span much of the typical IT stack, including infrastructure, services, security, and the design of entire enterprise-scale networks.
Cisco’s internal teams use a blend of machine learning and agentic AI to improve their own service delivery and personalise user experiences for customers. The company has built a shared AI fabric based on compute and networking patterns refined over years of checking and validating its systems – battle-hardened solutions it then has the confidence to offer to customers. The infrastructure relies on high-performance GPUs, of course, but it’s not just raw horsepower: the detail is in the careful integration between the compute and network stacks used in model training and the quite different demands of the ongoing load of inference.
Having made its name as the de facto supplier of networking infrastructure for the enterprise, it comes as no shock that some of Cisco’s better-known uses of AI find their place in network automation. Automated configuration workflows and identity management combine into access solutions focused on rapid network deployments generated from natural language.
For organisations looking to develop into the next generation of AI users, Cisco has been rolling out hardware and orchestration tools aimed explicitly at supporting AI workloads. A recent collaboration with chip giant NVIDIA produced a new line of switches and the Nexus Hyperfabric line of AI network controllers, which aim to simplify the deployment of the complex clusters needed for top-end, high-performance AI.
Cisco’s Secure AI Factory framework, with partners like NVIDIA and Run:ai, is aimed at production-grade AI pipelines. It combines distributed orchestration, GPU utilisation governance, Kubernetes microservice optimisation, and storage under the Intersight umbrella. For more local deployments, Cisco Unified Edge brings all the necessary elements – compute, networking, security, and storage – close to where data is generated and processed.
In environments where latency metrics are critically important, AI processing at the edge is the answer. But Cisco’s approach is not necessarily to offer dedicated IIoT-specific solutions. Instead, it extends the operational models typically found in a data centre, applying the same technology (if not the exact same methodology) to edge sites – in effect, making data centre-grade security policies and configurations available to remote installations. Having the same precepts and standards in cloud and edge means that Cisco-accredited engineers can manage and maintain data centres or small edge deployments using the same skills, accreditation, knowledge, and experience.
Security and risk management figure prominently in the Cisco AI narrative. Its Integrated AI Security and Safety Framework applies high standards of safety and security throughout the life-cycle of AI systems. It considers adversarial threats, supply chain weakness, the risk profiles of multi-agent interactions, and multi-modal vulnerabilities as issues that have to be addressed regardless of the nature or size of any deployment.
Cisco’s work on operational AI also reflects broader ecosystem conversations. The company markets products for organisations wanting to make the transition from generative to agentic AI, where autonomous software agents carry out operational tasks. In most cases, this requires new tooling and new operational protocols.
Cisco’s future AI plans include continuing its central work in infrastructure provision for AI workloads. It’s also pursuing broader adoption of AI-ready networks, including next-gen wireless and unified management systems that will control systems across campus, branch, and cloud environments. The company is also expanding its software and platform investments, including its most recent acquisition (NeuralFabric), to help it build a more comprehensive software stack and product portfolio.
In summary, Cisco’s AI deployment strategy combines hardware, software, and service elements that embed AI into operations, giving organisations a route to production-grade systems. Its work can be found in large-scale infrastructure, systems for unified management, risk mitigation, and anywhere that connects distributed, cloud, and edge computing.
(Image source: Pixabay)
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.
AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.
Combing the Rackspace blogfiles for operational AI pointers
In a recent blog post, Rackspace refers to the bottlenecks familiar to many readers: messy data, unclear ownership, governance gaps, and the cost of running models once they become part of production. The company frames them through the lens of service delivery, security operations, and cloud modernisation, which tells you where it is putting its own effort.
One of the clearest examples of operational AI inside Rackspace sits in its security business. In late January, the company described RAIDER (Rackspace Advanced Intelligence, Detection and Event Research) as a custom back-end platform built for its internal cyber defense centre. With security teams working amid many alerts and logs, standard detection engineering doesn’t scale if dependent on the manual writing of security rules. Rackspace says its RAIDER system unifies threat intelligence with detection engineering workflows and uses its AI Security Engine (RAISE) and LLMs to automate detection rule creation, generating detection criteria it describes as “platform-ready” in line with known frameworks such as MITRE ATT&CK. The company claims it’s cut detection development time by more than half and reduced mean time to detect and respond. This is just the kind of internal process change that matters.
The company also positions agentic AI as a way of taking the friction out of complex engineering programmes. A January post on modernising VMware environments on AWS describes a model in which AI agents handle data-intensive analysis and many repetitive tasks, while “architectural judgement, governance and business decisions” remain in the human domain. Rackspace presents this workflow as keeping senior engineers from being sidelined into migration projects. The article states the target is to keep day-two operations in scope – where many migration plans fail, as teams discover they have modernised infrastructure but not operating practices.
Elsewhere the company sets out a picture of AI-supported operations where monitoring becomes more predictive, routine incidents are handled by bots and automation scripts, and telemetry (plus historical data) is used to spot patterns and, in turn, recommend fixes. This is conventional AIOps language, but Rackspace is tying it to managed services delivery, suggesting the company uses AI to reduce the cost of labour in operational pipelines in addition to the more familiar use of AI in customer-facing environments.
In a post describing AI-enabled operations, the company stresses the importance of focused strategy, governance, and operating models. It specifies the machinery needed to industrialise AI, such as choosing infrastructure based on whether workloads involve training, fine-tuning, or inference. Many tasks are relatively lightweight and can run inference locally on existing hardware.
The company has noted four recurring barriers to AI adoption, most notably fragmented and inconsistent data, and it recommends investment in integration and data management so models have consistent foundations. This is not an opinion unique to Rackspace, of course, but hearing it from a large, technology-first player illustrates the issues faced by many enterprise-scale AI deployments.
A company of even greater size, Microsoft, is working to coordinate the work of autonomous agents across systems. Copilot has evolved into an orchestration layer, and in Microsoft’s ecosystem, multi-step task execution and broader model choice do exist. However, it’s noteworthy that Rackspace calls out Redmond on the fact that productivity gains only arrive when identity, data access, and oversight are firmly embedded in operations.
Rackspace’s near-term AI plan comprises AI-assisted security engineering, agent-supported modernisation, and AI-augmented service management. Its future plans can perhaps be discerned from a January article on the company’s blog concerning private cloud AI trends, in which the author argues that inference economics and governance will drive architecture decisions well into 2026. It anticipates “bursty” exploration in public clouds while inference tasks move into private clouds on the grounds of cost stability and compliance. That’s a roadmap for operational AI grounded in budget and audit requirements, not novelty.
For decision-makers trying to accelerate their own deployments, the useful takeaway is that Rackspace treats AI as an operational discipline. The concrete, published examples it gives are those that reduce cycle time in repeatable work. Readers may accept the company’s direction and still be wary of its claimed metrics. The steps to take inside a growing business are to identify repeating processes, examine where strict oversight is necessary because of data governance, and determine where inference costs might be reduced by bringing some processing in-house.
Ronnie Sheth, CEO, SENEN Group: Why now is the time for enterprise AI to ‘get practical'
Before you set sail on your AI journey, always check the state of your data – because if there is one thing likely to sink your ship, it is data quality.
Gartner estimates that poor data quality costs organisations an average of $12.9 million each year in wasted resources and lost opportunities. That’s the bad news. The good news is that organisations are increasingly understanding the importance of their data quality – and less likely to fall into this trap.
That’s the view of Ronnie Sheth, CEO of AI strategy, execution and governance firm SENEN Group. The company focuses on data and AI advisory, operationalisation and literacy, and Sheth notes she has been in the data and AI space ‘ever since [she] was a corporate baby’, so there is plenty of real-world experience behind the viewpoint. There is also plenty of success; Sheth notes that her company has a 99.99% client repeat rate.
“If I were to be very practical, the one thing I’ve noticed is companies jump into adopting AI before they’re ready,” says Sheth. Companies, she notes, will have an executive direction insisting they adopt AI, but without a blueprint or roadmap to accompany it. The result may be impressive user numbers, but with no measurable outcome to back anything up.
Even as recently as 2024, Sheth saw many organisations struggling because their data was ‘nowhere where it needed to be.’ “Not even close,” she adds. Now, the conversation has turned more practical and strategic. Companies are realising this, and coming to SENEN Group initially to get help with their data, rather than wanting to adopt AI immediately.
“When companies like that come to us, the first course of order is really fixing their data,” says Sheth. “The next course of order is getting to their AI model. They are building a strong foundation for any AI initiative that comes after that.
“Once they fix their data, they can build as many AI models as they want, and they can have as many AI solutions as they want, and they will get accurate outputs because now they have a strong foundation,” Sheth adds.
With breadth and depth in expertise, SENEN Group allows organisations to right their course. Sheth notes the example of one customer who came to them wanting a data governance initiative. Ultimately, it was the data strategy which was needed – the why and how, the outcomes of what they were trying to do with their data – before adding in governance and providing a roadmap for an operating model. “They’ve moved from raw data to descriptive analytics, moving into predictive analytics, and now we’re actually setting up an AI strategy for them,” says Sheth.
It is this attitude and requirement for practical initiatives which will be the cornerstone of Sheth’s discussion at AI & Big Data Expo Global in London this week. “Now would be the time to get practical with AI, especially enterprise AI adoption, and not think about ‘look, we’re going to innovate, we’re going to do pilots, we’re going to experiment,’” says Sheth. “Now is not the time to do that. Now is the time to get practical, to get AI to value. This is the year to do that in the enterprise.”
Watch the full video conversation with Ronnie Sheth below: