
It’s ugly, it’s beautiful, it’s how you know a game might be a classic


At their biggest and most expensive, video games all sort of look the same. The reason often comes down to simple economics: More resources mean more costs that need to be recouped, and historically the way publishers have done that is by being comically risk-averse. Hence the glut of semi-realistic rocky wastelands that look like death metal album covers where everyone is some kind of Wild West fetishist, or the hero shooters that all look like Pixar but shredded as hell and ready for fan artists to go places I shall not.

On occasion, however, new visual ground is staked. Octopath Traveler 0 is an example of this. The third game in the Octopath series is a lot of things — a newcomer-friendly prequel, a reconfigured adaptation of a mobile game, a pretty great JRPG — but it’s also the end of a 2025 victory lap for the art style that publisher Square Enix has dubbed “HD-2D.” It’s a bold experiment that is now a fixture of the release calendar — and also highly unusual in how much it communicates. An HD-2D game from Square Enix is a statement about what old games it considers classics worth revisiting, and what new games should be received as such.

“The HD-2D style began with the idea: ‘What if we revived games from the Super Famicom, the golden era of pixel art, using modern technology?’” Masaaki Hayasaka, producer of this year’s Dragon Quest I & II HD-2D Remake, told The Verge via email. First used in 2018’s Octopath Traveler, the art style is meant to evoke the pixelated texture and feel that characterized 16-bit role-playing games like Final Fantasy VI, but with the depth and detail afforded by modern 3D graphics. The developers at Square and Octopath co-developer Acquire cleverly achieved this using Unreal Engine, which allowed them to render — and, crucially, light — Octopath like any modern game, placing 2D characters that looked ripped from a CRT screen in a world designed to look great on a modern display. The hope was that, as a new role-playing franchise debuting on the then-new Nintendo Switch, Octopath Traveler would immediately be seen as thoroughly modern in its design but also classic in a way that courted nostalgia-prone gamers.
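Square Enix hasn’t published the technique’s internals, but the core trick described above (a flat, camera-facing sprite shaded by the same lights as the 3D scene around it) can be illustrated with a toy calculation. Every name and number below is invented for illustration; this is a sketch of the idea, not Square’s implementation.

```python
import math

def lambert(normal, light_dir):
    """Classic Lambertian (N . L) diffuse term, clamped to [0, 1]."""
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, dot)

def shade_sprite(pixel_rgb, light_rgb, light_dir, ambient=0.2):
    """Tint a 2D sprite pixel with 3D scene lighting.

    The sprite is a billboard facing the camera, so every pixel shares
    one normal (here, pointing straight at the viewer along +z).
    """
    normal = (0.0, 0.0, 1.0)  # the billboard faces the camera
    diffuse = lambert(normal, light_dir)
    return tuple(
        min(255, round(c * (ambient + (1 - ambient) * diffuse * l / 255)))
        for c, l in zip(pixel_rgb, light_rgb)
    )

# A warm lantern in front of the sprite brightens it; a light behind it does not.
front_lit = shade_sprite((200, 180, 150), (255, 200, 120), (0.0, 0.0, 1.0))
back_lit = shade_sprite((200, 180, 150), (255, 200, 120), (0.0, 0.0, -1.0))
```

The point of the toy: the sprite’s pixels never change, but because the billboard participates in scene lighting, a lantern in front of it warms the whole character while a light behind it leaves the sprite in shadow, which is exactly what lets a CRT-era figure sit convincingly in a modern 3D world.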

It worked. After the sales success of Octopath Traveler, Square Enix trademarked the “HD-2D” name (but not the style), a signal of commitment to its new aesthetic paradigm. Then a funny thing happened: The next two HD-2D games from Square were not Octopath sequels, but the tactics RPG Triangle Strategy and a remake of 1994’s Live a Live, one of the most highly acclaimed Super Famicom RPGs to never get an official English-language release. Both released in 2022, these games are where HD-2D stops being a novel quirk and becomes a design ethos for the publisher. An HD-2D game either seeks to be the ultimate pastiche via original titles like Octopath, distilling an era’s worth of hits into a crisp new package, or it is a loving re-creation of a game that deserves a new day in the sun.

“There lies an opportunity to create room for the imagination, which is unique to the pixel art style”

The pattern holds: Octopath and its sequels continue to be the only original HD-2D titles in the SNES-era JRPG style, with Triangle Strategy holding down the tactics game end and next year’s The Adventures of Elliot: The Millennium Tales seeking to do the same for Zelda-style action RPGs. On the remake front, Square Enix has released Dragon Quest I & II HD-2D Remake with great ceremony, following last year’s take on Dragon Quest III. (The backward release order reflects the order Square prefers players play them in.)

In both contexts, HD-2D has largely been a hit with critics — or at least, critics predisposed to playing the games they all lovingly homage. “It’s a gorgeous style that goes beyond a pure retro look to create something timeless,” writes Polygon’s Oli Welsh in praise of Live a Live’s use of the style, “an extension of a classic ’90s video game aesthetic into the present, which deepens and enriches it whilst staying faithful to its original character.” Many reviews of Octopath Traveler or its sequel call the games “beautiful” or “gorgeous.” Square Enix’s stated goal of pioneering a visual language that does the tricky work of being nostalgic and modern at the same time seems to be a resounding success.

“Perhaps some players may view games with pixel art as something old,” said Octopath Traveler 0 producer Hirohito Suzuki, who also spoke to The Verge via email. “But there lies an opportunity to create room for the imagination, which is unique to the pixel art style — and there are no limits to one’s imagination.”

Of course, there’s a fun little cheat here, one that the Octopath developers noted when describing the development process for a 2019 Unreal Engine promo video. It’s that the HD-2D look owes just as much to PlayStation-era titles like Xenogears and Grandia as it does the Super Nintendo games, making the style a much wider synthesis (and perhaps less novel) than it is frequently billed as. That is, however, no reason to sell it short — Square continues to demonstrate surprising variety with its HD-2D titles. According to Hayasaka, understanding how much leeway the HD-2D approach affords is crucial to its successful implementation.

“The definition of HD-2D is actually very simple, and if you create characters and monsters as pixel art and place them on top of a 3D background, that alone technically works,” Hayasaka said. “Of course, this alone wouldn’t capture the HD-2D-like quality, so from there, you’d sort everything from the color palette, effects, and camera to firmly create that ‘atmosphere that feels right.’ That’s incredibly important and is the crux of HD-2D games. So, I believe the secret to success for any HD-2D project lies in having an art director who could grasp that feeling and sensibility.”

“None of the five titles released so far look exactly the same”

The original HD-2D efforts like Octopath and Triangle Strategy are ironically the least visually expansive, hewing to similar muted color palettes and papercraft diorama-like staging. Colors make or break these games, as overreliance on any one range of shades threatens to flatten the planes and jeopardize the illusion of depth. Those risks are offset by astounding visual crescendos where light illuminates a scene in a way that feels frankly impossible. The remakes are more colorful. The Dragon Quest games are downright maximalist in a way that barely bothers to evoke pixelated landscapes, leaning more on the “HD” side of “HD-2D” and the strength of Akira Toriyama’s distinctive character and monster designs. They create an opulent backdrop for the simplicity of a master cartoonist’s work, and the result is equally, if not more, affecting.

“Even under the umbrella of HD-2D, none of the five titles released so far look exactly the same,” Hayasaka said. “There are countless ways to broaden the changes made between them, from the color palette to how much of the pixels’ grainy texture you bring out. That’s why I believe it’s an expression method that still has plenty of room to grow.”

“Timeless” is another word that’s used a lot in relation to HD-2D, in a way that speaks to its success and carefully considered deployment. The aesthetic is one born from insecurity — the developers at Acquire initially wanted to make a classic 2D pixel-art game, but were worried that it wouldn’t be seen as sufficiently modern. That conflict is one of the core tensions of video games, the push and pull between artistic expression and technological progress. Art is a moment captured in time, of the time it’s made even if poised to transcend it.

Games, however, are also tied to their technological moment, and technology can be embarrassing. Pixel art smudged to hell by 4K monitors, dialogue rendered oblique or childish because of memory limitations, music made for limited sound chips awkwardly filling 5.1 surround sound speakers. More than most other art forms, the makers of games must decide: Should limitations be preserved, or forgotten? Are compromises made for tech’s sake enduring artistic choices, or temporary acts of pragmatism?

The beautiful delusion of HD-2D is in thinking that there might be a way to craft a perfect version of a game that can survive the ravages of time. A way of rendering a game that is reverent to the past but not embarrassing to the future. Flattering to the history of games but also our modern sensibilities, which prize convenience and ample “quality-of-life” features. There is, however, no escaping these questions. We will age and change, as will technology and our relation to it. It is beautiful to try and transcend this, and it is beautiful to fail, as most of us will. Video games all look the same, until they do not.


How Cisco builds smart systems for the AI era


Among the big players in technology, Cisco is one of the sector’s leaders in advancing operational deployments of AI, both internally in its own operations and in the tools it sells to customers around the world. As a large company, its activities encompass many areas of the typical IT stack, including infrastructure, services, security, and the design of entire enterprise-scale networks.

Cisco’s internal teams use a blend of machine learning and agentic AI to improve their own service delivery and personalise user experiences for customers. The company has built a shared AI fabric based on patterns of compute and networking that are the product of years spent checking and validating its systems – battle-hardened solutions it then has the confidence to offer to customers. The infrastructure in play relies on high-performance GPUs, of course, but it’s not just raw horsepower: the detail is in the careful integration of compute and network stacks, which must serve both model training and the quite different demands of the ongoing load of inference.

Having made its name as the de facto supplier of networking infrastructure for the enterprise, it comes as no shock that some of Cisco’s better-known uses of AI find their place in network automation. Automated configuration workflows and identity management combine into access solutions focused on rapid network deployments generated from natural language.

For organisations looking to develop into the next generation of AI users, Cisco has been rolling out hardware and orchestration tools aimed explicitly at supporting AI workloads. A recent collaboration with chip giant NVIDIA led to a new line of switches and the Nexus Hyperfabric line of AI network controllers, which aim to simplify the deployment of the complex, high-performance clusters that top-end artificial intelligence demands.

Cisco’s Secure AI Factory framework, built with partners like NVIDIA and Run:ai, is aimed at production-grade AI pipelines. It combines distributed orchestration, GPU utilisation governance, Kubernetes microservice optimisation, and storage under the umbrella of the Intersight product. For more local deployments, Cisco Unified Edge brings all the necessary elements – compute, networking, security, and storage – close to where data gets generated and processed.

In environments where latency is critically important, AI processing at the edge is the answer. But Cisco’s approach is not necessarily to offer dedicated IIoT-specific solutions. Instead, it extends the operational models typically found in a data centre and applies the same technology (if not the exact same methodology) to edge sites – in effect, data centre-grade security policies and configurations made available to remote installations. Having the same precepts and standards in cloud and edge means that Cisco-accredited engineers can manage and maintain data centres or small edge deployments using the same skills, accreditation, knowledge, and experience.

Security and risk management figure prominently in the Cisco AI narrative. Its Integrated AI Security and Safety Framework applies high standards of safety and security throughout the life-cycle of AI systems. It treats adversarial threats, supply chain weaknesses, the risk profiles of multi-agent interactions, and multi-modal vulnerabilities as issues that have to be addressed regardless of the nature or size of any deployment.

Cisco’s work on operational AI also reflects broader ecosystem conversations. The company markets products for organisations wanting to make the transition from generative to agentic AI, where autonomous software agents carry out operational tasks. In most cases, this requires new tooling and new operational protocols.

Cisco’s future AI plans include continuing its central work in infrastructure provision for AI workloads. It’s also pursuing broader adoption of AI-ready networks, including next-gen wireless and unified management systems that will control systems across campus, branch, and cloud environments. The company is also expanding its software and platform investments, including its most recent acquisition (NeuralFabric), to help it build a more comprehensive software stack and product portfolio.

In summary, Cisco’s AI deployment strategy combines hardware, software, and service elements that embed AI into operations, giving organisations a route to production-grade systems. Its work can be found in large-scale infrastructure, systems for unified management, risk mitigation, and anywhere that connects distributed, cloud, and edge computing.


Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.


Combing the Rackspace blogfiles for operational AI pointers


In recent blog posts, Rackspace refers to the bottlenecks familiar to many readers: messy data, unclear ownership, governance gaps, and the cost of running models once they become part of production. The company frames them through the lens of service delivery, security operations, and cloud modernisation, which tells you where it is putting its own effort.

One of the clearest examples of operational AI inside Rackspace sits in its security business. In late January, the company described RAIDER (Rackspace Advanced Intelligence, Detection and Event Research) as a custom back-end platform built for its internal cyber defence centre. With security teams drowning in alerts and logs, standard detection engineering doesn’t scale when it depends on the manual writing of security rules. Rackspace says RAIDER unifies threat intelligence with detection engineering workflows and uses its AI Security Engine (RAISE) and LLMs to automate detection rule creation, generating detection criteria it describes as “platform-ready” and aligned with known frameworks such as MITRE ATT&CK. The company claims it’s cut detection development time by more than half and reduced mean time to detect and respond. This is just the kind of internal process change that matters.
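RAIDER’s internals aren’t public, so as a rough illustration of the pattern the post describes (turning structured threat intelligence into a “platform-ready” detection rule tagged with MITRE ATT&CK techniques), here is a minimal Python sketch; every name and field is invented. In Rackspace’s telling an LLM drafts the detection logic from unstructured reports, whereas here it is filled in by hand:

```python
from dataclasses import dataclass, field

@dataclass
class ThreatIntel:
    """A tiny stand-in for a structured threat-intelligence record."""
    name: str
    attack_technique: str               # MITRE ATT&CK ID, e.g. "T1059.001"
    indicators: list = field(default_factory=list)

def to_sigma_rule(intel: ThreatIntel) -> str:
    """Render the intel as a minimal Sigma-style YAML detection rule."""
    selection = "\n".join(f"      - '{i}'" for i in intel.indicators)
    return (
        f"title: {intel.name}\n"
        f"tags:\n"
        f"  - attack.{intel.attack_technique.lower()}\n"
        f"detection:\n"
        f"  selection:\n"
        f"    CommandLine|contains:\n{selection}\n"
        f"  condition: selection\n"
    )

intel = ThreatIntel(
    name="Suspicious PowerShell download cradle",
    attack_technique="T1059.001",
    indicators=["Invoke-WebRequest", "DownloadString"],
)
rule = to_sigma_rule(intel)
```

The hard part of the real system is not the templating shown here but generating correct selection logic at scale, which is where Rackspace says the LLM assistance pays off.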

The company also positions agentic AI as a way of taking the friction out of complex engineering programmes. A January post on modernising VMware environments on AWS describes a model in which AI agents handle data-intensive analysis and many repetitive tasks, while “architectural judgement, governance and business decisions” remain in the human domain. Rackspace presents this workflow as a way to stop senior engineers being sidelined into migration projects. The article states the target is to keep day-two operations in scope – where many migration plans fail, as teams discover they have modernised infrastructure but not operating practices.

Elsewhere the company sets out a picture of AI-supported operations where monitoring becomes more predictive, routine incidents are handled by bots and automation scripts, and telemetry (plus historical data) is used to spot patterns and, in turn, recommend fixes. This is conventional AIOps language, but Rackspace is tying it to managed services delivery, suggesting the company uses AI to reduce the cost of labour in operational pipelines in addition to the more familiar use of AI in customer-facing environments.
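Rackspace doesn’t publish the tooling behind this, but the simplest version of the pattern-spotting described above is a rolling z-score over a telemetry series: flag any sample that strays too far from its recent history. A minimal sketch, with the window and threshold chosen arbitrarily:

```python
import statistics

def anomalies(series, window=20, threshold=3.0):
    """Flag indices whose value sits more than `threshold` standard
    deviations from the mean of the preceding `window` samples."""
    flagged = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        mean = statistics.fmean(history)
        stdev = statistics.pstdev(history) or 1e-9  # avoid divide-by-zero
        if abs(series[i] - mean) / stdev > threshold:
            flagged.append(i)
    return flagged

# Steady latency with one spike at the end: only the spike is flagged.
latency_ms = [50, 51, 49, 50, 52, 48, 50, 51, 49, 50,
              51, 50, 49, 52, 50, 48, 51, 50, 49, 50, 400]
spikes = anomalies(latency_ms)
```

Real AIOps pipelines layer seasonality models, cross-signal correlation, and runbook automation on top of this, but the flag-then-recommend loop starts from something this small.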

In a post describing AI-enabled operations, the company stresses the importance of focused strategy, governance, and operating models. It specifies the machinery needed to industrialise AI, such as choosing infrastructure based on whether workloads involve training, fine-tuning, or inference. Many tasks are relatively lightweight and can run inference locally on existing hardware.

The company has noted four recurring barriers to AI adoption, most notably fragmented and inconsistent data, and it recommends investment in integration and data management so models have consistent foundations. This is not an opinion unique to Rackspace, of course, but having it writ large by a big, technology-first player is illustrative of the issues faced by many enterprise-scale AI deployments.

A company of even greater size, Microsoft, is working to coordinate the work of autonomous agents across systems. Copilot has evolved into an orchestration layer, and in Microsoft’s ecosystem, multi-step task execution and broader model choice do exist. It’s noteworthy, though, that Rackspace calls out Redmond on the fact that productivity gains only arrive when identity, data access, and oversight are firmly embedded in operations.

Rackspace’s near-term AI plan comprises AI-assisted security engineering, agent-supported modernisation, and AI-augmented service management. Its future plans can perhaps be discerned in a January article on the company’s blog concerning private cloud AI trends. In it, the author argues that inference economics and governance will drive architecture decisions well into 2026, anticipating ‘bursty’ exploration in public clouds while inference tasks move into private clouds on the grounds of cost stability and compliance. That’s a roadmap for operational AI grounded in budget and audit requirements, not novelty.

For decision-makers trying to accelerate their own deployments, the useful takeaway is that Rackspace treats AI as an operational discipline. The concrete, published examples it gives are those that reduce cycle time in repeatable work. Readers may accept the company’s direction and still be wary of its claimed metrics. The steps to take inside a growing business are to identify repeating processes, examine where data governance makes strict oversight necessary, and consider where inference costs might be reduced by bringing some processing in-house.


Ronnie Sheth, CEO, SENEN Group: Why now is the time for enterprise AI to ‘get practical’


Before you set sail on your AI journey, always check the state of your data – because if there is one thing likely to sink your ship, it is data quality.

Gartner estimates that poor data quality costs organisations an average of $12.9 million each year in wasted resources and lost opportunities. That’s the bad news. The good news is that organisations are increasingly understanding the importance of their data quality – and less likely to fall into this trap.

That’s the view of Ronnie Sheth, CEO of AI strategy, execution and governance firm SENEN Group. The company focuses on data and AI advisory, operationalisation and literacy, and Sheth notes she has been in the data and AI space ‘ever since [she] was a corporate baby’, so there is plenty of real-world experience behind the viewpoint. There is also plenty of success; Sheth notes that her company has a 99.99% client repeat rate.

“If I were to be very practical, the one thing I’ve noticed is companies jump into adopting AI before they’re ready,” says Sheth. Companies, she notes, will have an executive direction insisting they adopt AI, but without a blueprint or roadmap to accompany it. The result may be impressive user numbers, but with no measurable outcome to back anything up.

Even as recently as 2024, Sheth saw many organisations struggling because their data was ‘nowhere where it needed to be.’ “Not even close,” she adds. Now, the conversation has turned more practical and strategic. Companies are realising this, and coming to SENEN Group initially to get help with their data, rather than wanting to adopt AI immediately.

“When companies like that come to us, the first course of order is really fixing their data,” says Sheth. “The next course of order is getting to their AI model. They are building a strong foundation for any AI initiative that comes after that.

“Once they fix their data, they can build as many AI models as they want, and they can have as many AI solutions as they want, and they will get accurate outputs because now they have a strong foundation,” Sheth adds.

With breadth and depth in expertise, SENEN Group allows organisations to right their course. Sheth notes the example of one customer who came to them wanting a data governance initiative. Ultimately, it was the data strategy which was needed – the why and how, the outcomes of what they were trying to do with their data – before adding in governance and providing a roadmap for an operating model. “They’ve moved from raw data to descriptive analytics, moving into predictive analytics, and now we’re actually setting up an AI strategy for them,” says Sheth.

It is this attitude and requirement for practical initiatives which will be the cornerstone of Sheth’s discussion at AI & Big Data Expo Global in London this week. “Now would be the time to get practical with AI, especially enterprise AI adoption, and not think about ‘look, we’re going to innovate, we’re going to do pilots, we’re going to experiment,’” says Sheth. “Now is not the time to do that. Now is the time to get practical, to get AI to value. This is the year to do that in the enterprise.”

Watch the full video conversation with Ronnie Sheth below:
