James Gunn knows that most people are familiar with Superman’s origin story, which is why DC Studios’ new feature about the Man of Steel opens at a point when he has already become a world famous superhero. Instead of rehashing the tragic beats of Krypton’s destruction, the movie is punctuated with moments that show you how deeply Superman cherishes the few remaining pieces of his homeworld. He loves his Kryptonian family crest and his out-of-control superdog. But the most impressive and alien keepsake that Clark Kent holds close to his heart is a massive stronghold buried deep beneath the ice in Antarctica.
Superman’s Fortress of Solitude is a Silver Age man cave inspired by nature’s beauty
The Fortress of Solitude (which originated in Street & Smith’s Doc Savage pulps from the 1930s) has been part of Superman’s lore since the Golden Age of comics, when it was first introduced as a hidden citadel tucked into a mountainside by Metropolis. Over the years, the Fortress has been located in a variety of places and taken on different forms, but Gunn’s Superman presents the structure as most people know it — a gleaming cluster of gargantuan crystals situated in the frozen wilderness. Everything about the Fortress is so grand and otherworldly that one could easily assume that DC Studios would have elected to create the whole thing with VFX.
There are digital elements to the new Superman’s take on the Fortress, but Gunn has always been a fan of practically created effects. Having worked with Gunn on The Suicide Squad, The Guardians of the Galaxy Holiday Special, and Guardians of the Galaxy Vol. 3, production designer Beth Mickle was intimately familiar with his filmmaking sensibilities. Mickle could see Gunn’s vision for a Superman movie that was modern but nostalgic and vibrant like a classic comic book.
When I spoke with Mickle recently about her work on Superman, she told me that creating the new Fortress of Solitude wasn’t just difficult — it was an exercise in patience and experimentation. Mickle was certain that going the practical route would result in a much more magical final product, but she wasn’t always sure how she and the rest of Superman’s production team would pull it off.
“I’ve been on those sets where it’s just a full blue screen and the poor actor is sitting there looking at a blue tennis ball, trying to figure out how they’re supposed to be reacting to it,” Mickle said. “I feel like, no matter what, practicality comes across in the filmmaking and in the performances. But it’s really tough to pull practicality like this off seamlessly.”
Like Gunn, Mickle was a big fan of Richard Donner’s first Superman film, in which Christopher Reeve’s Clark Kent summons the Fortress of Solitude by tossing a green crystal into Arctic waters. Though she wanted to pay homage to the 1978 classic, Mickle was also interested in exploring how else the Fortress could be depicted.
“I started looking at the way that crystals sometimes grow naturally from rocks, where they kind of splay upward and have this propulsive, explosive feel,” Mickle explained. “I thought to myself, ‘You know, that actually feels a bit like Superman, exploding up into the sky.’”
Mickle’s ideas about the Fortress as a crystalline eruption also got her thinking about nature and how the structure’s shape could be inspired by things like the ocean and the way that sprays of water can freeze in mid-air in the right conditions. Photographs of crashing waves gave Mickle a general idea of what the Fortress’ silhouette should look like from a distance. But for the building’s interior, Mickle turned to DC’s Silver Age comics from the ’50s and ’60s — an era that depicted the Fortress, as Gunn described it, as “Superman’s man cave.”
“In those comics, the Fortress is where Superman has his lab set up to do experiments, and he’s got a zoo of all the interplanetary plant life and animals he comes across,” Mickle said. “Once we had committed to the Silver Age visual reference, we started looking at a lot of beautiful, mid-century, minimalist, Frank Lloyd Wright-style interiors for more inspiration. That helped us figure out the multilevel, terraced layout that our Fortress has.”
Image: Warner Bros. Discovery / DC Studios
From there, the creative team had to decide where the crystals would go and how they would make the ethereal, translucent pillars. The crew spent about three months on research and development into different methods of using resin to build the Fortress of Solitude piece by piece. There were plenty of hiccups early on. Many of the larger resin crystals — which ranged in length from 12 to 40 feet — would crumble under their own weight or require a certain kind of ribbing to maintain their shape that was too visible to use on film. As other parts of the Fortress’ interior were being constructed on a soundstage in Atlanta, Mickle’s team was trying to figure out how to get the crystals to work. And at one point, she contemplated something a bit more elementary.
“After one sleepless night, I asked my art director, ‘Would it be crazy to actually build this out of real ice and just keep the stage really chilled?’” Mickle recalled. “We both laughed at the absurdity of it, but in that moment of desperation, I was like, ‘I don’t know, do we bring in ice sculptures?’”
In the end, Mickle and construction coordinator Chris Snyder developed a resin pouring method that, while more involved, resulted in crystals that were strong enough to work with. Rather than pouring the resin to make single columns, the team began pouring them as halves, letting them dry, and then bonding them together afterward. This had the added benefit of giving the crystals an unintentional shimmering luster that was in line with the film’s aesthetic.
Image: Warner Bros. Discovery / DC Studios
While all 242 of the crystals now looked great, the next hurdle was getting them positioned to evoke that explosive, propulsive feel that Mickle aimed for. To resemble naturally forming crystals, the resin pillars needed to splay out at various angles. But because the pillars are translucent and backlit, rigging them with internal framing would have broken the fantastical illusion. That kind of internal framing could have been edited out digitally, but Mickle and the team opted for something more analog.
“We actually hung aircraft cable from the ceiling and put a little pick point on the top of each of the crystals,” Mickle explained. “We would put a crystal on its little metal base, lean it to whatever angle we wanted it to be, and then we would have a little point at the very top of the crystal that was attached to the aircraft cable so it would lock it to that exact space.”
Even though it sometimes felt like an uphill battle, Mickle said that she loved the exploratory element of building the Fortress, and she’s excited to learn what else Gunn has planned for the franchise — especially when it comes to the weird and fantastical.
“I really loved the fantasy worlds here, and it was really fun getting to dive into the pocket universe of it all with Lex,” Mickle said. “We did a lot of that in Guardians of the Galaxy, and it’ll be fun to see if there’s opportunity to do stuff like that with any of the upcoming DC work. It’ll be an exploration for all of us.”
Artificial Intelligence
Combing the Rackspace blogfiles for operational AI pointers
In a recent blog post, Rackspace describes bottlenecks familiar to many readers: messy data, unclear ownership, governance gaps, and the cost of running models once they become part of production. The company frames them through the lens of service delivery, security operations, and cloud modernisation, which signals where it is putting its own effort.
One of the clearest examples of operational AI inside Rackspace sits in its security business. In late January, the company described RAIDER (Rackspace Advanced Intelligence, Detection and Event Research) as a custom back-end platform built for its internal cyber defense centre. With security teams working through high volumes of alerts and logs, standard detection engineering doesn’t scale when it depends on manually written security rules. Rackspace says RAIDER unifies threat intelligence with detection engineering workflows and uses its AI Security Engine (RAISE) and LLMs to automate detection rule creation, generating detection criteria it describes as “platform-ready” in line with known frameworks such as MITRE ATT&CK. The company claims it has cut detection development time by more than half and reduced mean time to detect and respond. This is just the kind of internal process change that matters.
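RAIDER’s internals are not public, so the workflow above can only be sketched in outline: a threat-intelligence indicator goes in, a model drafts the detection logic, and the output is a rule tagged with its MITRE ATT&CK technique. Every function name and the rule schema below are illustrative assumptions, not Rackspace’s actual interfaces.

```python
# Hypothetical sketch of an LLM-assisted detection-engineering pipeline in the
# spirit of what Rackspace describes for RAIDER. All names and the rule schema
# are invented for illustration.

def build_detection_rule(indicator: dict, llm_generate=None) -> dict:
    """Turn a threat-intel indicator into a 'platform-ready' detection rule.

    `llm_generate` stands in for a model call that drafts the match logic;
    a trivial fallback keeps the sketch self-contained and runnable.
    """
    if llm_generate is None:
        # Fallback: derive a naive match condition from the indicator itself.
        llm_generate = lambda ind: f"process_name = '{ind['artifact']}'"

    return {
        "title": f"Detect {indicator['threat']}",
        "mitre_attack": indicator["technique"],   # e.g. an ATT&CK ID like T1059
        "condition": llm_generate(indicator),
        "status": "platform-ready",
    }

rule = build_detection_rule(
    {"threat": "Encoded PowerShell", "artifact": "powershell.exe",
     "technique": "T1059.001"}
)
print(rule["mitre_attack"])  # T1059.001
```

The speed-up Rackspace claims would come from the human reviewing and tuning generated rules rather than writing each one from scratch.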
The company also positions agentic AI as a way of taking the friction out of complex engineering programmes. A January post on modernising VMware environments on AWS describes a model in which AI agents handle data-intensive analysis and repetitive tasks, while “architectural judgement, governance and business decisions” remain in the human domain. Rackspace presents this workflow as a way to stop senior engineers being sidelined into migration projects. The article states the target is to keep day-two operations in scope – where many migration plans fail, as teams discover they have modernised infrastructure but not operating practices.
Elsewhere the company sets out a picture of AI-supported operations where monitoring becomes more predictive, routine incidents are handled by bots and automation scripts, and telemetry (plus historical data) is used to spot patterns and, in turn, recommend fixes. This is conventional AIOps language, but Rackspace ties it to managed services delivery, suggesting the company uses AI to reduce labour costs in operational pipelines in addition to the more familiar customer-facing uses of AI.
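The pattern-spotting-plus-remediation loop described above can be sketched minimally: compare current telemetry against a historical baseline, and if it deviates far enough, look up a fix that resolved similar incidents before. The threshold, metrics, and runbook below are invented assumptions, not Rackspace’s tooling.

```python
from statistics import mean, stdev

# Minimal AIOps-style sketch: flag telemetry anomalies against a historical
# baseline, then recommend a remediation from a (hypothetical) runbook of
# fixes that worked on past incidents.

RUNBOOK = {"cpu": "restart worker pool", "latency": "scale out web tier"}

def recommend_fix(metric: str, history: list, current: float,
                  z_thresh: float = 3.0):
    """Return a suggested fix if `current` is anomalous, else None."""
    mu, sigma = mean(history), stdev(history)
    z = (current - mu) / sigma if sigma else 0.0
    if abs(z) >= z_thresh:
        return RUNBOOK.get(metric, "escalate to on-call")
    return None  # within normal range; no action needed

history = [200, 210, 195, 205, 198, 202, 207, 199]  # past latency samples (ms)
print(recommend_fix("latency", history, 450))  # scale out web tier
print(recommend_fix("latency", history, 203))  # None
```

Real AIOps platforms use far richer models than a z-score, but the shape – baseline, deviation, recommended action – is the same.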
In a post describing AI-enabled operations, the company stresses the importance of focused strategy, governance and operating models. It specifies the machinery needed to industrialise AI, such as choosing infrastructure based on whether workloads involve training, fine-tuning or inference. Many tasks are relatively lightweight and can run inference locally on existing hardware.
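That workload-classification step can be made concrete as a simple placement rule: training and heavy fine-tuning go to dedicated accelerators, while small inference jobs stay on existing hardware. The categories, size cut-offs, and placement names below are assumptions for illustration only.

```python
# Illustrative sketch of infrastructure selection by workload type, as the
# post describes. Thresholds and placement names are invented.

def place_workload(kind: str, model_size_gb: float) -> str:
    if kind == "training":
        return "gpu-cluster"            # heavy and bursty: dedicated accelerators
    if kind == "fine-tuning":
        return "gpu-cluster" if model_size_gb > 10 else "shared-gpu-node"
    if kind == "inference":
        # Many inference tasks are lightweight enough for existing hardware.
        return "local-cpu" if model_size_gb <= 4 else "inference-pool"
    raise ValueError(f"unknown workload kind: {kind}")

print(place_workload("inference", 1.5))   # local-cpu
print(place_workload("training", 70))     # gpu-cluster
```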
The company has noted four recurring barriers to AI adoption, most notably fragmented and inconsistent data, and it recommends investment in integration and data management so models have consistent foundations. This is not an opinion unique to Rackspace, of course, but seeing it writ large by a technology-first player of this size is illustrative of the issues faced by many enterprise-scale AI deployments.
A company of even greater size, Microsoft, is working to coordinate autonomous agents’ work across systems. Copilot has evolved into an orchestration layer, and in Microsoft’s ecosystem, multi-step task execution and broader model choice do exist. However, it is noteworthy that Rackspace makes the same point about Redmond: productivity gains only arrive when identity, data access, and oversight are firmly embedded in operations.
Rackspace’s near-term AI plan comprises AI-assisted security engineering, agent-supported modernisation, and AI-augmented service management. Its longer-term direction can perhaps be discerned in a January article on the company’s blog about private cloud AI trends. In it, the author argues that inference economics and governance will drive architecture decisions well into 2026, anticipating ‘bursty’ exploration in public clouds while inference workloads move into private clouds for cost stability and compliance. That’s a roadmap for operational AI grounded in budget and audit requirements, not novelty.
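The inference-economics argument reduces to a break-even calculation: public cloud costs scale with request volume, while a private cloud deployment is roughly flat once the hardware is amortised. The prices below are invented purely to show the shape of the trade-off, not real figures from either vendor.

```python
# Back-of-the-envelope model of the public-vs-private inference trade-off
# described above. All prices are illustrative assumptions.

def monthly_cost_public(requests: int, price_per_1k: float = 0.40) -> float:
    """Pay-per-use: cost grows linearly with request volume."""
    return requests / 1000 * price_per_1k

def monthly_cost_private(fixed: float = 8000.0) -> float:
    """Amortised hardware + ops: roughly flat regardless of volume."""
    return fixed

def cheaper_venue(requests: int) -> str:
    return "private" if monthly_cost_private() < monthly_cost_public(requests) else "public"

print(cheaper_venue(1_000_000))    # public  ($400 vs $8,000 fixed)
print(cheaper_venue(50_000_000))   # private ($20,000 vs $8,000 fixed)
```

This is why ‘bursty’ exploration suits pay-per-use pricing while steady, high-volume inference favours owned capacity.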
For decision-makers trying to accelerate their own deployments, the useful takeaway is that Rackspace treats AI as an operational discipline. The concrete, published examples it gives are those that reduce cycle time in repeatable work. Readers may accept the company’s direction and still be wary of its claimed metrics. The steps to take inside a growing business are to identify repeating processes, examine where strict oversight is necessary because of data governance, and assess where inference costs might be reduced by bringing some processing in-house.
(Image source: Pixabay)
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.
AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.
Ronnie Sheth, CEO, SENEN Group: Why now is the time for enterprise AI to ‘get practical’
Before you set sail on your AI journey, always check the state of your data – because if there is one thing likely to sink your ship, it is data quality.
Gartner estimates that poor data quality costs organisations an average of $12.9 million each year in wasted resources and lost opportunities. That’s the bad news. The good news is that organisations are increasingly understanding the importance of their data quality – and less likely to fall into this trap.
That’s the view of Ronnie Sheth, CEO of AI strategy, execution and governance firm SENEN Group. The company focuses on data and AI advisory, operationalisation and literacy, and Sheth notes she has been in the data and AI space ‘ever since [she] was a corporate baby’, so there is plenty of real-world experience behind the viewpoint. There is also plenty of success; Sheth notes that her company has a 99.99% client repeat rate.
“If I were to be very practical, the one thing I’ve noticed is companies jump into adopting AI before they’re ready,” says Sheth. Companies, she notes, will have an executive direction insisting they adopt AI, but without a blueprint or roadmap to accompany it. The result may be impressive user numbers, but with no measurable outcome to back anything up.
Even as recently as 2024, Sheth saw many organisations struggling because their data was ‘nowhere where it needed to be.’ “Not even close,” she adds. Now, the conversation has turned more practical and strategic. Companies are realising this, and coming to SENEN Group initially to get help with their data, rather than wanting to adopt AI immediately.
“When companies like that come to us, the first course of order is really fixing their data,” says Sheth. “The next course of order is getting to their AI model. They are building a strong foundation for any AI initiative that comes after that.
“Once they fix their data, they can build as many AI models as they want, and they can have as many AI solutions as they want, and they will get accurate outputs because now they have a strong foundation,” Sheth adds.
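The ‘fix the data first’ step Sheth describes can be made tangible with a simple readiness gate: measure null rates and duplicate keys, and hold off on model-building until they clear a threshold. The checks and the 5% cut-off below are illustrative assumptions, not SENEN Group’s actual methodology.

```python
# A minimal data-readiness check in the spirit of "fix the data first".
# Thresholds and the gating rule are invented for illustration.

def data_quality_report(rows: list, key: str) -> dict:
    total = len(rows)
    nulls = sum(1 for r in rows if any(v is None for v in r.values()))
    dupes = total - len({r[key] for r in rows})
    return {
        "rows": total,
        "null_rate": nulls / total if total else 0.0,
        "duplicate_keys": dupes,
        # Gate: non-empty, under 5% rows with nulls, no duplicate keys.
        "ready_for_ai": total > 0 and nulls / total < 0.05 and dupes == 0,
    }

rows = [
    {"id": 1, "revenue": 100},
    {"id": 2, "revenue": None},
    {"id": 2, "revenue": 250},
]
report = data_quality_report(rows, key="id")
print(report["ready_for_ai"])  # False: a null field and a duplicate id
```

Fixing those two defects – and only then building models – is the ordering the interview argues for.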
With breadth and depth in expertise, SENEN Group allows organisations to right their course. Sheth notes the example of one customer who came to them wanting a data governance initiative. Ultimately, it was the data strategy which was needed – the why and how, the outcomes of what they were trying to do with their data – before adding in governance and providing a roadmap for an operating model. “They’ve moved from raw data to descriptive analytics, moving into predictive analytics, and now we’re actually setting up an AI strategy for them,” says Sheth.
It is this attitude and requirement for practical initiatives which will be the cornerstone of Sheth’s discussion at AI & Big Data Expo Global in London this week. “Now would be the time to get practical with AI, especially enterprise AI adoption, and not think about ‘look, we’re going to innovate, we’re going to do pilots, we’re going to experiment,’” says Sheth. “Now is not the time to do that. Now is the time to get practical, to get AI to value. This is the year to do that in the enterprise.”
Watch the full video conversation with Ronnie Sheth below:
Apptio: Why scaling intelligent automation requires financial rigour
Greg Holmes, Field CTO for EMEA at Apptio, an IBM company, argues that successfully scaling intelligent automation requires financial rigour.
The “build it and they will come” model of technology adoption often leaves a hole in the budget when applied to automation. Executives frequently find that successful pilot programmes do not translate into sustainable enterprise-wide deployments because initial financial modelling ignored the realities of production scaling.
“When we integrate FinOps capabilities with automation, we’re looking at a change from being very reactive on cost management to being very proactive around value engineering,” says Holmes.
This shifts the assessment criteria for technical leaders. Rather than waiting “months or years to assess whether things are getting value,” engineering teams can track resource consumption – such as cost per transaction or API call – “straight from the beginning.”
The unit economics of scaling intelligent automation
Innovation projects face a high mortality rate. Holmes notes that around 80 percent of new innovation projects fail, often because financial opacity during the pilot phase masks future liabilities.
“If a pilot demonstrates that automating a process saves, say, 100 hours a month, leadership thinks that’s really successful,” says Holmes. “But what it fails to track is that the pilot sometimes is running on over-provisioned infrastructure, so it looks like it performs really well. But you wouldn’t over-provision to that degree during a real production rollout.”
Moving that workload to production changes the calculus. The requirements for compute, storage, and data transfer increase. “API calls can multiply, exceptions and edge cases appear at volume that might have been out of scope for the pilot phase, and then support overheads just grow as well,” he adds.
To prevent this, organisations must track the marginal cost at scale. This involves monitoring unit economics, such as the cost per customer served or cost per transaction. If the cost per customer increases as the customer base grows, the business model is flawed.
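That test – does cost per customer fall or rise as the base grows? – is easy to express directly. The snapshots and figures below are invented to illustrate the healthy and flawed trajectories the article describes.

```python
# Sketch of the unit-economics check: if cost per customer rises as the
# customer base grows, the business model is flawed; healthy scaling should
# see unit costs fall. All figures are illustrative.

def cost_per_customer(total_cost: float, customers: int) -> float:
    return total_cost / customers

def scaling_is_healthy(snapshots: list) -> bool:
    """snapshots: (customers, total_cost) pairs ordered by time."""
    units = [cost_per_customer(cost, n) for n, cost in snapshots]
    return all(later <= earlier for earlier, later in zip(units, units[1:]))

healthy = [(1_000, 50_000), (5_000, 200_000), (20_000, 600_000)]    # $50 -> $40 -> $30
flawed  = [(1_000, 50_000), (5_000, 300_000), (20_000, 1_600_000)]  # $50 -> $60 -> $80
print(scaling_is_healthy(healthy))  # True
print(scaling_is_healthy(flawed))   # False
```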
Conversely, effective scaling should see these unit costs decrease. Holmes cites a case study from Liberty Mutual where the insurer was able to find around $2.5 million of savings by bringing in consumption metrics and “not just looking at labour hours that they were saving.”
However, financial accountability cannot sit solely with the finance department. Holmes advocates for putting governance “back in the hands of the developers into their development tools and workloads.”
Integration with infrastructure-as-code tools like HashiCorp Terraform and GitHub allows organisations to enforce policies during deployment. Teams can spin up resources programmatically with immediate cost estimates.
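The deploy-time cost gate described above can be sketched as a policy check that prices a plan before anything is provisioned. The resource catalogue, prices, and budget rule below are invented; real pipelines would use tooling such as Terraform plan data fed into a policy engine rather than this hand-rolled lookup.

```python
# Illustrative policy-as-code cost gate: estimate a deployment plan's monthly
# cost and reject it before provisioning if it exceeds budget. Prices and
# resource names are invented assumptions.

PRICE_PER_HOUR = {"small-vm": 0.05, "large-vm": 0.80, "gpu-node": 4.10}

def estimate_monthly(resources: list, hours: int = 730) -> float:
    """Price a plan at ~730 hours per month per always-on resource."""
    return sum(PRICE_PER_HOUR[r] * hours for r in resources)

def deploy_allowed(resources: list, monthly_budget: float) -> bool:
    # Reject the plan up front, not after the bill arrives.
    return estimate_monthly(resources) <= monthly_budget

plan = ["small-vm", "large-vm"]
print(round(estimate_monthly(plan), 2))             # 620.5
print(deploy_allowed(plan, monthly_budget=1000.0))  # True
```

Gating at plan time is what avoids the “whack-a-mole” of fixing overspend after deployment.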
“Rather than deploying things and then fixing them up, which gets into the whole whack-a-mole kind of problem,” Holmes explains, companies can verify they are “deploying the right things at the right time.”
When scaling intelligent automation, tension often simmers between the CFO, who focuses on return on investment, and the Head of Automation, who tracks operational metrics like hours saved.
“This translation challenge is precisely what TBM (Technology Business Management) and Apptio are designed to solve,” says Holmes. “It’s having a common language between technology and finance and with the business.”
The TBM taxonomy provides a standardised framework to reconcile these views. It maps technical resources (such as compute, storage, and labour) into IT towers and further up to business capabilities. This structure translates technical inputs into business outputs.
“I don’t necessarily know what goes into all the IT layers underneath it,” Holmes says, describing the business user’s perspective. “But because we’ve got this taxonomy, I can get a detailed bill that tells me about my service consumption and precisely which costs are driving it to be more expensive as I consume more.”
Addressing legacy debt and budgeting for the long-term
Organisations burdened by legacy ERP systems face a binary choice: automation as a patch, or as a bridge to modernisation. Holmes warns that if a company is “just trying to mask inefficient processes and not redesign them,” they are merely “building up more technical debt.”
A total cost of ownership (TCO) approach helps determine the correct strategy. The Commonwealth Bank of Australia utilised a TCO model across 2,000 different applications – of various maturity stages – to assess their full lifecycle costs. This analysis included hidden costs such as infrastructure, labour, and the engineering time required to keep automation running.
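A TCO model of this kind is, at its core, a sum over direct and hidden lifecycle cost categories per application, with the automation-wrapper layer broken out so it is visible rather than buried. The categories and figures below are invented for illustration and are not drawn from the Commonwealth Bank analysis.

```python
# Simple per-application TCO sketch: direct costs plus the hidden layers
# (including automation wrappers) that keep a legacy system running.
# All cost categories and figures are illustrative.

def total_cost_of_ownership(app: dict) -> float:
    return (app["licences"] + app["infrastructure"]
            + app["labour"] + app["automation_wrappers"])

legacy_erp = {
    "licences": 120_000,
    "infrastructure": 60_000,
    "labour": 90_000,
    "automation_wrappers": 150_000,  # the often-hidden layer
}
tco = total_cost_of_ownership(legacy_erp)
print(tco)  # 420000 - the wrappers alone exceed the licence bill
```

Breaking out the wrapper line is what surfaces the “real cost of keeping that old system alive” that Holmes describes.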
“Just because something’s legacy doesn’t mean you have to retire it,” says Holmes. “Some of those legacy systems are worth maintaining just because the value is so good.”
In other cases, calculating the cost of the automation wrappers required to keep an old system functional reveals a different reality. “Sometimes when you add up the TCO approach, and you’re including all these automation layers around it, you suddenly realise, the real cost of keeping that old system alive is not just the old system, it’s those extra layers,” Holmes argues.
Avoiding sticker shock requires a budgeting strategy that balances variable costs with long-term commitments. While variable costs (OPEX) offer flexibility, they can fluctuate wildly based on demand and engineering efficiency.
Holmes advises that longer-term visibility enables better investment decisions. Committing to specific technologies or platforms over a multi-year horizon allows organisations to negotiate economies of scale and standardise architecture.
“Because you’ve made those longer term commitments and you’ve standardised on different platforms and things like that, it makes it easier to build the right thing out for the long term,” Holmes says.
Combining tight management of variable costs with strategic commitments supports enterprises in scaling intelligent automation without the volatility that often derails transformation.
IBM is a key sponsor of this year’s Intelligent Automation Conference Global in London on 4-5 February 2026. Greg Holmes and other experts will be sharing their insights during the event. Be sure to check out the day one panel session, Scaling Intelligent Automation Successfully: Frameworks, Risks, and Real-World Lessons, to hear more from Holmes and swing by IBM’s booth at stand #362.