Artificial Intelligence
A vague study on Nazi bots created chaos in the Taylor Swift fan universe
On December 9th, Rolling Stone published a story that some saw as a bombshell: a network of coordinated, “inauthentic” social media accounts had a hand in the weekslong discourse that trailed the release of Taylor Swift’s recent album, The Life of a Showgirl.
It was a big deal for those in the Swiftie/anti-Swiftie universe. Immediately following the record’s release in October, discussion of Showgirl was fan- and critic-driven — passionate but fairly calm. Listeners debated the meaning of songs, analyzed the flood of material for hidden meanings, and questioned whether the music was even good. Some fans took issue with specific lyrics, especially around Swift’s use of slang or metaphors. But at some point the discussion took a turn, and soon the tenor on social media was about whether Swift was hiding Nazi imagery in her output, or whether she was secretly MAGA. (The musician endorsed Kamala Harris for president in 2024.) Soon enough, in corners of the internet, the album release was consumed by fights over whether Swift was signaling a hard right-wing pivot.
On its surface, the conversations might seem like standard fandom and anti-fandom, prompted by a much-hyped album from an artist that a lot of people have big feelings about. But the cycle of trending discourse snowballing into wall-to-wall social media activity is more than a fan rabbit hole: it’s an example of how uneven incentives turbocharge the sludge in our contemporary media ecosystem.
Two months later, new research by a little-known social listening firm seemed to upend what the public knew about how that viral discourse spread. Rolling Stone reported on research compiled by a company called Gudea, which promises clients “early visibility into rising narratives” on social media platforms. Gudea had analyzed 24,679 posts from 18,213 users on 14 different online platforms as they discussed Swift in the days following the album release. According to the report, “inauthentic” narratives that started on fringe platforms like 4chan eventually jumped to other more mainstream platforms like X and TikTok, where real people began debating whether Swift was pushing Nazi symbols and comparing Swift with Kanye West.
“This demonstrates how a strategically seeded falsehood can convert into widespread authentic discourse, reshaping public perception even when most users do not believe the originating claim,” the report reads. For some Swift fans, it was incontrovertible proof that negative discourse was the work of bots and agents of chaos. They took a victory lap.
But now Gudea’s report and Rolling Stone’s coverage have triggered a second wave of arguments that have at times spiraled into a new and ever-expanding web of theories: about Swift and her covert PR moves, Gudea and Rolling Stone, and the very act of posting online.
“No, you goofballs — Taylor Swift is not the hapless victim of a bot campaign,” one TikTok with 418,000 views begins. “You just don’t have any media literacy and you got bamboozled.”
On the other end of the spectrum, a separate TikTok user shared a video that amounted to “I told you so,” defending Swift and sharing the Rolling Stone piece. “So many opportunities to be smug this year,” the caption read. “I am taking all of them.”
And it all started with a bare-bones report that threw gasoline on a perpetual fire.
Gudea is part social listening, part public relations firm: “By translating complex online activity into clear, decision-ready intelligence, GUDEA helps communicators act sooner, proactively manage risk, and respond with confidence,” the company website reads. The startup has been around since 2023, but when the report was published, the company website was sparse. It’s the only report of its kind on the site, and there’s little information about Gudea’s personnel or previous clients. That led to accusations by Swift critics that Gudea was spun up and hired specifically to publish a report that was sympathetic to the pop star — perhaps even colluding with Swift and Rolling Stone to launder her image via the press. Some claimed Gudea was an “AI company” deploying generative AI to discredit legitimate critiques of The Life of a Showgirl. And perhaps most offensively: that the report was taking criticism coming from real people and writing it off as bot behavior.
There was plenty of organic criticism of the album and Swift’s persona more generally, particularly around lyrics and symbolism that some listeners — especially Black women — called out as racist. It follows yearslong commentary about Swift’s role in pop culture and politics as a rich, powerful white woman who can bend national conversations to her will and command attention for her personal projects, but only when she wants to. Some people took the Gudea report to be saying that these real, human-driven frustrations were, essentially, not real.
“Fuck you @rollingstone for saying that the outrage against Taylor Swift was “bot manufactured” when the outrage came from Black Women. Actual. Human. Beings,” a post on Threads reads. “Are you teaming up with American Eagle? … Because the racism is LOUD.”
Miles Klee, the Rolling Stone reporter who covered Gudea’s findings, told The Verge that the outlet did not commission the report and that Swift is not a Gudea client.
“Contrary to some readings of the article, it does not suggest that every account alleging that Swift supports Trump or harbors white supremacist views was part of an influence network,” Klee wrote in an email. “Certainly, there were and are many people making those claims in earnest. But a significant amount of this content has come from a small subset of coordinated accounts that don’t behave like typical social media users. The public should understand that when they see extreme rhetoric online, it may originate from bad actors looking to manipulate the conversation.”
Jessica Maddox, an associate professor at the University of Georgia who studies social media, says that the conversation following the album release had all the hallmarks of inauthentic activity that she teaches her students to look for.
“It’s a beautiful, sunny day, and then all of a sudden … out of the blue, here comes the afternoon thunderstorm. It’s a bad one, there’s wind, there’s rain, there’s lightning, thunder, and then almost as fast as it came on, it’s gone,” Maddox says. “It has dissipated and it’s back to being a sunny day. Bot activity is kind of like that.”
Inauthentic engagement is intense, short-lived, and includes repeated refrains or sayings that recur across posts and users. Maddox — who intentionally discloses that she is a fan of Swift — also found it curious that the flavor of discourse was similar across multiple platforms. Typically when content jumps platforms there’s a sense of judgment or ridicule, she says: X users laughing about the weird thing TikTok users are doing, for example.
“I saw more of what felt like copy-and-paste topics and refrains and ideas being moved around almost too neatly compared to how discourse normally functions online,” Maddox told The Verge.
The Gudea report tracks how narratives emerged and proliferated over the course of several weeks, finding that the 3.77 percent of users displaying nontypical behavior drove more than a quarter of the volume of discussion on platforms. (Gudea defines inauthentic accounts as those that “operate in ways that distort the online conversation,” like having automated posting patterns, repeating identical messages at scale, or coordinating with networks of other accounts.) Gudea also mapped and clustered different narratives that were circulating, and found that three topics were amplified by “nontypical” accounts: Nazi symbolism and conspiracies, allegations that Swift is MAGA, and the politicization of Swift’s relationship with NFL player Travis Kelce.
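Gudea hasn’t published its detection code, but the criteria it lists (automated posting cadence, identical messages repeated at scale, coordination across accounts) map onto fairly simple heuristics. Here is a minimal, hypothetical sketch of how such flagging could work; the thresholds and data shapes are illustrative assumptions, not Gudea’s actual method.

```python
from collections import Counter
from statistics import pstdev

# Hypothetical post record: (user_id, timestamp_seconds, text).
# Thresholds are illustrative guesses, not Gudea's criteria.
MIN_POSTS = 5
CADENCE_STDDEV_SECS = 30     # near-constant posting intervals suggest automation
DUPLICATE_TEXT_SHARE = 0.5   # half of a user's posts being identical is suspicious

def flag_nontypical(posts):
    """Return user_ids whose behavior matches simple inauthenticity heuristics."""
    by_user = {}
    for user, ts, text in posts:
        by_user.setdefault(user, []).append((ts, text))

    flagged = set()
    for user, items in by_user.items():
        if len(items) < MIN_POSTS:
            continue
        times = sorted(ts for ts, _ in items)
        gaps = [b - a for a, b in zip(times, times[1:])]
        # 1. Automated cadence: intervals between posts barely vary.
        if pstdev(gaps) < CADENCE_STDDEV_SECS:
            flagged.add(user)
        # 2. Copy-paste behavior: the same text repeated at scale.
        top_count = Counter(text for _, text in items).most_common(1)[0][1]
        if top_count / len(items) >= DUPLICATE_TEXT_SHARE:
            flagged.add(user)
    return flagged
```

A real system would also look at cross-account coordination, which this per-user sketch deliberately leaves out.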
But lost in much of the discourse — and coverage — around the report is that Gudea acknowledges the vast majority of people were acting like typical users and that much of the discourse was “stable and free from inorganic influence.” Gudea found that discussion around cultural appropriation and Swift’s use of African American Vernacular English (AAVE) was authentic, as were general critiques of the album quality and meta commentary around Swift’s wealth and ethics. Gudea says the clearest example of inauthentic actors sowing the seeds of discourse came when accusations that Swift was using Nazi imagery successfully prompted real people to compare her to Kanye West.
“Typical users flood in, not to support the conspiracy, but to contextualize it, criticize it, or draw comparisons to Kanye West,” the report reads. “This surge ironically strengthens the narrative’s visibility by increasing conversation volume and engagement velocity.” Gudea says the narrative began on 4chan and subsequently moved through Discord, Reddit, Bluesky, and X.
“If you don’t go past the headline, it is easy and completely valid and fair to feel like, wait a minute, I am a human. I did feel these things. Why am I almost being gaslit by a company?” Maddox says. “You are being called essentially a liar and inauthentic and not human, which in this age of AI, I can’t think of anything more insulting.”
The report’s findings are fairly narrow — but once the top-line numbers came out, they were disseminated, decontextualized, and reshared for maximum attention. As happens in many intense fan and anti-fan communities, nuance was lost as the news rippled outward and new theories and repeated falsehoods took hold. Keith Presley, cofounder and CEO of Gudea, told The Verge in an email that the report was produced independently and that the company was not asked by any outside party to put it together. Presley said Gudea contacted “counsel believed to represent Taylor Swift using publicly available legal contact information” after the report was completed; Gudea didn’t hear back.
“Gudea does not serve as an arbiter of truth. Our objective is to illuminate the underlying structure of how content is repurposed and disseminated in a coordinated manner to influence broader discourse,” Presley said in an email. “Whether a narrative is true or false is not the analytical focus; rather, we examine how actors generate polarization, segment audiences into opposing camps, and manipulate platform algorithms to achieve strategic or harmful outcomes.”
Swift’s publicist Tree Paine did not respond after The Verge said her request to be “Off the record but on background only and not to be quoted” violated our long-standing background policy for communications professionals.
The report was provided exclusively to Rolling Stone, Presley says, because it aligned with the writer’s beat — a typical arrangement for a small firm hoping to generate media coverage for its services. Presley also clarified that the company uses generative AI only at the final interpretive stage of reports; deep learning models are used to identify patterns from large amounts of data collected from hundreds of platforms. Presley says that instead of using simple keyword searches to collect posts, Gudea uses “entity-based monitoring and platform-wide ingestion” across hundreds of sources to pull content referencing Swift, her album, and associated narratives. The posts were then grouped based on the theme of the discussion.
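Gudea doesn’t detail how that grouping works. One commonplace approach, offered here purely as an assumption rather than a description of Gudea’s pipeline, is to vectorize each post’s text and cluster the results; a minimal sketch with scikit-learn:

```python
# Generic theme-grouping sketch using TF-IDF + k-means clustering.
# This is an assumed, commonplace approach -- Gudea has not published its pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

posts = [
    "the album art hides nazi symbolism",      # illustrative placeholder posts
    "taylor swift is secretly maga",
    "the kelce relationship is being politicized",
    "honestly the songwriting is just weak this time",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(posts)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

for post, label in zip(posts, labels):
    print(label, post)  # posts sharing a label land in the same "theme"
```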
But the report itself has issues, Maddox points out. There is no detailed methodology, for one; very few details about how the sample was collected; and not much information on what statistical tests were conducted. There is no breakdown of posts and users — how many came from 4chan versus X, for example — or sample posts. There are no research questions listed that Gudea sought to investigate, with the company instead telling Rolling Stone that the report was prompted by a “gut feeling” from someone on staff. The report, and by extension the news coverage of it, was thin on details and demonstrable evidence. The Rolling Stone piece also clearly hit a nerve with how it characterized the backlash to the album, describing accusations of racism or fascism as “ridiculous” and “bizarre” and perhaps putting too much stock in a surface-level analysis.
If Taylor Swift were actually involved, Maddox jokes, she would have done a better job. The report is neither a slam dunk for Swift superfans nor a smear campaign against real, human critics — it is somewhere in between, pointing to important findings that were communicated sloppily. It requires nuance, qualifications, and further investigation; in other words, the opposite of the immediacy and virality we chase online.
“The speed at which we tackle viral events is actually pretty horrific and unsustainable, I think for our mental health and for our just general sense of being in a culture,” Maddox says. Social platforms have incentivized scale and speed over anything else, and content creators and influencers respond to that: They swarm conversations as they emerge in real time, flocking to wherever the action is and then leaving the topic behind when there’s something new to talk about. Bad information gets passed around like a game of telephone, distorting and watering down the original reporting. A user might make 10 videos about Taylor Swift one day and then move on the next day when reach has flatlined. In other words, we all start to act a bit like bots.
Artificial Intelligence
Ronnie Sheth, CEO, SENEN Group: Why now is the time for enterprise AI to ‘get practical’
Before you set sail on your AI journey, always check the state of your data – because if there is one thing likely to sink your ship, it is data quality.
Gartner estimates that poor data quality costs organisations an average of $12.9 million each year in wasted resources and lost opportunities. That’s the bad news. The good news is that organisations are increasingly understanding the importance of their data quality – and less likely to fall into this trap.
That’s the view of Ronnie Sheth, CEO of AI strategy, execution and governance firm SENEN Group. The company focuses on data and AI advisory, operationalisation and literacy, and Sheth notes she has been in the data and AI space ‘ever since [she] was a corporate baby’, so there is plenty of real-world experience behind the viewpoint. There is also plenty of success; Sheth notes that her company has a 99.99% client repeat rate.
“If I were to be very practical, the one thing I’ve noticed is companies jump into adopting AI before they’re ready,” says Sheth. Companies, she notes, will have an executive direction insisting they adopt AI, but without a blueprint or roadmap to accompany it. The result may be impressive user numbers, but with no measurable outcome to back anything up.
Even as recently as 2024, Sheth saw many organisations struggling because their data was ‘nowhere where it needed to be’. “Not even close,” she adds. Now, the conversation has turned more practical and strategic. Companies are realising this and coming to SENEN Group initially to get help with their data, rather than wanting to adopt AI immediately.
“When companies like that come to us, the first course of order is really fixing their data,” says Sheth. “The next course of order is getting to their AI model. They are building a strong foundation for any AI initiative that comes after that.
“Once they fix their data, they can build as many AI models as they want, and they can have as many AI solutions as they want, and they will get accurate outputs because now they have a strong foundation,” Sheth adds.
With breadth and depth in expertise, SENEN Group allows organisations to right their course. Sheth notes the example of one customer who came to them wanting a data governance initiative. Ultimately, it was the data strategy which was needed – the why and how, the outcomes of what they were trying to do with their data – before adding in governance and providing a roadmap for an operating model. “They’ve moved from raw data to descriptive analytics, moving into predictive analytics, and now we’re actually setting up an AI strategy for them,” says Sheth.
It is this attitude and requirement for practical initiatives which will be the cornerstone of Sheth’s discussion at AI & Big Data Expo Global in London this week. “Now would be the time to get practical with AI, especially enterprise AI adoption, and not think about ‘look, we’re going to innovate, we’re going to do pilots, we’re going to experiment,’” says Sheth. “Now is not the time to do that. Now is the time to get practical, to get AI to value. This is the year to do that in the enterprise.”
Artificial Intelligence
Apptio: Why scaling intelligent automation requires financial rigour
Greg Holmes, Field CTO for EMEA at Apptio, an IBM company, argues that successfully scaling intelligent automation requires financial rigour.
The “build it and they will come” model of technology adoption often leaves a hole in the budget when applied to automation. Executives frequently find that successful pilot programmes do not translate into sustainable enterprise-wide deployments because initial financial modelling ignored the realities of production scaling.
“When we integrate FinOps capabilities with automation, we’re looking at a change from being very reactive on cost management to being very proactive around value engineering,” says Holmes.
This shifts the assessment criteria for technical leaders. Rather than waiting “months or years to assess whether things are getting value,” engineering teams can track resource consumption – such as cost per transaction or API call – “straight from the beginning.”
The unit economics of scaling intelligent automation
Innovation projects face a high mortality rate. Holmes notes that around 80 percent of new innovation projects fail, often because financial opacity during the pilot phase masks future liabilities.
“If a pilot demonstrates that automating a process saves, say, 100 hours a month, leadership thinks that’s really successful,” says Holmes. “But what it fails to track is that the pilot sometimes is running on over-provisioned infrastructure, so it looks like it performs really well. But you wouldn’t over-provision to that degree during a real production rollout.”
Moving that workload to production changes the calculus. The requirements for compute, storage, and data transfer increase. “API calls can multiply, exceptions and edge cases appear at volume that might have been out of scope for the pilot phase, and then support overheads just grow as well,” he adds.
To prevent this, organisations must track the marginal cost at scale. This involves monitoring unit economics, such as the cost per customer served or cost per transaction. If the cost per customer increases as the customer base grows, the business model is flawed.
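As a toy illustration of that check (all figures invented for the example), divide total cost by units served at each stage and watch the trend:

```python
# Toy unit-economics check: figures are invented for illustration only.
stages = {
    "pilot":      {"monthly_cost": 20_000,    "customers_served": 1_000},
    "production": {"monthly_cost": 1_200_000, "customers_served": 50_000},
}

for name, s in stages.items():
    unit_cost = s["monthly_cost"] / s["customers_served"]
    print(f"{name}: ${unit_cost:.2f} per customer")

# pilot:      $20.00 per customer
# production: $24.00 per customer -> unit cost rising with scale is the red flag
```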
Conversely, effective scaling should see these unit costs decrease. Holmes cites a case study from Liberty Mutual where the insurer was able to find around $2.5 million of savings by bringing in consumption metrics and “not just looking at labour hours that they were saving.”
However, financial accountability cannot sit solely with the finance department. Holmes advocates for putting governance “back in the hands of the developers into their development tools and workloads.”
Integration with infrastructure-as-code tools like HashiCorp Terraform and GitHub allows organisations to enforce policies during deployment. Teams can spin up resources programmatically with immediate cost estimates.
“Rather than deploying things and then fixing them up, which gets into the whole whack-a-mole kind of problem,” Holmes explains, companies can verify they are “deploying the right things at the right time.”
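Holmes doesn’t name the exact tooling beyond Terraform and GitHub, but the pattern he describes, a cost estimate surfaced at deploy time with a gate before anything ships, can be sketched generically. The snippet below assumes some upstream estimator has already written a JSON cost estimate for the change; the file schema and threshold are illustrative assumptions.

```python
# Generic deploy-time cost gate: fail the pipeline if the estimated monthly
# cost of a change exceeds a budget. The estimate file and its schema are
# illustrative assumptions -- any IaC cost estimator could produce it.
import json
import sys

BUDGET_USD_PER_MONTH = 5_000.0  # illustrative policy threshold

def main(estimate_path: str) -> int:
    with open(estimate_path) as f:
        estimate = json.load(f)  # e.g. {"monthly_cost_usd": 6200.0}
    cost = float(estimate["monthly_cost_usd"])
    if cost > BUDGET_USD_PER_MONTH:
        print(f"BLOCKED: estimated ${cost:,.2f}/month exceeds ${BUDGET_USD_PER_MONTH:,.2f} budget")
        return 1
    print(f"OK: estimated ${cost:,.2f}/month within budget")
    return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1]))
```

Run as a CI step, a nonzero exit blocks the deployment, which is the “verify before you deploy” behaviour Holmes describes.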
When scaling intelligent automation, tension often simmers between the CFO, who focuses on return on investment, and the Head of Automation, who tracks operational metrics like hours saved.
“This translation challenge is precisely what TBM (Technology Business Management) and Apptio are designed to solve,” says Holmes. “It’s having a common language between technology and finance and with the business.”
The TBM taxonomy provides a standardised framework to reconcile these views. It maps technical resources (such as compute, storage, and labour) into IT towers and further up to business capabilities. This structure translates technical inputs into business outputs.
“I don’t necessarily know what goes into all the IT layers underneath it,” Holmes says, describing the business user’s perspective. “But because we’ve got this taxonomy, I can get a detailed bill that tells me about my service consumption and precisely which costs are driving it to be more expensive as I consume more.”
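In data terms, that taxonomy is a mapping from low-level cost records up through IT towers to business capabilities. A minimal sketch follows; the tower and capability names here are invented for illustration and are not the official TBM taxonomy.

```python
# Minimal cost rollup through a TBM-style taxonomy.
# Tower/capability names and figures are invented for illustration;
# the real TBM taxonomy is a published standard with far more detail.
from collections import defaultdict

# (resource, cost_usd, tower) -- raw technical cost records
records = [
    ("vm-cluster-a",         40_000, "Compute"),
    ("object-store",         15_000, "Storage"),
    ("automation-engineers", 60_000, "Labour"),
]

tower_to_capability = {
    "Compute": "Claims Processing",
    "Storage": "Claims Processing",
    "Labour":  "Customer Onboarding",
}

towers, capabilities = defaultdict(float), defaultdict(float)
for _, cost, tower in records:
    towers[tower] += cost
    capabilities[tower_to_capability[tower]] += cost

print(dict(towers))        # cost by IT tower
print(dict(capabilities))  # the same cost, expressed as business capabilities
```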
Addressing legacy debt and budgeting for the long-term
Organisations burdened by legacy ERP systems face a binary choice: automation as a patch, or as a bridge to modernisation. Holmes warns that if a company is “just trying to mask inefficient processes and not redesign them,” they are merely “building up more technical debt.”
A total cost of ownership (TCO) approach helps determine the correct strategy. The Commonwealth Bank of Australia utilised a TCO model across 2,000 different applications – of various maturity stages – to assess their full lifecycle costs. This analysis included hidden costs such as infrastructure, labour, and the engineering time required to keep automation running.
“Just because something’s legacy doesn’t mean you have to retire it,” says Holmes. “Some of those legacy systems are worth maintaining just because the value is so good.”
In other cases, calculating the cost of the automation wrappers required to keep an old system functional reveals a different reality. “Sometimes when you add up the TCO approach, and you’re including all these automation layers around it, you suddenly realise, the real cost of keeping that old system alive is not just the old system, it’s those extra layers,” Holmes argues.
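A back-of-the-envelope version of that calculation, with all figures invented, makes the point:

```python
# Toy TCO comparison for a legacy system vs. a replacement -- figures invented.
legacy_core = 300_000            # annual run cost of the old system itself
automation_wrappers = 250_000    # annual cost of the layers keeping it usable
legacy_tco = legacy_core + automation_wrappers

replacement_annualised = 450_000  # replacement cost spread over its lifetime

print(f"legacy TCO: ${legacy_tco:,}/yr vs replacement: ${replacement_annualised:,}/yr")
# Here the wrappers, not the old system, tip the balance -- Holmes's point.
```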
Avoiding sticker shock requires a budgeting strategy that balances variable costs with long-term commitments. While variable costs (OPEX) offer flexibility, they can fluctuate wildly based on demand and engineering efficiency.
Holmes advises that longer-term visibility enables better investment decisions. Committing to specific technologies or platforms over a multi-year horizon allows organisations to negotiate economies of scale and standardise architecture.
“Because you’ve made those longer term commitments and you’ve standardised on different platforms and things like that, it makes it easier to build the right thing out for the long term,” Holmes says.
Combining tight management of variable costs with strategic commitments supports enterprises in scaling intelligent automation without the volatility that often derails transformation.
IBM is a key sponsor of this year’s Intelligent Automation Conference Global in London on 4-5 February 2026. Greg Holmes and other experts will be sharing their insights during the event. Be sure to check out the day one panel session, Scaling Intelligent Automation Successfully: Frameworks, Risks, and Real-World Lessons, to hear more from Holmes and swing by IBM’s booth at stand #362.
Artificial Intelligence
FedEx tests how far AI can go in tracking and returns management
FedEx is using AI to change how package tracking and returns work for large enterprise shippers. For companies moving high volumes of goods, tracking no longer ends when a package leaves the warehouse. Customers expect real-time updates, flexible delivery options, and returns that do not turn into support tickets or delays.
That pressure is pushing logistics firms to rethink how tracking and returns operate at scale, especially across complex supply chains.
This is where artificial intelligence is starting to move from pilot projects into daily operations.
FedEx plans to roll out AI-powered tracking and returns tools designed for enterprise shippers, according to a report by PYMNTS. The tools are aimed at automating routine customer service tasks, improving visibility into shipments, and reducing friction when packages need to be rerouted or sent back.
Rather than focusing on consumer-facing chatbots, the effort centres on operational workflows that sit behind the scenes. These are the systems enterprise customers rely on to manage exceptions, returns, and delivery changes without manual intervention.
How FedEx is applying AI to package tracking
Traditional tracking systems tell customers where a package is and when it might arrive. AI-powered tracking goes a step further, using historical delivery data, traffic patterns, weather conditions, and network constraints to flag potential delays before they happen.
According to the PYMNTS report, FedEx’s AI tools are designed to help enterprise shippers anticipate issues earlier in the delivery process. Instead of reacting to missed delivery windows, shippers may be able to reroute packages or notify customers ahead of time.
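FedEx hasn’t disclosed how its models work; the PYMNTS report only names the input signals. Purely as an illustration, a delay-risk model over those signal types might look like the following sketch, where the features, training data, and threshold are all invented.

```python
# Illustrative delay-risk classifier over the signal types named in the report
# (historical performance, traffic, weather, network load). All data invented;
# FedEx has not disclosed its actual models.
from sklearn.linear_model import LogisticRegression

# Features per shipment: [route_delay_rate, traffic_index, storm_flag, hub_load]
X_train = [
    [0.02, 0.3, 0, 0.50],
    [0.15, 0.8, 1, 0.90],
    [0.05, 0.4, 0, 0.60],
    [0.20, 0.9, 1, 0.95],
]
y_train = [0, 1, 0, 1]  # 1 = delivery window missed

model = LogisticRegression().fit(X_train, y_train)

risk = model.predict_proba([[0.12, 0.7, 1, 0.8]])[0][1]
if risk > 0.5:  # illustrative threshold: reroute or notify the customer early
    print(f"High delay risk ({risk:.0%}): flag for reroute / proactive notice")
```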
For businesses that ship thousands of parcels per day, that shift matters. Small improvements in prediction accuracy can reduce support calls, lower refund rates, and improve customer trust, particularly in retail, healthcare, and manufacturing supply chains.
This approach also reflects a broader trend in enterprise software, in which AI is being embedded into existing systems rather than introduced as standalone tools. The goal is not to replace logistics teams, but to minimise the number of manual decisions they need to make.
Returns as an operational problem, not a customer issue
Returns are one of the most expensive parts of logistics. For enterprise shippers, particularly those in e-commerce, returns affect warehouse capacity, inventory planning, and transportation costs.
According to PYMNTS, FedEx’s AI-enabled returns tools aim to automate parts of the returns process, including label generation, routing decisions, and status updates. Companies that use AI to determine the most efficient return path may be able to reduce delays and avoid sending items back to the wrong facility.
This is less about convenience and more about operational discipline. Returns that sit idle or move through the wrong channel create cost and uncertainty across the supply chain. AI systems trained on past return patterns can help standardise decisions that were previously handled case by case.
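The report doesn’t describe the routing logic, but the decision it implies, picking the return destination that minimises total cost given the item and network state, is easy to sketch. A hypothetical scoring example:

```python
# Hypothetical return-path chooser: score candidate facilities and take the
# cheapest eligible one. Facilities and costs are invented for illustration.
facilities = [
    {"name": "restock-hub-east",  "shipping_cost": 6.0, "handling_cost": 1.5, "accepts_item": True},
    {"name": "refurb-center",     "shipping_cost": 9.0, "handling_cost": 0.8, "accepts_item": True},
    {"name": "liquidation-depot", "shipping_cost": 4.0, "handling_cost": 2.0, "accepts_item": False},
]

def best_return_path(options):
    eligible = [f for f in options if f["accepts_item"]]
    return min(eligible, key=lambda f: f["shipping_cost"] + f["handling_cost"])

print(best_return_path(facilities)["name"])  # -> restock-hub-east
```

A production system would fold in inventory state and seasonal volume, but the standardised, rule-driven decision is the point.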
For enterprise customers, this type of automation supports scale. As return volumes fluctuate, especially during peak seasons, systems that adjust automatically reduce the need for temporary staffing or manual overrides.
What FedEx’s AI tracking approach says about enterprise adoption
What stands out in FedEx’s approach is how narrowly focused the AI use case is. There are no broad claims about transformation or reinvention. The emphasis is on reducing friction in processes that already exist.
This mirrors how other large organisations are adopting AI internally. In a separate context, Microsoft has described a similar pattern in its own rollout: AI tools introduced gradually, with clear limits, governance rules, and feedback loops.
While Microsoft’s case focused on knowledge work and FedEx’s on logistics operations, the underlying lesson is the same. AI adoption tends to work best when applied to specific activities with measurable results rather than broad promises of efficiency.
For logistics firms, those advantages include fewer delivery exceptions, lower return handling costs, and better coordination between shipping partners and enterprise clients.
What this signals for enterprise customers
For end-user companies, FedEx’s move signals that logistics providers are investing in AI as a way to support more complex shipping demands. As supply chains become more distributed, visibility and predictability become harder to maintain without automation.
AI-driven tracking and returns could also change how businesses measure logistics performance. Companies may focus less on delivery speed and more on how quickly issues are recognised and resolved.
That shift could influence procurement decisions, contract structures, and service-level agreements. Enterprise customers may start asking not just where a shipment is, but how well a provider anticipates problems.
FedEx’s plans reflect a quieter phase of enterprise AI adoption. The focus is less on experimentation and more on integration. These systems are not designed to draw attention but to reduce noise in operations that customers only notice when something goes wrong.