Artificial Intelligence
Alibaba enters physical AI race with open-source robot model RynnBrain
Alibaba has entered the race to build AI that powers robots, not just chatbots. The Chinese tech giant this week unveiled RynnBrain, an open-source model designed to help robots perceive their environment and execute physical tasks.
The move signals China’s accelerating push into physical AI as ageing populations and labour shortages drive demand for machines that can work alongside—or replace—humans. The model positions Alibaba alongside Nvidia, Google DeepMind, and Tesla in the race to build what Nvidia CEO Jensen Huang calls “a multitrillion-dollar growth opportunity.”
Unlike its competitors, however, Alibaba is pursuing an open-source strategy—making RynnBrain freely available to developers to accelerate adoption, similar to its approach with the Qwen family of language models, which rank among China’s most advanced AI systems.
Video demonstrations released by Alibaba’s DAMO Academy show RynnBrain-powered robots identifying fruit and placing it in baskets—tasks that seem simple but require complex AI governing object recognition and precise movement.
The technology falls under the category of vision-language-action (VLA) models, which integrate computer vision, natural language processing, and motor control to enable robots to interpret their surroundings and execute appropriate actions.
Unlike traditional robots that follow preprogrammed instructions, physical AI systems like RynnBrain enable machines to learn from experience and adapt behaviour in real time. This represents a fundamental shift from automation to autonomous decision-making in physical environments—a shift with implications extending far beyond factory floors.
From prototype to production
The timing signals a broader inflexion point. According to Deloitte’s 2026 Tech Trends report, physical AI has begun “shifting from a research timeline to an industrial one,” with simulation platforms and synthetic data generation compressing iteration cycles before real-world deployment.
The transition is being driven less by technological breakthroughs than by economic necessity. Advanced economies face a stark reality: demand for production, logistics, and maintenance continues rising while labour supply increasingly fails to keep pace.
The OECD projects that working-age populations across developed nations will stagnate or decline over the coming decades as ageing accelerates.
Parts of East Asia are encountering this reality earlier than other regions. Demographic ageing, declining fertility, and tightening labour markets are already influencing automation choices in logistics, manufacturing, and infrastructure—particularly in China, Japan, and South Korea.
These environments aren’t exceptional; they’re simply ahead of a trajectory other advanced economies are likely to follow.
When it comes to humanoid robots specifically—machines designed to walk and function like humans—China is “forging ahead of the U.S.,” with companies planning to ramp up production this year, according to Deloitte.
UBS estimates there will be two million humanoids in the workplace by 2035, climbing to 300 million by 2050, representing a total addressable market between $1.4 trillion and $1.7 trillion by mid-century.
The governance gap
Yet as physical AI capabilities accelerate, a critical constraint is emerging—one that has nothing to do with model performance.
“In physical environments, failures cannot simply be patched after the fact,” according to a World Economic Forum analysis published this week. “Once AI begins to move goods, coordinate labour or operate equipment, the binding constraint shifts from what systems can do to how responsibility, authority and intervention are governed.”
Physical industries are governed by consequences, not computation. A flawed recommendation in a chatbot can be corrected in software. A robot that drops a part during handover or loses balance on a factory floor designed for humans causes operations to pause, creating cascading effects on production schedules, safety protocols, and liability chains.
The WEF framework identifies three governance layers required for safe deployment: executive governance setting risk appetite and non-negotiables; system governance embedding those constraints into engineered reality through stop rules and change controls; and frontline governance giving workers clear authority to override AI decisions.
“As physical AI accelerates, technical capabilities will increasingly converge, but governance will not,” the analysis warns. “Those that treat governance as an afterthought may see early gains, but will discover that scale amplifies fragility.”
This creates an asymmetry in the US-China competition. China’s faster deployment cycles and willingness to pilot systems in controlled industrial environments could accelerate learning curves.
However, governance frameworks that work in structured factory settings may not translate to public spaces where autonomous systems must navigate unpredictable human behaviour.
Early deployment signals
Current deployments remain concentrated in warehousing and logistics, where labour market pressures are most acute. Amazon recently deployed its millionth robot, part of a diverse fleet working alongside humans. Its DeepFleet AI model coordinates this fleet across the entire fulfilment network, which Amazon says will improve travel efficiency by 10%.
BMW is testing humanoid robots at its South Carolina factory for tasks requiring dexterity that traditional industrial robots lack: precision manipulation, complex gripping, and two-handed coordination.
The automaker is also using autonomous vehicle technology to enable newly built cars to drive themselves from the assembly line through testing to the finishing area, all without human assistance.
But applications are expanding beyond traditional industrial settings. In healthcare, companies are developing AI-driven robotic surgery systems and intelligent assistants for patient care.
Cities like Cincinnati are deploying AI-powered drones to autonomously inspect bridge structures and road surfaces. Detroit has launched a free autonomous shuttle service for seniors and people with disabilities.
The regional competitive dynamic intensified this week when South Korea announced a $692 million national initiative to produce AI semiconductors, underscoring how physical AI deployment requires not just software capabilities but domestic chip manufacturing capacity.
NVIDIA has released multiple models under its “Cosmos” brand for training and running AI in robotics. Google DeepMind offers Gemini Robotics-ER 1.5. Tesla is developing its own AI to power the Optimus humanoid robot. Each company is betting that the convergence of AI capabilities with physical manipulation will unlock new categories of automation.
As simulation environments improve and ecosystem-based learning shortens deployment cycles, the strategic question is shifting from “Can we adopt physical AI?” to “Can we govern it at scale?”
For China, the answer may determine whether its early mover advantage in robotics deployment translates into sustained industrial leadership—or becomes a cautionary tale about scaling systems faster than the governance infrastructure required to sustain them.
(Photo by Alibaba)
See also: EY and NVIDIA to help companies test and deploy physical AI
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events. Click here for more information.
AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.
What Murder Mystery 2 reveals about emergent behaviour in online games
Murder Mystery 2, commonly known as MM2, is often categorised as a simple social deduction game in the Roblox ecosystem. At first glance, its structure appears straightforward. One player becomes the murderer, another the sheriff, and the remaining participants attempt to survive. However, beneath the surface lies a dynamic behavioural laboratory that offers valuable insight into how artificial intelligence research approaches emergent decision-making and adaptive systems.
MM2 functions as a microcosm of distributed human behaviour in a controlled digital environment. Each round resets roles and variables, creating fresh conditions for adaptation. Players must interpret incomplete information, predict opponents’ intentions and react in real time. These characteristics closely resemble the types of uncertainty modelling that AI systems attempt to replicate.
Role randomisation and behavioural prediction
One of the most compelling design elements in MM2 is randomised role assignment. Because no player knows the murderer at the start of a round, behaviour becomes the primary signal for inference. Sudden movement changes, unusual positioning or hesitations can trigger suspicion.
From an AI research perspective, this environment mirrors anomaly detection challenges. Systems trained to identify irregular patterns must distinguish between natural variance and malicious intent. In MM2, human players perform a similar function instinctively.
The sheriff’s decision making reflects predictive modelling. Acting too early risks eliminating an innocent player. Waiting too long increases vulnerability. The balance between premature action and delayed response parallels risk optimisation algorithms.
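That trade-off can be sketched as a toy expected-value comparison. The probabilities, rewards, and penalties below are illustrative assumptions for the sake of the analogy, not values taken from the game:

```python
# Toy sketch: the sheriff's shoot-or-wait decision framed as an
# expected-value comparison. All numbers are made-up assumptions.

def expected_value_of_shooting(p_murderer: float,
                               reward_correct: float = 1.0,
                               penalty_wrong: float = -1.0) -> float:
    """Expected payoff of shooting a suspect believed to be the
    murderer with probability p_murderer."""
    return p_murderer * reward_correct + (1 - p_murderer) * penalty_wrong

def should_shoot(p_murderer: float, ev_waiting: float = 0.0) -> bool:
    """Act only when the expected value of shooting now exceeds
    the expected value of waiting for more evidence."""
    return expected_value_of_shooting(p_murderer) > ev_waiting

print(should_shoot(0.8))  # strong suspicion -> True (act)
print(should_shoot(0.3))  # weak suspicion  -> False (wait)
```

In risk-optimisation terms, the sheriff is comparing the expected cost of a premature shot against the expected cost of continued exposure to the murderer.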
Social signalling and pattern recognition
MM2 also demonstrates how signalling influences collective decision making. Players often attempt to appear non-threatening or cooperative, and these social cues affect survival probabilities.
In AI research, multi-agent systems rely on signalling mechanisms to coordinate or compete. MM2 offers a simplified but compelling demonstration of how deception and information asymmetry influence outcomes.
Repeated exposure allows players to refine their pattern recognition abilities. They learn to identify behavioural markers associated with certain roles. The iterative learning process resembles reinforcement learning cycles in artificial intelligence.
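A minimal sketch of that cycle, loosely in the style of a reinforcement-learning value update, might track how reliably each behavioural cue has predicted the murderer across rounds. The cue names and learning rate are illustrative assumptions, not mechanics from the game:

```python
# Illustrative sketch: refining suspicion weights across rounds,
# analogous to a simple reinforcement-learning update rule.
# Cue names and the learning rate are made-up assumptions.

cues = {"sudden_sprint": 0.5, "camping_spawn": 0.5, "idle": 0.5}
ALPHA = 0.2  # learning rate: how fast new evidence shifts beliefs

def update(cue: str, was_murderer: bool) -> None:
    """Nudge a cue's weight toward 1 if it preceded a murderer
    reveal this round, toward 0 if it turned out to be noise."""
    target = 1.0 if was_murderer else 0.0
    cues[cue] += ALPHA * (target - cues[cue])

# Over repeated rounds, weights drift toward each cue's observed
# reliability -- the iterative learning the article describes.
for _ in range(10):
    update("sudden_sprint", True)   # cue kept predicting the murderer
    update("idle", False)           # cue kept being innocent noise

print(cues["sudden_sprint"] > cues["idle"])  # True
```

The point of the analogy is the feedback loop, not the specific rule: each round's reveal acts as a reward signal that reshapes the player's behavioural priors.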
Digital asset layers and player motivation
Beyond core gameplay, MM2 includes collectable weapons and cosmetic items that influence player engagement. These items do not change fundamental mechanics but alter perceived status in the community.
Digital marketplaces have formed around this ecosystem. Some players turn to external services connected to an MM2 shop when evaluating cosmetic inventories or specific rare items. Platforms like Eldorado operate in this broader virtual asset landscape. As with any digital transaction environment, adherence to platform rules and account security awareness remains essential.
From a systems design standpoint, the presence of collectable layers introduces extrinsic motivation without disrupting the underlying deduction mechanics.
Emergent complexity from simple rules
The deepest insight MM2 provides is how simple rule sets generate complex interaction patterns. There are no elaborate skill trees or expansive maps. Yet each round unfolds differently due to human unpredictability.
AI research increasingly examines how minimal constraints can produce adaptive outcomes. MM2 demonstrates that complexity does not require excessive features. It requires variable agents interacting under structured uncertainty.
The environment becomes a testing ground for studying cooperation, suspicion, deception and reaction speed in a repeatable digital framework.
Lessons for artificial intelligence modelling
Games like MM2 illustrate how controlled digital spaces can simulate aspects of real world unpredictability. Behavioural variability, limited information and rapid adaptation form the backbone of many AI training challenges.
By observing how players react to ambiguous conditions, researchers can better understand decision latency, risk tolerance and probabilistic reasoning. While MM2 was designed for entertainment, its structure aligns with important questions in artificial intelligence research.
Conclusion
Murder Mystery 2 highlights how lightweight multiplayer games can reveal deeper insights into behavioural modelling and emergent complexity. Through role randomisation, social signalling and adaptive play, it offers a compact yet powerful example of distributed decision making in action.
As AI systems continue to evolve, environments like MM2 demonstrate the value of studying human interaction in structured uncertainty. Even the simplest digital games can illuminate the mechanics of intelligence itself.
Image source: Unsplash
AI forecasting model targets healthcare resource efficiency
An operational AI forecasting model developed by University of Hertfordshire researchers aims to improve resource efficiency within healthcare.
Public sector organisations often hold large archives of historical data that do not inform forward-looking decisions. A partnership between the University of Hertfordshire and regional NHS health bodies addresses this issue by applying machine learning to operational planning. The project analyses healthcare demand to assist managers with decisions regarding staffing, patient care, and resources.
Most AI initiatives in healthcare focus on individual diagnostics or patient-level interventions. The project team notes that this tool targets system-wide operational management instead. This distinction matters for leaders evaluating where to deploy automated analysis within their own infrastructure.
The model uses five years of historical data to build its projections. It integrates metrics such as admissions, treatments, re-admissions, bed capacity, and infrastructure pressures. The system also accounts for workforce availability and local demographic factors including age, gender, ethnicity, and deprivation.
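The article does not describe the model's internals, but the core idea of projecting demand from historical records can be illustrated with a minimal sketch. This is not the Hertfordshire model; it is a simple least-squares trend fit on made-up monthly admission counts, purely to show how archived data becomes a forward-looking estimate:

```python
# Minimal illustration of demand forecasting from historical data.
# NOT the University of Hertfordshire model: a plain least-squares
# trend fit on invented monthly admission counts.

def fit_trend(values):
    """Ordinary least-squares fit of y = a + b*t over time steps t."""
    n = len(values)
    ts = range(n)
    t_mean = sum(ts) / n
    y_mean = sum(values) / n
    b = sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, values)) \
        / sum((t - t_mean) ** 2 for t in ts)
    a = y_mean - b * t_mean
    return a, b

def forecast(values, steps_ahead):
    """Extrapolate the fitted trend steps_ahead periods past the data."""
    a, b = fit_trend(values)
    return a + b * (len(values) - 1 + steps_ahead)

admissions = [100, 104, 103, 108, 110, 113]  # illustrative monthly counts
print(round(forecast(admissions, 3)))  # 120
```

A production system would layer in the demographic, workforce, and capacity variables the article lists, but the "do nothing" projection it describes is conceptually this kind of extrapolation: what the trend implies if no intervention changes it.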
Iosif Mporas, Professor of Signal Processing and Machine Learning at the University of Hertfordshire, leads the project. The team includes two full-time postdoctoral researchers and will continue development through 2026.
“By working together with the NHS, we are creating tools that can forecast what will happen if no action is taken and quantify the impact of a changing regional demographic on NHS resources,” said Professor Mporas.
Using AI for forecasting in healthcare operations
The model produces forecasts showing how healthcare demand is likely to change. It models the impact of these changes in the short-, medium-, and long-term. This capability allows leadership to move beyond reactive management.
Charlotte Mullins, Strategic Programme Manager for NHS Herts and West Essex, commented: “The strategic modelling of demand can affect everything from patient outcomes including the increased number of patients living with chronic conditions.
“Used properly, this tool could enable NHS leaders to take more proactive decisions and enable delivery of the 10-year plan articulated within the Central East Integrated Care Board as our strategy document.”
The University of Hertfordshire Integrated Care System partnership funds the work, which began last year. Testing of the AI model tailored for healthcare operations is currently underway in hospital settings. The project roadmap includes extending the model to community services and care homes.
This expansion aligns with structural changes in the region. The Hertfordshire and West Essex Integrated Care Board serves 1.6 million residents and is preparing to merge with two neighbouring boards. This merger will create the Central East Integrated Care Board. The next phase of development will incorporate data from this wider population to improve the predictive accuracy of the model.
The initiative demonstrates how legacy data can drive cost efficiencies and shows that predictive models can inform “do nothing” assessments and resource allocation in complex service environments like the NHS. The project highlights the necessity of integrating varied data sources – from workforce numbers to population health trends – to create a unified view for decision-making.
See also: Agentic AI in healthcare: How Life Sciences marketing could achieve $450B in value by 2028
Agentic AI drives finance ROI in accounts payable automation
Finance leaders are driving ROI using agentic AI for accounts payable automation, turning manual tasks into autonomous workflows.
While general AI projects saw return on investment rise to 67 percent last year, autonomous agents delivered an average ROI of 80 percent by handling complex processes without human intervention. This performance gap demands a change in how CIOs allocate automation budgets.
Agentic AI systems are now advancing the enterprise from theoretical value to hard returns. Unlike generative tools that summarise data or draft text, these agents execute workflows within strict rules and approval thresholds.
Boardroom pressure drives this pivot. A report by Basware and FT Longitude finds nearly half of CFOs face demands from leadership to implement AI across their operations. Yet 61 percent of finance leaders admit their organisations rolled out custom-developed AI agents largely as experiments to test capabilities rather than to solve business problems.
These experiments often fail to pay off. Traditional AI models generate insights or predictions that require human interpretation. Agentic systems close the gap between insight and action by embedding decisions directly into the workflow.
Jason Kurtz, CEO of Basware, explains that patience for unstructured experimentation is running low. “We’ve reached a tipping point where boards and CEOs are done with AI experiments and expecting real results,” he says. “AI for AI’s sake is a waste.”
Accounts payable as the proving ground for agentic AI in finance
Finance departments now direct these agents toward high-volume, rules-based environments. Accounts payable (AP) is the primary use case, with 72 percent of finance leaders viewing it as the obvious starting point. The process fits agentic deployment because it involves structured data: invoices enter, require cleaning and compliance checks, and result in a payment booking.
Teams use agents to automate invoice capture and data entry, a daily task for 20 percent of leaders. Other live deployments include detecting duplicate invoices, identifying fraud, and reducing overpayments. These are not hypothetical applications; they represent tasks where an algorithm functions with high autonomy when parameters are correct.
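Duplicate detection is a good example of a rules-based task with structured data. The sketch below is not Basware's system; the fields and normalisation rules are assumptions, chosen to show why near-identical invoices (differing only in case, whitespace, or formatting) need normalising before comparison:

```python
# Toy sketch of duplicate-invoice detection via normalised keys.
# Not Basware's system: fields and normalisation rules are assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class Invoice:
    supplier: str
    number: str
    amount_cents: int

def duplicate_key(inv: Invoice) -> tuple:
    """Normalise fields that often vary between copies of the same
    invoice: letter case and stray whitespace."""
    return (inv.supplier.strip().lower(),
            inv.number.strip().lower(),
            inv.amount_cents)

def find_duplicates(invoices):
    """Return (first_seen, duplicate) pairs sharing a normalised key."""
    seen, dupes = {}, []
    for inv in invoices:
        key = duplicate_key(inv)
        if key in seen:
            dupes.append((seen[key], inv))
        else:
            seen[key] = inv
    return dupes

batch = [
    Invoice("Acme Ltd", "INV-0042", 125_00),
    Invoice("acme ltd ", "inv-0042", 125_00),  # same invoice, re-entered
    Invoice("Acme Ltd", "INV-0043", 99_00),
]
print(len(find_duplicates(batch)))  # 1
```

An agentic deployment would wrap logic like this in approval thresholds: auto-reject clear duplicates, escalate ambiguous matches to a human.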
Success in this sector relies on data quality. Basware trains its systems on a dataset of more than two billion processed invoices to deliver context-aware predictions. This structured data allows the system to differentiate between legitimate anomalies and errors without human oversight.
Kevin Kamau, Director of Product Management for Data and AI at Basware, describes AP as a “proving ground” because it combines scale, control, and accountability in a way few other finance processes can.
The build versus buy decision matrix
Technology leaders must next decide how to procure these capabilities. The term “agent” currently covers everything from simple workflow scripts to complex autonomous systems, which complicates procurement.
Approaches split by function. In accounts payable, 32 percent of finance leaders prefer agentic AI embedded in existing software, compared to 20 percent who build them in-house. For financial planning and analysis (FP&A), 35 percent opt for self-built solutions versus 29 percent for embedded ones.
This divergence suggests a pragmatic rule for the C-suite. If the AI improves a process shared across many organisations, such as AP, embedding it via a vendor solution makes sense. If the AI creates a competitive advantage unique to the business, building in-house is the better path. Leaders should buy to accelerate standard processes and build to differentiate.
Governance as an enabler of speed
Fear of autonomous error slows adoption. Almost half of finance leaders (46%) will not consider deploying an agent without clear governance. This caution is rational; autonomous systems require strict guardrails to operate safely in regulated environments.
Yet the most successful organisations do not let governance stop deployment. Instead, they use it to scale. These leaders are significantly more likely to use agents for complex tasks like compliance checks (50%) compared to their less confident peers (6%).
Anssi Ruokonen, Head of Data and AI at Basware, advises treating AI agents like junior colleagues. The system requires trust but should not make large decisions immediately. He suggests testing thoroughly and introducing autonomy slowly, ensuring a human remains in the loop to maintain responsibility.
Digital workers raise concerns regarding displacement. A third of finance leaders believe job displacement is already happening. Proponents argue agents shift the nature of work rather than eliminating it.
Automating manual tasks such as information extraction from PDFs frees staff to focus on higher-value activities. The goal is to move from task efficiency to operating leverage, allowing finance teams to manage faster closes and make better liquidity decisions without increasing headcount.
Organisations that use agentic AI extensively report higher returns. Leaders who deploy agentic AI tools daily for tasks like accounts payable achieve better outcomes than those who limit usage to experimentation. Confidence grows through controlled exposure; successful small-scale deployments lead to broader operational trust and increased ROI.
Executives must move beyond unguided experimentation to replicate the success of early adopters. Data shows that 71 percent of finance teams with weak returns acted under pressure without clear direction, compared to only 13 percent of teams achieving strong ROI.
Success requires embedding AI directly into workflows and governing agents with the discipline applied to human employees. “Agentic AI can deliver transformational results, but only when it is deployed with purpose and discipline,” concludes Kurtz.
See also: AI deployment in financial services hits an inflection point as Singapore leads the shift to production