Artificial Intelligence
What Murder Mystery 2 reveals about emergent behaviour in online games
Murder Mystery 2, commonly known as MM2, is often categorised as a simple social deduction game in the Roblox ecosystem. At first glance, its structure appears straightforward. One player becomes the murderer, another the sheriff, and the remaining participants attempt to survive. However, beneath the surface lies a dynamic behavioural laboratory that offers valuable insight into how artificial intelligence research approaches emergent decision-making and adaptive systems.
MM2 functions as a microcosm of distributed human behaviour in a controlled digital environment. Each round resets roles and variables, creating fresh conditions for adaptation. Players must interpret incomplete information, predict opponents’ intentions and react in real time. These characteristics closely resemble the types of uncertainty modelling that AI systems attempt to replicate.
Role randomisation and behavioural prediction
One of the most compelling design elements in MM2 is randomised role assignment. Because no player knows the murderer at the start of a round, behaviour becomes the primary signal for inference. Sudden movement changes, unusual positioning or hesitations can trigger suspicion.
From an AI research perspective, this environment mirrors anomaly detection challenges. Systems trained to identify irregular patterns must distinguish between natural variance and malicious intent. In MM2, human players perform a similar function instinctively.
The sheriff’s decision making reflects predictive modelling. Acting too early risks eliminating an innocent player. Waiting too long increases vulnerability. The balance between premature action and delayed response parallels risk optimisation algorithms.
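To make the parallel concrete, the sketch below, which is purely illustrative and not drawn from the game or any real detection system, scores one player's movement against the group baseline and lowers the action threshold as time runs out, mirroring the sheriff's trade-off between acting early and waiting.

```python
import numpy as np

# Toy sketch: score how far a player's recent movement deviates from the
# group baseline, then decide whether acting now is worth the risk.
# All numbers and feature names are illustrative, not taken from MM2.

def anomaly_score(player_speeds, group_speeds):
    """Z-score of a player's mean speed against the group distribution."""
    baseline_mean = np.mean(group_speeds)
    baseline_std = np.std(group_speeds) + 1e-9  # avoid division by zero
    return abs(np.mean(player_speeds) - baseline_mean) / baseline_std

def should_act(score, rounds_remaining, act_threshold=2.5, urgency_bonus=0.1):
    """Act when suspicion is high enough, lowering the bar as time runs out."""
    effective_threshold = act_threshold - urgency_bonus * (10 - rounds_remaining)
    return score >= effective_threshold

group = np.random.normal(16.0, 2.0, size=200)    # typical movement speeds
suspect = np.random.normal(22.0, 2.0, size=20)   # one player behaving oddly
score = anomaly_score(suspect, group)
print(f"suspicion score: {score:.2f}, act now: {should_act(score, rounds_remaining=3)}")
```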
Social signalling and pattern recognition
MM2 also demonstrates how signalling influences collective decision making. Players often attempt to appear non-threatening or cooperative. These social cues affect survival probabilities.
In AI research, multi-agent systems rely on signalling mechanisms to coordinate or compete. MM2 offers a simplified but compelling demonstration of how deception and information asymmetry influence outcomes.
Repeated exposure allows players to refine their pattern recognition abilities. They learn to identify behavioural markers associated with certain roles. The iterative learning process resembles reinforcement learning cycles in artificial intelligence.
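A minimal sketch of that loop, with invented cue names and update rates, might nudge suspicion weights after each round depending on whether a cue actually pointed at the murderer; the same shape of update appears in simple reinforcement learning.

```python
# Toy sketch of the reinforcement-style loop described above. Cue names,
# learning rate and round outcomes are invented for illustration only.

learning_rate = 0.1
cue_weights = {"camping": 0.5, "erratic_movement": 0.5, "shadowing_players": 0.5}

def update_weights(observed_cues, suspect_was_murderer):
    """Nudge each observed cue's weight toward 1 on a hit, toward 0 on a miss."""
    target = 1.0 if suspect_was_murderer else 0.0
    for cue in observed_cues:
        cue_weights[cue] += learning_rate * (target - cue_weights[cue])

# A few simulated rounds: (cues noticed this round, was the suspect guilty?)
rounds = [
    (["erratic_movement"], True),
    (["camping", "shadowing_players"], False),
    (["erratic_movement", "shadowing_players"], True),
]
for cues, guilty in rounds:
    update_weights(cues, guilty)

print(cue_weights)  # weights drift toward the cues that actually predicted guilt
```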
Digital asset layers and player motivation
Beyond core gameplay, MM2 includes collectable weapons and cosmetic items that influence player engagement. These items do not change fundamental mechanics but alter perceived status in the community.
Digital marketplaces have formed around this ecosystem. Some players explore external environments when evaluating cosmetic inventories or specific rare items through services connected to an MM2 shop. Platforms like Eldorado exist in this broader virtual asset landscape. As with any digital transaction environment, adherence to platform rules and account security awareness remains essential.
From a systems design standpoint, the presence of collectable layers introduces extrinsic motivation without disrupting the underlying deduction mechanics.
Emergent complexity from simple rules
The most important insight MM2 provides is how simple rule sets generate complex interaction patterns. There are no elaborate skill trees or expansive maps. Yet each round unfolds differently due to human unpredictability.
AI research increasingly examines how minimal constraints can produce adaptive outcomes. MM2 demonstrates that complexity does not require excessive features. It requires variable agents interacting under structured uncertainty.
The environment becomes a testing ground for studying cooperation, suspicion, deception and reaction speed in a repeatable digital framework.
Lessons for artificial intelligence modelling
Games like MM2 illustrate how controlled digital spaces can simulate aspects of real world unpredictability. Behavioural variability, limited information and rapid adaptation form the backbone of many AI training challenges.
By observing how players react to ambiguous conditions, researchers can better understand decision latency, risk tolerance and probabilistic reasoning. While MM2 was designed for entertainment, its structure aligns with important questions in artificial intelligence research.
Conclusion
Murder Mystery 2 highlights how lightweight multiplayer games can reveal deeper insights into behavioural modelling and emergent complexity. Through role randomisation, social signalling and adaptive play, it offers a compact yet powerful example of distributed decision making in action.
As AI systems continue to evolve, environments like MM2 demonstrate the value of studying human interaction in structured uncertainty. Even the simplest digital games can illuminate the mechanics of intelligence itself.
Image source: Unsplash
Artificial Intelligence
AI forecasting model targets healthcare resource efficiency
An operational AI forecasting model developed by University of Hertfordshire researchers aims to improve resource efficiency within healthcare.
Public sector organisations often hold large archives of historical data that do not inform forward-looking decisions. A partnership between the University of Hertfordshire and regional NHS health bodies addresses this issue by applying machine learning to operational planning. The project analyses healthcare demand to assist managers with decisions regarding staffing, patient care, and resources.
Most AI initiatives in healthcare focus on individual diagnostics or patient-level interventions. The project team notes that this tool targets system-wide operational management instead. This distinction matters for leaders evaluating where to deploy automated analysis within their own infrastructure.
The model uses five years of historical data to build its projections. It integrates metrics such as admissions, treatments, re-admissions, bed capacity, and infrastructure pressures. The system also accounts for workforce availability and local demographic factors including age, gender, ethnicity, and deprivation.
Iosif Mporas, Professor of Signal Processing and Machine Learning at the University of Hertfordshire, leads the project. The team includes two full-time postdoctoral researchers and will continue development through 2026.
“By working together with the NHS, we are creating tools that can forecast what will happen if no action is taken and quantify the impact of a changing regional demographic on NHS resources,” said Professor Mporas.
Using AI for forecasting in healthcare operations
The model produces forecasts showing how healthcare demand is likely to change. It models the impact of these changes over the short, medium, and long term. This capability allows leadership to move beyond reactive management.
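The Hertfordshire model itself has not been published in detail, but a minimal illustration of horizon-based demand forecasting, using synthetic monthly admissions and simple lag-feature regression, looks something like this.

```python
import numpy as np

# Minimal sketch of forecasting demand at several horizons, not the
# Hertfordshire model itself: fit a separate linear model on lagged monthly
# admissions for each horizon (1, 6 and 24 months ahead). Data is synthetic.

rng = np.random.default_rng(0)
months = np.arange(120)
admissions = 1000 + 2.0 * months + 80 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 30, 120)

def fit_and_forecast(series, n_lags=12, horizon=1):
    """Least-squares fit of y[t + horizon] on the previous n_lags observations."""
    X, y = [], []
    for t in range(n_lags, len(series) - horizon):
        X.append(series[t - n_lags:t])
        y.append(series[t + horizon])
    X, y = np.asarray(X), np.asarray(y)
    X = np.column_stack([X, np.ones(len(X))])   # add intercept column
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    latest = np.append(series[-n_lags:], 1.0)
    return latest @ coef

for horizon in (1, 6, 24):   # short-, medium- and long-term horizons
    print(f"{horizon:>2}-month-ahead admissions forecast: {fit_and_forecast(admissions, horizon=horizon):.0f}")
```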
Charlotte Mullins, Strategic Programme Manager for NHS Herts and West Essex, commented: “The strategic modelling of demand can affect everything from patient outcomes including the increased number of patients living with chronic conditions.
“Used properly, this tool could enable NHS leaders to take more proactive decisions and enable delivery of the 10-year plan articulated within the Central East Integrated Care Board as our strategy document.”
The University of Hertfordshire Integrated Care System partnership funds the work, which began last year. Testing of the AI model tailored for healthcare operations is currently underway in hospital settings. The project roadmap includes extending the model to community services and care homes.
This expansion aligns with structural changes in the region. The Hertfordshire and West Essex Integrated Care Board serves 1.6 million residents and is preparing to merge with two neighbouring boards. This merger will create the Central East Integrated Care Board. The next phase of development will incorporate data from this wider population to improve the predictive accuracy of the model.
The initiative demonstrates how legacy data can drive cost efficiencies and shows that predictive models can inform “do nothing” assessments and resource allocation in complex service environments like the NHS. The project highlights the necessity of integrating varied data sources – from workforce numbers to population health trends – to create a unified view for decision-making.
See also: Agentic AI in healthcare: How Life Sciences marketing could achieve $450B in value by 2028
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.
AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.
Artificial Intelligence
Agentic AI drives finance ROI in accounts payable automation
Finance leaders are driving ROI using agentic AI for accounts payable automation, turning manual tasks into autonomous workflows.
While general AI projects saw return on investment rise to 67 percent last year, autonomous agents delivered an average ROI of 80 percent by handling complex processes without human intervention. This performance gap demands a change in how CIOs allocate automation budgets.
Agentic AI systems are now advancing the enterprise from theoretical value to hard returns. Unlike generative tools that summarise data or draft text, these agents execute workflows within strict rules and approval thresholds.
Boardroom pressure drives this pivot. A report by Basware and FT Longitude finds nearly half of CFOs face demands from leadership to implement AI across their operations. Yet 61 percent of finance leaders admit their organisations rolled out custom-developed AI agents largely as experiments to test capabilities rather than to solve business problems.
These experiments often fail to pay off. Traditional AI models generate insights or predictions that require human interpretation. Agentic systems close the gap between insight and action by embedding decisions directly into the workflow.
Jason Kurtz, CEO of Basware, explains that patience for unstructured experimentation is running low. “We’ve reached a tipping point where boards and CEOs are done with AI experiments and expecting real results,” he says. “AI for AI’s sake is a waste.”
Accounts payable as the proving ground for agentic AI in finance
Finance departments now direct these agents toward high-volume, rules-based environments. Accounts payable (AP) is the primary use case, with 72 percent of finance leaders viewing it as the obvious starting point. The process fits agentic deployment because it involves structured data: invoices enter, require cleaning and compliance checks, and result in a payment booking.
Teams use agents to automate invoice capture and data entry, a daily task for 20 percent of leaders. Other live deployments include detecting duplicate invoices, identifying fraud, and reducing overpayments. These are not hypothetical applications; they represent tasks where an algorithm can operate with a high degree of autonomy once its parameters are set correctly.
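As a simplified illustration of one such check, and not Basware's actual matching logic, a duplicate-invoice filter can be as plain as normalising a few key fields and flagging repeats for review.

```python
from dataclasses import dataclass

# Simplified sketch of one AP check mentioned above: flag likely duplicate
# invoices by comparing supplier, invoice number and amount. Illustrative only.

@dataclass(frozen=True)
class Invoice:
    supplier_id: str
    invoice_number: str
    amount: float

def find_duplicates(invoices):
    """Return invoices whose (supplier, number, amount) key was already seen."""
    seen, duplicates = set(), []
    for inv in invoices:
        key = (inv.supplier_id, inv.invoice_number.strip().upper(), round(inv.amount, 2))
        if key in seen:
            duplicates.append(inv)
        else:
            seen.add(key)
    return duplicates

batch = [
    Invoice("SUP-001", "INV-1001", 1250.00),
    Invoice("SUP-001", "inv-1001 ", 1250.00),   # same invoice, messy formatting
    Invoice("SUP-002", "INV-2040", 310.50),
]
print(find_duplicates(batch))  # flags the second entry for review
```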
Success in this sector relies on data quality. Basware trains its systems on a dataset of more than two billion processed invoices to deliver context-aware predictions. This structured data allows the system to differentiate between legitimate anomalies and errors without human oversight.
Kevin Kamau, Director of Product Management for Data and AI at Basware, describes AP as a “proving ground” because it combines scale, control, and accountability in a way few other finance processes can.
The build versus buy decision matrix
Technology leaders must next decide how to procure these capabilities. The term “agent” currently covers everything from simple workflow scripts to complex autonomous systems, which complicates procurement.
Approaches split by function. In accounts payable, 32 percent of finance leaders prefer agentic AI embedded in existing software, compared to 20 percent who build them in-house. For financial planning and analysis (FP&A), 35 percent opt for self-built solutions versus 29 percent for embedded ones.
This divergence suggests a pragmatic rule for the C-suite. If the AI improves a process shared across many organisations, such as AP, embedding it via a vendor solution makes sense. If the AI creates a competitive advantage unique to the business, building in-house is the better path. Leaders should buy to accelerate standard processes and build to differentiate.
Governance as an enabler of speed
Fear of autonomous error slows adoption. Almost half of finance leaders (46%) will not consider deploying an agent without clear governance. This caution is rational; autonomous systems require strict guardrails to operate safely in regulated environments.
Yet the most successful organisations do not let governance stop deployment. Instead, they use it to scale. These leaders are significantly more likely to use agents for complex tasks like compliance checks (50%) compared to their less confident peers (6%).
Anssi Ruokonen, Head of Data and AI at Basware, advises treating AI agents like junior colleagues. The system requires trust but should not make large decisions immediately. He suggests testing thoroughly and introducing autonomy slowly, ensuring a human remains in the loop to maintain responsibility.
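That "junior colleague" guardrail often comes down to explicit thresholds. The sketch below, with hypothetical limits, lets an agent book payments autonomously only below a spend limit and above a confidence floor, routing everything else to a human reviewer.

```python
# Sketch of the guardrail pattern described above: the agent acts on its own
# only below a spend limit and above a confidence floor, and routes everything
# else to a human reviewer. Thresholds and values are hypothetical.

AUTO_APPROVE_LIMIT = 5_000.00   # agent may book payments below this amount
CONFIDENCE_FLOOR = 0.95         # minimum model confidence for autonomous action

def route_invoice(amount, model_confidence):
    """Decide whether the agent books the payment or escalates to a human."""
    if amount <= AUTO_APPROVE_LIMIT and model_confidence >= CONFIDENCE_FLOOR:
        return "auto_approve"
    return "escalate_to_human"

for amount, confidence in [(1200.0, 0.99), (1200.0, 0.80), (25_000.0, 0.99)]:
    print(amount, confidence, "->", route_invoice(amount, confidence))
```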
Digital workers raise concerns regarding displacement. A third of finance leaders believe job displacement is already happening. Proponents argue agents shift the nature of work rather than eliminating it.
Automating manual tasks such as information extraction from PDFs frees staff to focus on higher-value activities. The goal is to move from task efficiency to operating leverage, allowing finance teams to manage faster closes and make better liquidity decisions without increasing headcount.
Organisations that use agentic AI extensively report higher returns. Leaders who deploy agentic AI tools daily for tasks like accounts payable achieve better outcomes than those who limit usage to experimentation. Confidence grows through controlled exposure; successful small-scale deployments lead to broader operational trust and increased ROI.
Executives must move beyond unguided experimentation to replicate the success of early adopters. Data shows that 71 percent of finance teams with weak returns acted under pressure without clear direction, compared to only 13 percent of teams achieving strong ROI.
Success requires embedding AI directly into workflows and governing agents with the discipline applied to human employees. “Agentic AI can deliver transformational results, but only when it is deployed with purpose and discipline,” concludes Kurtz.
See also: AI deployment in financial services hits an inflection point as Singapore leads the shift to production
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.
AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.
Artificial Intelligence
How e& is using HR to bring AI into enterprise operations
For many enterprises, the first real test of AI is not customer-facing products or flashy automation demos. It is the quiet machinery that runs the organisation itself. Human resources, with its mix of routine workflows, compliance needs, and large volumes of structured data, is emerging as one of the earliest areas where companies are pushing AI into day-to-day operations.
That shift is visible in how large employers are rethinking workforce systems. The telecommunications group e& began moving its human resources operations to what it describes as an AI-first model, covering roughly 10,000 employees across its organisation. The transition is built on Oracle Fusion Cloud Human Capital Management (HCM), running in an Oracle Cloud Infrastructure dedicated region. Details of the deployment were outlined in a recent Oracle announcement.
The change is less about introducing a single AI feature and more about restructuring how HR processes are handled. Automated and AI-driven tools are expected to help HR departments with recruitment screening, interview coordination, and employee learning recommendations. The stated goal is to standardise processes across regions and provide managers with faster access to workforce data and insights.
HR as an enterprise AI proving ground
From an enterprise perspective, HR is a logical entry point. Many HR tasks follow repeatable patterns: candidate matching, onboarding documentation, leave management, and training assignments. These workflows produce consistent data trails, which makes them easier to model and automate than loosely defined knowledge work. Moving such functions onto AI-supported systems allows organisations to test reliability, governance, and user acceptance in a controlled environment before expanding into more sensitive areas.
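A toy example of the kind of structured matching these workflows permit, which is not Oracle Fusion HCM's actual logic, is a simple skill-overlap score used to rank candidates against a role.

```python
# Toy illustration of structured candidate-to-role matching, not the logic of
# any real HR system: score candidates by overlap with the required skills.

def match_score(required_skills, candidate_skills):
    """Fraction of required skills the candidate lists."""
    required = {s.lower() for s in required_skills}
    offered = {s.lower() for s in candidate_skills}
    return len(required & offered) / len(required) if required else 0.0

role = ["Python", "SQL", "Data Visualisation"]
candidates = {
    "cand_a": ["python", "sql", "excel"],
    "cand_b": ["java", "sql"],
}
ranked = sorted(candidates.items(), key=lambda kv: match_score(role, kv[1]), reverse=True)
for name, skills in ranked:
    print(name, round(match_score(role, skills), 2))
```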
The infrastructure choice also indicates how enterprises are balancing innovation with compliance. Oracle claims that the system is deployed in a dedicated cloud region designed to address data sovereignty and regulatory requirements. For multinational corporations, workforce data sits at the intersection of privacy law, employment regulation, and corporate governance. Running AI tools in a controlled environment is part of how companies are trying to contain risk while experimenting with automation.
Governance, compliance, and internal risk management
The e& rollout reflects a broader pattern in enterprise AI adoption: internal transformation is often more achievable than external disruption. Customer-facing AI systems attract attention, but they introduce reputational and operational risk if they fail. HR platforms, by contrast, operate behind the scenes. Errors can still carry consequences, yet they are easier to monitor, audit, and correct within existing governance structures.
Industry research supports the idea that internal operations are becoming a primary testing ground. Deloitte’s 2026 State of AI in the Enterprise report found that organisations are increasingly shifting AI projects from pilot stages into production environments, with productivity and workflow automation cited as early areas of return. The report is based on a survey of more than 3,000 senior leaders involved in AI initiatives, including respondents in Southeast Asia. While the study spans multiple business functions, administrative and operational processes were repeatedly identified as practical entry points for scaled deployment.
Workforce systems also provide a natural setting for AI agents and assistants. HR teams handle frequent employee queries about policies, benefits, and training options. Embedding conversational tools into these workflows may reduce manual workload while giving employees faster access to information. According to Oracle’s description of the deployment, e& plans to introduce digital assistants designed to support candidate engagement and employee development tasks. Whether such tools deliver consistent value will depend on accuracy, oversight, and how well they integrate with existing HR processes.
Scaling AI inside the organisation
The lesson is not that HR automation is new, but that AI is changing the scope of what can be automated. Traditional HR software focused on record-keeping and workflow management. AI layers add predictive matching, pattern analysis, and decision support. That expansion raises familiar governance questions: data quality, bias, auditability, and employee trust.
There is also a workforce dimension. Automating parts of HR does not eliminate the need for human oversight; it changes where effort is concentrated. HR professionals may spend less time on routine coordination and more on policy interpretation, employee engagement, and exception handling. Enterprises adopting AI-driven systems will need clear escalation paths and review processes to avoid over-reliance on automated outputs.
What makes the current moment different is scale. Deployments that cover thousands of employees turn AI from an experiment into operational infrastructure. They force organisations to confront issues of reliability, training, and change management in real time. The systems must work consistently across jurisdictions, languages, and regulatory frameworks.
As enterprises look for low-risk entry points into AI, workforce operations are likely to remain high on the list. They combine structured data, repeatable workflows, and measurable outcomes — conditions that suit automation while still allowing room for human judgement. The experience of early adopters will shape how quickly other internal functions, from finance to procurement, follow a similar path.
(Photo by Zulfugar Karimov)
See also: Barclays bets on AI to cut costs and boost returns
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events. Click here for more information.
AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.