Microsoft is pitching a future where AI controls everything on your PC and agents go and do work for you in the background. But before the company gets there, it has to build the tools to make these systems work and convince its own developers that AI is actually capable of achieving these big promises.
Artificial Intelligence
How Microsoft’s developers are using AI
Microsoft CEO Satya Nadella revealed earlier this year that up to 30 percent of the code in “some of our projects” is written by AI, and I’ve been eager to learn exactly how Microsoft’s developers are using the technology ever since. I’ve been speaking to sources and company execs to get a better idea, and some employees have told me they’re skeptical that AI agents will be able to fully replace the work of humans, leaving developers to fix the mistakes of automated agents.
When I ask the company for more specifics, though, Microsoft touts its early success in deploying AI internally.
“We want to really look at where there’s developer toil, where we have inefficiencies,” says Amanda Silver, a CVP in Microsoft’s CoreAI team who leads product for the company’s Apps & Agents platform, in an interview with Notepad. “Part of what we’re looking at is both how we can apply [AI] and where we can apply it.”
There are over 100,000 code repositories inside Microsoft, from brand new projects to legacy codebases that are more than 20 years old and still up and running. “We have pretty much every programming language, architecture, and lifecycle stage that you can imagine, and this really reflects a lot of our customers,” says Silver. That’s a lot of code for AI to potentially touch, especially as Microsoft pushes beyond simple code completion towards more automation with AI agents.
In May, Microsoft embedded a coding agent directly into GitHub Copilot, letting developers assign it work to do. The agent then goes off and creates its own development environment, runs in the background, and creates draft pull requests. “What we see is that developers save on average 30 minutes on simple tasks, over a half day on medium tasks, and two weeks on complex tasks,” says Silver. Microsoft’s developers are using it for time-consuming and monotonous tasks like fixing bugs and improving documentation for apps and services.
To get to these numbers, Microsoft looks at developer hours saved, incidents mitigated, and estimated time saved per task. “Additionally, we look at the actions completed by the agentic capabilities, such as the number of pull-requests it contributes to,” says Silver.
Measuring the impact of AI on developer productivity is something that I hear Microsoft is obsessing over internally, even as some studies show AI can make experienced developers slower. Some employees, who wish to remain anonymous, feel that Microsoft executives aren’t happy with how often developers use AI right now. There’s a push internally to get developers to use AI first for everything, but I hear that adoption isn’t always organic.
“It does require a little bit of intentional engagement to allow the mindset shift to click in,” admits Silver. While Microsoft’s developers could ignore GitHub Copilot Chat because it was in a separate window, the agentic mode and coding agent are right in the context of how developers work. “It becomes habit forming and changes behavior,” says Silver.
Microsoft says 91 percent of its engineering teams use GitHub Copilot, but sources have shared data that suggests, in some parts of the company, overall AI tool adoption is much lower – closer to the 51 percent of developers who told Stack Overflow they’re now using AI tools professionally every day.
Silver rattles off a list of teams that have sped up their work with AI. The Xbox team used Copilot’s app modernization agent to upgrade their core Xbox service from .NET 6 to .NET 8 recently. “They saw an 88 percent reduction in manual migration effort,” she says, taking “months of work and compressing it down into days.” Microsoft’s discovery and quantum team used the Copilot agent to migrate a Java app to the latest version, and saw a similar “reduction in the effort that was needed, thanks to the AI agent auto detecting deprecated APIs, suggesting fixes, and identifying security vulnerabilities.” The company’s “ES Chat” agent, which can answer questions about Microsoft’s engineering systems, has saved engineers “46 minutes per task compared to traditional search methods.” Microsoft is also using AI agents to help Site Reliability Engineers (SRE) respond to outages of systems and apps. There, the company has already saved over “10,000 hours of operational time.”
All of these time savings mean that Microsoft’s code is increasingly being built by AI instead of just humans, but Silver won’t put a number on how much of Microsoft’s code is being built by AI. She argues it’s too hard to track everything as AI is embedded in code generation, review processes, test generation, and deployment pipelines. “The agents really become a core part of the engineering system itself,” says Silver. “This is one of the reasons why it’s so hard to pin a precise number on the number of lines of code that the AI is contributing.” I also get a sense that promoting a number that’s either too high or too low would be counterproductive to Microsoft’s marketing efforts, both internally and externally.
Still, I don’t doubt the complexity of the task. A human engineer could submit code while Copilot is running inside their editor, or the engineer could copy and paste AI code into their editor. It’s fair to say that AI is prevalent in some parts of Microsoft’s developer output. You only need to look at the codebases of Aspire, TypeScript Go, and Microsoft’s Agent Framework to see that Copilot is a major contributor to all of them.
The AI systems also aren’t perfect. Silver says engineers review their work. And a source at Microsoft told me that some of the tools aren’t all they’re cracked up to be. “ES Chat saves me time in that I don’t use it,” the person joked.
Microsoft’s aggressive push towards AI agents coding for developers has also left some employees inside the company concerned about the future. I’ve spoken to engineers in Microsoft’s CoreAI division who are worried about the use of autonomous AI agents, particularly as they pick up the types of projects that junior developers would typically be assigned. There’s a real fear in the industry, and inside Microsoft, that junior developer roles are disappearing, leaving experienced devs to babysit the output of AI tools.
With Nadella’s goal of overhauling Microsoft into a company focused on AI agents doing work, this all points to fewer humans involved in coding in the future. Silver takes the optimistic view that AI will simply allow developers to offload the boring tasks and focus on creativity instead.
“No developer got into the industry because they wanted to be assigned a months-long code maintenance migration effort,” says Silver. “They want to be at the cutting edge, they want to create, they want to innovate. These are the kinds of things they want to offload to AI so they can get back to the process of creation.”
- You can now try the Xbox Full Screen Experience (FSE) on any PC, laptop, or tablet. Microsoft launched the Xbox FSE on all handheld devices last week, and it has now started testing it on any PC, laptop, or tablet. It adds a console-like UI to the main Xbox app that appears at boot, making it ideal for a living room PC. I’m surprised to see the Xbox FSE appear on all handhelds so quickly, especially as Lenovo’s Legion Go 2 was the first handheld outside of Asus confirmed to be getting Xbox FSE in spring 2026. It feels like Microsoft is rapidly rolling out Xbox FSE to get more people using it and more bug reports. I’m sure the emergence of the Steam Machine is also a part of why it’s rolling out so quickly.
- Xbox Ally devices get a new game profile feature. If you’re an Xbox Ally or Xbox Ally X owner, Microsoft has started previewing its new default game profile feature this week. It automatically optimizes frame rates and power consumption across 40 games, saving you from manually tweaking game settings. The settings should help save battery life, and Microsoft says the game profile for Hollow Knight: Silksong will add nearly an hour of battery life compared to the performance mode.
- Microsoft is speeding up and decluttering File Explorer in Windows 11. Microsoft is making changes to File Explorer in Windows 11 so that it preloads “to help improve File Explorer launch performance.” This preloading appears to be targeted at low-end systems where performance is constrained, and you’ll be able to disable it on any PC. Microsoft is also tweaking the context menus in File Explorer to remove some of the clutter and reduce the amount of space less commonly used actions take up.
- Notepad is getting tables. Don’t worry, I’m not adding tables to the newsletter. Microsoft is now testing tables inside Windows Notepad. It’s the latest addition to an app that has moved way beyond being just a default text editor. I’m sure some people will moan that this makes Notepad “bloatware,” but tables and a full-featured Markdown editor are great improvements for my use of Notepad.
- Microsoft’s AI-powered copy and paste can now use on-device AI. Microsoft is upgrading its Advanced Paste tool in PowerToys so you can route requests through the company’s Foundry Local tool. You can also use the open-source Ollama, and both options will run AI models on a device’s NPU instead of in the cloud, so you won’t need to purchase credits to perform some Advanced Paste features.
- Microsoft makes Zork open-source. The original Zork I, Zork II, and Zork III games are now available under the MIT license. Microsoft, Xbox, and Activision have teamed up to preserve the clever Z-Machine engine that powered the Zork games and allow students, teachers, and developers to study the code and learn from it. Microsoft has also worked with Jason Scott, from the Internet Archive, to grant this open-source license.
- Xbox Crocs are real. Microsoft launched Xbox-themed Crocs this week, priced at $80. Following the Windows XP-themed Crocs released earlier this year, the limited edition Xbox Crocs mimic the Xbox One X’s controller. Both shoes feature the classic X, Y, B, A buttons, D-pad, left and right analog sticks, and a white Xbox button and bumpers on the sides. There’s even a $20 set of shoe charms with characters and icons from Halo, Fallout, Doom, World of Warcraft, and Sea of Thieves.
- Copilot is leaving WhatsApp. ChatGPT and Copilot are both disappearing from WhatsApp, thanks to Meta’s new platform policies. Copilot will remain inside WhatsApp until January 15th, 2026. If you were one of the few relying on this feature, you’ll have to switch over to the dedicated Copilot mobile app instead.
- Fara-7B is Microsoft’s first agentic small language model for computer use. Microsoft is building on the work of its Phi small language models by releasing Fara-7B this week. Instead of providing you with text-based responses, Fara-7B is designed to control computer interfaces and use a computer for you. It’s an experimental release for now, and Microsoft is inviting people to get an early hands-on and provide feedback before it’s released more broadly.
- Copilot in Edge is now a shopping assistant. Just in time for Black Friday and Cyber Monday, Microsoft has updated its Copilot in Edge feature with a bunch of shopping assistant capabilities. Copilot in Edge now has tools like cashback, price comparison, price history, and price tracking. It works with supported retailers, comparing prices across stores to make it easy to get the best deal on a product.
- Microsoft’s AI enterprise apps get new icons, too. After rolling out new Office icons, Microsoft is now overhauling the icons for its enterprise AI apps and services. The business apps and agents all have new icons that closely match the Microsoft 365 ones, and they’re starting to appear across Microsoft’s Power Platform, Foundry, and other AI services.
- Claude Opus 4.5 is now rolling out to GitHub Copilot. Microsoft has been quick to adopt Anthropic’s latest Claude AI model inside GitHub Copilot this week. Early testing has shown that Opus 4.5 “surpassed internal coding benchmarks, while cutting token usage in half.” Microsoft says Anthropic’s latest model is also “great for code migration and code refactoring.”
I’m always keen to hear from readers, so please drop a comment here, or you can reach me at notepad@theverge.com if you want to discuss anything else. If you’ve heard about any of Microsoft’s secret projects, you can reach me via email at notepad@theverge.com or speak to me confidentially on the Signal messaging app, where I’m tomwarren.01. I’m also tomwarren on Telegram, if you’d prefer to chat there.
Thanks for subscribing to Notepad.
Ronnie Sheth, CEO, SENEN Group: Why now is the time for enterprise AI to ‘get practical'
Before you set sail on your AI journey, always check the state of your data – because if there is one thing likely to sink your ship, it is data quality.
Gartner estimates that poor data quality costs organisations an average of $12.9 million each year in wasted resources and lost opportunities. That’s the bad news. The good news is that organisations are increasingly understanding the importance of their data quality – and less likely to fall into this trap.
That’s the view of Ronnie Sheth, CEO of AI strategy, execution and governance firm SENEN Group. The company focuses on data and AI advisory, operationalisation and literacy, and Sheth notes she has been in the data and AI space ‘ever since [she] was a corporate baby’, so there is plenty of real-world experience behind the viewpoint. There is also plenty of success; Sheth notes that her company has a 99.99% client repeat rate.
“If I were to be very practical, the one thing I’ve noticed is companies jump into adopting AI before they’re ready,” says Sheth. Companies, she notes, will have an executive direction insisting they adopt AI, but without a blueprint or roadmap to accompany it. The result may be impressive user numbers, but with no measurable outcome to back anything up.
Even as recently as 2024, Sheth saw many organisations struggling because their data was ‘nowhere where it needed to be.’ “Not even close,” she adds. Now, the conversation has turned more practical and strategic. Companies are realising this, and coming to SENEN Group initially to get help with their data, rather than wanting to adopt AI immediately.
“When companies like that come to us, the first course of order is really fixing their data,” says Sheth. “The next course of order is getting to their AI model. They are building a strong foundation for any AI initiative that comes after that.
“Once they fix their data, they can build as many AI models as they want, and they can have as many AI solutions as they want, and they will get accurate outputs because now they have a strong foundation,” Sheth adds.
With breadth and depth in expertise, SENEN Group allows organisations to right their course. Sheth notes the example of one customer who came to them wanting a data governance initiative. Ultimately, it was the data strategy which was needed – the why and how, the outcomes of what they were trying to do with their data – before adding in governance and providing a roadmap for an operating model. “They’ve moved from raw data to descriptive analytics, moving into predictive analytics, and now we’re actually setting up an AI strategy for them,” says Sheth.
It is this attitude and requirement for practical initiatives which will be the cornerstone of Sheth’s discussion at AI & Big Data Expo Global in London this week. “Now would be the time to get practical with AI, especially enterprise AI adoption, and not think about ‘look, we’re going to innovate, we’re going to do pilots, we’re going to experiment,’” says Sheth. “Now is not the time to do that. Now is the time to get practical, to get AI to value. This is the year to do that in the enterprise.”
Watch the full video conversation with Ronnie Sheth below:
Apptio: Why scaling intelligent automation requires financial rigour
Greg Holmes, Field CTO for EMEA at Apptio, an IBM company, argues that successfully scaling intelligent automation requires financial rigour.
The “build it and they will come” model of technology adoption often leaves a hole in the budget when applied to automation. Executives frequently find that successful pilot programmes do not translate into sustainable enterprise-wide deployments because initial financial modelling ignored the realities of production scaling.
“When we integrate FinOps capabilities with automation, we’re looking at a change from being very reactive on cost management to being very proactive around value engineering,” says Holmes.
This shifts the assessment criteria for technical leaders. Rather than waiting “months or years to assess whether things are getting value,” engineering teams can track resource consumption – such as cost per transaction or API call – “straight from the beginning.”
The unit economics of scaling intelligent automation
Innovation projects face a high mortality rate. Holmes notes that around 80 percent of new innovation projects fail, often because financial opacity during the pilot phase masks future liabilities.
“If a pilot demonstrates that automating a process saves, say, 100 hours a month, leadership thinks that’s really successful,” says Holmes. “But what it fails to track is that the pilot sometimes is running on over-provisioned infrastructure, so it looks like it performs really well. But you wouldn’t over-provision to that degree during a real production rollout.”
Moving that workload to production changes the calculus. The requirements for compute, storage, and data transfer increase. “API calls can multiply, exceptions and edge cases appear at volume that might have been out of scope for the pilot phase, and then support overheads just grow as well,” he adds.
To prevent this, organisations must track the marginal cost at scale. This involves monitoring unit economics, such as the cost per customer served or cost per transaction. If the cost per customer increases as the customer base grows, the business model is flawed.
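The unit-economics check described above can be sketched in a few lines of Python. The cost figures here are invented purely for illustration, not drawn from Apptio or any customer; the point is the shape of the curve, not the numbers:

```python
def cost_per_transaction(fixed_cost, variable_cost_per_txn, transactions):
    """Total monthly cost divided by volume: the unit cost to track at scale."""
    return (fixed_cost + variable_cost_per_txn * transactions) / transactions

# Healthy scaling: fixed costs are amortised, so unit cost falls as volume grows.
pilot = cost_per_transaction(fixed_cost=10_000, variable_cost_per_txn=0.05, transactions=20_000)
production = cost_per_transaction(fixed_cost=10_000, variable_cost_per_txn=0.05, transactions=500_000)

# Flawed scaling: per-unit costs creep up at volume (extra API calls, edge
# cases, support overhead), so unit cost rises even though the base grows --
# the warning sign Holmes describes.
flawed = cost_per_transaction(fixed_cost=10_000, variable_cost_per_txn=0.60, transactions=500_000)

print(round(pilot, 2), round(production, 2), round(flawed, 2))  # 0.55 0.07 0.62
```

If the third number ever exceeds the first in your own figures, the business model is flawed in exactly the sense described: growth makes each customer more expensive to serve, not less.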
Conversely, effective scaling should see these unit costs decrease. Holmes cites a case study from Liberty Mutual where the insurer was able to find around $2.5 million of savings by bringing in consumption metrics and “not just looking at labour hours that they were saving.”
However, financial accountability cannot sit solely with the finance department. Holmes advocates for putting governance “back in the hands of the developers into their development tools and workloads.”
Integration with infrastructure-as-code tools like HashiCorp Terraform and GitHub allows organisations to enforce policies during deployment. Teams can spin up resources programmatically with immediate cost estimates.
“Rather than deploying things and then fixing them up, which gets into the whole whack-a-mole kind of problem,” Holmes explains, companies can verify they are “deploying the right things at the right time.”
When scaling intelligent automation, tension often simmers between the CFO, who focuses on return on investment, and the Head of Automation, who tracks operational metrics like hours saved.
“This translation challenge is precisely what TBM (Technology Business Management) and Apptio are designed to solve,” says Holmes. “It’s having a common language between technology and finance and with the business.”
The TBM taxonomy provides a standardised framework to reconcile these views. It maps technical resources (such as compute, storage, and labour) into IT towers and further up to business capabilities. This structure translates technical inputs into business outputs.
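Mechanically, that mapping behaves like a grouped aggregation over tagged cost records. The resource names, hierarchy labels, and amounts below are invented for illustration; the real TBM taxonomy is far richer, but the roll-up principle is the same:

```python
# Each cost record carries its place in a (hypothetical) TBM-style hierarchy:
# resource -> IT tower -> business capability.
costs = [
    {"resource": "compute", "tower": "Servers", "capability": "Claims Processing", "amount": 40_000},
    {"resource": "storage", "tower": "Storage", "capability": "Claims Processing", "amount": 15_000},
    {"resource": "labour",  "tower": "IT Ops",  "capability": "Claims Processing", "amount": 25_000},
    {"resource": "compute", "tower": "Servers", "capability": "Customer Portal",   "amount": 10_000},
]

def rollup(records, level):
    """Sum costs at a chosen level of the hierarchy ('tower' or 'capability')."""
    totals = {}
    for r in records:
        totals[r[level]] = totals.get(r[level], 0) + r["amount"]
    return totals

# The business user sees one bill per capability, without needing to know
# what sits in the IT layers underneath it.
by_capability = rollup(costs, "capability")
print(by_capability)  # {'Claims Processing': 80000, 'Customer Portal': 10000}
```

Because every record is tagged at each level, the same data answers both the CFO’s question (what does Claims Processing cost?) and the engineer’s (which tower is driving it?).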
“I don’t necessarily know what goes into all the IT layers underneath it,” Holmes says, describing the business user’s perspective. “But because we’ve got this taxonomy, I can get a detailed bill that tells me about my service consumption and precisely which costs are driving it to be more expensive as I consume more.”
Addressing legacy debt and budgeting for the long-term
Organisations burdened by legacy ERP systems face a binary choice: automation as a patch, or as a bridge to modernisation. Holmes warns that if a company is “just trying to mask inefficient processes and not redesign them,” they are merely “building up more technical debt.”
A total cost of ownership (TCO) approach helps determine the correct strategy. The Commonwealth Bank of Australia utilised a TCO model across 2,000 different applications – of various maturity stages – to assess their full lifecycle costs. This analysis included hidden costs such as infrastructure, labour, and the engineering time required to keep automation running.
“Just because something’s legacy doesn’t mean you have to retire it,” says Holmes. “Some of those legacy systems are worth maintaining just because the value is so good.”
In other cases, calculating the cost of the automation wrappers required to keep an old system functional reveals a different reality. “Sometimes when you add up the TCO approach, and you’re including all these automation layers around it, you suddenly realise, the real cost of keeping that old system alive is not just the old system, it’s those extra layers,” Holmes argues.
Avoiding sticker shock requires a budgeting strategy that balances variable costs with long-term commitments. While variable costs (OPEX) offer flexibility, they can fluctuate wildly based on demand and engineering efficiency.
Holmes advises that longer-term visibility enables better investment decisions. Committing to specific technologies or platforms over a multi-year horizon allows organisations to negotiate economies of scale and standardise architecture.
“Because you’ve made those longer term commitments and you’ve standardised on different platforms and things like that, it makes it easier to build the right thing out for the long term,” Holmes says.
Combining tight management of variable costs with strategic commitments supports enterprises in scaling intelligent automation without the volatility that often derails transformation.
IBM is a key sponsor of this year’s Intelligent Automation Conference Global in London on 4-5 February 2026. Greg Holmes and other experts will be sharing their insights during the event. Be sure to check out the day one panel session, Scaling Intelligent Automation Successfully: Frameworks, Risks, and Real-World Lessons, to hear more from Holmes and swing by IBM’s booth at stand #362.
See also: Klarna backs Google UCP to power AI agent payments

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.
AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.
FedEx tests how far AI can go in tracking and returns management
FedEx is using AI to change how package tracking and returns work for large enterprise shippers. For companies moving high volumes of goods, tracking no longer ends when a package leaves the warehouse. Customers expect real-time updates, flexible delivery options, and returns that do not turn into support tickets or delays.
That pressure is pushing logistics firms to rethink how tracking and returns operate at scale, especially across complex supply chains.
This is where artificial intelligence is starting to move from pilot projects into daily operations.
FedEx plans to roll out AI-powered tracking and returns tools designed for enterprise shippers, according to a report by PYMNTS. The tools are aimed at automating routine customer service tasks, improving visibility into shipments, and reducing friction when packages need to be rerouted or sent back.
Rather than focusing on consumer-facing chatbots, the effort centres on operational workflows that sit behind the scenes. These are the systems enterprise customers rely on to manage exceptions, returns, and delivery changes without manual intervention.
How FedEx is applying AI to package tracking
Traditional tracking systems tell customers where a package is and when it might arrive. AI-powered tracking takes a step further by utilising historical delivery data, traffic patterns, weather conditions, and network constraints to flag potential delays before they happen.
According to the PYMNTS report, FedEx’s AI tools are designed to help enterprise shippers anticipate issues earlier in the delivery process. Instead of reacting to missed delivery windows, shippers may be able to reroute packages or notify customers ahead of time.
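FedEx hasn’t published how its models work, so purely as a sketch of the idea: a predictive flag of this kind blends signals like a lane’s historical delay rate, weather severity, and network load into a risk score, and triggers rerouting or customer notification before a delivery window is missed. The weights and threshold below are invented for illustration, not tuned on real data:

```python
def delay_risk(historical_delay_rate, weather_severity, network_load):
    """Toy risk score in [0, 1]: a weighted blend of the signals named above."""
    score = 0.5 * historical_delay_rate + 0.3 * weather_severity + 0.2 * network_load
    return min(max(score, 0.0), 1.0)

def should_reroute(shipment, threshold=0.6):
    """Flag a shipment for proactive rerouting or an early customer notification."""
    return delay_risk(shipment["hist"], shipment["weather"], shipment["load"]) >= threshold

risky = {"hist": 0.8, "weather": 0.7, "load": 0.5}  # storm on a congested lane
calm  = {"hist": 0.1, "weather": 0.0, "load": 0.2}  # normal conditions
print(should_reroute(risky), should_reroute(calm))  # True False
```

A production system would use trained models rather than hand-set weights, but the operational shift is the same: the decision fires before the missed delivery window, not after.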
For businesses that ship thousands of parcels per day, that shift matters. Small improvements in prediction accuracy can reduce support calls, lower refund rates, and improve customer trust, particularly in retail, healthcare, and manufacturing supply chains.
This approach also reflects a broader trend in enterprise software, in which AI is being embedded into existing systems rather than introduced as standalone tools. The goal is not to replace logistics teams, but to minimise the number of manual decisions they need to make.
Returns as an operational problem, not a customer issue
Returns are one of the most expensive parts of logistics. For enterprise shippers, particularly those in e-commerce, returns affect warehouse capacity, inventory planning, and transportation costs.
According to PYMNTS, FedEx’s AI-enabled returns tools aim to automate parts of the returns process, including label generation, routing decisions, and status updates. Companies that use AI to determine the most efficient return path may be able to reduce delays and avoid returning things to the wrong facility.
This is less about convenience and more about operational discipline. Returns that sit idle or move through the wrong channel create cost and uncertainty across the supply chain. AI systems trained on past return patterns can help standardise decisions that were previously handled case by case.
For enterprise customers, this type of automation supports scale. As return volumes fluctuate, especially during peak seasons, systems that adjust automatically reduce the need for temporary staffing or manual overrides.
What FedEx’s AI tracking approach says about enterprise adoption
What stands out in FedEx’s approach is how narrowly focused the AI use case is. There are no broad claims about transformation or reinvention. The emphasis is on reducing friction in processes that already exist.
This mirrors how other large organisations are adopting AI internally. Microsoft, for example, has described a similar pattern with its own developers: AI tools rolled out gradually, with clear limits, governance rules, and feedback loops.
While Microsoft’s case focused on knowledge work and FedEx’s on logistics operations, the underlying lesson is the same. AI adoption tends to work best when applied to specific activities with measurable results rather than broad promises of efficiency.
For logistics firms, those advantages include fewer delivery exceptions, lower return handling costs, and better coordination between shipping partners and enterprise clients.
What this signals for enterprise customers
For end-user companies, FedEx’s move signals that logistics providers are investing in AI as a way to support more complex shipping demands. As supply chains become more distributed, visibility and predictability become harder to maintain without automation.
AI-driven tracking and returns could also change how businesses measure logistics performance. Companies may focus less on delivery speed and more on how quickly issues are recognised and resolved.
That shift could influence procurement decisions, contract structures, and service-level agreements. Enterprise customers may start asking not just where a shipment is, but how well a provider anticipates problems.
FedEx’s plans reflect a quieter phase of enterprise AI adoption. The focus is less on experimentation and more on integration. These systems are not designed to draw attention but to reduce noise in operations that customers only notice when something goes wrong.
(Photo by Liam Kevan)
See also: PepsiCo is using AI to rethink how factories are designed and updated
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events. Click here for more information.
AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.