Microsoft’s Amanda Silver has been dedicated to helping developers for 24 years. And for the past few years, that has meant building tools for AI. After a long stint working on GitHub Copilot, Silver is now corporate vice president of Microsoft’s CoreAI division, where she works on tools for deploying apps and agent systems within enterprises.
Her work focuses on Foundry, Azure’s unified AI portal for enterprises, which gives her a deep look into how enterprises are actually using these systems and where deployments ultimately fall short.
I spoke to Silver about the current capabilities of enterprise agents and why she believes this is the biggest opportunity for startups since the public cloud.
This interview has been edited for length and clarity.
Your work focuses on Microsoft products for external developers, many of whom are at startups that aren’t otherwise focused on AI. How do you think AI will impact those companies?
I see this as a turning point for startups, as significant as the move to the public cloud. If you think about it, the cloud had a huge impact on startups: you no longer needed real estate to host racks, and you didn’t have to spend as much capital getting hardware set up in labs and the like. Everything got cheaper. Now, agentic AI will once again reduce the total cost of operating software. Many of the tasks involved in starting a new venture, such as customer support and legal research, can be done faster and cheaper with AI agents. I think that will lead to more ventures and startups being launched, and to startups with fewer people at the helm and better valuations. It’s a very exciting world.
What does that actually look like in practice?
It’s true that multi-step agents are becoming very widely used across many different types of coding tasks. As just one example, one of the things developers must do to maintain a codebase is keep up with the latest versions of dependent libraries. You may have dependencies on an older version of the .NET runtime or the Java SDK. You can have these agent systems reason through your entire codebase and bring it up to date much more easily, probably in 70% or 80% less time. And to make that happen, you actually have to deploy a multi-step agent.
Operating a live site is another matter. If you’re maintaining a website or service and something goes wrong, something goes bump in the night and someone has to wake up and respond to the incident. We still keep people on call 24/7 in case the service goes down. It used to be a really disliked job because incidents were so often caused by trivial things. We have now built an agentic system that can diagnose, and in many cases completely mitigate, the problems that come up in operating these live sites. That eliminates the need for a human to be woken up in the middle of the night, scrambling to a terminal to figure out what’s going on. It also significantly reduces the average time it takes to resolve an incident.
One of the other puzzles at the moment is that agent adoption isn’t happening as quickly as people expected even six months ago. Why do you think that is?
If you think about the people who are architecting agents and what prevents them from being successful, a lot of the time it’s because they don’t really understand what the purpose of an agent should be. There needs to be a cultural shift in the way people build these systems. What business use case are they trying to solve? What are they trying to accomplish? You need to really look at what the definition of success is for the agent. Then you need to think about what data to give the agent so that it can reason about how to accomplish that particular task.
We believe those are bigger obstacles than the general uncertainty around agent deployment. Anyone who goes and looks at these systems sees the return on investment.
You mentioned general uncertainty, and from an outsider’s perspective that feels like a big stumbling block. Why do you think it actually doesn’t matter that much?
First of all, I think human-in-the-loop scenarios are going to become very common in agent systems. Think of something like processing a returned parcel. Previously, a returns workflow might be 90% automated and 10% human, with someone needing to look at the package and determine how damaged it was before deciding whether to accept the return.
That’s a perfect example of where computer vision models are now so good that, in many cases, there’s no longer a need for a human to oversee the package inspection and decision. There will probably be some borderline cases where the computer vision isn’t yet good enough to make the call, and those can escalate to a person. It’s like asking, how often do I need to call my manager?
Some operations are so critical that they will always require some form of human oversight. Think of deploying code into a production codebase where there may be contractual legal obligations or an impact on system reliability. But the question remains how much of the rest of the process can be automated.
