On Thursday, Box kicked off its BoxWorks developer conference by unveiling a new set of AI features, building agentic AI into the backbone of the company's products.
The slate was heavier on product launches than previous conferences, reflecting the increasingly fast pace of AI development: Box launched AI Studio last year, followed by a new set of data-extraction agents in February and deep research and search capabilities in May.
Now, the company is rolling out a new system called Box Automate, which acts as a kind of operating system for AI agents, breaking workflows down into segments that can be scaled up with AI where needed.
I spoke with CEO Aaron Levie about the company's approach to AI and the tricky business of competing with foundation model companies. Unsurprisingly, he was bullish on the possibilities of AI agents in the modern workplace, but he was also clear-eyed about the limitations of current models and how existing technology can manage those limitations.
This interview has been edited for length and clarity.
TechCrunch: You're launching a bunch of AI products today, so I want to start by asking about the bigger vision. Why build AI agents into a cloud content management service?
Aaron Levie: What we think about all day, and what our focus at Box is, is how work is changing because of AI. Most of AI's impact right now is in workflows that involve unstructured data. We've long been able to automate anything that deals with structured data going into a database. If you think about CRM systems, ERP systems, and HR systems, that space has had decades of automation. But what we've never been able to automate is anything that touches unstructured data.
Think about every legal review process, every marketing asset management process, every M&A transaction review. All of these workflows deal with lots of unstructured data. People have to review that data, make updates, make decisions, and so on. We've never been able to bring much automation to those workflows, because even though software could orchestrate them, computers couldn't actually read the documents or look at the marketing assets.
So for us, AI agents mean that, for the first time, we can actually put all of that unstructured data to work.
TC: What about the risks of deploying agents in a business context? Some customers must be nervous about unleashing something like this on sensitive data.
Levie: What we've heard from our customers is that every time they run a workflow, they want to know the agent will behave more or less the same way at the same point in that workflow. They don't want the agent making compounding mistakes, where it starts going off the rails a hundred submissions in.
It's really important that the agent has proper boundary points, where it starts and where it hands off to the rest of the system. For every workflow, there's a question of what needs deterministic guardrails and what can be fully agentic and nondeterministic.
What Box Automate lets you do is decide how much work each agent should take on before handing off to another agent. So there might be a submission agent, then a separate review agent, and so on. Essentially, you can deploy AI agents at whatever scale fits any workflow or business process in your organization.

TC: What problems does splitting up the workflow prevent?
Levie: Even with the most sophisticated and capable agentic systems, like Claude Code, we've already seen some limitations. At some point in a task, the model runs out of room in its context window and stops making good decisions. There are no free lunches in AI right now. You can't have long-running agents with unlimited context windows following every task in your business. So you have to split up the workflow and use sub-agents.
I think we're in the context age of AI. What AI models and agents need is context, and the context they need to operate on sits inside your unstructured data. So our entire system is designed around understanding what context we can provide so that AI agents can run as effectively as possible.
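The pattern Levie describes, splitting one long-running agent into bounded stages, can be illustrated with a minimal sketch. All names here are hypothetical, not Box's actual API: each stage's handler sees only a bounded slice of the document and passes forward a small structured result rather than the full transcript.

```python
# Hypothetical sketch of splitting a workflow into sub-agents with
# bounded context, per the idea described above. Handlers stand in
# for agent calls; names are illustrative, not a real Box API.
from dataclasses import dataclass


@dataclass
class Stage:
    name: str
    handler: callable       # deterministic step or agent call
    max_context_chars: int  # hard bound on what this stage may see


def run_workflow(stages, document):
    """Run each stage on a truncated view of the document, passing
    forward only structured results, never the raw running transcript."""
    results = {}
    for stage in stages:
        context = document[:stage.max_context_chars]  # bound the context
        results[stage.name] = stage.handler(context, results)
    return results


# Toy handlers standing in for real agent calls
def intake(ctx, prior):
    return {"length": len(ctx)}


def review(ctx, prior):
    return {"approved": prior["intake"]["length"] > 0}


doc = "unstructured contract text " * 100
out = run_workflow(
    [Stage("intake", intake, 500), Stage("review", review, 200)],
    doc,
)
print(out["review"]["approved"])  # each agent saw only a bounded slice
```

The key design choice is that the inter-stage interface is small structured data, so context never accumulates across the whole run.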
TC: There's a larger debate in the industry about the merits of big, powerful frontier models versus smaller, more reliable ones. Does this put you on the smaller-model side of that debate?
Levie: We should probably be clear: there's nothing in our system that prevents a task from being arbitrarily long or complicated. What we're trying to do is create the right guardrails so customers can decide how they want a task executed.
We don't have a particular philosophy about where people should sit on that continuum. We're trying to design an architecture that's future-proof. We designed it so that as models improve and agent capabilities improve, all of those benefits flow directly into the platform.
TC: Another concern is data control. With models trained on so much data, there's a real fear that sensitive data could be regurgitated or misused. How do you guard against that?
Levie: This is where a lot of AI deployments go wrong. People say, "Hey, this is easy. I'll give the AI model access to all of my unstructured data, and it'll answer people's questions." And then it starts giving answers based on data the user can't, or shouldn't, access. You need a very strong layer of access controls, data security, permissions, data governance, compliance, everything.
So we benefit from decades of investment in building a system that handles exactly that problem: making sure only the right people can access each piece of data in the company. When an agent answers a question, we know deterministically that it can't pull in data the person doesn't have access to. That's fundamentally built into our system.
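The deterministic guarantee Levie describes amounts to filtering documents by the asking user's permissions before anything reaches the model. Here is a minimal sketch of that pattern; the ACL, document store, and function names are all illustrative, not Box's implementation.

```python
# Illustrative sketch: a deterministic permission filter that runs
# before retrieval, so the agent can only see what the user can see.
# All data and names here are hypothetical.

ACL = {  # access-control list: document id -> users allowed to read it
    "contract.pdf": {"alice", "bob"},
    "salaries.xlsx": {"alice"},
}

DOCS = {
    "contract.pdf": "Standard services agreement...",
    "salaries.xlsx": "Confidential compensation data...",
}


def retrieve_for(user, query):
    """Return only documents the user is permitted to read.
    The permission check is plain deterministic code, not a model call."""
    return {
        doc_id: text
        for doc_id, text in DOCS.items()
        if user in ACL.get(doc_id, set())
    }


def answer(user, query):
    allowed = retrieve_for(user, query)
    # Only `allowed` would ever be passed to the model as context,
    # so the agent cannot leak a document the user couldn't open.
    return sorted(allowed)


print(answer("bob", "what do we pay people?"))  # ['contract.pdf']
```

Because the filter runs outside the model, its behavior does not depend on how the model is prompted.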
TC: Earlier this week, Anthropic released a new feature for uploading files directly to Claude.ai. That's a long way from the kind of file management Box does, but you must be thinking about potential competition from foundation model companies. How do you approach that strategically?
Levie: When you consider what companies need to deploy AI at scale, they need security, permissions, and control. They need a user interface, strong APIs, and a choice of AI models. One day one AI model will be better for a given use case than another, and that can change, so you don't want to be locked into a particular platform.
So what we've built is a system that effectively has all of those capabilities. It has the storage, the security, the permissions, the vector embeddings, and it connects to all the major AI models out there.
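Avoiding lock-in to one model vendor, as Levie describes, is usually done by coding workflows against a common interface and swapping providers behind it. A minimal sketch, with hypothetical provider classes standing in for real model APIs:

```python
# Hypothetical sketch of a model-agnostic layer: workflow code
# depends only on the ModelProvider interface, so the underlying
# vendor can be swapped via configuration. Names are illustrative.
from abc import ABC, abstractmethod


class ModelProvider(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str: ...


class ProviderA(ModelProvider):
    def complete(self, prompt):
        return f"[model-a] {prompt}"


class ProviderB(ModelProvider):
    def complete(self, prompt):
        return f"[model-b] {prompt}"


def summarize(provider: ModelProvider, text: str) -> str:
    # The workflow never names a vendor; switching models is a
    # one-line configuration change, not a code rewrite.
    return provider.complete(f"Summarize: {text}")


print(summarize(ProviderA(), "Q3 contract"))  # [model-a] Summarize: Q3 contract
```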