Indian AI startup Sarvam on Tuesday unveiled a new generation of large language models. The company is betting that its smaller, more efficient open-source AI models can capture some market share from more expensive systems offered by much larger American and Chinese rivals.
The announcement, made at the India AI Impact Summit in New Delhi, is in line with the Indian government’s efforts to reduce dependence on foreign AI platforms and tailor models to local languages and use cases.
Sarvam said the new lineup includes 30-billion- and 105-billion-parameter language models, a text-to-speech model, a speech-to-text model, and a vision model for analyzing documents. These represent a significant upgrade from the company’s 2-billion-parameter Sarvam 1 model released in October 2024.
The 30-billion-parameter and 105-billion-parameter models employ a mixture-of-experts architecture, which activates only a subset of all parameters for each input, significantly reducing computing costs, Sarvam said. The 30B model supports a 32,000-token context window intended for use in real-time conversations, while the larger model provides a 128,000-token window for more complex multi-step inference tasks.
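The idea behind a mixture-of-experts layer can be shown in a few lines of code. The toy sketch below (not Sarvam’s code; the sizes, router, and "experts" are all made-up illustrations) routes each token to only the top-k of many expert sub-networks, so most parameters sit idle on any given forward pass:

```python
# Illustrative mixture-of-experts routing (hypothetical toy example, not
# Sarvam's implementation): only TOP_K of NUM_EXPERTS experts run per token.
import math
import random

random.seed(0)

DIM = 4          # toy hidden size
NUM_EXPERTS = 8  # total experts (total parameters)
TOP_K = 2        # experts actually executed per token (active parameters)

# Each "expert" is a toy weight vector; the router scores experts per token.
experts = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(NUM_EXPERTS)]
router = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(NUM_EXPERTS)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_forward(token):
    """Route one token to its top-k experts and gate-mix their outputs."""
    scores = [dot(w, token) for w in router]
    top = sorted(range(NUM_EXPERTS), key=lambda i: scores[i], reverse=True)[:TOP_K]
    gates = softmax([scores[i] for i in top])
    out = [0.0] * DIM
    for gate, idx in zip(gates, top):
        # Only these TOP_K experts do any computation for this token.
        expert_out = [w * x for w, x in zip(experts[idx], token)]
        out = [o + gate * e for o, e in zip(out, expert_out)]
    return out, top

output, active = moe_forward([0.5, -0.2, 0.1, 0.9])
print(f"active experts: {sorted(active)} ({TOP_K} of {NUM_EXPERTS} total)")
```

In a real model each expert is a full feed-forward network rather than a weight vector, but the cost saving is the same in principle: compute scales with the number of active experts, not the total parameter count.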

Sarvam said the new models were trained from scratch rather than fine-tuned from an existing open-source model. The 30B model was pre-trained on approximately 16 trillion text tokens, while the 105B model was trained on trillions of tokens across multiple Indian languages.
These models are designed to support real-time applications such as voice-based assistants and chat systems in Indian languages, the startup said.

The company said the model was trained using computing resources provided under India’s government-backed IndiaAI Mission, with infrastructure support from data center operator Yotta and technical support from Nvidia.
Sarvam executives said the company plans to take a measured approach to scaling its models, focusing on real-world applications rather than raw model size.
Pratyush Kumar, co-founder of Sarvam, said at the launch: “We don’t want to scale blindly. We want to understand the tasks that really matter at scale and build for them.”
Sarvam said it plans to open-source the 30B and 105B models, but did not say whether it will also release training data or the full training code.
The company also outlined plans to build specialized AI systems, including coding-focused models, enterprise tools under a product called Sarvam for Work, and a conversational AI agent platform called Samvaad.
Founded in 2023, Sarvam has raised over $50 million in funding to date, with investors including Lightspeed Venture Partners, Khosla Ventures, and Peak XV Partners (formerly Sequoia Capital India).
