NVIDIA founder and chief executive officer Jensen Huang said the company is working with partners including Yotta and Tata Communications to build AI computing infrastructure in India. During his keynote address at the inaugural India edition of the NVIDIA AI Summit, being held in Mumbai, he said that India will have nearly 20 times more computing infrastructure by the end of this year than it had a year ago.
“In order to build an AI ecosystem in any country, there is a need to start with the ecosystem of AI infrastructure. We announced that Yotta, E2E and Tata Communications and our other partners are joining us to build fundamental computing infrastructure in India. In just one year, we will have nearly 20 times more compute here in India than we had just a year ago,” said Huang.
NVIDIA is hosting its first AI summit in India at the Jio World Convention Centre, Mumbai. The summit, from October 23-25, focuses on building AI infrastructure with NVIDIA GPUs, advancing Indian language models, supporting startup innovation, and upskilling developers for the AI-driven future.
Discussing the significance of large language models as AI operating systems, Huang said that the company is currently building Hindi LLMs with its partners. He added that Hindi remains a focus for the company, given the country's many dialects, which he noted change roughly every 1,500 km.
“This is the hardest language-model region in the world. Once India figures out how to build a Hindi LLM, then it can be figured out for every other country,” said Huang.
NVIDIA, with a market cap nearing $4 trillion, is leading the AI-driven tech boom, commanding about 88% of the global GPU market.
Huang shared that over five lakh (500,000) developers have joined the NVIDIA Developer Program, with over 10,000 engineers in India.
Huang unveiled the company’s NeMo suite, designed to build AI agents that help employees become "super-employees." He said that the development of AI is driven by two scaling laws: one for training, where more data requires larger models, and another for inference, which demands faster computation.
NVIDIA’s Blackwell GPUs are in high demand for their ability to generate tokens at high speed. Huang noted that each year both data and model sizes roughly double, requiring a fourfold increase in computational power. This scaling has shown that AI becomes more intelligent as training scale increases, driving continuous improvements in performance.
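The arithmetic behind the "doubling data and model size needs four times the compute" claim can be sketched with a widely used heuristic from the scaling-law literature (not stated in the keynote): training compute is roughly proportional to parameter count times training tokens, often approximated as C ≈ 6·N·D FLOPs. The function name and the 7B/1T figures below are illustrative assumptions, not NVIDIA numbers.

```python
# Hedged sketch: the C ~ 6*N*D training-compute heuristic, where N is the
# parameter count and D is the number of training tokens. Doubling both N
# and D quadruples C, matching the fourfold increase Huang described.

def training_flops(params: float, tokens: float) -> float:
    """Approximate training compute in FLOPs via the 6*N*D rule of thumb."""
    return 6 * params * tokens

# Illustrative numbers (assumed, not from the keynote):
c1 = training_flops(7e9, 1e12)    # a 7B-parameter model on 1T tokens
c2 = training_flops(14e9, 2e12)   # both model size and data doubled
print(c2 / c1)  # -> 4.0
```

The takeaway is that compute demand grows multiplicatively, not additively, which is why yearly doublings of data and model size translate into a 4x jump in required hardware.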
Meanwhile, Tata Communications has announced a collaboration with NVIDIA to build AI cloud infrastructure in India using NVIDIA's Hopper GPUs. The partnership will integrate NVIDIA software such as Omniverse, Isaac, and NIM microservices into Tata’s AI Cloud offerings, supporting simulation, automation, and data management for businesses, according to a BSE filing.
Tata Communications will begin large-scale deployment of the Hopper GPUs by the end of 2024, followed by further expansion in 2025 with Blackwell GPUs. The initiative will provide businesses access to the NVIDIA AI Enterprise platform, enabling the development of AI applications, models, and real-time edge computing solutions.
The announcement follows the government's approval of the ₹10,372-crore IndiaAI Mission in March. In August, bids were invited from data centres and cloud providers to offer high-speed AI computing infrastructure, including GPUs and TPUs, to academia, startups, researchers, and government bodies. The mission aims to deploy over 10,000 GPUs under a public-private partnership to strengthen India's AI ecosystem.