
A Booz Allen interview with NVIDIA founder and CEO Jensen Huang about the future of AI and computing.

Raising the Stakes for Accelerated Computing

A Q&A with Jensen Huang, founder and CEO of NVIDIA

Accelerated computing and generative AI (GenAI) are the most transformative technologies of our time. They are enabling leaders at federal agencies and private companies to think differently about how they tackle their organizations’ biggest challenges, and as these technologies continue to evolve and expand their capabilities, the question on the minds of leaders across the world is, “What will AI allow us to do next?”

As the founder and CEO of NVIDIA, Jensen Huang is one of the pioneers of the AI revolution. NVIDIA’s computing platforms are at the forefront of accelerated computing, powering a wide range of applications across every industry. NVIDIA is also innovating in other key areas, such as high-performance computing, networking, robotics, and physically accurate digital twins.

We spoke with Jensen about the future of large language models (LLMs), why the federal government needs to become an AI practitioner, and why persevering through pain is part and parcel of achieving true innovation in any field.

You founded and now run a company at the center of the AI revolution. What are you most excited about for the next era of computing?

For the first time, the computer has moved beyond just a tool to become a generator of intelligence and skills. We're entering an era where computers don't simply process data but create new knowledge, solve complex problems, and augment human capabilities in ways we've never seen before. With accelerated computing and AI, we're building machines that can understand and reason; your computer will now actively generate skills and perform tasks.

These new systems are AI factories, generating tokens of intelligence at massive scale and transforming industries by continuously learning, reasoning, and evolving. Computers are becoming a continuous, dynamic force, producing intelligence all the time, whether we interact with them or not.

This shift represents a fundamental transformation in computing, where AI becomes a collaborator, an assistant, and even a creator alongside humans. It is ushering in a new industrial revolution, and it’s all here today for us to build on. Whether it’s simulating weather to inform climate policy, accelerating drug discovery for faster cures, or enhancing cybersecurity through real-time data processing, AI is driving innovation across every sector while becoming more energy efficient.

NVIDIA is reimagining the entire computing environment, and there’s a concept you call the “three-computer problem.” Through this idea, what is your vision for computing?

Let’s break that down into two components to better understand the problem and the opportunity.

The first and most important part is creating an ecosystem where AI becomes integral to our world, seamlessly blending the digital and physical realms. The second part, let’s call it the “three-computer problem,” describes how we’re bringing the new wave of AI, which we describe as “physical AI,” to life within that ecosystem. This triad of computing power—one to create, one to simulate, and one to run AI—represents a fundamental shift in how we approach problem-solving and innovation.

The first computer is the AI training and inference system. This is where massive AI models are developed and trained using accelerated computing. These models are then deployed for real-time inference across various industries, driving everything from language models to autonomous systems. This is part of our DGX platform. Together with NVIDIA NIM microservices, these solutions make it easier for organizations to deploy AI. For example, companies like ServiceNow use NIM with their federal customers to create better internal management systems.
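As an illustration of how this lowers the barrier to deployment: NIM microservices for language models expose an OpenAI-compatible API, so querying a deployed model can look like a few lines of standard client code. The sketch below is illustrative only; the endpoint URL and model identifier are placeholders, not a real deployment.

```python
# Minimal sketch: querying a NIM-style, OpenAI-compatible inference endpoint.
# The base_url and model identifier are illustrative placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local NIM endpoint
    api_key="not-used-locally",           # local deployments typically ignore the key
)

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",   # example model identifier
    messages=[
        {"role": "system", "content": "You are an internal IT help-desk assistant."},
        {"role": "user", "content": "Summarize the open tickets assigned to my team."},
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)
```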

The second computer is the simulation environment, and we call that NVIDIA Omniverse. In Omniverse, we simulate physical worlds with unprecedented precision, enabling us to design, test, and train AI in virtual replicas of the real world. This allows us to simulate complex systems, from autonomous vehicles to factory robots, in digital twins that perfectly mirror physical environments. This is essential for safety, efficiency, and scalability, allowing AI to be tested in virtual worlds before interacting with the physical one.

The third computer is the edge device, which could be our Jetson platform or NVIDIA RTX laptops and workstations, where AI meets the real world. These autonomous machines, such as robots, drones, self-driving cars, and so on, operate in the physical world using the AI models trained on the first computer and tested in the simulated environments of the second. 

In conversations about AI, the federal government isn’t usually the first thing people think about. But AI is embedded in some of our nation’s most critical missions—in many areas, more significantly than in private industries. What do you see as next on the horizon for federal innovators who are operating AI in high-risk environments?

The federal government has always been an early tester and even creator of technology. Federal agencies have the unique opportunity to set the standard for AI operating in high-risk environments. The next phase is about scaling AI to make faster, more precise decisions while ensuring these systems are transparent, secure, and accountable.

Cybersecurity is another hugely important area. The NVIDIA NIM Agent Blueprint for container security provides a powerful tool for organizations to safeguard critical infrastructure through real-time threat detection and analysis. It's really incredible. The blueprint can help cybersecurity developers reduce threat response times from days to seconds, a huge leap forward for security.

The convergence of AI, accelerated computing, and simulation, such as digital twins, is already in play but will become increasingly important when operating in high-risk environments. By simulating environments like regional climates or critical infrastructure, agencies can safely test AI systems before deploying them in the real world. This helps reduce risk and increase reliability.

A perfect example is what was announced at our AI Summit in Washington, DC, with MITRE and Mcity at the University of Michigan. Both organizations are using NVIDIA Omniverse to safely validate autonomous systems in both virtual and physical environments. By creating a repeatable and reproducible digital test bed for mission-critical environments, federal innovators can accelerate innovation while ensuring safety before deploying in the real world. 

There is a unique urgency for federal agencies to accelerate AI adoption. What are some of the barriers that stand in the way of quickly integrating AI into mission-critical work?

First, every agency within the U.S. government needs to become an AI practitioner, not just an AI governor. We’re going to need to use AI for all mission-critical work, including building out new AI algorithms to advance our country.

Second, we need to increase the infrastructure needed to fully support AI. The U.S. should be the largest investor in AI on the planet; we literally cannot afford not to be. The U.S. should build a supercomputer to work on our moonshot projects like finding a cure for cancer. We work with many countries to build out their sovereign AI infrastructure and supercomputers. In the U.K., we have Cambridge-1, an AI supercomputer that accelerates research in healthcare and life sciences, helping pharma companies and research institutions advance drug discovery, genomics, and medical imaging.

And finally, we need the U.S. to be the most attractive country for every AI researcher to come to. Not only that, but we must also make it a national priority to upskill and educate our workforce on how to use AI. We need to be the pacesetter here. Every organization, especially within the federal government, will be transformed by AI.

What can the private sector do to bring the best technologies and capabilities at scale to the federal government?

We’re here to help the federal government with whatever it is they need. The private sector’s responsibility is to help lift, educate, and bring our expertise and innovation to help our government scale. We collaborate with many agencies, such as the National Institutes of Health (NIH) and the National Oceanic and Atmospheric Administration (NOAA), to help deploy the latest technology, whether it’s creating a digital twin or developing a NIM for protein discovery. By working together, we can translate breakthroughs in AI, simulation, and high-performance computing into real-world applications that strengthen national resilience, improve decision making, and drive efficiency at every level of government. 

Over the past 10 years, most large-scale IT systems have moved to service-oriented and microservice architecture. Now, the future appears to be all about agent-oriented architecture, in which AI agents work with and bind to other agents. Can you talk about how you see this evolving?

At NVIDIA, we call this “agentic AI.” The first wave of generative AI was built on LLMs and is good at providing an output in response to a prompt. You ask it a question, and it gives you an answer.

This next era will be agentic AI, where AI systems can reason through many scenarios, interact with other AI agents, and even take action on your behalf based on the information they have. In the future, these systems will look more like employees in an organization. Agentic AI is already transforming industries by automating complex tasks, freeing people to focus on areas that maximize their talents.

Many organizations are excited about the power of agentic AI, but they need help figuring out how to harness its potential. NVIDIA is simplifying and accelerating agentic AI with our partners, who are using NIM Agent Blueprints to help customers build agentic AI systems. One day, these systems will be able to operate autonomously and support responsible AI behavior with minimal human oversight.
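To make the pattern concrete, here is a minimal sketch of an agentic loop (a sketch of the general technique, not NVIDIA’s Blueprints): a planner, standing in for an LLM call, chooses a tool, the agent observes the result, and the loop ends when the planner decides it can answer. The tool and planner logic are hypothetical.

```python
# Minimal agentic-AI sketch: plan -> act (call a tool) -> observe -> finish.
# The planner is a stand-in for an LLM call; the tool returns fake data.

def lookup_ticket_count(team: str) -> str:
    return f"{team} has 12 open tickets."  # hypothetical data for illustration

TOOLS = {"lookup_ticket_count": lookup_ticket_count}

def planner(goal: str, observations: list[str]) -> dict:
    """Stand-in for an LLM that decides the next step toward the goal."""
    if not observations:
        return {"action": "lookup_ticket_count", "arg": "platform team"}
    return {"action": "finish", "arg": f"Summary for '{goal}': {observations[-1]}"}

def run_agent(goal: str, max_steps: int = 5) -> str:
    observations: list[str] = []
    for _ in range(max_steps):
        step = planner(goal, observations)
        if step["action"] == "finish":
            return step["arg"]
        observations.append(TOOLS[step["action"]](step["arg"]))
    return "Stopped without finishing."

print(run_agent("summarize my team's open tickets"))
```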

There is a race to build more powerful LLMs even as some have begun to question their economic viability. What should we expect from LLMs in the future?

We’re moving beyond just text-based LLM interactions into the era of multimodal LLMs and agentic AI, where models will work together to understand and respond to a combination of text, speech, and images. This will make AI far more intuitive and contextually aware. Imagine being able to ask a model not only about a written document but also to analyze and summarize a graph, interpret a chart, or even interact with an image—and have the AI take action to complete the next steps in your project. This opens endless possibilities for enterprises across industries.

We’ll also see breakthroughs in how LLMs are customized and fine-tuned. Fine-tuning allows companies to tailor these foundation models to their specific needs securely and efficiently, making them more applicable to real-world problems. I’m excited about what can be done with guardrails, which are necessary to protect us. I also wouldn’t be surprised if we have many AI applications that check each other and keep each other accountable.

Making this all possible is what we call retrieval-augmented generation, or RAG. RAG makes generative AI more precise because it pulls in real data for models in real time, making the output more relevant and accurate without having to constantly retrain the models. It’s a game changer for businesses looking to integrate AI into their workflows.
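A minimal sketch of the RAG flow, assuming a toy document store and a simple keyword-overlap scorer in place of the vector embeddings a real system would use: relevant passages are retrieved at query time and placed in the prompt, so the model answers from current data without retraining.

```python
# Minimal RAG sketch: retrieve relevant passages, then build an augmented prompt.
# Keyword overlap stands in for embedding similarity; documents are made up.

DOCUMENTS = [
    "Q3 cloud spend rose 14% due to new inference workloads.",
    "The security team rotated all service credentials in September.",
    "Headcount in the data platform group is flat year over year.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    q_terms = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q_terms & set(d.lower().split())), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    context = "\n".join(f"- {d}" for d in retrieve(query, DOCUMENTS))
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"

# The augmented prompt would then go to a generative model for the final answer.
print(build_prompt("Why did cloud spend increase in Q3?"))
```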

The future of LLMs is not just about size or scale; it’s about versatility, accuracy, and integration. We’ll see models driving research forward and evolving into specialized language models for niche tasks, helping enterprises solve complex problems in secure, customizable ways. AI is quickly becoming an essential tool in every industry, and both large and specialized models are crucial to the next wave of AI.

If you sat down with an engineering student today—or a group of summer interns at NVIDIA—what would you tell them?

I tell this generation that they are part of one of the biggest shifts in technology since IBM introduced the System/360 more than 60 years ago. The work ahead is challenging and incredibly meaningful because it’s never been done before. The work they choose to do will redefine industries and shape the future of humanity. AI, accelerated computing, robotics, and simulation will revolutionize everything from healthcare to climate science.

None of this will be easy, but nothing worthwhile ever is. Persevering through pain and suffering is part of the journey. At NVIDIA, we’ve experienced lots of setbacks and failures. We’ve become extremely resilient because of it, and we’re a better company because of all the failures we’ve overcome.

Meet the Expert

Jensen Huang founded NVIDIA in 1993 and has served since its inception as president, chief executive officer, and a member of the board of directors. Huang has been elected to the National Academy of Engineering and is a recipient of the Semiconductor Industry Association’s highest honor, the Robert N. Noyce Award; the IEEE Founders Medal; the Dr. Morris Chang Exemplary Leadership Award; and honorary doctorate degrees from National Chiao Tung University, National Taiwan University, and Oregon State University. He holds a bachelor of science degree in electrical engineering from Oregon State University and a master of science degree in electrical engineering from Stanford.

VELOCITY MAGAZINE

Booz Allen's annual publication dissecting issues at the center of mission and innovation.



