Erica Kollmann

Nvidia Q3 FY2026 Earnings Call Transcript

Nvidia Corporation

NVIDIA Corp. (NASDAQ:NVDA) released its third-quarter earnings report after Wednesday’s closing bell.

The transcript from the call is below:

This transcript is brought to you by Benzinga APIs. For real-time access to our entire catalog, please visit Benzinga APIs for a consultation.

Jensen Huang CEO

Thanks, Colette. There's been a lot of talk about an AI bubble. From our vantage point, we see something very different. As a reminder, Nvidia is unlike any other accelerator. We excel at every phase of AI, from pre-training and post-training to inference. And with our two-decade investment in CUDA-X acceleration libraries, we are also exceptional at science and engineering simulations, computer graphics, structured data processing and classical machine learning.

The world is undergoing three massive platform shifts at once, for the first time since the dawn of Moore's Law, and Nvidia is uniquely addressing each of the three transformations. The first transition is from CPU general-purpose computing to GPU-accelerated computing. As Moore's Law slows, the world has a massive investment in non-AI software, from data processing to science and engineering simulations, representing hundreds of billions of dollars in cloud computing spend each year. Many of these applications, which once ran exclusively on CPUs, are now rapidly shifting to CUDA. GPU-accelerated computing has reached a tipping point.

Secondly, AI has also reached a tipping point, transforming existing applications while enabling entirely new ones. In existing applications, generative AI is replacing classical machine learning in search ranking, recommender systems, ad targeting, click-through prediction and content moderation, the very foundations of hyperscale infrastructure. Meta's GEM, a foundation model for ad recommendations trained on large-scale GPU clusters, exemplifies this shift. In Q2, Meta reported over a 5% increase in ad conversions on Instagram and a 3% gain on Facebook feed, driven by the generative-AI-based GEM. Transitioning to generative AI represents substantial revenue gains for hyperscalers.

Now a new wave is rising: agentic AI systems capable of reasoning, planning and using tools, from coding assistants like Cursor and Claude Code, to radiology tools like Aidoc, legal assistants like Harvey, and AI chauffeurs like Tesla FSD and Waymo. These systems mark the next frontier of computing. The fastest-growing companies in the world today, OpenAI, Anthropic, xAI, Google, Cursor, Lovable, Replit, Cognition AI, OpenEvidence, Abridge, Tesla, are pioneering agentic AI. So there are three massive platform shifts. The transition to accelerated computing is foundational and necessary, essential in a post-Moore's-Law era. The transition to generative AI is transformational and necessary, supercharging existing applications and business models. And the transition to agentic and physical AI will be revolutionary, giving rise to new applications, companies, products and services.

As you consider infrastructure investments, consider these three fundamental dynamics. Each will contribute to infrastructure growth in the coming years. Nvidia is chosen because our singular architecture enables all three transitions, for any form and modality of AI, across all industries, across every phase of AI, across all of the diverse computing needs in a cloud, and from cloud to enterprise to robots. One architecture. Toshiya, back to you. We will now open the call for questions. Operator, would you please poll for questions?

Operator

Thank you. At this time I would like to remind everyone, in order to ask a question, press star, then the number one on your telephone keypad. We'll pause for just a moment to compile the Q&A roster. As a reminder, please limit yourself to one question. Thank you. Your first question comes from Joseph Moore with Morgan Stanley. Your line is open.

Morgan Stanley Analyst

Great, thank you. I wonder if you could update us. At GTC you talked about the $500 billion of revenue for Blackwell plus Rubin in '25 and '26, and at that time you talked about $150 billion of that already having been shipped. So as the quarter wrapped up, are those still kind of the general parameters, that there's $350 billion in the next, you know, 14 months or so? And I would assume over that time you haven't seen all the demand that there is, so is there any possibility of upside to those numbers as we move forward?

Jensen Huang CEO

Yeah, thanks, Joe. I'll start first with a response here on that. Yes, that's correct. We are working toward our $500 billion forecast and we are on track for that. We have finished some of the quarters, and now we have several quarters in front of us to take us through the end of calendar year '26. The number will grow, and I'm sure we will take on additional needs for compute that will be shippable by fiscal year '26. So we shipped $50 billion this quarter, but we would be remiss if we didn't say that we'll probably be taking more orders. For example, just today came our announcements with KSA, and that agreement in itself is 400,000 to 600,000 more GPUs over three years. Anthropic is also net new. So there's definitely an opportunity for us to have more on top of the $500 billion that we announced.
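
As a quick aid to the numbers in this exchange, here is a minimal back-of-envelope sketch in Python. It simply restates the figures quoted above; it is not company guidance.

```python
# Back-of-envelope arithmetic restating the figures quoted in this
# exchange (not guidance): ~$500B of Blackwell + Rubin revenue through
# calendar 2026, ~$150B already shipped as of GTC, ~$50B shipped this
# quarter.

total_forecast_b = 500   # Blackwell + Rubin, through end of CY2026, $B
shipped_at_gtc_b = 150   # cumulative shipments cited at GTC, $B
shipped_this_q_b = 50    # shipments this quarter, $B

remaining_after_gtc = total_forecast_b - shipped_at_gtc_b
remaining_now = remaining_after_gtc - shipped_this_q_b

print(f"Remaining after GTC: ~${remaining_after_gtc}B (the analyst's $350B)")
print(f"Remaining after this quarter: ~${remaining_now}B over the next ~14 months")
```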

Operator

The next question comes from CJ Muse with Cantor Fitzgerald. Your line is open.

Cantor Fitzgerald Analyst

Yeah, good afternoon. Thank you for taking the question. There's clearly a great deal of consternation around the magnitude of AI infrastructure buildouts and the ability to fund such plans and the ROI. Yet, at the same time, you're talking about being sold out; every stood-up GPU is taken. The world hasn't seen the enormous benefit yet from GB300, never mind Rubin, and Gemini 3 was just announced, with Grok 5 coming soon. And so the question is this: when you look at that as the backdrop, do you see a realistic path for supply to catch up with demand over the next 12 to 18 months, or do you think it can extend beyond that time frame?

Jensen Huang CEO

Well, as you know, we've done a really good job planning our supply chain. Nvidia's supply chain basically includes every technology company in the world. TSMC and their packaging, our memory vendors and memory partners, and all of our system ODMs have done a really good job planning with us, and we were planning for a big year. We've seen for some time the three transitions that I spoke about just a second ago: accelerated computing from general-purpose computing. And it's really important to recognize that AI is not just agentic AI. Generative AI is transforming the way hyperscalers do the work they used to do on CPUs. Generative AI made it possible for them to move search and recommender systems and ad recommendations and targeting. All of that has been moved to generative AI, and it's still transitioning.

And so whether you installed Nvidia GPUs for data processing, or you did it for generative AI for your recommender system, or you're building it for agentic chatbots and the type of AIs that most people see when they think about AI, all of those applications are accelerated by Nvidia. When you look at the totality of the spend, it's really important to think about each one of those layers. They're all growing; they're related, but not the same. But the wonderful thing is that they all run on Nvidia GPUs simultaneously, because the quality of the AI models is improving so incredibly, and adoption is growing across the different use cases, whether it's in code assistance, which Nvidia uses fairly exhaustively. And we're not the only one. I mean, the fastest-growing applications in history, the combination of Cursor and Claude Code and OpenAI's Codex and GitHub Copilot, these applications are the fastest-growing in history. And it's not just used by software engineers; because of vibe coding, it's used by engineers and marketers across companies, supply chain planners across companies. So I think that's just one example, and the list goes on, whether it's OpenEvidence and the work they do in healthcare, or the work being done in digital video editing at Runway. The number of really, really exciting startups taking advantage of generative AI and agentic AI is growing quite rapidly. Not to mention, we're all using it a lot more.

And so all of these exponentials are compounding. Not to mention, just today I was reading a text from Demis, and he was saying that pre-training and post-training are fully intact, and that Gemini 3 takes advantage of the scaling laws and got a huge jump in model performance. We're seeing all of these exponentials running at the same time. Just always go back to first principles and think about what's happening in each one of the dynamics that I mentioned before: general-purpose computing to accelerated computing, generative AI replacing classical machine learning, and of course agentic AI, which is a brand-new category.

Operator

The next question comes from Vivek Arya with Bank of America Securities. Your line is open.

Bank of America Analyst

Thanks for taking my question. I'm curious what assumptions you are making on Nvidia content per gigawatt in that $500 billion number, because we have heard numbers as low as $25 billion per gigawatt of content and as high as $30 or $40 billion per gigawatt. So I'm curious what power and what dollar-per-gigawatt assumptions you are making as part of that $500 billion number. And then longer term, Jensen, the $3 to $4 trillion in data center spend by 2030 was mentioned. How much of that do you think will require vendor financing, and how much of that can be supported by the cash flows of your large customers or governments or enterprises? Thank you.

Jensen Huang CEO

In each generation, from Ampere to Hopper, from Hopper to Blackwell, Blackwell to Rubin, our portion of the data center increases. The Hopper generation was probably something along the lines of $20 to $25 billion per gigawatt. The Blackwell generation, Grace Blackwell in particular, is probably around $30 billion, say $30 billion plus or minus. And then Rubin is probably higher than that. And in each one of these generations, the speedup is X factors, and therefore the customer's TCO improves by X factors. And the most important thing is, in the end, you still only have 1 gigawatt of power. A 1-gigawatt data center is 1 gigawatt of power, and therefore performance per watt, the efficiency of your architecture, is incredibly important. And the efficiency of your architecture can't be brute-forced; there's no brute-forcing about it. That 1 gigawatt, your performance per watt, translates directly, absolutely directly, to your revenues. Which is the reason why choosing the right architecture matters so much.
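
To make the per-gigawatt figures concrete, here is a rough Python sketch. The dollars-per-gigawatt ranges restate the call; the implied-capacity calculation is an illustration, not company guidance.

```python
# Back-of-envelope math on the per-gigawatt figures above. The $/GW
# ranges restate the call (Hopper ~$20-25B/GW, Grace Blackwell ~$30B
# plus or minus); the implied-capacity calculation is illustrative.

content_per_gw_b = {
    "Hopper": (20, 25),           # $B of Nvidia content per gigawatt
    "Grace Blackwell": (28, 32),  # "~$30B, plus or minus"
}

forecast_b = 500  # Blackwell + Rubin forecast through CY2026, $B

lo, hi = content_per_gw_b["Grace Blackwell"]
# At ~$30B of content per gigawatt, $500B of shipments corresponds to
# roughly 16 to 18 GW of AI data center capacity.
print(f"Implied capacity: ~{forecast_b / hi:.0f} to ~{forecast_b / lo:.0f} GW")
```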

Now, you know, the world doesn't have an excess of anything to squander. And so we use this concept called co-design across our entire stack, across the frameworks and models, across the entire data center, even power and cooling, optimized across the entire supply chain in our ecosystem. So each generation, our economic contribution will be greater, our value delivered will be greater. But the most important thing is that our energy efficiency, performance per watt, is going to be extraordinary, every single generation. With respect to continuing to grow, our customers' financing is up to them. We see the opportunity to grow for quite some time. And remember, today most of the focus has been on the hyperscalers. One of the things that is really misunderstood about the hyperscalers is that the investment in Nvidia GPUs not only improves their scale, speed and cost relative to general-purpose computing. That's number one, because Moore's Law scaling has really slowed. Moore's Law is about driving cost down; it's about the incredible deflationary cost of computing over time. But that has slowed. Therefore a new approach is necessary for them to keep driving the cost down, and going to Nvidia GPU computing is really the best way to do so.

The second is revenue boosting. In their current business models, recommender systems drive the world's hyperscalers. Whether it's watching short-form videos, recommending books, recommending the next item in your basket, recommending ads or recommending news, it's all about recommenders. The Internet has trillions of pieces of content. How could they possibly figure out what to put in front of you on your little tiny screen unless they have really sophisticated recommender systems to do so? Well, that has gone to generative AI. So for the first two things that I just said, the hundreds of billions of dollars of capex that's going to have to be invested is fully cash-flow funded. What is above it, therefore, is agentic AI. This revenue is net new, net-new consumption, but it's also net-new applications. I mentioned some of the applications before, and these new applications are also the fastest-growing applications in history. So I think you're going to see that once people start to appreciate what is actually happening under the water, if you will, versus the simplistic view of what's happening to capex investment, recognizing there are these three dynamics.

Then lastly, remember, we were just talking about the American CSPs. Each country will fund its own infrastructure. You have multiple countries, you have multiple industries. Most of the world's industries haven't really engaged with agentic AI yet, and they're about to. You know all the names of the companies we're working with, whether it's autonomous vehicle companies, or digital twins for physical AI and for factories, and the number of factories and warehouses being built around the world, or just the number of digital biology startups being funded so that we can accelerate drug discovery. All of those different industries are now getting engaged, and they're going to do their own fundraising. So don't just look at the hyperscalers as the way the future gets built out. You've got to look at the world, you've got to look at all the different industries, and enterprise computing is going to fund its own industry.

Operator

The next question comes from Ben Reitzes with Melius. Your line is open.

Melius Analyst

Hey, thanks a lot, Jensen. I wanted to ask you about cash. Speaking of half a trillion, you may generate about half a trillion in free cash flow over the next couple of years. What are your plans for that cash? How much goes to buybacks versus investing in the ecosystem, and how do you look at investing in the ecosystem? I think there's just a lot of confusion out there about how these deals work and your criteria for doing them, like the Anthropic and OpenAI deals, et cetera. Thanks a lot.

Jensen Huang CEO

Yeah, I appreciate the question. First, of course, we use cash to fund our growth. No company has grown at the scale that we're talking about while having the connections and the depth and breadth of supply chain that Nvidia has. The reason our entire customer base can rely on us is that we've secured a really resilient supply chain, and we have the balance sheet to support them. When we make purchases, our suppliers can take it to the bank. When we make forecasts and we plan with them, they take us seriously because of our balance sheet. We're not making up the offtake; we know what our offtake is. And because they've been planning with us for so many years, our reputation and our credibility are incredible. It takes a really strong balance sheet to do that, to support the level of growth, the rate of growth, and the magnitude associated with that. So that's number one.

The second thing: of course, we're going to continue to do stock buybacks. But with respect to the investments, this is really, really important work that we do. All of the investments that we've done so far are associated with expanding the reach of CUDA, expanding the ecosystem. If you look at the investments that we did with OpenAI, of course that's a relationship we've had since 2016; I delivered the first AI supercomputer ever made to OpenAI. And so we've had a close and wonderful relationship with OpenAI since then, and everything that OpenAI does runs on Nvidia today. All the clouds that they deploy in, whether it's training or inference, run Nvidia, and we love working with them. The partnership that we have with them is one where we can work even deeper from a technical perspective, so that we can support their accelerated growth. This is a company that's growing incredibly fast. And don't just look at what is said in the press; look at all the ecosystem partners and all the developers connected to OpenAI. They're all driving consumption of it, and the quality of the AI being produced is a huge step up from a year ago. The quality of response is extraordinary. So we invest in OpenAI for a deep partnership in co-development, to expand our ecosystem and support their growth. And of course, rather than giving up a share of our company, we get a share of their company, an investment in one of the most consequential, once-in-a-generation companies. And so I fully expect that investment to translate to extraordinary returns.

Now, in the case of Anthropic, this is the first time that Anthropic will be on Nvidia's architecture. Anthropic is the second most successful AI in the world in terms of total number of users, and in the enterprise they're doing incredibly well. Claude Code is doing incredibly well; Claude is doing incredibly well across the world's enterprises. And now we have the opportunity to have a deep partnership with them, bringing Claude onto the Nvidia platform. So what do we have now? Taking a step back: Nvidia's architecture, Nvidia's platform, is the singular platform in the world that runs every AI model. We run OpenAI, we run Anthropic, we run xAI. Because of our deep partnership with Elon and xAI, we were able to bring that opportunity to Saudi Arabia, to the KSA, so that Humain could also be a hosting opportunity for xAI. We run xAI, we run Gemini, we run Thinking Machines. Let's see, what else do we run? We run them all. Not to mention, we run the science models, the biology models, DNA models, gene models, chemical models, in all the different fields around the world. It's not just cognitive AI that the world uses; AI is impacting every single industry. Through the ecosystem investments that we make, we have the ability to partner deeply, on a technical basis, with some of the best and most brilliant companies in the world. We're expanding the reach of our ecosystem, and we're getting a share, an investment, in what will be a very successful company, oftentimes a once-in-a-generation company. That's our investment thesis.

Operator

The next question comes from Jim Schneider with Goldman Sachs. Your line is open.

Goldman Sachs Analyst

Good afternoon. Thanks for taking my question. In the past you've talked about roughly 40% of your shipments being tied to AI inference. I'm wondering, as you look forward into next year, where do you expect that percentage could go in, say, a year's time? And can you address the Rubin CPX product you expect to introduce next year, contextualize how big a piece of the overall TAM you expect it can take, and maybe talk about some of the target customer applications for that specific product? Thank you.

Jensen Huang CEO

CPX is designed for long-context types of workloads. With long context, basically, before you start generating answers, you have to read a lot. It could be a bunch of PDFs, it could be watching a bunch of videos, studying 3D images, and so on and so forth; you have to absorb the context. So CPX is designed for long-context types of workloads, and its perf per dollar is excellent and its perf per watt is excellent. Which made me forget the first part of the question.
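
For a sense of why long-context "prefill" work is so compute-heavy, here is a rough illustrative sketch in Python. The quadratic attention cost is standard transformer math; the model dimensions below are invented for illustration and are not Rubin CPX specifications.

```python
# Back-of-envelope illustration of long-context ("prefill") compute,
# the workload Jensen says CPX targets. Parameter values are made up
# for illustration, not Rubin CPX or any real model's specs.

def attention_flops(context_tokens: int, d_model: int, n_layers: int) -> float:
    """Rough FLOPs to form attention scores over a full prefill:
    each layer builds an (n x n) attention matrix over d_model widths."""
    return 2.0 * n_layers * d_model * context_tokens ** 2

for n in (8_000, 128_000, 1_000_000):
    flops = attention_flops(n, d_model=8192, n_layers=80)
    print(f"{n:>9,} tokens -> ~{flops:.2e} attention FLOPs")

# Growing the context 125x (8K -> 1M tokens) grows the attention work
# ~15,000x, so "reading a bunch of PDFs" dominates the compute bill
# before a single answer token is generated.
```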

UNKNOWN

Inferencing.

Jensen Huang CEO

Oh, inference. Yeah. There are three scaling laws that are scaling at the same time. The first scaling law, called pre-training, continues to be very effective. The second is post-training. Post-training has basically found incredible algorithms for improving an AI's ability to break a problem down and solve it step by step, and post-training is scaling exponentially. Basically, the more compute you apply to a model, the smarter it is, the more intelligent it is. And then the third is inference. With inference, because of chain of thought, because of reasoning capabilities, AIs are essentially thinking before they answer. The amount of computation necessary as a result of those three things has gone completely exponential. I think it's hard to know exactly what the percentage will be at any given point in time. But of course our hope is that inference is a very large part of the market, because if inference is large, it suggests that people are using it in more applications and using it more frequently. We should all hope for inference to be very large. This is where Grace Blackwell is just an order of magnitude better, more advanced than anything in the world. The second best platform is the H200, and it's very clear now that GB200 and GB300, because of NVLink 72, the scale-up network that we have achieved, and you saw it, and Colette talked about it, in the SemiAnalysis benchmark, the largest single inference benchmark ever done, GB200 NVLink 72 is 10 to 15 times higher performance. And so that's a big step up. It's going to take a long time before somebody is able to take that on, and our leadership there is surely multi-year. I'm hoping that inference becomes a very big deal. Our leadership in inference is extraordinary.
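
As a toy illustration of how reasoning inflates inference compute, consider the sketch below. The token counts are invented for illustration and are not measurements of any model.

```python
# Toy illustration: decode-side compute scales roughly with the number
# of generated tokens, so chain-of-thought reasoning multiplies the
# work per query. Token counts here are invented, not measured.

one_shot_tokens = 300     # a direct, single-pass answer
reasoning_tokens = 6_000  # the same query with step-by-step "thinking"

ratio = reasoning_tokens / one_shot_tokens
print(f"Per-query generation compute grows ~{ratio:.0f}x with reasoning")

# Combine that with more users and more queries per user, and total
# inference demand compounds multiplicatively.
queries_per_day = 1_000_000
print(f"Daily generated tokens: {queries_per_day * reasoning_tokens:,}")
```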

Operator

The next question comes from Timothy Arcuri with UBS. Your line is open.

UBS Analyst

Thanks a lot, Jensen. Many of your customers are pursuing behind-the-meter power, but what's the single biggest bottleneck that worries you, that could constrain your growth? Is it power, or maybe financing, or maybe something else like memory or even foundry capacity? Thanks a lot.

Jensen Huang CEO

Well, these are all issues, and they're all constraints. And the reason for that is, when you're growing at the rate that we are and at the scale that we are, how could anything be easy? What Nvidia is doing obviously has never been done before. We've created a whole new industry. On the one hand, we are transitioning computing from general-purpose, classical or traditional computing to accelerated computing and AI. On the other hand, we created a whole new industry called AI factories: the idea that in order for software to run, you need these factories to generate every single token, instead of retrieving information that was pre-created. This whole transition requires extraordinary scale, all the way from the supply chain. Of course, we have much better visibility and control over the supply chain, because obviously we're incredibly good at managing our supply chain, and we have great partners that we've worked with for 33 years. So on the supply chain part of it, we're quite confident. Now, looking beyond our supply chain, we've established partnerships with so many players in land and power and shells, and of course financing. None of these things are easy, but they're all tractable, all solvable things. And the most important thing we have to do is a good job planning. We plan up the supply chain and down the supply chain. We have established a whole lot of partners, so we have a lot of routes to market. And very importantly, our architecture has to deliver the best value to the customers that we have. At this point, I'm very confident that Nvidia's architecture is the best performance per TCO and the best performance per watt, and therefore, for any amount of energy that is delivered, our architecture will drive the most revenues. And as for the increasing rate of our success, I think we're more successful this year at this point than we were last year at this point. The number of customers coming to us, and the number of platforms coming to us after they've explored others, is increasing, not decreasing. So I think all the things that I've been telling you over the years are really coming true, or becoming evident.

Operator

The next question comes from Stacy Rasgon with Bernstein Research. Your line is open.

Bernstein Analyst

Colette, I had some questions on margins. You said for next year you're working to hold them in the mid-70s. So first of all, what are the biggest cost increases? Is it just memory, or is it something else? And what are you doing to work toward that goal? How much of it is cost optimizations versus pre-buys versus pricing? And then also, how should we think about opex growth next year, given that revenues seem likely to grow materially from where we're running right now?

Colette Kress CFO

Thanks, Stacy. Let me start with where we are in the current fiscal year. Remember, earlier this year we indicated that through cost improvements and mix, we would exit the year with our gross margins in the mid-70s. We've achieved that, and we're getting ready to also execute on that in Q4. So now it's time for us to communicate where we're working for next year. Next year, there are input prices, well known across the industry, that we need to work through. And our systems are by no means easy to build; there are a tremendous number of components and many different parts to think about. So we're taking all of that into account. But we do believe that if we work again on cost improvements, cycle time and mix, we will work to try to hold our gross margins in the mid-70s. So that's our overall plan for gross margin. Your second question is around opex. Right now, our goal in terms of opex is to really make sure that we are innovating with our engineering teams and all of our business teams to create more and more systems for this market. As you know, we have a new architecture coming out right now, and that means they are quite busy in order to meet that goal. So we're going to continue to see investments in innovating more and more, both our software and our systems, and the hard work to do so. I'll turn it to Jensen if he wants to add a couple more comments.

Jensen Huang CEO

Yeah, I think that’s spot on. I think the only thing that would add is remember that we plan, we forecast, we plan and we negotiate with our supply chain well in advance. Our supply chain have known for quite a long time our requirements and they’ve known for quite a long time our demand. And we’ve been working with them and negotiating with them for quite a long time. And so, so I think the, the recent surge obviously quite significant. But remember, our supply chain has been working with us for a very long time. And so, so in many cases we’ve secured a lot, a lot of supply for ourselves because, you know, obviously they’re working with the largest company in the world in doing so. And we’ve also been working closely with them on the financial aspects of IT and securing forecasts and plans and so on and so forth. So I think all of that has worked out well for us.

Operator

Your final question comes from the line of Aaron Rakers with Wells Fargo. Your line is open.

Wells Fargo Analyst

Yeah, thanks for taking the question. Jensen, this question is for you. As you think about the Anthropic deal that was announced, and just the overall breadth of your customers, I'm curious if your thoughts around the role that AI ASICs or dedicated XPUs play in these architecture buildouts have changed at all. I think you've been fairly adamant in the past that some of these programs never really see deployments, but I'm curious if we're at a point where maybe that's even changed more in favor of the GPU architecture. Thank you.

Jensen Huang CEO

Yeah, thank you very much, and I really appreciate the question. So first of all, you're not competing against a company; you're competing against teams. And there just aren't that many teams in the world who are extraordinary at building these incredibly complicated things. Back in the Hopper and Ampere days, we would build one GPU; that was the definition of an accelerated AI system. But today we've got to build entire racks and three different types of switches: a scale-up, a scale-out and a scale-across switch. And it takes a lot more than one chip to build a compute node anymore. Everything about that computing system has changed, because AI needs to have memory. AI didn't used to have memory at all; now it has to remember things, and the amount of memory and context it has is gigantic. The memory architecture implications are incredible. And the diversity of models, from mixture-of-experts to dense models, to diffusion models, to autoregressive models, not to mention biological models that obey the laws of physics. The list of different types of models has exploded in the last several years. So the challenge is that the complexity of the problem is much higher, and the diversity of AI models is incredibly large.

And so this is where, if I may, I'll say the five things that make us special. The first thing that makes us special is that we accelerate every phase of that transition. CUDA and the CUDA-X libraries let us handle the move from general-purpose to accelerated computing; we're incredibly good at generative AI, and we're incredibly good at agentic AI. Every single phase, every single layer of that transition, we are excellent at. You can invest in one architecture and use it across the board, without worrying about changes in the workload across those three phases. That's number one. Number two, we're excellent at every phase of AI. Everybody's always known that we're incredibly good at pre-training, and we're obviously very good at post-training. And we're incredibly good, as it turns out, at inference, because inference is really, really hard. How could thinking be easy? People think that inference is one-shot and therefore easy, and anybody could approach the market that way, but it turns out to be the hardest of all, because thinking, as it turns out, is quite hard. So we're great at every phase of AI. The third thing is that we're now the only architecture in the world that runs every AI model, every frontier AI model. We run open-source AI models incredibly well. We run science models, biology models, robotics models; we run every single model. We're the only architecture in the world that can claim that. It doesn't matter whether you're autoregressive or diffusion-based, we run everything, and we run it for every major platform, as I just mentioned. So we run every model.

And then the fourth thing I would say is that we're in every cloud. The reason developers love us is that we're literally everywhere. We're in every cloud; we could even make you a little tiny cloud called DGX Spark. We're in every computer, everywhere from cloud to on-prem to robotic systems, edge devices, PCs, you name it. One architecture, and things just work. It's incredible. And then the last thing, and this is probably the most important, the fifth thing, is that if you are a cloud service provider, if you're a new company like Humain, or a new company like CoreWeave, Nscale, Nebius, or OCI for that matter, the reason Nvidia is the best platform for you is that our offtake is so diverse. We can help you with offtake. It's not about just putting a random ASIC into a data center. Where's the offtake coming from? Where's the diversity coming from? Where's the resilience coming from? Where does the versatility of the architecture come from, the diversity of capability? Nvidia has such incredibly good offtake because our ecosystem is so large. So these five things: every phase of the acceleration transition, every phase of AI, every model, every cloud down to on-prem, and of course, finally, it all leads to offtake.

Operator

Thank you. I will now turn the call to Toshiya Hari for closing remarks.

Toshiya Hari VP Investor Relations and Strategic Finance

In closing, please note that we will be at the UBS Global Technology and AI Conference on December 2nd, and our earnings call to discuss the results of our fourth quarter of fiscal 2026 is scheduled for February 25th. Thank you for joining us today. Operator, please go ahead and close the call.

Operator

Thank you. This concludes today's conference call. You may now disconnect.

This transcript is to be used for informational purposes only. Though Benzinga believes the content to be substantially and directionally correct, Benzinga cannot and does not guarantee 100% accuracy of the content herein. Audio quality, accents, and technical issues could impact the exactness, and we advise you to refer to the source audio files before making any decisions based upon the above.

