
Altman talks about OpenAI's latest direction: enterprise API revenue has surpassed the consumer side, a new model will be released in the first quarter of next year, and compute determines the revenue ceiling

Altman stated bluntly that OpenAI is still "compute-constrained": compute is not a cost burden but a hard constraint that determines whether demand can be converted into revenue. "If compute doubles, revenue will almost double as well." In his view, the real competitive focus has shifted from the battle over model parameters to who can first build out a sufficiently broad base of compute and platform capability.
As the AI competition fully enters the stage of "close combat," the market's focus is shifting: who has the stronger model is no longer the only question; who can reliably convert model capabilities into revenue and cash flow is the new dividing line.
In a rare one-on-one interview on the latest episode of the Big Technology Podcast, OpenAI CEO Sam Altman systematically addressed the questions outsiders most want answered, from three angles: business, product, and infrastructure. Several of his statements sent a clear signal: OpenAI is at a critical turning point from "phenomenal product company" to "enterprise-grade AI platform."

Why has ChatGPT changed so little in three years? The answer is "universality"
Altman admitted that he originally thought the ChatGPT chat interface wouldn't last this long, but reality has proven that the universal, low-threshold mode of interaction was severely underestimated.
However, he also clearly pointed out that the ultimate form of ChatGPT will not just be a "dialogue box": in the future, AI will work proactively rather than respond passively, generate different interfaces based on different tasks, and run continuously in the background, only interrupting users at critical moments, evolving from a "tool" to an "intelligent agent."
This is also the underlying logic behind OpenAI's simultaneous advancement of browsers, devices, and agents—the goal is not to create a smarter chatbot but to become the "default intelligent layer."
Altman reiterated that "memory" is one of AI's most valuable long-term capabilities, and that AI memory today is still in its "GPT-2 era." Future AI will be able to remember every word you have said and every decision you have made, and not just facts but preferences, emotions, and habits: something no human assistant could ever match.
Enterprise API revenue surpasses consumer side, growth engine is switching
On the business front, Altman clearly stated that OpenAI is not "transforming" from a consumer company to enter the enterprise market, but rather is going with the flow.
So far, OpenAI has over 1 million enterprise users, and the growth rate of its API business has already surpassed that of ChatGPT itself. This year, the contribution of APIs to the company's overall growth is even higher than that of consumer products.
In his view, what enterprises truly need is not scattered AI functions, but a complete, unified, and scalable AI platform.
He proposed that in the future, both "traditional cloud" and "AI cloud" will coexist in enterprise IT architecture. OpenAI is not trying to replicate AWS but is building an intelligent infrastructure layer capable of supporting trillions of tokens.
When will GPT-6 be launched: new models are still in progress, but naming is no longer important
Regarding the model roadmap, Altman did not provide a clear timeline for "GPT-6," but confirmed that OpenAI will launch a new model in the first quarter of next year that represents a significant capability leap compared to GPT-5.2. The model upgrades are still ongoing, but the naming itself is no longer the focus.
On the hardware front, OpenAI is preparing to launch a series of small AI devices rather than a single blockbuster product. Altman believes that the form of computing devices will fundamentally change in the future—from passive tools that respond to commands to intelligent systems that can actively understand users' lives, contexts, and collaborative relationships.
In this vision, current computing devices centered around screens and applications are no longer suitable for an "AI-first" world. The new generation of hardware will become a key entry point for carrying long-term memory, continuous perception, and proactive decision-making capabilities.
Why Bet Big on Computing Power: The Revenue Bottleneck Lies in Infrastructure, Not Demand
More than whether we have reached AGI, Altman is concerned with a question the market overlooks: is existing AI capability actually being used effectively? His judgment is clear: no.
On the enterprise side, many companies are still stuck in shallow applications of "having AI write copy, modify code, and summarize." At the organizational level, AI is often seen more as an auxiliary tool rather than a true "member" participating in decision-making, execution, and collaboration. This is not because the models are not strong enough, but because companies have not yet completed the preparations to restructure processes, roles, and responsibility boundaries around AI.
Therefore, even if model capabilities do not significantly improve for a period, the existing capabilities themselves are sufficient to release massive economic value, which has yet to be systematically activated.
In Altman's view, this judgment of a "capability overhang" directly changes the nature of compute investment. At this stage, compute is not just an ever-expanding cost item; with model capability already in place, it is the key constraint on whether latent demand can be converted into actual revenue. He emphasizes that compute investment is essentially a preemptive build-out for future usage. Over the past year, OpenAI's compute capacity has roughly tripled, with revenue growth keeping pace, and the company has experienced neither idle compute nor difficulty monetizing it. In other words, with double the compute, revenue would almost double as well.
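To make that logic concrete, here is a minimal sketch in Python (our illustration with invented numbers, not OpenAI's internal figures): revenue is capped by whichever is smaller, capacity or demand, so as long as demand exceeds capacity, every added unit of compute converts directly into revenue.

```python
# Minimal sketch of a compute-constrained revenue model.
# All numbers are invented for illustration; none are OpenAI figures.

def realized_revenue(compute_units: float,
                     demand_units: float,
                     revenue_per_unit: float) -> float:
    """Revenue is capped by the smaller of capacity and demand."""
    return min(compute_units, demand_units) * revenue_per_unit

LATENT_DEMAND = 10.0    # assumed to sit well above current capacity
REVENUE_PER_UNIT = 1.0  # assumed monetization rate per unit of compute served

for compute in (1.0, 2.0, 3.0):
    revenue = realized_revenue(compute, LATENT_DEMAND, REVENUE_PER_UNIT)
    print(f"compute {compute:.0f}x -> revenue {revenue:.0f}x")
# While demand exceeds capacity, revenue scales linearly with compute:
# doubling compute doubles revenue, the regime Altman describes.
```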
In his view, the real risk does not lie in excess computing power, but in whether the infrastructure is ready when society and enterprises finally complete their structural adaptation to AI. That will be the true decisive moment in the next phase of the AI competition. At that time, what will limit growth will no longer be model capabilities, but who has already laid out sufficient computing power and platforms in advance.
Shift in Competitive Focus: From Model Parameters to Platform Width
Facing the rapid catch-up of models like Gemini and DeepSeek, Altman does not shy away from competitive pressure. He admits that OpenAI feels it and periodically enters a "red alert" state internally, but he does not believe its lead is slipping away.
In his view, the gap in model capabilities will ultimately narrow; what will truly create distance is the ability to productize, distribution efficiency, and the ability to build long-term relationships with users. **The "distribution vs. product" debate is itself a false dichotomy: ChatGPT itself is distribution, and distribution must be built on a product that keeps evolving.**
ChatGPT has become the world's largest AI entry point thanks to its extremely low barrier to entry and its versatility. Its weekly active user base is now approaching 900 million, and this scale effect is reinforcing OpenAI's competitiveness in the enterprise market.
The podcast transcript is as follows, translated with AI assistance:
Sam Altman:
You know, the $1.4 trillion that gets mentioned, we will invest gradually over a long period. I really wish it could be faster. I wish I could lay out clearly, all at once, how these numbers will work.
Alex Kantrowitz:
Exponential growth is often hard to grasp intuitively. In this episode, OpenAI CEO Sam Altman joins us to discuss OpenAI's winning strategy as the AI competition intensifies, whether its infrastructure investments make sense, and when an OpenAI IPO might come.
Sam is here in our studio today. Sam, welcome to the show.
Sam Altman:
Thank you for having me.
Alex Kantrowitz:
OpenAI has been around for 10 years, which feels a bit unbelievable. ChatGPT is only three years old, but the competition is intensifying. The OpenAI headquarters we are sitting in has been in a "red alert" state before, and after the release of Gemini 3 it is in one again. Looking around, many companies are trying to chip away at OpenAI's advantages. This is the first time I can remember feeling that this company no longer has a clear lead. So I'm curious to hear your thoughts on how OpenAI will navigate this moment, and...
Sam Altman:
First of all, regarding the "red alert," we believe these are relatively low-risk situations that happen from time to time. It's good to stay vigilant and take swift action when potential competitive threats arise. We have encountered this in the past, and it happened earlier this year when DeepSeek emerged, which also triggered a "red alert." That's right. There's a saying about the COVID-19 pandemic: every action you take at the start of the pandemic is worth far more than actions taken later, and most people acted too little early on and panicked later. We certainly saw this during the pandemic. But I think we apply this mindset to address competitive threats. I feel it's good to maintain a bit of vigilance. So far, Gemini 3 has not produced the kind of impact we were worried about, but it has indeed exposed some weaknesses in our product strategy, which we are quickly addressing. I believe this "red alert" status will not last long. Historically, such situations for us usually last about six to eight weeks.
But I'm glad we took action. Just today, we launched a new image model, which is good news and something consumers really want. Last week we launched 5.2, and the response has been excellent, with very rapid growth. We will also release some other things, and there will be ongoing improvements, such as making the service faster. But I guess that for a long time to come, we might do this kind of thing once or twice a year; it is part of making sure we win in our field. Many other companies will also do well, and I am happy for them. But you know, ChatGPT is still the chatbot with an absolutely dominant position in the market, and I expect this lead to grow over time, not shrink. Models will become excellent everywhere, but there are many reasons people use a product (whether consumers or enterprises) that go far beyond the model itself. We anticipated this early on, so we worked hard to build a whole set of closely coordinated pieces to ensure we become the product people most want to use. I believe competition is a good thing; it drives us to get better. But I think we will do well in the chatbot field, and I believe we will also perform excellently in the enterprise market and in future new categories. People do want a unified AI platform. People use smartphones in their personal lives and mostly want similar smartphones at work; we see the same trend in AI. ChatGPT's advantages on the consumer side have genuinely helped us win in the enterprise market. Of course, enterprises need different products, but they think: I know this company OpenAI, and I'm familiar with the ChatGPT interface. So our strategy is: build the best model, create the best product around it, and have enough infrastructure to serve at scale.
Alex Kantrowitz:
Yes, there is an incumbency advantage. Earlier this year, ChatGPT's weekly active users were about 400 million; now reports put the figure at 800 million and approaching 900 million. But on the other hand, companies like Google have distribution advantages. So I want to hear your thoughts: do you think models will commoditize? If so, what matters most? Distribution channels? Application-building capability? Or other factors I haven't thought of?
Sam Altman:
I don't think "commoditization" is the right framework for thinking about models. Models will have strengths in different areas. For general chat use cases, there may be many excellent options. But for scientific discovery, you would want something at the frontier, possibly optimized specifically for science. So models will have different advantages, and I believe the most economically valuable ones will be those at the frontier, and we plan to stay ahead there. We are proud that 5.2 is the best reasoning model in the world and the model that has helped scientists make the most progress, and enterprises also report that it performs best at the various tasks their businesses require. So we will lead in some areas and lag in others, but I expect that even in a world where free models meet many people's needs, the smartest models overall will still hold tremendous value. The product itself is indeed crucial. As you said, distribution and branding also matter a lot. Take ChatGPT as an example; the personalization features are highly sticky. Users like the model to gradually understand them over time, and you will see us strengthen this aspect significantly. The interaction users have with these models creates a strong sense of connection. I remember someone once told me that people basically stick with one brand of toothpaste their entire lives, and most people seem to do that. After users have a magical experience with ChatGPT, they talk about it. Healthcare is a famous example: someone puts blood-test results or symptoms into ChatGPT, discovers they have a certain condition, then sees a doctor and resolves a previously undiagnosed issue. User stickiness is very high, not to mention the personalization features. There will also be a variety of product functionality. We just launched the browser, and I think it opens up a potential new model for us. Devices are still some distance away, but I am very excited about them. So I think all these components will come together.
As for the enterprise market, the elements that form a "moat" or competitive advantage will differ, but similar to the importance of personalization on the consumer side, there will be an analogous concept on the enterprise side: companies establishing relationships with firms like ours, integrating their data, and being able to run multiple agents from different vendors on that platform with information handled appropriately. I expect this will also be quite sticky. Many people primarily see us as a consumer company, but we will definitely push into the enterprise market. In fact, we already have over 1 million enterprise users, and our API business is growing very rapidly. This year, the API business has contributed even more to our growth than ChatGPT. So the enterprise business really did start to take off this year.
Alex Kantrowitz:
Can I return to the previous topic? If "commoditization" is not the right word, perhaps for everyday users the models will reach some sort of "parity"? You mentioned earlier that everyday use might feel similar while the frontier feels very different. So, in terms of ChatGPT's growth potential: if ChatGPT and Gemini feel similar to everyday users, and Google has numerous channels to push Gemini while ChatGPT has to work for every new user, how big is that threat?
Sam Altman:
I think Google is still a huge threat, an extremely powerful company. If Google had taken us seriously in 2023, we might have been in a tough spot; I think they could have easily crushed us. But at that time their AI product direction was somewhat off, and they went through a "red alert" of their own but did not take it seriously enough. Now everyone is on "red alert." Moreover, Google may have the best business model in the entire tech industry, and I think they will slowly abandon it.
However, stuffing AI into web search, I am not sure if that will be as effective as reimagining everything. This is actually a broader trend that I believe in. "Stuffing" AI into existing ways of doing things is not as effective as redesigning in an "AI-first" world. This is also one of the reasons we initially wanted to create consumer devices, but it applies to other levels as well. If you stuff AI into a messaging app to summarize your messages well and draft replies, it will indeed be a bit better, but I don't think that's the final form. That's not what I mean by truly smart AI, like your intelligent agent, which talks to other people's agents, judges when to disturb you, when not to disturb, and which decisions it can handle and when it needs to ask you. The same goes for areas like search and productivity suites. I suspect this will take longer than people imagine, but I expect we will see entirely new products built around AI in major categories, rather than just cramming AI in. I think this is a weakness for Google, despite their huge distribution advantage.
Alex Kantrowitz:
I've discussed this with many people. When ChatGPT first launched, I think it was Benedict Evans who suggested that you might not want to put AI into Excel; you might want to rethink how to use Excel. To me, that means you upload numbers and then converse with your numbers. As people develop this kind of thing, they find they need some sort of backend. So, do you build the backend and then interact with AI as if it's a new software program?
Sam Altman:
Yes, that's basically how it evolves.
Alex Kantrowitz:
So why can't you just "cram" it on top?
Sam Altman:
You can cram it in, but the issue is the interface. I spend a lot of time every day on various messaging apps, including email, text messages, Slack, and so on. I think that's the wrong interface. You can cram AI into these apps, and it will be a bit better, but I would prefer to say in the morning: This is what I want to accomplish today, this is what I'm worried about, this is what I'm thinking about, this is what I want to happen. I don't want to spend all day messaging people; I don't need you to summarize them, and I don't want to see a bunch of drafts. Handle everything you can handle. You understand me, understand these people, understand what I want to accomplish. Then every few hours, if there's something I need to do, just give me bulk updates. But this is completely different from how these current applications work.
Alex Kantrowitz:
I was just about to ask what you think ChatGPT will look like in the next year or two. Is it heading in that direction?
Sam Altman:
To be honest, I originally thought ChatGPT would have changed a lot more by now than it has since its launch.
Alex Kantrowitz:
What did you envision back then?
Sam Altman:
I don't know. I just didn't think the chat interface would carry us as far as it has. I mean, it looks much better now, but overall it's quite similar to when it first launched as a research preview. It wasn't even intended to be a product. We know text interfaces are very usable, and everyone is used to texting friends. The chat interface is great, but I thought that to become a large product used for real work, as it is now, the interface would have to evolve further than it has. I still think it should, but I underestimated the power of the current general-purpose interface. Of course, I believe the future should be AI that can generate different interfaces for different types of tasks. So if you're discussing data, it should be able to present it in various ways, and you should be able to interact with it in different ways. We've already seen a glimpse of this in features like Canvas. It should be more interactive. Right now it's basically a back-and-forth conversation; if it could keep the discussion anchored to an object, it could keep updating as you have more questions and ideas and as new information comes in. Over time it should be more proactive, perhaps understanding what you want to accomplish that day and continuously working in the background, sending updates. You can see part of the future in how people use Codex; I think that's one of the most exciting things this year. Codex has become excellent, and it foreshadows a lot of what I hope the future will look like. But it surprises me. I wanted to say it's a bit awkward, but it's not; it's clearly very successful. It's just that ChatGPT has changed so little over the past three years, which I find unexpected.
Alex Kantrowitz:
Yes, this interface works well. But I think its "inner workings" have changed. You touched on the importance of personalization; for me, the memory feature really brings a difference. I've been discussing an upcoming trip with ChatGPT for weeks, which involves a lot of planning, and I can directly say in a new window, "Okay, let's continue talking about this trip." It has context, knows which tour guide I'm referring to, knows what I'm doing, knows I've been working on a fitness plan for it, and can really integrate all this information. How good can the memory feature become?
Sam Altman:
I don't think we can imagine it, because our reference points are human limits. Even if you have the best personal assistant in the world, they can't remember every word you've said, can't read all your emails and documents, can't check all your work every day and remember every detail, and can't participate in your life to that extent. No one has infinite, perfect memory. AI can. We often discuss this; the memory feature is still very primitive and early. We are in the "GPT-2 era" of memory. But when it can truly remember every detail of your life and personalize based on that, not just facts but also the little preferences you haven't even thought to specify but AI can pick up, I think that will be very powerful. It is still one of the features I'm most excited about; maybe not achievable by 2026, but absolutely the part of the future I'm looking forward to most.
Alex Kantrowitz:
I spoke with a neuroscientist on the show who mentioned that you can't find thoughts in the brain; the brain doesn't have a place where thoughts are stored. But in computing there are places to store them, so you can save all the thoughts. As these bots save our thoughts, there are certainly privacy issues. Another interesting thing is that we will really build relationships with them. I think this is one of the most underrated things of the entire era: people feel these bots are their companions and care about them. I'm curious to hear your thoughts on this. When you think about the intimacy or companionship between people and these bots, is there a "knob" that can be adjusted? For example, can we let people become very close to them, or tune it slightly to maintain a certain distance? If this "knob" exists, how would you set it appropriately?
Sam Altman:
Indeed, there are more people than I expected who want to establish an intimate companionship with AI. I don't know what term accurately describes it—relationship doesn't quite fit, and companionship doesn't either—but they desire a deep connection with AI. At the current level of model capabilities, there are more people wanting this than I thought. Earlier this year, expressing a desire for this was considered quite strange, and perhaps many people still do not openly admit it. But actual behavior shows that people like their AI chatbots to understand them, be enthusiastic about them, and support them; this is valuable, even for those who claim not to care, as they actually have this preference. I believe some form of this relationship can be very healthy, and I think adult users should have significant choice in determining their position on this spectrum. Of course, there are also versions that I consider unhealthy, although I am sure many people would choose that. Some people absolutely want the most boring, efficient tools. So, I suspect that, like many other technologies, we will experiment and discover unknown pros and cons. Society will figure out over time how to view where people set this "knob," and then people will have great choice and set it very differently.
Alex Kantrowitz:
So your idea is basically to let people decide for themselves.
Sam Altman:
Yes, of course. But I think we don't know how far it should go, or how far we should allow it to go. We will give people considerable personal freedom. There are some examples, like other services might offer, but we will not. For instance, we will not let AI try to persuade people that they should have an exclusive romantic relationship with it. It must remain open. But I am sure other services will emerge that do this.
Alex Kantrowitz:
Yes, because the stickier it is, the more money the service makes. All these possibilities, when you think about them a little deeper, are somewhat frightening.
Sam Altman:
I completely agree. Personally, I feel this could indeed go very wrong.
Alex Kantrowitz:
You mentioned the enterprise market. Let's talk about that. Last week you had lunch in New York with some editors and CEOs from news companies, telling them that enterprise business will be a major priority for OpenAI next year. I'd love to hear more about why it's a priority and what advantages you think you have over Anthropic. I know some people will say this is a transformation for OpenAI, which has always been consumer-centric. Please give us an overview of the plans for the enterprise market.
Sam Altman:
Our strategy has always been consumer-first. There are several reasons for this. First, the models at that time were not powerful and skilled enough for most enterprise-level uses, and now they are gradually meeting the requirements. Second, we had a clear and rare winning opportunity in the consumer market at that time, and such opportunities are rare. I believe that if you win in the consumer market, it will greatly help you win in the enterprise market, and we are seeing that now. But as I mentioned earlier, this year our enterprise business growth has outpaced consumer business growth. Given the current level of the models and the heights they will reach next year, we believe it is time to rapidly build a very important enterprise business. I mean, we already have a certain foundation, but it can grow faster. The company seems ready for this, and the technology seems ready as well. Coding is currently the biggest example, but other verticals are also growing rapidly. We are starting to hear enterprise clients say, I really just want an AI platform.
Alex Kantrowitz:
Which vertical?
Sam Altman:
Finance. Science is what I am personally most excited about in all the progress right now. Customer support is doing well. However, we do have that assessment tool called GDP Val.
Alex Kantrowitz:
I was just about to ask you about that. Can I directly ask you about GDP Val? Okay. Because I messaged Box CEO Aaron Levy, saying I was about to meet Sam, what should I ask him? He said to ask a question about GDP Val.
So GDP Val is a metric that measures AI performance on knowledge-work tasks. I looked back at the information on the recently released GPT-5.2 model and checked the GDP Val chart. Of course, this is OpenAI's own assessment. But according to the chart, the GPT-5 "thinking" model released in the summer matched or beat knowledge workers on 38% of the test tasks. The GPT-5.2 "thinking" model matched or beat humans on 70.9% of knowledge-work tasks, and GPT-5.2 Pro reached 74.1%, crossing the threshold considered "expert-level": it appears able to handle about 60% of expert-level tasks at a level comparable to experts in the field. What does it mean that these models can handle such a large share of knowledge work?
Sam Altman:
You asked about verticals earlier, which is a good question, but the reason I hesitate is that this assessment covers more than 40 different verticals of work enterprises need done: creating slide decks, conducting legal analysis, writing small web applications, and so on. And the assessment asks: do experts prefer the model's output over another expert's? That covers a lot of what enterprises need to do. Of course, these are all clearly defined small tasks, excluding complex, open-ended, creative work like brainstorming new products, and it does not include much team collaboration. However, an "AI colleague" that can take on an hour's workload and deliver results you are as satisfied or more satisfied with 70% or 74% of the time, at a somewhat lower price, is still quite remarkable. If you went back three years to when ChatGPT was just released and said we could achieve this three years later, most people would have said it was absolutely impossible. So when we think about how companies will integrate this technology, it is no longer just about writing code; all of these knowledge-work tasks can be handed over to AI. It will take time to figure out how companies integrate this, but the impact should be quite significant.
Alex Kantrowitz:
I know you are not an economist, so I won't ask you about the macro impact on employment. But let me read a passage I saw in "Blood in the Machine" on Substack about how this affects work. A tech copywriter wrote: "The chatbot came, and as a result, my job turned into managing the bots instead of managing a team of customer service representatives." This seems to me like it would happen often. But this person went on to say, "Once the bots were trained to provide good enough support, I was laid off." Will this situation become more common? Is this the behavior of a bad company? Because if you have someone who can coordinate multiple bots, you might want to keep them. I'm not sure. What do you think?
Sam Altman:
I agree with you, it is clear that in the future everyone will manage many AIs doing different things. Ultimately, like any good manager, you hope your team gets better and better, but you just take on broader responsibilities and more duties. I am not a "job doomsayer." In the short term, I have some concerns; I think the transition may be difficult in some cases. But deep down, we care a lot about what others do, we seem so focused on relative status, always yearning for more, wanting to make an impact, serve others, express our creative spirit—whatever drives us to this point today, I don't think those things will disappear. I think the future of work—even whether the term "work" is appropriate—what we do every day in 2050 may look very different from today. But I don't think life will lose its meaning, or that the economy will completely collapse. I hope we can find more meaning, and I believe the economy will undergo significant changes, but I feel you cannot go against evolutionary biology.
Alex Kantrowitz:
I often think about how we can automate all functions of OpenAI, and even further, what it means to have an AI CEO managing OpenAI. This doesn't bother me; I find it exciting, and I wouldn't resist it. I don't want to be the stubborn person who insists that doing things manually is better. An AI CEO making a series of decisions, guiding all our resources to empower AI with more energy and capability...
Sam Altman:
There will definitely be some safety guardrails in place.
Alex Kantrowitz:
Clearly, you don't want an AI CEO that is not governed by humans. But envision a scenario, perhaps a crazy analogy: what if everyone in the world were actually a board member of an AI company, able to tell the AI CEO what to do, able to fire it if it doesn't perform well, and with oversight over its decisions, while the AI CEO executes the board's will? I think that seems quite reasonable for future humans.
Sam Altman:
Yes, well.
Alex Kantrowitz:
We'll turn to the infrastructure topic in a minute, but before we leave the part about models and capabilities, when is GPT-6 coming?
Sam Altman:
I'm not sure when we will name a model GPT-6. But I expect there will be a new model with significant improvements over 5.2 in the first quarter of next year.
Alex Kantrowitz:
What does "significant improvements" mean?
Sam Altman:
I don't have specific evaluation scores to give you yet. But it will include both enterprise-level and consumer-side improvements. What consumers mainly want right now is not higher IQ, while enterprises still need that. So we will improve the model in different ways for different uses. But our goal is to launch a model that everyone prefers.
Alex Kantrowitz:
Next, infrastructure. You have about $1.4 trillion in commitments for building infrastructure. I've heard a lot of your comments about it. You once said: if people knew what we could do with compute, they would want much more. You also said the gap between what we can provide today and 10 times or 100 times the compute is huge. Can you elaborate on that? What do you plan to do with so much extra compute?
Sam Altman:
I mentioned earlier that what excites me the most is using AI and massive computing power to discover new science. I believe scientific discovery is key to how the world becomes better for everyone. If we can apply massive computing power to scientific problems to discover new knowledge—there are already very early signs of this, although they are still very small things. But my historical experience in this field is that once the curve starts to take off and moves slightly away from the X-axis, we know how to make it better and better. But this requires tremendous computing power. So this is one area we are investing in: using a lot of AI to discover new science, cure diseases, and so on.
A recent example: we built the Sora Android app with Codex, and the team used a huge number of tokens in about a month. One benefit of working at OpenAI is that there are no limits on using Codex. They used a lot of tokens but accomplished work that would usually require many people and much more time; Codex basically did it for us. You can imagine this going further, with the whole company leveraging massive compute to build products. People have talked a lot about video models leading to real-time generated user interfaces, which will require a lot of compute. Hopefully businesses undergoing transformation will use a lot of compute. Doctors providing good personalized care, continuously monitoring each patient's vital signs: you can imagine that using a lot of compute. As for how much of the world's computation currently goes into generating AI output, it is hard to say precisely; any numbers are very rough.
I know this discussion is not rigorous enough, but I find this thought experiment somewhat useful, so please forgive my imprecision. Suppose today an AI company generates about 100 trillion tokens daily from cutting-edge models. Maybe more, but I think no one has reached the level of a quadrillion. Assuming the world has 8 billion people, the average number of tokens output per person per day is about 20,000—these numbers I feel are completely wrong, but you can start calculating. Comparing the number of tokens output by today's model providers with the total number of tokens output by all humanity, you could say we will see a company outputting more tokens daily than the total of all humanity, then 10 times, 100 times. In a sense, this is a very silly comparison, but in another sense, it gives a sense of magnitude: how much of the "intellectual computation" on Earth is done by the human brain and how much is done by AI.
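His numbers are deliberately loose, but the arithmetic behind the comparison is easy to reproduce. A back-of-the-envelope sketch using exactly the figures he cites, with every constant treated as a rough assumption:

```python
# Reproducing Altman's token thought experiment with his own rough figures.

AI_TOKENS_PER_DAY = 100e12        # ~100 trillion tokens/day from one frontier provider
PEOPLE = 8e9                      # world population
TOKENS_PER_PERSON_PER_DAY = 20e3  # ~20,000 tokens "output" per person per day

human_tokens_per_day = PEOPLE * TOKENS_PER_PERSON_PER_DAY  # 1.6e14, i.e. 160 trillion

print(f"all humanity: {human_tokens_per_day:.1e} tokens/day")
print(f"one provider: {AI_TOKENS_PER_DAY:.1e} tokens/day")
print(f"ratio:        {AI_TOKENS_PER_DAY / human_tokens_per_day:.2f}")
# ~0.62: on these numbers a single provider already outputs more than half of
# humanity's daily total, so the 1x, then 10x, then 100x crossovers are close.
```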
Alex Kantrowitz:
The relative growth rate there is interesting. I'm wondering: are you sure the demand to use this compute exists? For example, if we doubled the compute invested in science or medicine, would there necessarily be scientific breakthroughs, or would it clearly help doctors? How much of this is based on concrete evidence you see today, and how much is speculation about future possibilities?
Sam Altman:
Everything we see today indicates that this will happen. But that doesn't mean there won't be crazy things in the future. Someone might discover a completely new architecture that brings a thousandfold increase in efficiency, and then we might indeed overbuild for a while. But given how fast models improve at each new level, and how usage increases every time we lower costs, everything we see today indicates that demand will continue to grow and that people will use it for wonderful things as well as boring things. That seems to be the shape of the future. And it's not just about how many tokens can be processed daily. As coding models get better, they can think for a long time, but you don't want to wait that long. So there will be other dimensions beyond token counts. The demand for intelligence, and what we can do with it along a few key dimensions... If you had a very tricky medical problem, would you want to use 5.2 or 5.2 Pro, even if the latter requires many more tokens? I would choose the better model. I think you would too.
Alex Kantrowitz:
Let's dig a layer deeper. On scientific discovery, can you give a specific example? For instance, a scientist who has problem X today that could be solved with compute Y, but it is currently out of reach?
Sam Altman:
This morning on Twitter, some mathematicians were replying to each other. They said: I was very skeptical that large language models would get better, but 5.2 is the one that crossed the line for me. It helped complete a small proof, discovered some small things, but it really changed my workflow. Then others joined in: me too. Some said 5.1 had already reached that point. This model was released only about five days ago, and the mathematics research community seems to be saying: well, something important just happened. I see Greg Brockman highlighting various mathematical and scientific uses on his timeline. I think 5.2 has resonated in these communities. So it will be interesting to see what happens as we progress. One challenge of large-scale computing is that you have to plan far in advance. So the $1.4 trillion you mentioned, we will spend over a long period. I wish we could spend it faster; I believe that if we could go faster, the demand is there. But building these projects, and the energy, chips, systems, networks, and so on required to run data centers, takes an extremely long time. So this will play out over a while. But from last year to now, our compute has perhaps tripled. It may triple again next year, and I hope it continues after that. Revenue growth is even a bit faster than this, but roughly in sync with our growth in compute. We have never found ourselves unable to monetize all our compute well. I think if we had double the compute now, revenue would be double what it is now.
Alex Kantrowitz:
Well, since you mentioned the numbers, let's talk about them. Revenue is growing, computing expenses are growing, but the growth in computing expenses still exceeds the growth in revenue. Reports indicate that OpenAI expects to lose about $120 billion between now and 2028 or 2029 before it starts to become profitable. So how does the transition happen? Where is the turning point?
Sam Altman:
As revenue grows and inference takes a larger share of our compute, it will eventually cover the training costs. That's the plan: spend a lot on training, but earn more and more. If we didn't keep increasing training spend so significantly, we would reach profitability much earlier. But we are betting on investing very aggressively in training these large models. The whole world is curious about how our revenue will match our expenses. Someone asked: if this year's revenue target is $20 billion and the spending commitment is $1.4 trillion, how does that work?
Alex Kantrowitz:
Over a long period.
Sam Altman:
Yes. That's why I wanted to discuss this with you. I wish I could lay out clearly, all at once, how these numbers will work. That is very difficult; I'm sure I can't fully do it, and I've seen very few people who can. You know, you might have good intuition for many mathematical problems, but exponential growth is usually something people find hard to build a quick mental model of. For some reason, evolution made our brains good at many kinds of math, but simulating exponential growth doesn't seem to be one of them. So we believe we can maintain a very steep revenue growth curve for quite a long time, and everything we see currently continues to indicate that. But it depends on us having enough compute.
We are still severely compute-constrained, which has a huge impact on the revenue line. I think if we reached a point where we had a lot of idle compute and could not monetize it at a reasonable unit margin, then we should really ask: how does this all work? But we have worked through this from various angles. Of course, as all our efforts to reduce compute costs bear fruit, the floating-point operations per dollar will also improve. We see consumer growth and enterprise growth. There are still many new types of businesses that we have not launched but will be launching. However, compute is the lifeline to achieving all of this. So we will have checkpoints along the way, and if we deviate slightly in time or compute, we have some flexibility. But we have always been compute-short, and it has always limited what we can do. Unfortunately, I think this will be the norm, though I hope it improves over time. Because I believe there are so many great products and services we could offer; it will be a huge business.
Alex Kantrowitz:
So, essentially, training's share of total costs is decreasing while its absolute value keeps rising, and your expectation is that through the enterprise push, ChatGPT subscriptions, APIs, and other channels, OpenAI will be able to cover these costs with revenue.
Sam Altman:
That is the plan.
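A toy version of that plan, with made-up units (our sketch, not OpenAI's accounting): treat training as a fixed annual investment and inference as revenue-linked volume with a positive gross margin. Losses persist while revenue is small relative to training spend, and break-even arrives once revenue reaches training cost divided by the inference margin.

```python
# Toy model of "inference revenue eventually covers training costs".
# All figures are invented for illustration; none are OpenAI's.

def annual_profit(revenue: float,
                  training_cost: float,
                  inference_margin: float) -> float:
    """Gross profit from inference minus the fixed training investment."""
    return revenue * inference_margin - training_cost

TRAINING_COST = 40.0    # assumed fixed frontier-training spend per year
INFERENCE_MARGIN = 0.5  # assumed gross margin per unit of inference revenue

for revenue in (20, 60, 100, 200):
    profit = annual_profit(revenue, TRAINING_COST, INFERENCE_MARGIN)
    print(f"revenue {revenue:>3}: profit {profit:+.0f}")
# Break-even at revenue = TRAINING_COST / INFERENCE_MARGIN = 80; beyond that,
# growth turns losses into profit even as training spend rises in absolute terms.
```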
Alex Kantrowitz:
Now, the market has recently lost its mind a bit about this. I think what makes the market uneasy is that debt has entered the equation. The idea around debt is that you only borrow when there are predictable things. Companies borrow, build, and then gain predictable revenue. But this is a new category with unpredictability. How do you view the entry of debt into this field?
Sam Altman:
First of all, I think the market lost its mind earlier. Earlier this year, we would meet with a company and the next day its stock price would jump 15% or 20%. It was crazy and felt very unhealthy. In fact, I am glad there is now more skepticism and rationality in the market, because I felt we were heading toward a very unstable bubble, and now I think people have a certain discipline. So I actually think things are better: people were crazy before and are becoming more rational. Regarding debt, we know that if we build infrastructure, someone in the industry will always capture value from it. It is still very early, and I agree with your view. But I don't think anyone doubts that AI infrastructure will generate value. So I think it is reasonable for debt to enter this market. I expect other types of financial instruments to emerge as well, and I suspect that as people innovate financially here, some unreasonable instruments will appear. But lending money to companies to build data centers seems fine to me.
Alex Kantrowitz:
The market's concern is that if progress does not continue at the expected pace—say, if model progress stagnates—then the value of the infrastructure will be lower than expected. Yes, those data centers still have value to some, but they may be liquidated, and someone will buy them at a discount.
Sam Altman:
I do suspect there will be some booms and busts along the way; these things are never a perfectly smooth line. First of all, it is very clear to me, and I would happily bet the company on it: the models will get much better. We have good visibility here, and we are very confident about it. Even if the models did not improve, the world has a lot of inertia, and adapting to new things takes time. I believe in the economic value represented by 5.2, and relative to the value the world has already extracted from it, the "potential gap" is enormous. Even if you froze the model at the 5.2 level, the revenue it could create and drive is, I believe, enormous. In fact, allow me a slight digression. We used to discuss a 2x2 matrix: short timelines vs. long timelines, slow takeoff vs. fast takeoff, and how we thought the probability distribution would shift over different periods. You can see how many decisions and strategies in the world should be optimized according to your position in that matrix. But in my mind a Z-axis has emerged: small potential gap vs. large potential gap. I guess I hadn't thought that deeply about it before, but on reflection I must have assumed the potential gap wouldn't be that huge; if the model contains immense value, the world would quickly figure out how to deploy it. Now it seems to me the potential gap will be very large in most places in the world. There will be some areas, like certain programmers, where productivity greatly increases through these tools. But overall you have this extremely smart model, and frankly most people are still asking questions like those of the GPT-4 era. Scientists and programmers may be different, knowledge workers too, yet the potential gap is enormous. This will have a series of very strange consequences for the world, and we haven't fully figured out how it will all unfold, but it is very different from what I expected a few years ago.
Alex Kantrowitz:
I have a question about this "capability potential gap." Basically, the model can do far more than it currently does. I'm trying to understand how the model can be so much better than its actual applications? But many companies are not seeing a return on investment when trying to apply them, or at least that's what they tell MIT. I'm not quite sure how to think about this because we hear all companies say that if you raise the price of GPT-5.2 by ten times, we would still pay. Your pricing is too low, and we are getting immense value from this. This seems a bit off to me.
Sam Altman:
Of course, if you ask programmers, they would say, "I would pay 100 times the price." It might just be bureaucracy messing things up. Assuming you believe the GDP Val data—maybe you don't, and there are good reasons for that, maybe they are wrong—but assuming that's true, for these clearly defined, not very long-term knowledge work tasks, seven out of ten times you are equally or more satisfied with the output of 5.2. Then you should be using it extensively. However, it takes so long for people to change workflows. People are too accustomed to having junior analysts do slides and the like. This is stickier than I imagined. You know, my own workflow is still largely the same, even though I know I could be using AI more.
Alex Kantrowitz:
We have ten minutes left. We have four questions remaining. Let's try to go through them quickly. The devices you are developing.
What I've heard is: the size of a phone, without a screen. If it's just a phone without a screen, why can't it be an app?
Sam Altman:
First of all, we will launch a series of small devices; it will not be a single device. Over time, I believe the way people use computers will change from a passive, clunky thing to something very smart and proactive that understands your entire life, your context, everything happening around you, and knows the people you collaborate with through the computer very well. I don't think current devices are suited to that kind of world. I firmly believe we are working at the limits of our devices. You have that computer, with a range of design choices. It can be open or closed, but it can't do both at once... for example, focus on this interview but whisper a reminder in my ear when I forget to ask Sam a question. Maybe that would help. With a screen, you are limited to the way graphical user interfaces have worked for decades. With a keyboard, well, it was originally designed to slow down typing. These are long-unquestioned assumptions, but they work. Then this brand-new thing comes along and opens up a space of possibilities. But I don't think the current form of devices is the best carrier for this amazing new capability. If it were, that would be strange. My goodness, we could talk about this for an hour.
Alex Kantrowitz:
Let's move on to the next one. Cloud computing. You've talked about building cloud services. We received an email from a listener: "In my company, we are migrating from Azure, directly integrating with OpenAI to provide AI experiences for our products. The focus is on powering AI experiences through the stack by injecting trillions of tokens." Is this part of the plan? To build a massive cloud business in this way?
Sam Altman:
First of all, trillions of tokens is massive. You asked about the demand for computing power and our business strategy. Enterprise customers have clearly told us how many tokens they want to purchase from us. We still won't be able to meet the demand by 2026. But the strategy is that most companies seem to want to find a company like ours and say: I want my company name to be associated with AI. I need a custom API for my company, a custom ChatGPT enterprise version for my company, a platform I can trust my data with, the ability to run all these agents, the capability to inject trillions of tokens into my products, and the ability to make all internal processes more efficient. We currently don't have a great integrated product, and we want to build that.
Alex Kantrowitz:
Is your ambition to elevate it to the level of AWS and Azure?
Sam Altman:
I think it's different from those things. I don't have the ambition to provide all the services needed for hosting websites and everything else. My guess is that people will continue to have their web clouds, and there will be another thing, where a company says: I need an AI platform to handle everything I want to do internally, the services I want to provide, and so on. In a sense, it does run on physical hardware, but I think it will be a rather different product.
Alex Kantrowitz:
Let's quickly talk about scientific discoveries. You said something that really interests me: you believe that models, or people collaborating with models, will make small discoveries next year and major discoveries in five years. Is it the models themselves? Or is it people collaborating with them? What gives you confidence in this?
Sam Altman:
It's people using the models. The models can ask questions and solve them on their own, but that still feels like it will take longer. However, if the world benefits from new knowledge, we should be very excited. I think the entire process of human progress is that we build better tools, people do more with them, and then in the process, we build more tools, and we ascend layer by layer, generation by generation, discovery by discovery. Humans ask questions, and I think that does not diminish the value of the tools at all. I think that's great. I'm happy. Earlier this year, I thought small discoveries would start in 2026. It turns out they started at the end of 2025. To reiterate, these are all very small. I really don't want to exaggerate, but any discovery feels qualitatively different from "no discovery." Of course, when we released the model three years ago, that model could not contribute any new knowledge to humanity. From now until five years later, the journey to major discoveries, I suspect, will be like the normal climb of AI, getting a little better each quarter, and then suddenly, humans enhanced by these models will be able to do things that humans absolutely could not do five years ago. Whether we mainly attribute this to smarter humans or smarter models, as long as we can achieve scientific discoveries, I am very happy regardless.
Alex Kantrowitz:
Will there be an IPO next year? Do you want to become a public company? It seems you can remain private for a long time. Will you go public before needing financing?
Sam Altman:
There are many factors at play here. I do think it's cool for the public markets to participate in value creation. In a sense, if we go public, it will be very late compared to any previous company. Being a private company is great. We need a lot of capital. Eventually, we will also exceed the shareholder limit, etc. So, am I excited about being the CEO of a public company? 0%. Am I excited about OpenAI becoming a public company? In some ways, yes, and in some ways, I think it would be annoying.
Alex Kantrowitz:
I listened carefully to your interview with Theo Van; it was a great interview.
Sam Altman:
Theo really knows his stuff; he did his homework.
Alex Kantrowitz:
You told him before the release of GPT-5 that GPT-5 is smarter than us in almost every way. I think that's the definition of AGI. Isn't it AGI? If not, does that term become somewhat meaningless?
Sam Altman:
These models are clearly extremely intelligent in their raw capabilities. There has been a lot of discussion in the past few days about GPT-5.2's IQ being 147, 144, or 151; depending on the test, that's a high number. Many domain experts say it can do amazing things and helps make experts more efficient. We talked about GDP Val. One thing the model still cannot do is realize, when it can't do something today, that it should go figure out how to learn it, so that when you come back the next day it gets it right. This capacity for continual learning is something even toddlers have. In my view, it seems to be an important component we still need to build. So without that ability, can it count as AGI in the sense most people mean? I would say clearly not. I mean, many people would say our current models are AGI. I can almost guarantee that if we had the current level of intelligence plus that ability, it would clearly be very close to AGI. But perhaps most people would say: well, even without that, it can accomplish most important knowledge tasks and is smarter than most of us in many ways, so it is AGI. It discovers new scientific knowledge, so it is AGI. I think this means that while we all find it hard to stop using the term, its definition is very vague. One thing I hope we can get right this time, having never clearly defined AGI, is the term everyone is now focusing on: superintelligence. So my proposal is that we agree the concept of AGI has vaguely passed: it hasn't changed the world much yet, or it will in the long run, but okay, at some point we built AGI, and we are in a vague period where some think we have it, some think we don't, and more and more people will think we do. Then we ask: what's next? One candidate definition of superintelligence: a system that can do the jobs of the President of the United States, the CEOs of large companies, and the heads of large scientific laboratories better than anyone else, even a human aided by AI.
I think the chess example is interesting. I remember very clearly when Deep Blue beat a human. Then for a while, a human and an AI together were better than an AI alone, and later humans fell behind: the strongest player became the AI on its own, which didn't need human help. I think something similar can serve as an interesting framework for thinking about superintelligence. There is still a long way to go, but I hope this time we can have a clearer definition.
Alex Kantrowitz:
Sam, look, I've been using your products for three years, using them every day. They have indeed gotten much better. It's hard to imagine how they will be in the future.
Sam Altman:
We will work hard to make them faster and better.
Alex Kantrowitz:
Okay. This is our second conversation, and thank you for being so candid both times. Thank you for your time. Thanks to everyone for listening and watching. If this is your first time here, please click follow or subscribe. Our show has many great interviews, and there will be more to come. Over the past year, we have interviewed Google DeepMind CEO Demis Hassabis twice, once together with Google co-founder Sergey Brin. We also interviewed Anthropic CEO Dario Amodei. We have many more heavyweight interviews coming in 2026. Thank you again, and see you next time on the Big Technology Podcast.
