**主持人:** 我们的下一位嘉宾无需介绍,所以我就不费心介绍他了——Sam Altman。我只想说,Sam 现在是三连参加了,他参加了我们举办的三次 AI 活动,每一次都来分享他的想法,我们真的非常感激。所以,我只想说感谢你来到这里。这是我们的第一间办公室。
**主持人:** Our next guest needs no introduction, so I'm not going to bother introducing him, Sam Altman. I will just say Sam is now three for three and joining us to share his thoughts at the three AI events that we've had, which we really appreciate. So, I just want to say thank you for being here. This was our first office.
**Sam Altman:** 没错。
**Sam Altman:** That's right.
**主持人:** 哦,对了。再说一遍。
**主持人:** Oh, that's right. Say that again.
**Sam Altman:** 是的,这是我们的第一间办公室。所以,回来感觉很好。
**Sam Altman:** Yeah, this was our first office. So, it's nice to be back.
**主持人:** 让我们回到这间第一个办公室。你是2016年开始的。
**主持人:** Let's go back to the first office here. You started in 2016.
**Sam Altman:** 2016年。
**Sam Altman:** 2016.
**主持人:** 刚才 Jensen 在这里,他说他就是在这里交付了第一台 DGX-1 系统。
**主持人:** We just had Jensen here, who said that he delivered the first DGX-1 system over here.
**Sam Altman:** 他确实是。是的。现在回头看那个东西有多小,真是令人惊叹。
**Sam Altman:** He did. Yeah. It's amazing how small that thing looks now.
**主持人:** 哦。跟什么比呢?
**主持人:** Oh. Versus what?
**Sam Altman:** 嗯,现在的机箱还是很大的。但是,确实是一次有趣的回忆。
**Sam Altman:** Well, the current boxes are still huge. But yeah, it was a fun throwback.
**主持人:** 它有多重?
**主持人:** How heavy was it?
**Sam Altman:** 那时候你还能自己一个人搬起来。他说大概70磅。是的。我的意思是,它很重,但你可以搬动它。
**Sam Altman:** That was still when you could kind of like lift one yourself. He said it was about 70 lbs. Yeah. I mean, it was heavy, but you could carry it.
**主持人:** 所以,嗯,你在2016年有想象过你今天会在这里吗?
**主持人:** So, um, did you imagine that you'd be here today in 2016?
**Sam Altman:** 呃,没有。就像,呃,我们坐在那边,大概有14个人左右,你们在研究这个新系统。我的意思是,即使那样——我们坐在那里看着白板,试图讨论我们应该做什么。这是一个——几乎不可能夸大我们当时有多像一个研究实验室——一个没有——一个有着非常坚定的信念、方向和信心,但没有真正的行动计划的研究实验室。我的意思是,不仅公司或产品的想法是不可想象的,具体来说 LLM 作为一个概念还非常遥远。
**Sam Altman:** Uh, no. We were sitting over there — there were maybe 14 of us — and you were hacking on this new system. I mean, even then, we were sitting around looking at whiteboards, trying to talk about what we should do. It's almost impossible to overstate how much we were a research lab — one with a very strong belief, direction, and conviction, but no real action plan. I mean, not only was the idea of a company or a product unimaginable — LLMs specifically, as an idea, were still very far off.
**主持人:** 当时在尝试玩电子游戏。
**主持人:** And so, trying to play video games.
**Sam Altman:** 尝试玩电子游戏。
**Sam Altman:** Trying to play video games.
**主持人:** 你们现在还在尝试玩电子游戏吗?
**主持人:** Are you still trying to play video games?
**Sam Altman:** 现在,我们在那方面已经相当厉害了。
**Sam Altman:** Now, we're pretty good at that.
**主持人:** 嗯,好的。所以,嗯,你又花了六年时间才推出第一个消费者产品,也就是 ChatGPT。在这个过程中,你是怎么思考里程碑的,才能把东西做到那个水平——这算是历史的偶然吗?
**主持人:** Um, all right. So, um, it took another six years for the first consumer product to come out, which was ChatGPT. Along the way, how did you think about the milestones to get something to that level — or was it an accident of history?
**Sam Altman:** 呃,第一个消费者产品不是 ChatGPT。
**Sam Altman:** The first consumer product was not ChatGPT.
**主持人:** 对,没错。
**主持人:** That's right.
**Sam Altman:** 是 DALL·E。嗯,第一个产品是 API。所以我们构建了——你知道,我们经历了几个不同的阶段。我们有几个方向是真正想要押注的。最终,正如我提到的,我们说:"好吧,我们必须构建一个系统来看看它是否在工作。我们不只是在写研究论文。所以,我们要看看能不能玩一个电子游戏。好吧,我们要看看能不能做一个机器人手。我们要看看能不能做一些其他的事情。" 在这个过程中的某个时候,最初是一个人,然后慢慢发展成一个团队,对尝试做无监督学习和构建语言模型产生了热情。这就产生了 GPT-1,然后是 GPT-2,到了 GPT-3 的时候,我们觉得我们有了一个挺酷的东西,但我们想不出该拿它做什么。嗯,而且我们也意识到我们需要更多的钱来继续 scaling。你知道,我们做了 GPT-3,我们想做 GPT-4。我们正在迈入十亿美元模型的世界。要把这些作为纯科学实验来做是很难的,除非你是粒子加速器之类的。嗯,即使那样也很难。
所以,我们开始思考,好吧,我们既需要弄清楚这怎么能成为一个能够维持所需投资的业务,同时我们也感觉到这正在朝着真正有用的方向发展。我们之前把 GPT-2 作为模型权重发布了,但并没有太多事情发生。嗯,我观察到的关于公司和产品的一件事是,如果你做一个 API,它通常在上行方面总能起到一些作用。这在很多很多 YC 公司中都是如此。还有,如果你让某样东西更容易使用,通常会有巨大的好处。所以我们就想,好吧,运行这些模型有点难,它们越来越大了,我们去写一些软件,把运行它们这件事做好。而且,与其构建一个产品——因为我们想不出该构建什么——我们会寄希望于别人找到该构建什么。所以,我不太记得确切时间了,但大概是2020年6月左右。嗯,我们通过 API 发布了 GPT-3,全世界并不太在意,但硅谷有点在意。他们说:"哦,这挺酷的。这指向了某些东西。" 有一个奇怪的现象,就是我们几乎没有得到大部分世界的关注。而一些创业公司创始人说:"哦,这真的很酷。" 或者,我的意思是,有些人说这就是 AGI。嗯,据我所知,用 GPT-3 API 建立了真正业务的唯一一些公司,是那些提供文案写作服务的公司。那基本上是 GPT-3 唯一超过经济门槛的事情。
嗯,但我们确实注意到了一件事,最终导向了 ChatGPT——尽管人们不能用 GPT-3 API 建立很多很好的业务,但人们喜欢在 Playground 里跟它聊天。而且它在聊天方面很差劲。我们当时还没有弄清楚如何做 RLHF 来使它易于对话,但人们还是喜欢这样做。从某种意义上说,这是除了文案写作之外 API 产品唯一的杀手级用途,这最终引导我们构建了 ChatGPT。到 ChatGPT 3.5 推出的时候,大概有八个类别而不是一个类别可以用 API 来构建业务了。嗯,但我们对"人们就是想跟模型对话"的信念已经变得非常强烈了。所以我们做了 DALL·E,DALL·E 做得还可以,但我们知道我们想要构建的——特别是配合我们能够做的 fine-tuning——我们知道我们想要构建这个产品,让你能跟模型对话,它在2022年推出了。
**Sam Altman:** It was DALL·E. Um, and the first product was the API. So we had built — you know, we went through a few different phases. We had a few directions that we really wanted to bet on. Eventually, as I mentioned, we said, "Well, we've got to build a system to see if it's working — we're not just writing research papers. So we're going to see if we can play a video game. We're going to see if we can do a robot hand. We're going to see if we can do a few other things." And at some point in there, one person initially, and then eventually a team, got excited about trying to do unsupervised learning and build language models. That led to GPT-1 and then GPT-2, and by the time of GPT-3, we thought we had something that was kind of cool, but we couldn't figure out what to do with it. Um, and we also realized we needed a lot more money to keep scaling. You know, we had done GPT-3, we wanted to go to GPT-4. We were heading into the world of billion-dollar models. It's hard to do those as a pure science experiment unless you're a particle accelerator or something. Um, even then it's hard. So we started thinking, okay, we need to figure out how this can become a business that can sustain the investment it requires — and we also had a sense that this was heading towards something actually useful. We had put GPT-2 out as model weights, and not that much had happened. Um, one of the things I had observed about companies and products in general is that if you do an API, it usually works out somehow on the upside. That was true across many, many YC companies. And also that if you make something much easier to use, there's usually a huge benefit. So we figured, well, it's kind of hard to run these models, they're getting big — we'll go write some software and do a really good job of running them.
And rather than build a product — because we couldn't figure out what to build — we would hope that somebody else finds something to build. So, I forget exactly when, but maybe it was June of 2020: we put out GPT-3 in the API, and the world didn't care, but Silicon Valley sort of did. They were like, "Oh, this is kind of cool. This is pointing at something." There was this weird thing where we got almost no attention from most of the world, while some startup founders were like, "Oh, this is really cool." Or, I mean, some of them were like, this is AGI. Um, the only people that built real businesses with the GPT-3 API that I can remember were a few companies that did copywriting as a service. That was kind of the only thing GPT-3 was over the economic threshold on. Um, but one thing we did notice, which eventually led to ChatGPT, is that even though people couldn't build a lot of great businesses with the GPT-3 API, people loved to talk to it in the Playground. And it was terrible at chat. We had not, at that point, figured out how to do RLHF to make it easy to chat with, but people loved doing it anyway. In some sense, that was the only killer use of the API product other than copywriting, and it led us to eventually build ChatGPT. By the time ChatGPT 3.5 came out, there were maybe eight categories instead of one where you could build a business with the API. Um, but our conviction that people just want to talk to the model had gotten really strong. So we had done DALL·E, and DALL·E was doing okay, but we knew — especially along with the fine-tuning we were able to do — we knew we wanted to build this product that lets you talk to the model, and it launched in 2022.
**主持人:** 我想是的,大约六年。第一次发布是什么时候——2022年11月30日?
**主持人:** I think, yeah, about six years. When was the first — November 30th, 2022?
**Sam Altman:** 2022年11月30日。
**Sam Altman:** November 30th, 2022.
**主持人:** 是的,所以有很多前期工作。2022年推出,到今天它每周有超过5亿人在使用。
**主持人:** Yeah, so there was a lot of work leading up to that. It launched in 2022, and today it has over 500 million people who talk to it on a weekly basis.
**Sam Altman:** 是的。
**Sam Altman:** Yeah.
**主持人:** 好的。好的。所以,嗯,顺便说一下,准备好观众提问,因为这是 Sam 的要求。嗯,你三次 Ascent 活动每次都来了,正如 Pat 提到的,有过很多起起伏伏,但最近六个月似乎就是在不停地发布、发布、发布。我们发布了很多深度思考的内容,看到产品速度、发布速度持续提升,真是令人惊叹。所以这是一个多层次的问题。你是怎么让一个大公司随着时间推移提高产品速度的?
**主持人:** All right. So, um, by the way, get ready for some audience questions, because that was Sam's request. Um, you've been here for every single one of the three Ascent events, and as Pat mentioned, there have been lots of ups and downs, but it seems like the last six months you've just been shipping, shipping, shipping. You've shipped a lot of thoughtful stuff, and it's amazing to see the product velocity, the shipping velocity, continue to increase. So this is a multi-part question: how have you gotten a large company to increase product velocity over time?
**Sam Altman:** 我认为很多公司犯的一个错误是,他们变大了但没有做更多的事情。所以他们只是因为应该变大就变大了,但仍然发布同样数量的产品。这就是那种"糖浆效应"真正开始发生的时候。我是一个坚定的信奉者——你希望每个人都很忙。你希望团队小。你希望相对于人员数量做很多事情,否则你就会在每个会议里有40个人,然后为了谁能负责产品的哪个微小部分而激烈争吵。嗯,有一个古老的商业观察,说一个好的高管是一个忙碌的高管,因为你不想让人们无所事事地瞎忙。嗯,但我认为这适用于——你知道,在我们公司和很多其他公司里,研究人员、工程师、产品人员——他们创造了几乎所有的价值,你希望这些人忙碌并且有高影响力。所以如果你要增长,你最好做更多的事情,否则你就会有一堆人坐在房间里争吵、开会或者谈论什么的。嗯,所以我们尽量让相对少的人承担大量的责任。嗯,而让这个机制运转的方式就是做很多事情。
而且我们也确实需要做很多事情——我认为我们现在真的有机会去建造一个重要的互联网平台。嗯,但要做到这一点——如果我们真的要成为人们的个性化 AI,人们在很多不同的服务中使用它,你知道,在他们的一生中,跨越所有这些不同的主要类别和所有更小的类别——我们需要弄清楚如何实现,那就是需要去构建很多东西。
**Sam Altman:** I think a mistake that a lot of companies make is they get big and they don't do more things. They just get bigger, because you're supposed to get bigger, but they still ship the same amount of product. And that's when the molasses really takes hold. I am a big believer that you want everyone to be busy. You want teams to be small. You want to do a lot of things relative to the number of people you have; otherwise, you just have 40 people in every meeting and huge fights over who gets what tiny part of the product. Um, there was this old observation about business that a good executive is a busy executive, because you don't want people muddling around. Um, but I think, you know, at our company and many other companies, researchers, engineers, product people — they drive almost all the value, and you want those people to be busy and high-impact. So if you're going to grow, you'd better do a lot more things; otherwise, you kind of just have a lot of people sitting in a room fighting, or meeting, or talking about whatever. Um, so we try to have relatively small numbers of people with huge amounts of responsibility. Um, and the way to make that work is to do a lot of things. And we also have to do a lot of things — I think we really do now have an opportunity to go build one of these important internet platforms. Um, but to do that — if we really are going to be people's personalized AI that they use across many different services, over their lives, and across all of these main categories and all the smaller ones that we need to figure out how to enable — then that's just a lot of stuff to go build.
**主持人:** 有什么是你特别自豪的,在过去六个月推出的?
**主持人:** Anything you're particularly proud of that you've launched in the last six months?
**Sam Altman:** 我的意思是,模型现在太好了。就像——它们当然还有需要改进的地方,我们正在快速推进,但我觉得现在 ChatGPT 是一个非常好的产品,因为模型非常好。我的意思是,还有其他东西也很重要,但——我惊叹于一个模型能把这么多事情做得这么好。
**Sam Altman:** I mean, the models are so good now. They still have areas to get better in, of course, and we're working on that fast, but I think at this point ChatGPT is a very good product because the model is very good. I mean, there's other stuff that matters too, but I'm amazed that one model can do so many things so well.
**主持人:** 你在构建小模型和大模型。你做了很多事情,正如你所说。那么,在座的观众怎样才能不挡你的路,不被碾压?
**主持人:** You're building small models and large models. You're doing a lot of things, as you said. So, how does this audience stay out of your way and not become roadkill?
**Sam Altman:** 嗯,我的意思是,我觉得理解我们的方式是——我们想要构建的是——我们想成为人们的核心 AI 订阅和使用方式。其中一部分是你在 ChatGPT 内部做的事情。嗯,我们会有几个其他的关键订阅组成部分。但主要是,我们希望构建这个越来越聪明的模型。我们会有这些界面,比如未来的设备、未来类似操作系统的东西,等等。嗯,然后你知道,我们还没有完全弄清楚 API 或 SDK 或者你想怎么称呼它——真正成为我们平台的东西是什么样的。但我们会的。可能需要我们尝试几次,但我们会的。嗯,我希望这能带来世界上难以置信的财富创造,让其他人在此基础上构建。但是,是的,我们会去做核心 AI 订阅、模型,然后是核心服务,还会有大量其他东西需要构建。
**Sam Altman:** Um, I mean, I think the way to model us is that we want to be people's core AI subscription and way to use that thing. Some of that will be what you do inside of ChatGPT. Um, we'll have a couple of other really key parts of that subscription. But mostly, we will hopefully build this smarter and smarter model. We'll have these surfaces — future devices, future things that are sort of similar to operating systems, whatever. Um, and then, you know, we have not yet figured out exactly what the API or SDK, or whatever you want to call it, is to really be our platform. But we will. It may take us a few tries, but we will. Um, and I hope that enables just an unbelievable amount of wealth creation in the world, and enables other people to build on top of it. But yeah, we're going to go for the core AI subscription and the model, and then the core services, and there will be a ton of other stuff to build.
**主持人:** 好的。所以,不要做核心 AI 订阅,但你可以做其他所有事情。
**主持人:** Okay. So, don't be the Core AI subscription, but you can do everything else.
**Sam Altman:** 我们会去尝试。我的意思是,如果你能做出比我们更好的核心 AI 订阅产品,尽管去做。那太好了。
**Sam Altman:** We're going to try. I mean, if you can make a better core AI subscription offering than us, go ahead. That'd be great.
**主持人:** 好的。嗯,据传你正在以3400亿美元的估值融资400亿美元左右。
**主持人:** Okay. Um, it's rumored that you're raising $40 billion or something like that, at a $340 billion valuation.
**Sam Altman:** 传闻是——我不知道——我觉得我们已经宣布了。
**Sam Altman:** It's rumored? It's — I don't know — I think we announced it.
**主持人:** 好的。好吧,我只是想确认你们确实宣布了。嗯,从这里出发,你的野心规模是什么?
**主持人:** Okay. Well, I just want to make sure that you announced it. Um, what's your scale of ambition from here?
**Sam Altman:** 从这里开始,我们要努力做出优秀的模型,发布好的产品,除此之外没有什么宏大计划。就像,我们会——我觉得——当然了。不,我的意思是——我看到观众席里有不少 OpenAI 的人。他们可以作证。我们不会坐在那里制定什么——我坚信你可以做好眼前的事情,但如果你试图从某个疯狂复杂的终点往回推,嗯,那通常不会那么有效。就像——我们知道我们需要大量的 AI 基础设施。我们知道我们需要建设大量的 AI 工厂规模。嗯,我们知道我们需要不断让模型变得更好。我们知道我们需要构建一个伟大的上层消费者产品以及所有相关的组成部分。但我们以灵活和随世界调整战术而自豪。所以,我们明年要构建的产品,我们现在可能都还没有想到。我们相信我们可以构建一系列人们真正热爱的产品。嗯,我们对此有着毫不动摇的信心,而且我们相信我们可以构建伟大的模型。我实际上从来没有像现在这样对我们的研究路线图感到乐观。
**Sam Altman:** From here, we're going to try to make great models and ship good products, and there's no master plan beyond that. Like — sure. No, I mean, I see plenty of OpenAI people in the audience; they can vouch for that. We don't sit there and have some — I am a big believer that you can do the things in front of you, but if you try to work backwards from some crazy complex endpoint, um, that doesn't usually work as well. We know that we need tons of AI infrastructure. We know we need to go build out massive amounts of AI factory volume. Um, we know that we need to keep making models better. We know that we need to build a great top-of-the-stack consumer product and all the pieces that go into that. But we pride ourselves on being nimble and adjusting tactics as the world adjusts. And so the products we're going to build next year — we're probably not even thinking about them right now. We believe we can build a set of products that people really, really love. Um, we have unwavering confidence in that, and we believe we can build great models. I've actually never felt more optimistic about our research roadmap than I do right now.
**主持人:** 嗯,研究路线图上有什么?
**主持人:** Um what's on the research road map?
**Sam Altman:** 真正聪明的模型。嗯,但就眼前的步骤而言,我们一次只迈一两步。
**Sam Altman:** Really smart models. Um, but in terms of the steps in front of us, we kind of take those one or two at a time.
**主持人:** 所以你相信向前推进,而不一定是从终点往回推。
**主持人:** So you believe in working forwards, not necessarily working backwards.
**Sam Altman:** 我听过一些人谈论他们那些绝妙的策略——他们要去哪里,他们要从终点往回推,你知道,这是征服世界,这是之前的那个步骤,这是那个,这是那个,这是那个,这是那个,这是我们今天的位置。我从来没有见过那些人真正取得巨大的成功。
**Sam Altman:** I have heard some people talk about these brilliant strategies — here's where they're going to go, and they're going to work backwards: you know, this is taking over the world, and this is the thing before that, and this is that, and that, and that, and here's where we are today. I have never seen those people really massively succeed.
**主持人:** 明白了。谁有问题?有一个麦克风正在向你递过来。
**主持人:** Got it. Who has a question? There's a mic being thrown your way.
**观众提问者1:** 嗯,你认为大型公司在将组织转型为更加 AI 原生方面,哪些地方做错了——无论是在使用工具方面还是在生产产品方面?你知道,很明显小公司在创新方面正在把大公司打得落花流水。
**观众提问者1:** Um, what do you think the larger companies are getting wrong about transforming their organizations to be more AI-native, in terms of both using the tooling and producing products? You know, smaller companies are clearly just beating the crap out of larger ones when it comes to innovation here.
**Sam Altman:** 我认为每次重大技术革命基本上都会发生这种情况。嗯,这对我来说并不令人惊讶。他们做错的事情就是他们一直以来做错的事情——人们在自己的做事方式上变得极其固化。组织在自己的做事方式上变得极其固化。如果事情每一两个季度就在发生巨大变化,而你有一个信息安全委员会一年只开一次会来决定你要允许哪些应用程序,以及把数据放入一个系统意味着什么——看着这里发生的事情真的太痛苦了。但你知道,这就是创造性破坏。这就是为什么初创公司会赢。这就是行业向前发展的方式。嗯,我会说,我对大公司愿意做出改变的速度感到失望,但并不惊讶。嗯,我的预测大概是——还有大约两年的时间会去抗拒、假装这不会重塑一切,然后会有一个投降和最后一刻的挣扎,然后基本上为时已晚,总的来说初创公司就这样超过了用旧方式做事的人。嗯,我的意思是,这也发生在个人身上——看看,你知道,你跟一个普通的20岁年轻人聊聊,看看他们怎么使用 ChatGPT,然后你跟一个普通的35岁的人聊聊,看看他们怎么使用它或其他服务,差异是不可思议的。这让我想起了,你知道,当智能手机刚出来的时候,每个孩子都能用得非常好,而年纪大的人就是花了三年时间才弄清楚怎么做基本操作。然后当然人们会融入,但目前在 AI 工具上的这种代际差异是疯狂的,我认为公司只是这个现象的另一个症状。
**Sam Altman:** I think this basically happens in every major tech revolution. Um, there's nothing surprising about it to me. The thing they're getting wrong is the same thing they always get wrong, which is that people get incredibly stuck in their ways. Organizations get incredibly stuck in their ways. If things are changing a lot every quarter or two, and you have an information security council that meets once a year to decide what applications you're going to allow and what it means to put data into a system — it's just so painful to watch what happens here. But, you know, this is creative destruction. This is why startups win. This is how the industry moves forward. Um, I'd say I feel disappointed, but not surprised, at the rate at which big companies are willing to do this. Um, my prediction would be that there's another couple of years of fighting, of pretending this isn't going to reshape everything, and then there's a capitulation and a last-minute scramble, and it's sort of too late, and in general startups just blow past the people doing it the old way. Um, I mean, this happens to people too: talk to an average 20-year-old and watch how they use ChatGPT, and then go talk to an average 35-year-old and watch how they use it or some other service — the difference is unbelievable. It reminds me of when the smartphone came out, and every kid was able to use it super well, while older people took like three years to figure out how to do basic stuff. And then of course people integrate, but the generational divide on AI tools right now is crazy, and I think companies are just another symptom of that.
**主持人:** 还有人有问题吗?就是跟进一下这个话题,嗯,你看到年轻人使用 ChatGPT 的哪些有趣的使用场景可能会让我们感到惊讶?
**主持人:** Anybody else have a question? Just to follow up on that, um, what are the cool use cases you're seeing young people use ChatGPT for that might surprise us?
**Sam Altman:** 他们真的把它当作一个操作系统来用。嗯,他们有复杂的方式来设置它,把它连接到一堆文件上,他们脑子里记着相当复杂的 prompt,或者你知道,在某个地方存着可以复制粘贴的。嗯,我觉得那些东西都很酷、很令人印象深刻。还有另外一件事——他们基本上不会在不问 ChatGPT 该怎么做的情况下做人生决定。嗯,它了解他们生活中每一个人的完整背景,他们谈过什么,你知道的——记忆功能在那里带来了真正的改变。但总的来说,我觉得这是一个极度简化的概括——年纪大的人把 ChatGPT 当作 Google 的替代品来用。二三十岁的人可能把它当作人生顾问之类的东西来用,然后大学生把它当作操作系统来用。
**Sam Altman:** They really do use it like an operating system. Um, they have complex ways to set it up, to connect it to a bunch of files, and they have fairly complex prompts memorized in their heads, or, you know, saved somewhere they paste in and out of. Um, I mean, that stuff I think is all cool and impressive. And there's this other thing where they don't really make life decisions without asking ChatGPT what they should do. Um, it has the full context on every person in their life and what they've talked about — you know, the memory thing has been a real change there. But yeah, I think — gross oversimplification — older people use ChatGPT as a Google replacement. Maybe people in their 20s and 30s use it as a life advisor or something, and then people in college use it as an operating system.
**主持人:** 你们在 OpenAI 内部是怎么使用它的?
**主持人:** How do you use it inside of OpenAI?
**Sam Altman:** 嗯,我的意思是,它写了我们很多代码。
**Sam Altman:** Um, I mean it writes a lot of our code.
**主持人:** 多少?
**主持人:** How much?
**Sam Altman:** 我不知道具体数字。而且当人们说数字的时候,我觉得那总是一件非常蠢的事情,因为就像你说微软的代码有20%还是30%是——用代码行数来衡量简直是一种疯狂的方式。我——也许我能说的有意义的事情是——它在写有意义的代码。就像——它在写——我不知道有多少,但它写的是真正重要的部分。
**Sam Altman:** I don't know the number. And when people say a number, I think it's always a very dumb thing, because — like when you say 20 or 30% of Microsoft's code is written by it — measuring by lines of code is just such an insane way to do it. Maybe the meaningful thing I could say is that it's writing meaningful code. I don't know how much, but it's writing the parts that actually matter.
**主持人:** 那很有意思。下一个问题。
**主持人:** That's interesting. Next question.
**观众提问者2:** 嘿,Sam,麦克风递过来了。这样可以吗?嘿,Sam。呃,我觉得很有意思的是,你对 Alfred 那个关于你们想去哪里的问题的回答主要集中在消费者和成为核心订阅上,而且你们大部分收入也来自消费者订阅。为什么要保留 API?
**观众提问者2:** Hey Sam, mic coming your way. Is this okay? Hey, Sam. Uh, I thought it was interesting that your answer to Alfred's question about where you guys want to go focused mostly on consumer and on being the core subscription — and also most of your revenue comes from consumer subscriptions. Why keep the API around in ten years?
**Sam Altman:** 十年后,我真的希望这一切合并成一件事。就像你应该能用 OpenAI 账号登录其他服务。其他服务应该有一个令人难以置信的 SDK 来在某个时候接管 ChatGPT 的 UI。但就像——在你将拥有一个了解你的个性化 AI、拥有你的信息、知道你想稍后分享什么、拥有关于你的所有背景的前提下——你会想在很多地方使用它。现在,我同意目前版本的 API 离那个愿景还很远,但我认为我们能做到。
**Sam Altman:** I really hope that all of this merges into one thing. Like, you should be able to sign in with OpenAI to other services. Other services should have an incredible SDK to take over the ChatGPT UI at some point. But to the degree that you're going to have a personalized AI that knows you, that has your information, that knows what you want to share later, and has all this context on you — you'll want to be able to use that in a lot of places. Now, I agree that the current version of the API is very far off from that vision, but I think we can get there.
**观众提问者3:** 呃,是的,我可能有一个跟进问题。你有点抢了我的问题。嗯,但就像我们很多在做应用层公司的人,我们想要使用那些构建模块、那些不同的 API 组件,也许是 Deep Research API——它还没有发布但可能会——然后用它们来构建东西。这会是一个优先事项吗?为我们启用那个平台?我们应该怎么想这件事?
**观众提问者3:** Uh, yeah, maybe I have a follow-up question to that one — you kind of took mine. Um, a lot of us who are building application-layer companies want to use those building blocks, those different API components — maybe a deep research API, which isn't released but could be — and build stuff with them. Is that going to be a priority, enabling that platform for us? How should we think about that?
**Sam Altman:** 是的,我觉得,我希望能有一个介于两者之间的东西——会有一种类似于 HTTP 级别的新协议,用于未来的互联网,事物变得联邦化并被分解为更小的组件,agent 不断地暴露和使用不同的工具,认证、支付、数据传输——这一切都在这个层面上内置,每个人都信任,一切和一切对话。我不太确定那会是什么样子,但它正在从迷雾中浮现。嗯,当我们对此有更清晰的认识时——同样,可能需要我们几次迭代才能达到那里。但那大概是我希望看到事情发展的方向。
**Sam Altman:** Yeah — I think, I hope, something in between those: that there's sort of a new protocol, on the level of HTTP, for the future of the internet, where things get federated and broken down into much smaller components, agents are constantly exposing and using different tools, and authentication, payment, and data transfer are all built in at a level that everybody trusts, and everything talks to everything. I don't quite think we know what that looks like yet, but it's coming out of the fog. Um, and as we get a better sense of it — again, it'll probably take us a few iterations to get there — but that's kind of where I would like to see things go.
**观众提问者4:** 嘿,Sam。呃,在后面。呃,我叫 Roy。我很好奇,嗯,AI 如果有更多的输入数据显然会做得更好。有没有考虑过输入传感器数据?嗯,什么类型的传感器数据,比如温度,嗯,你知道的,物理世界中的那些东西,你可以输入进去让它更好地理解现实。
**观众提问者4:** Hey, Sam. Uh, back here. My name is Roy. I'm curious — AI would obviously do better with more input data. Is there any thought to feeding in sensor data? And what type of sensor data — whether it's temperature or, you know, other things in the physical world — could you feed in so that it could better understand reality?
**Sam Altman:** 人们已经做了很多了。嗯,人们就是——你知道,人们建了各种东西,把传感器数据输入到 API 中,用 o3 API 调用什么的,对于某些用例它确实效果很好。嗯,我要说的是,最新的模型在这方面做得很好,而以前不行。呃,所以我们可能会在某个时候更明确地把它内置进去,但现在已经有很多进展了。
**Sam Altman:** People do that a lot. Uh, people build things where they just put sensor data into an API — into an o3 API call or whatever — and for some use cases it does work super well. Um, I'd say the latest models seem to do a good job with this, and they used to not. Uh, so we'll probably bake it in more explicitly at some point, but there's already a lot happening there.
**观众提问者5:** 嗨,Sam。呃,我对在 Playground 里使用语音模型感到非常兴奋。所以我有两个问题。第一个是,语音对 OpenAI 来说有多重要——在基础设施的优先级排序中——你能分享一下你认为它会如何在产品和 ChatGPT 这个核心产品中呈现吗?
**观众提问者5:** Hi Sam. Uh, I was really excited to play with the voice model in the Playground, so I have two questions. The first is: how important is voice to OpenAI, in terms of stack ranking for infrastructure? And can you share a little bit about how you think it'll show up in the product — in ChatGPT, the core thing?
**Sam Altman:** 我认为语音极其重要。坦白说,我们只是还没有做出一个足够好的语音产品。没关系。就像我们花了一段时间才做出一个足够好的文本模型一样。嗯,我们最终会破解那个难题,当我们做到的时候,嗯,我认为很多人会想要更多地使用语音交互。我——当我们首次推出我们目前的语音模式时,对我来说最有趣的是,它是在触摸界面之上的一个新的交互层,你可以一边说话一边在手机上点击。我继续认为语音加 GUI 交互有一些很棒的东西是我们还没有破解的。但在那之前,我们会先把语音做得真正出色。当我们做到的时候,我认为它不仅对现有设备来说很酷,而且我认为如果你能让语音感觉达到真正的人类水平,语音将会催生一个全新类别的设备。
**Sam Altman:** I think voice is extremely important. Honestly, we just have not made a good enough voice product yet. That's fine — it took us a while to make a good enough text model too. Um, we will crack that code eventually, and when we do, um, I think a lot of people are going to want to use voice interaction a lot more. When we first launched our current voice mode, the thing that was most interesting to me was that it was a new stream on top of the touch interface — you could talk and be clicking around on your phone at the same time. And I continue to think there's something amazing to do with voice-plus-GUI interaction that we have not cracked. But before that, we'll just make voice really great. And when we do, not only will it be cool with existing devices, but I sort of think voice will enable a totally new class of devices — if you can make it feel like truly human-level voice.
**主持人:** 类似的问题。关于编程的类似问题。我很好奇,编程只是另一个垂直应用,还是它对 OpenAI 的未来更为核心?
**主持人:** Similar question about coding. I'm curious: is coding just another vertical application, or is it more central to the future of OpenAI?
**Sam Altman:** 那个对 OpenAI 的未来更为核心。嗯,编程我认为将是这些模型的——现在如果你问 ChatGPT 一个问题,你会得到文本回复,也许会得到一张图片。嗯,你会希望得到一个完整的程序回来。你会希望每个回复都有定制渲染的代码,或者至少我会。嗯,你会希望这些模型有能力去让事情在世界上发生,而编写代码我认为将是你驱动世界、调用一堆 API 或其他什么的非常核心的方式。所以我会说编程会更多地在核心类别里。我们显然也会通过我们的 API 在我们的平台上提供它。嗯,但你知道,ChatGPT 应该在编写代码方面表现出色。
**Sam Altman:** That one's more central to the future of OpenAI. Um, coding, I think, will be central to how these models work. Right now, if you ask ChatGPT a question, you get text back, maybe an image. Um, you would like to get a whole program back. You would like custom-rendered code for every response — or at least I would. Um, you would like these models to have the ability to go make things happen in the world, and writing code, I think, will be very central to how you actuate the world, call a bunch of APIs, or whatever. So I would say coding will be more in a central category. We'll obviously expose it through our API, on our platform, as well. Um, but, you know, ChatGPT should be excellent at writing code.
**主持人:** 所以我们将从 assistant 的世界转向 agent,再到基本上是应用程序,一路走下去。
**主持人:** So we're going to move from the world of assistants to agents to basically applications all the way through.
**Sam Altman:** 我觉得会感觉是非常连续的,但是的。
**Sam Altman:** I think it'll feel very continuous, but yes.
**主持人:** 嗯,所以你对研究路线图中更智能的模型有信心。太好了。我有一个心智模型——有一些要素,比如更多数据、更大的数据中心、Transformer 架构、test-time compute——有什么是被低估的要素,或者将会成为这个组合的一部分,但可能不在大多数人心智模型中的?
**主持人:** Um, so you have conviction in the roadmap about smarter models — awesome. I have this mental model: there are some ingredients, like more data, bigger data centers, the transformer architecture, test-time compute. What's an underrated ingredient, or something that's going to be part of that mix that maybe isn't in the mental model of most people?
**Sam Altman:** 嗯,我的意思是——每一个要素都真的很难,你知道,显然最高杠杆的东西仍然是大的算法突破,我认为可能还剩下一些10倍或100倍的改进——不会很多,但即使一两个也是大事。嗯,但你知道,基本上就是算法、数据、算力——这些是主要的要素。
**Sam Altman:** Um, I mean, each of those things is really hard, and, you know, obviously the highest-leverage thing is still big algorithmic breakthroughs — I think there probably are still some 10x's or 100x's left. Not very many, but even one or two is a big deal. Um, but yeah, it's basically algorithms, data, compute — those are the big ingredients.
**观众提问者6:** 呃,嗨,嗯,我的问题是——你管理着世界上最好的 ML 团队之一。呃,你如何在让聪明人——比如 Ilya——去深入追求研究或其他看起来令人兴奋的东西,与自上而下地说"我们要做这个,我们要让它发生"之间取得平衡?
**观众提问者6:** Uh, hi. So my question is: you run one of the best ML teams in the world. Uh, how do you balance letting smart people — like Ilya — deeply chase research, or something else that seems exciting, versus going top-down and saying, we're going to build this, we're going to make it happen?
**Sam Altman:** 有些项目需要大量的协调,所以必须有一点自上而下的指挥,但我认为大多数人试图做太多那样的事情。我的意思是——可能有其他方式来管理好的 AI 研究或好的研究实验室,但当我们创立 OpenAI 的时候,我们花了大量时间试图理解一个运作良好的研究实验室是什么样的。你得回溯到很久以前的历史。事实上,几乎所有能帮助我们提供建议的人都已经去世了。嗯,已经很长时间没有出现过好的研究实验室了。你知道,人们经常问我们——为什么 OpenAI 能反复创新,而其他 AI 实验室只是在模仿?或者为什么生物实验室 X 做不出好的工作,而生物实验室 Y 能?或者诸如此类的。我们就一直说,这是我们观察到的原则,这是我们学到它们的方式,这是我们参考过的历史案例。然后每个人都说太好了,但我要去做另一件事。我们说没关系。你来找我们寻求建议。你想做什么就做什么。嗯,但我觉得很了不起的是,这些我们试图用来管理研究实验室的少数原则——我们并没有发明它们。我们无耻地从历史上其他优秀的研究实验室那里复制来的——它们对我们来说一直很有效。而那些有一些聪明理由要做别的事情的人,结果没有成功。
**Sam Altman:** There are some projects that require so much coordination that there has to be a little bit of top-down quarterbacking, but I think most people try to do way too much of that. I mean, there are probably other ways to run good AI research, or good research labs in general, but when we started OpenAI, we spent a lot of time trying to understand what a well-run research lab looks like. And you had to go really far back in the past. In fact, almost everyone who could help advise us on this was dead. Um, it had been a long time since there had been good research labs. And, you know, people ask us a lot why OpenAI repeatedly innovates while the other AI labs sort of copy, or why biolab X doesn't do good work while biolab Y does, or whatever. And we keep saying: here are the principles we've observed, here's how we learned them, here's what we looked at in the past. And then everybody says, "Great — but I'm going to go do the other thing." And we say, that's fine; you came to us for advice, you do what you want. Um, but I find it remarkable how well these few principles we've tried to run our research lab on — which we did not invent; we shamelessly copied them from other good research labs in history — have worked for us. And for the people who had some smart reason for why they were going to do something else, it didn't work.
**观众提问者7:** 嗯,所以在我看来,这些大型模型有一件真正令人着迷的事——作为一个知识爱好者——它们可能体现并让我们回答人文学科中这些令人惊叹的长期问题——关于周期性变化和艺术性的有趣事物,甚至像,你知道,系统性偏见和其他社会中真正发生的事情在多大程度上存在,我们能否检测到这些——非常微妙的东西,我们以前只能假设而无法真正做到。我想知道 OpenAI 是否有想法,甚至有路线图,来与学术研究人员合作,帮助解锁我们在人文学科和社会科学中能首次学到的一些新东西。
**观众提问者7:** It seems to me that one of the really fascinating things about these large models, as a lover of knowledge, is that they potentially embody and allow us to answer amazing long-standing questions in the humanities: about cyclical changes and interesting artistic things, or even to what extent systematic prejudice and other such things are really happening in society, and whether we can detect these very subtle things, which before we could never really do more than hypothesize about. I'm wondering whether OpenAI has thoughts about, or even a roadmap for, working with academic researchers to help unlock some of these new things we could learn for the first time in the humanities and the social sciences.
**Sam Altman:** 我们有的。嗯,是的,看到人们在那里做的事情真的很棒。我们确实有学术研究项目,在其中我们合作并做一些定制工作,但主要是人们会说"我想要访问模型",或者"也许我想要访问基础模型",我认为我们在这方面做得很好。呃,我们做的事情中有一个很酷的地方是,我们的激励结构很大程度上推动我们去让模型尽可能聪明、便宜和广泛可及,这对学术界和整个世界都很有益。所以,你知道,我们确实做一些定制合作,但我们经常发现,研究人员或用户真正想要的只是我们让通用模型全面变得更好。嗯,所以我们试图把大约90%的推动力集中在那上面。
**Sam Altman:** We do. Yeah, it's amazing to see what people are doing there. We do have academic research programs where we partner and do some custom work, but mostly people just say, "I want access to the model," or maybe, "I want access to the base model," and I think we're really good at that. One of the cool things about what we do is that so much of our incentive structure pushes us toward making the models as smart, cheap, and widely accessible as possible, and that serves academics, and really the whole world, very well. So we do some custom partnerships, but we often find that what researchers or users really want is just for us to make the general model better across the board. And so we try to focus about 90% of our thrust vector on that.
**主持人:** 我很好奇你怎么看定制化。所以,你提到了联邦化的、用 OpenAI 登录、带上你的记忆和背景。我只是好奇你是否认为定制化——以及这些不同的针对特定应用的 post-training——是一种权宜之计,用来弥补核心模型不够好,你是怎么想这件事的。
**主持人:** I'm curious how you're thinking about customization. You mentioned the federated sign-in with OpenAI, bringing your memories and your context. I'm just curious whether you think customization, and all this application-specific post-training, is a band-aid for the core models not being good enough versus just making the core models better, and how you're thinking about that.
**Sam Altman:** 我的意思是,在某种意义上,我认为柏拉图式的理想状态是一个非常小的推理模型,拥有一万亿 token 的上下文窗口,你把你的整个人生都放进去。模型永远不重新训练。权重永远不定制。但那个东西可以在你的整个上下文中进行推理,而且高效地做到这一点。你这辈子的每一次对话、你读过的每一本书、你看过的每一封邮件、嗯、你看过的一切都在里面,加上连接你来自其他来源的所有数据。而且,你知道的,你的生活就不断追加到上下文中,你的公司也对公司的所有数据做同样的事情。嗯,我们今天还做不到。呃,但我觉得其他任何方案都是对那个柏拉图式理想的妥协,那就是我最终希望我们做定制化的方式。
**Sam Altman:** In some sense, I think the platonic ideal state is a very tiny reasoning model with a trillion tokens of context that you put your whole life into. The model never retrains; the weights never customize. But that thing can reason across your whole context and do it efficiently. Every conversation you've ever had in your life, every book you've ever read, every email you've ever read, everything you've ever looked at is in there, plus all your data connected from other sources. And your life just keeps appending to the context, and your company does the same thing with all of its data. We can't get there today, but I think of anything else as a compromise off that platonic ideal, and that is how I hope we eventually do customization.
**主持人:** 最后一个问题,在后面。
**主持人:** One last question in the back.
**观众提问者8:** 嗨,Sam,感谢你的时间。你认为在未来12个月内,大部分价值创造会来自哪里?是高级记忆功能,还是安全性,还是让 agent 能做更多事情并与现实世界交互的协议?
**观众提问者8:** Hi Sam, thanks for your time. Where do you think most of the value creation will come from in the next 12 months? Would it be advanced memory capabilities, or security, or protocols that allow agents to do more and interact with the real world?
**Sam Altman:** 嗯,我的意思是,在某种意义上,价值将继续来自三件事——建设更多基础设施、更智能的模型,以及构建将这些东西整合到社会中的脚手架。如果你推动这三件事,我认为其余的会自行理顺。嗯,在更细节的层面上,我觉得2025年将是 agent 做工作的一年。特别是编程,我预期会是一个主导类别。我认为还会有几个其他的。嗯,明年是我预计会有更多像 AI 发现新东西的一年——也许我们会让 AI 做出一些非常重大的科学发现,或者协助人类做到这一点。而且你知道,我是一个信奉者——在人类历史上,大部分真正可持续的经济增长来自于——一旦你已经扩张并殖民了地球之后——大部分来自于更好的科学知识,然后为世界实施这些知识。然后2027年,我猜测将是所有这些从智力领域转移到物理世界的年份,机器人从一种新奇事物变成一个严肃的经济价值创造者。但那只是我现在脑子里临时想到的一个猜测。
**Sam Altman:** In some sense, the value will continue to come from three things: building out more infrastructure, smarter models, and building the scaffolding to integrate this stuff into society. If you push on those, I think the rest will sort itself out. At a greater level of detail, I think 2025 will be a year of agents doing work. Coding in particular I would expect to be a dominant category, and I think there will be a few others too. Next year is the year where I would expect more AI discovering new stuff; maybe we'll have AIs make some very large scientific discoveries, or assist humans in doing so. And I am a believer that most of the real sustainable economic growth in human history, once you've spread out and colonized the earth, comes from better scientific knowledge and then implementing it for the world. And then 2027, I would guess, is the year where all of that moves from the intellectual realm to the physical world, and robots go from a curiosity to a serious creator of economic value. But that's just an off-the-top-of-my-head guess right now.
**主持人:** 我能用几个快速问题来结束吗?太好了。其中一个是:GPT-5。它会比我们在座的所有人都聪明吗?
**主持人:** Can I close with a few quick questions? Great. One of which is: GPT-5. Is that going to be smarter than all of us here?
**Sam Altman:** 嗯,我的意思是,如果你觉得自己比 o3 聪明很多,那你可能还有一段路要走,但 o3 已经很聪明了。
**Sam Altman:** I mean, if you think you're way smarter than o3, then maybe you have a little ways to go, but o3 is already pretty smart.
**主持人:** 嗯,两个私人问题。上次你来这里的时候,你刚刚经历了 OpenAI 的那件事。呃,现在有了一些时间和距离,你对在座的创始人们关于韧性、耐力和力量有什么建议?
**主持人:** Two personal questions. Last time you were here, you had just come off a blip with OpenAI. Given some perspective and distance now, do you have any advice for the founders here about resilience, endurance, and strength?
**Sam Altman:** 嗯,随着时间推移会变得更容易。我觉得作为创始人,你会面临很多逆境,挑战的种类会变得更难、风险更高,但随着你经历更多的坏事,情感上的代价会变得更容易承受。所以,你知道,在某种意义上——即使抽象来看挑战变得更大更难,你应对它们的能力、你建立起来的那种韧性会随着每一次经历变得更强。嗯,然后我觉得——作为创始人面临的重大挑战中最难的部分不是它们发生的那一刻。呃,很多事情在公司的历史中会出错。嗯,在危机的急性期你可以——你知道,你会得到很多支持,你可以靠大量的肾上腺素来运转。就像——你知道,就像——即使是真的很大的事情,比如你的公司钱花光了、失败了——很多人会来支持你。嗯,你会挺过去,然后继续做下一件事。我觉得更难在心理上管理的是之后的后续影响。嗯,我觉得如果有——你知道,人们很多关注放在如何在危机的那一刻工作上。而真正有价值的是学会如何收拾残局。关于这方面的讨论少得多。我觉得——我实际上从来没有找到什么好的资料可以推荐给创始人去读——不是关于在零日或第一天或第二天如何处理真正的危机,而是在第60天,当你只是试图在废墟上重建的时候。嗯,那才是我觉得你可以练习并变得更好的领域。
**Sam Altman:** It gets easier over time. You will face a lot of adversity in your journey as a founder, and the challenges get harder and higher-stakes, but the emotional toll gets easier as you go through more bad things. So in some sense, even though the challenges abstractly get bigger and harder, your ability to deal with them, the resilience you build up, gets stronger with each one you go through. And I think the hardest thing about the big challenges that come as a founder is not the moment when they happen. A lot of things go wrong in the history of a company. In the acute phase you get a lot of support, and you can function on a lot of adrenaline. Even with the really big stuff, like your company running out of money and failing, a lot of people will come and support you, and you get through it and go on to the new thing. The thing that I think is harder to manage your own psychology through is the fallout afterward. People focus a lot on how to operate in that one moment during the crisis, but the really valuable thing to learn is how you pick up the pieces, and there's much less talk about that. I've never actually found something good to point founders to, to go read, about not how you deal with the real crisis on day zero or day one or day two, but on day 60, when you're just trying to rebuild after it. And that's the area where I think you can practice and get better.
**主持人:** 谢谢你,Sam。
**主持人:** Thank you, Sam.
**Sam Altman:** 是的。
**Sam Altman:** Yeah.
**主持人:** 你目前正式还在休陪产假。
**主持人:** You're officially still on paternity leave.
**Sam Altman:** 我知道。
**Sam Altman:** I know.
**主持人:** 所以,感谢你来这里和我们交流。非常感谢。
**主持人:** So, thank you for coming in and speaking with us. Appreciate it.
**Sam Altman:** 谢谢。
**Sam Altman:** Thank you.