What's missing from current AI models? A conversation with Sam Altman

Our CEO Riccardo Di Molfetta recently sat down with Sam Altman, CEO and co-founder of OpenAI, at Harvard University. The interview was part of a series of events hosted by Xfund, the early-stage venture capital fund co-founded by Harvard's School of Engineering and Applied Sciences (SEAS). In their conversation, Riccardo asked Sam about his views on the future of human-AI interaction, personal computing, and what's currently missing in the mental and social capacities of large language models (LLMs).

Below is the transcript of the conversation between Riccardo and Sam Altman:

Riccardo: “I would be interested to hear more about your view on the evolving relationship between AI and humans — as in AI companions and personal AI in the future.”

Sam Altman: “I think there's a very open question of what people want — whether it's a very capable senior coworker or an echo of themselves. It seems clear at this point that we'll all be spending a lot of time talking to AIs, but are we going to treat them like colleagues or like extensions of ourselves? I don't think we know yet what we'll want, and I don't think we know what most people are going to want either. The only way we'll figure that out is by letting people explore all of these options. It seems clear that people are going to spend more time talking to AI in the future than they do today, but what that feels like, the way we'll use these things, I could see going either way. I think what I want is something like a super-competent colleague that knows absolutely everything about me — my whole life, every email, every conversation I've ever had, but doesn't feel like an extension of me. It's clear that I'm me and that it is another, separate thing.”

“…Also, one of the potential visions for AGI that I find compelling is that it's not some fundamentally new magic black box in the sky, but a bunch more entities running around contributing to the knowledge scaffolding and tool building of society. Society is already sort of like an AGI relative to any of us. That collective intelligence does not exist in her brain or his brain (indicating people in the room); it is in the space between us: the knowledge we've built up, the tool chain that we get to use. An iPhone is not going to get built by one person, but once you have it, it's a superpower you get to build on further. So if AGIs just contribute to all of that, I think it's a good framework for thinking about how humans and AI will coexist.”

Riccardo: “Building on that, do you think we're going to need more social capacities or mental capacities? What is missing in current LLMs for humans and AI to have that more intimate relationship?”

Sam Altman: “Well, I think the biggest problem is just that they're not very good. GPT-4, relative to what I hope we have soon, is just this incredibly dumb model, and so it's not that helpful. If we go back to that senior colleague: it's not a very thoughtful senior colleague. It makes a lot of dumb mistakes. It can't really reason. So I think we're just so far off from where we need to be on capabilities.”