The AI Optimist Club

Has doom and gloom over the potentially deadly consequences of AI overshadowed the real opportunities ahead? Two AI pioneers—Fei-Fei Li and Reid Hoffman—discuss why AI is poised to enhance our work and personal lives.

Released on 12/07/2023

Transcript

We're so lucky to have these two people here.

Fei-Fei Li is the Sequoia Professor in the Computer Science Department at Stanford.

She's renowned as the inventor of ImageNet, which was a pivotal moment in the development of contemporary AI.

And she's got a fantastic new book called The Worlds I See.

Reid Hoffman is a legendary entrepreneur and investor.

He is the co-founder of LinkedIn, a partner at Greylock.

I could take the whole time talking about everything he's done.

And he recently co-wrote a book called Impromptu: Amplifying Our Humanity Through AI. His co-author was GPT-4.

[audience laughs]

So, thanks so much for being with us.

You know, this panel is called The Optimist Club because of where you stand, in contrast to all that doomerism. But I wanna start with you, Fei-Fei, with something you did in your book.

It's called The Worlds I See because you talk about seeing the world differently from the experience of creating ImageNet, and looking at the world as this alien intelligence, maybe, was being introduced to us.

Tell me about that, and what it might mean for all of us.

That's a very profound question, Steven. Thank you.

So, the title of the book is The Worlds I See, and I made sure "worlds" is plural, because as an AI scientist who works in computer vision, I actually do think we're seeing this world in many different layers.

First of all, vision is part of intelligence. As an AI scientist working in computer vision, it's very clear to me that visual intelligence is a cornerstone of human intelligence and of machine intelligence.

This is an especially interesting context in today's technology, where language is leading a lot of the breakthroughs. It's important to recognize that intelligence is extremely multimodal for humans and will be for machines, and there's a lot of room to grow in terms of deep perceptual, visual, and eventually action-oriented understanding of the world.

But you also asked me what I see. As an AI scientist, I not only work on the technology, I'm also dealing with the consequences of the technology we have been building, and that makes me see the human aspect of this technology: the human-centeredness, the human impact, the human responsibility, the human agency. That's a different layer I want to underscore, both in the book and in my leadership in communicating AI to the world.

Maybe we'll talk a little bit about how you're doing that at Stanford. But you know, the New York Times gave your book a rave, and yet they recently ran an article naming the most important people in AI, and considering ImageNet and everything else, I was a little surprised that you weren't on it, and no other people of your gender were on it.

Reid, did you notice that?

I did.

And after I stopped being pretty irritated by the incompetence reflected in the article, 'cause we're sitting here with one of the people who should have been in it, I thought, well, this reflects a larger problem. Fei-Fei is one of the amazing leading ones, but there have been a number of women who have been key to artificial intelligence, both through its history and in the important current wave.

Yeah, I wanted to say it's wrong for the New York Times to publish a list of the people who made modern AI happen that has zero women on it.

[audience applauds]

But that brings up an interesting point. Fei-Fei, you mentioned the human aspect of it. These systems are trained, and will be trained, on human content. Is it possible to ever purge them of bias, considering that they're learning from us, and we, even our greatest institutions, are capable of making these errors? Reid should chime in too.

So, first of all, humans' relationship with AI, or any tool we have built, is a complex one. A tool is designed or intended to help, to make our lives and work better, but it also brings a lot of harm and unintended consequences.

So I'm gonna admit, I don't know if I can answer your question of whether we're gonna completely, 100% purge AI of human bias, but that doesn't mean we shouldn't have the responsibility of trying very hard.

We are aware of this bias. We are learning how to mitigate it. We're learning how to govern this tool. So I think it's our responsibility to make it better and better, but we have to start by understanding that this is complex, that humans themselves are flawed, and that we need to take that responsibility and try to do better.

I think one mark of human progress is that we are trying to figure out how to be our better selves. If you go back a hundred years, I'm pretty sure there was no real interest in disability rights, and therefore there would be bias issues.

So I think not only is it going to be forever a work in progress with AI, it's forever a work in progress with human beings. And I think one of the key things to be clear on is what our benchmarks and targets are. Exactly as Fei-Fei said, it's a continuing work in progress that you keep applying yourself to, fiercely and intelligently. But the benchmark is to help us all improve, not to be perfect.

Take autonomous vehicles as a parallel. There are over 40,000 deaths a year in the US from car-related accidents, and that's not including injuries and everything else. The goal with autonomous vehicles is not to get to zero. If you said the goal was zero, you'd go through a number of years where you could have saved tens of thousands of lives by deploying the technology but didn't. Even though there will still be some accidents, you are net massively saving lives, and we should see our way forward to that.

Well, spoken like true optimists.

So, there were two letters that circulated among the AI community and people associated with it. One of them said to put a pause on developing AI, or training it, I'm not sure which they meant, for six months. A lot of people signed that, and then even more people, some of them really actively involved in developing AI, signed a second letter, sort of a general statement saying, we're kinda concerned that this might kill us, or whatever. Neither of you signed either letter. Could you explain why? 'Cause I'm sure someone asked you to put your John Hancock there. Why didn't you?

Fei-Fei, you first.

It takes about a minute to sign a letter. It takes five years to build a human-centered AI institute that has been working on AI policy, AI ethics, AI for good, and AI for all. Talking is easy; really working hard to bring human-centered, ethical AI to the world is way harder, and that's where I focus my energy.

Yeah,

[audience applauds]

I 100% agree with Fei-Fei. It's part of the reason I'm on the advisory board of her institute.

Yes, and Reid helped us build this institute.

I will say, because it's important to state, on the two letters: on the six-month pause letter, it's almost certain that the people think they're being positive when they're actually being destructive. If you ask the simple question of who is going to listen to your letter and possibly pause, it's the people who care about human-centered values and everything else, so your net impact is between neutral and bad, because the good actors are pausing and the others are not. It was just a foolish endeavor, that first one.

And that's leaving aside the people who signed the letter saying "we should pause" while accelerating themselves. We all know who I'm talking about.

[audience laughs]

And then there's the 22-word statement, which I actually thought about some more, and it was a little more strongly worded than you just suggested. It said AI should be considered an existential risk along with climate change, pandemics, et cetera.

And ultimately, the reason I didn't sign that statement, although many people I love and deeply respect did, and I give them respect and credit for that, is because AI isn't like the other existential risks: those have no positive consequences. Climate change does not have a positive consequence. A pandemic does not have a positive consequence. AI may be the thing that helps us solve the next pandemic. AI may be the thing that helps us mitigate climate change. So it has a positive column as well; it's not just whatever existential risk you're thinking about, and you have to think about that side too.

And this is the problem with the negative focus, and the reason why I recommend everyone come join us in The Optimist Club: not because it's utopian and everything works out just fine without your having to navigate, but because AI can be part of an amazing solution, and that, as Fei-Fei is saying, is what we're trying to build towards.

I really wanna agree with Reid on this, because what Reid is saying is that this is a very horizontal technology, and that means it has many roles to play. AI can give us a lot of opportunities to discover new materials, new treatments for diseases, climate solutions, new energy, the fusion results and all that.

In the meantime, we do need to recognize its risks. Existential risk, in the whole list of AI risks, is right now the furthest away. We have immediate social risks: disinformation and its effects on democracy, job disruption, bias, and all that.

So if all of our conversation, energy, and social capital is, first, focused on pure negativity without recognizing the opportunities, and second, even when we're focusing on the problems, not focused on the immediate, important problems for society, I would be worried about that.

This is why, a month ago, I had a very fun public discussion with Professor Geoff Hinton about this. I love him to pieces, but he and I had a discussion about how to weigh existential risk versus social risks.

Well, one member of The Optimist Club, I don't know if you're welcoming him, is a fellow named Marc Andreessen, who published a thing called The Techno-Optimist Manifesto: very strident arguments that quoted Thomas Edison, Richard Feynman, and Carrie Fisher on the subject.

Let me read you a sentence from his manifesto: "We believe that any deceleration of AI will cost lives. Deaths that were preventable by the AI that was prevented from existing is a form of murder." Are you among the first-person plural saying "we believe" in that sentence?

Well, I would say "we believe," although "any deceleration" is not quite right.

So, one of the ways I describe myself, and have for a decade or more, is as a techno-optimist, not a techno-utopian, which means that just because you can build the technology doesn't mean it necessarily has a good outcome. You have to shape it, you have to direct what you're doing, being an intelligent shaper of it and driving it in the right direction.

So for example, a variety of AI systems, machine learning systems, over the last decade have had bad bias results: parole or credit decisions made on a racial basis, and so on. So you have to pay attention to doing it the right way, and if that paying attention is a mild deceleration as you go, that's because you're doing it to get the really good outcomes.

Now, generally speaking, I'm a believer that our future will give us more tools, both for the betterment of humanity and for navigating the risks. So generally speaking, I'm not a decelerationist at all; I'm more of an accelerationist, but for intelligent acceleration, navigating the course. Which means there may be moments when, for example, you get to a corner on the road and you slow down while you're going around it. That's rational.

Steve, I think I should start a new club called The Techno-Humanist Club. I'm not a pure optimist, nor a doomer.

I think we need to look at this technology with nuance. I believe very much in the possibilities, the opportunities, but I also agree with Reid: we should look at the messy consequences of technology, sometimes intended, sometimes unintended. Look at the human impact, from individual dignity all the way to societal and socioeconomic structures.

So I think it's too simple to ask, do you wanna accelerate or decelerate? We should talk about where we wanna accelerate, like Reid said, and where we should slow down, and that's a nuanced topic.

Well, I think one thing that might be different about AI is that we're uncertain about the ceiling, and about what it means if you hit whatever that ceiling is. So, there's this term, AGI, right? Artificial General Intelligence. I don't think there's universal agreement on what that is. I do know that when I was researching my OpenAI story for Wired, I found out that in their contracts there actually is some kind of clause that says if we reach AGI, then the terms of this contract are off, because we're in a different world now. And when I talked to Satya Nadella, a fellow you know pretty well, Reid, he said, yeah, it could be the last invention, and he just spun that thing off. I'm curious: what do each of you think AGI is, and what would happen if we got there?

You know, I always wonder what Alan Turing would think the definition of AGI is. Or what John McCarthy and Marvin Minsky would think. The reason I say this is that these are the founding pioneers of our field. I mean, Alan Turing probably wasn't aware he inspired humanity to create AI, but John McCarthy, Marvin Minsky, those people in that Dartmouth summer, put on paper an audacious dream of creating machines that think. I don't think they put on paper a dream of narrow AI, or task-specific AI.

So from that point of view, as a scholar, it's hard for me to fully understand the difference between the science of AI and this particular term, AGI, that comes, frankly, out of industry.

To me, the ceiling of AI is similar to the ceiling of biology or physics, in the sense that we will continue to discover and uncover new knowledge of intelligence and innovate intelligent machines, with the goal of doing good for humanity.

If we get to a point where that benevolence has diminishing returns and we really reach a very dangerous point, I think as a species we need to collectively face that responsibility. And this is where my optimism lies: not in technology. My optimism is in humanity. I believe in our resilience. I believe in the better part of ourselves. So I don't know where the ceiling is, and I think AGI is not a term the founding fathers would have used.

Hmm. The founding fathers. I love it.

One of the things I've quipped about AGI is that it actually stands for the AI we haven't invented yet. Which kind of means we'll never actually get to AGI, because if you look at the set of AI milestones that have been hit, including the Turing Test, we blow past them and say, well, that's not what we meant. We meant this other thing.

And I think part of the reason people have such challenged judgments on these things is that we don't know what we would do if we actually created an AGI that could be a superintelligence, and we don't know what an exponential curve is.

One of the things I find most interesting is when people say, "People are really bad at predicting the results of exponential curves, and so therefore my prediction is..." [laughs] And it's like, yes, I agree with your first statement, and that's the reason why you should be a little more cautious about asserting the second.

And that doesn't mean go blindly, but it does mean that kind of navigation. Moore's law held an exponential curve for a long time, and I do think we're still on an exponential increase in compute and in what we're doing in AI.

While people will frequently say that that compute curve will then become an IQ curve, that's where you begin doing speculation. It's not at all clear that the increase in compute correlates directly to an increase in IQ, or to increases in certain kinds of capabilities and so forth.

And I think our next larger-scale models are going to have new, amazing things for us, but that doesn't necessarily mean it's gonna be, you know, like here in the Valley, the number of times you hear quotes like, "Well, I, for one, am in support of our future robot overlords." And you're like, what the heck are you talking about?

But anyway, I think the right thing is to say we're bad at predicting exponentials and we should keep our attention focused on them, but we've been part of a number of exponential curves that we have navigated just fine.

Okay.

Well, I wanna switch to something a little more prosaic. We've had a big shock lately about what's been happening at OpenAI. It's a company you were an original funder of, Reid, and you were on the board until relatively recently. I'm curious, were you surprised that this board you once sat on had fired Sam Altman?

It was definitely like reading the blog post was like,

what's going on?

I still don't think we fully know, you know,

as the world 'cause I haven't been on the board since March

and I've not had any conversation

with any of the board members.

I have talked to Sam. I do think we are in a much better place, for the world and for the mission, with Sam as CEO. I think he's very competent at that. You know, I don't think I've ever seen, in all of corporate history, a board fire a CEO and then something that rounds to 100% of the employees sign a letter saying, you all resign and reinstate the CEO, or we're outta here. I think that's history-making. So yeah, I was surprised.

Well, you're at Microsoft now. Could Microsoft really have taken on that whole company and absorbed it? I'm just wondering whether Satya Nadella breathed a sigh of relief that he wound up with Sam back in charge of OpenAI, and he gets all the fruits of their labors without having to take on the whole company.

Well, I believe it was a very genuine offer, both for the world and for Microsoft's business purposes, because I think the arrangement that OpenAI and Microsoft have made is going to be taught in business schools and everywhere else as one of the epic partnerships in technology history. And I think for Satya, the outcome where there's an independent organization with Sam as the CEO, which continues doing the good work that has borne so much fruit in this partnership, is exactly a case of, if it ain't broke, don't fix it. Let's keep going with what it is.

But also, Satya is a very high-integrity, genuine leader, and I think he would've hired everybody from OpenAI and kept going if that was the only path left open to him.

So, in the press, we went bonkers over it; it was the biggest deal in our world. But are there big lessons to be drawn from it, whether about profit versus benefit and safety, or whatever? Fei-Fei, did you take any lesson from it, or do you find it just sort of an interesting sideshow?

Well, first of all, I have tremendous respect for all the technologists at OpenAI, from Greg to Ilya to Sam, and many of my former students there, so I breathed a sigh of relief when this whole thing normalized. If there's any story in this particular story, I would say it's a human story. Even in the world of AI, in the world of making AI technology, what unfolded is more a human story.

One final question before we go.

Fei-Fei, would you sit on the board?

I will carefully consider that, yes.

[audience laughs]

Okay, well on that note, I'm gonna get outta here.

Thank you.

Should I tell OpenAI that you're gonna be doing board recruiting for them?

[audience laughs]

Thanks so much. Thank you.

These guys are great.