# MIT Quest for Intelligence Launch

Date: 11-01-2025 21:47:45

## Video List

1. [MIT Quest for Intelligence Launch: Opening Remarks](https://www.youtube.com/watch?v=KfB89Sp6-ck)
2. [MIT Quest for Intelligence Launch: Introduction](https://www.youtube.com/watch?v=cfojlUq0-Hw)
3. [MIT Quest for Intelligence Launch: The Core – Human and Machine Intelligence](https://www.youtube.com/watch?v=zByPOr8q9n8)
4. [MIT Quest for Intelligence Launch: The Future of Intelligence Science](https://www.youtube.com/watch?v=Vem0Ilrbf3c)
5. [MIT Quest for Intelligence Launch: The Future of Intelligence Engineering](https://www.youtube.com/watch?v=tWAC5oeC_gk)
6. [MIT Quest for Intelligence Launch: The Science and Engineering of Intelligence](https://www.youtube.com/watch?v=_WCPnIhqjR4)
7. [MIT Quest for Intelligence Launch: Teaching Machines to See and Hear](https://www.youtube.com/watch?v=va3Qqu1HRQo)
8. [MIT Quest for Intelligence Launch: Thinking Big by Starting Small](https://www.youtube.com/watch?v=KxOeRXlfXAg)
9. [MIT Quest for Intelligence Launch: Building a Social Brain](https://www.youtube.com/watch?v=zHVx1ACnehE)
10. [MIT Quest for Intelligence Launch: Scaling AI the Human Way](https://www.youtube.com/watch?v=K5RNp1SoGOc)
11. [MIT Quest for Intelligence Launch: The Bridge – Applying the Tools of Augmented Intelligence](https://www.youtube.com/watch?v=dTsSbz-sEfY)
12. [MIT Quest for Intelligence Launch: Social and Emotional Intelligence for Human-AI Collaboration](https://www.youtube.com/watch?v=7aYKRkSUHTA)
13. [MIT Quest for Intelligence Launch: Learning to Cure Cancer](https://www.youtube.com/watch?v=424zoKmpZvg)
14. [MIT Quest for Intelligence Launch: How AI Enables the Home to Monitor Our Physical and Mental Health](https://www.youtube.com/watch?v=KHwHxqCBU78)
15. [MIT Quest for Intelligence Launch: AI, Artificial Stupidity, and Financial Markets](https://www.youtube.com/watch?v=yrSHC81kqpw)
16. [MIT Quest for Intelligence Launch: Announcing the 2018 Solve Global Challenges](https://www.youtube.com/watch?v=zMmSgjrkX3s)
17. [MIT Quest for Intelligence Launch: The Impact – Bringing Intelligence to Market](https://www.youtube.com/watch?v=KZP6cepILbw)
18. [MIT Quest for Intelligence Launch: AI-Driven Drug Discovery](https://www.youtube.com/watch?v=aqMRrRS_0JY)
19. [MIT Quest for Intelligence Launch: Engineering Common Sense](https://www.youtube.com/watch?v=Ym3AIyksVdc)
20. [MIT Quest for Intelligence Launch: Featured Innovator](https://www.youtube.com/watch?v=9nJaDK7jKro)
21. [MIT Quest for Intelligence Launch: The Consequences – Intelligence and Society](https://www.youtube.com/watch?v=j_4mnuwhHJo)
22. [MIT Quest for Intelligence Launch: Fireside Chat](https://www.youtube.com/watch?v=1H8WRlHe1wo)
23. [MIT Quest for Intelligence Launch: MIT–IBM Watson AI Lab](https://www.youtube.com/watch?v=oDWmvzhZt4g)
24. [MIT Quest for Intelligence Launch: Student Poster Session Introduction](https://www.youtube.com/watch?v=fCtlSJm_v2o)
25. [MIT Quest for Intelligence Launch: Closing Remarks](https://www.youtube.com/watch?v=72CNg7SJBxs)

## Transcripts

### MIT Quest for Intelligence Launch: Opening Remarks
URL: https://www.youtube.com/watch?v=KfB89Sp6-ck

Transcript not available

---

### MIT Quest for Intelligence Launch: Introduction
URL: https://www.youtube.com/watch?v=cfojlUq0-Hw

Transcript not available

---

### MIT Quest for Intelligence Launch: The Core – Human and Machine Intelligence
URL: https://www.youtube.com/watch?v=zByPOr8q9n8

Transcript not available

---

### MIT Quest for Intelligence Launch: The Future of Intelligence Science
URL: https://www.youtube.com/watch?v=Vem0Ilrbf3c

Transcript not available

---

### MIT Quest for Intelligence Launch: The Future of Intelligence Engineering
URL: https://www.youtube.com/watch?v=tWAC5oeC_gk

Transcript not available

---

### MIT Quest for Intelligence Launch: The Science and Engineering of Intelligence
URL: https://www.youtube.com/watch?v=_WCPnIhqjR4

Transcript not available

---

### MIT Quest for Intelligence Launch: Teaching Machines to See and Hear
URL: https://www.youtube.com/watch?v=va3Qqu1HRQo

Transcript not available

---

### MIT Quest for Intelligence Launch: Thinking Big by Starting Small
URL: https://www.youtube.com/watch?v=KxOeRXlfXAg

Transcript not available

---

### MIT Quest for Intelligence Launch: Building a Social Brain
URL: https://www.youtube.com/watch?v=zHVx1ACnehE

Transcript not available

---

### MIT Quest for Intelligence Launch: Scaling AI the Human Way
URL: https://www.youtube.com/watch?v=K5RNp1SoGOc

Transcript not available

---

### MIT Quest for Intelligence Launch: The Bridge – Applying the Tools of Augmented Intelligence
URL: https://www.youtube.com/watch?v=dTsSbz-sEfY

Transcript not available

---

### MIT Quest for Intelligence Launch: Social and Emotional Intelligence for Human-AI Collaboration
URL: https://www.youtube.com/watch?v=7aYKRkSUHTA

Transcript not available

---

### MIT Quest for Intelligence Launch: Learning to Cure Cancer
URL: https://www.youtube.com/watch?v=424zoKmpZvg

Transcript not available

---

### MIT Quest for Intelligence Launch: How AI Enables the Home to Monitor Our Physical and Mental Health
URL: https://www.youtube.com/watch?v=KHwHxqCBU78

Language: en

Hello, everyone. I would like to tell you today about how we can combine advances in deep learning with advances in sensor technology to reduce the cost of health care and improve outcomes, particularly for chronic disease patients.

Now, one of those chronic disease patients is my aunt, who was just recently rushed to the hospital. She has congestive heart disease. She had an exacerbation. Now, luckily, everything is fine now. She was able to get through it. However, what I want to emphasize is that in the case of chronic diseases in particular, problems don't just happen overnight. They are cumulative. So if, just if, we had the ability to continuously monitor chronic disease patients in the home to detect changes and degradation in their health, we could potentially alert the doctors much earlier. And they could intervene, and many of these hospitalizations could be avoided. And that could dramatically change health care as we know it today. And this is exactly what I'm working on.

And I want you to imagine with me what could happen if-- oh, sorry. Let me just go back again. Imagine with me a home of the future, where the home actually can monitor your health. The home would know your breathing, your heart rate, gait, sleep, and other physiological signals. And if there is a degradation, it would detect it early and would be able to alert your doctor, so that they can intervene and we can avoid the person going to the emergency room.

Now you might be thinking, oh, yeah, this is great, but actually monitoring things at home now is cumbersome and invasive. And, actually, you're right. So, today, if you want to monitor breathing, for example, you would need a nasal probe or chest band. And if you want to monitor falls, you have to ask the person to wear [INAUDIBLE] push button, and they have to push it after they fall. So they have to be conscious enough to push it. And not only this: if you want to monitor motion-- say, for Parkinson's disease or multiple sclerosis-- then you have to ask people to wear these sensors on their legs. And that's definitely not the way to live. And finally, if you ever did a sleep study, you know that they would put these sensors on your head-- EEG sensors-- and they ask you to sleep with it. Now, this is not a happy picture. It's definitely cumbersome and invasive.

But I want you to imagine if now I tell you we can monitor all of these things-- all of them and more-- without any sensor on the person's body. This is exactly what we do in my group. We have invented this smart Wi-Fi box that sits in the background of the home and monitors health. And it uses all these advances in AI to analyze the wireless signal without any sensor on the body. It can predict and detect breathing, heartbeats, gait, and falls, and even sleep.

Now, I see many here looking, oh, rolling your eyes. Like, how could you possibly monitor a person without any sensors on the body? Well, actually, all of you sitting here, you are sitting in a sea of wireless signals-- Wi-Fi, cellular, you name it. Every single move that you make-- you just lift your arm like this-- changes the electromagnetic waves around you. And this box is smart. It's just going to sit in the background, using new models from deep learning that are developed particularly for analyzing electromagnetic waves. And it's going to know what actually happened-- you lifted your arm. You took a breath. And without any sensor on the body.

So let me show you a video to illustrate this. Say this is a home. The wireless signals, of course, propagate in the home and they reflect off our bodies, because our bodies are full of water. And they come back to our smart device, which analyzes them using our own machine learning algorithms. And here, in this case, it detects a fall and can alert the caregiver via text, phone, or email.

I want to show you a few examples from the lab. So, in this example, we are going to monitor this person as he moves. But as you can see, the device is not even in the same room. So imagine if somebody is trying to monitor me from another room, from behind the wall, because we know wireless signals can traverse walls. So the device is in the office adjacent to where he's standing, behind this wall. And this red dot here on the side is where the device thinks that he's standing. I'm going to play this video for you, and look at how the red dot is going to follow him. So you can see the red dot is following him pretty accurately. And remember, all of this without any sensor on his body, purely based on how his body affects the electromagnetic waves around him.

Now, not only can we follow his motion, but if he falls, we can detect the fall, because this blue line is actually following his elevation. As you can see, it's detecting a fall as his elevation went to the floor level.
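The fall-detection idea just described (a fall registers when the tracked elevation drops to floor level) can be sketched in a few lines. Everything here is illustrative: the thresholds, the sample rate, and the simulated elevation trace are all invented, and the real system infers elevation from RF reflections with learned models rather than receiving it directly.

```python
FLOOR_LEVEL_M = 0.3      # below this height the body is considered at floor level
STANDING_LEVEL_M = 1.0   # above this height the body is considered upright
MAX_DROP_SECONDS = 1.0   # a fall is a drop to the floor within about a second

def detect_fall(elevations, sample_rate_hz):
    """Return the sample index at which a fall is detected, or None.

    elevations: estimated height of the body (meters), one value per sample.
    """
    window = max(1, int(MAX_DROP_SECONDS * sample_rate_hz))
    for i in range(window, len(elevations)):
        was_standing = elevations[i - window] > STANDING_LEVEL_M
        now_on_floor = elevations[i] < FLOOR_LEVEL_M
        if was_standing and now_on_floor:
            return i
    return None

# Simulated trace at 4 Hz: standing (~1.5 m), then a sudden drop to the floor.
trace = [1.5] * 10 + [1.2, 0.8, 0.4, 0.1] + [0.1] * 5
fall_index = detect_fall(trace, sample_rate_hz=4)  # detects the drop at index 13
```

Requiring the drop to happen within a short window is what separates a fall from slowly sitting or lying down, which lowers elevation over several seconds.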
Now, of course, detecting falls is great, but we want to be able to predict falls. And it turns out that the gait, and the gait speed, of the person that we're monitoring is actually a very good predictor of fall risk. So not only can we detect a fall, but we can actually understand the risk of falling and how it's changing for various people. And, of course, gait speed is a very important metric for Parkinson's disease and multiple sclerosis and other motion-related diseases. But guess what? It turns out that gait is actually a very good predictor for exacerbation in congestive heart disease, like my aunt's exacerbation, and even also pulmonary diseases. And today, the doctor measures gait when the patient goes to the hospital. They have a stopwatch. They ask them to walk in front of them and they measure that. So imagine if we have this 24/7 at home, and are able to detect and predict accordingly what can affect this patient.

Not only this-- sleep. Sleep, of course, is very important. And this device is able to track the person. So it's going to be able to see the person as they walk, get into the bed, when they stop tossing around in bed, and they go to sleep. So that is a measurement of sleep that is based on what we call [INAUDIBLE]. But there is even something that is more important in sleep, which is sleep stages. So when you go to sleep, your brain waves change, and you enter different stages of sleep-- awake, light sleep, deep sleep, and REM-- rapid eye movement. And each one of these sleep stages is associated with a different physiological function. For example, we dream during REM, and we have memory consolidation in deep sleep. So, of course, knowing these stages is very important for sleep disorders. But we heard a lot, for example, about depression. So guess what? Actually, REM is very important for depression. When people have depression, typically what happens is REM happens early in sleep and the pattern of REM is disturbed. And in fact, what many medications really do is they affect the REM. They control the REM. And also, deep sleep is very important for Alzheimer's, because there is a connection between slow waves in the brain and, as I said, memory consolidation and the deposition of beta amyloid in the brain. So it's very important to monitor sleep staging.

Today, if you want to monitor sleep stages, you go to a sleep lab. They put all of these sensors on the person, and they ask him to sleep like this.

[LAUGHTER]

My student was not happy, as you can tell. But now, imagine if I told you we can monitor exactly the same thing, but in the person's own bedroom, without any sensors. So here's my other student-- the lucky one. [INAUDIBLE] device, the wireless signal reflects back. We have a machine-learning algorithm internally in the box, and it spits out his sleep stages throughout the night. And the accuracy is comparable to putting EEG sensors on the person's head and having a technician manually go through the signal to detect your sleep stages.

So we can monitor many things. We can also monitor breathing. So this guy is sitting there and he's reading. But what you see on the side-- the signal going up and down-- is nothing but his inhale-exhale motion. And we asked him to hold his breath. You see the signal stays at a steady level because he inhaled; he did not exhale. He's going to hold his breath for 30 seconds. Don't try this.

So let me zoom in on the signal. This is the same signal-- the breathing signal. So what you see here, these are the inhales, and these are the exhales. But these blips on the signal that look like noise, they're actually not noise. They are his heartbeats, beat by beat. And all of that without putting anything on his body, purely based on the electromagnetic waves and analyzing them using these advanced algorithms.
So, typically, when I do this, people ask me what happens if there are multiple people in the environment. And here, where we have the device, again, behind the wall, monitoring the breathing of this person, another person enters, you see, and the device recognizes that there is a new person now. Zoom in on this person. And now it's monitoring the breathing of both guys.

So I want to end by saying that the future of health care can be completely different, particularly for chronic disease patients, by having homes that can understand our health metrics, using technologies that are completely passive, that would disappear into the background, so that we can live our lives normally. That can dramatically change health care costs, because it would be able to track our health metrics and detect and predict health emergencies. But most importantly, it can reduce the number of hospitalizations for chronic disease patients. And that can have a lot of impact on my life and my family's life. And it makes me very happy to be doing this research. So thank you very much.

[APPLAUSE]

---

### MIT Quest for Intelligence Launch: AI, Artificial Stupidity, and Financial Markets
URL: https://www.youtube.com/watch?v=yrSHC81kqpw

Transcript not available

---

### MIT Quest for Intelligence Launch: Announcing the 2018 Solve Global Challenges
URL: https://www.youtube.com/watch?v=zMmSgjrkX3s

Transcript not available

---

### MIT Quest for Intelligence Launch: The Impact – Bringing Intelligence to Market
URL: https://www.youtube.com/watch?v=KZP6cepILbw

Transcript not available

---

### MIT Quest for Intelligence Launch: AI-Driven Drug Discovery
URL: https://www.youtube.com/watch?v=aqMRrRS_0JY

Transcript not available

---

### MIT Quest for Intelligence Launch: Engineering Common Sense
URL: https://www.youtube.com/watch?v=Ym3AIyksVdc

Transcript not available

---

### MIT Quest for Intelligence Launch: Featured Innovator
URL: https://www.youtube.com/watch?v=9nJaDK7jKro

Transcript not available

---

### MIT Quest for Intelligence Launch: The Consequences – Intelligence and Society
URL: https://www.youtube.com/watch?v=j_4mnuwhHJo

Language: en

Good afternoon. My name is Melissa Nobles, and I'm the Kenan Sahin Dean of the School of Humanities, Arts, and Social Sciences and a faculty member in the Department of Political Science here at MIT. It is my honor to serve as both the session chair and as a participant in this afternoon's panel discussion, The Consequences: Intelligence and Society. Led by our moderator, whom I will introduce in a moment, we are going to explore the human consequences of research on human and machine intelligence.

What does it mean for us to build machines that can think? What are the social, economic, political, artistic, ethical, and spiritual consequences of trying to make what happens in our minds happen in a machine? Who does this machine answer to? How do we ensure that the results of our efforts act as moral agents in society? Answering these questions responsibly means first backing up a little. What does it really mean to think? What is intelligence anyway? Philosophers, and social scientists, and artists have been grappling with these questions for centuries, but today these questions are being asked in a different context. We are on the verge of incorporating incredibly sophisticated tools-- autonomy, addiction, analysis, and sensing-- into devices in environments that are as intimate to our daily experiences as our own clothing. These questions, in other words, are moving very rapidly out of a theoretical or speculative domain. They are headed directly into our lives and how we live them.

Given the variety of perspectives and backgrounds of my fellow panelists, and of the questions that we are about to address, I'm going to stop here. I want to hear what they have to say about where we are going, and what it means to get there. So, in alphabetical order-- and I'm just going to ask them to wave when I say their names-- allow me to introduce the people seated on stage.

Daron Acemoglu is the Elizabeth and James Killian Professor of Economics at MIT and a member of the Institutions, Organizations, and Growth Program at the Canadian Institute for Advanced Research. He earned a BA from the University of York, an MS in Mathematical Economics and Econometrics from the London School of Economics, and a PhD in Economics from the London School of Economics.

Rodney Brooks is the Panasonic Professor of Robotics, emeritus, at MIT, where he was Director of the Computer Science and Artificial Intelligence Laboratory until 2007. He was co-founder, Chief Technology Officer, and chairman of iRobot, and is currently the founder, Chief Technology Officer, and chairman of Rethink Robotics.

Dario Gil is Vice President of AI and Quantum Computing at IBM. He is responsible for IBM's global artificial intelligence research efforts and their quantum computing program. He has also helped launch, and co-chairs, the MIT-IBM Watson AI Lab. Dario earned an SM and PhD in Electrical Engineering and Computer Science from MIT.

Joi Ito is the Director of the MIT Media Lab. Recognized for his work as an activist, entrepreneur, venture capitalist, and advocate of emerging democracy, privacy, and internet freedom, Joi is currently exploring how radical new approaches to science and technology can transform society.

Our moderator, Gideon Lichfield, has been the Editor-in-Chief of MIT Technology Review since December 2017. He spent 16 years at The Economist as a science and technology writer, and in 2012 became one of the founding editors of Quartz. Gideon has taught journalism at New York University and has been a fellow at Data & Society, a research institute devoted to studying the social impacts of technology.

Finally, Megan Smith served as the third United States Chief Technology Officer and Assistant to the President under President Obama. Prior to her White House role, Megan was a vice president at Google, leading new business development for nine years, and later serving as the vice president on the leadership team at Google X. She recently started a new company, shift7. She earned an SB and SM in Mechanical Engineering from MIT and is a member of the MIT Corporation and the MIT Media Lab visiting committee.

Please join me in welcoming our panelists.
Thank you very much, Melissa. As Melissa told us, we are going to examine some small questions today in the 45 minutes or so that remain to us. And there is enough intellectual firepower on this stage to sink a battleship, let alone a journalist. So I'm going to try my best to stay out of the way and guide this discussion loosely. But, on the whole, this should be a conversation between these brilliant people about the questions of technology and AI that Melissa outlined.

In the morning sessions we heard a lot about the benefits of AI and machine intelligence, and how they can be used to benefit society. So we heard about social robots that help children to learn; about using algorithms to predict cancer and also to cure it; about predicting falls when elderly people are moving around by themselves-- the risk of them falling, detecting when they fall, using Wi-Fi signals for that; using Wi-Fi signals also to analyze sleep patterns; and creating algorithms to build personalized investment schemes for people so they don't get suckered by people like Bernie Madoff. So there are some wonderful, wonderful things that machine intelligence can do for us. But of course, like any very powerful technology, it has its dark side and its potential for abuse.

I think that this audience is probably familiar with a lot of the social issues and risks around AI, so I'm just going to do very, very high-level, brief thumbnail sketches of some of the main ones, and then turn it over to the panel for the discussion.

So one of the most familiar items, of course, is jobs: the future of work, the effects of automation, and what that will do to our economies.

Algorithmic bias is a topic that has gotten a lot of attention recently: the ways in which the algorithms we build and, more importantly, the data that they are trained on may encode biases that already existed in our society and that then just get reproduced. So, for example, the algorithms that give people criminal risk scores, on the risk of them re-offending, turn out to have biases that give inaccurate scores more often to African-Americans than to white Americans. The issue of differential pricing of ads on Facebook recently got a lot of attention-- the ways in which pricing mechanisms that are invisible to us most of the time can have all sorts of political consequences.

The power of the tech giants. A handful of very large companies, chiefly in the US and increasingly in China, have the dominant power over much of the data that is being collected and over the algorithms that that data is used for. And that means that a handful of these companies are effectively creating frameworks and dictating policy that affects the whole of the world. What do we do about the power of those tech giants?

There is unequal access to technology. One of the issues in which this arises was highlighted in a recent report on precision medicine by Data & Society, an institute where I was a fellow. The issue there being that we are increasingly developing forms of medical treatment, algorithmically based, that can create personalized treatments for you based on data analysis. But the problem is, of course, that those treatments are much more likely to be accessible to people in certain areas-- people who are technologically savvy, who are health literate-- and, if those treatments are developed on the basis of data from those sorts of people, then the data they contain will be biased, and they may work better for some people than for others.

The use of surveillance and other, let's just say, creepy technologies that are enabled by AI. The generation of fake news, the ability to manipulate media, the inadvertent echo chambers that arise from our use of social media. The use of machine intelligence for criminal ends, such as the recent report by the Open AI Foundation which looked at how AI may be used to make things like phishing scams operate on a much larger scale.

And then, finally, there is an issue that faces a lot of us: device addiction. Our devices and the websites and apps that we use are already designed to capture as much of our attention as possible. AI makes them even better at that, and even harder for us to control how we use our time.

So these are familiar issues, I think, to most people. It feels as if they've gotten more attention recently. There have been books like Cathy O'Neil's Weapons of Math Destruction, Virginia Eubanks' Automating Inequality, and Safiya Noble's Algorithms of Oppression. There are NGOs such as the Center for Humane Technology, which deals specifically with this issue of device addiction. There are technology ethics courses now appearing at MIT and Harvard, at Stanford, at NYU, and elsewhere. So it seems as if there is increasing recognition of a lot of these issues, at least in public, in academia, in research, and among activists.

So my first question to you as members of the panel is this: you're well-connected in industry and in policy circles. What are the conversations that are happening there? How are these issues percolating into there, and what sorts of solutions are people beginning to discuss for some of these issues? Anyone who wants to jump in.
Anyone who wants to jump in.
Can I do one framing question?
Because I think there's the fear
and risk of super intelligence,
and a lot of the presentations
are the future of AI
which is generally what we call
whatever we can't yet explain.
And there's stupid
intelligence which, they just
take automation--
Actually Norbert Wiener, who
was a famous MIT mathematician,
in his book The Human
Use of Human Beings,
he calls institutions
"machines of flesh and blood."
And so the idea that automation,
and any rule, any bureaucracy
is a form of automation, and
if you look at the markets
they're not under
control, and they
have certain
evolutionary systems
that cause injustices and harm.
And we have trouble
regulating and controlling
these complex, self-adaptive
systems, like MIT,
like the markets.
And so some of the problems of
automation aren't actually new.
And I think there's
also this framing
of AI as this other thing.
I know Rodney used networks
connecting to each other,
and at the media
lab we use the term
extended intelligence
rather than
an artificial intelligence.
So I think there are
some new problems,
but there are a lot of
good, old-fashioned problems
which are just complex,
self-adaptive systems that
are evolving in a sort of
uncontrolled and harmful way.
And people like Donella
Meadows and Jay Forrester
modeled these complex
systems and were
trying to suggest how
we might intervene.
But those problems are
actually quite similar.
So I think there's some really
interesting new problems,
and some of the
reinforcing biases
through algorithms also are new.
But the fact is that the
systemic biases that exist
exist because of these
automated systems that
have reinforced biases.
So again, I just want
to caution that it's not
all brand new stuff.
It's just that we have booster
rockets on these systems
causing them to be
harder to control.
I think Joi makes a really good point, which is that now we're going to take them exponential. So you said they might introduce bias-- they already are introducing extraordinary bias across the board. And so the systems that we live in, these human systems, they accelerate some people. I think of the captains of Silicon Valley-- if you think of Mark Zuckerberg, or Jeff Bezos, or different people. They, plus the system, created them. And others who were like them but who didn't look like that-- maybe they're a different gender, race, or age, or whatever-- they got decelerated by the system. And now we're going to add this at an exponential level. Right?

So one of my favorite ones-- we don't have visuals, but polygraph.cool shows you who speaks in film. And it turns out that in children's television it's 15 to 1 boy programmers to girl programmers who are cast. So we teach children propaganda: that boys do this and girls don't. And if you look at the visuals, it's 80 to 20, or even worse, of who speaks. So do we want to train on that data set? So that men are going to speak and women aren't? That's not the world we want today.

So it's a really interesting time of trying to be more mindful. By the way, it's so exciting that MIT is doing this. And I think that our opportunity as very thoughtful, cross-functional folks is to really step into the world. I love MIT Solve. It's reaching out to really think through what we hope for, and to put our values, as well as what we already do, into the system, and really pay mindful attention to that as we step into this design. And even then it's going to be incredibly hard.
So if, as Joi says, this is an acceleration of stuff that has been going on for a while: is there anything that we've learned already from the past-- I don't know, 30 years, or whatever time frame you want to put on it-- about what works as a way to prevent biases from creeping in and getting stuck in the system?
I think that one element of it, or a contrast to the reality we face today with some of the modern machine learning techniques, is the degree to which processes or programs we created could be inspected by others, and be understood by others. So to the extent that, let's say, you were doing a credit scoring program, or deciding who should get credit: to the extent that you were writing a set of rules, and you were using a protected variable that you shouldn't have used and that was illegal to use, and you had a process--

A protected variable being race, for instance.

Race, or gender, or something. And as a result of somebody inspecting the program, you would see: you can't do this. Right? This is illegal, and so on. Now if somebody just takes past data, historical data that you've had, and trains a simple machine-learning model to be able to do this classification, that becomes a bit more opaque. Then how does it happen? How do you debug it? Are you doing something illegal? Are you doing something that's not right? So that is a difference from past behaviors, but the prior data that was being used as an example to train and validate may have been biased in both cases.
Can I jump in for one second here? So if you look at-- and I would want to be careful speaking in front of a professional-- but if you look at the history of racism, you have slavery, Jim Crow, and now you have this colorblind racism. And similarly, you can't use race in risk scores, but you have all these proxies for race. And even if we identify the proxies, the machine will find them. So there was a Spotlight article from last year that showed that the median asset ownership of African-Americans in Boston is $8, and of white people is $247,000. So if you have an accurate prediction of who you want to give mortgages to, you're just going to reinforce it.
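The $8 vs. $247,000 point can be made concrete with a deliberately tiny sketch: a "race-blind" lending rule that looks only at assets still splits its approvals exactly along group lines, because the proxy feature carries the group information. The threshold and the rule itself are invented for illustration; only the two median figures come from the discussion above.

```python
def approve_mortgage(assets_usd):
    # A purely "race-blind" rule that looks only at assets (threshold invented).
    return assets_usd >= 10_000

# Median household assets by group, as quoted in the discussion above.
median_assets = {"group_black": 8, "group_white": 247_000}

decisions = {group: approve_mortgage(assets) for group, assets in median_assets.items()}
# The race-blind rule reproduces the group disparity without ever seeing race.
```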
And so this is one of the things. This ties to the point that you made about Julia Angwin's thing. So if you're trying to get a very accurate assessment of recidivism, you're going to get unfair false positives. And if you tune for fair false positives, you get inaccurate recidivism rates. And the other thing is that the input to the recidivism rate is actually arrests. And arrests correlate more with policing than they do with crime. Most homicides are not actually turned into convictions. And so crime doesn't correlate with arrests, but arrests are the input. And then arrests are correlated with low-income neighborhoods, and so on.
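The chain of reasoning above-- arrests are the training signal, and arrests track policing intensity more than they track crime-- can be illustrated with a toy simulation. This is not any panelist's model: the stand-in "risk score" simply flags anyone with a prior arrest, and all the rates below are invented, but it shows how two groups with identical reoffense behavior end up with very different false positive rates.

```python
import random

random.seed(0)

def simulate(group_size, reoffend_rate, policing_intensity):
    """Return (reoffends, flagged_high_risk) pairs for one simulated group.

    The stand-in "risk score" flags anyone with a prior arrest, and prior
    arrests here depend on policing intensity, not on behavior.
    """
    people = []
    for _ in range(group_size):
        reoffends = random.random() < reoffend_rate
        prior_arrest = random.random() < policing_intensity
        people.append((reoffends, prior_arrest))
    return people

def false_positive_rate(people):
    # Fraction of non-reoffenders who were flagged as high risk anyway.
    false_positives = sum(1 for reoffends, flagged in people if flagged and not reoffends)
    negatives = sum(1 for reoffends, _ in people if not reoffends)
    return false_positives / negatives

# Identical true reoffense rates; only the policing intensity differs.
group_a = simulate(10_000, reoffend_rate=0.3, policing_intensity=0.6)
group_b = simulate(10_000, reoffend_rate=0.3, policing_intensity=0.2)

fpr_a = false_positive_rate(group_a)  # close to 0.6
fpr_b = false_positive_rate(group_b)  # close to 0.2
```

The predictor is "accurate" against its arrest-based labels, yet innocent members of the heavily policed group are flagged roughly three times as often, which is the transparency trap described next.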
So the tricky part about transparency is that the machine will figure out its way to continue to disadvantage disadvantaged people. And this is the point that we're trying to make at the Media Lab: we worship prediction and accuracy, but prediction and accuracy aren't fair when the system is unfair. And so what you want to do is go for causal inference-- to look for the underlying causes and try to treat those, rather than more accurately doing criminal justice, and prediction, and policing. And so I think that's the problem: we already have a system that's broken, and if we just look for clean, pure, transparent accuracy, it's still going to be unfair.

So that is a way, then, of doing what Megan was suggesting-- infusing the values that we want into the algorithms-- by changing what the goal of the algorithm is, instead of trying to predict more accurately?

It should look for the causal factors.
One thing is, it's March 1st, which is what I call Women's Missing History Month.
So we had, you guys did
this incredible work
in the Humanities Department
with MIT and slavery,
and thank you Rafael
and leadership of MIT
and your leadership,
Melissa, to do that.
It's notable to
mention Ida B. Wells.
How many people in this audience have ever heard of Ida B. Wells?
She's one of the greatest
American data scientists
in the history of our country.
And so in the late 1800s she used data to measure what we were doing, and really slowed us, and stopped us from lynching people.
She was really a hashtag-Black-Lives-Matter data scientist, and an extremely talented journalist.
But the reality of
what happened to her,
if you know your history, is
she and Frederick Douglass
protested the
Chicago World's Fair.
I love Tesla, and Edison, and their fight. And that's so known from the fair, right? Tesla demoed, and I'm from Buffalo, where he used Niagara Falls to light Buffalo. But he got to demo, and because of all that, all of us know his name.
Frederick and Ida protested the fair because African-American people were not allowed to demonstrate all of their inventions, and art, and everything.
And so the truth never got channeled into the truth.
And how would it be for us at MIT, if Ida had been able to do her incredible data science work on justice? Would there be presenters on this stage who'd been building for a century on her work, on machine learning now around data science and justice, the way that you are?
And Ida would be
known to all of us,
and it would be a
different truth.
You know, my view on this, thinking about it as the Dean of the School of Humanities, Arts, and Social Sciences, is that all of this great knowledge that is being created still needs to be rooted in the old stuff.
Right?
So we're not-- in
a way it further
reinforces the
need for knowledge
about political science,
history, literature
even as we begin to do all
these incredible technological
changes, in part because
we don't want to amplify,
as you all are saying,
all of the inequalities.
And we still have just basic
work that needs to be done.
And students need, as
an educational matter,
as part of our
educational mission,
need to be able to understand
and contextualize these things.
And that can only come through.
So this is music to my ears.
I didn't think being on this
panel I'd be hearing this
and making me so happy.
But when you say students--
I imagine all these
kids are going
to take these classes now.
Yes.
And so you're
saying specifically
engineering technology
students, they
need the historical, and
political, and everything else
context.
Right.
Precisely right.
Even more than ever,
in certain ways,
this maybe will do
the work that we've
been trying to do for
years which is to reinforce
the importance of it.
But it may be now that
it is indispensable.
That in order to really do
this you need to know it.
And I have two
philosophy students
in my AI and Ethics class.
Right.
Exactly right.
It goes both ways.
Right.
Exactly right, of course.
And Tim Cook's
commencement address
specifically addressed this
to the MIT students last year.
Correct.
Saying don't leave the room.
You've got your
engineering issues,
but when we get to privacy, the humanities, social science, engineering, science, all come together.
The universe doesn't
separate the subjects,
but we ring bells between them.
And we need to stop doing that.
So you're saying it's incumbent
on institutions like MIT
to make sure everyone
gets a rounded education.
Is it doing that?
Yes, I think so.
Yes, we are.
Of course we are.
But we can always do it better.
And so it's
precisely these kinds
of efforts, which are
across the Institute, which
will ensure that that happens.
All right.
I'd like to move over to the subject of the labor market and automation and jobs, because it was one of the biggest topics in our discussions before this panel; Ron raised this issue.
Which is can we talk
about what is helping?
What's working
and helping people
adapt to the changing
labor market?
And when can
machine intelligence
actually help people and
when does it hinder them?
And what do we do about that?
What we've seen,
for instance, is
I think there was a
study recently showing
that simply retraining people
who've been displaced from jobs
doesn't necessarily help.
In fact, I think in one study
the people who were retrained
did worse at finding new work
than the people who had not
been retrained.
I forget the exact context.
So what are the circumstances in which machine intelligence is helping people overcome work displacement, if at all?
I think it's very apt that this initiative is called intelligence, because I think that's what we need more of. So, looking at it from one simple perspective, I think what we haven't had over the last 30 years is enough machine intelligence, and enough human intelligence to go with the machine intelligence.
I don't think it's possible to
create a healthy society that
doesn't have jobs for people,
both for livelihood reasons
and for their roles as social
and economic actors in society.
But it's also a reality that
many, many technologies, not
just the recent AI robotics,
many, many technologies
displace people.
A lot of what technology
tries to do is reduce costs,
and you do that by finding
better ways and cheaper
ways of doing it.
And mostly substitute
machines for things
that people did before.
So that's displacement.
What happens after displacement?
Does that mean that
technological change
always reduces employment,
always reduces jobs and wages?
No.
Because there is a very powerful
countervailing mechanism,
which is that when
you reduce costs then
you also create jobs
in other domains.
You create jobs because the
sector in which costs are being
reduced expands, you
produce more cars,
but also other sectors expand.
They try to start doing
complementary tasks,
complementary activities.
So when I look at it from this perspective, the threat to jobs is not what you sometimes read in the newspapers,
that those are brilliant
technologies that
are going to do everything that
humans are doing at the moment.
They are so-so technologies.
They are technologies that are
just enough to displace people,
but they don't really
reduce costs all that much.
And as a result the
displacement effect is there.
We get people out of the task
that they were performing.
But we don't generate
all of that dynamism.
And if you look at the data,
and we can debate the data,
some people say it's
mis-measurement, et cetera.
We've never had it so bad
for the last 90 years,
in terms of productivity growth.
This is the Dark Age
of productivity growth.
So all of these
technological improvements,
they're not really translating
into productivity growth
either at the sectoral level
or at the national level.
So what's wrong?
And I think there are three
things that are wrong.
One is that machines
are not actually
as intelligent as
we think they are.
And part of that is
because they're not really
going after, perhaps,
the big thing in terms
of economic returns.
You could improve the
cat recognition algorithm
on the internet,
but how much GDP are
you going to get out of that?
The second is we're
not exploiting
the machine-human intelligence.
And the way to do that is
to create complementarity
that comes both from the machine
side and from the human side.
We've been here before.
The British
Industrial Revolution,
which is also a landmark of machines replacing humans.
We had 80 years
of no wage growth.
And wage growth
finally came when
we started changing the whole system: the education system, the institutions of representation, redistribution, and all of these things.
And those are all missing in the way that we do things now, and we are not generating enough of the machine-human intelligence dimension.
And finally, I think
we're also not helping,
and again the British Industrial
Revolution is apt here,
because we're not sharing
the gains, whatever they are,
and they're not
super high relative
to the 1960s or the '70s, but
they're still not trivial,
we're not sharing
them equally enough.
And that creates even more
tensions and even more
problems in terms of building
the human-machine intelligence
of the future, the
productivity gains, and also
the social order which is going
to support all of these things.
Right.
Yes, Rodney?
I'd like to talk about
job displacement.
Unlike you, I'm
not an economist.
So I'm not constrained
by data [LAUGHTER]
in what I talk about here.
So the machine intelligence we have is very narrow.
We saw that this morning.
The videos of the young children
doing incredible social things
and inferring all sorts
of stuff and thinking
at a very young age.
We don't have any of that in our machine intelligence.
And I think a lot
of what gets labeled
as artificial
intelligence and job
displacement from
intelligence is really
about the fact that we now have
digital channels
throughout our society, and we
exploit those digital channels,
and that's how
stuff gets deployed.
But there are
certain places where
we are not going to have
digital channels anytime soon.
And I think that is going to mean a big change to a lot of jobs.
And in particular we don't
have any capability for robots
to be able to interact
closely with people.
So we will be able to
sense when someone falls,
as we saw this morning.
We'll be able to monitor when
something is going to go wrong.
But there's an
incredible mega-trend
of a demographic
inversion where there's
going to be lots and
lots of older people.
They will need physical help when they fall,
or when something
bad has happened,
or just to maintain
their dignity
and independence in the house.
And in the US, throughout
the history of the US,
from pre-colonial
days we have relied
on low-cost immigrant labor.
In the early days
it wasn't immigrants
who chose to come here.
But that was where
we had labor from.
More recently it's
been people who've
chosen to come,
but get exploited.
Low-cost labor.
We've decided that's a bad idea.
We don't want any
more of those people,
and it just doesn't work in the world in general
because everyone is going
to be getting older.
So what I see happening is machines will take all the transactional tasks that people have done as good jobs, and what will be left will be a real demand for physical stuff.
Helping old people poop.
Helping old people
into and out of bed.
That's going to be like teaching: we're not going to value it very well in our society.
So there will be this
incredible demand
for what's going to be
viewed as low cost labor
with no respect for it.
And I think we're going to have to restructure.
I think I'm the oldest one here.
I'm going to be not
able to get out of bed,
and have no one to help me.
And by the time
you guys get there,
it's going to be much worse.
So I'm not an economist
either, but doesn't the market
solve that demand?
If there is that demand, won't
the price of that kind of work
rise?
It may work, but it certainly hasn't solved it in the case of teaching in schools in this country.
But now they're going to get a
bonus for being able to shoot.
I think there are two
separate points here.
On the first point
of the importance of,
what Rod is saying,
about bringing
robots and various forms of
more digital intelligence,
artificial intelligence to
perform a broad range of tasks.
100%.
But actually on the
second one, that's
both a blessing and a curse.
If you look at the US economy
over the last 30 years,
where have we created jobs?
We haven't created
industrial jobs.
We haven't even created a
lot of engineering jobs.
They've come, a lot of them, at the bottom, the very bottom of the distribution: the bottom 25th percentile of occupations in terms of wages.
And that's for two reasons.
First of all, a lot of
people have lost their jobs
in the middle of the
wage distribution because
of automation, and they've
been pushed into that part.
And secondly, because
those are the jobs that,
exactly like Rod
says, we haven't
found ways of automating them.
But that's both a
blessing and a curse.
If we have no other
jobs, those are better than nothing, and we've actually
seen some wage growth
at the bottom of
the distribution
as a result of that.
But I think the real promise
of machine-human intelligence
is to go beyond that.
Create jobs that are going to be both higher paying and more pleasant, more satisfying for people.
And I think that's quite possible if we take a step back and think about how we can better use these technologies for creating jobs.
Can I ask you a question?
You use the word GDP,
and productivity,
and I have a
nine-month-old child.
And it feels like a service job, but that wouldn't go into GDP, right?
And I think if I'm
a street musician,
I'm not contributing to GDP.
But I think if I break
a window I do, right?
So I'm a little
bit curious about--
If you marry your gardener
it goes into the--
Yeah, so I guess I'm
a little bit concerned
about using financial metrics
from an industrial age
of making and building things
when most of the stuff that I
think is important in
society, like spending
time with my child, isn't
calculated as productivity
gain.
Whereas if I could
spend more time
with my child and less
time cleaning windows,
it feels like a good thing.
So I think one of the
questions that I have is,
is there another way to
measure things like--
because I also believe
that a lot of the problems with these machines of flesh and blood is that, since the Reagan period, we've more and more become a short-term, financial-returns-oriented society, which I think is aggravating the evolution of these systems.
And so I wonder
if there is a way
to redefine how we measure
things so that maybe we
can change-- and this is the
Donella Meadows point, which
is when you want to
intervene in a complex system
you change the
paradigm, rather than
fiddling with the parameters.
Of course I mean, I
totally agree with that.
But of course what
you don't want to say
is we want to change
the metric because we
don't like what the previous
metric is giving us.
But GDP was never meant to
be a measure of welfare.
It's not a measure of welfare.
We could create a lot of GDP and
destroy the environment, which
we've been doing, and that
would be a terrible thing.
But GDP is very
good for measuring
what it does, and especially
at the sectoral level.
So that's why I mentioned
sector and aggregate.
So at the aggregate level, a lot of the US economy is services, and in services we do a terrible job of measuring quality.
But for manufacturing
we know these things.
That's much better.
We have price indices.
It's in manufacturing that we don't see the productivity improvements.
We see all of these
digital technologies,
even robotics, a lot of
the numerically controlled
machines, spread over
the last 40 years
and there's very little
productivity improvement
in US manufacturing.
And it's true in
other countries, also.
So one of the things I've
been thinking a lot about
is how do we drive for creative
confidence in everybody?
Right?
And so how do we
get to a point where
people are opting
themselves into the thing
that they're
passionate about doing?
I have my Computer Science for All T-shirt on, which is from some stuff that we did in the White House.
How about all children
in America learn coding?
And not because everyone will be a coder; we teach freshman biology not because everyone's going to be a biologist, but because these are basic literacies of the 21st century.
But they also teach confidence.
And you build things.
Design thinking.
Having those experiences
is really important.
I think if you
talk to each of us
we came into our career because
of somebody and something
that tapped our passion.
And so how do we
tap the passion?
So I've spent the last while doing many different kinds of projects, but one of them was a tech jobs tour.
I've been to 25 different cities
across Appalachia, Milwaukee,
Birmingham, Cleveland,
everywhere and just
getting Americans
to meet each other.
Because there are techie,
creatively confident folks
in every town.
In Boise there are 15 tech meetups, with 800 people in one of them, and no one knows them.
People already know how to fish in town, and they know where the great fish are. So how do you get neighbors to meet each other? And live for their own cities? And do what they would do?
So I've been witnessing that all
around and seeing that happen.
And I think part of
the future of work
is including everyone
and creating structures
so people can fix it
themselves with the thing
that they would love
to do, and solve.
And including the
vision that you just
had about spending time with
family, being with each other,
or being in the creative
arts, or whatever that is.
What an opportunity we have: how would we like it to be?
But isn't there an
assumption there
that you can just sort of teach
creative confidence to anybody
and they will pick it up?
I think we on this panel, the
people probably in this room,
we're privileged and lucky
not least in the fact
that something in us allows
us to attain that confidence.
Maybe it's been taught to us
or shown to us by somebody.
But nonetheless
we're in a position
that we're able to
take advantage of that
and then go out and do it.
I don't think that's
true of everybody.
I think some people have grown
up in really hard conditions.
Some people are just
innately, incredibly shy,
or incredibly timid.
And I fear that
what you're saying
means that it creates a new elite class of those who are confident, or have innate confidence, over those who are not.
But you can define it very specifically; you can define confidence in different ways.
And we can also make
the system adapt.
When Malala Yousafzai
was attacked,
I went and helped create
the Malala Fund because I
want Malala to lead us.
Because she knows much
more than I'll ever know.
And having worked
with her for a while
and been all over the
world with young people,
everyone is talented.
Everyone is talented.
And I think your
point of the diversity
of how we bring that talent,
how we want to do that, we need
to adapt the system for that.
But everyone can bring
what they would bring.
We have to allow people
to work on the things
that they would love to work on.
Yeah.
I'd like to pick
up on your point.
Does anyone here have
one of these things?
Did any of you have to be trained in a six-month community college course on how to use it? No.
Because what this
does is it teaches you
how to use it as you use it.
And I think as technologists
we should be thinking about how
we--
Because I agree.
Every person who's able
to have any sort of job
is smarter than any AI
system we have today.
So how do we bring out the intelligence in them by building the machines so they don't have to have training, or don't have to have confidence, so that very unconfident people can use these?
But how the technology
brings out the best in them
I think is something
that Silicon Valley knows
how to do in many ways.
But the motivations for a high return, sorry, motivations for fantastic returns, don't direct it everywhere it needs to be directed.
Connected to that it has to
be the notion of purpose.
What are we going to
use the technology for?
I think the current system, even in the context of the AI discussion, is very, very skewed towards the consumer domain, or a consuming orientation.
As opposed to an
orientation of professions,
and your livelihood.
And to the degree that we can create a technology, it has a different requirement, by the way, than when you have very massive amounts of data and you can amortize the labeling effort across a billion consumers.
For most people whether
you have a small business
or for large enterprises,
the amount of data available
is very different.
The requirements associated with how you're going to use the information, or the need for security, or the need for regulation, vary depending on the profession.
By and large, the community at large is not sufficiently focused on the application of AI to that world.
To the world of
our livelihoods, as
opposed to the world
of our personal time.
And one of the
things I would hope
that can happen within
the IQ initiative
is that we also shift that
talent and orientation
towards the complementarity
that you were alluding to that
will ultimately result in
the increase in productivity.
How will humans and
machines collaborate?
But that has intent
and has purpose.
If I can add something.
I think Megan mentioned
a very important c-word,
confidence is important,
but creativity.
I think the thing that
machines will not have,
certainly not in my
lifetime in my assessment,
is true creativity.
But that's where the domain
of the complementarity
between machines and humans lie.
And moreover, machines
have the promise
of increasing the potential
for that creativity.
At the end of the 19th century
over 60% of the US labor
force was in agriculture.
The conditions were
extremely harsh.
The schooling available
to people was very low.
As soon as they could
exert physical effort,
they were put in the field.
The same thing, actually, in the '70s and the 1980s: when there was an economic boom in the region, you saw people drop out of high school and go take manufacturing jobs.
It's different when you have machines, as the mechanization of agriculture showed in the agricultural domain, and now with robots and other technologies in manufacturing.
There is greater room for people to actually develop their creativity, and a lot of the application of digital technology will give people more ability to pursue their own interests and their own comparative advantage.
So that actually prepares the
possibilities for creativity
if we develop it the right way.
But again, I'm not sure whether
the current applications
of machine learning,
the current applications
of these technologies is
going in that direction.
In fact, I would say it's
not going in that direction.
And that's why we
do need to step back
and think about machine
intelligence differently.
And to be clear, you're talking
about encouraging people
to develop their creativity
so that they can find
new employment opportunities.
So that they can--
And more satisfying employment.
--be who they want
to be in the world.
So I call it
creative confidence.
Hashtag creative
confidence, let's go.
OK, CC, even better than this.
But to your point, when
we were in the White House
and we did a lot of the
work President Obama
asked us to around AI we did
town halls all over the country
with different universities.
And the main conclusion was
that we don't have enough AI.
We need more AI, much
broader, and many more topics.
And I love that MIT's
doing MIT In the World
For a Better World.
But what type of AI?
We can have much more of
the Facebook type of AI?
Or touch recognition AI?
That's not going to
help with that problem.
Exactly.
So it's the type, and it's the
perspective change, perhaps.
So broadening that, and getting a much broader set of people working on it on the creative side, developing it, with voices from all the different domains.
OK, on an issue that's tangentially related to this, and I don't want you to talk, I just want you to vote.
Universal basic
income: yes or no?
I don't know.
I love experiments.
I think broadly, no but
worth experimenting.
Yeah.
Yeah, not yet.
I love experiments,
too, but this is
going in the wrong direction.
The problem is creating jobs, so you want to use redistribution for that.
Definitely stronger social
safety net, more welfare state,
but use that, just like negative
income tax or retraining
programs, in order to
get people into jobs
and not encourage them
to stay out of it.
All right, quick audience poll--
Yeah, no, I agree, too.
I mean, I'm the
political scientist.
I agree with the economist.
[LAUGHTER]
You agree with the economist.
OK.
Quick audience poll.
Universal basic income.
Raise your hands if you
think it's a good idea.
Hmm.
What about experiments?
Do you like experimenting?
Everybody loves experiments.
Raise your hands if you
think UBI is not a good idea.
OK.
This is basically an audience
that has no opinions.
[LAUGHTER]
It was half and half.
So this also leads us into another question, which I think comes up a lot, which is, what is the role?
Essentially whose
responsibility is it
to make some of these
solutions happen?
In particular what's
the role of government?
We see in particular in things
like the issues of automation
and the gig economy that our
labor laws are out of date
for dealing with
this new situation,
that our anti-trust
laws are out of date
for dealing with the power
of big tech companies,
that our privacy
laws are out of date
for dealing with the situation
when basically everybody's
data is out everywhere and
nobody really has privacy
anymore.
So does it feel to
you as if things
are moving faster than before
and that government is now
fundamentally unable to keep up?
Or is there
something that can be
done to keep law and
regulation more in line
with technological change?
I wouldn't say it's
not able to keep up
but it's not keeping up.
So two points.
On the economic
concentration, absolutely.
At the time of the Gilded Age
when people were up in arms
about the trusts, the
largest five corporations
made up less than 6% of US GDP.
Today, the largest five corporations are 17%, in terms of their market capitalization.
But we choose to do
nothing about it.
The second thing is, Silicon Valley has a lot to recommend it in terms of its values,
in terms of its freedom,
and rebellious nature.
But there is also this libertarian view.
Everything comes from
the business world.
And I think that's
just not true.
If you look at the history
of technology in the United
States, or anywhere,
it's the interaction
between what the government and
what the private sector does.
We did not create all
of the technologies that
were transformative because
the market by itself did it.
It was an interaction.
So I think it's the
same thing today.
None of the things that we're
saying, and all of the promise
and the creativity
of the tech world,
doesn't mean that the
government shouldn't
play a role in channeling
it in a different direction.
I think in tech this may not be clear, but think of the same thing in the area of energy.
If we let the market do energy, we would end up with many more gas guzzlers and more coal power plants. Fortunately that's not what the world is doing, because to some small degree we're encouraging clean technology.
So it's the same thing.
That the return for
society of all technologies
are not the same.
And to add to that, the issue is compounded by the degree to which citizens don't have confidence in government, right?
So they're losing trust in the ability of government to do its job, thereby making Silicon Valley and other non-government venues of decision-making more attractive.
But they are not
democratic, right?
I mean many of these things
happen in Silicon Valley
and such are anti-majoritarian.
A small number of people
making huge decisions
that have big effects.
So we certainly want government involved. But it's clear that it isn't yet, and it has basic governance functions in which we are losing confidence.
And we have a world that's
changing so quickly.
We need our government
to do more, and it's not.
So in a sense, the political
crisis that we are witnessing,
it has many ramifications.
Not merely with the
day to day, but how
we deal with the future.
And that's, as a
political scientist,
I find it quite troubling.
That brings me back, I guess, to the question I asked at the beginning, which is: in the light of this declining trust in government, and the increasing incapacity of government to deal with some of these issues, what are the conversations that people are hearing in the tech industry about the industry's responsibility, or need, to address some of these issues more actively itself?
Well I think, first of all, just to tie things together a little bit, I think it's interesting to
contrast with, say Germany,
which is a mostly functioning
democracy, and America,
which is a dysfunctional
democracy, in my view.
And the way they're applying it.
So autonomous vehicles.
We did this trolley problem
thing with tens of millions
of responses.
And overall people thought
that an autonomous vehicle
should sacrifice a passenger if
it's going to save more lives.
But they would
never buy that car.
But everyone else should.
[LAUGHTER]
So this is clearly, the market's
not going to fix that problem.
And interestingly, Germany has
passed the autonomous vehicle
directive, or set of guidelines,
which is really interesting.
So the question to me is this. Silicon Valley being libertarian is somewhat part of the problem, but broadly I think people don't trust government here, and in some other countries.
And I think we can
look to the countries
where you have
functional democracies
to see how they are
starting to grapple
with some of these questions
in Europe and other places.
And I think we need to
fix democracy quickly
to get to these problems because
the market's not going to solve
some of these social problems.
There's also this. I had the chance, the honor, to serve the American people and be in government.
One of the things I noticed is that someone has solved just about everything somewhere, and no one knows.
And so we should use these network channels you're talking about to better share, as a community of practice, whether it's neighbors, like in this tech jobs idea, or a thing we created called Data-Driven Justice, because we found things in criminal justice reform.
Miami went from 7,000 people in
prison to 4,900, closed a jail
and saved $12 million.
How did you do that, right?
And they had opened a
12-bed stabilization unit,
they were routing people with
substance abuse and mental
health disorders in a different way.
And so, great solution.
Camden, similarly, looking at enterprise data, found that 205 people cost them 7% of their money, and started acting differently.
We were able to get
a couple of cities
willing to listen to
that, adapt it, copy it,
and then we now have 129 cities who talk to each other every two weeks in a community of practice, like an open-source team might.
So we can use these networks
to much more rapidly share
what's working in these areas.
And so on the specifics
of government,
I think most government
is, as you described,
super dysfunctional
all over the world.
But there are beautiful pockets,
Estonia being the most digital,
but different places
where people really
have solved things.
And if we are
better at sharing we
could really fix a
lot of things faster.
So what is the tech
industry doing about it?
I think they're waking
up in some ways.
I think there's many
things that happen.
I think that people, that the beautiful stuff is, we're a bunch of engineers and scientists. We try to do good. MIT is minds, and hands, and heart. We do this because of service.
We do it because of wonder.
We do it because of community.
And then stuff got weaponized.
Right?
The great things that
everybody made got weaponized.
And people had to wake up.
Or they find they are addicting
people, and so now we
have to face that reality,
look in the mirror.
So that's going on.
But it's not just--
Very serious.
--the industry.
Going back to the
first thing, you really
need the intelligence, both from
the humans and the machines.
Our education
system is crumbling,
and that's a political
problem, because the reason why
it's very hard to fix
the education system
is not just that we
don't know what works,
but that there
will be a lot of resistance
from all sorts of
quarters: from parents,
from teachers, from
political forces.
So this is where the
democracy that we need to fix
is actually much
harder because it
needs to tackle all of these
sweeping challenges that
are cross-cutting and
that's not an easy thing.
So I think if we
could solve these problems it's
a very bright world, but
the problems we're facing
are quite daunting
at some level.
Melissa, do you see
any promising signs
in terms of the ability of--
people's ability
to start solving
these problems of democracy?
Well, I think, at least with what's
happening here in the US,
we can begin to look at
the states, because
part of the benefit
of federalism
is that it allows for
experimentation
at the state level
and the local level.
And to my mind that's
where a lot of the energy
is going to be coming from.
And then eventually it may make
its way up to the federal level.
But I think a lot
of Americans now
are deciding, I'm just
going to work and do
what I can in my local
space and see what works,
and see if the federal government
takes advantage of that.
We've gone through periods like
this in our country's history.
So I assume we'll get out of it,
but I'm looking to the states
now.
I unfortunately have to
draw this to a close.
We've gotten a little
bit tight on time,
so we won't have time
for audience questions,
but I want to thank our panel
for a really fascinating,
albeit brief, discussion.

---

### MIT Quest for Intelligence Launch: Fireside Chat
URL: https://www.youtube.com/watch?v=1H8WRlHe1wo

Transcript not available

---

### MIT Quest for Intelligence Launch: MIT–IBM Watson AI Lab
URL: https://www.youtube.com/watch?v=oDWmvzhZt4g

Transcript not available

---

### MIT Quest for Intelligence Launch: Student Poster Session Introduction
URL: https://www.youtube.com/watch?v=fCtlSJm_v2o

Transcript not available

---

### MIT Quest for Intelligence Launch: Closing Remarks
URL: https://www.youtube.com/watch?v=72CNg7SJBxs

Transcript not available

---

