# MIT Stephen A. Schwarzman College of Computing

Date: 11-01-2025 21:45:50

## Video List

1. [Understanding neural networks](https://www.youtube.com/watch?v=jT5nYLND7co)
2. [The heart of the matter](https://www.youtube.com/watch?v=UZcdAZErPUY)
3. [Seeing, believing, and computing](https://www.youtube.com/watch?v=TvfAKnyiPik)
4. [Telling stories with machines](https://www.youtube.com/watch?v=tutxIYnzQ6g)
5. [A question of trust](https://www.youtube.com/watch?v=tliyKmxEsw8)
6. [Borderline](https://www.youtube.com/watch?v=yA6sr7_x6hQ)
7. [The Future of Higher Education in the Age of Disruption](https://www.youtube.com/watch?v=NFP2S2f3io4)
8. [Teaching Computing in Arts and Humanities](https://www.youtube.com/watch?v=hoZGR3BEkoo)
9. [Teaching Ethics and Policy in Computer Science](https://www.youtube.com/watch?v=Kc2WbkDpQQc)
10. [Teaching Computing in Science and Engineering](https://www.youtube.com/watch?v=Lb5bMw0AEGI)
11. [Computing is for Everyone](https://www.youtube.com/watch?v=Du5rlleQyk8)
12. [Experiences with CS+X Majors and Curricula](https://www.youtube.com/watch?v=fyato3wXsN4)
13. [Teaching Computer Science to All](https://www.youtube.com/watch?v=N8HdEbAVLP4)
14. [Welcome to CELEBRATE: The College!](https://www.youtube.com/watch?v=0NLFItC-4dA)
15. [Computing: Reflections and the Path Forward](https://www.youtube.com/watch?v=vCyZUiBr_ds)
16. [Computing at the Crossroads: Intersections of Research and Education](https://www.youtube.com/watch?v=Qoghqz5MfNI)
17. [Computing for the Marketplace: Entrepreneurship and AI](https://www.youtube.com/watch?v=XsGyLueFAGk)
18. [SOLVE 2019 Announcement](https://www.youtube.com/watch?v=wx3nfcoDsKg)
19. [Computing the Future: Setting New Directions (Part 1)](https://www.youtube.com/watch?v=9f2l1033lNo)
20. [Computing the Future: Setting New Directions (Part 2)](https://www.youtube.com/watch?v=Bq1y57pVya4)
21. [Creating a College: A Conversation](https://www.youtube.com/watch?v=K1C2f5kpi-E)
22. [How the Enlightenment Ends: A Conversation](https://www.youtube.com/watch?v=nTyIeMuUavo)
23. [Computing for the People: Ethics and AI](https://www.youtube.com/watch?v=Sm7I4QjscVQ)
24. [MIT Schwarzman Celebrate Closing](https://www.youtube.com/watch?v=ofXvS95CzQQ)
25. [MIT reshapes itself to shape the future](https://www.youtube.com/watch?v=nzt18BIAt3g)
26. [Creating a College: Stephen A. Schwarzman](https://www.youtube.com/watch?v=Nfcxxn0NZTc)
27. [The Future of Computing: MIT President L. Rafael Reif](https://www.youtube.com/watch?v=P9H90JnF_28)

## Transcripts

### Understanding neural networks
URL: https://www.youtube.com/watch?v=jT5nYLND7co

Transcript not available

---

### The heart of the matter
URL: https://www.youtube.com/watch?v=UZcdAZErPUY

Transcript not available

---

### Seeing, believing, and computing
URL: https://www.youtube.com/watch?v=TvfAKnyiPik

Language: en

[music]
My name is Garrett Souza.
I'm a Course 6-3 computer science major.
I'm doing a project analyzing the effect of visual
media on implicit biases.
I'm involved in the fashion magazine on campus.
I'm a very visual learner and I've been more
and more interested in how people are actually
perceiving the images I take and also how
I'm consuming images.
I think nowadays we have Instagram and Facebook
and we're seeing thousands of photos a week,
and I was wondering, "How is that affecting
me implicitly?" "How has looking at twenty
images portraying someone that looks like
them in a certain way impacted how
I then interact with that person in real life?"
My guess is yes,
but that's what I really wanted to study.
Affective computing is really geared towards
how people can naturally engage with computers.
Any device that can measure some sort of affect
or emotional response.
So we use cameras that can detect micro-expressions
in the face,
that can signal happiness, sadness, fear.
And then there's also wrist sensors that can
detect electrodermal activity
that will signal stress and anxiety.
And then also eye tracking, so where are you
looking on a screen?
What parts of an image are specifically piquing
your interest?
Elections, for example: when a news outlet
is reporting on a specific political candidate,
what images are they choosing to broadcast
of that candidate's position, and how are those
images biased by the news outlet's own
preferences?
And then how is that impacting the millions
of people that are watching?
The ideal outcome of this project is to have
a quantitative analysis of,
"If this is how the media media you 
consume on a day-to-day basis,
this is actually impacting you."
I think that would lead to a lot of people being
more cognizant of what they're consuming,
and also what people are producing.
My hope is that this computational thinking
is used keeping in mind
the implications of technologies that are being created,
keeping in mind the biases within the new approaches.
It's trained on faces; that's how we generate
neural networks.
And if those faces are all white or Caucasian,
it's so much worse at detecting the faces of darker-skinned people.
How is that impacting the neural networks
of face-tracking software that
the government is using in web-cameras across our nation?
Are there biases present in that?
It's a really satisfying feeling when you talk
to someone and you fundamentally believe that
they, not that they agree with you or anything,
but that they just understand you.
And that they aren't misconstruing your words,
because even language is so hard.
It's so hard to actually say what you're trying
to say.
"An image is worth a thousand words", or whatever.
It's such a cliché statement, but I think
it has some merit, in that visuals make it
so much easier to convey a broad, robust
array of things that you can interpret.
It's really hard to go through life
if you feel like you're not really being seen.
And I think that's where the desire for creativity
comes in.
At least for me.
[music]
[music and background chatter]

---

### Telling stories with machines
URL: https://www.youtube.com/watch?v=tutxIYnzQ6g

Language: en

[music]
You know, sometimes I think in Spanish, and
sometimes I think in English, which I think,
even though Spanish, for example, has different
grammatical structures than English, we still
break things down in the same way.
And you still think about things in the same
way.
And you know, storytelling is really universal.
My name is Jennifer Madiedo.
I am from South Florida, coming from a Colombian
family.
I'm currently a senior studying 6-3, which
is computer science, with a minor in music.
[music]
They say that music is the language for everyone,
because no matter what language you speak,
you understand music.
And I've come to realize computers, and computer
science are very similar.
[music]
This idea of how important computers are and
how we can write these languages that can
make them do really incredible things spans
way more than the languages we speak
or where we come from.
[music]
My SuperUROP project came about from a class
I took by Professor Winston,
who's actually my supervisor.
He believes that all humans learn through
stories, and through storytelling,
and all our interactions are us telling each other
stories and then internally processing
through our own inner storytelling.
I'm very interested in how people can be so
swayed in one direction or another.
And so my project focused around persuasion.
Genesis, the storytelling program that I use
reads a story and processes it internally,
and so it can understand relations between
characters.
We're trying to almost make it self-aware.
You can ask it questions, for example, and
it'll tell you why it thinks certain things.
And you can tell it, "I want to put a positive
spin, for example, on this character."
One of the best ways I have of doing that,
which is actually one that I see used a lot in politics,
is relating two people to each other.
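A toy sketch of that idea (illustrative only, not Genesis's actual machinery; every name and rule below is hypothetical): represent a story as relation triples between characters, answer a "why" question by citing earlier events, and build a positive spin from favorably viewed relations.

```python
# Toy illustration only -- Genesis's actual story representation is far richer.
# A story as (subject, relation, object) triples between characters.
story = [
    ("Macbeth", "murders", "Duncan"),
    ("Macduff", "avenges", "Duncan"),
    ("Macduff", "kills", "Macbeth"),
]

# Hypothetical sentiment a reader attaches to each relation.
relation_sentiment = {"murders": -1, "avenges": +1, "kills": -1}

def why(subject, relation, obj):
    """Answer a 'why' question by citing the events that precede this one."""
    index = story.index((subject, relation, obj))
    return story[:index]

def positive_spin(character):
    """Spin a character positively by citing their favorably viewed actions."""
    return [t for t in story if t[0] == character and relation_sentiment[t[1]] > 0]

print(why("Macduff", "kills", "Macbeth"))  # earlier triples offered as reasons
print(positive_spin("Macduff"))            # [('Macduff', 'avenges', 'Duncan')]
```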
What I would hope this program could do is
that it could be used for important things
like maybe relating messaging to a certain
population or to a certain group of people,
so that they feel more connected to it, and
they feel that it is more impactful to them.
[music]
I was so excited that this new initiative
is called, "The College of Computing",
and not of "Computer Science", because I feel
like you don't need to be a computer scientist
to be able to work in this area.
Computers and just the capabilities that they
bring are not limited to computer science.
They impact every single field in the world.
They're impacting medicine, they're impacting
the humanities, you know,
they're impacting business, computer science.
[music and background talking]
I just think it's like such an exciting time
to be involved in it.
Even if you don't feel like you're a computer
scientist, like, we want you to get involved
just so you can see what you can make of the field
and how you can use it in the things you're working with.
[music]

---

### A question of trust
URL: https://www.youtube.com/watch?v=tliyKmxEsw8

Language: en

These things really scare me.
It's kind of terrifying right?
The saying used to be that seeing is believing, and we're entering a world where that isn't true.
So if people can't trust their eyes and
ears then it becomes a question of what can you trust?
[music]
My name is Moin Nadeem.
I'm currently majoring in Course 6-3
which is electrical engineering and computer science
and my minor is currently in negotiations and leadership under Course 11.
[music and background talking]
I'm working on detecting fake news
with Professor Jim Glass
underneath the Spoken Language Systems Group.
[music and background talking]
The ability to spread misinformation is so much easier than the ability to combat it.
So this is why I became really really interested in algorithmic ways to solve misinformation.
[music]
My area of research is really document retrieval
and what I'm working on specifically is,
"How you understand the evidence through which you can say that this is fake or true?"
[music]
I use deep neural networks to essentially
learn what the content of this vector looks like,
and then I use how similar
one vector is to another to assess
which documents are best suited for
these claims.
I've downloaded all of Wikipedia locally
and that is my source of truth.
And then I say, "Given that someone wants to know about this, can you rank all the documents for me?"
And from there you can understand which claims are true and which claims are false
because now you
know which evidence you should be looking at,
and you know how they agree
with respect to the claim.
Right now we're depending on Wikipedia as our source of truth.
Once we've proven that the basic idea works
we can take this exact same methodology, and now we can start using Google search instead of Wikipedia.
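A minimal sketch of this retrieve-and-rank idea, using TF-IDF vectors in place of the learned neural embeddings described above, and a three-sentence placeholder corpus standing in for the local Wikipedia dump:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder corpus standing in for a local dump of Wikipedia.
documents = [
    "The Eiffel Tower is located in Paris and was completed in 1889.",
    "The Great Wall of China is visible from low Earth orbit only with aid.",
    "Paris is the capital and most populous city of France.",
]
claim = "The Eiffel Tower is in Paris."

# Embed documents and the claim into the same vector space (TF-IDF here;
# the project described above learns these vectors with a neural network).
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)
claim_vector = vectorizer.transform([claim])

# Rank documents by similarity to the claim: the top hits are the evidence
# you would then check for agreement or disagreement with the claim.
scores = cosine_similarity(claim_vector, doc_vectors).ravel()
for rank, i in enumerate(scores.argsort()[::-1], start=1):
    print(f"{rank}. score={scores[i]:.3f}  {documents[i]}")
```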
So ideally this would become as useful as an ad would, right?
Whenever you load a page it's right there at the top saying,
"These claims are false and this piece of news has a dangerously high rate of things that are false in it."
[music and background talking]
Long term I want to work on a lot of
things like this
where we can use computer science to solve social problems.
[music and background talking]
Something I really hold true is: no matter how good our algorithm is
and no matter how much we advance the state-of-the-art in computer science
the important thing to keep in mind is
technology is not the uniform solution
to everything.
In an age where misinformation is actually gonna be like the key player in most political games
we also need social norms that will help
us understand that
the truth is something we value,
that it's something we should
continue to prioritize.
[music]
 

---

### Borderline
URL: https://www.youtube.com/watch?v=yA6sr7_x6hQ

Transcript not available

---

### The Future of Higher Education in the Age of Disruption
URL: https://www.youtube.com/watch?v=NFP2S2f3io4

Language: en

[APPLAUSE]
So when MIT decided to commit
to creating the MIT Stephen A.
Schwarzman College of
Computing, it was clear to us
that this was what
we needed to do.
What was less clear to us was
exactly how we should do it.
In a somewhat
uncharacteristic fashion
for an academic
institution, we basically
decided that we should
simply declare our intentions
to establish this
college, then set
about the business of
creating the college together
as a community.
In that context, it seemed
entirely appropriate
that we convene
an event like this
to discuss how to
teach computer science,
not only amongst ourselves,
but with experts and friends
from other institutions.
We also recently created
five working groups
that are contemplating various
aspects of the College.
Many of the members of
those working groups
are here to listen
and learn today.
And I'm confident that
what we learn today
will, indeed, shape our
framing of the College.
I want to thank the
organizers for this event,
in particular Saman, Asu, and
Sanjay, who worked diligently
to pull together what looks
like an extraordinarily
exciting program.
And also the speakers
that we'll hear
from today, both our
colleagues from MIT
and other institutions.
At the risk of repeating
some things that many of you
have heard me say, let me offer
my reflections on the College
to set a context for
today's discussion.
A little over a year
ago, President Reif
launched us on a
campus conversation
about what we should
do about computing.
In some respects, we are
facing enormous challenges.
And one example of
that was an explosion
of interest and an inefficient
allocation of resources.
40% of MIT undergraduates
are majoring either
in computer science or
computer science combined
with another degree at MIT.
And yet, only 7% of the MIT
faculty are computer science
faculty appointed in the
EECS department.
That's a tremendous imbalance
in allocation of resources.
In addition, and very
importantly, we're
hearing from all
corners of the Institute
a narrative that basically
ran along the following
lines, which is that my field--
insert the blank--
is being transformed
by modern computational methods.
In fact, I was at a dinner
just last night upstairs,
a meeting of the
visiting committee
for the political science
department, and spoke to one
of our colleagues
who's using large data
sets scraped from public
records about legislation
and lobbying and applying
natural language processing
to identify how individual
corporations are influencing
the development of
legislation, a fundamentally
different approach to a
political science challenge.
That story repeats
all over campus.
And what our colleagues
in these disciplines
were saying to us as we
had this conversation
was that they needed GPUs.
They needed access to
software professionals.
They needed engagement from our
computer science colleagues,
both to think
about what are some
of the more advanced algorithms
that could facilitate
their work, but also to assist
in developing curricular
offerings for students
in their discipline
that needed to
master these skills.
Lastly, we saw everyone
feeling, particularly
in the climate we're
in, an intense need
to think more holistically
about the societal impact
of the technologies
they were developing,
and to think about that
before it was deployed,
and thus appropriately
shape the deployment.
Amongst this sea
of need as we were
going through this
conversation, what we realized
was an enormous opportunity.
As is true for many of
the institutions that
are present in this
auditorium today,
we at MIT are blessed with
an extraordinarily remarkable
computer science
talent pool.
And we have an opportunity
to invest in that talent pool
to advance the fundamental
work that's being done.
But in addition, we realize that
if we could build a structure,
the college would strengthen the
links between computer science
and the departments that
want to leverage computing,
not only to the advantage
of those departments,
but so that what we learn
through those linkages
would certainly feed back into
the research and teaching we
do in computer science.
And if we could do that, we
would seize a really
profound opportunity while
addressing the challenges
we were feeling.
And lastly, in
creating something new,
we have a clean sheet of paper.
And we have the opportunity to think,
I believe creatively,
about how we integrate a comprehension
of the societal impact
of the technologies
that will emerge
from MIT and this college into
the education and research
agenda.
I'm not suggesting
that this will be easy.
In fact, it's going
to be quite hard.
But I can't think of a more
transformative opportunity
for this institution.
And that makes me get up
every morning very excited
about the potential
of the college.
So that's the aspiration for
our college and the journey
we've embarked upon.
I look forward to hearing
from today's remarks
that are going to help us
point in the right direction.
So with that, let me now
introduce our keynote speaker
Dr. Farnam Jahanian.
Farnam was appointed the 10th
president of Carnegie Mellon
University in March of 2018.
He's a nationally recognized
computer scientist,
entrepreneur, public servant,
and higher education leader.
And in that regard,
we're extraordinarily
fortunate that he would
take time to join us today.
He first joined CMU as vice
president for research in 2014,
and later assumed
the role of provost
and chief academic officer
from May 2015 to June 2017.
In July 2017, he
stepped up at CMU
to serve as interim president
before, in the infinite wisdom
of their board, they tapped
him to be the president.
Prior to coming to CMU, he led
the National Science Foundation
directorate for computer
and information science
and engineering
from 2011 to 2014.
And before serving in NSF,
he was the Edward S. Davidson
collegiate professor at
the University of Michigan,
where he served as chair
for computer science
and engineering
from 2007 to 2011,
and as director of
the Software Systems
Laboratory from 1997 to 2000.
In addition to his academic
and government work,
he co-founded in 2001 the
internet security company
Arbor Networks, where
he served as chairman
until its acquisition in 2010.
He holds a PhD in computer
science from the University
of Texas at Austin.
Hook 'em, horns.
He's a fellow of the ACM,
the IEEE, and the AAAS.
On a personal note,
I would say that I've
come to know him very well
through a form of what
I consider to be group therapy,
which is when collections
of provosts get together.
In that setting, I've really--
can sincerely say that
I've benefited tremendously
from his advice,
wisdom, and friendship.
And today Farnam is
going to address us
on the future of
higher education
in the age of disruption.
Please join me in extending
a warm welcome to Farnam.
[APPLAUSE]
Good morning, and it's good
to be with you this morning.
And Marty, thank you so much
for that kind introduction.
I should tell Marty that
having served as president
and also as provost, that the provost's
job is the hardest job
on campus, Marty.
I think you know that already.
Once again, thank you very
much for the invitation.
First, on behalf of
Carnegie Mellon University,
I would like to congratulate
the entire MIT community
as you celebrate the launch
of the Stephen A. Schwarzman
College of Computing this week.
I want to especially extend
a warm congratulations
to President Reif for his
extraordinary leadership,
and also to Steve Schwarzman
for his generosity, vision,
and continued
commitment to the future
of higher education and
the economic prosperity
of our country.
Please join me in congratulating
the entire MIT community.
[APPLAUSE]
MIT continues to be a world
class institution that
offers a distinctive
education and cutting edge
research, of course.
And this latest development will
certainly increase its impact
in the changing world.
I'm also grateful, I should
say, to Anant and the organizing
committee for the invitation to
deliver this morning's keynote.
The theme of today is
centered around the importance
of education, and especially
within the context
of the unprecedented
advances that we
have seen in technology.
So this morning,
what I would like
to talk about is the
changing role of higher
education in this
age of disruption,
with a particular focus on
the way computation and data
are underpinning these changes.
To begin, I think
we all recognize,
and as Marty pointed
out, we're in the midst
of a global
transformation that's
catalyzed by rapid acceleration
of digital technologies,
including unprecedented access
to computation and data.
The scale and scope and
pace of these advances
are truly unprecedented
in human history.
In particular-- let me see
if I can find my clicker.
Oh, thank you very much.
To put this in perspective,
if you look at this scale,
we're not only dealing with a
singular technology, but rather
a set of interrelated
breakthroughs.
This dynamic of
interrelated technologies
necessitates cross collaboration
across disciplines.
When you look at
the scope of it,
the impact of these
emerging technologies
is ubiquitous, reaching almost
every sector of our economy
with a wide range of
applications from health care
to finance to
transportation to energy,
manufacturing, and far beyond.
And of course, the pace of it--
I don't need to tell this audience
that the pace of innovation
is accelerating dramatically.
This requires new
strategies for partnership,
not only within a
campus community,
but also across to government,
as well as industry partners.
Let's consider for a
moment just what we
have seen in the last 10 years.
We could have scarcely imagined
that just about 10 years ago.
Imagine a day, if I said to
you, by integrating biomedical,
clinical, and scientific
data we can predict
the onset of diseases, identify
unwanted drug interactions,
automate diagnosis, and deliver
personalized therapeutics.
Imagine a day that by coupling
roadway sensors, traffic cameras,
and individual GPS devices, we
can reduce traffic congestion
and generate significant savings
in time and fuel efficiency.
Imagine a day that by accurately
predicting natural disasters
such as hurricanes
and tornadoes,
we can employ lifesaving
preventative measures that
mitigate their potential impact.
Imagine a day that by using
biometrics and unconstrained
facial recognition techniques
we can correlate disparate data
streams to enhance
public safety.
Imagine a day that by using
autonomous technologies
we can have our
cars drive us safely
and securely without the
danger of-- or at least
mitigating the danger of
traffic accidents caused
by human error.
Just imagine a day
that by cataloging data
from millions of photos and
videos posted on social media
from conflict areas
we can move rapidly
to investigate and understand
human impact of conflicts,
disasters, and
political violence.
And finally, imagine a
day that by integrating
emerging technologies such as
AI enabled learning techniques
and inverted classrooms
we can achieve
personalized
outcome-based education.
Now, all of these applications
and advances I talked about,
they're not science fiction.
In fact, this audience
can attest to that,
every one of these
scenarios is possible.
At least to some extent, in some
cases, we can do this today.
And that's been as
a result of advances
that we've seen in
science and technology
over the last couple of decades.
In fact, if we
step back and look
at what's happening as a result
of the unprecedented emerging
technologies, we see that
they're catalyzed, in fact,
by three major trends.
One, obviously, is
an enormous expansion
that we've seen in our
computation, storage,
and connectivity.
Again, at the same
time, what we're seeing
is that exponential growth
in power and reduction
in cost of computation,
storage, and bandwidth just
to consider that
there's essentially
a supercomputer in
everyone's pocket.
And it's always on and
it's always connected.
The second trend, of course,
has to do with digitization,
data explosion, and
advances that we're
seeing in machine learning.
We're in a period,
of course, that's
called a period of
data and information
that's enabled by experimental
methods, observational studies,
scientific instruments,
email, video, images,
click streams,
internet transactions,
and so on and so on.
Data represents, of course,
a transformative currency.
It's a new currency for
science, for engineering,
for commerce, and education.
And it's transforming
almost every business
model and industry.
It's also accelerating
the pace of discovery
in almost every
field of inquiry.
And finally, the third
major trend that we've
seen over the last decade--
or 20 years, I should say--
is the ubiquitous deployment
of sensors that has enabled
smart systems all around
us-- complemented, of course,
with advances that we're seeing
in automation and robotics.
Bottom line is that we're deeply
integrating computation, data,
and control into physical
systems, and a melding
of the cyber
and physical worlds
has become a reality.
The truth is that the digital
innovation that we're seeing
is not just additive.
The combination of
those is leading to advances
that are exponential in nature.
In fact, the often major
gap between milestones
that we have seen
in the past has
been reduced in recent decades.
And the breakthroughs
that we see
are coming to fruition in
a matter of sometimes years
and months.
Today, in some cases,
computational technologies
are outstripping,
essentially, the performance
of even the most
experienced human beings.
Consider, for example,
advances we've
seen in speech recognition,
in computer vision,
in facial recognition,
in robotic surgery.
And in other cases,
they're augmenting--
and that's the beauty of it--
our cognitive and our
physical capabilities,
as we're seeing
in medical diagnosis,
in financial market analysis,
in recommendation systems,
and the list goes on.
In fact, the future
is even brighter
than I described to you.
We're now facing a future
in which the impossible
seems very achievable.
We're only a few years away
from groundbreaking discoveries
that are potentially
going to transform
our system of health care
and our understanding
of the human brain.
Just to give you an example of it.
We're working toward,
for example, a greater
understanding of new brain areas
and kinds of synaptic changes
that occur during learning,
disease states, and treatment
conditions.
Overall, these type
of technologies
are poised to transform
our entire health care
system to go from something that
was very reactive and episodic
to a health care system that's
much, much more proactive,
is evidence based, and
focuses on quality of life.
We can also envision, for
example, much smarter cities
and connected cities.
By 2050, it's estimated that
2/3 of the world's population--
it's projected to be
about 9.7 billion--
will live in urban areas.
Just imagine that we
can transform our cities
through, essentially,
introduction and integration
of technologies.
But it's going to require us
not just to bring scientists
and engineers
together, but you're
going to have to
bring public policy
and you're going to have private
and public partnerships that
are finding innovative
solutions to transform
our cities and urban areas.
And of course, when you
look at, for example,
the area of global
decision making,
there are now approaches that
bring layers of global data
into interactive
visual systems that
are going to allow us
to better understand
environmental and
population changes.
And when it comes
to deforestation,
when it comes to
refugee flows, sea level
rises, surface water changes,
pandemics, urban growth,
and so on and so on.
And of course the
area of transportation
is completely being transformed.
So what I'm sharing
with you is something
that I think, to a large
extent, the academic community--
and as Marty mentioned, the
computer science community--
has recognized.
Computational and data
intensive approaches
are underpinning our economic
prosperity and global security.
They're accelerating the pace
of discovery and innovation
across nearly all
fields of inquiry
and are crucial to achieving
our major societal priorities.
I think there is broad
recognition that this is happening.
Of course, technological
innovations
have always disrupted the
status quo and underpinned
dynamic economic changes.
Today, however, as
I mentioned earlier,
the scale, the scope,
the pace-- and the impact--
are unprecedented.
And it's disrupting many
markets and industries.
Adoption is happening at
breakthrough speed and scale.
And of course, we're
seeing an acceleration
of the economic impact.
And the society
and its structure,
including our
education system, must
adapt to this new paradigm.
In fact, I should
step back and tell you
that while in this
country we have enjoyed
having the gold standard
for higher education, which
is a model for the rest of the
world to copy, if you will,
throughout our
history, every period
of significant
technological change
has been met with corresponding
waves of innovation
in education.
In fact, if you think back
100 years or more: land-grant
universities to expand access--
consider German-style
university ideas that
were brought to this country.
Consider, for example,
the Carnegie unit
for standardization of higher
education, the California
Master Plan.
And in fact, I would go
as far as saying that many
universities in this country
that are leading our higher
education-- including
Cornell, Johns Hopkins, MIT,
University of Chicago, in fact--
have run experiments.
And some of them
actually were created
as part of an experiment
to deal with changes
that we see in technology and
the transformation of higher
education that we have seen
throughout our history.
So the current
environment that we're in,
I would argue that we're at the
cusp of the next transformation
of higher education.
Are we at a tipping point?
I'm not sure.
But we're probably
very close to it.
As I mentioned, there's
unprecedented pace
of societal changes due
to the advances that we
see in technology.
There's, of course,
greater pressure
on higher education as
the engine of progress
in a knowledge-based
economy, and many
of our higher academic
institutions in this country
are at the center
of that, of course.
And of course, we're
seeing this shift
from industrial, somewhat
transactional model
of education that's based on
tradition and rigid pathways
to a much, much more
personalized outcome-based
model of education.
In fact, there have been a number
of studies in recent years
that have looked at the
impact of technology
on education, on the nature of
workforce, on business models,
on income inequities, and so on.
In fact, one of those
highly influential reports
was a report by your
colleague, Erik,
from MIT and my
colleague Tom from CMU
who co-authored and co-led
this national report that
was commissioned by the
National Academies of Science,
Engineering, and Medicine.
And this report, which was
titled Information Technology
and the US Workforce:
Where Are We
and Where Do We Go
from Here?, argued
that recent advances in
computing and communication
technologies have
had and will continue
to have a profound
impact on society,
and will affect almost
every occupation.
This is creating large
economic benefits,
but is also leading
to significant changes
for our workforce.
So looking at this
context, before we examine
how we can incorporate these
changes into our higher
education system, I want
to take a very quick look
at some of the challenges
we face in higher education.
And you'll see the context that
it provides for the discussion
later on today.
One challenge I think
that everyone recognizes
has to do with college
affordability and access.
The second has to do with
increasing demand for college
educated workforce.
But in particular--
as, again, Marty
mentioned-- demand
for students who
have computational and
data-intensive knowledge.
And finally, adaptability as
we see the rise of automation
in the workforce.
So let me spend a
couple of minutes
on each of these topics.
Let me first shock you
by sharing some data.
Let's look a little closer
at access and affordability.
The runaway costs of education
are why so many Americans
are increasingly concerned
about their children's
future in this country.
And there's no doubt that
higher education in this country
has been a pathway
to social mobility.
I think that's been
one of the reasons
that we have benefited
as a society.
There is undoubtedly
also some skepticism
about the value of
higher education.
I'm going to refute
that in a moment.
Consider the fact that aggregate
student debt has tripled since
2006, from about $500 million to
about $1.5 trillion--
that was a T, folks--
in 2018.
[INAUDIBLE]
I'm sorry?
$500 million to $1.5 trillion.
$1.5 trillion, that's right.
I'm sorry. $1.5 trillion,
thank you, in 2018.
It should have been
billion, you're right.
Thank you.
There's a correction
on this slide.
Although, I've shown
this a few times.
Nobody else caught it.
[LAUGHTER]
Maybe that was wishful thinking.
I'm not sure.
But more seriously, consider
the fact that this $1.5 trillion
is actually larger
than the entire credit card
debt of our nation.
That's really staggering.
And, by the way,
every time I show
this slide, that $1.5 trillion
goes up by $100 million.
Every year it's going up
by about $100 million.
The second data point: college
tuition in this country
has risen by about 538%,
compared with the consumer price
index increase of 121%, which
is, again, fairly significant
if you consider that.
In fact, despite
the rising cost,
there is no denying
the kind
of social mobility that
education provides--
this graph breaks down,
essentially, wage trends
over time by education level--
a chasm has opened up.
And it's actually growing
between the best educated
and the least educated
in our country.
Our most educated
citizens have continued
to see their wages rise
robustly since the early '70s.
And I think if you just look at
the salaries of freshmen that
are coming from MIT or CMU
or any other institution
in this room, you can see
that there are six figure
salaries for
undergraduates, for example,
in computer science or
computer engineering.
But our less educated
citizens, on the other hand,
have seen their real income
fall since the early '70s.
In fact, labor
economists predict
that the next wave of disruption
of innovation that we're
going to see that's going
to have an impact on higher
education is going
to further grow
the inequality that
has placed strain
on our national politics.
Let me build on
that for a moment.
The other data point
that's driving the urgency
of our conversation--
and in fact, the initiative
that MIT has taken--
is the increasing demand and
relevance for a college degree.
There was a Georgetown
study a couple of years
ago that showed that the
number of people with at least
some post-secondary credentials
has increased by about 1%
a year, but the demand
for these workers
is growing by about
2% a year.
For much of
the 20th century,
the supply of college-educated
workers
kept up with the demand.
But for the past
three decades or so,
the supply has not kept up.
In fact, today, while the
job market is churning
and the future is
constantly evolving
and we hear all this sort
of concerns about automation
displacing workers,
there's also one thing
that's patently clear--
this is a future that needs
higher education more than ever
before.
Let's talk about
the issue of demand.
To understand this point,
consider, for example, this
result that a couple of studies
have pointed to, which I'm
going to share with you:
65% of students
entering elementary school now
will one day work in jobs
that do not exist today.
Think about that.
Actually, that shouldn't
be as surprising to us,
because if you consider,
over the last 10 years,
all the kinds of jobs
that have been created
as a result of advances in
technology-- jobs that
didn't even exist 10 years ago.
It shouldn't be surprising
to us that a five-
or six-year-old--
most of them, in fact--
will have jobs that have
not been invented yet.
I often share this
other data point,
which is almost trivial, but
somewhat surprising to people.
A student that comes
to MIT, an 18-year-old,
or to Carnegie Mellon, after he
or she graduates in four years
will be in the workforce
for the next 40 to 50 years.
Think about it.
40 to 50 years.
Imagine the kind of
education and foundation
we have to give these students,
the next generation, that
will enable them to thrive
in an economy for the next 40
or 50 years, given the
context of some of the data
that I've shown you.
So there are two
forces, of course,
driving this future of work.
One-- and MIT is in the
middle of all of this,
of course-- has to do
with autonomy and
the digital revolution,
which, of course,
many talk about as displacing
blue-collar workers performing
routine jobs.
But the truth is
that it also changes
the nature of work for
white collar workers
in a knowledge-based economy.
And in fact, there are
estimates that, for example,
50% of these jobs are
at risk at some level
for significant change.
The second force has to
do with the gig economy.
We see a much more
liquid workforce that's
contributing to the shift that
we're seeing in education.
That will contribute
to the shift
that we need to see in
education, I should say.
Of course, I don't need to
tell you about the gig economy.
But one data point that
was quite intriguing
is that the estimates are that
much of the growth that we have
seen in the job sector in the
workforce over the last decade
or so has been due to the
rise of the gig economy.
And in fact, one
of the data points
that supports that is, in fact,
the percentage of the workforce
in the gig economy has
gone from 10.1% to 15.8%
over the last several years.
And as I mentioned, it's
estimated that almost all
of the employment growth that
we've seen in the US since 2005
is due to the gig economy.
I'm not trying to depress you.
On the contrary, these trends
have the potential, in fact,
to shape the educational
landscape significantly,
hence the need for
experimentation,
hence the need for thinking
about the future of the country
and the future of
higher education,
but not continuing to
follow the path that we
have been on for many years.
These trends, of course,
have the potential
to reshape the
educational landscape,
bringing focus on self-directed
education, lifelong learning,
and topics such as
entrepreneurship
as a foundational skill.
So how do we
prepare our students
for a changing
workforce and workplace?
I want to-- in the
remaining minutes
that I have-- to talk
about the solution
space in three dimensions.
One, having to do with
reimagining curriculum,
second, rethinking
structure and pedagogy,
and finally, considering
new models of collaboration
within an institution
as well as across
our academic institutions
and external partners.
First, about reimagining the
curriculum both to enhance
digital core skills and
to incorporate human skills.
I think it's pretty
well-documented.
And I know that, in fact, Jim
Kurose, who's sitting here,
talks about this in
his presentations
representing the
National Science
Foundation, that the first trend
that we must be mindful of
is the growing reliance
on technology and science
as drivers of new jobs.
In fact, growth in STEM jobs
has outstripped overall job
growth in this country.
And a lot of that, of course,
in computational areas.
But the US Department of Labor
estimates that the STEM jobs--
or I should say,
STEM-related jobs--
will grow at almost double
the rate of non-STEM jobs
for the next 10 years.
There's also an important
point to highlight:
the growth that we're seeing
is not just because
of the technology companies.
Virtually every industry
and organization
has become dependent on
technology for its business,
and in particular on computation-
and data-centric approaches.
You see this in finance.
You see this in transportation,
healthcare, energy production,
distribution, and
so on and so on.
I'm sure most of you, in
fact, in the CS community
are familiar with
the two reports
that I'm showing on the screen.
And in fact, these two
influential reports
have looked at how
do we deal, in fact,
with broadening participation
in computing and STEM fields?
How can we increase the Computer
Science core competencies
across the educational system?
The report was prepared by a
National Academy of Sciences
Committee on the growth
of computer science
undergraduate enrollment.
It was co-chaired by Jerry
Cohen from CMU and CRA's
vice chair at the
time, Susanne Hambrusch,
from Purdue University.
In fact, it builds
on the work that
was published in CRA's Next
Generation CS report, which
was chaired by Tracy Camp.
I want to acknowledge
their work.
And this report
had, first of all,
identified this, of
course, as a major issue.
But equally
important, highlighted
that context matters.
And approaches taken
by one institution
are not necessarily workable
in other institutions.
The report highlights
limiting participation,
growing programs, and leveraging
resources creatively.
But equally important,
their report
is very forceful
about rethinking
organizational structure
for computer science,
both in terms of
interdisciplinary
collaborations,
CS Plus X, which I
know is going to be
discussed, or X Plus CS which
is going to be discussed as one
of the panels later on today,
as well as considering
college of computing
and new organizational
structures
that allow much more porous
boundaries between units
on campus.
And in fact, the report
mentions that there is no
one-size-fits-all.
All institutions need to assess
the role of CS and related
fields, and should see
this as an opportunity
to plan for future success
across the entire institution,
which obviously is the
model that MIT is employing.
By the way, I should
also highlight
that while we've seen
significant growth in
the number of undergraduates
in computer science, as Marty
mentioned, much of the growth
is also happening as a
result of all other students
on campus who need to learn
computational approaches,
and approaches that
are data centric.
So the need is
fairly broad-based
across the university.
But STEM is only one
part of the picture.
And I want to underscore
that and spend a minute or so
on that in particular.
The argument can be
made that a liberal arts
education and core
human skills are just as
important in the new economy.
In this uncertain and
constantly shifting
landscape,
non-automatable
human skills should perhaps
serve as a foundational core
competency.
These skills include things such
as communication, leadership,
problem solving,
critical thinking,
organizational skills,
creativity, and so on.
I'm really fond of this quote
from Geoff Colvin, who is
one of the editors at Fortune.
And in a book that he wrote
a couple of years ago,
called Humans Are
Underrated-- and I'll just
read the quote to you.
It says, "Our greatest
advantage lies
in our deepest, most
essential human abilities--
empathy, creativity, social
sensitivity, storytelling,
humor, relationship building,
and expressing ourselves
with greater power than
logic can achieve."
So I want to underscore this,
that it is extremely important
as we think about
STEM education,
we also think about the
importance of developing
the whole individual and
developing the students who
go out into the real world having
not only disciplinary expertise
in one area but also,
at the same time,
the ability to connect
to other disciplines.
At the same time, as
the rate of progress
continues to accelerate,
the societal issues
at the intersection of
technology and humanity
will continue to become
really important.
A number of institutions,
including MIT,
are looking at the issues
of ethics and technology.
But I want to argue that we
need to expand the discussion
to include discussion of
the critical intersection
between not only
ethics and technology,
but also addressing issues of
security, privacy, fairness,
trust, and so on.
And this has to become part
of our educational system.
And it has to become part of the
curriculum at our higher ed--
at our academic
institutions, including
dealing with issues
such as mitigating
algorithmic bias, the spread of
fake news, and so on and so on.
This argues that,
in fact, we need
to think about this
much more holistically
to bring together the
intersection of technology
with policy, design,
psychology, economics,
and other disciplines.
I know that there is a session
later on this morning at 10:45
that's going to
focus on this topic.
And I want to acknowledge my
colleague, David Danks, who's
here, who's going to be
serving on this panel.
In fact, at CMU we've been
thinking about this very hard
and deeply integrating
some of these issues
into our curriculum.
I hope that he'll have a chance
to share some of that with you.
As I'm going to
run out of time, I
want to highlight the other
two points very quickly.
The second one has to do
with rethinking structure
and pedagogy, moving away
from the transactional nature
of education and potential
disciplinary silos
that we have.
I want to argue,
in fact, that we
need to consider
experimentation, assessment,
and ways of scaling
and eventually
consolidation of new educational
models and structures.
If you remember what I said
a few minutes ago, which
is in the last 100
years every time we've
had major technological
advances, what we've
seen in this country is
significant experimentation
and innovations in higher ed.
And I absolutely believe
we're at the cusp of one
of those moments.
For example, consider learning
as a lifelong endeavor.
Should we rethink the relevance
of a four year degree?
Should we focus on outcome
and competency, not just
a transactional model
that we have bought into?
Another example is to
rethink disciplinary silos.
If the future is going to be
increasingly interdisciplinary,
department boundaries may
need to become much, much more
porous.
In fact, not to make the
department heads and the deans
in this room nervous, are
departments and colleges as
important as they were
a few decades ago?
Some of you are saying no.
So I will withhold
any further comment.
So these connections
that we have
to build across departments
and across colleges
are becoming
extremely important.
In fact, every time we
experimented with this
at Carnegie Mellon,
what we have seen
is that the response
of our students
and the response
of, essentially,
the external world, people
who hire our students,
has been tremendous.
We experimented with that
in our neuroscience program.
We introduced a neuroscience
undergraduate program such
that you can come into the
program having essentially
the same set of
foundational courses,
but you can get a degree
from our science college
with a biology focus
in neuroscience.
Or you can get a degree from our
social science and humanities
college with a focus on
cognitive neuroscience.
Or you can actually get a
degree from our computer science
college with a focus on
computational neuroscience.
And it's been received
extremely well.
Another example that
I'm really fond of
is the IDeATe, which is at the
intersection of technology,
design, media, and art.
And we launched it as a minor.
And the result of that has been
about 850 to 900 undergraduates,
out of a population of 7,000
at our institution,
minoring in and taking
courses in our IDeATe program.
And there are a number of
other examples such as that.
And I know MIT has
also experimented
with similar things.
The other important
point related to pedagogy
is that there is, in fact, an
important role for technology
that cannot be overstated--
technology-enhanced learning
and its potential for disruptive
innovation.
I think of a grand
challenge for education
to be one in which each student
has a dedicated tutor
or teacher delivering
personalized
learning at marginal cost.
In fact, studies have
shown that this could
have a significant impact
on learning outcomes.
So this desire for
personalization,
desire for better learning
outcomes, and desire for
controlling costs and
access can potentially
be addressed by
technology-enhanced learning.
Finally, the item that
I want to highlight
has to do with considering
new models of engagement
with the private
sector and government.
And this is really beyond the
scope of the discussion today,
but I want to just
get this off my chest
and plant a seed with you.
We need new
collaboration models.
We need new policies,
public policies,
that, in fact, support
building human capital.
Let me just share a
couple of those with you.
In this country, we
need to start rethinking
human capital development
as a long-term investment
by the private sector.
In the public sector,
we need to think about
government tax incentives
and fiscal policies for
investing in human capital.
In fact, much of our tax
policy in this country
supports capital investment but
not human capital development.
Another example-- one that
I'm quite fond of--
is thinking about creative
options for financial aid,
such as income share
agreements and so on.
But the list goes
on and I wanted
to just share that with you.
In this country, we need
to have really fairly
progressive policies towards
supporting the next generation
as they look at education
as the source of mobility.
A couple of slides
before I wrap up.
I was asked to share, also, some
thoughts on our experiences.
And of course, I
want to first start
by saying that local
context matters.
And that's my last bullet.
That's actually the
most important thing.
There is no single
recipe for creating
the kind of higher
education that's much,
much more porous
and provides our students
the kind of
foundational knowledge
and competencies that they need.
So the local context matters.
The organizational, cultural,
budgetary environment
of every institution
is different.
What we have observed is that
intellectual and practical
justifications are often
mutually reinforcing.
By that I mean, whether
for intellectual reasons,
such as we need to make sure
students have computational
and data intensive skills,
or practical reasons,
as Marty mentioned--
40% of the students
on this campus
are majoring in computer
science or related fields.
It turns out these are
mutually reinforcing
and there are opportunities
for all academic institutions.
We-- and when I say we, I
mean computer scientists--
have to take a much more
expansive but inclusive view
of computing.
And I think that's
been a recipe for success
for institutions
like Carnegie Mellon.
Also, we need to keep
disciplinary boundaries,
as I mentioned, porous.
And that allows us to
strengthen connections
between academic units.
Furthermore, continuous
experimentation and risk taking
requires stakeholder
commitments.
But I can't overstate the
importance of experimentation
in this environment.
And finally-- again, for my
fellow computer scientists--
we have to be very,
very sensitive--
and I've lived through
this in my career--
to the perception
that computer science
may become insular by
becoming a separate, larger
unit on a campus.
I think, to a large
extent, the experiments
that we have had at other
institutions show that it's
probably just a perception.
But we have to be
really aware of it.
And we have to be very
sensitive to that notion.
To wrap up, Horace
Mann in 1848 said,
"Education, beyond all other
devices of human origin,
is the great equalizer
and a balance-wheel
of social machinery."
Indeed, higher education
is unique in its power
to catalyze social mobility.
It can bridge social, economic,
racial, geographical divides
like no other force.
But if we want
education to continue
to be an active force
for equality and not
the inadvertent, I should
say, engine for inequality,
we need to commit ourselves
to major transformations.
The future is arriving
faster than ever before.
And it's looking vastly
different from what we have seen.
So we must embrace
a system that allows
these unbounded connections
across organizations
and disciplines.
It further encourages and
nurtures continuous innovation
and new models, and of course,
supports lifelong learning
as a guiding principle.
With that, once again, I want
to congratulate our colleagues
at MIT on the launch of
the computing college.
And I look forward to watching
their success in the future.
Thank you very much.
[APPLAUSE]

---

### Teaching Computing in Arts and Humanities
URL: https://www.youtube.com/watch?v=hoZGR3BEkoo

Transcript not available

---

### Teaching Ethics and Policy in Computer Science
URL: https://www.youtube.com/watch?v=Kc2WbkDpQQc

Transcript not available

---

### Teaching Computing in Science and Engineering
URL: https://www.youtube.com/watch?v=Lb5bMw0AEGI

Language: en

I'm Cynthia Barnhart.
I'm the Chancellor here and
Ford Professor of Engineering.
And like with the
other panels, we
will go through and have
each one of our guests
provide you a brief
presentation and then
we will open it
up for discussion.
So we're going to do this
in alphabetical order,
and we'll begin
with Steven Boyd who
is the chair of the Department
of Electrical Engineering
and the Samsung
Professor of Engineering
at Stanford University.
He has courtesy appointments
in the Department of Computer
Science and the Department
of Management Science
and Engineering.
I won't go into all the
various research awards he has,
but I will say, relevant
to today's theme of teach,
he has earned some prestigious
awards for his teaching,
including the Walter J. Gores
Award, Stanford's highest
teaching award, and the IEEE
James H. Mulligan Junior
Education Medal.
Steven.
Thank you very much.
I assume my slides will
come up there at some point.
Yep, they should.
OK.
So first of all, I should say,
surprise, we are your lunch.
So I mean, actually,
you're the ones that
didn't head out for lunch
or something like that.
So let's see if I can-- oh, OK.
Oh, no, no.
That's going backwards.
OK.
So let me go ahead and start.
No, no, no, no, no.
All right.
Fine.
I'll just go ahead and start.
I think that's the first one.
So first off, I'll just say a
little bit about my background
and then I'll explain what
I'm going to be talking about.
So I mean, my original
training is pure mathematics
and super applied EE.
I should actually go back.
I said it on the panel this
morning on digital humanities.
So I actually grew up,
and both my parents
were English professors.
And what the panel
brought back to me
was the painful
recognition for me
that the humanities is so
much cooler than what we do.
[LAUGHTER]
Max, I want to take your class.
I want to take all
of them, actually.
So I want to write Python
scripts and compose--
anyway.
So for us, it's not
anywhere near as exciting.
But anyway, sorry, too bad.
So I teach big classes
on applied optimization,
linear algebra.
And it's for students with very
diverse technical backgrounds,
right?
So I have to say
technical there,
because it's not really
diverse backgrounds.
It depends which PhD
program they're in, right?
But it is-- I mean, it
is actually interesting.
It's multicultural in
a mathematical sense
because we have people who come
from a lot of different areas
and they speak a lot
of different dialects.
Actually, most of them don't
think they speak a dialect,
and I'll have to tell some
of them, oh, by the way,
you're speaking a hick dialect,
a mathematical hick dialect.
And I'm going to talk
about the courses that
are mathematical type
courses, actually none in math
but typically in
engineering and science.
So this would be things--
most courses in
electrical engineering,
computer science, mechanical
engineering, aero astro,
statistics, machine
learning, econ,
all these kinds of courses.
Anyway, so not anywhere
near as interesting as
glass blowing or composing
or Victorian literature,
but that's just what it is.
OK.
So the first thing
I'd like to do
is actually I'd like
to change the title.
So I actually prefer
the title Teaching
Science and Engineering
with Computing or--
and this one is long winded, but
I think it actually says better
what we really want, which is
Teaching Computational Thinking
and Doing-- that's going
to be a big theme--
in Science and Engineering.
So that would be
my preferred title.
No disrespect meant, by the
way, at the choice of the title.
And now I've gone to
some other random--
OK.
That's the price you pay.
OK.
Yeah, this is--
OK.
All right.
So now I'm going to state what
is completely obvious, which is
that what computing does when
you wrap it in with
these courses is it
makes them actionable, right?
So it means you can
actually do something.
And it means that beyond gaining
insight and understanding,
students can actually
do practical things.
And you can do it pretty
soon, like we saw something
approaching music in
five minutes, right?
So it's actually just like--
I mean, that is a huge big
deal for these courses where
a colleague of mine,
Bob Dutton, refers
to-- he referred to
our EE curriculum--
he may still refer
to it that way.
He said it's deferred gratification with nothing at the end.
So I mean, it was
actually pretty good.
It was actually a
pretty good description
of our EE curriculum, right?
You always take a class
and students say, you know,
can I do something with this?
And you're like, no, no, just
wait for another four years,
then you'll be able
to do something.
And then they come back as a
senior, can I do something?
No, not yet.
So anyway.
All right.
So what's very cool about this--
it means that in these classes,
you can bring together a lot
of different ways of knowing.
So you have the ideas.
What are the essen-- what are the ideas? This is with no math.
You would have the
theory, so you'd actually
tell people about the math.
You'd say, these
things are just true.
If you believe this
hypothesis, then this
has to follow, period.
That's the math or the theory.
You would have things
like the practice.
Does this really work?
How do you do this?
And then you have what we call the street fighting tricks,
like, OK, here's something.
This actually works in practice.
No one knows why, but it does.
OK, so that's the
street fighting.
So we have all these
different ways of doing this.
And of course, this is enabled
entirely by the computing.
And it's also very nice because
it brings a maker culture
to mathematics and
applications, right?
So what's kind of
cool about that
is we're very jealous
of our friends
in mechanical engineering.
They'd sit there, and
the freshmen sit there
and they do some CAD stuff,
and then they press a button.
Half an hour later, this
thing comes out a 3D printer,
and it's, like, cool.
And then you think about
the classes we teach,
and they're like so boring
by comparison, right?
So it's like we're jealous,
but this kind of at least--
I mean, OK, it's not origami,
and it's not a composition,
but still, for us,
it's pretty cool.
We have very low standards.
[LAUGHTER]
So all right.
And it also introduces
an empirical component
to these things, right?
So you can teach these classes.
You can talk about
various things.
You talk about the theory,
all sorts of other stuff.
And then you can say,
let's just try it.
Let's just see what happens,
maybe on simulated data,
maybe on real data.
And that's actually really cool.
I'm going to try to press
the right button now.
No guarantees.
I didn't.
It was the wrong one.
[LAUGHTER]
OK.
I'm giving up.
I quit.
Let's see.
OK.
No, no, look at that.
OK, maybe that's just
going to the end.
Oh, OK.
I think I have it.
There was a button
I hadn't seen here.
OK.
Mhm, all right.
So OK.
All right.
There we go.
I did it.
OK, great.
And the 10 minutes
is almost over.
That's fine.
So the idea-- now I'm going to
talk a little bit and quickly
about some of the things
that makes all this possible.
So encapsulation and abstraction
is extremely important.
I think Ellen is going
to talk about that.
That's very important.
It means that you have
high-level interfaces that
are going to match the
ideas and the theory closely
and hiding the
implementation details.
That's very important.
And so there you really
have to accept the idea
that you and the students
do not need to know
all the implementation details.
By the way, there's people
who think that you do,
and they are just wrong.
So that's simple.
So what it does is
it shifts the focus
to what you can do with
the methods and not
the implementation.
You may spend some time
talking about how it works,
but the most important thing
is what can you do with this.
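To make that concrete, here is a minimal sketch of the kind of high-level interface he means (my own illustration, using NumPy's least-squares routine, not an example from the talk): the call matches the mathematical idea, and the algorithm behind it stays hidden.

```python
import numpy as np

# Fit a line to noisy data by least squares: minimize ||A @ theta - b||.
# The call matches the mathematical idea; the factorization behind it
# stays hidden, which is exactly the point of the abstraction.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 30)
b = 2.0 * x + 1.0 + 0.05 * rng.standard_normal(30)

A = np.column_stack([x, np.ones_like(x)])   # model: b ~ slope * x + intercept
theta, *_ = np.linalg.lstsq(A, b, rcond=None)
print(theta)  # close to [2.0, 1.0]
```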
And this is actually
very important
because it allows you to
create outreach classes.
When someone from maybe
quite a bit further afield
is taking this class,
they don't want
to know the stupid
details of the algorithms.
They want to know that's
super cool, what is it,
what can I do with it, and
how can I apply it in chemical
engineering, for example.
OK?
All right.
Now the enablers, they
were mentioned actually
in the last panel--
one is kind of obvious, but
it's sort of not stated enough.
And that is open-source,
non-proprietary software.
That's critical to all of this.
I mean, without that,
you can't do any of this.
Software that supports a high-level abstract interface to users-- Julia, for example, would be a perfectly good example.
You're going to hear
more about that later.
And actually very
important is cultural--
it's a culture of
openness and sharing.
And so this varies by
field, but the idea
is that the fields that are
thriving if you look around
are precisely the ones where it is expected that when you write a paper or something like that, everything goes online-- you would not dream of writing a paper where there was not associated code that is openly available to the entire world, period.
OK.
In my opinion, not only
is that morally correct,
it's also a huge advantage to
the quality of the teaching
and the research
and everything else.
So I'll start with
some suggestions.
So the first is--
I mean, these are
all kind of obvious.
But the first is I
believe computations
should be infused
into everything
in all of these courses, right?
I'm talking about all
the way from the lowest
level immediate
introduction, all
the way to the most advanced
PhD level courses, right?
So I can't imagine this not
improving all of these courses.
And that allows
you to co-develop
these different modes of
understanding, the ideas,
the theory, the practice,
the software, the street
fighting tricks.
It's great.
And I should also
say what's very
important is when you do
have these multiple modes
of understanding, it's
extremely important
to be very clear about which
level you're at, right?
Was that an idea?
Was that a theory?
Was that an empirical
observation?
Or was that actually,
for example, a street
fighting trick?
Those are the ones where
I tell the students,
don't tell anyone I
told you, and I'll
deny it if you tell anyone.
So OK.
So in the end, I think
this is basically
what I'm describing,
which is actually--
I mean, I think it's done
in a lot of universities,
certainly at MIT.
I know that.
But my claim is
this is the training
that students want and need.
And it's the ability
to fluently move
between the big
ideas, some theory--
some, notice I said.
I qualified that-- algorithms
and computation, software,
and applications.
And what this does
is it allows you
to go into a whole variety
of existing fields.
And this is much more important.
It actually prepares
the students
to create fields
that don't exist yet.
And that is actually what
we're training people to do.
So when students say to me,
oh, can I get a job with this?
Yeah, but that's not the point.
Can you start an
industry with this?
And that's the point.
OK.
So I'll quit here.
And we'll go on to the next one.
[APPLAUSE]
So next up--
Steven, maybe you'll
have to teach Tony
how to use this while
I am introducing Tony.
So Tony DeRose is an educational consultant.
From 1996 to 2018, he was a
senior scientist at Pixar where
he led Pixar's research group.
Previously, he was a
professor of computer science
and engineering at the
University of Washington.
So over the past
seven years, he's
developed several initiatives
to help make math, science,
and engineering
education more inspiring
for middle and high
school students.
So please welcome Tony.
[APPLAUSE]
Thank you.
And thank you for being here
this morning as opposed to,
I don't know, maybe listening
to Michael Cohen's testimony
in front of Congress.
So thank you for
the introduction.
Thank you for the invitation
to speak here today.
I may be the only one
talking about education
below higher education.
So I'm really interested
in K12 education,
general public education.
I primarily have been focusing
on middle and high school
for reasons that we'll talk
about in a couple of minutes.
And the most visible projects in
those lines that I've worked on
are two.
One is The Science Behind
Pixar, and the other
is Pixar in a Box.
So these are kind of
companion projects
both intended to bring some
relevance and inspiration
to a lot of the material that
students are learning in class.
We feel like and there's
a lot of evidence
to show that students don't
learn if they're not engaged.
If they don't see a reason
for why to learn something,
it's hard for many students
to really take interest.
And so that's the
end of the spectrum
we've been focusing on.
The Science Behind Pixar
was a collaboration--
about a seven-year
collaboration,
actually with the Museum of
Science right here in Boston.
It opened here in
the summer of 2015.
Did anybody manage to see
it while it was in town?
Great.
A few of you.
It's a 13,000 square
foot exhibition
that really shows how
much math and science goes
into the storytelling
that we do.
Back in the early
days of Pixar, we
were viewed primarily
as a technology company
that happened to tell stories.
And then the stories
became so successful,
the films became so successful,
we became a film studio,
and people kind of forgot
about the technology.
And so this is an opportunity
to share that with people.
It's got dozens and dozens
and dozens of activities.
Some of them are screen
based like this one.
Others are physical.
There's video.
It's a physical experience.
So you go-- typically
family units go together.
There's a lot of
really wonderful
intergenerational
learning that goes on.
It's generally
the younger guests
teaching their grandparents
how computer graphics works.
And that was a really
great collaboration.
We know the business
of making films.
We have the domain
knowledge about how
math and science and engineering
is used to tell stories.
And the team at
the museum really
understood how to take
all those complex ideas
and package it in a
way that is consumable
by the general audience,
the general public,
so ages three and up, really.
Partway through that
project, we realized
if we could find a partner in
the online education space,
we could bring
that sort of story
in a virtual way to potentially
many more people and really--
and for free.
And so we went knocking on
the door of the Khan Academy.
And they said, yeah, let's
partner to put this together.
So we had funding
for three years.
We call these seasons.
So season one was really
focused on math topics
primarily, season two on science
including computer science,
and season three focused
on arts and humanities.
So we're really trying to
help answer the question, why
do I need to learn this stuff
and what's going to keep it
from becoming boring?
The math and science lessons--
there's 24 of those in all.
And they basically
have the flavor of--
each one kind of picks a
different creative challenge
faced at the studio
and then demonstrates
how math and
science concepts are
used to address that challenge.
So there's a lesson,
for instance,
on the water in Finding
Nemo, in Finding Dory.
How do we create the water?
There's a lesson on creating the
beautiful landscapes in Brave.
And it turns out parabolic
arcs play a huge role in that.
Each blade of grass in the
forest is a parabolic arc.
So why do you need to
learn about parabolic arcs?
Well, making grass.
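As a rough illustration of the point (my own sketch in Python, not Pixar's actual tooling), a blade of grass can be modeled as a parabolic arc from root to tip:

```python
def grass_blade(root, tip, sag, samples=20):
    """Sample points along a parabolic arc from root to tip.

    `sag` controls how far the midpoint bows away from the straight
    root-to-tip line, giving the blade its curved silhouette.
    """
    (x0, y0), (x1, y1) = root, tip
    points = []
    for i in range(samples + 1):
        t = i / samples
        # Linear interpolation plus a parabolic bump 4*t*(1-t),
        # which is 0 at both endpoints and 1 at the midpoint.
        x = (1 - t) * x0 + t * x1
        y = (1 - t) * y0 + t * y1 + sag * 4 * t * (1 - t)
        points.append((x, y))
    return points

blade = grass_blade(root=(0.0, 0.0), tip=(0.3, 1.0), sag=0.15)
```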
Each of the lessons is hosted
by one or more Pixar employees.
We really wanted to give a
personal face to this material.
We also wanted to
put forth what's
kind of an aspirational
sense of diversity.
So the diversity of the
hosts doesn't accurately
reflect the diversity
of the studio at large,
but it's sort of what
we're shooting for.
So for instance,
Fran hosts a couple
of lessons, one on combinatorics
and how to use combinatorics
to create the robot armies in
WALL-E. Other hosts include Alonzo Martinez, who's a character modeler, and Susan Fong.
There's also a chance to
do some career modeling.
Each of these people
does a career--
is involved in a career that
most people don't know about.
Fran's a great example.
She had interest both in
computer science and clothing,
fashion.
And at Pixar she's been
able to put those two
disciplines together.
She's a digital tailor.
Who knew that digital tailor was a career, and one that you could be very happy in.
I thought what I'd do
next is jump over and try
to do a quick demo.
Showing is better than telling.
So the content of
Pixar in a Box--
it's video content, typically
three to four minute
tutorial videos
featuring people.
There are some exercises that
help students demonstrate
mastery.
And then there are a lot
of interactive outlets,
many of which are
patterned after the tools
that our artists use.
So in the animation
lesson, for instance,
the creative challenge is how do
we bring our characters to life
with motion.
It turns out
polynomial functions
play a huge role in that.
These math and science
lessons are typically
broken into two parts.
There is a part 1, which
is more middle school
oriented, where knowledge
of that creative challenge
is deepened.
And there is an
interactive that goes along
with that here, which is--
my challenge here is to
make this ball bounce.
I've got a little timeline.
And then I have access to
a Bezier curve editor here
which, again, is patterned
very much after--
can I grab that--
very, very much patterned
after the artist tools
that our artists use.
Let's see if I can
get that to play.
Yeah, there it goes.
So I'm controlling the
height of the curve.
At this stage, students
are thinking of themselves
as content creators.
They're not worried so much
about how the technology works.
And that's where
part 2 comes in,
the second part of the lesson.
They're typically more
oriented to high school level.
This is where computational
thinking and algorithm design
comes out.
And so we walk
them through what's
known as De
Casteljau's algorithm
for constructing Bezier curves.
And there's a bunch of
interactive elements
and exercises that
help them experience--
see if I can grab that--
help them experiment with
how this algorithm works.
And by the end of this lesson
these high school students
really understand
cubic polynomial curves
and how they're used
to create animation.
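The interactive itself isn't reproducible in a transcript, but De Casteljau's algorithm is short enough to sketch. Here is a minimal Python version, assuming 2D control points: repeatedly replace each adjacent pair of points with their linear interpolation until a single point remains, which lies on the curve.

```python
def de_casteljau(control_points, t):
    """Evaluate a Bezier curve at parameter t (0 <= t <= 1).

    Repeatedly replace each adjacent pair of points with their linear
    interpolation; after enough rounds one point remains, and it lies
    on the curve.
    """
    pts = list(control_points)
    while len(pts) > 1:
        pts = [
            ((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
            for (x0, y0), (x1, y1) in zip(pts, pts[1:])
        ]
    return pts[0]

# Four control points give a cubic Bezier, like the editor's curves.
controls = [(0, 0), (1, 2), (3, 2), (4, 0)]
curve = [de_casteljau(controls, i / 50) for i in range(51)]
```

With four control points this traces exactly the cubic polynomial curves the lesson builds up to.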
There's a lesson on how we
brought Meredith's hair to life
in the movie Brave.
We use a mass spring
system underneath.
So her hair is being simulated.
There's a physics simulation.
And we give students access
in lesson one to a simulator.
And at this stage,
they're basically
operating as a technical artist.
So they're working
to dial in parameters
to get the kind of creative
look that the director might
be interested in.
And then in part 2, we kind
of go behind the scenes--
and, well, what does someone
writing this simulator need
to understand?
And that's where we
bring, in this case,
the laws of physics
to bear, along
with a computational framework.
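The simulator itself isn't shown in the transcript; as a rough sketch of the underlying idea only, here is a toy mass-spring strand in Python (the parameter names and values are my assumptions): point masses connected by springs, pulled by gravity, stepped forward with explicit Euler integration. Stiffness and damping are the kinds of knobs a technical artist dials in.

```python
# Toy version of a hair strand: point masses connected by springs,
# pulled by gravity, with the root pinned in place.

STIFFNESS = 80.0   # spring constant (hypothetical value)
DAMPING = 0.98     # fraction of velocity retained per step
GRAVITY = -9.8
REST_LEN = 0.1
DT = 0.005

# Ten masses hanging in a vertical chain from a fixed root at the origin.
pos = [(0.0, -REST_LEN * i) for i in range(10)]
vel = [(0.0, 0.0) for _ in range(10)]

def step(pos, vel):
    # Start with gravity on every mass.
    forces = [(0.0, GRAVITY) for _ in pos]
    # Add Hooke's-law spring forces between neighbors.
    for i in range(len(pos) - 1):
        (x0, y0), (x1, y1) = pos[i], pos[i + 1]
        dx, dy = x1 - x0, y1 - y0
        length = (dx * dx + dy * dy) ** 0.5
        f = STIFFNESS * (length - REST_LEN)   # force along the spring
        fx, fy = f * dx / length, f * dy / length
        forces[i] = (forces[i][0] + fx, forces[i][1] + fy)
        forces[i + 1] = (forces[i + 1][0] - fx, forces[i + 1][1] - fy)
    # Explicit Euler update; the root (index 0) stays pinned.
    new_pos, new_vel = [pos[0]], [(0.0, 0.0)]
    for i in range(1, len(pos)):
        vx = (vel[i][0] + DT * forces[i][0]) * DAMPING
        vy = (vel[i][1] + DT * forces[i][1]) * DAMPING
        new_vel.append((vx, vy))
        new_pos.append((pos[i][0] + DT * vx, pos[i][1] + DT * vy))
    return new_pos, new_vel

for _ in range(1000):
    pos, vel = step(pos, vel)
```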
Let's see if I can restart this.
So there's a little simulator
here that is running live.
And the Khan Academy CS
Platform is really kind of nice.
You can interactively
manipulate parameter values
to see what effect it
has on the simulation.
So they're hopefully building
up a bunch of intuition
for how these ideas work.
But they're doing
that in the context
of solving a creative
challenge and in the context
of storytelling.
So I will stop there.
[APPLAUSE]
Thank you, Tony.
Next up is Alan Edelman.
Alan is a Professor
of Applied Mathematics
and a member of CSAIL, the Computer Science and Artificial Intelligence Laboratory here at MIT.
He's passionate about
the interactions
between classical
computer science
and computational science
and is a founder and chief
scientist of Interactive
Supercomputing and Julia
Computing Inc. Thank you.
Thanks very much.
The previous speaker, Dr. DeRose, was talking about younger students,
but I'm already ready to go to
infinity and beyond in higher
education.
So when thinking about education
for computational science
and engineering, I
was thinking about,
do I talk about the
math or the software,
because in my
lifetime, it's always
been math and software and the
interactions between the two.
And then I thought about it,
and I realized that, you know,
to first order, MIT is the Massachusetts Institute of Applied Math.
So I don't have to
talk about math.
I think math,
everybody's sold on.
So I'm going to
concentrate primarily
on software and the
interactions with mathematics.
So I've got one slide talking
about computational science
and engineering and a kind
of prediction for the future,
and then the rest
of my slides are
going to be about software
and its interaction
with mathematics.
So that works.
I got the right button.
So computational
science and engineering
goes by the name of CCE
here at MIT, the Center
for Computational Engineering.
What is it?
Well, it's actually in
a lot of places at MIT.
It's computation that involves
the physical universe,
whether it's in biology or
chemistry or aero astro.
It's literally
pervasive around MIT.
And what I like to say,
is that of course we
know why it's important
because the physical world is
the world we live in.
And many of our problems are the
problems of the physical world.
And when I look at today's
machine learning, which
is exciting
everybody these days,
I tend to think of it as sort of
the big data machine learning.
Forget about anything
about the physical world,
just take the data, which
of course may come from
the physical world, but
in the end it's the data,
process it through something
like a deep neural network,
which is kind of a big overgrown
[INAUDIBLE] fitting-- really,
it's a regression problem--
and see what we can get.
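To make the "really, it's a regression problem" remark concrete, here is a minimal sketch (mine, not from the talk): a one-hidden-layer network fit by gradient descent on squared error is just nonlinear regression on the data.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Take the data": noisy samples of an unknown function.
x = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(x) + 0.1 * rng.standard_normal(x.shape)

# A tiny one-hidden-layer network: y_hat = tanh(x W1 + b1) W2 + b2.
W1, b1 = 0.5 * rng.standard_normal((1, 16)), np.zeros(16)
W2, b2 = 0.5 * rng.standard_normal((16, 1)), np.zeros(1)

lr = 0.05
for _ in range(5000):
    h = np.tanh(x @ W1 + b1)      # hidden activations
    y_hat = h @ W2 + b2
    err = y_hat - y               # the regression residual
    # Gradient descent on the squared error
    # (constant factors folded into the learning rate).
    gW2 = h.T @ err / len(x)
    gb2 = err.mean(axis=0)
    gh = (err @ W2.T) * (1 - h ** 2)
    gW1 = x.T @ gh / len(x)
    gb1 = gh.mean(axis=0)
    W1, b1 = W1 - lr * gW1, b1 - lr * gb1
    W2, b2 = W2 - lr * gW2, b2 - lr * gb2

print(float(np.mean(err ** 2)))  # the residual shrinks as training proceeds
```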
And we're doing a lot
of things these days
with this technology,
but we're not
bringing in the physical
world as much anymore.
People are trying.
There's a lot of
good work coming on.
But I think-- and this
is my prediction--
that there's going to be a
lot more machine learning
in the physical
world in the future.
And I think it's going to even
have a rather different kind
of character than
the kind of character
of machine learning of today.
And I would love to
see this happening
in the College of Computing,
in the Schwarzman College.
I'd like to see it happen
through our students
here at MIT.
So that's my first
prediction and now I'm
going to get into the teaching.
So I was very happy to see
that our president talked
about bilinguals at MIT.
I kind of feel like I've been
a bilingual much of my career
at MIT, or a multilingual.
And we certainly--
everybody else
has been talking
about multilinguals.
And of course this is
the right way to go.
So here's sort of an outline
of some of the slides
that I'll be talking about.
So let me sort of
jump right in, though.
So Professor Boyd already
mentioned abstraction.
The problem with talking
about abstraction
is that it's abstract, right?
It's very hard to talk
about abstraction.
So what I thought I would do
is talk about a real example
that happened this year.
Am I speaking properly
in the microphone?
Is that OK?
So a real application
that happened this year
involving a big
climate model-- you
know, the folks who
want to figure out
exactly how global
warming is happening.
So the climate modeling folks
came over to my office in CSAIL
and they brought those
scary looking equations
in the upper right over there.
And my computer science graduate
students were like, well,
we know lots of things, but
we don't know that math.
But I did what I always do.
I said, you know,
let's figure out what
are the right abstractions.
And for me, abstractions are
both in mathematics as well as
in software.
And so I just said, you know,
let's talk abstractions.
Let's figure out what's
the right thing to do.
So in talking to the
climate modelers, they said,
we use abstractions.
We use matrices and vectors.
They are our abstractions.
And then, of course,
it became clear,
there are levels and
levels and levels.
And this particular one is a differential equation.
It has gradients and divergence
and all these scary math words.
But in the end, the abstraction, the one that really comes to the heart of it for everybody, is mathematical functions.
So can we just get
those functions
at the highest level
in the software.
And then you see some
Julia code down there
which just does exactly that
after a few weeks working
with the computer scientists.
And it never would have
happened by either group alone.
It would never have happened.
You'd need to have
that combination.
But I think everybody's kind of
sold on interdisciplinary work
now.
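The Julia code on the slide isn't reproduced in the transcript. As a rough Python sketch of the same idea (the operators and the toy equation here are my assumptions, not the actual climate model): define the mathematical operators once, behind names that match the math, and write the model step at the level of the equation.

```python
import numpy as np

# High-level operators named after the math, hiding the grid details.
def gradient(f, dx):
    return np.gradient(f, dx)            # returns (df/dx0, df/dx1)

def divergence(fx, fy, dx):
    return np.gradient(fx, dx, axis=0) + np.gradient(fy, dx, axis=1)

def laplacian(f, dx):
    gx, gy = gradient(f, dx)
    return divergence(gx, gy, dx)

# A toy diffusion step, written at the level of the equation:
#   dT/dt = kappa * laplacian(T)
def step(T, kappa, dx, dt):
    return T + dt * kappa * laplacian(T, dx)

T = np.random.default_rng(0).random((64, 64))
for _ in range(100):
    T = step(T, kappa=0.1, dx=1.0, dt=0.1)
```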
So I'm going to see if
this button still works.
And I'll move on
to my next slide.
So again, a software slide.
So I'm going to
make up a kind of--
I want to talk about elegance
and reproducible science.
I'm going to kind of make
up a fictitious story
about a conference
that you might go to.
So you go to a conference
and big fancy professor
gets up there and talks
about his computation.
He talks about the math
behind his algorithm.
He might talk about the physical
simulation that he's doing.
He might tell you how much
time it took to calculate.
And suppose a student
in the audience
is all excited and inspired by
Mr. Fancy Professor and says,
can I-- oh, I want
to do this too.
Can I have your software?
What machine did you run on?
How did you do it?
And I'm not saying that
professor is really snobby.
In many cases, the
professor actually
doesn't know how it was done
because his graduate students
wrote the software.
And he doesn't know.
And he doesn't even
think it's important.
But as far as
teaching our students,
this culture has to change.
The very essence of
science, as we all know--
whatever it is that makes
us believe in science
is the reproducibility
of science.
And so many people are
talking about that these days.
We really have to
change the culture
so that you can't say that the software is coming out later or that it's not ready yet.
It just can't be done.
If you're going to
publish a paper--
it's even a question
of whether the paper is
the primary anymore, or is
the software the primary?
This is already changing.
And then I think we'll
see more changes.
And on a completely
related note,
there's the question about
elegance in computation.
And so as a
mathematician, I've often
seen beautiful
mathematics, mathematics
that makes me just
feel very happy
and mathematics that's
just nuts and bolts.
And it's time that we
also talk about what's
really good in computing.
When we see an
elegant computation,
we should praise it.
And we should teach students
to do really nice computing.
All right.
Now the story of bridges.
And so as I've mentioned,
I've been involved
with the Julia project--
I have it on the back
of my phone here--
for the last several years now.
And our dean of
engineering talks
about creating bridges at MIT.
And I've had all kinds of
interdisciplinary work at MIT.
Many people have done it.
But the Julia project for
me has been a very different
experience.
And I'd really love
to tell you about it.
So Julia is happening everywhere
at MIT, in all the courses.
I asked one of my PhD
students to put together
a list of sample names--
it's not everybody--
but you could see that Julia is
happening in every department
at MIT, just about.
If your name is missing, or
your department's missing,
please let me know
and I'll add it.
And I'll also give a plug
for Manvitha's poster.
You'll see her later today.
But I want to say
something about how
this is more than just--
I could build a calculator and
mass produce it and give it
to everybody at MIT.
And I could say, oh
yeah, everybody's
using my calculator.
But there's something
different that's happening.
And this is what's
exciting me more.
People are talking to each
other across departments.
People are using each
other's software.
And people are
building-- it's not
just everybody doing the same
software reinventing the wheel.
People are going bigger
and bigger and bigger.
And this is really
what the excitement is.
This is what we want
to see from computing.
And it's exciting for me
to watch exactly that.
OK.
And this is my last slide.
And this is kind of related
to my previous slide which is
the notion of how software is--
software for sure is
mathematics gone live.
There's no doubt about it.
And it's enabling us,
as Professor Boyd said.
But it's actually
even more than that.
I mean, that's pretty good,
that mathematics going live.
And we've gone far with that.
But it's actually
more than that.
And this is the sort
of thing that I'm not
sure I appreciated until just
a year or two ago myself,
but I'm watching it happen.
And the idea is when
you can actually
compose software, when
software can work together
with other software
so you really
can build upon other
software, things
that you can't even imagine
happening start to happen.
So I've got sort
of a bad example,
and then I'll give you the
more modern good example there.
In the box there, I have
an example of a homework
that I gave in my
parallel computing class.
This was an old homework.
I only did it once.
I didn't dare do it again.
But what I did was
I asked students
to download a parallel program
off the internet, anything
they wanted, and make it run.
Sounds simple enough.
75% of these good,
hardworking MIT students
succeeded, but not 100%.
Then I asked them to make two things work together: compute a fast Fourier transform on a matrix and invert it.
You know, whatever.
And they couldn't do it.
Nobody could do it.
Nobody could get it
to work together.
One student finally got it
working after the deadline.
And he's gone on to become
pretty famous in the area.
So nobody could do it.
And that's a problem.
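For contrast, with today's composable numerical libraries, the serial version of that exercise really is a couple of lines; here is a NumPy sketch (the parallel version was the hard part in the story, and this is not that):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((256, 256))

F = np.fft.fft2(A)        # 2-D fast Fourier transform of the matrix
F_inv = np.linalg.inv(F)  # ...and invert the result

# Sanity check: F @ F_inv should be (numerically) the identity.
assert np.allclose(F @ F_inv, np.eye(256))
```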
Now here's what's happening today. I'm sort of seeing it in the Julia world, and it could happen in any world that lets software compose: you could do something like take a machine learning program.
Machine learning is built
on matrices with parameters.
That's what deep
neural networks do.
What if you're creative?
I don't want to use
matrices with parameters.
I want to think out of the box.
I want to think differently.
I want to use
differential equations.
Well, Chris [INAUDIBLE] is doing
that with the Julia software
because the technology lets you.
He goes ahead and does
it, and they're now
using that in
customized medicine
to be able to give
the correct dosages
for individual people.
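The actual Julia work isn't shown here, so the following is only a toy Python sketch of the idea: let the "model" be an ODE solve whose right-hand side has a learnable parameter, and fit that parameter to data the way you would fit network weights. (Real systems differentiate through the solver; the finite-difference gradient here is just for illustration.)

```python
import numpy as np

# Ground-truth data: exponential decay with an unknown rate (here 0.5).
t = np.linspace(0.0, 5.0, 50)
data = np.exp(-0.5 * t)

def model(rate, t, dt=0.01):
    """The 'model' is an ODE solve: integrate dy/dt = -rate * y with Euler."""
    y, clock, out = 1.0, 0.0, []
    for target in t:
        while clock < target:
            y += dt * (-rate * y)
            clock += dt
        out.append(y)
    return np.array(out)

def loss(rate):
    return float(np.mean((model(rate, t) - data) ** 2))

# Fit the ODE parameter by gradient descent with a finite-difference
# gradient (real systems differentiate through the solver instead).
rate, lr, eps = 1.5, 0.5, 1e-4
for _ in range(200):
    grad = (loss(rate + eps) - loss(rate - eps)) / (2 * eps)
    rate -= lr * grad

print(rate)  # moves toward the true rate of 0.5
```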
So I'm going to finish
with my motto here,
which is that when software
composes, humans compose.
And I literally mean working
together and being creative.
So thank you very much.
[APPLAUSE]
Thanks, Alan.
So next up is Wendi Heinzelman.
Wendi is the Dean of the
Edmund A. Hajim School
of Engineering and
Applied Sciences
at the University of Rochester.
She's also a professor in
the Department of Electrical
and Computer Engineering
with a secondary appointment
in the Department
of Computer Science.
Her research interests
span diverse areas
from wireless communications
and networking
to mobile cloud computing and
multimedia communications.
Wendi.
Thank you.
It's an absolute pleasure
to be back at MIT.
I graduated from here
a few short years ago.
And so it's really nice to be
back and see so many professors
and colleagues who I
haven't seen in a long time.
So thank you for the
invitation to come and speak.
Let me get that working.
Where am I aiming this?
The big button?
There we go.
OK.
So I don't think I have to
convince anyone in this room,
but there are a lot of people
who don't really understand
that there is basically no
field today that doesn't either
produce or consume just massive
enormous amounts of data.
And that data is
really the thing
that's going to drive the new
innovations, the new insights
into those fields.
So I loved hearing the
example this morning
from the humanities about
the program that figured out
all the words of
she and he and him
and her in, like, massive
amounts of textbooks.
Can you imagine asking graduate
students to do that by hand?
You know, just 10,
15 years ago, that's
what you would have had to do.
It's almost impossible.
But nowadays, we have the
tools and the techniques
to be able to gather
that data and make
key new insights that we
could never make in the past.
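As a rough sketch of how little code that kind of counting takes today (my illustration, not the actual project mentioned in the panel):

```python
import re
from collections import Counter

def pronoun_counts(text):
    """Count gendered pronouns in a text, case-insensitively."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    return {w: counts[w] for w in ("she", "he", "him", "her")}

# Run it over a corpus of plain-text books, one file per book:
# with open("book.txt") as f:
#     print(pronoun_counts(f.read()))
print(pronoun_counts("She said he gave her the book, and she thanked him."))
```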
And really, data
is the new gold.
It's going to be the new thing
that is going to set industries
apart, that's going
to make them be
able to come up with new
advances and new techniques.
And so it's really important
that any student who
is educated in the 21st century
understand how important this
is, understand how to
compute on the data,
and how to be able to make these
inferences out of that data.
Now, computational thinking
is not just about coding.
I certainly think it's important
that every student know
how to code.
And many students,
particularly here at MIT,
probably all of your students
come in knowing coding,
whether they're in
the music department
or they're in the
aerospace department.
They are tech savvy.
That's probably
one of the things
that attracts them to MIT.
But that is not the
case in hundreds
of universities and
institutions around the country
where many students come in
never having touched a computer
before, never having coded.
They certainly have
their smartphones
and they've played on
them, but they've never
actually done any
design, analysis,
creation of their own.
And so it's really
important that we
do teach them how to code.
But it's also important
that we teach them
about computational thinking.
So abstractions were already talked about before.
I think abstraction
is a key concept where
we have to get
students to understand
that there are these massive
problems that they're
trying to solve.
If you can abstract away
some of those details--
maybe you come
back to them later,
maybe you don't need to
come back to them later--
that helps you solve
those problems.
Same thing with decomposition.
If you can take
a massive problem
and decompose it into
manageable pieces
that you do know how to
solve, and then build it back
up together to solve
the original problem,
again you're going to get
really interesting new insights
and you're going to be able to
find new solutions to problems.
Obviously, algorithm design.
That's sort of one of the keys
to being able to describe step
by step how you can go
about solving a problem
or doing something.
And then, finally,
pattern recognition.
We all know how important
pattern recognition
is becoming.
We have lots of different AI
machine learning techniques
that are basically taking
all this massive data
and figuring out how
to find these patterns
and how to use those to
predict things going forward.
Now I'm going to talk about a
few different models for how
we can teach this sort
of computational thinking
to all students and in
particular, in this panel,
we're focusing on science
and engineering students.
And so as I mentioned,
I feel very strongly
that for a student to be
educated in the 21st century,
they need to have some
sort of computing chops
that they can go out
into the world with.
And this is not just
about getting a job
but as someone
was saying before,
it's about how are we
going to advance all sorts
of different disciplines.
Having students
educated in these skills
is going to make
a huge difference.
So in the foundational
computing model,
the idea is that computing
is sort of a generic skill
that all students need to learn.
This is similar to how we've
been teaching math in science
and engineering over the past
several decades, I would say,
where in the current model
students take math courses,
they take their calculus,
differential equations,
linear algebra classes.
They all take that
in years one and two.
And then we all build on that.
We assume the students
know that math,
and we can use it in our
classes that we teach
in the disciplines themselves.
You can imagine a similar
model in computer science
where you have computing skills,
very introductory courses that
are taught that are open
to all students within all
of the disciplines.
They take those courses.
Everyone assumes that once
you complete those courses,
you have the skills and then
you can use those skills
in your subsequent classes.
And there are pros and
cons to this model.
So one of the pros
is that you're
creating a standard skill set
upon which everyone can build.
And you're providing
the education,
you're providing the
knowledge of these techniques
by the experts in
the field themselves.
Another advantage is
that oftentimes students
will change their major.
They might start as a
biologist and decide,
I really don't
want to do biology,
I'd much rather do biomedical
engineering or chemistry.
And if they have a sort of
common skill set that everyone
can build upon, they can
transfer those skills
into the other majors.
However, it's really hard for a
computing course or any course
to be everything to everyone.
And so this makes it very challenging to teach these courses in a way that really meets the needs of all the different disciplines.
And of course, this creates
a very, very heavy load
on computer science which
becomes a big issue.
The second model is
teaching computing
as a domain-specific skill where
you're actually teaching it
within the domains
themselves, and you
tailor computing, the
skills that are taught,
as well as the
examples that are used
to the disciplines themselves.
And this has a lot
of advantages in that
the domain-specific examples
can be really motivating
to students,
especially students who
think, oh, computer
science, computing, I
have no interest in this.
I'm going to be a biology major.
I don't need to know this.
But if you teach it to them
within the context of something
they're really interested in,
it can be very motivating.
It also does distribute the load
to teach all of these courses.
However, it's very
challenging to ensure
that you have experts who can
teach these courses in all
of the different disciplines.
And then there's the model
of the CS plus X, which
I believe there is a whole
panel on this afternoon,
so I'm not going to go
into much detail on this.
But this is basically the idea
that you take half your courses
in computing, half your courses
in whatever other discipline
you have, and kind of meld
those together into degrees.
And one of the things I
really wanted to focus on
is we know that
computing in general
has not attracted
the numbers of women
or underrepresented
minority students
that are representative
of the population.
And we also know the importance
of diversifying our field
because without getting lots
of different backgrounds, lots
of different ideas, we're going
to be missing out on solutions.
We're going to be
missing out on systems
that have inherent
biases in them
that we don't see because we're
not part of those communities.
And so it's really important
that we diversify the computing
field.
And there's been a lot of work.
I want to call out the
CRA-W, the Grace Hopper
Celebration of
Women in Computing,
and the TAPIA conferences.
These are all great ways
to create communities
among diverse groups of
students so that they
can feel welcome
in the communities
and thrive in these communities.
One of my concerns is that some
of the science and engineering
departments are much
better at attracting
underrepresented students and
attracting women in particular.
And so the concern is that we
don't want computing courses
to drive further away
some of these students.
And indeed, what we'd
like to do is create
nice on-ramps to computing
so that the students who
take those courses and say,
hey, this is really fun.
This is a lot of fun to code.
It's a lot of fun to think
about database systems.
Maybe I could double
major in computer science,
or maybe I could minor
in computer science.
We actually could attract more
diverse students into computing
by offering these different
on-ramps onto computing.
And in fact, at
Rochester, we've done this
in trying to create
lots of different ways
that our students can learn
about computer science
with zero background, not
having any skills in computing
in the past.
And what we've found
is that about 30%
of our graduating class
is female at this point.
It's about 33% right now.
And about half of those
are double majors.
So half of them came in wanting
to major in something else,
tried a CS course,
realized how great it was,
how fun it was, and
then decided to double
major in computer science.
So I think that there's
a real opportunity
to help us diversify our
field by teaching computing
sort of throughout the
disciplines as well.
Thank you.
[APPLAUSE]
Thanks to all of you.
That was great.
So I think in the
interest of time,
since what follows this session really is lunch, maybe
I'll just open it up to
questions from the audience.
Great.
Mic.
So one of the things I've
been thinking about listening
to these talks is
this: when we want to get computer science used in related fields, how much do people from those fields actually need to understand about computer science, right?
If I'm a chemist,
I assume that I
can use a spectrometer without
knowing how to build one.
And I wonder whether we can
get to that point in computing
where we've sort of boxed things
up enough so that they're just
tools that people use versus
the need to actually become
a jack of all trades
computer scientist
in order to do the work they
want to do in other fields.
I know some of you have
opinions about this, so jump in.
I think that's--
actually, I can't see you.
There's a column.
OK, so you are here.
I guess you couldn't see me.
Reciprocity.
That's a superb question.
And I think actually we have
to struggle to achieve that.
Actually, it works
both ways, right?
How much-- I guess the
traditional curriculum
is that you teach people
everything in that field.
You start from
the very beginning
and then a year later
you know a little bit,
two years later you
know a little bit more.
And you teach them everything.
And I think there's
a new-- actually,
this also means for
doing outreach stuff.
You need to condense-- it's
not like you shorten that.
You throw away a
lot of that stuff.
You abstract it away.
So it would be wonderful
to have courses--
I know they're
starting to exist--
that teach you just the basics and a lot of it is obscured.
And it's actually OK
as a professor when
someone says, how
does that work,
and you simply look at them and
you go, none of your business.
And then you lean
forward, and say
I'm going to let you in on the secret: I don't even know.
But it does.
So I think it's a
very good point.
Wendi.
Yeah, I'd also like
to address that.
I think that's an
excellent point.
I think one of the concerns is that it's not just a case of not knowing how a microscope is built.
When something goes wrong, do
you know that it went wrong?
And so in the microscope case,
if something is malfunctioning,
you're getting
crappy data, you need
to know that you're
getting that bad data
and why you're
getting that bad data.
I think the same thing is
true in a lot of computing
instances, especially
with deep learning
and with any kind of
machine learning techniques
where, depending on
the data that you
put in, the data you get
out is telling you something
based on the data you put in.
And if you look at
it as a black box
and don't really understand
how it's working at all,
you can get into making really
bad decisions because you don't
understand how it's working.
So I think there's
a very fine line.
I think for sure-- and my
computer scientists tell me
this all the time--
it's not that we need to
create computer scientists
in every field.
That's not the case.
We don't need people in every field who are advancing the state of the art in computing and designing new algorithms and things like that.
We need to get people the
skills so that they can
use those tools and techniques.
And that's a different
thing than creating
a bunch of computer
scientists sort of all
across the community.
To address Professor [INAUDIBLE]'s question,
I'd like to say yes and no.
So less computer science
and more computer science
is going to be my answer.
Of course, it depends.
If the thing that's
being built is a tool,
then you don't need to know
how to do a bubble sort.
That's clear.
If what you're doing is advancing computational chemistry, say, and you are building a computation on a grid, and so is somebody in EAPS, and so is somebody else, maybe you do need to start to figure out what computer science abstraction lets everybody work on a grid and not have to reinvent the wheel, and push the ideas further.
And so it depends.
Another question.
Thank you so much for the talk.
So my name is
[INAUDIBLE] I'm a postdoc
in mechanical engineering.
So I have a question about--
so when students choose their majors, sometimes they believe that they can have a better career because of the major, and sometimes they believe that they like the major. So I was wondering, as an educator, is there any difference in how you treat these different scenarios, and how do you make sure that they align?
Thank you.
I can address that.
I think in any engineering
and science field,
you're going to have lots
of career opportunities.
But we deal with this a lot
at the University of Rochester
with humanities
majors, for instance.
There's a lot of parents
who tell their kids,
don't go into the humanities.
You're not going to get a job.
And we fight back
very hard against that
because there are
so many skills you
get in the humanities
that will serve you
well the rest of your life.
And we have data from
our own students,
from our alums, that show
that your major does
matter for your
first job, and it has
almost no impact after that.
So it's more about
the skills that you
can get in those majors.
And so we do try
to really encourage
students to find what
they're passionate about
because whatever
you're passionate about
is what you're
going to excel at.
If you don't like
what you're doing,
even if you think it leads to a good job at the end,
you're not going to do great in
it because you don't enjoy it
and you're not putting your
heart and soul into it.
So I think it's really
important to make
sure students are thinking
about what they enjoy,
what they're good at,
what they want to do,
as opposed to just
thinking about a job.
Now that being said, I think
infusing computation skills
into everything is going to
make students that much more
marketable when they get out.
Thank you.
Go ahead.
Hi.
Warren Seering, also from
the Mechanical Engineering
Department.
We've just heard that it depends
how much computer science do
you need to know
to do something.
If it depends, who decides,
and particularly can
we trust the students to decide?
Hmm.
Who wants that one?
It's yours.
[LAUGHTER]
I tend to trust the
wisdom of faculty
in consultation
with the students.
So I'll just say that.
Any rebuttal?
That's an excellent answer.
Faculty in consultation
with the students.
Sounds like the right way.
Great.
A question.
[INAUDIBLE]
Excellent.
We talked about
improving diversity.
Before the break,
several of us were
talking about how
computer science is just
like mathematics.
It's something everybody
learns and they infuse it
in their work.
But by expecting students
to be A, skilled in math; B,
skilled in CS, we're probably
creating some new barriers
to diversity, because a lot of students who really have the creativity and the problem solving to be an engineer drop out because they can't do the math.
And now we're saying, well, you've gotta be able to do the math and the CS.
We know, in
underrepresented groups,
math skills coming out of high
school are frequently poor.
Now we're saying, oh, and you
have to be good at CS too.
Aren't we creating
some new barriers?
Tony, you may want
to address this.
Others may want to address this.
Sure.
I mean, one of the core skills
is collaboration, right?
And I mean, we like saying
at Pixar that making a film
is a team sport.
It takes 100 people
working together.
There are artists
that really don't
have much technical skill.
There are technologists that
don't have much artistic skill.
But they work together to
create these beautiful stories.
And so I think the more we
can create opportunities
for students to hone
collaboration-- somebody
said it earlier that it's
something you have to teach,
right?
It's not something
that's emergent.
And so I think
collaboration is something
you really need to
focus on, especially
in the context of diversity.
I would also say,
again getting back
to this concept of
on-ramps, that you
have to have ways that kids
can get into these fields
where they haven't been
coding their whole life.
And so that's, I think, one of
the big turnoffs to many women
and underrepresented
minorities in computer science
is that they're sitting in
a class with someone who
built their own computer
from scratch when
they were 10 years old, and
they've never done anything.
And then you just feel
like, hey, wait a second,
I'm not good enough.
And that's not the case.
They just haven't been exposed.
They don't have the experience.
So if you can create
lots of ways--
and you don't want to take
the person who's been coding
their whole life in and do--
you don't want to hold them up.
So you have to have
opportunities for them
to go soar, but you also have to
have opportunities and on-ramps
for those who are not
comfortable with this, who've
never been exposed, to try it.
And they may hate it and go
away, but they may love it.
And so giving them
that opportunity--
and I think the same
thing is true with math.
We've been experimenting
at Rochester
with creating different
types of math courses
that are teaching
the same content,
but maybe slowing
it down, having
it be utilizing summers--
so students do
have to stay and we
pay for them to
stay over the summer
so that they can
kind of get caught up
with some of the
other students who
had great math in high school.
So it's a huge issue.
And unfortunately, I
think it's something
we have to address
in the high schools
that we have such lousy
math, science, and computing
education in so many inner-city high schools and such great versions of those in suburban, well-off, affluent high schools that we create this two-class system.
How do we address that
both at the high school
level and then creating
on-ramps at the college level?
My name is Curt Newton.
I'm with MIT Open Learning.
Reflecting on Professor
Edelman, the story
you're telling about how
you were able to come up
with this really
fortuitous collaboration
between the climate modelers
and what you're doing in Julia,
how are we doing at helping people who have those kinds of problems they're wrestling with find the resources?
How proactive or
how intentional are
we able to be versus
just getting lucky?
Everyone's looking at me again.
So you know, the internet
is an amazing thing.
And I'm sure we're at
the tip of the iceberg
as I think your
question suggests.
There could be way
more collaborations.
GitHub is an incredible thing.
People can collaborate online.
The Julia project-- many
people were involved
in the creating of
Julia who had never
met each other until
some conference happened.
There's lots of ways now.
And probably we need to foster
that as part of the education
until it becomes
familiar to everybody.
Any other questions?
Please.
Hi.
My name is [INAUDIBLE].
I'm a PhD from MIT, from the Media Lab, then a research scientist here for the last 20 years, and an entrepreneur building technology companies and e-learning, teaching computer science, design thinking, and computational thinking.
I think that one point
about what many of you
are talking about is that
we want to teach computing
in the disciplines--
we heard it in the
earlier panel--
because it teaches
students how to think,
how to solve problems.
And it's a great tool to
think about that discipline
in some creative,
innovative way.
And yes, learning about the disciplines teaches the engineers and the computer scientists, gives them tools to think about programming and making decisions.
And what's lacking a
little bit is how we also
orient the students, not just
about ethics but about impact,
solving problems that matter,
doing engineering and computer
science, and using these tools
in ways that are actually
changing the world.
And maybe you can talk a little
bit more about that, Wendi,
when you are hinting towards
starting doing the computer
science in the disciplines and
the disciplines in the computer
science already
in the K12 system
so people can grow and
be ethical but also solve
problems that matter and
really make an impact on what
the world needs now.
Yeah, no, I think
that's absolutely key.
And I love the panel that we
had before on teaching ethics
in computing and in
all of the disciplines
because if you think about
it as an afterthought,
you're already using
tools and techniques now
that are inherently based
on a set of assumptions.
If you think about it from
the get go, from the start,
you're looking at how you create
solutions that are ethical.
Now, that being said,
of course, there's
all sorts of unintended
consequences.
And it's going to
be impossible to be
able to think through all
the potential outcomes
of technology use in all
of the different fields.
But if you have lots
of different people
from different backgrounds
coming together to
think about these
ideas, and I think
bringing in the philosophy
folks and the ethicists
to help kind of formulate
even what the questions are,
because it's very hard
for engineers and computer
scientists who haven't thought
about this before to be
thinking about that--
so having those
interdisciplinary teams I think
will help and go a long way towards addressing that concern.
Tony, do you want
to say anything
about computational thinking
in the context of youth
storytelling?
Sure.
This kind of comes to teaching
it from within another domain.
So we piloted a CS course with
the Oakland Unified School
District.
It was a middle school
class where it was--
they have a, you know, teaching
how to build web pages.
This was a second class which
was coding for storytelling.
And so about a
third of the class
was walking students through
the storytelling principles
in season three.
And so students came out of that part of the course with an original story of their own, with original characters, so really kind of their personal voice, something that meant something to them.
And then the rest
of the class was
how do you realize
that story in code.
They were using Code.org,
but it could have
been Scratch or other things.
So I think it brought students
that might not otherwise
be interested in computing--
you know, they're coming from
the arts and humanities--
brought them into
the field a bit.
OK.
I think there's
one more question
and then we're going
to break for lunch.
Hi.
I'm Roy McDonald,
PhD student here.
And we heard about the
importance of making sure
that software is open
source especially
as things become
more black box, and I
think that's a tradition
that looks set to continue.
But it seems sometimes
that the data required
to kind of either put things
into practice or push research
or play around with
your own ideas,
if you're going
to start companies
in that kind of things,
it seems that data
is becoming less available
sometimes, especially as more
and more of it is being
collected by private companies.
And I was curious what you
think universities' role is
in making sure that that's
available to students.
Thank you.
I would say that we need to do
our best to make this a value.
And we should all be
talking about this question,
about how important it is
to have access to data.
And very often the very same private company that is trying to protect its data, acting sort of in its own self-interest, as perhaps companies need to, might even benefit in ways it doesn't realize by opening it up.
And so the more stories
like that, the more people
realize that, the more
we all ask for that,
the better we will all be.
I think there's also a role
for government agencies.
So for example, NSF has
a number of programs that are trying to entice and provide funds for faculty and students to create
data sets that then
have to be open, that are
then available for the entire
community to build off of.
So I think the more that
agencies like NSF and NIH
really support open
access of the data,
the better off
we're going to be.
I would say one thing.
I think your point is dead on.
Access to data is extremely--
public access to data
is extremely important.
And it's not that good now,
and it's getting worse.
But I think actually
one of things
we'll have to address
to take care of that
is a bunch of incentive
structures in universities
and among researchers.
So what's wonderful is if
you win some fancy prize
or you write some fancy
paper or something like that.
There are other extremely
important artifacts.
One is software that
we've talked about.
Another is data and
making it available.
These provide incredible
value to a very wide range
of researchers.
And it's approximately not
valued at universities.
I'll speak from my own.
Maybe at MIT where it's
much more enlightened,
that's not the case.
[LAUGHTER]
But I think that's--
now the good news is that sort
of the younger people coming up
who will eventually
be running things,
they're totally on board with this.
So it's a very good point.
So we are going
to wrap it up now.
And please join me in
thanking our great panelists.
[APPLAUSE]

---

### Computing is for Everyone
URL: https://www.youtube.com/watch?v=Du5rlleQyk8

Transcription not available

---

### Experiences with CS+X Majors and Curricula
URL: https://www.youtube.com/watch?v=fyato3wXsN4

Transcription not available

---

### Teaching Computer Science to All
URL: https://www.youtube.com/watch?v=N8HdEbAVLP4

Transcription not available

---

### Welcome to CELEBRATE: The College!
URL: https://www.youtube.com/watch?v=0NLFItC-4dA

Transcription not available

---

### Computing: Reflections and the Path Forward
URL: https://www.youtube.com/watch?v=vCyZUiBr_ds

Language: en

I'm delighted to introduce
the first technical session
of the day, whose focus is MIT's
rich history of innovations
in computation,
and how that sets
the stage for possible
paths forward.
During this session, we'll hear
from Sherry Turkle, the Abby
Rockefeller Mauze Professor
of Social Studies of Science
and Technology, Sir Tim
Berners-Lee, the 3Com Founders
Professor of Engineering,
and Patrick Winston, the Ford
Professor of Artificial
Intelligence and Computer
Science.
They will share their
perspectives on MIT's role
in advancing the frontiers
of computer science
research and education, in
scaling access to information
and computation globally, and in
considering the social impacts
of computation.
To help set the
stage, let me give you
five very quick examples
of MIT's wonderful history
in computing.
The pioneers of
artificial intelligence
included Marvin Minsky and
John McCarthy, key players
in the 1956 Dartmouth Conference
that created the field.
Both served as MIT faculty.
And McCarthy, of course,
later moved to Stanford,
to launch its AI efforts.
Public-key encryption
underlies much
of modern secure
communication on the internet.
It was pioneered, in part,
by Ron Rivest, Adi Shamir,
and Len Adleman, or RSA--
all at MIT at the time.
And others, of course, also
contributed to this area,
including MIT
alumnus Whit Diffie.
Starting in the 1960s,
MIT fostered a wide range
of fundamental research in
human and computer vision,
manipulation, and locomotion.
This led, among
others, to the creation
of iRobot, Boston Dynamics,
Mobileye, and SenseTime.
In 1983, jointly
with DEC and IBM,
MIT launched Project Athena.
It provided access to
computing resources
for every student at MIT.
And it led to the creation
of Kerberos, the X Window System, and Zephyr, an early
instant messaging system.
And in education for all,
I've had the privilege
of working with John
Guttag and Ana Bell,
to create a MOOC on
computational thinking--
which has had 1.2 million
learners around the world.
These are just a few examples of
MIT's influence in computation,
and its impact on the world.
And later today, you'll hear
examples of current innovations
in computation, in such diverse
areas as biology, medicine,
economics, design, urban
planning, finance, and others.
So let's get the day started.
[APPLAUSE]
Good morning, I'm Sherry Turkle.
I will be talking about
a critique of the idea
of the friction free.
Most of us here
today were introduced
to the idea of the friction
free as a really good thing.
It's an aesthetic of
engineering efficiency,
so why shouldn't it be
a really good thing?
But technology is the
architect of our intimacies.
Technology shapes our ways of
thinking about social life,
about politics.
It shapes our ways of thinking
even about ourselves--
about the self itself.
So this idea that
technical things
should be smooth
and easy blends into
and bleeds into other domains.
Efficiency becomes
aspirational--
in politics, in
business, in education,
and in our thinking
about relationships.
And that's the kind of
thing that I study here
in my MIT career.
In my own research on
technology and people,
I see hopes for the
friction free pop up--
when humans of all ages tell me
why they prefer, for example,
to text rather than talk.
Why they would rather send
an email to a colleague
just in the next cubicle,
or in the next office.
Or why they would rather text
their spouse than have
a face-to-face conversation.
It's usually tied up with a hope
for greater efficiency and less
vulnerability.
That's friction free.
Now, artificial intelligence,
perhaps without meaning to,
has become deeply
woven into this story.
Why?
Because artificial
intelligence is
almost definitionally about
the promise of efficiency
without vulnerability--
or, increasingly,
about the illusion of
companionship without the
demands of friendship.
But by trying to move ahead
toward the friction free,
we are getting ourselves into
all kinds of new trouble.
But here, I get ahead
of myself, and let
me backtrack just for a moment.
The idea of the friction free
has particular meaning for me,
because I'm a member
of a generation that
sold it to the world.
So I want to begin by
talking about my generation.
And I'm the Harvard
class of '69.
Don't boo, don't boo.
And we're about to have our
50th college reunion in June.
And since the 2016
election, I've
been studying how my class--
that class of '69--
that famous class of '69--
thinks about our choices.
And I've discovered
that as we look back
many of us had the
expectation that for us things
should be easy, including
our political activism.
Now, why did we think that
things should be easy?
In my interviews, I hear that we
were the children of those who
had triumphed over fascism--
in some case, over the
threat of extermination.
My parents, for example, told
me that they had saved the world
so that I wouldn't have to.
And we were supposed
to have an easier life.
An easier life.
Well, the Vietnam era,
that wasn't so easy.
But after the war
was over, my cohort
was quick to declare victory.
We inserted ourselves into
a narrative of progress,
technology, and efficiency.
We even had new ideas about
making politics efficient.
Consultants-- we would outsource
activism to professionals.
This idea of efficiency
shaped our worldview.
And we shaped a world in which
when people look for solutions
the first look is often to the
efficiencies of technology.
Indeed, my generation
made our love
of the digital technology
that came of age with us
central to our identity.
This new digital world,
infused with our values,
had a distinctive aesthetic.
And I have said what it is.
The difficult will
be made easy.
The rough will become smooth.
That which had friction
will become friction free.
That digital world we would
give to ourselves and we
would give to our children.
And this new world that
the computer augured
wouldn't just be friction
free in the sense
that economic transactions
would go more smoothly--
helped by such things as
electronic-funds transfer.
No, this vision was to
minimize and even eliminate
social friction, as well--
interactions that might
cause emotional stress.
In one often cited
near-future scenario,
that is near self-parody, but
which is actually partially
translated into an actual app
that you can put on your phone,
you order a beverage
on your phone--
your mochaccino, cappuccino,
whatever you kind of want--
and you send it to your
favorite coffee shop.
And as you walk to pick it
up, an app on your phone
routes you so that you avoid
your ex-spouse, or anybody
you are having an argument
with-- your department chairman
who you're not in
a good place with.
And you only pass
your friends.
It's like the Marauder's
Map in Hogwarts.
It's a Harry Potter thing.
And it prevents you from
seeing any of these people
with whom you might
have any-- and there's
the word-- friction.
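Mechanically, such an app is nothing more than shortest-path routing over a street graph with some nodes excluded. A minimal sketch, with a toy graph and invented place names (not any real app's code):

```python
import heapq

def route_avoiding(graph, start, goal, avoid):
    """Dijkstra shortest path that never visits nodes in `avoid`."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen or node in avoid:
            continue
        seen.add(node)
        for neighbor, weight in graph.get(node, []):
            if neighbor not in seen and neighbor not in avoid:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return None  # no friction-free route exists

# Toy street graph: place -> [(neighbor, walking minutes)]
graph = {
    "home":   [("corner", 2), ("park", 4)],
    "corner": [("cafe", 3)],
    "park":   [("cafe", 5)],
}
# The ex-spouse is at the corner, so the app routes through the park.
print(route_avoiding(graph, "home", "cafe", avoid={"corner"}))
# (9, ['home', 'park', 'cafe'])
```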
But who said that a
life without conflict,
without dealing with
the past, or rubbing up
against troublesome people
makes for the good life?
Well, we did.
We did.
And you can see the fit between
my generation's aesthetic
of easy and what's possible
in the world of apps.
But there was also
considerable tension.
Because in many cases,
life was teaching us
one thing and technology
was teaching us another.
Let me go through some examples.
Life taught, for example, that
political organizing was hard.
The internet made it more
convenient, but less effective.
Face-to-face conversation taught
that when we stumble and lose
our words, it's painful, but we
reveal ourselves to each other.
Screen life allowed us
to edit our thoughts,
never be interrupted,
and broadcast at will.
We preached authenticity, but
we practiced self-curation.
Technology encouraged us to
forget what we knew about life.
And we made a digital
world, where we could forget
what life was teaching us.
My generation infused
digital technology
with our value of easy.
But here is my call to arms,
after a professional lifetime
here at MIT studying
this technology.
It's time to associate the
digital with other values
than the value of easy.
Let's say, the opposite of easy.
And it's time to remember
that the opposite of easy
is not just difficult.
The opposite of easy
is also evoked by words
such as complex, involved,
and demanding.
That's what digital
culture demands of us now.
It's time to reclaim our
attention, our solitude,
our privacy, and our democracy.
We have time to make
the corrections--
not much time, but we have time.
And to remember who we
are-- creatures of history,
of deep psychology, of
complex relationships, that
intrinsically generate friction
as they are worked out.
Why is that?
Friction means being authentic.
Friction means
being vulnerable--
putting yourself in the
place of another person,
with all of the conflict
that can bring--
including inner conflict,
that needs to be faced.
How should we take
these insights
into our thinking
about the new college?
First, consider the idea
of unintended consequences.
For years I've written
about technology's
unintended consequences.
And that narrative no
longer fits the known facts.
We now introduce technology
with consequences,
that we can see straight off--
with consequences
that are intended.
We knowingly put in place
technology that will spy on us,
use our lives as data--
for the purposes and the
profit of corporations,
political parties,
governments-- anyone really,
that can profit from what we
say, see, or watch online.
Computer counselors--
this is a subject very
close to my heart--
computer counselors,
in the role of psychotherapists,
are put in place
to simulate the feeling
of human understanding
where there is none.
Technology is becoming an
intentional participant in what
I call an assault on empathy.
Making this step from seeing
technology's effects as
unintended to intended wakes
us up to our responsibility
as citizens, as consumers,
and frankly, as humans.
Second, get responsible
about social media.
Around 10 years ago,
when Facebook was just
coming into high schools, I
began interviewing students
about their attitudes
about privacy.
One young woman, an early
Facebook enthusiast,
told me she wasn't
much concerned.
And she said to
me, who would care
about me and my little life?
And it was a good question.
And here's the answer.
In the current corporate
regime, when we go online,
our little lives
are bought and sold
in bits and pieces to
the highest bidder,
and for any purpose.
When I wrote about
that interview in Alone
Together in 2012, I asked
whether we could have intimacy
without privacy, and whether
we could have democracy
without privacy.
And I argued that,
no, we could not.
But here's the thing.
When I considered
those questions,
I thought about those
two problems separately.
I had a lot to learn.
The social-media
business model evolved,
to sell our privacy in ways that
have fractured our democracy.
All of this unfolded
in plain sight.
But here's what I've
learned in my studies.
Even after we could see it
unfolding in plain sight,
we didn't want to see it.
We had a love affair with
technology that seemed magical.
And like all magic it worked
by commanding our attention,
so that we took our eyes off
what was actually going on.
But here we are
today in a new place
and with a mandate
to pay attention.
We can no longer
say, who will care
about us and our little lives?
Now the question is,
how much do we care?
We have to face not
only the question,
how does technology
impact society,
but another question
more difficult to deal
with but always adjacent to it--
how does society
impact technology?
Because technology is
animated by money and power,
by social values and
social blindness.
Once you look for it, you
see society in technology
everywhere.
If a program to decide who gets
a mortgage sees mostly white
faces, because mostly white
faces have received mortgages
in the past, the
program will be more
likely to say that white
faces should get mortgages.
Society in technology.
If more white people get bail, a
program trained in that culture
will suggest bail
for white people.
Society in technology.
These examples have
become well known.
But they are good to think with
because they illustrate why AI
scientists need to
be trained in a new,
digitally-sophisticated
sociology of knowledge--
because social relations will
always become embodied in code.
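The mortgage and bail examples are easy to reproduce in a few lines. A minimal sketch with entirely synthetic, invented data, showing how a model fit to historically skewed approvals learns the skew even when the groups are otherwise identical:

```python
import random

random.seed(0)

# Synthetic history: past approvals skewed by group, not by merit.
history = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    score = random.gauss(650, 50)   # same score distribution for both groups
    approved = score > (620 if group == "A" else 680)  # biased past decisions
    history.append((group, score, approved))

# "Train" the simplest possible model: the per-group approval rate.
def approval_rate(records, group):
    matched = [r for r in records if r[0] == group]
    return sum(r[2] for r in matched) / len(matched)

print("group A:", round(approval_rate(history, "A"), 2))  # roughly 0.7
print("group B:", round(approval_rate(history, "B"), 2))  # roughly 0.3
# The learned model reproduces the historical gap: society in technology.
```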
So we have to live in
our technological world,
but remember what we know
about life and the life
we want to live.
We have to work
on the real world
as hard as we work
on our technology.
We can't just work
on our technology
and hope it fixes
the real world.
That's how I see the mandate
of this new institution.
Thank you very much.
[APPLAUSE]
Well, thank you.
Thank you for setting up,
for defining the context.
As we go, it's an exciting time.
It's a positive time-- a
time of hope-- in which to be
starting a new college.
But anywhere where you
talk about technology,
you talk about
computing out there--
not just to experts, like you've
seen, but people on the street,
in general--
when you ask them about the web,
about technology in general,
people are very concerned,
people are very skeptical.
There is much more concern--
counterbalancing all the utopia
which we started off
with 30 years ago--
than there's ever been before.
And for very good reasons.
And some of those
have been outlined.
Let me say that, for
me, looking back--
it was 30 years.
We say that the
birth of the web was
when I wrote the first memo,
and that was March 12, 1989.
In 1989, I was
looking back at 20 years
of internet development,
sitting there in CERN in Geneva,
and decided that we needed
to have a global information
space.
It would be really cool.
I needed it.
It should be a
collaborative medium.
And I looked at the technology
which was available.
And I looked at the capacity
of computers and programming
languages, and computer
protocols and so on,
and put together
the world wide web.
But back then, for
those of you who
have enough gray hairs to
remember what it was like,
there was a sort of
fashion for cyber utopia.
John Perry Barlow had
written a manifesto
for cyberspace that
basically said, guys,
we won't need all your
organizational structures.
We won't need your nations.
Because when we connect
in the cyber world,
we will connect just as
individuals, as peers.
There will be peace and love.
And we will organize ourselves
without all this stuff which
comes from nations and laws.
Because on the web, on the
internet there were no nations.
And in fact, it's true.
When I started off, I sat
down and plugged my computer
into the internet
in CERN in Geneva.
And nobody coming to the very
first web server that I built,
with the installations of
the very first web browsers,
had any idea that I
was in Switzerland.
And when they made a blog and
stored it somewhere on the web,
they had no idea.
And they didn't care about
the international border.
So you would have been
forgiven for imagining
that we could go down a path
where we end up producing
very much stronger
social structures,
very much stronger democracies.
For democracies and things,
we looked at the blogosphere.
And, initially, when people
blogged on the internet
there was something very,
very positive about it,
that they felt that they were
both choosing their words so as
to get more and more readers.
And they were
choosing the things
they linked to to only link
to the other blogs, which
were as good as they could be.
And they found that
within the other bloggers
that they discovered
and they linked to,
and who linked to them,
there was this feeling
that this is great.
Because I'm just
writing about the bird
that I like, and all the
other bird fanciers are
writing about the birds--
together, we more or less
have a better online resource
about all these different
birds than we've ever
had-- than has ever
been published in a book.
And things like
Wikipedia, which now is
one of the marvels of the web--
it's a brilliant example of
where people work together
to tweak the way
the processes work,
to tweak the way that you
can complain about things,
the way arguments are handled.
And the way, eventually,
the community as a whole
works towards some idea of
positive absolute truth.
And, Wikipedia, that's been a
great example for the positive.
And in fact, for
the first 20 years
of that 30 years of the web,
if you'd talk to me about it--
if you'd have come
to me and said, Tim,
you invented this web thing
and I found some junk on it.
You know, there's all
kinds of bad stuff on it.
I would say to you as a
user, just don't go there.
Don't click on that again.
If you click on the
links on that page once,
then take it out
of your bookmarks.
Nurture your bookmarks to
point you to good things.
And people did it.
And we all did.
And we all ended up with
brilliant experience of life
on the web.
But, in a way, we were using a
technique which in fact
wrapped us into what we
now know as a filter bubble.
It wraps us into a group of
people who end up living
in a world in which
they all mutually agree
about a lot of things.
And where we don't
worry about the fact
that there is another
group of people--
large groups of people--
who have not ended up
in virtuous circles
where they end up producing
truth, but have ended up
in vicious circles where
they've ended up producing
untruth and nastiness.
And so, for the web, we need to
do a mid-course correction.
I've been calling for many
years-- more than a decade--
for web science.
In the sense of a
science like this:
we have computer science
to look at the computer;
we need web science to
look at this process.
The process of web
science involves
looking at the way people
interact on the web,
and the way organizations
interact on the web.
So it's a very
multidisciplinary thing.
So I've been calling for
that for a long time.
One of the great things
about a college of computing
is that it should be
very multidisciplinary.
Yes, a huge amount of energy
has to go into computing.
But it has to be done in a
way that involves all the other
fields-- some of which are
completely new fields.
But, certainly, all
the ones we know--
we have to involve economics.
You can't understand how
the world works without
understanding economics.
You can't understand how
the web works without it.
Psychology-- you
can't understand
how the web works without it.
And you need people who
understand how
microscopic systems lead
to macroscopic phenomena.
So you need physicists.
You need climate scientists.
Because there is a climate
change which we have spent
a long time looking at, but now
we have a social climate change.
And the social climate
has changed very much
for the worse.
Just as the climate in
the world has got hotter,
the social climate, a lot of
people feel, has gotten nasty.
So we need to use this
college of computing
as a very powerful tool,
to bring all of the fields
around computing
together with computing,
in order to do a reset.
Some of you may
have heard-- yes,
I have a project at MIT called
Solid-- so solid-mit.edu--
which I find exciting as
one part of a sort of reboot
of the web.
It's a project to use web
technology, but in a way
where we reorganize things.
We separate the
apps from the data.
We say that everybody should
be in complete control
of their own data, so that you
get what we call a Solid pod.
You have one or two pods--
some for home, some for work.
They may be out there in a
piece of cloud that you own.
Or you may be running it
on a computer at home,
if you're on the geeky side.
But wherever you
store your data,
the Solid rule is you have
complete control over who
and what gets access
to it for what.
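The real Solid implementation lives at the project sites mentioned here; the following is only a conceptual Python sketch of the stated rule-- the owner decides who and what gets access, for what-- with every name invented for illustration:

```python
class Pod:
    """Toy model of a Solid-style data pod: the owner holds the data
    and an explicit grant list; apps must ask, not take."""

    def __init__(self, owner):
        self.owner = owner
        self.data = {}        # resource name -> value
        self.grants = set()   # (app, resource, purpose) triples

    def grant(self, app, resource, purpose):
        self.grants.add((app, resource, purpose))

    def revoke(self, app, resource, purpose):
        self.grants.discard((app, resource, purpose))

    def read(self, app, resource, purpose):
        if (app, resource, purpose) not in self.grants:
            raise PermissionError(f"{app} may not read {resource} for {purpose}")
        return self.data[resource]

pod = Pod(owner="alice")
pod.data["contacts"] = ["bob", "carol"]
pod.grant("calendar-app", "contacts", "scheduling")
print(pod.read("calendar-app", "contacts", "scheduling"))  # allowed
pod.revoke("calendar-app", "contacts", "scheduling")
# pod.read(...) would now raise PermissionError: access is the owner's call.
```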
And so the Solid attitude is:
use web technologies, but in
a way in which we cheat--
we flip the world around.
People have said, but you're
turning the whole privacy
question upside down.
I think it's more a question
of turning it right side up.
We're building a world
in which individuals
own their own data initially.
And if anybody else wants to
use it, you have to come to me.
It's an exciting world in
which I can build an app--
write a neat program today,
go to sleep, and
tomorrow find out
that people are using
it all over the world--
without me having to build a
backend, because they are using
it with their existing stores.
So for me, that's exciting.
It's the example-- having
that project at MIT,
it was great to have
colleagues, space,
and excitement-- and energy,
and review at MIT and CSAIL.
I hope that the college
will do many things.
Also, we have a startup now.
And one of the things I
think is great about being at MIT
is that MIT is famous for
respecting, and for making
it as easy and straightforward
as possible for you to be
able to spin these things out
into companies, when you feel
that you need to have
commercial energy
behind these things.
So the startup is Inrupt.
Thanks to Glasswing
for funding it,
even though it looks like a
really interesting, different
sort of project.
And folks, do go to inrupt.com,
or solid.inrupt.com.
The hope is that with
Solid, this arc of the web--
which initially, I
thought, started off
as a sort of
potentially utopian thing,
and now seems to be
heading towards a very
dystopian future
for democracy, for
education, and so on,
where all of the things
that we hoped to do
in the utopian days
may be severely threatened
or already are largely
destroyed-- gets a mid-course
correction.
So with projects like Solid
re-decentralizing the web,
turning it back into a place
where individual people have
a mandate, individual
people have power,
then we're hoping that the
trajectory over the next 10, 20
years will be towards massive
individual empowerment,
massive ability of groups to be
able to collaborate and solve
the huge problems in the space.
And if John Perry
Barlow rolls over
in his grave at this
current situation,
maybe his spirit will be
happier with us in the future,
as we build more and
more really positive,
creative, collaborative,
democratic systems on top
of this new version of the web.
Thank you.
[APPLAUSE]
Well, I'm glad to be here.
I've been looking forward
to it for 50 years.
[LAUGHTER]
[APPLAUSE]
When I started thinking
about what to say today,
it occurred to me that I
have been around for a while.
And maybe it would be
my best contribution
if I talked a little
bit about where we are
and where we come from--
and where we might go,
and how we might get there.
In the beginning
I thought, well, I
will give a comprehensive
overview of everything.
But I decided in the end to
point out that where we are
is here at a historic moment--
not only for MIT,
but for the world.
Because computing
is no longer just
for practicing professionals,
it's for everybody.
It's an important thing
to know about, just
like literature and history, and
a little bit of mathematics--
and, perhaps, anthropology.
So that's where we are.
But where we came from,
that's impossible.
I started thinking about this.
And a friend loaned me a chart
from the 25th anniversary
of Project MAC.
And I thought, well,
let's see, I'll
just cover those milestones.
There were about 300.
And extrapolating
to today, I think
I would have to talk
about 1,000 things, which
would give me a little
less than one second each.
So I soon gave up
on that, and decided
I would give you a personal
history of computing at MIT--
and talk a little bit about, and
focus on the greatest computing
innovation of all time.
That's my agenda.
So to start, I want to start
on my very first day at MIT.
As a freshman, I found
myself wandering around
in Building 26, looking for
the lecture hall in which I
would learn physics.
I find myself looking
in this door--
looking in this window.
It was the IBM 7090 computer.
And, boy, was I impressed.
It was inspiring.
This was the day when
computers had gravitas.
[LAUGHTER]
They had blinking lights,
and tape drives spun.
It was wonderful.
But the amazing thing is
that so many wonderful things
were done with that computer.
The first great
AI program was
done on that computer,
a program that
did symbolic integration,
the same way I
was learning to do integration
in my calculus class.
And that computer had--
this cell phone is 50,000
to 100,000 times faster
than that computer.
And this cell phone has about
250,000 times as much memory.
So it's amazing that
anything got done on that.
But in any case, it
was still inspiring.
Well, a few years later--
I think I was a senior--
I witnessed a debate between
Seymour Papert and Hubert
Dreyfus, in a class
taught by Jerry Lettvin.
Lettvin was a character.
He announced on the first
day that there would
be no quizzes and no homework.
Everyone would get
a B, unless they
did a term paper,
in which case they
would get either an A or a C.
[LAUGHTER]
Well, then came the debate.
Dreyfus, a philosopher, argued
that computers could never
be intelligent.
And he talked about how it would
be impossible for a computer
program to play chess
at a championship level.
And he talked about
fringe consciousness,
and used a lot of big words.
And being young and
impressionable, at the end
of his talk, I thought, well,
who will have the courage
to debate against this wisdom?
It wasn't a face-to-face debate.
Papert came in a
few classes later.
And in the meantime,
they had somehow
arranged to have a
match between Dreyfus
and a chess-playing program
written by Richard Greenblatt.
And this game enabled Papert
to start his talk by saying,
Dreyfus has said that
computers can't play chess.
And if that's true, then
Dreyfus can't play chess either.
[LAUGHTER]
In any event, I started
hearing about the artificial
intelligence laboratory.
And it seemed like a place
where fun was going to flourish.
It attracted people from the
Tech Model Railroad Club--
people like Richard
Greenblatt and Tom Knight,
who found that computers
were even more fun than model
railroads.
So I suppose I was right
when a friend of mine
suggested that I might go to
see a lecture by Marvin Minsky,
and I did.
I didn't really know
what I was going to do.
I had found myself
in graduate school.
I didn't know why.
My father had started talking
darkly about law school.
And I went to this
lecture by Minsky.
And there was such
joy in his talk,
and such a pride in what
his students had done--
and such a passion for what
would be done in the future.
I left the lecture
saying, to my friend,
I want to do what he does.
And pretty soon, I
was doing what he did.
And pretty soon after that, he
was talking about what I did.
Here we might give it
a fourth example, this,
and say that is an arch.
And the description
of this structure
agrees with the description
it's been building up,
except for one small detail--
the top thing is no longer
a block, it's a wedge.
And the program has to say,
I'll accept things that
are wedges as well as blocks.
And that's pretty easily
changed by saying,
this can be block or wedge.
Or in the actual program,
it generalizes and says,
that can be a prism.
Well, the point
of the program is
that it doesn't learn so
much a little bit at a time,
as in the traditional
reinforcement theories
of learning-- which
work very well for rats
and very badly for people.
But for each example,
the machine jumps
to some sort of conclusion--
learns a new relation.
And it can learn very fast.
It's learned a lot
from four examples.
On the other hand, it
takes a good teacher.
If you gave it
misleading examples where
there are many differences
between the things it's seen
and the new things,
then it will be at sea.
There will be a
lot of differences
that it could put in here.
And it won't have any
good way of deciding
which differences to
represent in its final result.
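A toy sketch of this kind of one-shot generalization-- jumping to a conclusion from a single positive example-- with invented slot names; the actual program worked over much richer structural descriptions:

```python
# The concept is a dict: slot -> set of admissible values.
concept = {"top": {"block"}, "supports": {2}, "posts_touch": {False}}

def learn_positive(concept, example):
    """A positive example generalizes in one step: any new slot value
    becomes admissible immediately (no gradual reinforcement)."""
    for slot, value in example.items():
        concept[slot].add(value)
    return concept

# The fourth example from the talk: same structure, but the top is a wedge.
learn_positive(concept, {"top": "wedge", "supports": 2, "posts_touch": False})
print(concept["top"])  # {'block', 'wedge'} -- or, generalized, a prism
# A near miss would do the opposite: turn the one differing slot
# into a hard requirement instead of widening it.
```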
So it's good to
know, even back then,
we were thinking about
a different kind of AI
than the kind that's
popular today.
Today what we have is
statistical and perceptual.
And that's complemented
by the things that
were happening back then and
should happen in the future--
the cognitive and the
thinking part of AI.
In any event, I
finished my degree,
and a year-or-two
later found myself
director of the MIT Artificial
Intelligence Laboratory.
There's controversy about
how that came to be.
Some say I had
arranged a coup d'etat.
Others say I was
tricked into it.
But in any case, Seymour
Papert said, don't worry,
you'll only have to do
it for a year or two.
And it turned out to be 25.
Dan, wherever you
are, take care--
this could happen again.
[LAUGHTER]
So a short time later--
really, rather at the beginning,
I knew I was young and stupid
and didn't know anything
about running a laboratory.
So I went around MIT asking
department heads and laboratory
directors how I could make
the artificial intelligence
laboratory a great laboratory.
And to my surprise, in
my first dozen efforts,
no one had any ideas.
They hadn't thought
about the question.
So then I thought,
in desperation, I
would go to see Jay Forrester.
Forrester had built
the Whirlwind computer
in the late '40s and early '50s.
And it was to be a prototype
for the computer that ended up
in the SAGE air-defense system.
And that was a really
magnificent computer.
The relay racks there
were 11-feet tall.
They employed hundreds
of people to build it.
It was the first computer
with magnetic core memory.
It was the fastest computer.
And when I went to see
Forrester it was frightening.
There was a table with
a white tablecloth, that
was set for tea and cookies.
Forrester was in an immaculate
suit and well-chosen tie.
I wasn't.
[LAUGHTER]
And for the first 25 minutes
of our 30-minute interview,
he told me why we
should not have
an artificial-intelligence
laboratory at MIT.
I never did understand why.
But, finally, in
desperation, I said,
well, Professor
Forrester, it must
have been a great
laboratory, because
of the excitement
associated with building
that wonderful computer.
And he looked at me like I
was the king of the fools
and said, young man, we weren't
trying to build a computer,
we were trying to protect
the United States against air
attack from the Soviet Union.
And that had a big effect on me.
Because what it told me
is, you don't become best
by wanting to be best.
You become best by having a big
mission, and then being best
will take care of itself.
So there we are.
We have inspiration,
courage, joy, and mission.
And so it's natural
to think, well,
what should be the mission
in the new college?
And, to me, the mission ought
to be to take everything
at MIT to another level--
not just computing,
but everything.
And that ought to
be in the service
of an even bigger mission,
which, as President Reif
said in his inaugural address,
is what MIT is about: solving
the unsolvable,
shaping the future,
and serving the
nation and the world.
But it isn't all serious, as
even Forrester pointed out.
Listen to this one.
And before leaving,
we would like
to show you another kind
of mathematical problem,
that some of the
boys have worked out
in their spare time-- in a
less serious vein for Sunday
afternoon.
[MUSIC - "JINGLE BELLS"]
Yeah.
So they had fun, too.
But you know, did
you see those words?
The things that they worked
out on a Sunday afternoon
in their spare time.
They were spending
a lot of money,
and they didn't
want the taxpayers
to think that they were
doing just frivolous things.
So what's left?
There is something
that's left, and it
has to do with curiosity.
And by curiosity, I don't
mean just ordinary curiosity.
I mean the kind that
leads to great things--
that sort of
out-of-control curiosity
that led Copernicus
to figuring out
where we are in the universe,
and Darwin to figuring out
where we are in evolution, and
Franklin, Watson, and Crick
figuring out the
nature of our biology.
And when you say, well,
what could possibly be next,
that brings me back to the
greatest computing innovation
of all time.
And what's that?
It's us.
We are the greatest computing
innovation of all time.
Because nothing else
can think like we think.
Chimpanzees can't do it.
Neanderthals can't do it.
And we don't know how to
make computers do it, yet.
But it's something
we should aspire to.
And it's something that
we've been aspiring to
for a long time.
The Greeks started
thinking about thinking.
Alan Turing started
thinking about
whether computers could think.
And Marvin Minsky
showed us how to do it.
But in going forward, I think
we have to go backward too.
And not just a little bit--
about 75,000 years.
That's when we started thinking.
And this is what Ian
Tattersall had to say about it.
So what do you mean
by re-combining?
Well, as Berwick
and Chomsky have
noted in their seminal
book Why Only Us,
it's all about the ability
to put symbols together
to build symbolic descriptions.
And once you have
that operation, which
they call merge, then
you get to what I call
the strong-story hypothesis.
Yes, this is the
strong-story hypothesis,
which says the way we differ from
other species is in the stories
that we tell.
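Merge, in Berwick and Chomsky's usage, is just the recursive combination of two symbolic objects into a new object that can itself be merged again. A few-line sketch (the words are arbitrary):

```python
def merge(a, b):
    """Combine two symbolic objects into a new, unordered pair."""
    return frozenset([a, b])

# Build a nested symbolic description bottom-up by repeated merge:
phrase = merge(merge("the", "stories"), merge("that", merge("we", "tell")))
print(phrase)  # a set of sets: an unbounded tree from one tiny operation
```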
And we start with our
stories in childhood.
They persist
through high school,
and eventually become
the studied stories
in areas like these--
which happen, of course, to
be the five schools at MIT.
[LAUGHTER]
So some say that,
if we do all that,
we will be partaking of
another forbidden fruit,
and that this knowledge will
become an existential threat.
I'm more optimistic.
I am optimistic.
Because, to me, unless we
get hit by an asteroid,
our biggest existential
threat is actually us.
So I take a more
optimistic view.
And I think that,
in the end, there's
no reason why computers
can't think like we do--
why they can't be ethical and
moral like we aspire to be.
Some say, ethical
and moral as we are?
How could it be possible
for a computer to do that?
Well, every time I
watch the evening news,
I think to myself, it
can't be that hard.
[LAUGHTER]
So I don't know
what others may do.
But as for me, what I hope to do
with my friends and colleagues
and like-minded people is go
forward into the future with
these kinds of ideas
painted on a wall--
the desire to put all
those things together
to develop a greater
understanding of ourselves
and how we think, and
how other people think.
And that can't help
but be a good thing.
And that's the end of
my story for today.
But I hope it will
be just the beginning
of a story that will be told
in the days and years to come.
[APPLAUSE]

---

### Computing at the Crossroads: Intersections of Research and Education
URL: https://www.youtube.com/watch?v=Qoghqz5MfNI

Language: en

[MUSIC PLAYING]
Good morning.
My name is Hashim Sarkis, dean
of the School of Architecture
and Planning at MIT, and I'm
very pleased to welcome you
to the next round
of speakers who
will be focusing on the
intersections of learning
and computation.
One of the first things I
learned when I came to MIT
four years ago is
that at MIT, we
use the word "learning" much
more often than education.
At MIT, we learn.
We don't educate.
We learn from each other.
We learn from our students.
We learn from the
machines that we make
that, in turn, learn from us.
And in all this,
we learn by doing.
This pedagogical approach
underpins the seal of MIT--
Mens et Manus-- and its mission
to use science and technology
to advance society.
Motto and mission
mirror each other.
In 1896, pragmatist
philosopher John Dewey,
the man who himself coined the
phrase "learning by doing,"
founded the famous
school in Chicago
called "The Lab
School" on the premise
that we could bring scientific
method to bear on education.
Learning, like
scientific exploration,
should advance through real-life
experimentation, not by rote.
When carefully examined,
lessons from life
can become lessons from school.
Dewey extended his use
of scientific methods
from education to
other aspects of life.
For example, he advocated
using scientific method
to the solution of
social problems.
This approach he
called "democracy."
Remarkably, his embrace
of scientific method
in [INAUDIBLE] led him to
also embrace its corollary--
human imagination.
Learning by doing
involves action.
Action generates uncertainties
to which we have to respond.
And in such uncertainties,
it is our imagination
that helps us
relate the outcomes
of our past experiments
to anticipate future ones.
Every great advance in
science, Dewey observed,
has issued from a new
audacity of the imagination.
Dewey worked very
hard to integrate
imagination and scientific
method in learning,
but for many reasons,
these two attributes
have been increasingly severed
from each other over time.
Dewey was writing in 1929 at the
beginning of the second machine
age and the Great Depression
when new machines were
robbing people of their jobs,
and where mechanical jobs
and rote education
were, in turn, robbing
them of their ability
to exercise their
imagination and to deal
with emerging uncertainties.
I don't know which
machine age we are at now,
but we certainly live in
an age of uncertainty.
We are creating
new machines that
are threatening our
jobs, our imagination,
and our democracies
like never before.
It is precisely
in our commitment
to learning about
these new machines
by founding the Schwarzman
College of Computing
that we need to
invest in cultivating
the human imagination.
After all, this is where
the new jobs will always
be after the old jobs are
taken over by the new machines.
Yesterday, I learned that Dan
Huttenlocher, the first dean
of the College of Computing,
had studied at John Dewey's Lab
School in Chicago.
It bodes very well.
Those of you who know
Dan have seen the work
that he has done at Cornell,
bringing design and creativity
into science and technology.
Dan, welcome back to MIT--
the perpetual lab school where
we thrive on uncertainty,
where science is
the eternal method,
and where, with your help,
imagination will become
the next endless frontier.
Thank you very much.
[APPLAUSE]
It's an honor to be included
in this celebration of the MIT
Stephen A. Schwarzman
College of Computing.
I'm going to talk today not
about computing, per se.
I'm going to talk about why in
life, science, and health care,
we need computing and
computation and technology.
As many of you
know, I'm a member
of the Department
of Biology, which
is not a part of the Institute
that has traditionally
been associated with computing
and artificial intelligence.
However, this is
changing rapidly,
and the interface
represents an exciting--
and perhaps the most exciting--
part of the future of
biological science.
For example, MIT has
recently launched
a new, exciting J-Clinic focused
on machine learning and health
care.
There will also be, this
summer in this building,
a Koch Institute symposium on
machine learning and cancer.
But before focusing on this
interface of cancer and machine
learning and
computing, let me talk
a moment about the history
of biomedical research at MIT
and why it needs transformation
by the Schwarzman
College of Computing.
When I arrived at MIT in 1974
in the newly-opened Center
for Cancer Research, I
discovered a computer
in the corner of the
laboratory of Dave Baltimore.
But in the next 10 years, I do
not think it was ever touched.
Many experiments were underway,
and research discoveries
were recognized by
four Nobel prizes,
but not one involved
the computer.
In a related story that
Eric Lander told me,
when the Whitehead
was being designed,
there was not a room or space
set aside for a computer
until Eric suggested
that it might
be useful at some stage
in the future to have
a computer in the building.
All of this changed in the
1990s with the commencement
of the Human Genome Initiative.
The genome was completed
in 2003, cost $3 billion,
and brought biomedical science
into the IT and computer age.
The MIT community led this
genomic transformation
as it also led the development
of biotechnology in the 1970s.
We're now in the age
of biotechnology,
particularly here
in Kendall Square
with the most concentrated
biomedical research
community in the world.
What is shown here
on this slide is
MIT in the red buildings
along the Charles,
and then the
light-blue buildings
occupied by biotechnology
research labs
with greater than 50 employees.
There are probably
another 100 companies
located in this space that
are smaller than that.
This community has been
labeled "the most innovative
square mile on earth."
The products of
Kendall Square are
improving the lives of
patients with genetic diseases,
spinal muscular atrophy, for
an example, as well as cancer.
Amazing as this complex
is today, biotechnology
and all of life
science, in my opinion,
will be transformed by
artificial intelligence
and machine learning.
Amazon, Google,
Microsoft, IBM, and others
are all locating
research facilities
in and around Kendall
Square with a primary focus
on health care.
We need their expertise and
market power to flourish.
But MIT and the
Schwarzman College
will be an essential part of
making the transformation.
A significant challenge to
the future of this innovation
is the growing cost
of health care.
The red dashed line on this
graph is the percent increase
in the cost of medical
care from 1980 to 2010.
It has increased 2.5-fold
in real purchasing power.
Note that the dashed
line just below that
is the increase in purchasing
power of the top 5% wage
earners in this country,
and the lines below that
represent the percent increase
in income of other lower wage
earners.
Clearly, this separation in
the cost of health care from
the purchasing powers of
families cannot continue.
We, as a nation, are spending
18% to 19% of the GDP
on health care, and greater
cost will have really
significant economic
impact and social impact.
Proprietary pharmaceuticals,
such as patented drugs,
accounted for about 12% of the
cost of health care in 1980,
and they accounted for
about 12% in 2010.
So they have been part of
the increase in health care,
but they do not account
for a major part
of the increase in the cost of
health care in this country.
The cost of every segment of
health care has increased--
physician time, hospital beds,
record keeping, reimbursement,
et cetera.
We need an
across-the-board increase
in the productivity of health
care, better quality health
care per dollar of cost.
High tech and computing have
delivered this transformation
in every sector of society
except health care.
Hopefully the incorporation of
new technologies such as these
will reduce this rate
of growth in cost
while sustaining
increased quality.
The promise of the
addition of data and IT
extends across the
board from genomic data
to access to patient behavior
in patient environments.
We're in the process
of collecting
genomic data showing the
inherent risk of many diseases.
It now costs $300 to sequence
a human genome, not $3 billion
for the first genome.
Medical records and
lifestyle data will
reveal early signs of illness;
pathology, blood assays,
and imaging, all with artificial
intelligence and machine
learning, will precisely diagnose
an illness at early onset.
The integration of
these data will also
be the basis for prescribing
treatment paradigms,
when to treat, how to treat,
and when not to treat.
This is the heart of what
is described as precision
medicine, and the same
innovation, same information,
will drive discovery
in medical research.
When you know where
to look, discoveries
are very likely to follow.
The above data will have to
be shared with physicians
and patients respecting their
privacy in trustworthy ways--
in ways that are understandable
by both physicians and patients.
All of this depends
on new technology--
on having employees and
patients and citizens trained
and aware in biomedical
science and computer science,
and able to create new
algorithms that sometimes
seem mysterious to me.
One outcome of
this transformation
is improvement in
diagnosis of early disease.
The impact of this
will be astounding.
I use here the example
of a terrible disease
of pancreatic cancer.
If you diagnose pancreatic
cancer at stage one or two,
a patient has a 55% chance
of five-year survival,
and these odds are improving
as we improve our treatments.
Justice Ruth Bader
Ginsburg exemplifies this.
If a diagnosis happens
at stage three or four,
where 80% of all
patients are diagnosed,
the chances of five-year
survival fall to less than 5%.
Unfortunately, Patrick
Swayze is an example.
Thus, advancing
control of cancer
is best approached by devising
means for diagnosing it early,
improving treatment at
this stage for cures,
and an essential part of this is
the integration of patient data
to predict risk, developing
better imaging technology
to identify the cancer before
it gets large enough to spread,
and that's machine learning
as you will hear today.
My sincere expectation
is that MIT
has taken a strong
lead in this space
through the establishment of the
Schwarzman College of Computing.
Its research and
educational effort
will accelerate
the transformation
of health care and
health care research
to the benefit of
patients around the world.
We sincerely need this college.
I am so excited to be
part of this celebration.
I thank Mr. Schwarzman and the
family and MIT for the vision
to launch the Schwarzman
College of Computing.
Thank you.
[APPLAUSE]
Hello, everybody.
Thank you very much.
It's a very exciting
time for MIT,
and I'm really glad
to be part of it.
Computing is making inroads
into every academic discipline.
It's redefining
disciplines, creating
very exciting intersections.
And one such intersection is
computation and economics,
and that's what I'm going
to be focusing on today.
So as we're all experiencing,
advances in computing
are transforming
myriad marketplaces.
Instead of the brick
and mortar marketplaces
that we're all used to, we
now have online markets,
which not only act as platforms
for bringing many more
people together, but also
offer a wide array
of new goods and services.
The challenge is that for
these marketplaces to work,
we need not just better
technology, better hardware,
software algorithms,
but we also need
the design of economic
incentives to move in tandem,
and I'll try to make
this point by providing
two examples to you.
One of the iconic examples
of digital markets
is online advertising markets.
Two of the top technology
companies, Google and Facebook,
owe virtually all
of their revenue
to their ability to
monetize their platforms
through advertising.
The chart here shows the
advertising revenue of Google
over time since 2001.
It has reached, in
2018, about $117 billion
accounting for 72% of the
revenue of the company.
Much of this revenue comes from
what we call the "sponsored
search auctions."
So when you enter a search
keyword into a search engine--
and in this case,
for some reason,
we entered "used
cars in Bristol"--
it results in a page
like this, which
comprises a list of
organic search results
that you see at the bottom.
These are results that are
related to your keyword
and basically ordered
by relevance
by some underlying
algorithm such as PageRank,
but also a list
of sponsored links
that you see here in the red
boxes paid by advertisers.
Each time you enter a keyword
into the search engine,
an auction is run in
real time in order
to decide which advertisers'
links will be shown,
in what order, and how much
they are going to be charged.
Think about it-- in real time.
That's a lot of auctions.
And how do these auctions work?
Advertisers-- let's
say in this example,
some car dealer in Bristol--
submits bids for
keywords-- let's
say, used cars-- and says,
I'm going to pay per click.
I'm going to pay $1 if any
user clicks on my link.
And then the way these
links are allocated
is in the order of the
bids that are submitted.
The highest bid
gets the top slots
because they're more valuable.
Why?
Because users scan the
page from top to bottom.
So how do the payments get done?
I just said $1, right?
Very small amounts.
And from these
very small amounts
basically emerged a
very huge industry.
As highlighted by the chief
economist of Google, Hal
Varian, most people don't
realize that all that money
comes pennies at a time.
So let's now look at how
these small amounts are made.
Sponsored search
started in the mid-'90s
with the so-called
"first price auction."
And in this auction, bidders
pay exactly what they
bid in the scheme
that I just told you about.
A drawback was
quickly discovered.
Why?
Because if you
pay what you bid--
you say you're going to
pay $1, and you pay $1--
what happens is if you lose--
if you do not win--
you'll have an incentive to
keep increasing your bid.
And if you win, you
have an incentive
to shave off your bid next time.
And this was actually observed
very clearly in data--
it led to very unstable
price dynamics
with huge revenue consequences.
So instead, in 2002,
Google deployed a system
which was inspired by the
so-called "second price
auction" in which the bidders
pay the second highest bid
instead of what they bid here.
And let me show you how the
second price auction works.
This is a very simple example.
We have three links to
allocate, and the three highest
bidders will get those links--
in this case, $10,
$4, and $2 bidders.
And let's see what
they're going to pay.
The highest guy will
pay the second highest
bid, which is $4.
The second one will pay $2.
The next one will pay $1.
This is basically
what we mean by
a generalized second-price
auction exactly deployed
by Google.
You immediately see a problem.
If actually the top
guy bids $3 instead,
he still gets a slot-- the
second one in this case.
But then he pays the
second highest price,
which is $2 instead of the $4.
So he increases his payoff.
So that means that he has
an incentive, actually,
to game the system.
This is the problem with
the generalized second price
auction.
It's open to manipulation--
another form of instability.
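To make the mechanism concrete, here is a minimal Python sketch of the generalized second-price rule as described, reproducing the talk's numbers (ties, reserve prices, and the quality scores real systems use are all omitted):

```python
def gsp(bids, slots):
    """Generalized second price: rank bids descending; the winner of
    slot i pays the (i+1)-th highest bid per click."""
    ranked = sorted(bids.items(), key=lambda kv: -kv[1])
    results = []
    for i in range(min(slots, len(ranked))):
        bidder, _ = ranked[i]
        next_bid = ranked[i + 1][1] if i + 1 < len(ranked) else 0
        results.append((bidder, next_bid))
    return results

print(gsp({"A": 10, "B": 4, "C": 2, "D": 1}, slots=3))
# [('A', 4), ('B', 2), ('C', 1)] -- the $4, $2, $1 from the talk.

# A's manipulation: bidding 3 instead of 10 drops A to the second slot,
# but A's price per click falls from $4 to $2.
print(gsp({"A": 3, "B": 4, "C": 2, "D": 1}, slots=3))
# [('B', 3), ('A', 2), ('C', 1)]
```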
I hope that these examples
highlight the costs of not
getting incentives right.
We need not just better
technology, but also
better incentives.
And in fact, many
companies are now
moving towards
designing much more
sophisticated
incentive-compatible, fast,
scalable auction design.
Let me turn to another
marketplace, a much bigger
marketplace-- finance.
Finance is as old as humanity,
but in the modern economy,
it's done not through
moneylenders but through
very high-tech banks
and computerized platforms.
What that means is a huge
increase in interconnections.
Basically the exposure of
many of the global banks
to each other's
assets and liabilities
skyrocketed over the
past several decades,
actually reaching
an all-time high
before the global
financial crisis of 2008.
Accompanying this
is also a huge rise
in the complex
financial products,
such as derivatives contracts.
So interconnections are good.
They enable you to
do risk sharing and
better matching of demand to
funds, but at the same time,
they also have unintended
consequences, because they
act as conduits for spreading
financial distress,
as the world found out in 2008.
So more interconnections
mean more dominoes
exposed to each other.
And actually, research
shows that the extent
of the spread of distress
very much depends
on the underlying connections
between these financial
institutions, size
of the shocks,
and the incentives of these
financial institutions
to insure themselves
against financial distress.
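A toy threshold-contagion model makes the domino image concrete: a bank fails once its losses on exposures to already-failed counterparties exceed its capital buffer. A minimal sketch with an invented four-bank network (real models are far richer):

```python
def cascade(exposures, buffers, initially_failed):
    """Iterate to a fixed point: a bank fails when its exposure
    to failed counterparties exceeds its buffer."""
    failed = set(initially_failed)
    changed = True
    while changed:
        changed = False
        for bank, buffer in buffers.items():
            if bank in failed:
                continue
            loss = sum(amount for cp, amount in exposures.get(bank, [])
                       if cp in failed)
            if loss > buffer:
                failed.add(bank)
                changed = True
    return failed

# bank -> [(counterparty, exposure)]; all numbers invented.
exposures = {"B": [("A", 8)], "C": [("B", 6), ("A", 1)], "D": [("C", 5)]}
buffers = {"A": 3, "B": 5, "C": 5, "D": 4}
print(cascade(exposures, buffers, {"A"}))
# {'A', 'B', 'C', 'D'} -- one shock propagates through the whole chain.
```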
Technology not only enables us
to design better marketplaces,
but also enables trading itself.
Enormous amounts of
resources are placed
into fast algorithmic
trades in order
to beat the market by
a few milliseconds.
But of course, this
doesn't mean that it
will lead to efficient
allocation of resources
and actually may be a very
big source of instability
again, because of
false probing bids
as well as out-of-control
feedback effects.
In fact, one of the biggest
crashes of the stock market
was the flash crash
of 2010, which
was caused by miscalibrated
algorithmic trading.
So again, this is
another example
where we see that we have to
design technology together
with these incentives.
This takes the form of
policy and regulations
here in order to counteract
these systemic crashes
due to interconnections,
complex financial products,
as well as algorithmic trading.
So all of this technology
and all of these developments
provide great
opportunities for us,
but it could also
very much backfire
if we don't design these systems
with a proper understanding
of both computation
and economics.
And that was one
of the incentives
for us to design a
completely new major at MIT--
computer science,
economics and data science,
which is a true collaboration
between departments
of economics and EECS.
Our goal is to educate the
next generation of students
who are equipped with the
foundational knowledge
to address these
exciting problems.
This Schwarzman college will
be a great enabler for us
to be able to realize our dream.
So thank you very much.
[APPLAUSE]
Hi everybody.
How are you guys doing today?
My name is Sarah Williams.
I am a professor of
technology and urban planning,
but I'm so much more.
I'm trained as a geographer,
an architect, an urban planner,
and a data scientist.
And I combine these skills to
really create data for action.
And at the heart of
data action is the idea
that we need to build expert
teams to work together
to really capture
the insights of data,
and I believe this kind
of collaborative learning
is what really allows us to
make data work for civic change.
So because I like to
tell stories with data,
I'm going to tell
you a story today.
My story starts
in Nairobi, Kenya.
I've done a lot of
work in Nairobi.
This is a typical
scene on the street.
Nairobi suffers from
severe congestion problems.
This is similar in many
rapidly developing cities.
As a transportation
planner, one of the things
that we use to address this
issue is transportation models.
But we need data to
make those models,
and many cities, especially
rapidly developing cities,
don't have that data.
I applied my skills
in machine learning
to extract roadways for the
Nairobi metropolitan area,
creating the first
GIS data set, which
allowed us to make this
transportation model.
And while my model was
successful, one of the issues
is we didn't have
data on the matatus.
Matatus are the main way
people get around in Nairobi,
and people really
depend on this almost
like they depend on a
public transit system.
If you don't know
what a matatu is,
I'm going to show
you a small clip.
Imagine riding in this.
[AUDIO PLAYBACK]
[MUSIC PLAYING]
- If you didn't
know better, you
could be in a club or
a bar, but you're not.
You're in a minibus
driving through Nairobi.
And with volumes like
this, you'd better not
come on an empty stomach.
On the outside, they're
painted in odd designs,
and their name "matatu" is
derived from the Kiswahili
word for "$0.03 for a ride,"
which nowadays is more like $0.30.
The fares are raised
by the conductors.
- [SPEAKING SWAHILI]
- Traveling by matatu
is a daily reality
for millions of Kenyans.
[END PLAYBACK]
So if you want to get
around on a matatu,
you ask your matatu driver.
And right before we did
our research in Nairobi,
the main way you understood
how to get around
is just this informal
talking with the drivers who,
as you can see, really
pride themselves
in the designs of
their vehicles,
and it's really something
that people in Nairobi
talk about all the time.
And so I thought, how could
I not only create raw data
for my model, but create data
that everyone could use?
If I didn't have a way
to find out
where the matatus went, the
average person in Nairobi
didn't either.
So if you know Nairobi, you
know people use their cell phone
for everything.
They use their cell
phone to buy coffee.
They even use their cell phone
to buy a ride on the matatu.
So we decided, how
could we leverage
this ubiquitous technology to
collect data on this system
but open it up
for anyone to use?
We developed an
application in coordination
with the University
of Nairobi, and this
was so that we would provide
context for the Nairobi
metropolitan area, but also so
that the technology we built
would remain in
Nairobi, and long after we left,
this work could be built upon.
We collected the data in GTFS.
How many people
know what GTFS is?
One.
There's always one person.
You probably all use GTFS today.
It's what allows you to route
yourself on Google Transit.
So if you're using Google
Transit, in the background
is GTFS.
I note this as important
because by making it
in an open data format,
it was instantly
usable by a lot of different
software companies--
so not just Google.
GTFS is basically a set of
text files with
latitude and longitude points
that gives us a schedule.
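Concretely, GTFS is a zip of plain CSV text files (stops.txt, routes.txt, stop_times.txt, and so on). A minimal sketch of reading stop coordinates with only the standard library; the column names are from the real spec, but the file contents here are invented:

```python
import csv
import io

# A tiny stand-in for a GTFS stops.txt file.
stops_txt = """stop_id,stop_name,stop_lat,stop_lon
S1,Kencom,-1.2864,36.8250
S2,Westlands,-1.2647,36.8028
"""

stops = {
    row["stop_id"]: (row["stop_name"],
                     float(row["stop_lat"]), float(row["stop_lon"]))
    for row in csv.DictReader(io.StringIO(stops_txt))
}
print(stops["S1"])  # ('Kencom', -1.2864, 36.825)
```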
And as we collected the
data, it came streaming
in, building up the routes.
But these routes, combined,
were hard to understand.
How do we visualize it?
How are we going to
communicate this information?
We ultimately decided
to develop something
that looks like a subway map
that you might see in New York,
London, Paris, even Boston.
And this map was developed
in coordination, not just
with our team in the
University of Nairobi,
but also the matatu drivers,
owners, the government,
and this is the idea: that
collaborating with data
allows people to
trust the results.
And here, you're
seeing them actually
noticing that there are a lot
of missing routes at the top.
They're instantly
using our map to create
new plans for the city.
The maps went viral
on the internet.
We got them published
in the papers
so people who don't
have cell phones
could access the information.
And what I like to talk about
is, how do you measure success
on an open data project?
And that's when other
people leverage your data
for their own change.
So in Nairobi, I
was really excited
when the government invited
us to a press conference
and made the map an
official map of the city.
So because we were
largely disinterested
throughout the project,
they felt that they
could trust the data set.
And now, it's the
official map for the city.
Google has put the data
into Google Transit.
It is the first informal
transit system navigable
through Google Transit.
The World Bank copied
our visualization
to get support for
their BRT bus line.
This is actually the World
Bank's map of BRT, not our map.
It looks very
similar, doesn't it?
And there are now
five apps in Nairobi
that use our data as the base.
Our collaborative
process really helped
us make relationships with
Nairobi's strong technology
community.
And we have continued to
teach classes, do research
with that community.
A recent class, where we
had a World Bank policy
expert embedded in
the class, collaborated
with [INAUDIBLE] Reroute, which
is a Waze-like app in Nairobi,
to perform semantic
analysis on text streams
from Twitter to
map where crashes
were happening in
real time in Nairobi,
and that came from
another class.
So semi-formal transit provides
mobility around the world,
not just in Nairobi.
The majority of countries have
this kind of transit system.
The work in Nairobi
inspired Amman and Managua.
In fact, 26 different
cities have used our tools
and have become
part of our network
in developing data sets
and maps for their systems.
This has caused us to
create a global network
for mapping transport data
where we can provide resources,
keeping people close to
the policy, but also
helping them scale their work--
creating continued involvement.
We just launched the Africa
resource center last August,
Latin America in January,
and I hope the work
that I showed you
today shows how
a geographer, urban planner,
architect, data scientist,
can teach the students
of the next generation
how to combine these skills
to create civic change.
Thank you.
[APPLAUSE]
All right.
Hello, everyone.
I'm Vivienne Sze.
I'm a faculty member
in the EECS department,
and I'm going to talk to you
about energy efficient AI.
So today, most of the processing
that's being done for AI
happens in the cloud, but
there's many compelling reasons
why we want to move it out
of the cloud, into the edge,
and process it locally
on your device.
So the first thing
is communication.
So if we really
want AI to be used
by or accessible to many
people across the world,
we need to reduce the
dependency on the communication
infrastructure-- so bring
it directly to the person.
We saw this morning,
also, there's
a lot of applications of AI
in the health care space.
So privacy is also really
important in terms of the type
of data that we're collecting.
So again, maybe you
want to keep the data
on the actual local
device-- preserve privacy.
And then finally, there's
a lot of applications
that involve interactions
with the real world
and where you don't want to
have a slow response time.
So a typical example of this
would be self-driving cars.
So imagine if your car's
going very fast on the highway
and you're trying to
avoid a collision,
you might not have time to
send the data all the way
to the cloud, wait for it to
be processed, and then pushed
back out to the car itself.
So latency is
another reason why we
want to do the
processing at the edge.
But there are
challenges involving
moving this computing to
the edge itself, primarily
power consumption.
So for example, if we take the
self-driving car as an example
again--
so self-driving cars consume
over 1,000 or 2,000 watts
just for the computing power
to crunch the data from all
the sensors that
it's collecting.
So that's a challenge.
And then if we think about
moving this compute onto
a smaller device-- so let's
say a handheld device like
your phone or these
smaller robots--
the power challenges
are even more stringent.
So for example on
these small devices,
you have very limited battery
capacity because of the size
and the weight of the
device, so you can't
have too much energy there.
And then also, if we take a
look at the existing embedded
processors out there,
currently they consume
an order of magnitude
more power
than what is allowed on
these handheld devices.
So typically on these
handheld devices,
you can only afford about a
watt of computational power.
So if we take a look
and if you think
about how we have dealt with
this over the past few decades,
typically what we would
do is we would just
wait for Moore's Law
and Dennard Scaling
to give us faster, smaller,
and more efficient transistors.
But this trend has really slowed
down over the past decade,
so we need to think
of something else.
This is not going to
be a solution that
will carry us forward.
So in our group, primarily
what we've been looking at
is, how do we deliver
energy efficient AI
through cross-layer design
all the way across the stack.
So what does that mean?
The first thing is we want to
develop new algorithms that
are energy efficient.
So we really want to
think about the energy
consumption of the
algorithm in addition
to the accuracy of the algorithm
and how these algorithms
might map onto hardware.
The second thing
we need to do is
we need to build more
specialized hardware
and redesign the
computers from the ground
up really targeting AI.
So this means new compute
architectures and new circuits.
And then finally,
it's really important
to think about how this computer
hardware would be integrated
into an actual system.
So both the sensing or
actuation, if you're a robot,
are also important.
So you want a holistic
solution in terms
of reducing energy consumption.
So now I'll tell you a
little bit about a couple
of the projects that
we've been working on.
So the first is, we've worked
on building efficient hardware
for deep neural networks.
So if you're familiar
with deep neural nets,
it's used for a wide
range of AI applications.
Today, it delivers state
of the art accuracies.
People are very
excited about that.
In terms of developing
specialized hardware
for it, what we actually
really focused on
was reducing the cost
of data movement.
So as it turns out,
it's not really
the computation like
doing the multiplies
or adds that's really
expensive, but it's
how you move the
data from the memory
to the compute engines, which
is consuming a lot of energy.
And so we really designed
a specialized hardware
chip named "Eyeriss" that focuses
on minimizing this data
movement so we can drop
the energy consumption.
So as a result, we can do tasks
like image classification,
which is the core task
in computer vision
at under a third of a watt.
And in the end, if you compare
it to existing mobile GPUs,
it's an order of
magnitude lower
in terms of energy consumption.
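
To see why data movement dominates, here is a back-of-the-envelope sketch in Python. The per-operation energies and the reuse factor are rough, order-of-magnitude assumptions in the spirit of published estimates (e.g. Horowitz, ISSCC 2014), not numbers from the talk.

```python
# Back-of-envelope comparison of compute vs. data-movement energy for a
# convolutional layer. Assumed: a DRAM access costs ~200x a multiply-
# accumulate, and local buffers give ~100x data reuse.
MAC_PJ = 1.0         # ~1 pJ per multiply-accumulate (assumed)
DRAM_PJ = 200.0      # ~200 pJ per 32-bit DRAM access (assumed)

macs = 100e6                  # MACs in the layer
naive_dram = 3 * macs         # no reuse: 2 operand reads, 1 write per MAC
reuse_dram = 3 * macs / 100   # ~100x reuse via on-chip buffers (assumed)

for label, dram in [("no reuse", naive_dram), ("with reuse", reuse_dram)]:
    total_pj = macs * MAC_PJ + dram * DRAM_PJ
    print(f"{label}: {total_pj / 1e12:.4f} J per pass")
```

Even with these rough numbers, DRAM traffic rather than arithmetic dominates the energy budget, which is why a design that keeps data in local buffers can cut consumption so sharply.
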
OK.
So then another project that
we've been working on-- this
is in collaboration
with Sertac Karaman
who is a roboticist in
the AeroAstro department
here at MIT.
We've been looking at how do
you do autonomous navigation
for these very small drones
about the size of a quarter?
In autonomous navigation,
one of the key things
that you have to do before
you can actually navigate
is figure out where you
actually are in the world.
So that's localization.
So you can see
here in the image,
we're getting a video stream.
And then on the far
right-hand side,
you can see we're trying to
estimate the position in the 3D
world.
And this localization is the key
step in autonomous navigation.
And with this chip [INAUDIBLE]
that we developed together
through the co-design
of both the algorithms
and the hardware, we're
able to do this processing
in under a tenth of a watt--
so around 24 milliwatts.
And so where can this
actually be used?
Well actually,
there's a whole class
of low-energy
robotics out there,
which take less than a
watt to do actuation.
So for example, you can imagine
these lighter-than-air vehicles
that can be used for
air quality monitoring
or miniature satellites that
you could use for deep space
exploration or origami
and foldable robots
that you can use for
medical applications.
So all of these robots
take very little energy
to actuate and interact
with the real world.
And so it's really important
that the computation is also
very low power.
Another example that
I want to talk about
is some work I'm doing with
Thomas Heldt in the Institute
for Medical Engineering
and Science here at MIT.
And really what
we're focusing on
is looking at the role of energy
efficient AI in the health care
space.
In particular, we're
looking at the monitoring
of the progression of
neurodegenerative diseases,
which currently
affect more than 50
million people worldwide.
One of the ways in which people
are assessed for dementia,
let's say, is that they
have to go into the clinic
and talk to a
specialist and they're
asked a series of questions.
And the issue with
this is, first of all,
it's very expensive to do this.
It's very time consuming,
so people can only
go maybe once or
twice a year at most.
And then third, it's also a
very qualitative and subjective
assessment.
So for example,
different specialists
might have different conclusions
in terms of their evaluation.
And even a specialist
themselves,
the repeatability in terms
of their testing might vary.
What's been really exciting
is that recently, it's
been shown that there is
some correlation between eye
movement and these
types of diseases.
And so if you can
measure it, eye movement will
give you a much more
quantitative evaluation of
the state of the person's mind
or the disease
progression or regression.
And this could be
useful in terms
of evaluating whether or
not a drug is working.
But again, the challenge right
now with these eye movement
assessments is that you have
to go into the clinic to do it.
It takes really
expensive cameras,
and it's quite inconvenient.
And so with Thomas,
what we're looking at
is whether or not we can
integrate the eye movement
tests onto a smartphone
itself so then you
can bring it into the home.
It'll be very low cost, and you
can do frequent measurements.
And this would be
a good complement
to the specialist's assessment.
So in summary-- oh,
and so also for this,
it's really important to do
the processing on the device
because obviously, it's
medical information.
So in summary, I
think energy efficient
AI is really important.
It really allows AI to extend
its reach beyond the cloud.
What it enables us to do is
you can reduce your reliance
on the communication network.
You can enable privacy.
You have lower latency.
And so you can use AI for
a broad set of applications
from robotics to health care.
And in order to enable
energy efficient AI, what's
really important is, you need to
have a cross-layer design
from algorithms all the way
down to specialized hardware.
And by having
specialized hardware,
we really believe this will
enable the progress of AI
over the next decade or so.
Thank you very much.
[APPLAUSE]
Good morning, everybody.
My name is Munther Dahleh.
I'm a faculty member in EECS--
also the faculty
director of the Institute
for Data, Systems, and Society.
And today, I want to
make a few remarks
about the economics of data.
To put this whole
thing in context,
I think it's important
to sort of summarize
the evolution of computing.
So if you think about it
back in the '50s and '60s,
we relied on mainframes to do
centralized heavy computing
that allowed us to simulate
very complex problems in terms
of the weather phenomenon, or
transportation systems, energy
systems, and so forth.
Several decades later, I
would say '80s and '90s,
we started talking about
mobile communication.
And communication enabled
distributing the computing
across many agents,
but allowed us
to collect enormous amounts
of data about these agents.
And now, we're in an era
where these mobile computing
devices are embedded
in physical systems
where people not only
are collecting data
about their
surroundings, but they're
making decisions with respect
to these surroundings.
And that, for example, changed
the way many infrastructures
are operating.
One example would be
transportation: the systems
are obviously
going from bad to worse.
Demand is increasing.
We cannot supply more roads
to deal with these demands.
And so the only hope we have
is to manage the information,
to manage the data,
incentivize people to do
the right thing in
order to minimize
these kind of congestions.
So there's a change
in the paradigm
in terms of research
and education
where we used to think
about physical systems
and engineered systems
on one side and people
and institutions and
on the other side.
And now because of this rapid
computing and decision making,
these two worlds have connected.
And in research,
whether you're thinking
about infrastructures,
voting systems,
or health care,
you have to think
of these things
in an integrated way.
The [INAUDIBLE] 6-14
program is one that
addresses this kind
of challenge,
and the IDSS PhD program
is also structured
to enable bilingual
students to think
about those problems
together in solving some
of the societal challenges.
And you can see that this sort
of decision making over data
is exploding in terms
of the number of papers,
in terms of the funding
that is going there,
in terms of the
revenue generated,
and certainly, the skill set
that is needed from students
is entirely around data.
Whether it's actually processing
the data, machine learning, AI,
you name it, it's
all about data.
So if that data is
so important, how
do we begin to think
of it as a commodity?
How do we think about the
value that data provides?
And I like this quote that
says that "personal data is
the new oil of the internet
and the new currency
of the digital world."
If it's a currency,
what is its value,
and how do we quantify that?
I have to say that many
of the data companies
are struggling
today figuring out
how to get customers
because customers are not
clear on the value
that the data provides.
Bloomberg and Thomson Reuters
use fear tactics to say, well,
your competitor has
bought this data.
Why don't you buy this data?
We are very fearful of things
like 23andMe providing our data
to come back and bite us in the
future in terms of insurance
and God knows what if
your data turns out
to have something
that is not desirable.
We cannot yet figure out the
impact of Cambridge Analytica,
or for example, the 150 million
financial data sets that have
been made public by
the Equifax breach.
How do we quantify
this kind of a breach?
We still have a hard
time understanding this.
So we need a marketplace--
a place where
we discover the value of data.
And [INAUDIBLE]
saved me a minute
because she described
the whole ad market.
I don't know how many of
you would have imagined
30 years ago that we'd have
a $200 to $300 billion
market in advertising.
I think all of us would
think this is insane.
The most interesting thing
about the advertising market
is that it's entirely
predicated on your data.
At the same time, you are
not part of that market,
and you do not have a
choice of how your data is
being processed in that market.
And what we want to do is
bring that privacy back
to the consumer--
return that choice to the user.
Creating a market in which
the data value is discovered
is going to be beneficial
in many applications--
transportation, energy, the
future of the integration
of all the electrified
cars into the market,
and understanding the
processing of all the stories
that we have, logistics,
fraud data, and what have you.
It's an amazing space that will
allow this sharing of data.
A few bullets to tell you that
this problem is challenging.
We like to work on
challenging problems.
It's challenging because data
is not a real commodity--
it's a digital good.
It has zero marginal
cost for replication.
It's very difficult
to authenticate.
If a buyer comes
in to buy data,
they have no idea
what they're buying.
Typically, a buyer should
come in to buy some value.
They want to do something with
the data-- not to buy data.
How does a market
match the two together
and keep the privacy of the
different data distributors?
How do you
authenticate the data?
And finally, how do we deal
with externality because if I
sell my data to two different
competing companies,
the value of my data drops.
Not only are we working
on understanding
the mathematics that go with
this, which require economics,
algorithmic game theory,
optimization, machine
learning, and a lot
of information theory,
but also, we're trying to apply
it in places where it matters.
One project that I will
tell you that in IDSS
that we're taking
on is empowering
farmers in sub-Saharan Africa.
The problem is that farmers
right now are very
worried about getting credit
to upgrade their operations
because they're poor,
and they're worried about
mortgaging their land.
What we want to do by
creating a data market--
a sharing market
between farmers--
is to be able to extrapolate the
value of introducing technology
and the gain that
they can have so
that they can go
and negotiate better
terms with these creditors.
We are setting up
this whole ecosystem
around this platform for data.
My feeling is that creating
these kinds of platforms where
data is shared incentivizes
people to contribute
good data, because they can
retain the value themselves
but also benefit
from everybody else's,
and it allows us to address
critical societal
challenges.
Thank you for your attention.
[APPLAUSE]
Thank you.
Over the last few
months, a number
of organizations
and conferences have
marked the 10th anniversary
of the worst financial crisis
in our lifetimes.
And in the aftermath
of that crisis,
we passed the
Dodd-Frank Act of 2010--
a sweeping piece of
legislation that completely
reconfigured the
landscape of regulation
and the financial ecosystem.
This act was 2,319 pages long,
contained hundreds of new rules
and regulations, thousands
of conditions and exceptions,
and it was written
by many authors--
a number of whom had no idea
what the others were doing.
It's just too complex for
any one person to comprehend.
And as a financial economist
interested in regulation,
I really had a hard time
making heads or tails of it.
Did the financial reforms
deal with the issues
of the financial
crisis, or did we
go too far or maybe not enough?
What do these questions
have to do with computing?
Well, computer
scientists and engineers
understand complexity deeply.
For example, the Google search
engine, or the Linux operating
system, or the Mozilla
Firefox web browser
are all examples of complex
rules and regulations
that contain hundreds of various
different laws, thousands
of conditions and
exceptions, and were
written, in many cases, by
authors working entirely
independently.
What if we use the principles
of good software design
to analyze the
financial regulations?
What would we learn
from that process?
Well, it's not as crazy
an idea as you
might think because, after all,
the laws of the land are,
in fact, the operating
system of our society.
So if you take a look at all
of the laws of the US Code,
and print them out, they would
fill many volumes on a law school
library shelf.
But the fact is that
over the past 80
years, if you took the annual
versions of the entire US
legal code, they would
actually fit on a thumb drive.
Not really that big
a data set at all.
In fact, the 2014 vintage
of the US legal code
is over 200,000
pages long, but it's
only 1.8 million sentences--
about 41.4 million words.
And that actually
compares pretty favorably
with the kind of complex
code that computer scientists
and engineers use all the time.
So let me tell you a little
bit about the legal code.
First of all, like most
pieces of complex software,
it's actually divided up into
discrete units called "titles."
The current US legal code has
53 titles, such as Title 12--
Banks and Banking.
That's the part
of a US legal code
that's most relevant for
financial regulation.
Title 26 is the Internal
Revenue Code-- something
we'll all be dealing with
in a couple of months.
Title 35 is the Patent System,
and so on, and so forth.
And each of these
titles is divided up
into even smaller sections.
Now is this collection of titles
and sections well-designed?
Well, if you talk to computer
scientists and engineers
about what makes a good
piece of software--
and I've done this.
A number of my colleagues
here in this room
have helped out with this--
they point to the following--
the five Cs of
software development--
conciseness, cohesion, change,
coupling, and complexity.
In the interest
of time, I'm only
going to focus on one of these--
coupling.
Coupling is an
idea that has to do
with how different parts of
a complex piece of software
interconnect.
And the best way
to understand it
is through a pretty
simple example.
Imagine you have a
piece of software
that's comprised of different
discrete components labeled
A, B, C, all the way
through I, each of which
engages in certain kinds of
computations based upon inputs
that it takes in and
outputs that it provides.
And the arrows between
these different components
indicate whether or not there
are any interdependencies.
For example, the
arrow between A and B
indicates that B
calls or references
A. And the reason the
arrow goes from A to B
is that if you
make a change in A,
that could affect the
proper operation of B.
So when you make changes
in A, you better go check
to see what's happening with B.
Now once you define
these nodes and arrows,
you can actually define
subsets of these nodes that
have special properties.
And one way to do that
is using the property
of strong connectedness.
A strongly connected
subset of nodes
has the property that you
can go from each node
to any other node in
that subset by following
a particular path of arrows.
They're all connected sort
of like the New York subway
system.
So as an example, in
this particular graph,
nodes B and E are a strongly
connected subset because you
can go from B to E,
and E to B. Notice
that A is not part
of that subset
because while you can get
from A to B and A to E,
there is no way to get
from B to A or E to A.
It's sort of like the
Boston subway system,
or as they say in Maine, you
can't get there from here.
Now once you define all of
these strongly connected subsets
as I have here, you
can then come up
with a measure for how
efficient this code is.
What you do is, you
look at the largest
strongly connected subset--
in this case D, G, F, and H,
which we call "the core"--
and you ask the question,
how big is the core?
Because the bigger the
core, the more difficult
it is to manage this code.
Why?
Because by definition, the core
is a strongly connected subset.
So any change you make
in any part of that core,
you need to worry
about what it's
going to do to every
other element in that set.
And so by that measure, this
is a pretty badly written piece
of code because the core is 44%
of the total number of pieces
of code.
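
As a concrete illustration, here is a minimal sketch of that core measure in Python using the networkx library. The nine-node toy graph is assumed for illustration, modeled on the A-through-I example above; it is not the actual legal-code data.

```python
# Build a small directed dependency graph (an arrow u -> v means a change
# in u can affect v), find its strongly connected components, and report
# the largest one -- "the core" -- as a share of all nodes.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("A", "B"), ("A", "E"),                          # A feeds B and E
    ("B", "E"), ("E", "B"),                          # B and E form an SCC
    ("C", "D"),
    ("D", "G"), ("G", "F"), ("F", "H"), ("H", "D"),  # D, G, F, H: the core
    ("H", "I"),
])

core = max(nx.strongly_connected_components(G), key=len)
print(sorted(core), f"-- {len(core) / G.number_of_nodes():.0%} of nodes")
# ['D', 'F', 'G', 'H'] -- 44% of nodes
```

Run on the cross-reference graph of the US Code's sections instead of this toy example, the same computation yields the core figures quoted below.
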
By comparison, if you look at
Firefox across the versions one
through 20, the median size
of the core is only 23%.
Now across the 53 titles
of the US legal code,
there are 36,670
discrete sections.
And if we treat each
section as a node,
and we look at cross
references from one section
to the other as the
arrows, we can actually
calculate the core
of the legal code.
And what is that?
It turns out that the core
is 6,947 sections, or 19%,
which is pretty good
except for the fact
that not all parts of the
legal code are created equal,
and let me give
you a few examples.
In 2009, US Congress passed
the Omnibus Appropriations Act,
which is basically a list of
appropriations for various line
items--
a very, very simple
piece of legislation.
Not a lot of interdependencies.
So when we develop a graph
of those interconnections,
we see that it's actually
pretty straightforward.
The nodes that are
colored red are the core,
and you can see that the
core is a really small part
of the overall set.
Now the US Patent
System Title 35
is quite a bit more
complicated-- many more
interconnections,
not surprisingly.
But actually, when
you look at the core,
it's still a pretty small
part of the overall system.
Now what about Title 12--
Banks and Banking-- the
part that interests me
the most as a
financial economist?
[LAUGHTER]
Yeah.
This is a problem.
Highly complex with
interdependencies
that most people couldn't
even begin to calculate,
but that's not the worst.
Title 26-- the
Internal Revenue Code--
[LAUGHTER]
Talk about your spaghetti code.
We really need tax reform
sooner rather than later.
Using this and other
measures, we actually
can analyze the legal code
to understand whether or not
it's well-written or
whether there are going
to be problems that arise.
And in a paper that I wrote with
some students and a practicing
attorney, we
developed these ideas
and applied them to
the legal code,
and we published it
in a law journal.
At the time we
started our research,
the complexity of financial
regulation and design
was not a subject that
involved computing.
Now, it is.
This is why we
need the Schwarzman
College of Computing, and I
want to join my MIT colleagues
in thanking Mr.
Schwarzman and his family
for this transformative gift.
And I want to
congratulate and thank
Dean Huttenlocher for
taking on this incredibly
exciting challenge
and opportunity
to launch the college.
Thank you.
[APPLAUSE]

---

### Computing for the Marketplace: Entrepreneurship and AI
URL: https://www.youtube.com/watch?v=XsGyLueFAGk

Idioma: en

This is an incredibly exciting
day for MIT and for the world
as we celebrate the launch
of the Stephen A. Schwarzman
College of Computing.
Our world is full of complex
and challenging problems.
And the widespread adoption
of computing technologies
has transformed
virtually every industry.
I believe that with dedication
and the right resources,
there is no problem
that's too hard to solve.
And we are only just
beginning to scratch
the surface of what
high-performance computing can
really do.
From health care to
engineering to the bleeding
edge of scientific research,
the best and brightest minds
across nearly every
field will rely
on high-performance computing
and artificial intelligence
to help make the
impossible possible.
This college places
MIT at the forefront
of AI and high-performance
computing research
and will develop the next
generation of great leaders
whose ideas will
change the world.
Congratulations,
and I look forward
to seeing graduates from the MIT
Schwarzman College of Computing
do incredible things.
[APPLAUSE]
At MIT, we're dedicated
to bringing knowledge
to bear on some of the
world's greatest challenges.
But in order to
scale our impact,
we must accelerate the time
from discovery to market.
Building on a myriad
of efforts at MIT
to spur innovation and
entrepreneurship, in October
of 2016, we launched The Engine
to help founders from MIT
and across the region
create the next generation
of world-changing companies,
or as Katie Rae says,
to bring disruptive
technologies from the lab
and into the light.
Since Katie joined
The Engine as CEO
and managing partner
just two years ago,
The Engine has already
invested in 16 companies,
working with some of the
toughest technologies,
including deep software
and AI, robotics,
and quantum computing.
And now, with the launch of
the MIT Stephen A. Schwarzman
College of Computing, we
have another opportunity
to revolutionize the scale of
impact enabled by computation
and AI across disciplines,
across technologies,
across society, across the
nation, and across the world.
Shortly, Katie is going
to lead a discussion
with our esteemed panelists.
But first, I have the privilege
of introducing a person that
needs no introduction.
Eric Schmidt has had
a distinguished career
at Google and its parent
company, Alphabet,
growing Google from a startup to
a global leader in technology.
Now a technical advisor
to Alphabet, Eric
advises its leaders on
technology, businesses,
and policy.
And he was recently tapped
to lead the National Security
Commission on
Artificial Intelligence
to advise the US government
on the national security
implications of AI
and how to maintain US
competitiveness in the field.
Over the past year, we
have had the great fortune
of having Eric here at MIT as
a visiting Innovation Fellow.
He has inspired MIT
scholars to take innovation
beyond invention, to address
global problems, and to play
a leadership role in advancing
conversations around human
and machine intelligence,
and he has given tremendous
support to launching
this college.
Eric.
[APPLAUSE]
I'd like to acknowledge
three sets of people.
Steve and Christine
Schwarzman, your generosity
is far greater than this
audience understands.
The gift that you have
given to create this college
has now ignited an explosion
of planning and philanthropy
that will be billions of
dollars from other donors
and other universities
to help establish
the age of intelligence.
It's an extraordinary
achievement.
[APPLAUSE]
And I mean it.
It's very, very rare that
you get a founder who
has that kind of leverage.
And we have them in
our audience right now.
Second person I'd
like to acknowledge
is President Rafael Reif, a good
personal friend of all of us.
The president years
ago sat down with me
and started talking
about the problem
that he saw of how the
university was structured
and how this new technology,
which he understood
would be transformative
in the many other parts
of the university
that he cared about.
He couldn't figure out
a way to get there.
So he and Provost Marty
Schmidt figured out
a way to create a college, which
you are hearing about all day
today, which works
within the culture
and norms of a university to
actually achieve something
which no one has
yet been able to do,
and that is to
aggressively diffuse
this new technology, which
I'll mention in a minute,
into fields which need it but
can't get it on their own.
And the final person
I'd like to acknowledge
is my very close friend,
Dan Huttenlocher,
one of the best hiring
decisions made by MIT
in a very, very long time.
As the new dean, Dan will bring
a level of professionalism
and sophistication.
And he knows how to
build organizations.
So you have everything
you need right now.
You have a strategy.
You have the funding.
You have the vision.
And you have a leader
to pull this off.
That's why this is such
an extraordinary moment.
The reason that we're
fundamentally here
is to talk about how this new age
is far broader than people
appreciate and that the
technologies that we're
talking about here,
which started out
of computer vision--
computer vision is now
better than human vision--
appear to be able to
solve some very, very
longstanding problems that
have existed in society.
And what I want
you to do is I want
you to imagine a world where
each and every one of us
has an assistant.
As a child, you
have an assistant,
which is your teddy bear,
which helps you learn language.
As an adult, you
have an assistant
that allows you to figure
out what to do during the day
and figure out what your
choices are and help educate
you and make sure that you're
telling the truth and all
the other things that humans
run into in daily life.
And as an elderly person,
you'll have an assistant
who will help you with
your medical needs
and deal with loneliness
and keep you connected
to your friends and achieve
the things that you care about.
This model of an
assistant, which
is both a virtual assistant
as well as a robotic assistant
in different forms, is at the
basis of the vision of how
people--
all of us-- will
see a difference
in our lives every day.
And this story is profound.
So it's now clear, for
example, in medicine
that we can begin
to forecast events
based on your health care data.
And many companies,
including mine,
are working very hard to
take the data that exists
and put it all in a format
where machine learning can
go and take that
data and help solve
horrific diseases and
terrible problems people
find themselves in,
but more importantly,
help predict outcomes,
help you know
that you need to get yourself
to the hospital pretty quick,
that kind of a thing.
In the same way, we're
also seeing results
in distribution networks.
I'll give you an example.
Our company recently showed that
we can predict wind turbine
electricity output in a clever way
using reinforcement learning.
And we're able to
essentially anticipate
the combination of
where the wind is
and where the demand is.
Now you say, oh, OK, that's OK.
There's lots of wind,
there's lots of demand.
It's a huge number, because
these distribution markets
don't make a lot of money.
So all of a sudden,
this technology
is the difference
between not being
able to fund these
projects with capital
and debt versus the ability
to fund them and grow
them and build real
businesses from them.
So on the margin, even
traditional businesses
benefit from the kind
of scalable analytics
that this technology provides.
So this brings us to
the panel right now.
And what we want to
talk about in this panel
is entrepreneurship.
Now, economists will
explain in great detail
that job growth
and economic growth
are not coming from
really big companies
and not coming from
really small companies.
It's coming from relatively
young and fast-growing
companies of all kinds, not
just tech companies, but others
as well.
We know that entrepreneurs drive
the economy, whether we like it
or not, of at least
the Western nations.
We know this to be true.
And what do you need to
have these companies?
You need entrepreneurs.
And it turns out, you
need some other things,
which I'll talk about.
Now, entrepreneurs come in
many, many different kinds
and phases.
You have the sort of
scientific founder,
the stereotype in my world.
But there are many,
many other people
who are sort of
fulfilled with a vision.
They have an idea.
They care about something.
And they personalize it.
They believe in it.
And they convince
others to follow them.
It's one of the most
fundamental human skills.
And that entrepreneurial
spirit, which
is at the basis of our success
as a country, needs more juice.
One of the things that's
interesting is that while
in the world I live in, it's
incredibly entrepreneurial--
there's all this
funding and so forth--
the total number of firms
that are entrepreneurial
is declining in America.
It's also declining elsewhere.
There's something
about the system that
makes it harder and
harder and so forth
for people to want to do this.
So it seems to me that
a job of the university
is to help create entrepreneurs.
Now, they're born, right?
They have this special skill.
They have a skill that I,
for example, don't have where
they can see it and
they know it and they
want to make it happen.
But the university can help
support them, educate them,
get them connected, get them
connected to their friends--
their friends [? as a set ?]
found the company and so forth.
MIT has been at the center
of this for a long time.
One of the things
that's interesting
about this new
world is that as we
enter the
entrepreneurial age, it's
going to be
important to remember
that if you have multiple
entrepreneurs and multiple sets
of data and multiple
platforms, they all
benefit by sharing data.
Now, you sit there
and anybody who's
been through a
legal review says,
well, we have to own this.
We have to own this.
We can't release
it and so forth.
But in fact, this data
gets better with sharing.
And so the platforms
that I and others
have built over the
entire aspect of my career
will now be augmented by
additional sources of data
that can be used by
entrepreneurs to solve
really hard problems.
So I mentioned all the
language and learning things.
Imagine if we can begin to
build knowledge libraries of how
people think, what they
do, what choices they have.
Imagine the contribution
that that would have
in this age of intelligence.
There are so many other
examples where you
can apply AI to solve problems.
There are people who
are, for example, using
artificial intelligence
to solve problems
that are computationally
intractable by doing
essentially AI estimates
for the hard bits.
And that technique will lead
to fundamental expansions
in material science, which
will again create companies,
fundamental expansions
of knowledge
in climate change
and climate science,
and so forth and so on.
MIT, for example, is a
leader in organic chemistry.
Organic chemistry
can be understood
as a relaxation problem
of getting these compounds
to merge together.
Every one of those is another
startup, another business,
another problem solved.
So we are at the beginning of
an explosion of essentially
innovation and discovery in
each of the key fields that
is needed to create
these companies.
Let me tell you that we
need more entrepreneurs.
MIT is at the forefront
of creating them.
There's a long history
in Cambridge-- you all
know this-- of doing this.
And we need more
and more and more.
The shortage of talent is being
addressed by the university
here.
The shortage of faculty
is being addressed
by the Schwarzmans' gift and
the creation of this college.
So we are on plan.
The problem is we're not
doing it fast enough.
So I want the vision
where medical care
is far more effective, far
cheaper, and far better.
I want the vision of material
science and new businesses
and services around that.
I want the ones in biology that
solve all sorts of diseases,
but more importantly
are new compounds,
new organic approaches to life.
I want the sustainability that's
implied by the climate change
work.
I want all of those
businesses to happen.
And what's beautiful
about it is that MIT
is at the forefront in
every single example
that I just gave you.
Thank you very much.
[APPLAUSE]
[INAUDIBLE]
I am.
Thank you.
[INAUDIBLE]
Thank you, Eric.
All right.
Well, welcome, everyone.
This is a panel on AI
in the marketplace.
So how does AI play
with entrepreneurship?
And we have the most incredible
people here with us today.
So I'm going to do a very
brief introduction of each one
and then jump right
in to the discussion
that we want to have.
So next to me is Helen Greiner.
And she is the
co-founder of iRobot.
She's a roboticist,
incredible thinker
about how robots will
affect the future,
and a multiple
time entrepreneur.
Next to her is Jim Breyer.
He is a renowned
venture capitalist.
And in Boston we
always lament that he
was the one that backed
Facebook, but has
done an incredible
amount of investing,
both in China and the US.
So we'll have very
interesting perspective.
Next to him is Bob Langer,
who's one of our gems here
in entrepreneurship. He has
founded many, many companies
and made a huge impact in
different areas of health care.
So thanks for being here.
And then Jocelyn Goldfein,
who is at Zetta Ventures
and is a partner there and
works with entrepreneurs in AI.
And then certainly
last but not least
is Professor Tang, who is the
co-founder of SenseTime, which
I believe was Hong Kong's
first unicorn and a true leader
worldwide in AI.
If you didn't see him at
the Quest for Intelligence,
he's also absolutely hilarious.
So welcome.
But no pressure.
Yeah, no pressure.
So we're going to start
off with US versus China,
because how could we not?
It seems like we're kind of
in the middle of a space race
right now with China.
And I thought I'd
start with Jim,
who has invested on both
sides of this ocean into AI.
And my question to
you is, how do we
not make this something
that destroys one another,
but amplifies it?
And kind of what do you see
coming from both sides, the US
and China?
And how does this play out?
What are you excited about?
What are you scared of?
Well first, it's just
wonderful to be here at MIT.
And Steve and
Christine Schwarzman,
what a gift to students,
faculty for decades to come.
And Rafael is just
a great leader.
So I want to start
with that around MIT
as a local Bostonian.
China/US in two minutes.
I just returned.
Very proud to be a series
A investor in SenseTime.
Very proud to have done so much
around artificial intelligence
in the last two years.
If I had to categorize AI
and the state of AI today,
here's where I think the
US is still unparalleled,
as I speak from one of the
great universities of the world.
Our top universities
are turning out
the very best, brightest,
most creative technical
and philosophical leaders.
And many of the
best AI companies,
particularly in health care,
that Breyer Capital is backing
are interdisciplinary
groups of people,
whether it's biology, physics,
electrical engineering,
computing, or ethics.
They're all coming together in
these very interesting startups
that are working with hospitals,
academic centers in very
interdisciplinary ways.
That is happening in the US in
profound areas of health care,
in particular cancer,
AI, cardiology,
that is not occurring at
that level at Tsinghua
or the very best
Chinese universities.
So in a nutshell, my view is
that for deep domain-specific AI,
particularly in medicine and
health care, that diversity--
female leaders,
scientists, biologists,
machine learning experts--
comes together in the US
in more profound ways than
anywhere else in the world.
And I don't expect
that to change.
[SPEAKING CHINESE]
OK, excellent.
So I'm going to move
to Bob Langer next,
because he is steeped in
how technology impacts
many areas of health care.
So I thought we'd
just jump to Bob
and say, what are you
excited about in terms
of what the impact of AI
can have very specifically
in the medical and
health care field?
Well, I think there is
a tremendous opportunity
to do an awful lot of things
in medicine and health care.
I mean, just basically
because you can hopefully
get so much more information.
I mean, if you just look at
diseases now, most of what
we do, usually
there's a single test,
like say you could
go into the doctor
and get your cholesterol tested.
And that gives
you an indication,
say, for heart disease.
It's not perfect, but
it's an indicator.
But let's say you'd want to
take a disease like cancer.
I mean, you could--
which I think certainly
we don't really
know how to do early diagnosis.
But could you, for example, take
blood samples, urine samples,
and you can get something
like transcriptomes.
That's just one of
many things, or you
could get a protein
profile, in which case
you'd have enormous
amounts of information.
You could sort of get what I
call a fingerprint, in a sense.
And then you could actually
try to use AI to, for example,
analyze those fingerprints,
decide what type of fingerprint
gives somebody is at risk for
cancer, what person is not.
And then you could also do
drug testing that way, too.
You could see maybe you'll have
a fingerprint, for example,
that as you look at
it, that shows you
that that drug's going to treat
that cancer for that person.
Similarly, I think there's a
lot of other opportunities.
I mean, we do a lot of
chemistry in the lab.
And one of the big challenges
today in nanotechnology
is we've developed
nanoparticles that can deliver,
I think, some of the
drugs of the future,
like messenger RNA
and DNA, siRNA.
Right now, there's ways to
target them to certain places
like the liver, but
very difficult to do it
for other places.
But what we've been
able to do is literally
design thousands and thousands
of chemical structures.
And we're talking to
some of the people at MIT
and the AI area about how you
could find out which structures
are going to be most effective.
Let's say you do round one.
You do thousands.
And you see that
some work pretty well
at targeting certain
types of cell types
that then hopefully
will allow you
to start predicting what ones
to synthesize for round two
and so forth.
So I think there's
just an enorm--
and of course,
the upshot of that
is that you could someday have
much, much better therapies
for almost anything.
So I think there's just enormous
opportunities for the future.
OK, I'm going to come back
to health care in a minute.
I'm going to skip all the
way down to Professor Tang
and then back to
Jocelyn, back to Helen.
Professor Tang, you both
teach and start companies.
Part of your education was here.
But you are deeply steeped
in the China market.
If you were giving advice
to the future entrepreneurs
both in the US and China
about how to create a winning
company, what
advantages would you
tell them to draw upon
here versus China?
Well, thank you.
Well, since the tool
I have in my hand
is really like a hammer,
I approach everything
as if it's a nail, right?
So my advice to
the company, if you
start here, I think the
first thing I want to advise
is you start with the
collaboration with SenseTime.
[LAUGHTER]
That was not a joke.
[LAUGHTER]
So just think about it.
In the US, you almost
have everything.
You have the technology,
the university,
and the rule of law
and the venture capital
and all the big companies that
want to buy you when you grow.
So really everything's set up.
You have a pipeline.
But at the same time, everybody
here who starts a company
has the same access
to that pipeline.
So you are just
one of everybody.
So to have some
distinct advantage,
I think it makes sense
to actually collaborate
with a company in China,
Hong Kong or anywhere
in other part of the
world, because China
can offer something different.
They have market.
They have people who
are willing to work 24
hours for much less salary.
So anyway--
Today, today.
Anyway, if you
combine the advantage,
then it puts you in a
different position.
So I think for this topic,
I have some comments.
I think it should not be
really China versus the US.
It really should be
China and the US.
We should really
collaborate, work together.
AI is really a perfect tool to
break boundary-- break boundary
between countries, between
academic and the industry,
between different industry.
So we should really
take advantage of that.
And also every country
should approach
this in a different way.
In China, for example,
the government probably
plays more of a role
in working with companies.
In the US, it's
more market driven.
I think it's good.
In China, it's like you
are raising farmland.
So you have higher production,
but it's just [INAUDIBLE].
But in the US, it's
like a rain forest.
So everything can happen.
So you have something very
unique come out of it.
So they both have an
advantage and a disadvantage.
So the key is really
to collaborate.
Thank you.
Thank you.
Jocelyn, you invest
in Silicon Valley.
I do.
And I'm sure more
broadly than that.
But when you think about
what is coming out of there
and how you coach your
entrepreneurs to succeed,
how do you think about AI and
succeeding as a business coming
out of the valley?
Well, at Zetta we are
defined by three things.
We are the first stage of
institutional capital
that goes into a company.
So I'm working with
very early companies.
We're, I would say, the
invention capital, not
the growth capital.
We're defined by
seeking companies
that invest in
AI, that are built
around AI and around competitive
advantages from data.
Zetta stands for zettabyte,
which this crowd would know
is a trillion gigabytes.
And thirdly, we invest only
in startups with B2B business
models.
So I am very focused on
finding entrepreneurs
who want to transform the
world of business with AI.
And I think it's no surprise
that a technology that
is novel, that is cutting edge,
and that is potentially risky
really got its
start in consumer.
My introduction to AI
was working at Facebook.
And my first big project there
was adopting machine learning
for the newsfeed's ranker.
And it makes sense that
the first places where
AI could prove itself
would be things like movie
recommendations or deciding
whether to show you
the cat photo or the baby photo,
decisions where if we got it
right everybody was better
off, and if we got it wrong--
well, at the time,
we didn't think
that the order of
your news feed had
civilization-shaking
consequences.
Now that we move
into business, we
are trying to
solve problems that
are more mission critical
where companies are betting
their bottom line on it.
And internally we've talked about
this concept of the AI risk
curve, which dictates that
AI will be adopted not just
as the technology is ready
to solve the problem,
but also as human beings and
companies are ready to accept
the consequences, the social and
business consequences of using
technology for this problem.
So we think that
it's natural then
that in business
at first companies
were willing to adopt AI as
an assistant to a human being,
to make recommendations
to help human beings,
maybe to help
sales reps optimize
how they were calling on leads.
The next stage is
to trust AI so much
that it can automate
fully a task
and without human oversight.
And that, I believe, is
the area that we're in now.
But what is coming and
what really has me excited
is a world in which we can start
turning AI loose on problems
that are too hard for
human beings, problems
like human health, problems
like the health of the planet.
And I want to chime in and
sort of echo something Xiao'ou
said about
collaboration, which is,
I don't know if space race
is the right analogy here.
I don't want the US to have the
best health care in the world.
I want the whole world
to have the best health
care in the world.
I want the whole world to have
access to the best technology
to solve our climate problems,
because that's only a problem
that can be solved globally.
I want the whole world to
have access to the best ways
to design for smart cities.
And I think reasonable
people can disagree on
whether the whole
world ought to have,
for example, the best
military applications of AI.
So there's places maybe
where nation states
need to guard or
protect their secrets.
But I would say that I
am excited about startups
that have a global aim and
intend to have global impact.
Nice.
OK, so now you
brought up the issue
of potentially negative
impacts of AI and robotics.
I mean, I think there's a
general feeling in the public
that these things are scary.
They're going to take our jobs.
They're going to
change the world
in ways that are going
to drive more inequality.
So Helen, not to put you on
the spot, but as the roboticist
here, I thought this would
be a fun one for you--
[INAUDIBLE]
--which is if we're driving
more and more inequality because
of AI, because the assistant
becomes the worker,
what should we do?
Are we going to tax robots?
Are we going to--
how should we fix this?
And what's the role of
the entrepreneur in that?
We can talk about the negative.
But let's just touch
on the positive.
Having a little robot do
your vacuuming actually
saves you time so you
can do other things.
And people have plenty
of other things to do.
Having a cell phone in
a third world country
allows people to communicate
or know market data.
People would not give up these
technological inventions.
And as they get
more, I think it's
hard to predict exactly
what the ramifications are.
We're at pretty low
unemployment right now,
in this country at least.
And I just see so
many wonderful areas
that are going to be pushed.
I mean, electric
cars are coming.
Like, huge industries
are forming--
autonomous cars, drone delivery,
roboticizing the house,
making it care for you.
There's so many
areas that are really
changing the infrastructure
of everything.
And those will provide
jobs to people.
And then as a society, we can
choose what else we want to do.
Maybe there's more teachers.
Maybe there's more
exploration of space.
I think we have to make
those positive decisions.
The environment was mentioned.
We have to make those decisions.
I'm for the four-day
work week myself,
not for people who work
for me, but everyone else,
because people
couldn't have imagined
earlier on that people would
only work 40 hours a week.
It's like, you work
from sunup to sundown.
You maybe have Sundays off.
Then you have to
go and socialize
and go to church and stuff.
But I think we will
morph as a society.
And that's where policy
and AI and robotics
have to get together,
because I think
we can do things that make
everybody's lives improve.
Anybody else want to--
Oh, and we shouldn't
tax the robots.
Oh, don't tax the robots?
[INAUDIBLE] Taxing productivity
is a really, really bad idea,
because you want your
company to be more successful
and create jobs.
If you don't do that, someone
else potentially from China
is going to eat your lunch.
Jim, so I do think these policy
questions around privacy--
and I'm going to ask all of
you to weigh in on this one.
These questions around
convenience versus privacy
are really tricky questions.
And I think if we go
back to China versus US,
we've made really different
decisions about that
and partly because China can
make decisions with a very
centralized government.
And the US will have a
very hard time legislating
almost anything, as we've
seen over the last few years.
And so my question
for you is, how do you
think about the evolution
of that tricky problem?
In two minutes or less.
Yeah.
I mean--
I would just offer,
there are certain areas
where the privacy is so
important that, again, I'm
a very firm believer, if we just
focus on health care and AI,
I think we're one or two years
into a 10-year wondrous journey
on how AI, biology,
computational science, many
of the professors and postdocs
and students represented
in this room are going to create
huge, beneficial, wondrous
outcomes to improve
cancer diagnosis,
help doctors come
to better decisions.
The challenge is so
often in the press
or in discussions or in events
in Cambridge or Palo Alto,
it's an either/or.
Will AI replace the doctors?
Absolutely not.
Take doctors' decision making:
with computational pathology,
rather than waiting
10 days for slides
to come back, or with
breast cancer imaging,
you improve the efficiency and
then improve the accuracy
so doctors can do better
work for patients.
That is why the promise of
AI in health care
is so powerful.
Now with that, we just
have to be really stringent
around privacy laws, security.
And I think the US will
continue to have an advantage
relative to that, because for
the breakthrough outcomes led
by the women and men in this
audience, you need the privacy.
You need the security.
But it is wondrous when
I see the great hospitals
and universities working
together across departments
on interdisciplinary, wondrous
outcomes in cancer, cardiology,
and other areas where AI
serves as a foundation.
So Bob, you are a
professor at MIT.
And you've trained so
many future entrepreneurs
and will continue to.
When you think about how you--
and academia is open.
Academics collaborate worldwide.
When you think about
teaching this next generation
both entrepreneurship
and collaboration,
how do you think
that works with AI?
Well, actually I
think you want to--
I mean, we publish
everything we do.
I mean, we patent it, too.
But we publish everything we do.
And I think that we want to
get knowledge out to the world.
And I think that my experience
in most areas of technology,
or at least biotechnology, is
that publications are good,
that getting it out is actually
an additional validation.
But I would say that I think
both of the things
you said are right.
I think you want to do things
that are good for the world.
And I think what you said
about medicine is exactly--
that's the kind of thing
I was trying to get across
when Katie asked me earlier.
So I think that you
want to do those things.
You want to get technology out.
You want to learn things.
But what I see sometimes--
and I think this is an important
thing about entrepreneurism.
So many times, when
you come up with an idea,
let's say you came
up with an idea
in the AI area or some other
area, in my experience,
a lot of times people are
going to tell you it's
a terrible idea.
They won't give you any money.
And so I think the most
important thing to me
when I talk to
people in the lab is
we try to come up
with ideas that we
think will change the world.
You should recognize
that you're going
to get a lot of criticism.
And a lot of times
people are going
to tell you it won't work.
And I say keep trying.
Keep going after it.
And so if you don't do it, a
lot of times nobody else will.
And I think that's
the opportunity
for a great entrepreneur.
Awesome.
OK, so now we are going
to go down the line.
And we'll start at this
end with Professor Tang.
So AI is going to be
applied, I believe,
to almost every industry.
When you think of the
next big things coming,
what is the most
exciting space for you
that AI will be applied
to besides SenseTime,
besides [INAUDIBLE]?
[LAUGHTER]
I think AI is pretty much
going to be applied to anywhere
you have data.
And in this digital
world, it's really hard
to find somewhere that
doesn't have data.
So it's probably going
to be everywhere.
At the same pace?
Or do you think
something's going to lead?
It's certainly a different pace.
And also I think it's not really
about finding a new area to apply it.
We have so many areas.
We just only touched the
surface at this point.
You may hear a lot about
medical applications,
autonomous driving,
fintech, all this thing.
But if you go down to
it, it's really hard
to find any company who
has made a lot of money
from those areas.
Even for Google, they haven't
started making money on that,
because it's really difficult,
because you need to really work
with the industry.
And the industry is
the people who have
worked on it for 100 years.
So they have a lot of
[? domain ?] knowledge.
And people from AI,
from computer science,
they just know algorithms.
So it's really the two
sides need to work together
to get things done.
And just give you an example.
We are working with
a casino company.
So we were very confident.
We have these cameras and
all these things.
We can track all the
people on the blacklist
who come in to try to cheat
you, and all those things.
And they said, we don't care.
Nobody can cheat on us.
The house always wins.
The movies-- Ocean's
Eleven, Ocean's Twelve--
those are just stories.
What we care about is
our own employees.
We have 100,000 employees.
And when they are
dealing the cards,
they can really cheat us.
So those are our biggest losses.
Now, how can you
monitor those things?
So you don't know the
needs of the industry.
So for autonomous
driving, you also
need to work
with a car company.
They can teach you a lot.
But this is really
difficult. It's kind of hard.
So I think the
next step is not
to have all these big
stories of new breakthroughs.
It's just how we get
down to business
to actually make money.
If we cannot make money,
we cannot survive.
I think every technology will
have its 15 minutes of fame.
And for AI, we have
to think about how
to get through the 15 years of
fame to really get it working.
And I really think MIT
did the right thing.
They said that they wanted to
start a college focusing on AI.
But it turns out it's
a college of computing.
But in China, we have a lot
of colleges [INAUDIBLE] AI.
I think someday AI
[INAUDIBLE] bubble.
But really it's computing, it's
innovation, the technology--
that will survive.
Great.
Thank you.
Jocelyn, what are you excited
about that's going to pop next?
I think we're so much
in the early innings.
I think we're in the command
line days of the internet.
And someday 10, 20
years from now, people
will look back and think
how primitive we were.
I think what I am most
excited about is building
the structures and systems
that enable us to do this well
and to do it across the board.
I think the Schwarzman
College of Computing
is an excellent first
step, this idea
of educating AI bilinguals.
I am excited about,
and I think there's
a huge role for academic
institutions for MIT, Tsinghua,
Stanford and other places
like that to step forward
in terms of creating a code
of ethics, a code of standards
for the types of work that
we're doing around data and data
privacy.
I think those will be
fundamental building
blocks actually to enable
the kinds of innovation
we want to see.
And I look forward, frankly,
to more MIT grads and more
scientists, engineers,
and mathematicians
actually to go
into policy making
and to go into
politics so that we
can have policies
and regulations that
create structure
for what we're doing
that are informed by people
who deeply understand.
And I think those are all
sort of necessary precursors
to enable the world
we all want to live in
and that we want our kids
and grandkids to live in.
And I guess I
would ask you, Bob,
just slightly
differently, when you
look at where your students
are applying AI, like what do
you think is going to pop next?
Well, I think it's
what I mentioned and I
think Jim mentioned, too.
I think therapeutics and
diagnostics, I think,
are giant opportunities.
Jim, anything besides
health care that--
There's a lot, of course.
So much of what we all
need to do as investors
is go very deep
in certain areas.
And I think the big challenge
is the interdisciplinary nature
of bringing chemists,
biologists, computation--
I could go on and on-- together
in small entrepreneurial teams
that are functioning
extraordinarily well
and having diversity
of women, men,
underrepresented
minorities all at the table
as we're building these
next generation companies.
But my goodness, to be an
entrepreneur or postdoc as part
of an entrepreneurial
venture with affiliations
at MIT and elsewhere--
what a phenomenal
time, I believe,
to be a technology investor
as well as an entrepreneur.
And Helen, you get
the last word here.
You already know how
I'm going to answer.
Well, I wonder
how you'll answer.
Combining the computation with
the physical, things that think
or robots, because--
Maybe more specifically, though.
What are you excited
that you're seeing
where robots will be applied?
Even where they're at
today with autonomous cars,
unthinkable in the
[? '90s. ?] In fact,
[INAUDIBLE] we chose not to do.
We said, oh, they'll
never let us on the road.
They'll never let us get
insurance and stuff like that.
Package delivery, logistics
in general: the whole chain
is being automated, with
robots doing warehousing
and fulfillment, et cetera.
It will be trucks.
And getting the robots
into the physical world,
getting that
computation from just
being on desktops
and in the cloud
to in the physical world--
I find that the most exciting,
because at the end of the day,
they kind of come to life.
That's amazing.
Well, I want to thank the entire
panel for their comments today
and for being here.
Appreciate it.
[APPLAUSE]

---

### SOLVE 2019 Announcement
URL: https://www.youtube.com/watch?v=wx3nfcoDsKg

Idioma: en

Hello, everybody.
Hello.
Good afternoon.
Hello.
[LAUGHS] I'm Alex Amouyel.
And I'm the executive
director of MIT Solve.
[WHOOPING]
Woo!
Yeah.
I'm delighted to be here today
as part of this amazing day
of celebration.
Every year, we launch
four specific challenges
around education, health,
economic prosperity,
and sustainability.
Through open innovation, we
find the most promising
tech-based social innovators
from all around the world,
including those already
right here at MIT.
These Solver teams use
AI, machine learning,
and many other technologies
to improve the lives of
thousands and millions
of people already, and hopefully
millions more in the future.
I am delighted to
announce that today,
we are launching our
four new challenges.
They will be open until July 1.
Selected finalists
will be invited
to pitch their solutions
in front of our judges
at Solve challenge finals
in New York on September 22.
And of those, 32 will be
selected as Solver teams
and will receive
support from my team
to connect them with a
strong community of funders,
corporations, experts, peers,
and more to help advance
their work.
There's already $725,000
of funding available
so far for this cohort.
And over the last
two years, Solve
has brokered over $7 million of
funding for our Solver teams.
So it's worth applying.
But it's really also much
more than just funding.
And I'd like to notably thank
Eric Schmidt and Schmidt
Futures, who you heard
from earlier today,
as they're bringing a
new prize pool to us
with $200,000, which
focuses specifically
on supporting solutions
that use advanced computing
techniques or that leverage
artificial intelligence.
So it's very apt
we're here today.
And thank you, Eric.
[APPLAUSE]
So our four challenges
are number one, early
childhood development.
How can every child under five
develop the critical learning
and cognitive skills they need
to reach their full potential?
Number two, healthy cities.
How can urban residents design
and live in environments
that promote human health?
Number three, circular economy.
How can people everywhere
create and consume
goods that are renewable,
repairable, reusable,
and recyclable?
And finally, number four,
community driven innovation.
How can citizens and
communities create and improve
social inclusion and
shared prosperity?
There's a lot more info on our
website, so do check it out.
And everyone is
encouraged to apply.
So you can be an MIT
student, faculty, alumni.
Or you can be not
related at all to MIT.
You can be 13 or 84.
And our youngest selected Solver
was 13 when she was selected.
You can be a non-profit
or a for profit
or not registered at all.
You can be working in the US or
anywhere else around the world.
It's about open
innovation, and so we're
looking for the most
promising solutions
no matter where they come from.
If you have a good solution
for one of these challenges,
please apply.
If you know people
with good solutions,
please tell them to apply.
There are lots of other
ways to get involved.
Host a Solveathon,
come to Solve at MIT,
volunteer at our events, mentor
our Solver teams, join
Solve as a member, or bring
funding for a prize pool.
And do get in touch.
We'll only be successful
if everybody gets involved.
So thank you and happy solving.
[APPLAUSE]

---

### Computing the Future: Setting New Directions (Part 1)
URL: https://www.youtube.com/watch?v=9f2l1033lNo

Idioma: en

Good afternoon.
Welcome to our next session of
the day, computing the future,
setting new directions.
My name is Cindy Barnhart.
And I am MIT's chancellor and
the Ford Foundation professor
of engineering.
Being a chancellor can
entail different things
at different schools.
Here at MIT it's about
all things students.
So it is through this
lens, that of students,
that I view the future and
the directions we take at MIT,
and why I and so
many others at MIT
are so excited about
the Stephen A. Schwarzman
College of Computing, a catalyst
for inventing the future,
and envisioning and evolving
MIT's brand of research
and education.
Our students will
benefit in many ways.
For one, there will be
increased opportunities
to create and engage in
multidisciplinary education
and research, and to learn
through new integrated
curricula, and to pursue
flexible pathways and degree
programs.
No doubt our students will
reimagine and reinvent
their experiences
at MIT, and beyond,
in ways we can't fully
fathom right now.
And that opportunity
for invention
is just one reason we see
game changing possibilities
for the new college.
And speaking of
game changing, we
are grateful and thrilled
to be hosting and hearing
from David Siegel, Diane Greene,
Drew Houston, and John E. Kelly
in this session.
They are each luminaries
in their fields,
and also, thankfully, steadfast
supporters of MIT's mission.
They will offer us their
own unique experiences,
perspectives, and insights
on our session's theme,
while serving as powerful
examples to our students
about the kind of
positive impact they too
can have on the world one day.
So without further
delay, it is my pleasure
to welcome to the stage
David Siegel, co-chairman
of Two Sigma Investments, and
founding advisor of MIT's Quest
for Intelligence.
[APPLAUSE]
Thank you very much.
Thank you for the opportunity
to speak here today.
As we think together about
the future of computing,
though I'm extremely optimistic
that the best is yet to come,
my talk today will focus
on being realistic.
I apologize, but unlike what
has become common in the tech
industry today, there will be no
hype in what I'm about to say.
Maybe a little bit of hype.
As many of you know, I've
spent my life, pretty much
my entire life working
with algorithms.
And I focused on the field
of artificial intelligence.
The research and progress in
these areas in my lifetime
has been astounding.
We've moved, very importantly,
from a world where computers
assist people to one where
computers are routinely
making decisions for people.
And this is a big difference.
This is a massive shift.
And in my mind, of course
this will make the world
a better place.
But, and I want
to be realistic,
we are not there yet.
There is a fear that we could
move in a direction that is far
from an algorithmic utopia.
We need to build a world that
benefits from and understands
how these technologies
work, and most importantly,
where our new
technologies fall short.
I see a lot of
misinformation out there.
And I worry that it's
distracting people
from the real technological
problems that we face,
and, importantly, the
research that we must do.
I see parallels between
the world today
and that of the remarkably
entertaining movie Dr.
Strangelove.
If you've never seen it, you
should go watch it later.
And no, I'm not Peter Sellers.
[LAUGHTER]
I'm glad that you
guys got the joke.
It parodies the bizarre
thinking of the Cold War era.
The plot centers
around the dangers
of an unstoppable
algorithmic doomsday machine.
The threat of nuclear
war back then was real.
But paranoia about
the Red Menace
was borderline hysterical.
Some of the hype around our
increasingly algorithmic
AI-driven world, and the
dangers that might exist,
remind me of this paranoia.
I've learned to love
algorithms over the years.
But my experiences with them
have been very good but not
always perfect.
Important basic
research is still
needed to take our algorithmic
world to the next level
safely and sanely.
As algorithms are called upon
to increasingly make decisions,
we must all pause and reflect
on their current strengths
and weaknesses.
In this talk, I'd
like to highlight
four fundamental
research areas that I
feel need substantial
progress over the coming years
to avoid a
Strangelove-like future.
They are managing complexity
and reliability; security;
data integrity and bias;
and algorithmic explainability.
I believe the way professors
and students at this new college
approach these research
areas and others will
have huge implications for
the world we live in tomorrow.
Years ago, algorithms
were pretty simple.
It was easy to look at 10
lines of Fortran code--
as I did as a kid--
and understand what
they were doing.
Today we rely on algorithms
that run on complicated computer
systems.
They could have 10 million or 50
million or more lines of code.
Managing the complexity
of these systems
has become a real
engineering challenge.
It's one thing if
they are performing
something simple and harmless.
But it's a different
situation when
they perform more crucial
tasks, like driving your car.
Managing their
ever-increasing complexity
while maintaining reliability
requires fundamental advances
in software engineering.
Microsoft is full
of top engineers.
It's a terrific company.
Still, they famously have
trouble rolling out updates
to Windows.
Let's face it.
Windows is not that
complicated compared
to some of the big systems
we're developing today.
We are building
software that exceeds
the complexity of anything
humans have ever built before.
I mean, that's a
remarkable thought.
We have to continue to learn
how to manage this increasing
complexity better.
Related to this
challenge is security.
As our dependence on
decision-making algorithms
grows, we need to make
sure our systems are
more secure than ever.
People didn't pay much
attention to security
when the internet
was first invented.
And we all know that
we're continuing to pay
the price for that today.
Despite some important
advances, the problem
will become more acute as
decision-making algorithms
proliferate.
In any society you
have bad actors.
We have to assume
that people will
try even harder to exploit
security flaws of increasingly
critical software.
So let's learn from our
experience with the internet
and make security
a top priority.
Data integrity issues
are the third area in
which I think basic research
has to be prioritized.
This is in the news
quite a bit since data
is such an integral part of
machine learning algorithms.
But substantial
unsolved problems
in data privacy, data
ownership, and data bias remain.
Without solutions, the
algorithms that drive our world
are at high risk of
becoming data compromised.
Concerns about
data use can hinder
the development of algorithms
and their benefits.
For example, if there
really was a way
to guarantee privacy
and protection
of any kind of data--
let's take health care data--
people would be more
willing to contribute
their valuable information
to medical research projects.
Consider this.
A few years ago, the Centers for
Medicare and Medicaid Services
began to withhold some patient
records from research data
sets.
They contained information
on substance misuse.
The centers cited privacy
concerns and took the data out.
But substance
misuse is associated
with more than 60,000
deaths a year in America.
So that omission was a
big loss for researchers,
and by extension, patients.
We are missing the
opportunity to solve
pressing problems because of
the lack of accessible data.
Moving to my next concern--
and this is related--
are questions of data
bias, which are becoming
more and more important.
Frankly, most data sets
have some kind of bias.
Any machine learning algorithm
built from such data--
you guessed it--
is probably biased.
But I would point out that there
is some confusion between data
bias and ethics.
Ethics in AI
is a very hot topic.
When people talk about
ethics in the context of AI,
sometimes I think what they
really mean is data bias.
Not to digress too deeply into
the topic of ethics in AI--
that would take up the
rest of the afternoon.
And in fact part
of the afternoon
is dedicated to a
discussion on this area.
I'd just like to remind
people that in my view--
and maybe this is a
little controversial--
AI is really just a tool.
Algorithms are tools.
Tools can be used for
good things or bad things.
Perhaps it gets a
bit trickier when
your tool is making important
decisions automatically
for you.
And then it may be
reasonable to ask,
is this an ethical application?
But I think the
argument ultimately
applies more to
deciding if the tool is
being used appropriately
than to whether the tool
itself is appropriate.
There's another
problem too, which
is debating ethics is a
societal discussion that
is as old as humanity itself.
Last I checked, there
is no right answer
to most ethical questions.
And to me, that means
there is no universal way
to create ethical AI.
Maybe this is
controversial-- not sure--
but discussions about ethics
should really, in my view,
primarily be about how
the algorithms are used,
not about the
algorithms themselves.
But let's get back to data bias.
Data is the material
that makes up AI tools.
And biased data makes bad tools.
It's like having a hammer
that's unevenly balanced,
making it likely to hurt
someone or break something.
You don't want a biased hammer.
So research is needed to ensure
that the data sets that much
machine learning and other AI
research is being done with
do not have biases that impact
their effective use.
I'd finally like to talk
about explainability,
or interpretability, which
remains a central problem
in machine learning.
Why did a model do that?
Machine learning algorithms
can't provide an explanation
for their outputs.
Or at least they can't provide
an explanation in the way
that we normally would mean
explanation, an explanation
that humans can understand.
The parameters an
algorithm relies on
are numerous and complex.
There is no way, usually,
to intuitively explain
their thought process.
This is obviously a problem,
not just for researchers,
since it makes it hard to
improve your algorithms,
but also a big issue for
society as we increasingly
rely on these algorithms.
I believe we have to
be able to understand
the world we're
interacting with to be
comfortable with living in it.
People will only tolerate
AI's lack of ability
to explain its
decisions up to a point.
We don't complain or
ask too many questions
when decisions are in our favor.
Like when a loan
application is approved,
you're not going to
debate the loan officer,
why did you approve it?
Imagine, though, if someone
is denied a heart transplant.
And the AI that made that
decision is asked, why?
It can't just say, well, I've
solved millions of equations
and they concluded
no heart for you.
Sorry.
When an opaque algorithmic
decision works against us,
we want to know why.
And if the answers
aren't clear because
of the interpretability
problem, we won't stand for it.
And I think that's totally fair.
Related to this is the
issue that you can't yet
reason with an algorithm.
People reason with each
other all the time.
That is, in fact, related
to our unique ability
to do one-shot
learning, the ability to learn
from an example of one.
Presented with a single
compelling example,
we can change our minds.
This is a critical part
of human reasoning,
but not yet something that we
perfected in any kind of AI
or machine learning scenario.
OK, let's take stock of my talk.
I've shared some important
research problems
that need to be
addressed to manage
our transition to an
increasingly algorithmic
decision-making world.
In an earlier era of
computing, we typically
were writing software
to solve problems that
clearly had a right answer.
Years ago, one of the
first programs I wrote
used the Newton-Raphson
method to compute
the square root of a number.
How many people here
have done that exercise?
A lot.
It was a fun project.
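
For anyone who hasn't done that exercise, here is a minimal sketch in Python; Siegel's version would have been in Fortran, and the function below is only an illustration of the method he names.

```python
# Newton-Raphson for sqrt(a): solve f(x) = x^2 - a = 0.
# The update x <- x - f(x)/f'(x) simplifies to x <- (x + a/x) / 2.

def newton_sqrt(a: float, tol: float = 1e-12) -> float:
    if a < 0:
        raise ValueError("square root of a negative number")
    if a == 0:
        return 0.0
    x = max(a, 1.0)  # any positive starting guess converges
    while abs(x * x - a) > tol * max(a, 1.0):
        x = 0.5 * (x + a / x)  # one Newton step
    return x

print(newton_sqrt(2.0))  # ~1.41421356...
```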
And like most coding,
until more recently,
you could tell if
your program worked
because there was a right answer
that everyone agreed upon.
Today we are increasingly
asking computers
to do things where
there isn't necessarily
a right answer, or
at least an answer
that everyone agrees with.
I encourage the
research community
to focus on the critical points
that I've mentioned today.
Data bias can make algorithms
subtly subjective.
Ethics vary by
individual and culture,
complicating matters further.
Complexity can
introduce critical bugs
that are hard to find with
traditional testing approaches.
The lack of explainability
will reduce our confidence
that answers being produced
are believable or even correct.
One big advantage of
a human-driven world,
one that we've had over
the years, is that it's
proven to be very robust.
One reason it's
robust is each of us
goes about our
decision-making differently.
Imagine if critical decisions
are made increasingly
by a single, or a small
number of implementations
of an algorithm.
That robustness could be lost.
This is particularly challenging
when the decisions being made
don't have an obviously
correct answer.
It's probably OK for
the world to rely
on one provably correct
square root subroutine,
but maybe not for complex
and critical problems.
We have to be careful not
to turn over decisions
to algorithms and AI that are
beyond their current abilities.
Dr. Strangelove learned
that lesson the hard way.
Thank you very much.
[APPLAUSE]
That was great.
Hi there, I'm Diane Greene.
I am so honored and pleased
to be here with you today,
celebrating the Schwarzman
College of Computing.
I [? believe ?] compute,
and I think everybody here
does, is going to be key to
future research breakthroughs
across science
and engineering.
And the Schwarzman
College of Computing
positions MIT at the
forefront, readying
it to contribute and lead.
You know it's core
to MIT's success
that it provides faculty and
students the labs and the lab
equipment to make their
breakthrough discoveries,
things like the new MIT.nano
building or scanning
electron microscopes.
But now we have
a new tool that's
just completely general purpose
and pretty much every lab
is going to use, which is
compute and advanced AI.
And they also need data.
And if you will, data's kind of
like what the great libraries
full of books used to be.
So we need these three
things for our researchers.
They need access to data.
They need self-learning AI.
And they need a lot of compute
to do their groundbreaking
research.
Today, the public clouds
have the leading and largest
amount of compute capabilities.
Industry also has access
to most of the data.
Compute is super expensive,
although we keep driving costs down.
And that issue needs addressing.
And I believe we can do this
by evolving our models for how
industry, academia, and
government all collaborate.
And we need to solve
the funding issues.
You know computers, they've
been important to research
since the first
[? ENIAC. ?] But we're now
at this turning point
in their criticality
because instead of being
constrained by a human's
ability to code up an algorithm,
we now do algorithm development
by combining data
with the algorithm
and having it learn
and find patterns
faster and across more data
than any human is capable of.
And it can actually
surpass what a human can
do for finding these patterns.
Of course with this
AI, and particularly
with the compute-intensive
technologies,
such as neural
net deep learning,
the rule is the more
compute and the more data,
the better the accuracy
of the results.
Just to give you an illustration
of the power of AI in science,
an experiment was run by
the late physicist Shoucheng
Zhang and his
research group, where
they used machine learning to
rediscover the periodic table.
And they were able to do
it in just a few hours.
And this, perhaps one of
chemistry's greatest
achievements, originally
took scientists years
of trial and error to achieve.
And I think it shows
that AI will undoubtedly
be instrumental in future
Nobel Prize-worthy discoveries.
This is a pretty
different situation
from the 60s and 70s when
DARPA funded the ARPANET
to connect the universities.
And researchers
created the internet.
And then it went into
the civilian sector.
The large internet
companies invented
a disruptive and
highly profitable
model that has let
them transcend the need
for government-funded research.
And they've been able to
fund their own compute data
management and AI.
And as a result, the
large internet companies
have the most advanced
compute technologies.
So industry's now ahead of
the academic institutions
in compute
capabilities, in access
to some of the biggest data
sets, and also in the ability
to assemble large numbers of the
world's leading AI researchers.
But this is problematic because
it's not industry's mission
to do basic research, or to
collaborate with outsiders,
although they often do.
A friend at Cornell, PhD
student [INAUDIBLE] [? Raghu, ?]
recently ran a few queries for
me on the origin of papers
over the last three years in
the top two AI conferences,
ranking them by the
number of citations.
And it showed that up to 60%
of the top 5% of the papers,
ranked by citation,
came out of industry,
whereas across all the
papers, 25% came from industry.
Alphabet's AI research
lab, DeepMind, recently
won a protein folding contest.
And a computational biologist
said, DeepMind's success just
kind of came out of the blue.
He said their work was just
tremendously impressive.
But he said it should be noted
that they did have the best
machine learning tools
and the deepest
pockets for compute.
So industry's cloud data
centers are super big.
Single buildings could have
a whole football stadium
inside of them.
They're hospital-clean, have
rows and rows of compute racks,
and what you see increasingly is
huge numbers of specialized AI
compute clusters.
They need this because the speed
and the real-estate efficiency
of computer chips are no
longer doubling every year or so,
as Moore's law said.
And so the speed
improvements now
are coming from
workload-specialized custom
chips and architectures.
This underscores the validity
of a new axiom, no chip, no AI.
China understands this.
They've invested, I read
recently, some $65 billion
over the last two years
to jump start their
semiconductor industry
capabilities.
The cloud companies are spending
billions of capex dollars
every year to expand their
compute storage and network
capabilities.
And the researchers that
work in these companies
have more access to compute
and to certain proprietary data
sets than anyone else.
They're also able
to pay a multiple
of universities' salaries.
Two weeks ago I was in Germany.
And a professor told me that her
university's top AI graduates
and scientists were constantly
being recruited away,
either to America or China.
Last week I was told by
two different people,
independently, that a Chinese
university near Hong Kong
was given $800 million
just to recruit AI talent.
And another
interesting thing
is, as the rest of the
world starts to adopt AI,
it's even possible
that we're not
going to be able to
keep up with the demand.
If you look at the internet
companies, or at OpenAI,
a non-profit AI
think tank that's
working on artificial
general intelligence,
they recently issued a report
that showed that the large AI
training runs over
the last few years
have been growing exponentially,
doubling every 3 and 1/2 months
since 2012.
That's pretty fast growth.
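
To put a number on "pretty fast," using the 3.5-month doubling time quoted above: compute for the largest runs compounds to roughly an order of magnitude per year,

$$
2^{12/3.5} = 2^{3.43} \approx 10.8,
$$

so six years of that trend multiplies training compute by a factor of more than a million ($2^{72/3.5} \approx 1.6 \times 10^{6}$).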
In a sense, capitalism
is doing its job.
And perhaps we just
leave well enough alone.
The industry AI labs are
open sourcing their work.
They're publishing
their research.
And in general, they
really are working
to make the world
a better place.
But that won't
solve for the sort
of long term research that leads
to the giant breakthroughs.
Where will that
continue to come from?
And that's why we're here
today because we believe
that's precisely where
MIT and the Schwarzman
College of Computing will
play an important part.
As an aside, EECS
long term research
is starting to be disadvantaged.
In particular,
the cloud companies
run these mega operations
across more and more specialized
workloads, and they
gain unique knowledge.
For example, they know how
many of each type of machine
instruction gets executed
for the different workloads.
And they can experiment
with new configurations
in every new data center.
I'll just quickly mention that
access to large amounts of data
also brings challenges.
First, it needs to be cleaned
in order to be useful.
This is super labor intensive
for graduate students;
industry is using large
amounts of compute
to develop and run
automated techniques.
The data is also
siloed and difficult
to join with other
data sets, particularly
if they're spread
across different clouds,
since they don't
seamlessly work together.
Then valid privacy concerns
are driving regulators
to legislate the use of data
in sometimes counterproductive
ways, like those bits can't
cross my geopolitical boundary.
This advantages the countries
with the biggest populations
and the fewest regulations.
A recent article
in Nature Medicine
described a
China-based study that
showed that deep
learning could be
used to diagnose
common childhood
diseases with high accuracy,
at the high end of what
the doctors could do.
This was a collaboration
of American and Chinese
researchers.
They had a 600,000
patient data set.
It's from Guangzhou, China.
It'd be difficult to
get that much data
and use it similarly
in the United States
due to our regulations
about patient data.
And then industry
is understandably
careful about sharing data that
could violate users' rights.
We can't yet credibly guarantee
the privacy of users' data.
So it's good that the
Schwarzman College of Computing
will take a seat in
policy discussions,
do scholarly work there,
and help shape regulations.
As an aside, I was here
this morning listening
to the ethics
discussions, and I just
wanted to mention that
technology is always pounced on
by aggressive
opportunistic people.
My first startup was a
low-bandwidth streaming video
company.
And guess who the
first users were?
It was all for pornography.
And we had to decide, no,
we don't want pornography
using our technology--
which we did.
It didn't cost us
much at the time.
But then at VMware, the
virtualization company,
some of the first
users were hackers
using the sandboxing
of the virtual machines
to plan their hacking attacks.
Looking at the data problem,
in order to solve it,
I had a really
interesting discussion
with a guy, Keith
[? Weintraub. ?]
He's an MIT alum and
Stanford professor.
And he did a lot of
work to simulate data.
But then he could not
replicate the same
results between his simulated
data and the real data.
So then he went to
incredible effort
to collect his own data-- in
this case, streaming video.
He still only
had a tiny fraction
of what YouTube, Netflix,
and the others had.
But he did it anyhow because
in talking to the companies,
they were open to
sharing with him.
But they didn't have all the
data that his research needed.
These are still early days.
Data is growing
incredibly quickly.
And there's an
enormous opportunity
for the Schwarzman
College of Computing
to be part of the solution.
To give a sense of how much
more data will be coming,
think about edge to cloud
connected device estimates.
AMD CEO and MIT
alumna Doctor Lisa Su
recently said at CES
that two billion sensors
are in use today.
And it's estimated to grow
to 35 billion by 2020.
Anecdotally, she also
said, just for fun,
I just really love
building chips.
We're so excited
about technology.
We can help turn the impossible
into the possible, which
is, of course, a
classic MIT phrase.
So in summary,
compute, data, and AI
are now invaluable to
researchers, research
scientists, and engineers.
The industry labs have been
able to attract the largest
collections of top AI experts
and provide them with the most
compute and the highest salaries.
For companies and our
economy to continue
to benefit from the
commercialization
of breakthrough discoveries, we
need to encourage more industry
MIT collaboration.
And it's in everyone's
best interest to have
government fund these efforts.
It's common that
students and professors
will do an internship or take
a sabbatical in industry,
which is really valuable.
But the focus doesn't tend
to be on long-term research
or radical new ideas for
how to approach problems.
And the IP stays
with the company.
If we could also have
top industry researchers
and engineers take sabbaticals
and [INAUDIBLE] MIT,
then we'll benefit from
industry's unique knowledge
and assets.
And when a breakthrough
does occur,
the industry researchers
will be in a position
to bring it back
to their companies
and accelerate the
commercialization.
Finally, we need to find
a way for government
to properly fund the
immense compute needs
that faculty and students have.
I wonder, could
the cloud vendors
receive a tax break for donating
compute to universities?
Should the government
and universities
collaborate on a giant cloud?
That would be super expensive.
It's urgent that academic
researchers and their students
have access to generous
amounts of compute
on a par with industry.
This means developing
mechanisms
to fund compute and salaries
and to solve the IP and privacy
issues, so that we
can bring industry
collaborators and their
resources to academia.
It's needed to help the
extraordinary minds at MIT
continue doing long-term
fundamental research
with every advantage
that's available.
It turns out that compute
and advanced AI are now
critical to the
research process,
and thus they're integral, as
we know, to the mission of MIT
and the Schwarzman
College of Computing.
So thank you very much.
I'm excited to see what the
coming years bring to MIT.
[APPLAUSE]
All right, good
afternoon, everyone.
I am always so happy
to be back at MIT.
It's always a
highlight of my year.
And what an amazing
occasion to be here.
So it really makes
me really proud
to see MIT continuing
to lead from the front
and making ambitious investments
like the one in the college.
All right, so we're going to
talk about computing in 2030,
and specifically something
that I call the human machine
partnership.
So humans have a
pretty unique ability.
So we have this superpower--
let me get the clicker going.
Let's see.
Well, we've a
superpower where we
are able to invent
technology and tools that
make our life easier.
And we offload a lot of
our work to machines.
And so we've offloaded a lot of
literal heavy lifting to machines,
culminating in things like
the Industrial Revolution.
And more recently we've
started to offload
a lot of our intellectual
work to machines.
So back in the 1600s, a computer
was a person, not a thing.
And then now, of
course, that's changed.
And computation
is something that
used to be done only by humans.
And now it's almost
entirely done by machines.
And that's a really important
trend that's going to continue.
So we've created this virtuous
cycle where we invent machines.
We offload more work.
We use some of our free time
to invent even better machines.
And a lot of good things happen.
Our productivity goes up.
We raise our standard of living.
And we're able to tackle
bigger and bigger challenges
as a society.
And all of us here
are beneficiaries
of that compounding growth.
And with every
turn of the cycle,
we also find new things
to do with our time.
And I'm not talking
about things like being
an Instagram influencer,
or a drone operator.
Those are real jobs now.
I mean just things like just
the concept of knowledge work,
or the concept of going to
an office from nine to five.
That's a relatively
recent phenomenon.
In fact, the term
knowledge work was only
invented about 60 years ago.
And even in my own
lifetime, as a millennial,
we've seen a lot of
profound changes,
from the birth of the PC, to
the rise of the modern internet,
to cloud and mobile in
the last 10 or 15 years.
Technology is
continuously transforming
our lives and the way we work.
And our productivity
is always accelerating.
Except for the fact
that it kind of isn't.
And this was kind
of surprising to me
because I'm like, oh, we
have all this technology.
But around 10 years ago, the
rate of productivity growth,
at least in the US, fell
pretty dramatically,
below historical levels.
And now, I'm not a
labor economist.
And some of you in the
room are labor economists.
So I won't get too
ahead of myself here.
But I want to talk
about a couple
of things that can't be helping.
And I really started
getting interested in this
a few years ago because my
company was doing super well.
But I felt--
I had this feeling that my
life was kind of on autopilot.
And that my work, or
the experience of work,
had turned into this grind
or this kind of treadmill
where you wake up,
or I wake up, try
to get a workout in, go to
the office, meetings all day,
come home, try to eat, clean
out my inbox, pass out, repeat.
Now to be sure these are
first world problems.
I'm very lucky to be in that
situation in the first place.
But at the same time
I'm like, is this it?
[LAUGHTER]
And it was frustrating because
I definitely felt busy.
And I was working hard.
But I didn't feel productive.
And when I looked
into it, it turned out
I wasn't that productive.
And maybe none of us are.
And when you shine a
light on where we spend
our time at work--
and McKinsey did this
a few years ago--
It turns out that we spend
the majority of our time--
actually over 60% of it--
on certain tasks, like
finding information,
and email, and
coordinating people.
I like to call these
things work about work.
And to be fair, not all
60% of that is waste.
But that does mean
that we only spend
40% of our time doing the jobs
we were actually hired to do.
And so I thought about
that a little more.
And I'm like, OK,
what that's really
saying is take Monday,
Tuesday and Wednesday
and just light them on fire.
This week and every week
of your working life times
a huge chunk of the planet.
Or in other words, for those
of you who are students,
and you get your whole
life in front of you,
and you're about to begin
a 40 or 50 year career,
unless something changes
you have a decade, maybe
two decades of email
to look forward to.
[LAUGHTER]
And that's not all.
As you can see, all the
different apps we're using now
have taken our attention
and they've shattered it
into a million little pieces.
So it doesn't make
a lot of sense.
How many of you would say that
you were hired for your minds?
That your job involves
using your brain at work?
OK, now how many of you feel
that you actually have the time
and space to actually think?
That you're able to-- you finish
every day and you're like,
man, I just crushed it.
I was in the flow state.
I was super focused.
If you have any tips
for the rest of us--
but a lot fewer hands, right?
And no one is immune to this.
So if Einstein
were alive today--
[LAUGHTER]
--what would his day be like?
He'd spend his
first couple hours
archiving LinkedIn
invitations and Groupons.
And then he'd get down to work.
And just as he was about
to have some brilliant flash
of insight, his phone would
buzz, and it would be a Trump
tweet or a Slack notification.
But would we still
understand relativity?
And maybe there's like
10 other breakthroughs
that would've happened by now
if we weren't in this situation.
Like, we'll never know.
And it's not just inefficient.
It's also making
us really unhappy.
So two-thirds of people report
being disengaged at work.
Half of people report being
consistently exhausted at work.
And those numbers have
only been going up
over the last 20 years.
And this is tragic
because work can be
such a fulfilling experience.
And everybody deserves that.
So OK, something
needs to change here.
But what should we do about it?
The first thing we need
is a change in mindset.
So if you think about
the knowledge economy,
it's basically one
ginormous exchange
of dollars for brainpower.
Right?
And every company can
tell you down to the penny
where payroll goes,
where the dollars go.
But then when it
comes to the other side,
where the brainpower
goes, we are totally blind.
We're not even measuring it,
let alone spending it well.
And this is insane.
Because every single project
that we've talked about today,
and every single challenge
that we have as a society,
depends on being able to
harness our brainpower.
And it depends on being able
to do knowledge work well.
And we have a long
way to go there.
But we need to start
treating our brainpower
like the precious
fuel that it is.
It's the fuel for
human progress.
And so it's things
like this that
were reasons why we evolved
our mission at Dropbox to put
a dent in this problem.
And we believe that knowledge
work has gone totally
off the rails and needs some
fundamental re-imagination.
So we decided our mission is to
design a more enlightened way
of working.
And we'll talk about
Dropbox another time.
But to borrow the words
of a wise professor
from a couple hours
ago, my advice to you
is download Dropbox.
But anyway, so what
should we do about this?
So I think that there are
a few tactical things.
Like we need to create
the conditions for people
to be able to use
their minds at work.
So first I remember visiting my
dad at work when I was a kid.
And he spent most of his career
at Draper Lab down the road.
And I went into his office.
And a lot of things
are the same.
So he had a desk.
He had a PC.
He had a phone.
But a lot of things
were different.
He got five emails
a day, not 500.
And he had a door
that could close.
And he could turn his phone off.
And you could kind of
see how someone like that
could get, actually
use their mind at work.
Now I'm not saying that
we should go back to 1995.
But we definitely need a
much calmer and more focused
working environment.
That's not going to be enough.
And then when you
think about, well,
how do we improve
knowledge work,
I think machines can help a lot.
And I think
following this recipe
of offloading a lot
of our busy work
to machines is a
good one to follow.
And we can start.
There's a lot of busy work here.
But maybe if we could
find a way to have
machines do more of
the heavy lifting,
then we would be freed up.
And maybe things would start
going in the right direction.
So how do we do that?
Humans and machines have
these complementary abilities.
And computers have a kind
of mechanical intelligence.
So they're really good at
following instructions,
really good at
following recipes.
They don't make mistakes,
so a lot of benefits.
But they have a ton
of limitations too.
And as we've heard
a little bit today,
a three-year-old
human can run circles
around our most powerful
supercomputer when
it comes to basic things like
carrying on a conversation,
or just knowing basic
facts about the world.
But that's starting to change.
And that's a big part of why
all of us are here today,
is there's been this
amazing Renaissance in AI
and machine learning
in the last 10 years.
And machines have been
getting all these new skills.
So they can see.
And they can hear, and
speak, and to some extent
they can read and
understand things,
certainly to much greater
degree than in the past.
And these are just a few of
the headlines in the last month
alone.
And some of them seem
a little ridiculous,
an AI that spews fake news,
AI that plays StarCraft.
What are we really
looking at here?
But under the hood, there's some
pretty amazing stuff happening.
So you just take the
StarCraft example.
So many of you have
probably heard about AI
dominating humans in board
games like chess and Go.
But it turns out what's
really exciting from AI
perspective is that these
AIs are teaching themselves
to play these games.
They're not running some
pre-programmed strategy
from a human.
And it also turns out
that a game like StarCraft
is actually a lot
harder of a nut
to crack than some
of those games.
And the reason is that with some
of these real-time strategy
games, like StarCraft,
what makes those hard
and what makes real life
hard have a lot in common.
So there's a lot of uncertainty,
a lot of complexity,
a lot of ambiguity.
And things are
happening in real time.
So decisions and stuff is
coming at you a lot faster
than you can deal with it.
I love StarCraft.
I've been playing
StarCraft since I was 14.
I'm also CEO of a
2,500 person company.
The parallels between
those two things
are actually pretty surprising.
[LAUGHTER]
And I don't think this is going
to happen in the next 10 minutes,
but I think in the future,
we're going to look back on this:
AI being able to figure
out how to coordinate,
and plan, and juggle these
things is a really big deal,
and it can help save
us a lot of time.
But when you add
all this up and
unpack what it
means to actually do
some of these things
from an AI perspective,
it means that
computers are starting
to get these new skills.
They're able to synthesize
and summarize text.
They're able to read articles
and answer natural language
questions about them.
And they're able to start
doing things like organizing,
and coordinating, and planning.
And these are things that
occupy a lot of our time.
So it's pretty exciting.
And then we look at a lot of the
questions that we have at work,
or the things that
we need to get done,
machine learning can help a lot.
And so you take a
question like, all right,
which of these projects
might run behind schedule,
or which of these emails
do I need to respond to?
A data scientist
would translate that
and say, oh, that's a
classification problem.
And computers are really
good at taking all this data,
filtering out the noise,
figuring out what's relevant,
and making accurate predictions.
And they're getting
a lot better.
Or something like, hey,
organize my email to topics.
And the data scientists
will be like, oh, it's
a clustering problem.
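
To make those two framings concrete, here is a minimal sketch in Python with scikit-learn; the tiny inline emails and the labels are invented for illustration and are not from the talk.

```python
# "Which emails need a reply?" as classification; "organize by topic"
# as clustering. Toy data only; a real system needs far more of it.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

emails = [
    "Can you review the budget before Friday's deadline?",
    "Limited time offer: 50% off all subscriptions!",
    "Schedule update: the product launch slips two weeks",
    "You won a free cruise, claim your prize now",
]
needs_reply = [1, 0, 1, 0]  # hypothetical labels: 1 = respond, 0 = ignore

vec = TfidfVectorizer()
X = vec.fit_transform(emails)  # bag-of-words features per email

# Classification: predict whether a new email needs a response.
clf = LogisticRegression().fit(X, needs_reply)
print(clf.predict(vec.transform(["Please send feedback on the draft"])))

# Clustering: group the same emails into topics, no labels required.
print(KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X))
```

The point is the translation step: a workplace question becomes a standard, well-studied machine learning problem.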
And importantly, AI
doesn't need to be
perfect to be really useful.
Just ask anyone who's
used Google Translate,
and there are a ton of other examples.
And more and more, what we
have to look forward to
is that our machine
brains will take a first pass
at everything and cut
through a lot of the clutter
so that our human
brains don't have to.
And hopefully one day
that super obnoxious
15,000-unread-everything
badge will finally go away.
But anyway, you can start to
see how some of these skills
might be applied to things like
finding information in email
and coordinating-- all this
work about work that occupies
the majority of our time.
We'll start being massively
assisted by machines.
And frankly, when
you think about machines
being able to organize and
prioritize and plan and
coordinate, we're going to see
that these are increasingly
machine tasks in disguise.
In the same way that
computation went
from being a human thing
to a machine thing,
we're going to see
a lot of other verbs
move in that direction too.
And it's going to be a gradient.
So some will be fully automated.
And some will be more assisted.
But one way or
another, the machines
are going take a lot of the
heavy lifting off our plate.
And we're going to be freed up.
And this is important because
the world needs humans.
Computers don't have feelings or
dreams or soul or imagination.
We've got a lot of
unique abilities
that computers won't be able to
match anytime soon, no matter
how much of an AI
optimist you might be.
And so what we should be asking
for is the best of both worlds.
It won't be long before we
all have an AI co-pilot that's
able to not only take the
drudgery out of our work,
but it will spend
all of its cycles
to make sure that
all of our cycles
are used as effectively
as possible.
And this is important.
This complementarity
and augmentation
is important because so
much of the narrative in AI
is kind of about this
existential zero-sum
battle against the robots.
First, they're gonna
take all our jobs.
And second, they're
gonna wake up.
And then, they're going
to kill all of us.
And I think at least part of
that narrative is correct.
And we should not
kid ourselves; not
about the robotic apocalypse.
But things like
automation are going
to displace a lot of jobs.
And they're going to
create real problems.
This is gonna be one of
our biggest challenges
as a society.
And fortunately, we got a lot
of smart people already making
progress in some of these areas.
That's one of the reasons I'm
so excited about the college.
For example, I really
enjoyed spending time with
[INAUDIBLE], who is at Sloan.
And she's shown, through
her Good Jobs Initiative,
that companies like
Costco that invest
in their employees,
including entry-level employees,
pay them well, give them career
paths, and give them training
are actually more profitable and
more competitive than companies
that don't
make those investments
or that treat their employees
like cogs in a machine.
So we're going to need a lot
more of that kind of thinking.
But what excites me most
about the computing in 2030
is that we're on the
eve of a new generation
of our partnership
with machines,
one where we can combine
the unique superpowers
of the human brain
and the silicon brain.
And we'll be able to redesign
knowledge work so that machines
take a lot of the
busy work and so
that people can be freed up
to do the people's stuff.
And we can spend our
working days on the
subset of our
work that's really
fulfilling and meaningful.
And as we do that,
we'll be so much
better equipped to tackle all
the many challenges we face.
And my hope is that in
2030, we'll look back on now
as the beginning of a revolution
that freed our minds the way
the Industrial Revolution
freed our hands.
And my last hope is that
it happens right here,
that the new
College of Computing
is a place where that
revolution is born.
Thank you.
[APPLAUSE]
Good afternoon.
I'm John Kelly, IBM
Executive Vice President.
But more importantly
this afternoon,
I am a technologist
who has spent
nearly four decades in
technology and information
technology.
I'm also a person who works in
the most advanced labs in IBM
and around the world
in universities.
So I get to understand the
past, as well as see the future.
I want to congratulate Rafael
and the entire MIT community
for this new college.
It couldn't be more important to
the United States and the world
than right now.
And Steve and the
Schwarzman family,
I want to thank you not only
for the magnitude of the gift,
but I argue the timing of the
gift is what's really critical.
Because we are not on
some form of a continuum
with just another
technology rolling forward,
we are at an inflection point.
We are at the beginning of
something which we're just
barely beginning to understand.
And we've talked a little
bit about it this morning.
But let me show you my
perspective on this.
And let me put it quickly
in a historic perspective.
The first era of computing was
all about mechanical switches
and doing arithmetic.
And that underlying technology
evolved into vacuum tubes.
And eventually we
ran out of power.
And we ran out of humans
to make those machines do
what we wanted them to do.
So we went to a new
technology, transistors.
We started to
integrate circuits.
We started to integrate at
an exponential rate, which
is Moore's law, and started to
increase the density of memory
and storage in these computers.
And all of a sudden,
we realized in the '50s
that, hey, we can
program those computers.
We can put a program
into that machine.
The storage is big
enough to process things
fast enough that we can teach
the machine to act and do
things that we do.
The last 50 or 60
years of computing
has been the programmable
era of computing.
Everything from
the largest systems
that IBM builds to the
laptop or your phone
are programmable systems.
That computer cannot do
a single thing that we
haven't programmed it to do.
We're now entering an
entirely new era of computing.
This is not just about
artificial intelligence and
machine learning
suddenly appearing.
We now are moving to
technologies and machines that
do not require programming.
They learn on their own, which
is the artificial intelligence.
And the underlying technology is
moving fast, as I'll show you.
And we can't even
begin to understand
what's going to happen in this
third era because we're only--
I pick 2011 as the start only
because that's when the IBM Watson
artificial
intelligence machine appeared--
but we're only eight
years into what
will be a minimum
of 50 or 60 years,
and I will argue,
more than that.
So the timing, Steve, of this
gift at this point in time,
at the beginning of a new era of
computing, couldn't be better.
It reminds me of the
early days in IBM
when we invented the first
programming languages,
the FORTRANs of the world.
And we went to the
universities and said,
we need a new curriculum
around computer science.
And they said, what's that?
We said, we need to
invent programming.
What's that?
Well, we need colleges
and universities now
to major in this area.
Let me point out what's also
different now in the underlying
technology.
Computing systems have
been designed basically
for that programmable era.
We started in the '50s with
very simple programming.
We advanced those
technologies with Moore's law.
We began to do very heavy
duty multi-processing,
massive parallel processing.
We threw everything we could
at this roadmap of computing
to eke out computing power
in the programmable era.
While we were doing
that, we were working
on artificial intelligence.
But we were using the
programmable systems
to do artificial intelligence.
The firsts in IBM AI
history, the first games
of checkers, the first game
of chess, Deep Blue that
beat Garry Kasparov, were
on a traditional computer
with a little bit of
AI specialty in it.
The system, the infamous
Watson Jeopardy system,
was fundamentally on
a classical computer.
We operated AI separately
from programmable computing.
That time has ended.
We are now co-designing
artificial intelligence
in our systems simultaneously.
And the best example I can give
you of that new convergence
and where computing is going
now is the Summit system
or the Sierra system.
These are the two
largest computers
in the world that we built
for the United States
Department of Energy for
science and for defense.
Those systems not only
perform the largest
traditional modeling
computation of 200 petaflops,
which by the way--
I was told not everybody's
an MIT computer scientist--
that means that it does
200 billion calculations a
million times a second, 200
billion calculations a million
times a second.
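
For the unit-minded, that restatement is just arithmetic:

$$
200\ \text{petaflops} = 200 \times 10^{15}\ \tfrac{\text{ops}}{\text{s}} = \underbrace{200 \times 10^{9}}_{200\ \text{billion}} \times \underbrace{10^{6}}_{\text{a million times}}\ \tfrac{\text{ops}}{\text{s}}.
$$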
As impressive as that is, that
is a more powerful AI system
than a compute system.
In fact, it was
only on the floor
at Oak Ridge for
about 30 days when
a team from Oak Ridge, with
help from some people at Google,
proved that that
machine set a world
record in AI learning.
And that's because there
are as many or more custom AI
accelerators in that system
than there are general purpose
processors.
So for the first time, the
two worlds have come together.
And every system that IBM
designs going forward now
will be optimized for
artificial intelligence.
It's a different world.
Keep that system in mind
because I'll come back to it
in a minute.
The other thing I should
tell you about that system
is not only is it big--
it's about three
tennis courts in size,
it's still smaller than a cloud,
but more powerful than a cloud,
both in terms of
compute and AI--
but that computer
consumes 13 million watts
of power, 13 megawatts of power.
For reference, what's between
your ears is about 20 watts.
So with 13 million
watts of power,
we're still barely
approaching what a human
can do with 20 watts of power.
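
The gap he's pointing at is easy to quantify from the two numbers he gives:

$$
\frac{13\ \text{MW}}{20\ \text{W}} = \frac{1.3 \times 10^{7}}{2 \times 10^{1}} = 6.5 \times 10^{5},
$$

about 650,000 times the brain's power budget.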
So we're doing something wrong.
We're doing something wrong.
Realizing that we had to
push the roadmap of AI
and computing
together, last fall, we
came up to MIT and Rafael and
said, from an IBM standpoint,
let's try to advance
the research here.
Let's try to advance
the research.
We formed a new partnership.
We're putting in hundreds
of millions of dollars.
We're putting our best
talent from IBM co-located
with the best
faculty and students
here at MIT to try to advance
the technologies beyond Summit
and Sierra, all the way from
the underlying physics of
lower-power artificial
intelligence processors,
to how we secure
these systems, to how
we share the
responsibility for the systems
that we're going to be creating.
And as new as this new Watson
AI lab is, I can tell you
we are ecstatic with
the success of this lab.
And in our view, the synergy
between this lab, Steve,
and your college is going
to be off the charts.
We're just thrilled.
And I thank all of
the tremendous faculty
who have gravitated to this.
The research and
the papers and ideas
that are coming out
of this are stunning.
So as I said,
everything we've built
in the last 60 or so years has
ridden this Moore's law curve.
And I see Rafael; he and I
worked for years and decades
on the technology behind this.
I see Ray Stata has worked
for decades in the industry
as well, many of my
colleagues from the industry.
Moore's law basically
was an observation
that we doubled the number of
transistors in a performance
every 18 months or so.
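
Stated as a formula (a standard rendering of the observation, not the speaker's words), an 18-month doubling time means transistor count grows as

$$
N(t) = N_0 \cdot 2^{t/1.5}, \qquad t\ \text{in years},
$$

roughly a factor of 100 every decade.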
And that has served us well
as we shrunk the devices.
But fundamentally, that
law is based on the fact
if we keep shrinking,
it will go faster.
That is no longer happening.
And the energy consumption
budget of riding that law,
as I told you, is 13 megawatts.
So we're reaching
the physical limits
of shrinking and Moore's law.
At the same time, we can't
get that much power and energy
in and out of a box.
It's becoming
physically impossible.
So with that being the case,
what's beyond Moore's law?
How do we decouple
ourselves from this concept
of let's build denser
transistors, which are either
a 1 or a 0?
And can we compute
in a different way
than just ones and zeros?
And can we compute in a
way that the computer's
capacity and capability scales
beyond Moore's law and faster?
Well, we're fortunate,
because in May of 1981,
a group of scientists from IBM
and MIT got together,
held a conference on something
called quantum mechanics,
and started to discuss
the theory of using
quantum mechanics for computing,
way ahead of its time.
Richard Feynman is
in that picture,
by the way, in the back right.
The guy taking the picture is
an IBM fellow, Charlie Bennett.
The fundamental principles
for quantum computing
came out of that conference.
We and many others
in the industry
spent the next two to
three decades trying
to build a quantum computer.
10 or 15 years ago, we
had quantum computers
that looked like this--
basically, a research project.
Basically, instead
of transistors, we had built
quantum bits, or qubits--
we had built two of them.
We had taken them down
to almost absolute zero.
We had gotten them, for the
quantum physicists in the room,
to entangle.
And we believed we could compute
for the first time ever using
quantum theory at the bottom of
that physics research project.
Fast forward now
just a few years.
And this was early last year.
This is the world's largest
and most publicly available
quantum computer.
It's 20 qubits.
It has roughly the same
performance, on certain problems,
as that enormous Summit/Sierra
system that I showed you.
It's a prototype, though,
of a commercial system.
A couple of months
ago, we announced this.
This is a commercial, fully
viable quantum computer.
Reliable today at 20 qubits,
we've announced we have 50.
At 50 qubits, again,
you cross over
what's possible with a
traditional computer.
So that's the new roadmap.
The exciting thing is that scaling
a quantum computer in qubits--
and I don't have
time to get into it--
is a new exponential.
In fact, when we reach
100 qubits in that system,
I can prove to you that
it will do calculations
that exceed the number
of atoms on the planet
if they were all transistors.
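
For a sense of the "new exponential" being described here, a minimal sketch: an n-qubit register is described by 2^n complex amplitudes, so the classical bookkeeping doubles with every added qubit. The qubit counts below mirror the ones in the talk.

```python
# A minimal sketch of why adding qubits is "a new exponential": the state
# of an n-qubit register is described by 2**n complex amplitudes.

for n in (20, 50, 100):
    print(f"{n} qubits -> 2^{n} = {2.0 ** n:.3e} amplitudes")

# 20 qubits -> ~1e6 amplitudes, 50 -> ~1e15 (already straining classical
# simulation), 100 -> ~1.3e30, far beyond any classical machine.
```
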
So we are about to enter a whole
new regime, a whole new regime.
And it was thought, up
until very recently,
that there were
only a few problems
that those computers could do
if anybody could ever build one.
So we built one.
And now we look at what are
the kinds of problems we can
do with that kind of machine?
Well, in computing
as we know it today,
there's a set of easy problems--
the programmable things,
the math-- that
we all understand.
There's a set of
really hard problems
that classical
computers struggle with.
As an example, if you want to
model from first principles,
how do I put two or three
atoms together into a molecule
and get the lowest
energy state, which
will be a stable molecule?
That's a tough calculation
for a classic computer.
We have to make all
kinds of approximations.
That quantum
computer can do many
of those kinds of problems, can
solve many of the hard problems
classical computers can't
solve, and in fact, can
do a whole class of things that
classical computers will never,
ever be able to do.
So when you look at
this, you say, wow.
Things like factoring
numbers, which
is the basis of all
encryption on the planet,
are really hard for
classic computers.
These quantum systems
do it in a snap.
Things like simulating
quantum physics itself,
what better way to do it
than on a quantum computer?
Things like finding
new materials,
new drugs with large
molecules, dozens of atoms,
hundreds of atoms,
not two or three,
can only be done on
a quantum computer.
And optimization
routines, whether it's
sorting data or modeling
the financial systems
of the planet, can only
be done at this scale
with these kinds of systems.
And most excitingly,
we have demonstrated
many artificial
intelligence algorithms
will run on this system.
So now we're in a
world where we've
brought AI and compute together
with Summit and Sierra.
We're progressing
those technologies
to bring power down
and performance up.
And we're introducing quantum
computing into the world.
So I hope now you
believe that this is not
like a small increment or
a typical point in time.
We are on the verge of
some incredible things.
And I will end with what many
of the other speakers said.
I can prove to you, from
what we're doing in our labs
and with MIT, that everything
I told you is going to
happen in the computers.
The issues will be all
around the human factors.
We take for granted
when we communicate
human to human or
human to machine.
We know that-- we must know
that that machine is fair.
We must be able to
explain what it's doing.
It must be secure.
It must have ethics.
And I will argue
that in the past when
I built big computers, I at
first could just build them,
ship them.
If it had a reliability
problem, we'd go fix it.
We learned over time we had to
build quality and reliability
into those systems from
day one of development.
If I had time, I
could prove to you that
we must design these
attributes into these big systems
and not think that we can do
it later or paper them on top.
We can build increased
transparency and
ethical behavior
into the guts of these systems.
And Steve, I
was thrilled that you included
this whole domain of ethics
and fairness and policy
in this new school.
I would argue that
physics is fun.
This is where the rubber
really meets the road.
And so I want to congratulate
MIT, Rafael, Anantha,
the whole team, Marty,
the whole team at MIT,
for this wonderful time.
And with the birth
of this new college,
I thought it was
appropriate for us
to give the college a new gift.
And so today I'm announcing
that we are building in our IBM
manufacturing plants
five racks of Summit
that we will deliver to
MIT and to the new college
as a platform for the college.
Thank you very much.
[APPLAUSE]
OK, hi.
I'm Regina Barzilay.
There was some change
in the program.
So I'm not Antonio Torralba.
And I am going to talk about how
AI changes the way we diagnose
and treat diseases.
So I would say this is
kind of a funny topic.
And the reason it is funny
is because today, when
you open a newspaper,
you read all the amazing
things that AI delivers to you.
And there are
breakthroughs all the time.
But when you go to the
hospital, none of us
actually see this AI.
And this is not surprising.
There were a lot of studies
that demonstrated only 5%
of US health care providers
are saying that they are using
AI to help their patients.
And the definition for
AI is pretty fluid.
So it's very important for
me, as a patient, to make sure
that whatever we are
developing in this space
is actually deployable
in the hospitals
and can be used
to help patients.
So exactly a year ago, I was
standing here and talking
at the Quest for Intelligence launch.
And I was showing a system that
we just developed and launched
with MGH which can
read mammograms.
And I'm happy to tell
you that the system was
in production for 13 months.
It read 40,000 mammograms.
And here you can see
actually how is it done.
This is a traditional room
in every hospital, where
after you've taken
your image, there is
a person who sits and reads it.
So now each one of these
images is read by a machine.
And this is my collaborator,
Dr. Connie Lehman,
who looks at the
prediction of the machine
and then signs the report.
So the question that I had--
and it was great to see that
it's working, and it's helping,
and it's doing what humans
are supposed to be doing--
but the exciting part is can
machines actually do what
humans cannot do?
And now I will show you an
example of something like that.
What you can see here are
two images of women who,
at the time when they had
a mammogram, didn't have cancer.
One of these women developed
cancer.
The other one didn't.
Today, none of you and
none of the radiologists
can say which one of these
women will develop cancer
in two years.
But what we know from biology
is that cancer doesn't
grow from today to tomorrow.
It's actually a
very long process,
which makes a lot of
changes in tissue.
So the logical question is,
can you take the machine
and train it on images where
we know the outcome in two years
or in five years, so it
can say what is to come?
And because the machine can
see hundreds of thousands
of images, and because it has
the capacity to really distinguish
between these
details, it was
able to do this
task pretty well.
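
As a rough illustration of this idea (and not the actual MGH system), here is a minimal PyTorch sketch of training an image model against a future outcome label; the tiny network, tensor shapes, and random placeholder data are all assumptions for illustration.

```python
# A minimal sketch: train an image model on mammograms labeled with a
# *future* outcome (e.g., cancer diagnosed within five years).
import torch
import torch.nn as nn

model = nn.Sequential(                           # tiny stand-in for a real CNN
    nn.Conv2d(1, 8, kernel_size=5), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

images = torch.randn(16, 1, 64, 64)              # placeholder mammogram crops
labels = torch.randint(0, 2, (16, 1)).float()    # cancer within 5 years? (0/1)

for _ in range(10):                              # toy training loop
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()

risk = torch.sigmoid(model(images))              # predicted future-cancer risk
```
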
And this week, we're
actually launching
at MGH this new risk model.
And what we've seen is that
if this model places you
in the top 20%, you actually
have a very non-trivial chance
of getting breast cancer.
And now lots of
physicians at MGH are thinking
about how we can design procedures
which would help these women.
And the good news
for breast cancer is that
you can actually do
something to prevent it.
And you can imagine what it
will do for other diseases
like pancreatic cancer.
But diagnosing early is
just one of the issues.
The second big question
is how we actually
cure the disease, correct?
And there are lots and lots
of diseases for which we
don't have a cure now.
You can see that investment
in pharma continues to grow.
The drugs become more
and more expensive.
And there are a lot of
drugs that actually fail,
even in late stages.
So the question
is what can we do?
Can we use technology to help
us design drugs faster?
And this is really
a big question.
Because if you think
about it, when you are
designing a drug, even if
you are designing a small
molecule, which
is the easy case,
it's a combinatorial space.
It's a huge, huge space.
And you are looking in
this huge space just
to find the molecule
with the right profile.
So how are we actually
doing it today?
There has been a lot of
advancement in manufacturing,
so you can do
high-throughput screening.
You can check lots
and lots of molecules
and see if they're
toxic or not toxic,
if they're potent, and so on.
But obviously, you are
bounded by how many molecules
you can test.
Maybe you can do
10,000, or 100,000.
You cannot do hundreds
of millions of molecules.
And that's exactly where our
models come in.
These models are trained, given
the molecular graph,
to predict various properties
that people care about.
And we've demonstrated
that these models can
actually do this pretty well
across both chemical
and biological properties.
But what is more
interesting is actually
to see what these models do.
When these models try
to interpret molecules,
they translate them into some
continuous smooth space, OK?
And the geometry of
the space actually
relates to how good the
properties of the molecule are.
So it opens doors for us to try
something really new, which
goes to the
heart of chemistry.
You can start with
your molecule.
Translate it into this
very nice continuous space.
Optimize it in the
continuous space.
And then generate
a new molecule.
And that's exactly what
we are currently doing.
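
Here is a minimal sketch of that encode, optimize, decode loop, with toy linear layers standing in for the real molecular encoder, decoder, and property predictor; every component and dimension below is an illustrative assumption.

```python
# A minimal sketch of the loop described above: translate a molecule into a
# continuous space, optimize there, and decode a new molecule.
import torch
import torch.nn as nn

encoder = nn.Linear(32, 8)      # toy: molecular fingerprint -> latent point
decoder = nn.Linear(8, 32)      # toy: latent point -> fingerprint
property_net = nn.Linear(8, 1)  # toy: latent point -> predicted property

x = torch.randn(1, 32)                        # starting molecule (toy encoding)
z = encoder(x).detach().requires_grad_(True)  # its point in continuous space

opt = torch.optim.Adam([z], lr=0.1)
for _ in range(100):                          # optimize in the smooth space
    opt.zero_grad()
    loss = -property_net(z).sum()             # maximize the predicted property
    loss.backward()
    opt.step()

new_molecule = decoder(z)                     # generate a new candidate
```
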
And this work is very recent.
It's just half a year old.
And our hope is that when we
achieve the required capacity,
we can totally change the
process of drug design.
And as I told you, I
really care about making sure
that what we are doing
can actually be deployed
and can make a difference.
So we have a consortium of 13
pharmaceutical companies that
not only help us with funding,
but also take our tools,
implement them in practice,
and give us feedback.
And jointly with them, we are
working on drug development.
So this is very exciting.
And let me just finish my talk
by showing you some diseases.
I'm sure all of you know
the names of these diseases,
and for none of them
do we have a cure today.
And almost each one of us
knows a person who was
diagnosed with one
of those diseases.
And to me, the real question
about AI and health care
is whether, in 10 years from now,
maybe in five years from now,
when we are celebrating the
birthday of the college,
we can actually cross some of
these diseases off the list
and find the cures.
Thank you.
[APPLAUSE]
Hello, everyone.
So I'm sure all of you guys
know what wearables are.
And probably many of
you are wearing them.
So I'm going to tell you now
about the move from wearables
to invisibles.
So I work on radio signals.
When I started
as a professor at MIT,
I worked a lot on improving
Wi-Fi, improving cellular,
connecting many, many people
to the internet, et cetera.
But about five
years ago, I started
thinking that these radio
signals are way more powerful
than we are using them today.
Just think with me.
Radio signals
propagate in space.
They traverse walls
and obstacles.
And they reflect
off the human body
because our bodies
are full of water.
And some of these
minute reflections
will come back to us through
walls, through obstacles.
And now, if I have a smart
device that can interpret
these reflections
of wireless signals,
maybe I can start
seeing through walls.
So that was [INAUDIBLE]
when we started.
And somehow, I convinced
my student, let's try.
OK, so let me show you some
of our early experiments.
So this is our early device.
And we're going to put it in the
office adjacent to this office
behind the wall.
And we're gonna
monitor this person.
And this red dot that you
see on the side of the screen
is where the device thinks this
person is standing right now.
So let me play
this video for you.
So as he moves, you can see
the red dot moves with him.
And it's all purely based on
the reflection from his body
through the wall.
And we can track him
pretty accurately.
He has no sensor on his
body, no cell phone, nothing.
He might be oblivious to being
tracked from behind the wall.
It's quite accurate.
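
The physics behind that red dot can be sketched in a few lines: a reflection's round-trip delay gives the range to the reflector. The real device combines specialized radios with learned models; the delay value below is an illustrative assumption.

```python
# A minimal sketch of RF ranging: distance = speed of light * delay / 2,
# since the reflection travels out and back.

C = 3.0e8  # speed of light, m/s

def range_from_delay(round_trip_seconds: float) -> float:
    """Distance to a reflector, given the round-trip time of its echo."""
    return C * round_trip_seconds / 2

print(range_from_delay(40e-9))  # a 40 ns echo puts the reflector ~6 m away
```
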
So this is actually
from a few years ago.
And over the past
few years, I've
been working on this
with my students
here at MIT, improving
the technology further.
And I want to show you some
of our most recent results.
In particular, when
you look at this,
I mean, you see the red dot.
And you know he's there at
that particular location.
But the red dot doesn't
tell you is he standing,
is he sitting, like what
is he doing in there?
And when he moves, you see
the red dot slides with him.
But you don't know
whether he took
a step with his right foot,
his left foot, you don't know.
So let me show you
our most recent result
from just a few months ago.
So now the big frame is
what the wireless device
sees from behind the wall.
And the small frame is the
camera inside the room.
And as you can
see, we are getting
the full skeleton of the
people from behind the wall.
Let me play this.
You see when he sits, the
device knows that he's sitting,
got his full skeleton.
It knows how people are moving.
There are multiple
people who are moving.
And remember, all of this
is using radio signal reflections
off the human body from behind
the wall, without any wearables.
Now how do we do this?
Like, how can we make this work?
So there are two things.
We have advanced radios
and machine learning.
So I tell people the radio
is like your ear.
For our device, this is
its ear.
It has to be very,
very good and very
sensitive to sense
these minute reflections
from behind the wall.
But what is more important
is the brain, of course.
And that is the machine
learning technology.
So you hear a lot about machine
learning neural networks.
They operate on images,
on text data, on audio.
But what we are doing here is
we are making neural networks
operate on radio signals
to be able to do something,
not just get as good as a
human, because we all cannot see
through walls, but maybe to
do something that we cannot do
today.
OK, so what else can we monitor
using these wireless signals
without any wearables?
Sleep stages.
So perhaps you guys know
that when we go to sleep,
our brainwaves change.
And we enter different stages,
awake, light sleep, deep sleep,
rapid eye movement, or REM.
Now in the US, one
in every three people
has sleep problems.
So being able to understand
sleep and improve sleep
is very important.
But actually, sleep stages are
not just important for sleep.
They are important for
a variety of diseases.
Just to give you an
example, depression.
So do you guys know that one
of the signs of depression
is that REM, this rapid
eye movement stage,
happens too early, very
early during the night?
That is one
of the signs that
can occur with depression.
So imagine if you can monitor
sleep stages every night
in the home very
easily, then perhaps we
can tell when someone is
falling into depression even
without them realizing it, OK?
So unfortunately, today if you
want to monitor sleep stages,
you send your patient
to the sleep lab.
They put these
electrodes on their head.
It's not really a
happy experience.
Probably, you would do it one
night, two nights maximum.
But you don't want to
live this way every night.
So let me show you
what we can do.
So this is our device.
It transmits a very
low-power wireless signal.
It analyzes these reflections
using machine learning
and spits out the sleep
stages throughout the night.
It would know when this
person is in REM, which
is the stage in which we dream.
What else can we do?
So here's this guy sitting
like you guys and reading.
We can get his breathing.
These signals are nothing
but his inhales, exhales.
And we ask him to
hold his breath.
And you can see the signal
stays at a steady level
because he exhaled.
He did not inhale.
Now I want to zoom in on
these signals further.
So this is the same signal,
the breathing signal.
These are the inhales.
These are the exhales.
And the first time we saw
the signal, it was like,
oh, this is-- there is
some noise on these blips.
But it turned out,
actually, this is not noise.
If you zoom in, these are
his heartbeats beat by beat.
And again, remember
without any wearables,
purely by analyzing the wireless
signals in the environment.
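
A minimal signal-processing sketch of that separation: breathing and heartbeat occupy different frequency bands of the reflected signal, so band-pass filters can pull them apart. The synthetic signal and band edges below are illustrative assumptions, not the actual system.

```python
# A minimal sketch: separate breathing (~0.1-0.5 Hz) from heartbeat
# (~0.8-2 Hz) components of a reflected signal with band-pass filters.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 50.0                                    # sample rate, Hz
t = np.arange(0, 30, 1 / fs)                 # 30 seconds of samples
signal = np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.sin(2 * np.pi * 1.2 * t)

def bandpass(x, lo, hi, fs, order=3):
    b, a = butter(order, [lo, hi], btype="band", fs=fs)
    return filtfilt(b, a, x)

breathing = bandpass(signal, 0.1, 0.5, fs)   # ~15 breaths/min component
heartbeat = bandpass(signal, 0.8, 2.0, fs)   # ~72 beats/min component
```
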
So what application does
this technology have?
So of course, there
are many applications.
But the application that I'm
interested in is health care.
So when we started
working on this,
we got a lot of emails and
contacts from doctors saying,
can you monitor my
patients at home?
And you can think about
it like this: for example,
doctors discharge
patients from hospitals.
And when the patient
goes home, they
have no idea-- is he breathing,
what is his heart rate,
his vital signs, is he
moving, is he in bed?
What's going on?
They don't know.
Imagine discharging every
patient with a device like this
so that you can continue their
health monitoring at home.
This is also very important
for our aging population.
We know that for older
people, chronic diseases
are a very important problem.
But we also know that many
hospitalizations and chronic
diseases are
avoidable if you can
detect the problem early on.
This includes heart
problems, pulmonary problems,
things that are related
to UTIs, kidney problems.
There are even Alzheimer's
issues, depression,
all of those things.
So today, if you want to
monitor patients at home,
what do you have?
Something like this.
So if you want to
monitor breathing,
you put the nasal probe
or chest band on them.
For people who have
Parkinson's, you
ask them to wear sensors, the
accelerometer on their limbs,
and move like that.
For sleep, I told
you you have all
of these sensors on their head.
We are changing
this image to this--
a smart Wi-Fi-like box that sits
in the background of the home
and monitors breathing,
heartbeat, sleep, falls, gait,
mobility, interaction with
caregivers, all of that
purely by analyzing the
surrounding wireless
signals, without asking the
patient to wear a single sensor,
or to write diaries,
or to change anything
about their usual schedule.
We have deployed so far more
than 200 devices in homes
with patients in different
therapeutic areas.
We are working with doctors
in Parkinson's disease,
in Alzheimer's disease,
in pulmonary diseases,
and in depression.
And we are working together
with our doctor colleagues,
with pharmaceutical companies,
with the health care system,
to try to bring
these technologies
and these advancements
to health care.
Thank you.
[APPLAUSE]

---

### Computing the Future: Setting New Directions (Part 2)
URL: https://www.youtube.com/watch?v=Bq1y57pVya4

Language: en

Hello.
Thank you for coming today.
My name is Antonio Torralba.
I'm a professor of electrical
engineering and computer
science at MIT.
And I'm also the director of
the MIT-IBM Watson AI Lab,
which is an amazing example
of fruitful collaboration
between academia and industry.
And I'm also the director of
the MIT Quest for Intelligence.
The Quest for Intelligence
is an MIT-wide effort
trying to understand,
what is intelligence,
and answer fundamental questions
about human and machine
intelligence.
My job now is to introduce
the next three speakers--
Josh Tenenbaum,
Dimitris Bertsimas,
and Aleksander Madry.
All of them are my colleagues,
and they are very involved with
the Quest for Intelligence.
And they are all superstars in
their own areas of research,
working with amazing students.
So please join me in
welcoming to the stage,
Josh Tenenbaum.
[APPLAUSE]
So I want to ask this
question-- why do we
have today all these AI
technologies, but no real AI?
So what I mean is,
we have machines
that do things we used to
think only humans could do,
but we don't have anything like
the flexible, general-purpose
intelligence that
each of you can
use to do every one of
these things for yourself.
So why not?
What's the gap?
Well, the neural networks
and deep learning
driving today's AI are based
on simple equations that
were derived in the
1970s and 1980s,
to capture the most basic
animal learning processes.
Think Pavlov's dogs,
or a rat in a maze.
They're finding associations,
recognizing patterns, but not
understanding.
And that means they're
at most one small step
towards real machine
intelligence.
Real intelligence is not
just recognizing patterns,
but modeling the world--
explaining and
understanding what you see,
imagining things that you
could see but haven't seen yet.
Solving problems, making plans--
to make those things real.
And then building new models as
you learn more about the world.
Now, the goal of
my work is to try
to write down these
kinds of equations--
to capture this kind
of human learning,
and then to use this to build
more human-like forms of AI.
And I'm starting with trying
to understand how children
learn and think, because
they are the original model
builders.
And we are still far from having
an AI with the intelligence
of even a 1-and-1/2-year-old.
But imagine if we
could get there.
Imagine if we could
build a machine that
grows into intelligence
the way a person does--
that starts like a baby
and learns like a child.
This may not be our
only route to real AI,
but it could be our best bet.
Because think about
it-- a human child
is the only known scaling
route in the universe.
It's the only system we know
that reliably, demonstrably
grows into human intelligence
starting from much less.
And we know that even
small steps towards this
could be big.
So we're starting with the
most basic common sense, that's
in every 1 and 1/2
year old, but no AI.
The intuitive physics that you
see in a child like this one--
stacking up blocks, playing
with toys.
Or the intuitive psychology that
you see in a kid like this one
here-- another 1
and 1/2 year old--
that lets them figure out what
somebody else is doing and why.
To read their mind,
in a sense, even
for a complex action like this,
that you've never seen before.
Think about it.
Watch this kid.
These kids are just 1 and 1/2.
If we could build robots with--
[APPLAUSE]
--this kind of intelligence,
with this skill and helpfulness
around the house, with
this kind of common sense,
that would be amazing.
So to do this
we've had to invent
new kinds of AI-programming
languages, known
as probabilistic programs.
These build on but go far
beyond the deep learning
that's driving today's AI.
And they allow us to bring
together the best insights
from multiple eras
of the field--
ideas that may not
have a simple home
in today's neural networks--
all into a unified framework.
So this means, for
example, symbolic languages
for representing and reasoning
with abstract knowledge,
or probabilistic
inference for reasoning
about cause and effect
from sparse uncertain data.
Probabilistic
programs may be what
you're going to be hearing
about in the next few years,
if you haven't yet.
Just to give an
example from research--
just in the last
year, in 2018, we
and many of our collaborators
have used these tools
to give that kind of
intuitive physics to robots--
to allow robots to
stack up blocks,
and even play the game "Jenga."
To be able to imagine how
to use new tools, even ones
they've never seen before.
To plan complex
actions, to make sushi--
or at least the rice--
to pour ice water,
and even to learn to walk.
The next step, the next
challenge is model learning.
How could a child, or robot
build an intuitive physics
model for themselves?
Learning these
probabilistic programs
means that your learning
algorithm has to be
a program learning program--
an algorithm that
writes algorithms.
It's the child as coder.
This is a much harder
form of learning
than in today's neural networks.
But children do it,
so machines can too.
Now, we've made a small
step towards this,
with a system that can
learn new programs, that
can capture simple visual
concepts-- such as a new tool,
or a new handwritten character.
So look at these here.
You can learn thousands of new
characters and new alphabets
from just a single example each.
You don't need 1,000 examples
to learn a single new concept,
like today's
deep-learning systems.
Now, our Bayesian
program-learning system
can learn like you do.
It uses probabilistic programs
to model the causal processes
that put ink on the page--
programs for writing,
and action, and drawing.
And then it runs these
programs backwards,
to learn the program
most likely to have
produced the character you see.
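
A toy sketch of that "run the program backwards" idea: sample candidate generative programs (here, trivial stroke lists), score each by how well it explains the observed character, and keep the most likely. This is an illustrative assumption, far simpler than the real Bayesian program-learning system.

```python
# A toy sketch of inverting a generative program: search for the stroke
# program most likely to have produced the observed character.
import random

def sample_program():
    """Generative model: a character is 1-3 strokes, each a (start, end) pair."""
    n = random.randint(1, 3)
    return [(random.random(), random.random()) for _ in range(n)]

def likelihood(program, observed):
    """Toy score: higher when the program's strokes match the observation."""
    if len(program) != len(observed):
        return 0.0
    error = sum(abs(a - x) + abs(b - y)
                for (a, b), (x, y) in zip(program, observed))
    return 1.0 / (1e-6 + error)

observed = [(0.2, 0.8), (0.5, 0.5)]           # the character to explain
candidates = [sample_program() for _ in range(10_000)]
best = max(candidates, key=lambda p: likelihood(p, observed))
```
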
This lets us generalize new
concepts from a single example,
and even pass a simple kind
of Turing test-- like this.
We can show a new concept to
both humans and our machines,
and ask them to
draw new instances,
to imagine new examples.
Now, can you tell which
are the people's and which
are the machine's drawings
before I showed you?
Try here-- see if you can.
My bet is that most
of you couldn't.
So we're passing this
very simple Turing test.
It's a small step,
but it does scale.
This idea lets us learn programs
that can describe the shape
of a chair, that can answer
questions about pictures,
and that can even learn a new
video game 1,000 times faster
than today's deep-reinforcement
learning systems--
and almost as fast
as a person can.
Will this be the idea that
finally delivers on AI's dream,
to build machines that
learn like children?
Probably not yet.
But it may be the
next small step.
It may be the next form
of deeper learning.
So stay tuned.
Looking ahead, once we've
reached this first moonshot
stage in our program here-- the
18-month-old stage of common
sense--
stage two is to learn
the most important thing
that every child learns
between age 1 and 1/2 and 3,
and that's language.
And then stage three
is using language
to learn everything else--
to access the full
sweep of human knowledge
that builds culturally
across generations,
and across societies.
And that puts you in a
position to contribute
new knowledge yourself.
If we could build
AI like this, this
would be an AI that
lives in a human world--
that humans could talk to,
teach, and trust the way
we've always done
with each other--
even with people that we're just
meeting for the first time.
This could be AI that
makes us actually
truly smarter and better off.
Thanks.
[APPLAUSE]
So in life there are some
very significant moments--
birth of a child,
getting married.
So in the life of
institutions there
aren't too many such moments.
I have been 34
years at MIT, and I
believe today is such a moment.
So Steve and Rafael, thank you.
So I would like to talk to
you about interpretable AI,
and to motivate it,
suppose you consider
a driverless car.
And this driverless car
is involved in an accident
with loss of life.
Who is at fault, the driver,
the passenger, or the algorithm?
And most importantly,
can society
tolerate not understanding
this question?
For another, similar question--
closer to us-- let's suppose
you have a student who is not
selected for freshman
admission, even
though he might be
the valedictorian
of his high school.
response for the algorithm
to say the algorithm
made the decision?
I suspect not.
So in my view, and perhaps
in the view of others,
interpretability matters.
However, existing
methods achieve
either high-quality performance
or interpretability, but not both.
For example, neural networks,
which you have heard lots about,
and other methods that
have exotic names,
like random forests
and boosted methods,
have high-quality performance
but low interpretability.
More classical methods,
like regression,
developed 200 years ago,
and classification
and regression trees,
have high
interpretability but not
as high-quality performance.
In my group, we aspired,
and to some degree
succeeded, to
develop new methods that
have both characteristics--
both interpretability
and high-quality performance.
One of my heroes, Leo
Breiman, who passed away
about 13 years ago, developed
one of the most interpretable
methods--
classification and
regression trees--
CART for short.
And he commented that on
interpretability, trees
rate an A plus.
However, on performance,
they were not as strong.
So we developed in
my group some methods
that are like trees, except
we use the tremendous progress
that my field,
optimization, has made,
which allows both
interpretability
and performance.
And in addition, it allows us to
partition the space-- let's
say, in a classification
setting-- into regions
in which we can make
a decision and classify
appropriately.
Just to illustrate an example--
this is actually an example
developed with Dana Farber
researchers, on predicting the
mortality for a patient taking
chemotherapy.
I have a personal connection
to this application.
My father was diagnosed about 12
years ago with gastric cancer.
And at the very
end, the doctors,
with very good intentions,
were prescribing chemotherapy--
even though it was very
clear, to me at least,
that the end was very near.
So using this new method,
we have developed algorithms
that are state of the art
in terms of performance,
that predict mortality.
But most importantly,
they are interpretable.
And at least in my
experience in the last decade
or so with medical doctors,
interpretability is a must.
That is, if you say to a
patient, stop chemotherapy,
a natural question is,
why do we believe that?
And if you follow
the algorithm, it
says why, in terms at least
a doctor understands:
certain enzymes are elevated,
the change in
weight is elevated.
You can explain the reasons why.
And that's the claim,
just to illustrate,
regarding the combination of
interpretability and performance.
So on the horizontal axis
is the depth of the tree--
the number of questions we ask.
And on the vertical axis is
the accuracy of the method.
The blue-- this is like the tree
we have seen for chemotherapy.
It has about the same
performance-- slightly better,
in fact--
as a state-of-the-art
method called random forests.
And the green-- this is a tree
similar to that, except that
instead of asking one question,
we ask multiple questions
at a time.
It has similar performance
to another state-of-the-art
method, boosted trees.
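
A minimal sketch of that depth-versus-accuracy comparison, using standard CART-style trees and a stock scikit-learn dataset as stand-ins for the optimal trees and the medical data in the talk.

```python
# A minimal sketch: sweep tree depth (the horizontal axis described above)
# and compare a single interpretable tree against a random forest.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

for depth in (2, 4, 6, 8):
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0)
    acc = cross_val_score(tree, X, y, cv=5).mean()
    print(f"depth {depth}: tree accuracy {acc:.3f}")

forest = RandomForestClassifier(n_estimators=200, random_state=0)
print("random forest:", cross_val_score(forest, X, y, cv=5).mean())
```
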
So this enables the
application of these methods
for assessing
mortality, morbidity,
and many other
questions in medicine.
This is an application we
developed with Mass General
Hospital-- currently in use at
the trauma department there--
in which, for an incoming patient
in the emergency department,
before any surgery,
just on data, we make
predictions about mortality,
specific morbidities,
or any morbidity,
in ways that are understandable.
They are delivered like the
application on the right.
And as I participate in
discussions with doctors
in the morning rounds,
I have seen that it
has changed the dialogue.
It has discovered things that
humans typically don't fully
understand, or expect.
Because many human
doctors have let's
say, thousands, maybe 2,000
experiences in their lives.
This is based on about
a million experiences,
from hospitals all
over the country.
Clearly, the [INAUDIBLE]
data, in this case,
beats the human experience.
Of course, this applies
not only to surgery.
In many other applications,
we have seen similar results--
for cancer, cardiovascular
disease, type 2 diabetes, liver
and kidney transplantations.
On pharmaceutical research,
the clinical trials
typically analyze the mean
of the treatment effect.
However, subgroups might exhibit
a very different response.
So these trees that we have
developed-- optimal trees--
we have applied them.
And they produce very
interpretable subgroups.
And in this case,
identifying subgroups
of exceptional responders
could guide the design
and inclusion criteria, could
find [INAUDIBLE] in failed
trials, and could identify
opportunities to relabel
existing drugs.
In summary, the message I
would like you to remember--
and I believe other
speakers have spoken to this--
is that interpretability matters.
And relative to our discussions
on ethics, I would say,
interpretability helps
the ethics discussion,
because at least we understand
what the algorithm is doing.
It's not just a black box.
On that note, thank you.
[APPLAUSE]
Welcome, everyone.
I am Aleksander Madry.
And I will talk about AI--
surprise, surprise.
But, actually, what I want to
talk about is the aspect of AI
that I think we need to
get right, to make sure
this is actually a
successful technology.
So I guess it's fair to
say by now that we all
are very excited about what
AI seems to be able to do.
Like, things that we thought
were impossible merely 10
years ago now seem to be
completely within our grasp.
And this is exciting.
So we are thinking
of all the ways
in which AI can change the way
we work, the way we travel,
the way we play.
So I think we can just say,
we are all ready for AI.
We are all excited about this.
I am excited.
But as excited as I am, I
can't help also being worried.
Because I think that there
is a question that we should
be asking ourselves-- a
question that actually
drives most of my
recent research.
And this question is, even as
much as we are ready for AI,
is AI ready for us?
What do I mean by that?
Let me demonstrate.
So what you see on the
right is a beautiful pig.
To us humans, it is a pig.
classifier will view it
as a pig as well.
So far so good.
What's interesting here?
Well, it gets interesting
if I add to this image
a little bit of a
carefully chosen noise.
What I will get is the
picture on the right--
which again, to us looks
like a beautiful piggy.
However, to the state-of-the-art
classifier this is actually
an airplane.
So there are two lessons here.
The first lesson is,
if you ever doubted,
AI is a magical technology.
It can make pigs fly.
[LAUGHTER]
That's important.
The second thing
is, that is not what
you would expect to happen.
So what's going on?
Well, your first reaction
might be, is this for real?
Because, indeed, I told you
that the noise I needed to add
has to be very carefully chosen.
And maybe in the
real world, I can't
have fine-grained enough control
over what the machine sees
to really trigger it.
And that's actually what
many people initially
thought, until a bunch of MIT
undergrads proved them wrong.
So what they did-- they
3D-printed a turtle
that looks like a turtle to us.
But, essentially, it
classifies as a rifle--
from all angles and zooms.
So if you ever doubted it,
this so-called adversarial
perturbation is a real thing.
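
One simple way such noise can be chosen is the fast gradient sign method; the talk does not name a specific attack, so the sketch below, with its toy classifier and random image, is purely illustrative.

```python
# A minimal FGSM-style sketch: nudge each pixel in the direction that
# increases the classifier's loss on the true label.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy classifier
image = torch.rand(1, 3, 32, 32, requires_grad=True)             # "pig" stand-in
true_label = torch.tensor([0])

loss = nn.functional.cross_entropy(model(image), true_label)
loss.backward()

epsilon = 0.01                                     # small perturbation budget
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)
print(model(adversarial).argmax(dim=1))            # may no longer be class 0
```
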
So, well, should we
worry about that?
And the answer is, yes.
And, why?
Well, many reasons.
The first and the most obvious
one is security context.
If I can make your system see
something different than I see,
that's how many
security breaches start.
So one of the promising
uses of this technology
could be in facial recognition.
And here, we see a
state-of-the-art facial
recognition system that
correctly recognizes the person
in the picture--
despite the
funny glasses.
However, when this person
puts on these even
funnier glasses,
something magical happens.
The system believes this is a
completely different person.
Think of the implications.
But yeah, this is security.
But what about safety?
Sometimes we don't
really believe
there are some bad guys
who want to get us.
But still, there are systems
like that in which we really
want to be sure that there
are no inputs that cause some
undesirable behavior.
And true enough, this system
has a very undesirable behavior
if you just perturb the picture
in the right way.
So this is about
security and safety.
This is about getting
the decision right.
But sometimes it's
not only about getting
the correct decision.
It's also about
getting the correct decision
for the right reasons.
So for instance, one of the
promising applications of AI
was to use it in hiring--
to make decisions about who
to hire and who not to hire.
And the idea was
that, well, this
would be purely data driven.
So the outcomes will clearly
be impartial and optimal.
So that was the dream.
What happened in reality?
Well, what happened in reality
is that using these solutions
actually reinforced all the
biases and all the inequities
that we wanted to avoid
in the first place.
And my most amusing example
here is a resume screening tool
that came to the conclusion that
the two most important factors
predicting job performance
are being named Jared
and playing lacrosse
in high school.
[LAUGHTER]
Yeah, so things are not great.
And you know, definitely you
should be careful about that.
But one more aspect
I want to talk about
is about, who will this
kind of AI benefit--
at least the AI technology
that we have now?
So think about it--
what do we need
now to deploy an AI solution?
We need a lot of data.
We need a lot of compute.
Also, we need an education
that is actually
quite hard to get.
So MIT and other universities
are doing what we can.
But still, there are
not that many people
who have the right
training to use [INAUDIBLE]
So is this something,
really, that an average man
can use, understand, and apply?
I don't think so.
So I guess the conclusion here
is that AI technology that we
have now, it has to change.
It has to evolve.
So on one hand, it
has to be secure.
It should be tamper resistant.
It should not be something
that is the weakest
link in our current system
from a security point of view.
It should be reliable, so
we can actually confidently
deploy it in contexts
[INAUDIBLE] human lives matter.
It also should be equitable.
It should be mindful
of the societal impact
of the decision it
makes, and make sure
that it's consistent
with our values.
Finally, it needs
to be accessible.
So, essentially, even people
without the specific training
and these hard-to-get
resources can actually
fully take advantage of this.
So the way I like
to phrase it is
that, essentially, what
we need is AI 2.0.
Which means we need to look at
the AI 1.0 that we have now--
the proof-of-concept AI--
think about all these
aspects here, and
figure out how to fix them.
And only then can AI 2.0
actually be deployed.
So essentially trying to
come up with this AI 2.0
is what my group spends
most of its time on.
And there are two lessons
that we have so far.
So one lesson is that
many people
think of pursuing each of
these goals in isolation.
But it actually turns out that
if you elect to go for all four
of them, you get some
very nice synergies that you can,
and should,
take advantage of.
The second lesson
is that, actually,
even if your goal is just
achieving these four
properties, you might end
up with models that are also
better from the point of view
of more traditional properties.
In particular, the models
that you train to be robust
actually also
turn out to come up
with better
representations of the data.
So let me demonstrate.
So what you see on the left
is just a picture of a dog.
That is correctly
classified as a dog.
So so far, so good.
But it's sometimes useful
to look under the hood,
and to see why the model
decided to call it a dog.
And, essentially, one
natural way to do it
is to create a heat map--
where you look at every pixel,
and you see how much influence
that particular pixel
had on the final decision.
So if you do it to
a standard model--
the AI 1.0 model--
you will get a heat
map like this, which
is only mildly informative.
However, if you do
it to a robust model,
you will get things like that.
Where, essentially,
suddenly what
was driving the
decision of the model
seems to be much more compatible
with what we humans would
think of as important.
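
A minimal sketch of one common way to build such a heat map: take the gradient of the winning class score with respect to the input pixels and use its magnitude as each pixel's influence. The toy model and image below are illustrative assumptions.

```python
# A minimal gradient-saliency sketch: how much does each pixel influence
# the winning class score?
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy classifier
image = torch.rand(1, 3, 32, 32, requires_grad=True)             # "dog" stand-in

scores = model(image)
scores[0, scores.argmax()].backward()             # winning class score -> pixels

heat_map = image.grad.abs().sum(dim=1).squeeze()  # (32, 32) influence map
```
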
So the goal is AI 2.0.
And this is, of
course, a quest that
goes beyond any
single research group.
You need to get expertise from
many different domains that
also transcend CS.
So in particular,
[INAUDIBLE] computing,
I think, is greatly poised to
tackle exactly this challenge.
So there's a great many
of my colleagues here
that think about these
questions and share this vision.
These are just some of them
that I've talked to so far,
but there are many, many more.
But this is more than just
about faculty thinking
about this stuff. I think the
most important mission here
is to educate the next
generation of engineers
and researchers, that
understand these issues
and know how to deal with them.
And that's what we are
doing at MIT as well.
I believe that this is my group.
And I believe that
in my group there
will be many of the future
leaders in this field.
But there will be
many, many more.
And, essentially,
the goal here is
to get AI that actually is ready
for real-world deployment--
the AI that's
actually human ready.
Thank you.
[APPLAUSE]

---

### Creating a College: A Conversation
URL: https://www.youtube.com/watch?v=K1C2f5kpi-E

Transcript not available

---

### How the Enlightenment Ends: A Conversation
URL: https://www.youtube.com/watch?v=nTyIeMuUavo

Language: en

People want to take
their seats, we'll--
[APPLAUSE]
Well, thank you.
Now we're here for the
non-technical part of the day.
Dr. Kissinger, you're
the only person
I know who got interested
in AI in your mid 90s.
Let's start there.
How do you get interested
in this subject?
I want to apologize
to this group
because I am an argument
against universal suffrage
when it deals with computers.
at a conference
where somebody, they had
artificial intelligence
on the program.
And I thought it might
be a good occasion
to catch up on my sleep.
And I was wandering
out of the conference
when Eric Schmidt said
to me at the door,
this might be an
interesting subject.
So I listened to a
presentation about the creation
of a computer that would
win at the game of Go.
And it would teach itself by
playing games against itself.
And the speaker was
confident that he
could create this in a couple
of years and go on from there.
So to me, the idea of
a self-teaching machine
that would achieve intellectual
dominance over an established
field had significant
historic implications.
And so I asked the
speaker afterwards,
when are we going to become
idiots to the machine
that you're creating,
when is it going
to achieve a level of
intellectual dominance?
And I started
reflecting on the issue.
So I got in touch with Eric
Schmidt and Dan [INAUDIBLE].
And they'd started a series
of conferences, to which they
invited me, in which the
topic of the evolution
of artificial intelligence
was being discussed.
And I gave a dinner.
There was a conference
for AI in New York.
I invited eight of
their participants
plus eight what I
called civilians
who know nothing about
AI to see whether we
could have an exchange.
It was a total
failure, because I
asked the AI people where
the edge of their discoveries was.
But they weren't
very eager to say
that in front of
their colleagues,
since they were aiming
to follow Google.
And the civilians were thinking
about government regulation
and how to avoid it.
And I wasn't particularly
interested in regulating it.
I was interested in
understanding it.
So over a period of
nearly three years now,
Eric and Dan and I,
together with people
that we invite periodically,
have been discussing
aspects of the problem and its
impact on the political world,
on the evolution of strategy,
and on the evolution of history
as we could discover it.
And in the course of these, I
wrote an article about the concerns
that I have about
the topic: why
I think it is a
fundamental topic
if one wants to understand
the likely evolution,
and above all if one
wants to master it;
what responsibility it
brings with it to those who
engage in it; and
how to bridge the gap
between the
extraordinary capacity
of the scientific field
to make advances in it
and the lacking consciousness
of the political and social
science and humanistic world
about the implications of what
is passively changing.
Let me pick up there,
because I want to pick up
with your Atlantic article.
There's sort of a
central thesis in it.
Let me read it and get
you to comment on it.
You wrote that "the
technological advance that most
altered the course
of modern history
was the invention
of the printing
press in the 15th
century, which allowed
the search for
empirical knowledge
to supplant liturgical
doctrine and the age of reason
to gradually supersede
the age of religion.
But this age of reason
is now being challenged
by an even more sweeping
technological revolution--
the birth of a world that
relies on artificially
intelligent machines powered
by data and algorithms,
but ungoverned by ethical
or philosophical norms."
What you're really
asking there is,
does the AI revolution
presage a new enlightenment
or a new Dark Age?
We don't know.
The capacities of
artificial intelligence
for suppressing free
expression and for guiding
preferred thoughts are enormous.
The necessity, however, of
understanding these properties
and bringing them
into some relationship
is equally if
not more enormous.
Right now, we don't
have a theory
of how to relate the
multiplication of choices that
arises as a result of
artificial intelligence
to ethical criteria,
or even of how to define
the ethical criteria that
apply to these fields.
But at a moment when most parts
of the world are in upheaval,
and when the direction
of the upheavals
and how to relate
the various aspects to each other
are key issues, it is imperative
for artificial intelligence--
if it plays any role in the
process as it will and must--
to understand those questions.
So we're here at one of
the world's great research
education institutions.
We know how much
the Enlightenment
changed education.
How do you think AI has
to change education,
both essentially the mix of
education between humanities,
philosophy, technology?
In the present world, the
AI technical knowledge
is far ahead of the political,
social, and other applications.
I have a number of friends
who are authors or historians.
And when I meet with them,
I tell them my concerns.
And they don't really believe
that AI is a fundamental shift
in human consciousness.
But I think it is
inevitable-- if one engages
in intellectual exercises
where the result
is totally unexpected, which
means that they have
no criteria for the
process-- that the impact of AI
will be of historic consequence.
And we have to
understand it first.
How was it possible, with
the Alpha Zero machine,
that by permitting
it to play against
itself for 24 hours,
it came up with a style
of chess that had never
been played before in the 1,500
years of established chess?
What was the thought process,
all the intellectual practice
that got there?
And what are the implications?
Because in every
other respect, it
was following the
rules that had been
established for the individual
moves you could make.
But the outcome
was a game of chess
that had not been seen
before in human history.
So we need to
understand that process.
Dr. Kissinger, if you were
Secretary of State Kissinger
today, would you have an
assistant secretary for AI?
That is, you spent so much
of your time negotiating arms
control to limit and control
the spread of nuclear weapons.
Do we need an AI, basically,
control agreement with Russia,
with China?
Do you see that, is
there a parallel there?
Well, arms control was
importantly an issue
of numbers.
The theory was and it
was basically correct
that if the two sides
could limit the numbers
and if they could
share that information
and if it was inspectable--
and all of it was at
that time possible--
arms control negotiations
might stabilize the situation.
None of this applies
to AI as it now exists,
because the transparency that
was essential for arms control
would be very hard to establish.
It's my understanding
that modern weapons can now
be developed in such a way
that a transparent perception
of them, even with the help of AI,
may very well be very difficult.
The speed of cyber weapons
would negate some of the control
systems that had been
established in the period
that I knew.
We used to think
that as both sides learned more
about each other's weapons,
they could build restraint
into their activities.
But with AI as it now stands,
the other side's ignorance
is one of your best weapons.
So sharing
information, it's going
to be much more complicated.
On the other hand,
since it is also
possible to multiply
the types of weapons,
the kind of information
that needs to be shared
must be carefully studied.
But some criteria
for it have to
be established, or
the world is inherently
at the edge of an
explosion, because with all
the reduction in time and
the increase in speed,
preemption becomes an enormous
temptation, complicated
by the fact that you
are not clear about what
you want to preempt.
All I'm trying to make clear
is that when I was at Harvard,
I spent a lot of time with MIT
professors who were studying
the problem of arms control.
But almost all of the
premises we developed then
will be overturned by AI or
will have to be modified.
And I don't know any systematic
work that is being done now.
So when I listened to
the previous discussion,
there were so many fields,
just out of that particular
question, that would come to mind.
What comes through,
Henry, from your remarks
and from the
Atlantic essay, is I
think we're at a
unique intersection
that our species has
never stood at before.
That is, in 1945,
we entered a world
where one country post-Hiroshima
could kill all of us.
I think it's not impossible
to project out in the future
a world where one person
can kill all of us,
but at the same
time where all of us
could actually fix everything,
that these same powers actually
are creating a world,
yes, where one of us
could do great harm to
all of us, or all of us
could actually feed,
house, clothe, educate,
improve the health care of
every person on the planet
if we put our minds to it.
And I would say that means
we've never been more godlike
as a species than we are today.
But if each individual is
going to be that godlike,
then being grounded
in norms and values,
philosophical constructs, at a
minimum the simple golden rule
is going to be essential.
But we've actually never
lived in a world where
every single person
has to embrace
sustainable values.
Up to now, the idealistic
notion of world peace
assumed that at
some point, there
would be a world order which
every country and human being
would follow, and which
is then implemented
by some kind of
universal consent.
And it is an attractive theory.
It has never been
achieved because
the cultural perceptions of
some of the basic values that
affect the issue of peace
and the issue of cooperation
have never been
sufficiently harmonized
and because the world
did not yet have
a uniform political system.
Now we have a world system
that is sort of global,
but it is composed of
different cultural precepts.
So when you look at the issue
between us, say, and China,
it's partly the technical
ability to match each other.
But it's also partly
a different conception
of how order is achieved.
So one of the questions
that will arise inevitably
is, is it possible
for two or three
systems of artificial
intelligence to coexist,
and how does one do that,
and how does one define it?
Because I think it will
be more immediate issue
than a global artificial system.
One of the things my friend
Larry Diamond, the democracy
expert at Stanford,
has written about,
Henry, is the democracy
recession we're going through.
Democracy has been rolled
back in a lot of places.
And it's hard not to notice the
connection with the rise of AI.
You've got governments in
China and Egypt, Turkey, Saudi
Arabia, Iran, Russia,
Hungary all
perfecting these systems of
control using AI.
Is AI going to be the
death of democracy?
Well, AI makes it technically
easier and even technically
possible to supervise
your population
and to inform a
central authority
about the conduct
of the population
and to do it instantaneously.
So then AI permits a type
of political campaigning
which destroys or limits the
idea that separate ideas are
in conflict with each other,
by working towards a consensus
idea that is mechanically
easier to distribute.
And so it becomes more
difficult to develop
the kind of elite values
which, to some extent,
have to exist to make any
democracy work, in order
to have a standard of conduct.
That is certainly one of the
challenges of our period.
On the other hand,
there are many arguments
for why permitting a
technical system to dominate
every aspect of a
country's, a society's, life
gradually leads to a
deadening of values
and therefore must be avoided.
So how to reconcile the
extraordinary technical ability
to universalize good ideas,
how to keep them from being
dominated by the dangerous
ideas, and how to know
what the central ideas are--
these are all challenges.
And from what I've seen of
artificial intelligence,
right now the
technologists are way ahead
of the humanists
and philosophers.
And we have to find a way of
closing this gap before,
with the best
intentions in the world,
technology creates
capabilities that
can no longer be brought into a
context with which humanity,
as the West conceives it,
can coexist.
Some of the futurists
who write about AI
present us with a world
where manual labor will not
be necessary.
Machines and robots
will do all that.
And we'll all be able
to elevate and explore
our greatest human attributes.
But one of the things that has
always struck me about that
is that actually manual
labor, working
with things-- whether
you're working in a factory,
or even as a journalist working
with just raw reporting--
that's where I
get my best ideas.
I think working with a
product is where you see, hey,
we can fork off there.
We could do this better here.
Looking at it from a
historical perspective,
what do you think would be
the implications of a world
where manual labor, at
the scale we've known it
since the industrial
revolution, really
is completely diminished?
Well, the shift in
the type of labor
from the countryside
to the cities
has already had a tremendous
impact on revolutions.
A problem that I
have reflected upon
without having a
definite solution:
if one looks
at human history,
the definition of
the nature of reality
has been one of the driving
forces of philosophy,
and after philosophy, of
science and other fields.
That was the engine for the
period of Enlightenment.
Now we have a set of upheavals
that occur the opposite way:
instead of the
philosophic conception
of the age of
reason leading
to science based
on experimentation,
you have science developing
tremendous capabilities
without as yet having a
philosophical framework
within which to put them.
And when that develops
into yet another step,
as you have in Alpha Zero,
where you discover something
that you had not expected and
that you had not planned for,
you may have a shift
in the perception
of the nature of reality.
And that would then be a
transformation of history
for which, I would say, I
don't know a previous example.
So working in the field is
a tremendous responsibility
and a tremendous challenge to
the perception of evolution.
And it will probably
have to be navigated
by seeing whether more than
one system can coexist,
because the tendency
otherwise may be
to aim for a universal
domination which,
with the destructiveness
of what can be done,
would be too dangerous.
But these are
reflections of an amateur
who has taken his
experiences in politics
and his study
of history
and applied them to this
extraordinary new development.
And these are questions
that I'm asking.
They're not answers
that I'm proposing.
So to close, Henry, when you
come back 10 years from now
and give Rafael a report card
for the School of Computing,
what will constitute
success do you think
for this great new enterprise?
I would like to see
whether the people who
are exploring the next
state, the next future
have got a better
grip than now exists
on the nature of the conceptions
that artificial intelligence
produces.
I'm told that self-driving
cars, for example, when
they stop at a traffic
light, edge forward
with the other cars to see
whether they can beat the traffic.
But where did they learn that?
It's not in their system.
So what are these
machines learning
that they don't tell us,
because they don't communicate?
We have to study
what they're doing
to see what the process is.
Then I would like
to see whether it
had been possible to
develop some concepts that
are comparable to the arms
control concepts in which I was
involved, say, 50 years
ago, which were not always
successful.
But the theory was
quite explicable.
We don't have that yet.
And so in all of the
fields of exploration
or in most of the
fields, I would
be very interested to see
whether the enterprises
or the institutions
that are fostering them
are not just solving the problem
that got them interested,
but are studying
the implications,
and whether they have made some
progress on the implications that
will determine our future
and the future of the world.
Rafael, you have your homework.
Thank you very much.
[APPLAUSE]

---

### Computing for the People: Ethics and AI
URL: https://www.youtube.com/watch?v=Sm7I4QjscVQ

Language: en

Good afternoon.
My name is Melissa Nobles.
And I'm the Kenan Sahin Dean of
the School of Humanities, Arts,
and Social Sciences
here at MIT and also
professor of political science.
I had some remarks
prepared today
in advance of the
next panel on ethics.
And then after hearing
the conversation
with Mr. Friedman
and Dr. Kissinger,
I thought it provided
actually quite
an appropriate
introduction to this panel.
And so I kind of quickly
rewrote my remarks
in keeping with what I
heard this afternoon.
So in his article and in
this talk, we at MIT
were admonished to think
deeply and carefully
about the social implications
of AI in particular
and computation in general.
He also suggested that the
discipline that I study,
political science, and
particularly politicians
are not really
keeping up with all
of the technological advances.
And to a certain extent,
that is certainly true.
The humanities, the social
sciences, and the arts
are all grappling very
deeply with the ways
in which computation
is changing the world.
And it is in effect changing
the way we study the world.
And our expectation is
that with the bridge hires
being contemplated
in the college,
they will go a long way in
helping us to achieve that goal.
But at the same time,
the conversation
also reminded us
that technologists
themselves must much more deeply
understand what they are doing
and how what they are doing
is actually deeply changing
human life, and take that on
in a really deep and intentional
way.
So as I understood
it then at the end,
he gave not only President
Reif, but he gave all of us
a homework assignment.
And that
assignment is this.
It is to be truly
collaborative in our endeavors,
because the welfare literally
of humankind rests on it.
And it is with
that gravity that I
hope we will think
about the next panel.
So I call the panelists
up to the stage.
Are you all coming?
They're coming.
So our panel will again be
moderated by Tom Friedman.
And they will explore more
broadly the social implications
of computation.
And I'll introduce them
now once they're seated.
If you all, when
I say your name,
you can just raise your hand.
The first is Ursula Burns.
She's the executive
chairwoman and CEO
of VEON, a leading global
provider of connectivity
and internet services
headquartered in Amsterdam.
We have Ash Carter, who is
the director of the Belfer
Center for Science and
International Affairs
at the Harvard Kennedy
School and a former US
Secretary of Defense.
Jennifer Chayes is a technical
fellow and managing director
of Microsoft Research
New England, New York City,
and Montreal.
Joi Ito is director
of the MIT Media Lab.
Megan Smith is the
founder and CEO of shift7,
a company driving tech-forward
innovation for faster,
scaled impact.
She's also a former US
Chief Technology Officer.
And finally Darren
Walker, president
of the Ford Foundation, an
international social justice
philanthropy.
As I mentioned,
their discussion will
be moderated by Tom Friedman,
the three-time Pulitzer Prize
winner and a weekly columnist
for the New York Times.
Please join me in
welcoming them.
[APPLAUSE]
Wow, what an all-star cast.
This is great.
What a great way to
conclude this seminar.
Joi, I'm going to
start with you.
You're on the spot, pal.
And I'm going to start--
I happen to do a column--
I was in India last week.
And I did a column on
how AI is disrupting
the outsourcing industry.
And so we have comments
on our columns.
And I was reading the comments,
because they were particularly
interesting this week.
And there was one
letter there that
came in that made me think
of you and our prep call.
And the letter was from
Robert W. in San Diego.
And he said, "A
letter to the editor.
What we call AI is really
just machine learning.
My cats display
some intelligence.
If I come home from the
store with a bag of things
and toss them on
the couch, my cats
will run to the bag
to see what's in it.
They will smell it and pat at
it because they wonder about it,
just like our ancient ancestors
looked up at the stars
and wondered what those
points of light were.
It's a sign of intelligence.
A computer will never wonder
about the world or anything
in it, because it's
not intelligent."
And I was thinking about
it, because when we talked,
you said one of the
things you thought
was really important that we
not think of AI as some totally
new thing, that it's really just
extended intelligence, machines
increasing the
power of machines,
and that your big
concern is putting
this power, this
amplified power,
in the hands of technologists
not really prepared for it.
Please elaborate.
So let me just describe
extended intelligence
and talk a little
bit about that.
So actually Norbert Wiener,
MIT professor 50 years ago,
wrote this great book called
The Human Use of Human Beings.
And in it he describes
organizations
as machines of flesh and blood.
And I think corporations
are super intelligences.
They're this aggregate
of all these--
not necessarily more
wise than humans,
but they are more complicated.
And we can barely manage them.
And the way I think
about intelligence,
we already have
machines in the system.
And I think of AI as sort
of jet packs and blindfolds
that are going to come
on and just send us
careening in whatever direction
we're already headed in.
So it's going to make us more
powerful, but not necessarily
more wise.
And I think that a key thing
is to get our house in order
before these jet packs come on.
And I think a lot
of the presentations
before were about that.
I think the problem has been
that a lot of machine learning
and AI has been in the
domain of engineers.
And it has looked like
a technical problem.
And it's been very difficult.
Even though we talk
about explainability,
a lot of the explainability
has been explainability
between technical people.
And it's not
explainability to courts.
It's not explainability
to political systems.
And when you look at the code,
a lot of the technology people
say, oh, we're just technical.
We don't deal with
racial problems.
We don't deal with the
political problems.
Weirdly, when you
look at the law,
they often say the
same thing, too.
Torts law says if you run over
a rich person and a poor person
at the same time, we pay
the rich person more,
because our job in torts is not
to deal with redistribution.
We're just trying to
technically keep the status quo.
So one of the really
interesting things
is that the technology
people and the law people,
which are both kind of
necessary to get this right,
have kind of punted
on the politics piece.
And I think we're
getting to the point
where as these things
are getting deployed,
this interface between
society and engineering
is not yet tuned to the point
where we can integrate society.
And I think we have to do that.
And to me, I think that's why
this college is so important,
that that interface
has to happen
before we put these jet
packs and blindfolds on.
It's a good segue to you, Megan.
You said the
College of Computing
should also have the best
community organizers, the best
school of social scientists,
the best justice technologists.
So you produce
bilingual scientists
with a heavy emphasis on
the non-CS part of the job.
Yeah.
What did you mean by that?
Well, we have an incredible
EECS department at MIT.
Extraordinary.
We don't need to replace
that or replicate that.
Computing is really
for anything.
And it was so interesting
to me as US CTO.
By way of example, we
think computing today
is for certain things--
self-driving car, precision
medicine, these topics.
And yet why is it also
applied to any topic?
And I'm wearing CS for
all, CS for all people
to have hands-on
keyboard designers,
but CS for all topics.
And the challenge--
if I as CTO
went to HHS,
Health and Human
Services, a trillion-dollar
agency, and I went to the
H side, to a meeting
on precision medicine,
they'd say, oh great.
Let's get started.
If I went down the hall to
the foster care meeting
on the HS side,
they'd be like,
why are you here?
My computer's working,
right?
And these are
extraordinary people.
Our civil servants who work
in HS are just amazing.
They know more about these
systems and [INAUDIBLE]
those things.
So I guess one of
the key things that I
hope for this is
that I hope we're
going to do a lot of computing
on foster care solutions.
I think we should do some
computing on equality,
on world child poverty.
We feed 22 million children
in the free and reduced lunch
program during the school year.
And we can only get to 6
million of them in the summer.
That's a big data problem.
I think it's more important
than self-driving cars.
I love my friends who
work on self-driving cars.
But there's lots of us who
could work on lots of things.
And so one of the
things I'm really
excited about for this
computing college, the College
of Computing, and I think
everyone who's working on it
is that we really could
not just diversify tech,
but we could techify
everything else.
And we could really work
on the hardest [INAUDIBLE]
the hardest problems together
in this collaborative way.
It's such an opportunity.
It involves not only
actually solving
some of the ethics problems.
We'd be bringing some of these
topics into the mix of the code
and some of these other
humans in the mix of the code.
We're super lopsided
on who gets to speak,
who gets money, who
gets to set the agenda.
This is really silly.
And I call it TQ,
like tech quotient.
Let's add the TQ to everything.
And we can really embrace
solving a lot of things.
And I think we'll start
going in the right direction
if we have blindfolds on.
So I haven't checked this.
But I'm going to make a bet that
there was not a single question
on the implications of AI in
the 2016 debates for president.
Can you imagine that
happening in 2020?
Yeah, definitely.
I think that's going to happen.
And it was interesting,
because we're
moving so fast that I
remember when Secretary
Foxx, who was Secretary
of Transportation
with Secretary Carter--
when he came to be confirmed
as Secretary of Transportation,
there were no questions on tech.
And by the time he'd finished,
it was self-driving cars and UAVs
and all that.
So we're moving fast.
And yes, it will be there.
So Darren, I want to test out--
I've got a book idea.
And I want to test it out
on you since you're here.
And it sort of goes like this.
I actually wrote a book
a couple years ago,
Thank You for Being Late.
And it argued that the
world is being reshaped
by three accelerations, what
I called the market, mother
nature, and Moore's law.
So technology, globalization,
climate change, biodiversity
loss, population: three
giant accelerations.
And people are polite, so
they often come up to you
and say, hey, are you
writing a new book?
And I'd say, well
I just wrote a book
about the three largest forces
on the planet reshaping--
I don't have three
new ones this year.
And that actually
got me thinking
about what is going on.
And what is going on seems to me
is all of them are going deep.
And the book I think
I'd like to write
would just be called
Deep, because I
think all these things are
now going at a deep level.
It was very interesting
being in India last week,
because Jio, this new cell
phone company by Reliance,
has so driven down the
cost of cell phones
that suddenly a
couple hundred million
more Indians are getting
access to the network
now, because it got cheap.
And now they're able to go
deep in wholly new ways.
And of course, if you
watched the Oscars,
what was the Song of the Year?
It was Shallow.
[LAUGHTER]
But actually the verse is--
Was it?
It was.
--"I'm off the deep end.
Watch as I dive in.
I'll never meet the ground.
Crash through the surface
where they can't hurt us.
We're far from the shallow now."
And I think we're far
from the shallow now.
And so you've talked about
public interest technologists,
Megan, Joi.
You're all really talking
about if we go deep
without the philosophical,
legal, ethical norms
around just behavior
and privacy,
we're going to be really
far from the shallow.
What are your plans
at the Ford Foundation
to address that problem?
Well, first I think if we are
going to go deep without a view
as to whether AI
can advance justice
and whether it can
strengthen our democracy,
if we're going to engage
in this enterprise
without those questions driving
our discourse, we are doomed.
If we do not
understand that there
is a difference between
private interest
and the public interest and
that space is contested--
we saw that space in
the Zuckerberg hearings
where we had powerful
senators asking
this CEO basic questions about
how to turn their computer on.
The question was actually,
how do you make money
if you give it away for free?
Because quite frankly, in any
other sphere of importance
in our society, at a
congressional hearing there
would be some smart
person sitting
behind that congressperson
passing them notes,
saying ask him this.
He's wrong.
Challenge him.
The data says this.
And there's someone
sitting behind
in health, the
environment, human rights.
There was no one sitting
behind those senators,
because there are very
few people on the Hill
working in the public
interests on this larger
issue of this fourth
Industrial Revolution.
Why is that?
Because MIT and Stanford didn't
train them to go to the Hill.
And that's the potential
that this new Schwarzman
College offers and
why I think what
Rafael and Steve and the
others here are doing
is so potentially
transformational
for higher education and
more broadly society,
because if we don't have a
view about the public interest,
if we can't even define
what is the public interest
in this conversation,
then we won't
work for the public interest.
The professor said a moment
ago, I'm very excited about AI.
If I'm a black man on parole
or about to be paroled,
I'm not excited about AI.
Why not?
[INTERPOSING VOICES]
Because the way in which
predictive analytics is driving
decisions as to who gets
paroled and who doesn't is
having a pernicious effect
that is, in fact, reifying
and amplifying the very
human biases that we see
reflected in our society.
So the potential of AI
should be to help correct
the inherent biases,
the historic biases that
play out every day in America.
But it's not doing that.
And the answer can't
be, we don't know.
That can't be the
answer in a society
where inequality is
growing and where
those who have historically been
marginalized and disadvantaged
are having their
disadvantage compounded.
So will AI be a leveler?
Or will AI simply compound
the disadvantage and bias
that is already built into
our systems and structures?
I lived through this.
I've been at the New York
Times for 40 years almost.
And I lived through
this transition,
because I work for a news
organization that was basically
for most of its life printed
on a dead tree, on paper.
And over here, we
had a regulator
who said if somebody wants to
run an ad on your dead tree,
they have to identify
where the money comes from.
And over here, we
had an editor who
said if you make a
mistake on the dead tree,
you have to correct it.
And on top of the dead tree,
we had readers and advertisers.
Then along came Facebook.
They said, we're
not a dead tree.
We're a platform.
We don't need any
of your regulators.
We don't need any
of your editors.
But we want all of your readers
and all of your advertisers.
And we didn't know what to do.
And they were kind of cool.
And we were sort of old,
dead tree journalists.
And so we did the only
thing we knew how to do.
And that was trust them.
And they completely
violated our trust.
I mean not ours.
I mean the community's trust by
scaling their platform helter
skelter without
building in the editors
and the implicit regulation.
Do you think there's
any rolling that back?
No.
We have to engage.
And we have to talk
about the things
that we don't like
talking about, or at least
elites don't like talking
about, like regulation
and redistribution, because
unless we are prepared to have
a system that is fairer,
then our democracy
is going to be undermined.
So the question
for me is, who is
going to write the regulation?
Because actually, there is a
dearth of talent in Washington
who even understands the
fundamentals of that platform.
Ursula, you were saying no.
Elaborate.
Well, for full transparency,
I serve on the board
of the Ford Foundation.
And I think Darren's
foundation is right that it's
our problem that was created
and it's our problem to solve.
My reaction of no is about
the administration
that we're in today.
I don't think that there's any
interest whatsoever in pulling
back anything that would point
us towards more justice, more
equality, more freedom, less
regulation, or more regulation.
Bad time for government to be on
vacation at a giant inflection
point.
It's a really bad time.
And that's what
I'm nervous about.
I'm nervous about the
fact that we're moving--
I love Joi.
I love your analogy.
We're moving really fast.
And in a week, it's like a year.
So a year has passed in a week.
And we have probably a
couple of more years.
By the time we get
to the point where
we realize that
there's something
that we must do to
actually right the ship,
the ship will be in the
middle of the ocean.
That's one of the reasons why
I'm so excited about Schwarzman
College and I'm so excited
about being here at MIT,
because at the heart
of MIT is this idea
that hearts and minds, hands
can actually all come together
for the greater good.
And this is not
only about getting
a whole bunch of good
computer scientists writing
these great good programs.
It is about making the
world a better place.
And we have to actually figure
out a way to mix this together.
There are not a lot of other
checks and balances out there.
You said something that
was really interesting.
You said we trusted them.
We didn't even trust them.
I mean, we didn't even ask them.
I'm not even on Facebook--
They asked us.
--so I--
Right, neither am I. But I think
that what we have to do now
is we have to
actually stand up--
And get in their face.
--and get in their face.
And the people who have to do
that are people who are smart.
A lot of it's going to fall on
the back of education, higher
education, because
I don't believe
it will be led by the government
until we actually force them.
And there's not enough
people out there to do that.
So just one thing,
just a quick follow-up,
a little career counseling.
A lot of students here.
You've been a big employer,
hired a lot of people.
Going into this age of AI,
what would be your advice
to a student of the
kind of background
that you would be looking
for as an employer?
I want someone who believes
that nothing is inevitable.
I want someone who believes
that nothing is inevitable,
that their involvement, their
engagement, their contributions
to the solution will make
the solution better.
People who walk into my
company, any of the companies
who actually believe that
there's this thing that's
set up and all I
have to do is fit in,
I want them to leave tomorrow.
I want people who
walk in and say,
there's a way to make it better.
And it's not technical.
It's not scientific necessarily.
It's not even social.
It's active engagement.
It's a little bit of broad
knowledge and responsibility
to other people.
And it is amazing.
We have these amazingly rich
people-- that's why I love you,
Steve--
who literally own the world--
I mean, 1%, 99.9%.
And the idea that
there is nothing
they can do to make it
better is a false idea.
We don't only need them.
We need literally
the guy who comes
to work tomorrow for VEON
or for Xerox or for MIT
to actually believe that
nothing is inevitable,
that better is possible.
And I want them to
work towards that.
That's a great job offer.
Jennifer, [INAUDIBLE]
I just wanted to--
[INTERPOSING VOICES]
--comment, because there is
a nascent field that I hope
will be very well represented
in the new College of Computing.
In my labs, we call it FATE:
fairness; accountability,
which is really
being able to audit the
outcome; transparency,
so interpretability when
someone is not granted bail
(hopefully you haven't
used deep learning,
but something
which is interpretable);
and ethics.
They put ethics on the end,
because otherwise there's
a conference called FAT,
which doesn't sound so good.
OK, but anyway.
So there are already
two academic conferences
in this area.
There's the
[INAUDIBLE] conference.
And there's the AI, Ethics,
and Society conference.
And these nascent fields
are bringing together
legal scholars, ethicists,
social scientists, and people
in AI and asking,
how do we make some
of these decisions in a
more equitable fashion?
So I'm very excited.
I mean, I personally have
done something which we
call algorithmic green lining.
So we take a population-- you're
going to let people into school
or you're going to
grant them loans
or you're going to do something.
And how do I take that
objective function?
As you said, if
we don't watch out,
we just optimize to some
objective function, which
amplifies this.
I mean, it's really simple math.
It just amplifies the
inequities in the data.
And instead we have
some diversity component
or we have some
fairness component,
which has to come
from interactions
with social
scientists, ethicists,
and give you an outcome which
is fair according to this.
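
To make that concrete, here is a minimal sketch, in Python, of what adding a fairness or diversity component to a selection objective can look like. Everything in it (the candidate scores, group labels, greedy rule, and the `lam` weight) is an illustrative assumption, not the actual method used in her labs.

```python
# A minimal sketch of a "fairness component" added to an objective
# function. Scores, groups, the greedy rule, and lam are assumptions.

def select(scores, groups, k, lam=1.0):
    """Greedily pick k candidates, maximizing score + lam * diversity gain."""
    chosen, counts = [], {}
    for _ in range(k):
        best, best_val = None, float("-inf")
        for i in range(len(scores)):
            if i in chosen:
                continue
            # Candidates from groups not yet well represented add more value.
            gain = 1.0 / (1 + counts.get(groups[i], 0))
            val = scores[i] + lam * gain
            if val > best_val:
                best, best_val = i, val
        chosen.append(best)
        counts[groups[best]] = counts.get(groups[best], 0) + 1
    return chosen

scores = [0.90, 0.89, 0.60, 0.55, 0.50]
groups = ["a", "a", "b", "c", "a"]
print(select(scores, groups, k=3))  # [0, 2, 3], not the top 3 by raw score
```

With lam=0 this reduces to picking the top k scores; raising lam trades a little raw score for more group balance in the outcome.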
And so there is a nascent field.
And I think that the
College of Computing
is an ideal place to really
grow this field, the interaction
of the computer scientists and
[INAUDIBLE] and the other part.
So I just wanted to say it's not
like no one is looking at this.
But it's nascent.
And it is something that I
think all undergraduates should
learn about to help them.
When they hear about
the predictions of AI,
have them question
those predictions.
But it can't happen without
universities taking the lead.
Absolutely.
And the challenge
for universities
that Rafael has
unlocked, because I've
talked to dozens of
university presidents,
is that the nature of
the problem to be solved
requires synthetic thinking
across all disciplines.
And when you talk to a provost
or university president,
they say with the
door closed, we're
not set up to work that way.
And so I literally have
had presidents say,
this is a powerful idea.
But we don't know how to do it.
But the fight that I will
have to have with my deans
and faculty over this
cross-campus learning
and structures that
will need to be created
is too big a fight for me.
And so the brilliance
and the potential
here is that this new college
is starting from scratch.
And it can build all
of the disciplines.
And that's the challenge, right?
And so that's why
what MIT is doing
is going to set the pace for
every other university that
wants to be relevant
in the future.
My motto in doing journalism
is from my teacher,
Lin Wells, who said,
never think in the box.
Never think out of the box.
Today you have to
think without a box.
So if you are not, in
my case, arbitraging
what's going on in climate,
what's going on in technology,
what's going on in
globalization, what's
going on in business, you're
going to miss the story.
I'll give you an example,
the revolution in Syria.
The revolution in
Syria actually began
with the worst four-year drought
in Syria's modern history
between 2006 and 2010 where
a million Syrian farmers
and herders left their
homes, flocked to the cities,
lived on the margins
of these cities.
Outside did nothing for them.
Then they got connected
on cell phones.
Then the Arab Spring happened.
The whole thing was a
complete melange of market,
mother nature, and Moore's law
blowing the lid off the place.
And if you're just--
I got my BA in Arabic
and Middle East History.
If I'd stayed
there, I would never
have understood what's going on.
Ash, I'm really glad
that I live in a country
where engineers at one of our
biggest tech firms would say,
don't want my work
going into weapons
that are going to kill people.
I'm also really glad I
live in a country protected
by the Pentagon, the US Army,
Navy, Air Force, and Marine
Corps, because
there are people who
want to destroy our freedoms.
How do you resolve that?
Well, you're referring to the--
[INTERPOSING VOICES]
Google, yeah.
--even at Google.
And here's what I'd say
if they were here--
and it was not all
Google employees,
but some Google employees.
The first thing
I'd say is, listen,
I respect that at least
you're thinking ethically.
I'm going to come
to a different place
with that chain of reasoning.
But good on you, because you're
thinking about the morality
of what you're doing.
And by the way, think about that
with respect to everything else
you're doing.
It's not just if you're working
for the Defense Department,
suddenly moral weight
falls upon you.
Moral weight falls upon
you whenever you're
doing anything consequential.
Now for defense, I'll just
say one thing about defense.
And then I want to offer
a little hope here--
Please.
--and sort of how-tos
that I've learned
over the course of a long
career in technology,
not just defense.
But seven years ago, 2012,
before all this discussion
went on, I was the number two
in the Department of Defense.
And I issued a
directive addressing
the issue with respect to
autonomous weapons that
is still in force.
And what it says is
that there will not
be autonomous lethal
systems in our future,
that any decision to
use lethal force, which
is a grave matter on
behalf of our people,
there must be a human being
involved in the decision
making.
I didn't say in the loop,
because those of you who
understand technically,
that's not actually
literally the right
formulation, but a human
being involved in
decision making.
Nobody paid any attention
to it at the time.
But that is the extant guidance
in the Department of Defense
and the right guidance.
So the first thing I'd remind
them if they didn't know
was that.
But more fundamentally,
I'd say this.
Look, early in my career, I
was brought up by the Manhattan
Project Scientists.
And they said, get in the game.
This is too big a deal for
you to stand on the sidelines.
So I'd say first of
all, good on you.
You're thinking morally.
But secondly, you're headed
in the wrong direction.
If you want us to do the right
thing, it's our government.
It's the only
government you got.
You can't walk down the street
and shop at another government.
Get in the game.
Make us do the right thing.
Bring your knowledge to that.
And also I might say,
and that takes us
in a whole different
direction, how do you
like working for the PLA?
Because you do that, and
over there you can't tell.
The People's Liberation Army?
The People's Liberation Army.
And that is--
What do you mean by that?
Well, I'm somebody who worked
with the Chinese a long time.
And I don't mean I want to see
a Cold War between the United
States and China.
But it's a communist
dictatorship.
And they're intent upon using
AI as an instrument of control.
Those are not the
American values
that I think are important.
Now, to what to do about it,
yes, educating our students.
That's why I'm
here, because I want
to respond to that exact
desire of those young people
to say, hey, wait a minute.
What's going on here?
Are we doing the right thing?
But give them
something to work with.
Here are some
things to work with.
Accountability has
been discussed here.
And as an algorithmic matter, if
you know how these things work,
that is not automatic.
It needs to be a design
criterion for people
who are designing AI.
You need to say--
you need to build that in.
I realize that's
difficult. But I've
been running technology
programs for my whole career.
And I've heard
plenty of engineers
say they couldn't do
something when they
didn't want to do something.
And so I would say
if you don't do it,
I ain't buying it, if you
don't put it in there.
And then Darren was speaking
very eloquently about AI
as an amplifier of crummy data.
And there need to be data
standards and transparency
there as well.
Otherwise, you're just
massaging yesterday
into a perfected version of then
rather than creating tomorrow.
That is something
that is doable.
And the last thing is I'll
say to Congress, yeah.
I mean, that was a big--
that day will go down in
my personal mental history
as one of the greatest
missed opportunities
in our collective
history, because imagine
a different hearing.
Imagine a hearing in
which the members asked--
I've been joking that I wish
they had been as poorly
briefed when they
asked me about issues
of [INAUDIBLE] in many,
many testimonies
as they were when asking Mark
Zuckerberg about technology.
There is something that
can be done about that.
And he, for his part, didn't
pass the test of history
in my judgment.
Nothing against him personally.
But that's not going
to fly as an accounting
for the ethical behavior
of what you've done.
For the members, I've
seen it work differently.
And you're right.
Darren's exactly right.
There are people behind him.
Where do they come from?
In my experience in
technology, in addition
to having some people who
know how to do the mix,
which is what
we're all about now
and what the Schwarzman College
of Computing is all about,
there is a mechanism that works
well for Congress. I was part of
it once upon a time, for one
instance, and it has a few ingredients.
The first is the members have
to really want the information.
And now that's not
true about everything.
So I'm sorry to say,
you're not going
to be able to do a scientific
advisory piece of work
on some subjects where
they don't want to hear.
But here, this is something that
is not yet partisan, whatever
that would even mean
in this area, that
is not yet polarized.
They genuinely wonder.
So that's the first thing.
The second thing
is that you need
to have something that is
demonstrably not only expert,
but monitored, usually that you
get some wise group of people
to oversee it and
assure the members it
hasn't gone off track.
And the last thing-- and
this is really crucial, Tom--
is these people
find it much easier
to digest choices and
options than they do--
[INTERPOSING VOICES]
--an answer.
So here's what you do.
You say here are some
different ways you can
go about accomplishing this.
But they're all
technically well grounded.
But they differ in respectable
ways from one another.
And then they can
do what they do
best, which is apply
their broad values
and experience to represent
the people back home.
Now, when you frame things that
way, it goes down a lot easier.
And secondly, you find
that 80% of the content
is actually in the
common findings that
are behind all of the choices.
And that's a great healing
and coming together factor.
So for all these
things, I'm just saying.
There are ways-- not only
do we need new kids who
get it and are spirited, and
I'm so glad for this generation
which feels the way they do.
But you can give them
something to go on.
And the reason I
was comfortable,
just to get back to your
original question, with that
directive way back
in 2012 was I knew
I wasn't whistling in the wind.
I didn't only have confidence
in the morality of it.
I had confidence in
the doability of it.
So this is doable, darn it.
And we can't--
Good.
--have otherwise.
How doable is it to really
substantially increase
the representation of women
and minorities in CS, AI?
And what's the Schwarzman
School's plan for that?
Megan?
There are plenty of people
who could be in this faculty.
So we could gender balance,
race balance, geo balance,
topic balance this faculty.
It's 2019.
There's no question.
It's just a question
of how good are we
at finding the talent
that definitely exists.
I'd be happy to help.
It's there.
And we could do that.
And it's one of the most
important things we can do,
because we can set
this team up to be
the right kind of broad
faculty, thinking broadly,
coming from all over the world.
There's no question.
I think it is
absolutely required
that we approach this from
a completely different point
of view.
And that point of view is one--
we have to approach it that
the outcome has to be x.
This is not we're going
to try to figure out a way
to not hurt anybody's feelings.
And we're going to do the right
laws and whatever the heck it
is to make people tenured.
I think if we do that,
we know the outcome.
The outcome will
be majority male.
It will be an age swing that's
pretty skewed towards older.
So I think we can-- but we
have to be willing to do it.
It's like Darren said.
Darren said, you go
to the universities
and they say, well,
I can't do that.
I know it's the
right thing to do.
But I can't do that,
because I would
have to fight with my ABCDE.
And I say, well, we
should fight with them.
But Ursula, I think
we have to be--
we have to be very
careful, because we
have allowed a narrative that
is about political correctness.
Absolutely.
This is about excellence.
That's right.
[INTERPOSING VOICES]
And so we need to talk
about excellence--
And there are excellent people.
--and stop talking about we need
this many blacks, this many--
yes, we need to--
but this is about excellence.
Excellence.
And you can find them
everywhere, by the way.
Yes.
So this is where my algorithmic
green lining came from.
I have a few heuristics.
I have much more
diverse labs than almost
any other technical labs that
I know of at any companies.
I have more women.
I have more people of color.
We have more LGBT.
And so what do I do?
If I'm looking for somebody
in machine learning,
I have a little
heuristic in my mind.
I've seen people go from
information retrieval
into machine learning.
I've seen them go from
information systems
into machine learning.
So I don't lower the
standards, for God's sake.
I broaden the scope of
what I'm looking for.
And that was where the
algorithmic green lining
came from, because I have a
few heuristics I've developed
over a couple of decades.
I want AI to look at the
very high dimensional space
and search it and see
what are the areas,
so I get a Pareto curve if
you want to be technical.
What are the ways that I
can change the criteria that
will get me the most gender
diverse, racially diverse,
et cetera?
So I want AI to be so
much smarter than me
in this high dimensional space.
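
One hedged way to read the Pareto-curve remark, reusing the hypothetical select() sketch from earlier: sweep the fairness weight and record the merit/diversity trade-off of each resulting selection. None of this is her actual tooling, just one way to picture the idea.

```python
# Sweep the assumed lam weight from the earlier select() sketch to
# trace a simple merit-vs-diversity trade-off curve (a crude Pareto scan).
frontier = []
for lam in [0.0, 0.25, 0.5, 1.0, 2.0]:
    picked = select(scores, groups, k=3, lam=lam)
    merit = sum(scores[i] for i in picked) / len(picked)
    distinct = len({groups[i] for i in picked})  # crude diversity proxy
    frontier.append((lam, round(merit, 3), distinct))

for lam, merit, distinct in frontier:
    print(f"lam={lam}: avg score={merit}, groups represented={distinct}")
```

The settings where no other setting is better on both axes form the Pareto curve she mentions; a real system would search a much higher-dimensional space of criteria.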
So I think it is doable.
It's absolutely doable.
And there are, in fact,
technical solutions
if we make sure
that we are talking
to the people who understand
the social implications.
[INTERPOSING VOICES]
Thank you.
I think you've hit a good
nerve with all of us.
[INTERPOSING VOICES]
--for me, too.
When I opened up all military
specialties to women,
there were people who'd say--
you'd get members of Congress.
And so they'd say, well, you
know, we need a military.
We can't be running--
you can't be running
social policy there.
And I would say,
you don't get it.
I said exactly what Darren said.
This is half the population.
I am an all-volunteer force.
I need excellence.
For me to take half
the population off
the table would be contrary
to mission effectiveness.
And I can't do that.
So you're not only
wrong, you're upside down
in framing it that way.
And the second
thing I'd say, Tom,
to what everybody else has
said, this is, again, an area
where it's not like we
don't know how to do this.
There's tradecraft out there.
And if you despair
at some of these things--
I had.
And these are not
things you want
to have to learn to be good at.
But when it came to sexual
assault, post-traumatic stress,
there are various
things that arise
in the environment
I used to run,
just as there is in the
environments everybody else.
And sometimes we
wrote the playbook,
which I'm proud
to say, because we
tried hard to learn and do
better and improve ourselves.
In other places, you could go
out and there's a playbook.
So the idea that
there isn't a way
to increase diversity in senior
leadership ranks and companies
I think is rubbish.
I think there are proven ways.
Go out there and find out
what people have done.
And you don't have to
invent it all yourself.
Tom, can I just give us
one reason to feel hopeful?
Yes.
Because this--
Then you're going to close it.
I would hate for us
to leave this not--
Exactly.
--feeling hopeful, especially--
Bless your heart.
[INAUDIBLE] of this moment.
There is a solution here.
In the 1960s, what we
think of today in the law--
I was trained as a lawyer--
as public interest
law did not exist.
If you were a young,
aspiring lawyer
and you were leaving
law school, you
went to work for a law firm
or a corporation or maybe
the government.
There was no such thing
as public interest law.
And law schools didn't
have a curriculum.
Well, that changed in part
because of the intervention
of a group of
philanthropists, foundations,
including the Ford Foundation,
to create intentionally
a new system.
That can be done.
Absolutely.
And actually MIT is going
to be the anchor of what
we will know in this society
as public interest technology.
On the Hill, there is
something called Tech Congress.
[APPLAUSE]
A group of foundations are
funding young, bright PhDs,
MAs in CS to work on the
Hill for Congress people.
There are a number
of philanthropists.
We need more to join
us in this venture.
And there are a
number of efforts
like that being put
into this ecosystem
to do exactly what you're
saying [INAUDIBLE],
to actually
transform that system
to serve in the public interest.
Darren, that's a great--
[INTERPOSING VOICES]
--I think, setup for the
close, because I unfortunately
have to catch a plane.
But I just want to say that
when Rafael called me and said,
I want you to do this
seminar, I really
didn't know-- when Rafael
calls, I say aye-aye, sir,
and I'll be there.
We all do.
That's why we're all here.
But I'm so impressed by
what I've heard today.
I'm so excited
about this school.
[INTERPOSING VOICES]
I think it is so cool.
Steve, you and Christine
are doing the Lord's work.
Rafael, I salute you.
It was a privilege
to be here today.
[APPLAUSE]

---

### MIT Schwarzman Celebrate Closing
URL: https://www.youtube.com/watch?v=ofXvS95CzQQ

Language: en

Wow, what a day.
For those of you I haven't
had the pleasure of meeting,
my name is Marty Schmidt.
I'm the provost at MIT and
the Ray and Maria Stata
Professor of Electrical
Engineering and Computer Science.
I had about 20
minutes of remarks
that are going out the window.
I just want to say three things.
First of all, we've had an
amazing three-day party.
And there's something
significant in what
Steve said earlier
in the conversation
with Rafael that I think
makes it amazing that we could
do something like we did today.
And that is, we kind
of think of ourselves
as outgoing when we look at
your shoes when we talk to you.
And so to have an event
where we celebrate
what MIT is doing
in this space is not
something that comes
naturally to us.
But I think it was
just a marvelous event.
That brings me to
my second point,
which is a very
important acknowledgment.
I want to single out our
Dean of Engineering, Anantha
Chandrakasan.
[APPLAUSE]
I'm not done.
Anantha has many talents.
He's a gifted teacher.
He's an amazing researcher,
and an inspirational leader.
His talents were on
full display when
he organized the
dozens of people
who put on this celebration.
But I also want to
stress that I don't
believe that we
would be here today
had it not been for Anantha.
And it's not just
about today's event.
The College builds
on a number of steps
that MIT has taken in the past
several years, in which Anantha
has played the key
leadership role.
It began with standing up a $240
million IBM partnership
about 60 days after assuming
the job of Dean of Engineering.
He followed that by launching
the Institute-wide Quest
for Intelligence.
And lastly, Anantha was an
indispensable thought leader
and partner in the process that
led to the decision
to create the College.
For all that he's done
to get us to this point,
can I ask everyone to please
join me in thanking Anantha?
[APPLAUSE]
So, I want to make one
other acknowledgment.
And it's that we've
been joined today
by a person who-- at least
for a few more months,
we can refer to as a
very special guest.
And that's Dan Huttenlocher.
So Dan, why don't you
come up while I do
a little bit of introduction?
And you're about to do one of
the most important things you
do, which is offer some remarks.
Just a week ago, I
announced that Dan
will be the first dean of
the MIT Stephen A. Schwarzman
College of Computing.
Dan's credentials
are impeccable--
a member of the
faculty of Cornell.
He was the founding
dean of Cornell Tech,
the graduate school
in New York City.
Tremendous scholarship
in computer science.
He brings an extensive
background in industry,
sitting on the
boards of directors
of Amazon and Corning,
and also chairs
the board of the John D.
and Catherine T. MacArthur
Foundation.
But Dan's no stranger to MIT.
And in fact, that made him a
logical choice for this role.
He has both a master's
degree and a PhD from here.
And he served, actually, on
two of the Institute's visiting
committees.
He's going to assume his post
as dean of the Schwarzman
College in the summer.
I'm thrilled to welcome
him here in person.
And Dan, being
cognizant of the fact
that people have been sitting
here for about three hours,
why don't you offer
a few remarks?
[LAUGHTER]
Thank you all.
And you'll notice I have
nothing written, which
means I'm not actually
going to say very much,
except one way to
look at this is,
thank you so much for throwing
this amazing party for me.
[LAUGHS]
No, really, much more seriously:
Rafael and Steve, hearing
the two of you
talk about how this came about,
and Steve, hearing how
important the sets of issues
that have been talked
about here today
are to you and to
your family, I think
that sets the stage for
what this college is about.
And with that, I'm
going to end the day.
And thank you all.
[APPLAUSE]
Whoops, except Marty's back.
Thanks so much, Dan.
That's terrific.
So, we're concluded.
Let me just remind you that
there's a reception and poster
session in the tent behind us.
And I look forward
to seeing you there.
And thank you, everyone,
for participating today.
Good evening.
[APPLAUSE]

---

### MIT reshapes itself to shape the future
URL: https://www.youtube.com/watch?v=nzt18BIAt3g

Language: en

[MUSIC PLAYING]
AI is accelerating growth in
virtually every field. It's the
ultimate disrupter.
Computer science is making
inroads to every academic
discipline and every aspect
of our lives.
The economy and society are
really demanding people who
understand this new domain of
computing and artificial
intelligence.
Our students recognize that
computing is shaping the future,
and they're pushing for an
education that sets the
foundation for it.
MIT is making a bold announcement
today. MIT is announcing the
creation of the Stephen A.
Schwarzman College of Computing.
That's a radical, bold move by
MIT to adjust and prepare the
students of today for the world
of the future.
There is tremendous interest in
understanding the ethical and
societal implications of new
technologies, and particularly
the technologies related to
computing and data.
I think that what excites me is
that it's not just for advancing
computer science, but every
aspect of computing and its
interrelation with every
academic discipline.
Scholars and faculty from other
departments will also be
interacting with, and shaping,
the tools that are being created
at the College. So this is not a
one-way street — it's a two-way
street, a collaborative effort
in a new intellectual frontier.
Departments across the Institute
have come to realize that
computer science matters to them,
and other departments want to
form joint programs with us.
So this is a sign of what's to
come, and shows you the reason
why we need to become a College.
At MIT we take very seriously
our responsibility to prepare our
students to be leaders, and to
be leaders in addressing and
solving the most important
challenges the world is facing.
We have done that for generations,
and we will continue to do that.
And now, more than ever, we need
to prepare our students to
understand the ethics and
societal implications of these
new technologies and these new
advancements, and that's a job
that universities must do.
My intent with this gift is to
make the world a better place,
and to make the United States as
strong a country as it can be,
and bring prosperity and a good
life to as many people as we can.
And we're in a unique time in
history where we actually can
do those things, where the
capability is there, and it gives
me real joy to be able to be
part of that process.

---

### Creating a College: Stephen A. Schwarzman
URL: https://www.youtube.com/watch?v=Nfcxxn0NZTc

Language: en

My intent with this gift is to
make the world a better place,
make the United States as
strong a country as it can be
and bring prosperity and a
good life to as many people
as we can.
And we're in a unique
time in history
where we actually
can do those things.

---

### The Future of Computing: MIT President L. Rafael Reif
URL: https://www.youtube.com/watch?v=P9H90JnF_28

Transcript not available

---

