
Transcriptions

Note: this content has been automatically generated.
00:00:00
Alright, well, thank you so much for inviting me. I'm
00:00:04
going to talk a little bit about the current state of AI, what I think its limitations are,
00:00:12
and where it might go next.
00:00:16
We've all heard about the deep learning revolution.
00:00:21
The idea behind deep learning is actually very
00:00:24
old; it comes from the visual cortex in the brain,
00:00:30
which is at the very back of the brain and is roughly
00:00:36
assembled in a series of layers, like I'm showing here. That inspired the
00:00:43
idea of neural networks, which are assembled in a series of layers.
00:00:49
Layered neural networks have been around in
00:00:56
AI research since the nineteen seventies, but only recently
00:01:02
have the computing power and the
00:01:06
data been available to actually make them work.
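The series-of-layers idea described here can be sketched in a few lines of code. This is a generic illustration (the layer sizes and random weights are arbitrary, not any particular network from the talk): each layer multiplies its input by a weight matrix and passes the result through a nonlinearity to the next layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Simple nonlinearity applied between layers
    return np.maximum(0.0, x)

# Three weight matrices: 4 inputs -> 8 hidden -> 8 hidden -> 2 outputs
layers = [rng.standard_normal((4, 8)),
          rng.standard_normal((8, 8)),
          rng.standard_normal((8, 2))]

def forward(x, layers):
    # Feed the input through each layer in series,
    # mirroring the stacked layers of the visual-cortex analogy
    for w in layers[:-1]:
        x = relu(x @ w)
    return x @ layers[-1]  # raw scores for the two output classes

scores = forward(rng.standard_normal(4), layers)
print(scores.shape)  # (2,)
```

Training would then adjust the weight matrices from labelled examples; only the forward pass is shown here.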
00:01:10
So it's things like the human-labelled images in the ImageNet dataset
00:01:17
that have spurred enormous progress in
00:01:21
computer vision, especially in recognising objects.
00:01:27
And in fact, over the years of
00:01:31
the ImageNet competition, many companies and researchers competed to
00:01:37
do well in this object recognition competition.
00:01:42
In 2012, the very first deep neural network
00:01:47
was entered in this competition, and the error rate plummeted: it
00:01:53
went way down, and has gone down and down until it's about
00:01:58
the same as or below human performance on this particular dataset.
00:02:04
So this has spurred enormous optimism
00:02:07
and visible progress in computer vision,
00:02:11
including in areas like facial recognition
00:02:16
and self-driving cars, which are now able to identify objects on the road,
00:02:23
and game playing, like Go, where
00:02:28
the Go grandmaster
00:02:31
Lee Sedol was defeated by AlphaGo, a program based on neural networks,
00:02:38
and even language tasks, such as OpenAI's GPT-3
00:02:44
language model. So just for fun, I put in the names and
00:02:50
titles of a few talks at this particular meeting
00:02:56
and had GPT-3 generate,
00:02:59
down below, some additional suggestions for possible talks.
00:03:04
So I don't know if any of these people are real, but
00:03:07
it looks very convincing, and it generates
00:03:11
language that looks very human-like.
00:03:17
However, as the previous speaker noted, there are many limitations to current
00:03:23
AI. He said there's no intelligence in artificial intelligence, and
00:03:31
even though we don't have a well-formulated definition
00:03:35
of what intelligence is, I think I have to agree.
00:03:39
So here are some of the limitations. First of all, we know that these machines learn
00:03:45
through millions of human-labelled examples, such as photos
00:03:52
or sentences or any other kind of data,
00:03:56
and this is very different from human learning,
00:04:01
which only requires a very small number of
00:04:03
examples, if any at all. These machines are enormous:
00:04:09
the neural network underlying the GPT-3 language generation program
00:04:15
has a hundred and seventy-five billion parameters; those are
00:04:20
the numerical weights in the neural network. And it had
00:04:25
to be trained on text amounting to hundreds of billions of words.
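A quick back-of-the-envelope calculation makes that scale concrete (the four-bytes-per-weight figure is my assumption of 32-bit storage, not a number from the talk):

```python
# GPT-3's parameter count as cited in the talk
params = 175_000_000_000

# Assuming each weight is stored as a 32-bit float (4 bytes)
bytes_needed = params * 4
print(bytes_needed / 1e9)  # 700.0 -- roughly 700 GB just to hold the weights
```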
00:04:29
This scale of size and training
00:04:35
makes the creation of these systems available only to
00:04:41
large companies that have this kind of computational resource and
00:04:46
this collection of data. And these machines
00:04:51
are sometimes not transparent in what they actually learn.
00:04:56
Here's a very simple example: one of my graduate students trained a deep
00:05:03
neural network to decide whether a photo contains an animal or not.
00:05:10
So here the machine says "animal", and here it says "no animal".
00:05:16
It was trained on a large number of labelled nature photographs,
00:05:21
but when my student tried to understand exactly what the
00:05:25
machine had learned to enable it to perform this task,
00:05:29
it turned out that one of the things it was using to
00:05:33
make this decision was the background, not the foreground with the animal.
00:05:38
Whether the background was blurry (you can see it's blurry here and not blurry here)
00:05:45
was statistically associated with having an animal, because the photographers were
00:05:51
focusing on the animal in the foreground; here there's no focus at all. So this
00:05:58
machine had learned to do the task very well using
00:06:03
information that was not what we had intended it to learn.
00:06:09
This is called shortcut learning in machine learning.
00:06:12
It's very common: the machine learns something
00:06:17
that enables it to perform the task but is not
00:06:22
the same thing that humans use to perform the task, and thus
00:06:26
the machine can then make errors that are very
00:06:29
non-human-like. For example, another research group looked at
00:06:35
neural networks that were very good at object
00:06:39
recognition; for example, they could identify this as a fire truck
00:06:44
with ninety-nine percent confidence. But if
00:06:48
the object was photoshopped into different poses,
00:06:55
now the system was very confident that it was a school bus
00:07:01
or a fireboat or a bobsled. This is amusing,
00:07:09
but it really shows that the machine is relying on something very
00:07:14
different to make its decisions than what humans use in real life.
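The background shortcut described above can be reproduced with a toy synthetic dataset. In this hypothetical sketch, a "blurry background" feature perfectly tracks the "animal" label in the training data (as the photographers' focus did), so a classifier that keys only on the background looks perfect in training yet drops to chance once that correlation is broken:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_data(n, blur_tracks_label):
    # Feature 0: "animal shape" -- the real signal, always matches the label
    # Feature 1: "blurry background" -- the potential shortcut
    label = rng.integers(0, 2, n)
    shape = label
    if blur_tracks_label:
        blur = label.copy()           # training data: animals come with blurry backgrounds
    else:
        blur = rng.integers(0, 2, n)  # test data: the correlation is broken
    return np.stack([shape, blur], axis=1), label

def shortcut_predict(X):
    # A "shortcut" classifier that looks only at the background feature
    return X[:, 1]

X_train, y_train = make_data(1000, blur_tracks_label=True)
X_test, y_test = make_data(1000, blur_tracks_label=False)

train_acc = (shortcut_predict(X_train) == y_train).mean()
test_acc = (shortcut_predict(X_test) == y_test).mean()
print(train_acc)  # 1.0 -- looks like it solved the task
print(test_acc)   # about 0.5 -- chance, once the shortcut disappears
```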
00:07:21
This kind of, what we call, brittleness in deep neural nets
00:07:27
can lead to accidents and catastrophes, such as
00:07:31
a self-driving car, as has happened,
00:07:35
not recognising a fire truck stopped on the highway and crashing into it.
00:07:43
With systems like GPT-3, which I mentioned before, this text generation system,
00:07:50
it has been shown very clearly that it does not understand the text that it's
00:07:57
generating. Even though it can look very human-like, it can make
00:08:04
big errors in a very non-human-like kind of behaviour.
00:08:13
And so researchers have called this the Clever Hans
00:08:17
phenomenon. Clever Hans was a horse back at the
00:08:22
turn of the twentieth century who supposedly could do mathematics:
00:08:29
a human would give it a math problem and it
00:08:32
would pound its hoof to give the answer.
00:08:37
But it turned out that Clever Hans was not doing math at all; it
00:08:41
was responding to subtle body language of
00:08:45
its trainer. And the analogy now is that
00:08:49
these Clever Hans predictors, these neural
00:08:54
networks or machine learning programs, are using subtle
00:08:58
cues in the data; they are not actually performing the task
00:09:03
the way humans would, but responding to subtle statistical correlations.
00:09:10
And this is shown again in the vulnerability of neural networks to
00:09:15
what are called adversarial attacks. So this group showed that,
00:09:23
given an image that is correctly classified
00:09:28
by a neural network, if you add a carefully engineered perturbation,
00:09:34
this is the noise, which has been highly magnified here, and this is the
00:09:38
result. It looks identical to humans, but now the machine will always
00:09:43
classify this as an ostrich, no matter what
00:09:49
picture is given the engineered perturbation. These are called adversarial attacks
00:09:55
on neural networks, and they've been shown to work even
00:10:01
by putting simple stickers on stop signs: a machine
00:10:07
that has been trained to recognise stop signs will now recognise this
00:10:12
as a speed limit sign, which is not a good thing,
00:10:16
especially if this is in miles per hour; not a good thing for a self-driving car.
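The engineered-perturbation idea can be illustrated on a toy linear classifier (a schematic sketch in the spirit of the "fast gradient sign" attack, not the actual experiments cited): nudging every input component by a small amount against the gradient of the score flips the decision while barely changing the input.

```python
import numpy as np

# Toy linear "classifier": score > 0 means class A, score < 0 means class B
w = np.array([0.3, -0.2, 0.5, 0.1])

x = np.array([1.0, 0.5, 0.2, -0.3])  # an input correctly classified as class A
score = w @ x                         # 0.27 > 0

# Adversarial step: move each component by epsilon against the
# sign of the score's gradient (for a linear model, that gradient is w)
epsilon = 0.3
x_adv = x - epsilon * np.sign(w)

adv_score = w @ x_adv                 # now below 0: classified as class B
print(score, adv_score)
print(np.max(np.abs(x_adv - x)))      # each component moved by at most 0.3
```

Real attacks do the same thing against a deep network's gradient, with perturbations small enough to be invisible, which is why the doctored ostrich images look identical to the originals.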
00:10:24
So these are problems with the current robustness of AI systems,
00:10:31
but they also show that these systems really are learning something quite
00:10:35
different from what humans learn in the real world.
00:10:40
So I wrote a paper recently called "Why AI
00:10:44
Is Harder Than We Think", which went through some
00:10:48
reasons why I think AI is not soon going to be at the level of humans.
00:10:55
And one of the reasons is this quote from Hubert Dreyfus, a philosopher
00:11:02
who wrote extensively about AI. He noted that
00:11:06
an unexpected obstacle in the assumed continuum of AI progress
00:11:13
has always been the problem of common sense. So common sense is now a huge
00:11:20
kind of buzzword in AI. It's a
00:11:26
problem that many people are trying to get at,
00:11:29
such as Paul Allen, the cofounder of Microsoft:
00:11:36
before he died a few years ago, he invested a
00:11:39
lot of money in getting AI common sense.
00:11:45
The Department of Defense in the United States is devoting quite a bit
00:11:49
of funding to trying to get machines to have common sense.
00:11:54
But it is a very difficult problem. Imagine, for
00:11:59
example, that a self-driving car is faced with a situation like this.
00:12:05
What do I need to understand about the world to be able to predict
00:12:12
what's going to happen in this situation? Well, there are many things that humans understand,
00:12:20
such as what we might call intuitive physics,
00:12:23
that is, how different objects interact in the world;
00:12:27
intuitive psychology, how people interact and what
00:12:32
their relationships are; intuitive biology, knowing why other living
00:12:37
things, like this dog, are doing what they're doing; models of cause and effect; very vast
00:12:43
world knowledge that we're unaware of consciously
00:12:46
but that we use to understand new situations;
00:12:50
and finally, the ability to abstract and make analogies to
00:12:54
situations that we've been in previously. All these things
00:13:00
are, I would say, AI's biggest
00:13:04
open challenges, and we're very far, I would
00:13:08
say, from being able to get AI systems to have anything close to human level
00:13:16
on all of these abilities, which are fundamental to being able to function in a
00:13:22
robust way in the real world. I wrote about this extensively in a recent book
00:13:31
called Artificial Intelligence: A Guide for Thinking Humans, which has just come out in a French edition.
00:13:37
So I hope that some of you will take...


Conference Program

Welcome words
Aurélie Rosemberg, Fondation Dalle Molle
Sept. 11, 2021 · 4 p.m.
Opening
Jean-Pierre Rausis, Président de la Fondation Dalle Molle
Sept. 11, 2021 · 4:15 p.m.
Artificial intelligence and quality of life
H. Bourlard, Idiap Research Institute
Sept. 11, 2021 · 4:30 p.m.
Artificial intelligence to think like humans
Melanie Mitchell, Professor at the Santa Fe Institute
Sept. 11, 2021 · 4:45 p.m.
Towards human-centered robotics
Sylvain Calinon, Research Director at the Idiap Research Institute
Sept. 11, 2021 · 5 p.m.
Supporting sustainable transitions around the world through water technology
Eric Valette, Director of AQUA4D
Sept. 11, 2021 · 5:15 p.m.
Biometric security
Sébastien Marcel, Research Director at the Idiap Research Institute
Sept. 11, 2021 · 5:30 p.m.
Compatibility with humans: AI and the problem of control
Stuart Russell, Professor of Computer Science and Smith-Zadeh Professor of Engineering, University of California, Berkeley, and Honorary Fellow of Wadham College, Oxford
Sept. 11, 2021 · 5:45 p.m.
Model subjectivity at the heart of consciousness to make robots more human
David Rudrauf, Associate professor at the University of Geneva, Director of the laboratory of the multimodal modeling of Emotion and Feeling
Sept. 11, 2021 · 6 p.m.
Round table
Panel
Sept. 11, 2021 · 6:15 p.m.
