
Transcriptions

Note: this content has been automatically generated.
00:00:00
Okay, thank you. Ah, now, so, I've been asked to speak about, uh,
00:00:06
I've been asked to speak about bias in artificial intelligence, which
00:00:11
is a slight shame, because I would really love to talk about all the
00:00:14
advances in artificial intelligence and robots we've already been seeing, so
00:00:18
maybe that will come up a little bit, but, uh, let me get
00:00:20
on. This is, I think, why... oh, somebody's playing with my slides, okay.
00:00:25
Um, well, this is, this is, ah, why, who am I to say,
00:00:31
this is, I think, why I get invited to talk about bias, and
00:00:35
why I was made a professor of gender studies [inaudible] this
00:00:38
year, and it's probably why I'm the person asked to speak on
00:00:42
bias, and it's because I published a paper that's actually about semantics.
00:00:46
So I don't know how much you people know about the Implicit Association
00:00:50
Test, but it's something that shows sexism, racism, uh, at
00:00:53
an implicit level: not necessarily how you would behave if you think about it, but how you behave when you're not thinking.
00:00:59
And what we showed is that you can get exactly the same biases
00:01:04
if you, uh, use the very standard techniques to derive the semantics of
00:01:08
terms by reading text on the internet. No, not, uh,
00:01:12
hate-group text, just ordinary text on the internet, or indeed Google News; we did it with two different corpora,
00:01:18
and we did it with two different representations, and I think what was really interesting about this
00:01:23
is that not only did we replicate every single paper we found that had been
00:01:28
done with the Implicit Association Test that used words and that sampled, like, an ordinary,
00:01:33
just the general population of the States, but also the same, the same
00:01:40
representations, like, had these terrible racist and sexist, uh, associations in them.
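The kind of association test being described can be sketched in a few lines of code. This is a minimal illustration in the spirit of a word-embedding association test, not the paper's actual code; the tiny random vectors below are placeholders for real pretrained embeddings such as GloVe or word2vec, and the word sets are just examples.

```python
# Minimal sketch of a word-embedding association test (WEAT-style).
# The random vectors are placeholders; a real study loads pretrained embeddings.
import numpy as np

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B, vec):
    """Mean similarity of word w to attribute set A minus attribute set B."""
    return (np.mean([cosine(vec[w], vec[a]) for a in A])
            - np.mean([cosine(vec[w], vec[b]) for b in B]))

def effect_size(X, Y, A, B, vec):
    """How much more target set X (vs. Y) associates with attributes A (vs. B)."""
    x_assoc = [association(x, A, B, vec) for x in X]
    y_assoc = [association(y, A, B, vec) for y in Y]
    pooled = np.std(x_assoc + y_assoc, ddof=1)
    return (np.mean(x_assoc) - np.mean(y_assoc)) / pooled

# Toy example: female vs. male names against family vs. career words.
rng = np.random.default_rng(0)
vocab = ["amy", "lisa", "john", "paul", "home", "family", "career", "salary"]
vec = {w: rng.normal(size=50) for w in vocab}  # placeholder vectors

print(effect_size(["amy", "lisa"], ["john", "paul"],
                  ["home", "family"], ["career", "salary"], vec))
```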
00:01:44
And we used it, just looking at one, and it's over on the right.
00:01:49
The y-axis there is this thing that tells us things
00:01:52
like, uh, so this, this is the implicit bias
00:01:56
that tells us, uh, that, uh, it's more likely that a woman's name is
00:01:59
associated with domestic stuff and a male's name is associated with career stuff.
00:02:05
Um, that's the y-axis; the x-axis is the
00:02:08
actual proportion of women who hold the jobs.
00:02:13
And then we used this, this biased-semantics thing to look
00:02:16
at the names of jobs that were in the U.S. labour statistics.
00:02:20
And notice that we had to do some kind of weird hacks to do that, because
00:02:24
this technique only works for single words, and a lot of jobs have two words,
00:02:28
uh, in their title; and nevertheless we got a ninety percent correlation. Ninety percent.
00:02:33
So this tells us a lot of really interesting things about bias, um, because it tells us that the implicit
00:02:39
biases are reflected in a real-world, uh, outcome, at least for where women are employed.
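The analysis being described here (an embedding-derived gender-association score per job title, correlated against the actual share of women in that job from labour statistics) can be sketched as follows. All the job names and numbers below are invented placeholders, not the study's data.

```python
# Hypothetical sketch of the occupation analysis: correlate an embedding-derived
# gender-association score for each job title with the share of women holding
# that job according to labour statistics. All values are placeholders.
import numpy as np

jobs = ["nurse", "librarian", "programmer", "engineer", "carpenter"]
embedding_bias = np.array([0.8, 0.6, -0.4, -0.5, -0.7])    # female-vs-male association (made up)
share_women    = np.array([0.90, 0.80, 0.20, 0.15, 0.02])  # proportion of women in the job (made up)

r = np.corrcoef(embedding_bias, share_women)[0, 1]
print(f"correlation between embedding bias and employment share: {r:.2f}")
```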
00:02:46
Uh, so that was not something we expected; that was quite surprising. So, um, oh,
00:02:51
yeah, okay, right. So I have a different definition of artificial
00:02:56
intelligence from the one you just heard. I tend
00:02:58
to say that intelligence is doing the right thing at the right time, and it doesn't matter whether it's an artefact or not.
00:03:04
Okay, but doing the right thing at the right time relies on computation. As we just
00:03:08
heard, a lot of the people that are afraid of artificial intelligence taking over the world,
00:03:13
uh, don't realise that computation is a physical process:
00:03:17
it takes time, it takes space, it takes energy.
00:03:20
So there is no magic algorithm that's going to, uh, solve all the problems.
00:03:25
So, yes, one way to think about this is that doing the right thing at the right time requires search: how do you
00:03:30
find, of all the things you could do, how do you think of the best thing to do? That's searching through a lot of options.
00:03:37
And if you think about the cost (oh, I don't know why they don't use my slides; never mind, forget that one slide),
00:03:43
the cost of search is the number of options raised to the number of actions. Okay, so for example,
00:03:49
um, if you only have, like, a robot that can do a hundred things, and it has a
00:03:55
two-step plan, say turn left and then turn right or something, right, that's already ten thousand possible plans.
00:04:02
Right, because if you're thinking, well, what am I going to do next, I'll
00:04:06
try the first thing I could do, see if it's okay, and then try something else,
00:04:09
right, that's ten thousand possible things that you have to check to see which is the best one.
00:04:14
And then, ah, yeah, indeed, this is completely broken, uh, okay, so
00:04:18
I'll just go on without the actual slide, then.
00:04:22
Yeah, I sent, I sent, uh, let's see if I can find this,
00:04:27
uh, maybe I can, ah, well, yeah, that's just a little bit, um, huh, uh-huh, uh-huh.
00:04:36
Okay, I don't know what happened; I know that he, he gets it.
00:04:42
Okay, yeah, okay, um,
00:04:47
ah, huh.
00:04:49
No, it wasn't, it also wasn't, and it's not because of you, because the whole reason it wasn't
00:04:53
advancing is because this is not about them, because they haven't been loaded properly.
00:04:57
So, alright, so, uh, sorry, let me try to remember what was the
00:05:00
point I was trying to make on the next slide. Um, okay, the, the...
00:05:08
Yeah, alright, um, never mind. The point is that there are more, there's, like, there are more
00:05:14
thirty-five-move chess games in the world, uh, that is, there are more
00:05:19
thirty-five-move games of chess than there are atoms in the universe.
00:05:24
Alright, that kind of complexity won't go away instantaneously, and quantum computing isn't going to save us either.
00:05:30
So, I was talking about getting to the bottom of that, but let's forget about that. Alright, so
00:05:35
that was an effort to try to say: don't worry about that, as we heard
00:05:38
before. But, but, now to get back into the stuff that matters, and hopefully it's
00:05:41
not too messed up now the slides have been changed. Um, okay, so: basic definitions, implied
00:05:48
by that result I showed you at the beginning. So if we talk about bias,
00:05:53
when we talk in machine learning about bias, we only mean that we're finding regularities. Machine learning bias
00:05:58
is a good thing, and if you don't find regularities you can't know what to do next.
00:06:03
Right, so bias itself is not the problem; the problem is that some of these biases are based on regularities that we don't want to
00:06:09
persist, like that women are not given credit, for example. Alright, and
00:06:14
then, again, it's, it's not fitting on the slide.
00:06:18
Um, so the third, the third thing here is, uh,
00:06:23
is, if... I'm sorry, I'm just going to fix this.
00:06:32
Right, no, no, no, it's not, it's just that it was totally differently sized.
00:06:36
No, but I don't know whether you can do the stuff that I'm trying to make it sort
00:06:39
things, you... yeah, yeah, yeah, yeah, I know, someone will fix it first and then we'll...
00:06:46
the new, uh...
00:06:50
okay,
00:06:52
alright,
00:06:54
okay, sorry about this. Right, so this is the bottom line here:
00:07:00
prejudice is actually acting on the stereotypes, and that's the thing we're trying to prevent from happening. So, so this one
00:07:06
says: knowing what 'programmer' means, which includes that most are male, right,
00:07:11
that's the regularity you notice if you look at the world.
00:07:13
Um, a stereotype is, uh, uh, knowing that most programmers are male,
00:07:19
and then, um, oh, and then I won't bother to do that thing, I'll
00:07:22
just, I'll just tell you: prejudice would be deliberately hiring only male programmers.
00:07:27
So it's not a problem to know about these things; it's a problem not to know that there are things we're supposed to
00:07:33
be doing. And so this is what a lot of people worry about with artificial intelligence: that we won't pick up
00:07:38
on the explicit information, which is: don't make those decisions,
00:07:42
don't perpetuate the biases we had before.
00:07:46
Alright, so, yeah, that's one source of prejudice. Alright, uh, and again this has been
00:07:53
shrunk; it's supposed to say: at least three sources of prejudice in artificial intelligence.
00:07:57
So one of them is, um, absorbing information automatically from
00:08:02
culture, and that's the one I just showed you.
00:08:05
But there's another one that, uh, gets into The New York Times
00:08:09
quite often, which is a sort of ignorance by people, because of insufficiently diverse
00:08:14
teams. But that's not going to be the issue here, because, by and large, uh,
00:08:18
um, the third one, I think, is the most important one,
00:08:21
which is: remember that the 'A' in artificial intelligence stands for artifact; somebody writes it.
00:08:27
And so the vast majority of things we really need to worry about is
00:08:30
deliberately built-in prejudice, when someone comes to you and says,
00:08:36
'Look, I want you to redesign the welfare system so that more money goes to my constituents who have my race.'
00:08:42
Right, and that's the kind of thing for which we need
00:08:44
to think about artificial intelligence having transparency [inaudible].
00:08:49
I think I'm just going to skip ahead, because of, like, everything else. Yeah, so, the automatic thing:
00:08:54
I don't think that we can change culture. A lot of people want us to magically, as we read stuff in,
00:08:59
just create neutrality, but that's not how fairness works. Fairness
00:09:03
takes a lot of negotiation; it's very complicated. Um, oh,
00:09:08
but what we can, what we can do is architecturally say, okay, there's stuff that
00:09:13
we have learned about the world and other stuff we ought to do.
00:09:16
The, um, the second source that we just talked about is this ignorance, this lack of diversity,
00:09:23
which is, um, uh, whoops, and, okay, everybody have a look at
00:09:28
this, and so the second point is, the, um,
00:09:32
diversifying: you can have more people, but also, really important, if you find a problem, just...
