Transcriptions

Note: this content has been automatically generated.
00:00:00
Thank you so much for the invitation, it's a pleasure to be here to
00:00:03
discuss such an interesting topic — between autonomy and control, what is the future
00:00:08
of artificial intelligence — at the celebration of
00:00:12
the fiftieth anniversary of the foundation. And you may be wondering why someone from
00:00:19
disarmament research is speaking about AI. If our colleagues in
00:00:23
interpretation will allow me a quick deviation from my speaking points — I don't have any
00:00:27
slides, so that's going to be great — I just wanted to explain
00:00:32
why we, as an institute that has as its main objective
00:00:38
building a more secure and peaceful world, are looking
00:00:41
at artificial intelligence among many other emerging technologies. That's because
00:00:47
disarmament, international security and arms control have, in a way, always been technology driven.
00:00:53
Even if we go back fifty or sixty years: when states
00:00:57
recognised that certain technologies, when used in the context of conflict,
00:01:01
were causing consequences and damages that they wanted
00:01:05
to avoid in the future, they would convene and come up with
00:01:10
policies, regulations, treaties — legally or politically binding solutions —
00:01:16
to mitigate the negative consequences of certain uses of technologies.
00:01:21
I'm saying this because the speaker before me just made a very valid point on this, on regulation:
00:01:28
if we look at what states, in the context of international peace and security, are trying to do
00:01:36
in the framework of the UN when it comes to artificial intelligence, it's exactly that.
00:01:42
It might feel like it's just one new
00:01:45
iteration of a process they've gone through before, but in fact there is something
00:01:50
critically new in what states are doing: they're trying to
00:01:53
regulate the use of a technology before this technology is actually
00:01:58
fully developed, integrated into military capabilities and
00:02:04
used to effect. In the past,
00:02:08
nuclear weapons were developed, were used, everybody saw the effects were bad, and they said: okay, let's come up
00:02:14
with a regulatory framework to avoid proliferation and then
00:02:17
progressively ban or reduce them and mitigate the risks.
00:02:21
For AI, nobody really knows how it's going to be developed or where it's going to be used, so they're making
00:02:29
an effort at creating a framework that could mitigate risks —
00:02:35
a toolset of different initiatives on a technology that doesn't really
00:02:40
exist as we know it yet. So for the next fifteen minutes I really want to
00:02:45
take you on a journey with me, and the journey is trying to transfer everything
00:02:50
that you've heard — which is pretty much one hundred percent
00:02:53
applicable — from previous speakers, and to project
00:02:59
all of the challenges, all the problems that you've heard, into a context where
00:03:04
artificial intelligence might be used to make life-and-death decisions; that might be used by
00:03:10
governments to decide whether or not to go
00:03:14
to war, or which target to strike first, or
00:03:18
how many soldiers should be devoted to a military campaign.
00:03:24
But also — because here we're dealing with states, and states, through all the various governance
00:03:30
initiatives that we've heard, have recognised that there is a need to do something
00:03:36
when it comes to this technology — one of the main
00:03:38
risks related to proliferation is that this technology, which is developed
00:03:43
in our culture of sharing, of open source — everyone can code, and there are
00:03:49
plenty of places where, with very limited oversight and review mechanisms, what's been coded —
00:03:56
these algorithms and models — can be made available, can be replicated, can
00:04:02
be shared. So in a culture that makes
00:04:07
knowledge sharing and open source one of its main strengths, trying
00:04:11
to limit access to this technology by non-state actors that might have
00:04:17
malicious intentions is a very significant challenge.
00:04:22
Let me just go back to some of the points I have here, otherwise the interpreters are not going to like me very much. So:
00:04:29
we have all heard how AI is presenting unprecedented opportunities for
00:04:35
economic and social development, for well-being, in a way that
00:04:39
it would be augmenting — at least at the beginning not necessarily substituting,
00:04:44
but definitely augmenting — human capabilities to conduct a very wide range
00:04:48
of tasks. We've also heard that this technology doesn't come without
00:04:53
its limitations, without its constraints
00:04:56
related to transparency, reliability, predictability, fairness
00:05:00
and accountability, et cetera. Now, all of these problems, when transferred into
00:05:08
the peace and security context —
00:05:15
the concerns around all these issues are augmented by
00:05:18
an order of magnitude, because the legal, ethical, moral
00:05:23
and political constraints around them are exponentially higher. And,
00:05:30
importantly, in all of the very sensible
00:05:33
and promising governance architectures that we've just discussed —
00:05:39
and we're actually doing a project at UNIDIR
00:05:41
exactly on this topic — most of those governance frameworks explicitly say:
00:05:46
national security is an exception; these rules do not apply when
00:05:53
national security is concerned, whether it's data, whether it's
00:05:59
oversight on the development of algorithms, et cetera. So there is a vacuum, a regulatory vacuum,
00:06:06
that, if not filled quickly, could create
00:06:13
potential loopholes for both state and non-state actors to act in an irresponsible way. So
00:06:21
what is one of the main points of tension here? First of
00:06:25
all, what are we talking about when we say military applications of artificial intelligence?
00:06:32
Without getting into specifics, I think we can divide it into two broad categories. One
00:06:37
is artificial intelligence that is used as an enabler or as a
00:06:45
multiplier of decision support, in order to
00:06:50
give the whole military architecture better
00:06:55
command and control and decision-making power; AI can broadly serve as
00:07:00
an enabler, as a decision tool. The other category
00:07:05
is artificial intelligence as an enabler of autonomy and autonomous functions. So
00:07:12
within these two categories you can quite quickly
00:07:17
think of examples. As decision support: to optimise logistics, optimise
00:07:24
force deployment, or operation and mission planning in the military context. All of
00:07:30
that is clearly powerful, because it assumes that this technology is going to be used
00:07:36
in combination with human brainpower and human judgement, in a
00:07:42
way that is a teaming between the technology and the human being.
00:07:47
But then there is the second category, which is
00:07:52
AI as an enabler of autonomy. Now, for those of you that are —
00:07:56
if not passionate about military
00:08:00
affairs, at least following the news — you've seen, in the context of
00:08:04
the recent conflict in Ukraine, how widespread
00:08:09
the use of drones is: platforms that do not have
00:08:13
a human being in them because they're remotely piloted. We can foresee a scenario in
00:08:20
which, as the technology becomes more and more reliable in
00:08:23
the conduct of specific tasks, those tasks might switch from being
00:08:29
remote controlled to being autonomous. I'm not going to go into —
00:08:34
there are people better equipped than me to go into — the distinction between
00:08:38
very sophisticated automation and autonomy; those are
00:08:42
two different things. But let's just say the ability of
00:08:46
a military system to engage and interact
00:08:51
with the environment in which the system is deployed,
00:08:54
to make decisions based on the famous input data that it
00:08:58
receives — so garbage in, garbage out applies also in the military context.
00:09:04
When the decision to be taken is: do I engage? Is
00:09:08
this a tank that I am legally, lawfully allowed to target,
00:09:13
or is this an ambulance? Well, the margin of error there is of course
00:09:17
much more limited. And what are the concerns when it comes to
00:09:20
state-to-state relationships? The concerns are fundamentally two. One is that
00:09:26
in this kind of strategic competition among powers to be
00:09:31
first movers, if you want, in using AI capabilities, there might be a race to the bottom.
00:09:37
What do I mean by race to the bottom? I mean racing
00:09:40
to integrate artificial intelligence into military capabilities before it reaches
00:09:45
the necessary technical maturity to be fully reliable, and before the appropriate governance frameworks
00:09:51
applicable to the various domains are in place.
00:09:57
These are of course clear issues, and linked to them is the second problem.
00:10:02
While in the civilian domain most of the questions around AI
00:10:07
are about how we can make this technology applicable, and its benefits distributed
00:10:13
in an equitable way across society, the situation is slightly different
00:10:17
in the military and peace and security context —
00:10:21
a context in which states can be broadly split into two
00:10:24
categories: those that are leading the development and see themselves as potential
00:10:29
users of this technology, and those that instead see themselves as potential recipients or targets
00:10:36
of these technologies — those that might be less technologically advanced. And of course there is
00:10:42
a tension there; you can see how it is particularly
00:10:49
dangerous, because the incentives are very different. One side wants to make sure
00:10:53
that any governance framework that is designed doesn't impede innovation and progress;
00:11:00
the other side takes a very risk-averse approach, where it says:
00:11:03
we don't want to hear about this, artificial intelligence cannot find its way into
00:11:09
weapon systems, because if it does, it doesn't matter how sophisticated the technology might be —
00:11:15
it's just not ethical, it's not moral, it's against human dignity;
00:11:18
there are all sorts of considerations beyond the technical ones.
00:11:24
So this is the issue that I wanted to raise, also linked to a
00:11:30
point made by the previous speaker, about the importance of focusing on behaviour.
00:11:35
Because it's going to be difficult to control access to the technology — it remains an enabling technology, so
00:11:41
it can be repurposed given adequate technical skills. It's going to be hard to control
00:11:49
the algorithms or the models themselves; there have been some attempts,
00:11:52
but that's not necessarily the most effective way.
00:11:57
The focus should potentially be instead on behaviour and use. So:
00:12:03
what do we want to mitigate as a risk,
00:12:08
using a combination of soft controls — internationally applicable
00:12:12
standards, norms, principles and codes of conduct —
00:12:15
down to harder regulations, legally binding ones?
00:12:20
The issue that we're going to find, though, is the same as was shown earlier:
00:12:25
good principles are fine, but how do you implement them? What does it mean
00:12:31
that you want to develop AI capabilities that are fair and
00:12:36
transparent? Okay, we understand what you're aiming at, but what are we going to do about it? How are you going to
00:12:42
develop implementation guidelines, practices and procedures to make sure that
00:12:47
you live by the word that you're giving? Because it's very easy
00:12:51
to write the principles; it will be harder to actually translate those principles into practice.
00:12:59
A final word of caution — time is running out — about what
00:13:04
is happening at the UN level in the context of
00:13:07
national security. In this discussion we all understand that
00:13:11
AI has a very broad range of applications in the military domain;
00:13:16
however, the current discussions are focused on the very tip of the iceberg,
00:13:23
which is what we call lethal autonomous weapon systems. These are
00:13:29
weapon systems that are intended to
00:13:34
use force with lethal effect — so to kill or
00:13:39
destroy, be it people or infrastructure — and that have autonomous functions embedded in them.
00:13:47
Now, this discussion — I'm not going to bore you with the details —
00:13:50
has been, if you want, hosted
00:13:54
by a UN process that really deals with humanitarian
00:13:59
impact, with trying to minimise the humanitarian impact of armed conflict.
00:14:04
Therefore it's a discussion that is very legally dominated, because it's all
00:14:08
about international humanitarian law, and it doesn't leave space
00:14:13
for much broader discussions around how to build capacity, how we work with industry to make
00:14:18
sure that the standards they are developing can be leveraged also when it comes to military applications,
00:14:24
or how we leverage the market power that states
00:14:28
have through procurement to influence the way in which the technology is used.
00:14:33
So really, success in regulating military uses of AI
00:14:40
passes through the recognition that this is a very ambitious goal that cannot really be achieved
00:14:45
exclusively by governments; even though they do retain the regulatory
00:14:50
power, they cannot solve this wicked problem alone.
00:14:56
It means really trying to find a way for governments to leverage the
00:15:00
tremendous amount of work that industry —
00:15:04
either individual companies or industry groupings —
00:15:08
and the technical community more broadly have been conducting, and trying to
00:15:13
plug that work, which already exists, into the appropriate UN forum. Thank you.
00:15:20
Do we have some questions?
00:15:28
[Audience question, largely unintelligible in the transcript, about a specific class of AI technologies in the context of war.]
00:15:46
Yeah, so I think there's a lot of hype and a lot of
00:15:54
easy-to-understand concerns as to why technologies like
00:16:00
these might find their way into that field.
00:16:06
I think we're still very far
00:16:09
away from that, and probably we will never reach that point, because
00:16:14
military forces are not interested in deploying something
00:16:19
when they don't know how it works, don't know what kind of effect it is going
00:16:22
to achieve, and cannot control it. So it's going to be very interesting to see
00:16:28
what is going to be the right balance — where the final decision lies as to
00:16:33
how far they want to push autonomy while retaining an adequate form of control.
00:16:40
Thank you. Another question? — Yeah, sure.
00:16:46
[Audience question, largely unintelligible in the transcript; the gist: with increasingly capable general-purpose AI systems feeding into military decision making, what worries you the most about where AI models will be in the next ten years?]
00:17:27
So to me, from where I'm sitting — considering that the UN,
00:17:31
whether you like it or not, remains the only platform where the big powers
00:17:35
are present and have to talk and engage — what worries me the most is that
00:17:41
right now there is a very, very narrowly
00:17:44
focused discussion, and everything that isn't about
00:17:49
autonomous functions — pulling, pressing the button,
00:17:53
pulling the trigger — is completely ignored, deemed not relevant.
00:17:57
Now, I would argue: if the decision to pull the trigger
00:18:01
is made by a human but is based on information that is generated by an
00:18:05
artificial intelligence or a very complex system, is that not also a problem? But for some reason,
00:18:11
because you can put humans in jail and hold them accountable in front of a court,
00:18:15
it's less of a problem than if it's a machine that actually makes
00:18:18
the last decision. So what worries me the most is the
00:18:24
tunnel vision, if you want,
00:18:27
of member states focusing exclusively on this very specific case.
00:18:34
[Moderator: unintelligible.]
00:18:39
Thank you very much for this overview of the issue from the UN perspective. I want to ask you one thing: the concept of multilateralism has been challenged and weakened in various respects. So do you still see the potential for the UN — or for a new body — to be coherent and powerful enough in enforcing the governance of AI?
00:19:19
That's — that's a very good question, probably perfect for a PhD thesis.
00:19:24
It is true — I'm not denying it — that the whole
00:19:28
concept of multilateralism has been challenged quite significantly in the last
00:19:33
ten years, and probably even more in the last few months,
00:19:37
so there are clear indications of that. And I —
00:19:41
I'm not naive to the point of saying that the
00:19:44
UN is going to be the forum in which we can foresee
00:19:48
an international AI treaty or a global
00:19:51
convention; this is not really the right historical moment
00:19:56
to achieve that. However, the UN remains the only platform where you have one hundred and
00:20:03
ninety-three member states. So maybe the question is that there is also this kind of cultural
00:20:10
aversion to considering anything less than a treaty as not really important. There is a lot of work
00:20:16
that can be done through softer tools —
00:20:20
whether it's confidence-building measures, capacity building, norms,
00:20:24
standards that may not be legally binding but
00:20:28
are politically binding commitments — that the UN can really deliver. But
00:20:32
in order to do that you need to use the proper tools that the UN has available, and the current
00:20:39
forum that they're using — the Convention on Certain Conventional Weapons —
00:20:44
was created to develop only one thing, which is a legally binding protocol.
00:20:49
So the UN offers a lot; it's just a matter of making sure that it's used to its best.


Conference Program

Welcome words
Aurélie Rosemberg; Prof. Antoine Geissbuhler, Vice-Rector of the Université de Genève
June 15, 2022 · 4 p.m.
Conference opening
Ms Fabienne Fischer, State Councillor of the Canton of Geneva – Department of Economy and Employment (DEE)
June 15, 2022 · 4:05 p.m.
Evolution of Computing and Vision
Prof. Giovanna Di Marzo Serugendo, Director of the Centre Universitaire Informatique of the Université de Genève and Full Professor at the Université de Genève
June 15, 2022 · 4:15 p.m.
Reality, Truth and Artificial Intelligence
Prof. François Fleuret, Full Professor and Director of the machine learning research group, Université de Genève; Adjunct Professor, École Polytechnique Fédérale de Lausanne; External Researcher, Idiap Research Institute; Co-founder of Neural Concept SA
June 15, 2022 · 4:30 p.m.
Governing the rise of Artificial Intelligence
Mr Nicolas Miailhe, Founder and President, The Future Society (TFS)
June 15, 2022 · 4:50 p.m.
Artificial intelligence & issues related to States, Peace and International Cooperation
Dr Giacomo Persi Paoli, Director of the Security and Technology Programme of the United Nations Institute for Disarmament Research (UNIDIR)
June 15, 2022 · 5:10 p.m.
Panel discussion
June 15, 2022 · 5:30 p.m.
Closing of the 50th birthday celebration
Jean-Pierre Rausis, President of the Fondation Dalle Molle
June 15, 2022 · 6:30 p.m.
