Transcriptions

Note: this content has been automatically generated.
00:00:00
I'm going to present our work on generative adversarial networks (GANs), and in particular
00:00:07
a method, a different way of training them, that we propose.
00:00:13
Before going on, let me very briefly go over the basics;
00:00:18
I won't say much about them, just enough to set things up.
00:00:22
So basically, in a GAN our goal is to train a generative model.
00:00:28
We have two networks: one of them is the discriminator,
00:00:31
and the discriminator is trained on samples from the generator
00:00:37
and on real data samples from the real data distribution that we want to learn.
00:00:43
So basically we have implicit binary labels: one is assigned to samples coming from
00:00:47
the real data distribution, and zero to samples coming from the generator.
00:00:52
Then we train the generator to generate samples that the discriminator thinks look real; so
00:00:58
basically it is trained to fool the discriminator into assigning its outputs the "real" label.
00:01:07
What is really interesting is that the generator, in a GAN,
00:01:11
is a mapping from a latent space to the real data space.
00:01:16
The latent space is just some distribution from which we can easily sample,
00:01:22
so we can, for instance, take it to be a Gaussian.
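As a minimal sketch of that mapping (the generator below is a trivial, hypothetical stand-in for a trained network):

```python
import random

random.seed(0)  # for reproducibility

def generator(z):
    # Hypothetical stand-in for a trained generator network: any
    # deterministic mapping from the latent space to the data space.
    return 2.0 * z + 1.0

# The latent distribution is chosen to be easy to sample from,
# e.g. a standard Gaussian.
z = random.gauss(0.0, 1.0)
x = generator(z)  # a "generated sample" in data space
```

In a real GAN, `generator` would be a neural network whose parameters are learned during training.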
00:01:27
The training is formulated as a two-player minimax game:
00:01:33
we have two players; one of them is trying to
00:01:37
maximise the value function, and the other one is trying to minimise that same function. So basically
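In the standard GAN formulation (Goodfellow et al., 2014), the value function of this minimax game is:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}}\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z}\left[\log\bigl(1 - D(G(z))\bigr)\right]
```

The discriminator $D$ maximises $V$, while the generator $G$ minimises it.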
00:01:43
In theory, if the training reaches the optimum,
00:01:48
the model distribution is going to recover the real data distribution.
00:01:53
Right, but you can interrupt me if necessary. This is just to illustrate the training loop:
00:02:00
basically the discriminator gets examples from the real data distribution and, with the labels,
00:02:07
examples from the generator; the generator, in turn, gets gradients from the discriminator. That is how the generative model is trained.
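A minimal sketch of the implicit labelling and the discriminator's loss, assuming the usual binary cross-entropy objective (the discriminator outputs below are made-up illustrative values):

```python
import math

def bce(labels, probs):
    # Binary cross-entropy: the discriminator's usual training loss.
    return -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
                for y, p in zip(labels, probs)) / len(labels)

# Implicit labels: 1 for samples from the real data distribution,
# 0 for samples produced by the generator.
labels = [1, 1, 1, 1, 0, 0, 0, 0]

# Hypothetical discriminator outputs D(x) in (0, 1) for those samples:
# close to 1 on real data, close to 0 on generated data.
d_out = [0.9, 0.8, 0.95, 0.85, 0.1, 0.2, 0.05, 0.15]

loss = bce(labels, d_out)  # low, since D separates the two well
```

The generator's update then pushes its samples toward outputs that the discriminator scores close to 1.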
00:02:15
Okay, so this is what you get when you generate images; you have probably seen many of these generated digits, so I'm not going
00:02:23
to dwell on them. I just want to illustrate the kind of task we are focusing on. It is pretty clear,
00:02:29
from the number of papers alone, that interest in GANs has been growing exponentially.
00:02:35
So, although it is a very popular topic, and GANs are useful
00:02:40
in many applications, there are still so many problems that it isn't
00:02:45
trivial to train them when you really want to use them.
00:02:48
So this is what we would like to see happening, but in practice it is a lot more complicated. Basically,
00:02:55
we have the parameters of the discriminator and of the generator; say one
00:03:00
parameter for each network in this illustration. What we want is that,
00:03:04
when we do gradient updates, shown by this line, we eventually find the saddle point
00:03:11
which is going to give us the optimum, the optimal solution. In practice, though,
00:03:15
this never happens, because training relies on noisy alternating updates,
00:03:20
and with neural networks everything changes. There are so many problems; one
00:03:26
of them, maybe the one most often mentioned, is the one I will just show
00:03:32
you here: mode collapse. These are examples of mode collapse. So basically,
00:03:40
there have been so many variants of the original GAN algorithm, but it is very difficult to
00:03:47
say that any one of them outperforms all the rest, although there has been some comparison work,
00:03:53
because they are all quite sensitive to hyperparameters, and depending on the setting you get different performances.
00:04:00
So what we propose is a somewhat different methodology for training GANs.
00:04:08
The general idea is that you train a number of adversarial pairs rather than a single one,
00:04:13
and this can be applied to any variant of
00:04:16
GAN that already exists.
00:04:21
Okay, so it is motivated by some
00:04:24
empirical observations. Basically, if we
00:04:28
run multiple instantiations of the same adversarial pair,
00:04:35
we will see differences in the convergence of the different runs. I am not sure
00:04:41
you can see it from there, but if we rerun an experiment where we just
00:04:48
train one pair of adversaries regularly,
00:04:53
and we train another one, but this second
00:04:56
one is trained with an ensemble of
00:05:00
such adversarial pairs, and we ask each one to give us the generator's output
00:05:07
in some region of interest, this is what we get: here the output
00:05:10
for the regular generator, and here the output for the ensemble.
00:05:15
The crucial point here is that when we train regularly there is mode
00:05:21
collapse in some runs, and with the way that we train there is no mode collapse.
00:05:26
The probability that a failure such as mode collapse occurs in all of them
00:05:31
goes down exponentially as the number of independent adversarial pairs
00:05:35
grows, so we can exploit this.
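The exponential argument is easy to check numerically: if each independently trained pair collapses with some probability p (the value below is purely illustrative), the chance that all N pairs collapse is p^N.

```python
# Assumed, illustrative per-pair probability of mode collapse.
p = 0.3

# Probability that *all* N independently trained pairs collapse:
# p ** N, which shrinks exponentially as N grows.
all_collapse = {n: p ** n for n in (1, 2, 4, 8)}
```

With these numbers, a single pair collapses 30% of the time, but all eight of eight independent pairs collapse in well under 0.01% of runs.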
00:05:40
So this is the structure of the algorithm. Basically it keeps the shape of the original algorithm in the end,
00:05:46
but there is one main pair of networks, which is very convenient for practical
00:05:51
applications: although you use many other, smaller pairs during training, in the end you deploy just one pair.
00:05:57
But here is our ensemble. Basically, the global discriminator is trained to discriminate all the
00:06:02
generated examples, and the global generator is trying to fool all the discriminators
00:06:07
in the ensemble; each local pair is tuned to its own view of the data.
00:06:12
Our main observation is that when
00:06:16
no single network sees all of the samples in the same way, we don't get
00:06:20
mode collapse as easily. Okay, so basically this is
00:06:24
the essence of the training, and in the end we can generate with just one pair.
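The training structure described above might be sketched as follows. This is a structural sketch only: the exact coupling of the global pair to the local ones is an assumption based on the description in the talk, and `train_step` is a hypothetical placeholder that just counts updates instead of performing a gradient step.

```python
# Structural sketch of ensemble adversarial training, assuming the
# scheme described in the talk: N local generator/discriminator pairs
# train as usual, while one global generator trains against every
# local discriminator and one global discriminator against every
# local generator.

class Net:
    def __init__(self, name):
        self.name = name
        self.updates = 0  # stand-in for learned parameters

def train_step(generator, discriminator):
    # Placeholder for one adversarial update of the (G, D) pair.
    generator.updates += 1
    discriminator.updates += 1

def train_ensemble(n_pairs, steps):
    local_pairs = [(Net(f"G{i}"), Net(f"D{i}")) for i in range(n_pairs)]
    g_global, d_global = Net("G*"), Net("D*")
    for _ in range(steps):
        for g, d in local_pairs:
            train_step(g, d)          # local pairs train regularly
            train_step(g_global, d)   # global G must fool every local D
            train_step(g, d_global)   # global D must spot every local G
    return g_global, d_global, local_pairs

g_global, d_global, pairs = train_ensemble(n_pairs=4, steps=10)
```

Only the global pair (`G*`, `D*`) needs to be kept after training; the local pairs exist to diversify the training signal.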
00:06:28
And, what we were surprised by, it not only improves mode coverage,
00:06:34
but it also seems to converge faster. Here you can see samples from
00:06:39
the experiments: this is the real data distribution,
00:06:44
and these are samples taken from the different local generators,
00:06:50
denoted in different colours, so that you can see that the global generator
00:06:54
converges in a different way than the local ones.
00:06:58
And as we look at the next iterations,
00:07:02
we see that the trajectory of convergence of the global generator is different from the local ones,
00:07:08
hence the faster convergence, which is really interesting. Okay.
00:07:14
Here are some metrics comparing different variants of GANs trained this way.
00:07:20
The dark brown bars above the other bars, for any variant, mean that the performance is better.
00:07:28
I also want to show that it helps with mode collapse. What we see
00:07:35
in our experiments is also happening in the samples. What is very interesting is that
00:07:42
artifacts are also reduced; I am showing just a few examples here.
00:07:52
So at the top you see the big global generator, and these are the outputs of the local ones.
00:07:59
Basically they all look quite similar; I am not claiming a perfect resemblance,
00:08:05
but you can definitely see that they are alike quite a lot.
00:08:10
In the blue square is the global generator that we train, and the other ones are regular GANs.
00:08:18
Basically, you can do this for multiple datasets,
00:08:25
and you see that sometimes one of the local pairs is going
00:08:30
to completely fail to converge to the target, while we always get convergence with the global one. And we also
00:08:39
want to
00:08:41
show that this is not just in some parts of the experiments:
00:08:45
you can see that the convergence is a lot more stable overall.
00:08:48
So basically this is what we get, and this is our generator after training.
00:08:53
This is just to equalise the computation across methods, although
00:08:57
there are further benefits beyond this. Thank you.
