Transcriptions

Note: this content has been automatically generated.
00:00:00
Happy to be here today, especially since there is a lot of expertise in training models and machine learning in this audience, because that is not something I have done a lot of, and I think there are many possible interactions with what I am going to present. The general topic I am going to speak about is training images: what can we do with training images? I will explain a bit what this is. On this front page, the situation is that we have a model, a geological model: a 3D model of the subsurface. I generated this 3D model, and you can see that the model has some complexity, these channel-like features. So how can we, from a relatively small number of data points like these, generate complexity in the model beyond what is contained in the data alone?
00:01:14
I will organise the talk like this. First, I will tell you a bit about this thing called multiple-point statistics: what is it? Essentially, it is the idea of using training images to create stochastic models. Then I will present some algorithms for how we do this, which I guess is probably the most interesting part for you. And finally, the issues people face with this kind of methodology, and how we can try to address them.
00:01:42
Okay. First, to give the general setting, I will show an application. Our goal with all these training-image methods is to build stochastic models of real systems that are reasonably realistic. The application here is an aquifer with a contaminant plume. People put a landfill here back in the eighties; they put a lot of waste in it, and it leaked. This shape, more or less, is the contaminant plume migrating in the groundwater, and all these dots are wells that people drilled to estimate where the contaminant goes. So a lot of modelling and simulation went into this, and if you think of the cost, it is hundreds of millions to try to remediate the site. But unfortunately, all this work, this whole effort, failed miserably: they never managed to remediate the site.
00:02:49
Why did it never work? Because in the subsurface here there are preferential paths, connected channel-like features along which the contaminant travels, and no one could ever have identified them from the wells, because those are point measurements; yet we ended up with a system where everything is connected. So why did they fail to characterise this sort of connectivity correctly? Because the methods they used simply do not consider connectivity.
00:03:20
The problem here is that you have point measurements, and you want to interpolate in 3D to find out the properties of the subsurface. One interpolation method we all know is kriging, the geostatistics-based approach, and I guess most of you know the variogram. One shortcoming of the variogram is that it only considers pairs of points: you model the dependence between pairs of points, and the whole variogram model is a single function with just a few parameters. There are implicit assumptions when you model something with a variogram. In terms of spatial connectivity, you assume that the intermediate values are the best connected: the extreme values are less well connected than the intermediate ones.
00:04:10
As an example, if I take classical geostatistics and simulate a field with a given variogram, I get something like this. Look at the models that are generated: the green, the intermediate values, are well connected; you can find a continuous path through the whole model. But the extremes, the blue and the red, are not interconnected. That is a built-in property of this sort of model: it will underestimate connectivity.
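To make the two-point idea concrete, here is a minimal sketch (my own illustration, not the speaker's code) of an experimental semivariogram on a regularly sampled 1-D profile: half the mean squared difference between values a lag h apart.

```python
import numpy as np

def semivariogram(values, max_lag):
    """Experimental semivariogram of a regularly sampled 1-D profile:
    gamma(h) = 0.5 * mean((z(x+h) - z(x))^2) for each integer lag h."""
    z = np.asarray(values, dtype=float)
    gammas = []
    for h in range(1, max_lag + 1):
        diffs = z[h:] - z[:-h]
        gammas.append(0.5 * np.mean(diffs ** 2))
    return np.array(gammas)

# A smooth profile has a slowly rising variogram at short lags;
# pure noise jumps straight up to its sill.
x = np.linspace(0, 4 * np.pi, 200)
smooth = np.sin(x)
rng = np.random.default_rng(0)
noise = rng.normal(size=200)
print(semivariogram(smooth, 5))
print(semivariogram(noise, 5))
```

Note that the function summarises the field with a handful of numbers per lag, which is exactly the "two or three parameters" limitation discussed later in the talk.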
00:04:48
So for this site, more complex methods were used: methods based on a Gaussian framework with more complicated truncation rules into categories, and in the end we ended up with more complex models like this one, which do have some degree of connectivity. With these models we could then simulate contaminant transport in the subsurface. And this is the general application of this kind of modelling: you generate a large number of stochastic models of the subsurface, and then you obtain a PDF of predictions, for example for the contaminant.
00:05:41
[Answering a question from the audience.] Yes, at a given position you actually measure several parameters for the hydrodynamics: the porosity, the permeability. But you can consider just one of them; you may have many measurements, but the point is that the problem does not change. You have one quantity that you measure, and you want to interpolate it spatially. And to do this you have to make assumptions, because you do not know what happens in between your measurement points at certain scales.
00:06:18
So here we make some assumptions, which I am not going to detail, and we build this kind of model. These are the hydraulic properties; we put them in, and this is the resulting flow. Once we set those properties, we simulate the contaminant transport, and we were happy, because we see something going on that resembles what we know happens. Then we can compare this with the measured contaminant: you have a well somewhere, the contaminant arrives, you get different responses with different models, and you can build a PDF for the probability of the contaminant reaching this place. That is the kind of analysis. The problem is that all of this underestimates the contamination by one order of magnitude compared with what is really observed. That is not great, but it is still okay, because if we compare with a homogeneous medium, something that does not have all this connectivity, we would be off by a factor of a hundred. So anyway, we were fairly happy.
00:07:26
[A brief exchange with the audience follows, about whether the simulated fields have the right correlation.] The general reason for this underestimation is the variogram itself. Here I am showing two different fields that have the same variogram. A variogram typically has two or three parameters in total, so what diversity of spatial structures can you possibly describe with two or three parameters? These two fields are an example: their variograms, the two-point correlation, are almost identical however you look at them, but the fields themselves are obviously different. So these two-point statistics are blind to some properties, and we need something different. And this is not some pathological case we constructed: there are many situations in nature where things are more complex than what correlation functions can capture.
00:09:04
So, about multiple-point statistics. Here is just a sample of the kinds of complex structures nature produces: braided rivers, channel networks, surface processes, features you catch in remote sensing images; you see this everywhere. The main approach that was proposed, and this is where we come to multiple-point geostatistics, is to use training images. Instead of a variogram, which is a simple analytical two-point function, we have an image.
00:09:40
The setting is this: I have an image, and I use this training image as a model of what my subsurface, or whatever process I am modelling, should look like; in the next examples the model has to look like this. So the training image gives the structure, and then I have the data that I measure, in this case at the well locations, and I want a model that carries this structure and at the same time honours the conditioning data. I will not say yet how I do it, but the result is that we have two realisations here which both fit those data: wherever we have a black well, we have a black square, and the structure is fully respected. If I generate a large number of those models, I can extract, for example, the probability of having a white channel or a black background feature at every point; I can compute statistics, make predictions, or input these models into flow simulators or whatever other transfer function.
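The per-pixel channel probability described above is just an average over the ensemble of realisations. A minimal sketch, with random binary arrays standing in for actual multiple-point realisations (the variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for an ensemble of binary channel/no-channel realisations:
# in practice these would come from a multiple-point simulation code.
realizations = rng.integers(0, 2, size=(100, 20, 20))  # 100 models on a 20x20 grid

# Pixel-wise probability of the "channel" facies (value 1) across the ensemble.
prob_channel = realizations.mean(axis=0)

print(prob_channel.shape)
```

Conditioning data are honoured by every realisation, so at a data location this probability is exactly 0 or 1.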
00:10:50
So how is this done? There is a whole list of methods, and there are several ways to categorise them. Some of them really come from computer graphics, because if you look at it, this is very much an exemplar-based texture synthesis problem that we additionally want to constrain to data; in computer science terms that is how the problem would be posed, while in probabilistic terms it would be stated differently. The algorithms can be divided into pixel-based and patch-based methods: pixel-based methods generate one pixel at a time, while patch-based methods extract pieces of the training image and paste whole patches.
00:11:34
Another distinction is this: there is a type of algorithm that works by learning, by building a sort of database of patterns. You take a training image and decompose it into patterns, and then you compute statistics on those patterns. And there is another kind that computes distances between patterns: these do not store anything, they do not make a database of patterns; they just look things up in the training image and do the construction on the fly.
00:12:04
The algorithms based on classification of patterns work essentially like this. We have a very simple, very small training image with simple patterns in it. If I look at patterns of, say, four pixels, I can look at all of them, and each of them occurs with a certain frequency. I can compile these frequencies and store them, for example, in a tree, which is quite an efficient way of doing it. You number the pixels of the pattern, and then, depending on whether the first one, in this case the one on top, is black or white, you put the pattern in one branch of the tree; then you keep going: depending on whether the second one is black or white, you descend into the next branches. If you have more than two categories, you have that many branches at each level. That is quite an efficient way of storing things. Other people have proposed lists instead: you just make one big list of all the patterns, with counters for the frequencies. Then you can use this database to simulate: you go to one location, look at which pixels are already present around it in the image being generated, and draw the central value from the corresponding frequencies. Drawing outcome after outcome, you generate the image pixel by pixel.
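The "big list of all the patterns with counters" can be sketched like this (a toy version of my own; the tree storage described above is only a more efficient encoding of the same database). Here a pattern is the 3x3 neighbourhood around a pixel, and the database maps the ring of 8 neighbours to the frequencies of the central value:

```python
import numpy as np
from collections import Counter, defaultdict

def build_pattern_database(ti):
    """Scan a 2-D categorical training image and count, for every
    3x3 ring of neighbours, how often each central value occurs."""
    db = defaultdict(Counter)
    for i in range(1, ti.shape[0] - 1):
        for j in range(1, ti.shape[1] - 1):
            block = ti[i-1:i+2, j-1:j+2]
            ring = tuple(np.delete(block.flatten(), 4))  # the 8 neighbours
            db[ring][block[1, 1]] += 1
    return db

def draw_center(db, ring, rng):
    """Draw a central value from the frequencies stored for this ring."""
    counts = db[tuple(ring)]
    values = list(counts)
    probs = np.array([counts[v] for v in values], dtype=float)
    return rng.choice(values, p=probs / probs.sum())

# Toy training image: vertical black/white stripes.
ti = np.tile([0, 1], (8, 4))
db = build_pattern_database(ti)
rng = np.random.default_rng(0)
ring = np.delete(ti[0:3, 0:3].flatten(), 4)
print(draw_center(db, ring, rng))  # → 1, the centre of a white stripe
```

A sequential simulation would repeat `draw_center` at every grid node, using the already-simulated neighbours as the conditioning ring.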
00:13:36
A little bit more now on the algorithms based on distances between patterns; there is also a bunch of them. Essentially, to simulate a pixel, you look at its neighbourhood and you look that neighbourhood up in your training image. For example, I am looking for this pattern: I compute a convolution of my training image and identify that there are one, two, three, four, five places where this pattern matches exactly. Then I take one of the five at random and put its value into the model I want to generate. There are many ways to do this matching again, and a naive full convolution is not efficient.
00:14:22
So I am going to show what I mean by an interrupted convolution. The idea is based on something that Shannon, a name you probably know, proposed first, back in 1948, before computers were available: how can you do a convolution, a search for patterns in a big database, when you do not have much computing power? They barely had computers at the time. Maybe some of you know the principle: the idea was to generate a random sequence imitating the English language.
00:14:56
Well, how can we do this? The first idea was to draw letters completely at random, but then you get a gibberish that does not have much in common with language. Next, you can draw letters based on their frequency in English: this is what we would call a zeroth-order or first-order approximation, and from there you can move to higher orders of statistics, from the marginal probability of letters in English to the second-order probabilities, and so on.
00:15:31
I will show an example of this kind of letter-by-letter generation; this is what I am going to demonstrate with a training text, which is not the same as Shannon's: mine is in French, a novel from where we live, say two hundred and fifty pages, and this is just the first section of it. So let's generate text. The first strategy is to look at the marginal probability of letters in French. We have these probabilities, and we know, for instance, that E is the most frequent letter, so we will draw it more frequently. We get this, in which of course the spaces fall at random locations: there is no correlation between successive letters.
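Drawing letters from their marginal frequencies, the first strategy above, fits in a few lines (the toy corpus is my own stand-in for the French novel):

```python
from collections import Counter
import random

corpus = "the quick brown fox jumps over the lazy dog " * 50  # stand-in training text

# Marginal letter frequencies of the training text.
freqs = Counter(corpus)
letters = list(freqs)
weights = [freqs[c] for c in letters]

random.seed(1)
sample = "".join(random.choices(letters, weights=weights, k=60))
print(sample)  # letter frequencies match the corpus, but there is no word structure
```

Frequent letters and spaces appear at realistic rates, but, exactly as in the talk, the spaces land at random positions because successive draws are independent.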
00:16:15
What we can do, and what Shannon did at the time with tables of random numbers and tables of letter frequencies, is look at the joint probability of two successive letters. You make a big probability table, the probability of AA, AB, AC and so on, over all the possible pairs. Every time you want to generate the next letter, you look at the previous one and use the conditional probability, for example the probability of having an N knowing that the previous letter was an A. We can simulate this, and we get something slightly better, but still not fine. But we can increase the order; Shannon could not really do this at the time, without computers, but we can today. At order four, I know the three previous letters and I am generating the fourth one, and it starts to get interesting. So can we go higher, say order eight? That is of course not possible with a table, because if I have, say, two hundred and fifty-six characters, I would need a table of size two hundred and fifty-six to the power eight, and that does not work.
00:17:20
the power eight doesn't work. So sound
00:17:24
had the same problem at all the three
00:17:26
probably what they need in in in this
00:17:30
things article a mathematical theory of
00:17:32
communication is a tiny paragraph that
00:17:35
explains what it it is not all the the
00:17:38
purpose of the paper for him is just
00:17:39
something says in passing. So what it
00:17:42
does very simply takes was to generate
00:17:44
random initially takes a book that's
00:17:47
going to be straining book. And it
00:17:49
chooses the first letter when the
00:17:51
Minnesota realistic. E and it just a
00:17:54
plausible kinda random page is starts
00:17:56
with you know the first he encounters
00:18:00
takes the next letter in that's going
00:18:02
to be sent to see what is going to be
00:18:05
if not if the training at the opposite
00:18:08
new page again with the early first if
00:18:11
this say that there and so so that's
00:18:13
something that is cold night sleeping
00:18:16
glass actually aside and then so the
00:18:18
idea is that we just going to say about
00:18:21
the book. And we don't have any more
00:18:23
probably distribution we don't more the
00:18:25
PDF remote a viable with the model any
00:18:27
kind of model. We just a sample from a
00:18:30
training set. So you'll example. We
00:18:33
For example, we start with an E here; that is our first letter, and this is our training text. I start reading at a random position, and I read until I find an E: the next letter is an R, so I put the R into my simulation. Then I look for the R, every time starting the search from a new random location, and I keep reading until I find the next R, take the letter after it, and so on until I am done. You can imagine that this is quite efficient computationally, because every time you read only a small piece of text; you do not have to scan the whole book to find all the matches and compute frequencies, you just keep reading, and the first match you find, you use. So if I want to simulate a lot of text this way, it works with a very low memory footprint and it is very fast: I do not have to store any probabilities, I just have to store my book, and generation goes fast.
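Shannon's page-flipping procedure translates almost literally into code; a sketch of my own, where a random start index in a flat string stands in for "opening a random page":

```python
import random

def shannon_sample(book, first, length, rng):
    """Generate text by repeatedly jumping to a random spot in the book,
    reading until the current letter is found, and copying the letter
    that follows it (Shannon's 1948 pencil-and-paper scheme)."""
    out = first
    current = first
    n = len(book)
    while len(out) < length:
        start = rng.randrange(n)
        # Read forward (wrapping around) until we encounter `current`.
        i = start
        while book[i % n] != current:
            i += 1
        nxt = book[(i + 1) % n]
        out += nxt
        current = nxt
    return out

book = "it was the best of times it was the worst of times " * 20
rng = random.Random(0)
print(shannon_sample(book, "t", 40, rng))
```

Each step touches only a short stretch of the book, which is the "interrupted convolution" idea: no frequency tables are ever built, yet the output follows the book's bigram statistics.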
00:19:39
Now I want to go back to the spatial models I had before. How do we do exactly the same thing in a spatial context? Instead of having previous letters, a succession of letters, that we search for, the conditioning event can be a spatial pattern. So here, in my simulation, I want to know what is happening at this point: I want to generate the value here, conditioned on the three values around it, and I go and read at random in the training image. I start at a random location; I look in my image, and it does not match, because here there is green where it should not be. I jump somewhere else: still no match. Somewhere else again, and then it matches. So I see what value I have in the middle of the matching pattern, it is red, and I put red into my simulation. Then I go on to another location, and I can do this for all the locations in the image; in the end I get a picture that resembles the training image somehow, using some sort of higher-order statistics.
00:20:55
This can be pushed a bit further. When I said the pattern 'does not match', I was actually computing a distance: it is not a simple binary choice between match and no match, because a pattern can match a little bit. So we can replace this exact-match rule by a threshold: you accept the central value when the distance between the two patterns is under a certain threshold. And if you do this, you do not have to use categorical variables; you can handle continuous variables too. You just take the L1 or L2 distance between patterns, put a threshold on it, and you can simulate continuous fields, and you can of course also do it for 3D models; the whole thing works with neighbourhoods in any dimension.
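The scheme just described, scanning the training image from a random start and accepting the first pattern whose distance falls below a threshold, is the core of what the literature calls direct sampling. A minimal 2-D sketch (parameter choices and helper names are mine, not the speaker's implementation):

```python
import numpy as np

def direct_sample(ti, neighbours, offsets, threshold, rng, max_scan=10000):
    """Scan the training image from a random start and return the centre
    value of the first location whose neighbourhood pattern is within
    `threshold` (mean absolute difference) of `neighbours`.
    `offsets` are the (di, dj) positions of the neighbours around the centre."""
    h, w = ti.shape
    margin = max(max(abs(di), abs(dj)) for di, dj in offsets)
    cells = [(i, j) for i in range(margin, h - margin)
                    for j in range(margin, w - margin)]
    start = rng.integers(len(cells))
    best_val, best_dist = None, np.inf
    for step in range(min(max_scan, len(cells))):
        i, j = cells[(start + step) % len(cells)]
        pattern = np.array([ti[i + di, j + dj] for di, dj in offsets], float)
        dist = np.mean(np.abs(pattern - neighbours))
        if dist <= threshold:
            return ti[i, j]              # first acceptable match wins
        if dist < best_dist:             # keep the best seen as a fallback
            best_val, best_dist = ti[i, j], dist
    return best_val

# Toy training image of vertical stripes; condition on up/left/right neighbours.
ti = np.tile([0, 1], (10, 5))
offsets = [(-1, 0), (0, -1), (0, 1)]
rng = np.random.default_rng(0)
neighbours = np.array([1.0, 0.0, 0.0])   # the centre of a white stripe
print(direct_sample(ti, neighbours, offsets, threshold=0.0, rng=rng))  # → 1
```

Because the distance is a plain L1 mean, the same routine handles categorical and continuous variables; only the threshold changes.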
00:21:43
An example is this: we have a training image, and we can generate a random model based on it that takes over some of its properties, although you see that it is not reproduced exactly; and then we can compute statistics on it. Let us go back to the connectivity question for transport problems.
00:22:00
So here is an example. It is completely synthetic: it is not a real aquifer, it is just some kind of random remote sensing image. Let us say that this is an aquifer with water coming through it. If I use classical geostatistics, a variogram with a normal-score transformation and so on, I can get this model. It has the same variogram as the reference and the same properties: the histogram, the skewed distribution of values, is essentially a copy. I can also use the reference as a training image for multiple-point simulation, and the result is again comparable in terms of histogram, and the two variograms are practically identical. So if I looked only at these metrics, the images should be equivalent. Now let us take the same flow problem I had before for the contaminant and flow it through these fields. And there is a clear difference: here you have connectivity, the white, the highly contaminated high concentration, goes through; and here it does not, because the extreme values are not connected. And if I do the same as in my aquifer case, if I have a well somewhere and look at how much contamination I am getting out, in one case I get a slow increase in contamination, because everything is nice and smooth, and in the other case I get these sharp connected features and the contamination arrives very quickly at high concentration. So if you ask whether a well will be weakly or heavily contaminated, this is what it comes down to in this case.
00:23:49
Okay. So far I spoke about a batch of these algorithms; now I want to present an algorithm that does sort of the same thing, but more efficiently. This is where geostatistics meets computer graphics: there are some really nice computer graphics methods that we can use for this kind of application, and one of them is image quilting, which is quite widely used for texture generation.
00:24:13
This is a slide that I took from computer graphics people. The problem there is: they have an exemplar, which is exactly a training image, and they want to generate from it a texture that is visually satisfying. The first way of doing it is simple tiling: if you tile pieces of your training image, you get something uninteresting, with these hard block boundaries, and they do not match across the joins. What you can do instead is use overlapping blocks, and choose the blocks such that the overlapping parts are as similar as possible; you can do that, but you still see artefacts along the seams. The principle of image quilting is then to use dynamic programming to cut through the overlap optimally, finding a minimum-error path. So where two blocks overlap, you cut them nicely, and then the result looks much better, and you can generate this kind of model from it.
00:25:12
How is this minimum-error cut computed? First you compute the error between the two overlapping areas, the result of overlapping them, so you get an error map. Then you compute the minimum-cost path through this error map, going roughly from one side to the other; you cut along this path, and on each side of the cut you keep the part belonging to the corresponding patch. That gives the seam with minimum artefacts.
00:25:55
So how do we use this to generate models from a training image? We start by taking a piece of the training image and putting it into the simulation grid, and we look at the overlap. We find another patch that matches well on the overlap, cut, and put it in place, and then we continue along the line. When we go on to the next row, each new patch has two overlaps to honour, one on top and one on the left, and every time we perform the cuts; and we can also condition this to data. The key ingredient used here is the dynamic programming, which I need to detail a little bit, because it is quite cute.
00:26:43
You start with this overlap between two patches and an error map: for every pixel you have an error, which could be the absolute value of the difference between the two patches at that pixel. So you have a field of errors, and you want to find a cut in this direction such that the sum of the errors along it is minimal. How can this be done efficiently? We could compute all the possible paths and take the minimum, but that would not be very efficient. Dynamic programming works like this: we start with the first row, where we do not do anything special. Then we look at the second row, and instead of the raw errors we compute, for every pixel in this row, a cumulative error: the value of the error plus the minimum of the three pixels above it in the previous row. It is a sort of cumulative minimum over the three previous error pixels. Then I continue the same thing with the next row, taking the minimum of the three previous cumulative errors plus the current one, so the minima sum up, and I keep computing these for each row until I arrive at the bottom. There I have a cumulative cost along the last row, and I can simply take the pixel with the minimum of this cost. It happens to be this one, and I know that there must be a path going from there to the top that has the minimum total cost, because I accumulated the minima. Maybe I do not yet know exactly where that path is, but that is very easy, because I just need to trace back, row by row, to find where each minimum came from, and when that is done I have found the minimum cut. This is very efficient, because you only need two passes, one down and one up, and it is extremely efficient also in 3D. This was again something from 1959, but the idea in this context is very interesting.
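The two-pass dynamic program just described, accumulate downwards then trace back upwards, is the minimum-error-boundary cut of Efros and Freeman's image quilting. A sketch of my own:

```python
import numpy as np

def min_error_cut(error):
    """Vertical minimum-cost path through a 2-D error map.
    Pass 1: cumulative cost, each cell adding the min of its (up to) three
    upper neighbours.  Pass 2: trace back from the cheapest bottom cell.
    Returns one column index per row."""
    err = np.asarray(error, dtype=float)
    rows, cols = err.shape
    cum = err.copy()
    for i in range(1, rows):
        for j in range(cols):
            lo, hi = max(j - 1, 0), min(j + 2, cols)
            cum[i, j] += cum[i - 1, lo:hi].min()
    # Trace back from the minimum of the last row.
    path = [int(np.argmin(cum[-1]))]
    for i in range(rows - 2, -1, -1):
        j = path[-1]
        lo, hi = max(j - 1, 0), min(j + 2, cols)
        path.append(lo + int(np.argmin(cum[i, lo:hi])))
    return path[::-1]

# The cut hugs the zero-cost column even though it shifts across rows.
error = np.array([[9, 0, 9, 9],
                  [9, 9, 0, 9],
                  [9, 9, 0, 9],
                  [9, 9, 9, 0]])
print(min_error_cut(error))  # → [1, 2, 2, 3]
```

The two loops make it O(rows x cols): one pass down to accumulate, one pass up to trace back, exactly as in the talk.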
00:28:54
This is a method that we modified as well, because all of this, as the graphics people use it to generate textures, is unconditional: you just need the texture that comes out. Now we want to adapt it to data. So there is an extra step: when we choose a given patch from the database, we also consider the conditioning data; this is not too hard. I have a little movie here that shows very quickly what we get. This is quite a complex geological image that we have, and you see that every time we put in a little patch, a cut happens that removes all the possible artefacts we would otherwise have; and all these points are conditioning data, constraints, and every time we choose the patches such that these constraints are honoured. Sometimes we have to keep cutting: if there are many data points within a single patch, you have to keep cutting it into smaller pieces. It gives pretty good results, and it is extremely fast.
00:30:03
So it is really fast. If I
00:30:07
compare it with the previous
00:30:09
pixel-based algorithm that I showed
00:30:10
before, these are some computation time
00:30:13
differences. For the pixel-based
00:30:17
algorithm, this is the
00:30:20
computing time for a given case, and this
00:30:22
is the number of pixels we generate in the
00:30:25
output model. So we see that we are in
00:30:28
the thousands of seconds for the images
00:30:31
I just showed. For the same cases,
00:30:34
with the patch-based method, we have to look at this scale:
00:30:36
we are more in the order of a few seconds,
00:30:39
so about a hundred times faster. That's very
00:30:40
interesting when we do big 3D
00:30:42
models that have millions of pixels.
00:30:45
Okay, so those were just a few methods.
00:30:51
But this is a big overview of all the
00:30:53
methods that exist to do this kind of
00:30:56
training-image-based simulation. I'm
00:30:58
not going to talk about all of them;
00:31:00
there are a lot of them. Some of them
00:31:02
do certain things and some don't; some
00:31:04
of them are fast, some of them are slow;
00:31:07
there is a whole range of
00:31:09
trade-offs. Now I'm going to speak more
00:31:14
about the problems, the issues we face
00:31:17
with this whole framework, because it
00:31:19
has also been criticised a lot, for
00:31:22
several reasons, and we try to address them. There are
00:31:25
to address those reasons there are
00:31:28
basically two unique challenges the
00:31:31
first one is that the usability of the
00:31:36
methods as to be proved is not using in
00:31:39
people away saying okay if I want to
00:31:41
see properties with less that there's
00:31:43
not enough go there not enough money to
00:31:46
a general for emotional software
00:31:49
available. So yeah I can say metal that
00:31:51
still being developed something new but
00:31:54
it is there is a need to their topics
00:31:55
in this is something which one another
00:31:58
more from them than criticism is
00:32:01
basically where do we find of any age
00:32:03
we need the training image for any
00:32:05
model we do that is large enough that I
00:32:09
was present extreme values for example.
00:32:11
So if we have a facies model, it's not
00:32:13
a problem. But if we model a continuous
00:32:15
variable over the whole space of real numbers, we
00:32:17
have a problem: the model is
00:32:19
restricted to the values that exist in the training
00:32:21
image. If a value is not in the training image, we are not going
00:32:23
to have models that include it. So now we
00:32:27
have to find ways to come up
00:32:30
with big training images: for example,
00:32:32
with remote sensing you get a lot of
00:32:34
data that make interesting training images,
00:32:37
or we can artificially enrich a
00:32:39
training image based on what we have. Small
00:32:40
training images are indeed a problem, and
00:32:43
enrichment is a way to solve it.
00:32:47
Another problem, or rather a
00:32:50
question, coming from statisticians,
00:32:52
is that the random function that
00:32:54
underlies whatever we try to model
00:32:57
is never defined. Because we
00:32:59
only have algorithms, if we change the algorithm
00:33:01
we get different results, and
00:33:03
different parameters give different results too. So how do you
00:33:05
define consistency when there is no underlying random
00:33:07
function? This is something where
00:33:09
the algorithm itself defines the model; there is no
00:33:11
explicit model. So it's interesting in a way,
00:33:13
because we get the complexity, but we
00:33:15
lose the tractability.
00:33:20
Okay, I'm going to address the first issue, improving the
00:33:24
flexibility of these MPS methods, by
00:33:27
showing some things we can do that are
00:33:29
practical for real-world
00:33:30
complexity. The first thing is
00:33:33
that we can do multivariate
00:33:34
simulations: actually, to your earlier question,
00:33:36
you don't need to have one variable, you can
00:33:38
have many at the same time. You
00:33:40
can also model non-stationary fields, and you
00:33:43
can work on training images, edit them
00:33:45
to improve them. So how do we
00:33:48
handle multivariate training images? Basically
00:33:51
you have several variables that are co-
00:33:54
located in your training image, and instead of
00:33:56
defining a pattern on one variable, you
00:33:58
define it like this, as a
00:34:01
multivariate pattern that you find and
00:34:04
evaluate as a multivariate vector. So you
00:34:06
have vectors of values at your
00:34:08
neighbours, and they all go into the
00:34:10
distance. When we look for a pattern,
00:34:12
for example in this image, it is much like
00:34:14
what we do for texture synthesis: you
00:34:17
look at the neighbourhood and you also look at
00:34:19
the other variables, the colours. So
00:34:22
this is something you can do. One
00:34:24
question is how you compute
00:34:26
the distance between patterns: how
00:34:28
do you combine it across several
00:34:30
variables? You have to weight them somehow,
00:34:33
and that is an open
00:34:34
question; we do have to
00:34:36
weight them, but assuming that we can
00:34:38
give weights, we can do something reasonable.
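One simple weighting scheme can be sketched like this, under the assumption that each variable contributes a mean absolute difference scaled by a user-chosen weight (the talk deliberately leaves the choice of weights open, so this is only one plausible convention):

```python
import numpy as np

def multivariate_distance(pat_a, pat_b, weights):
    """Distance between two multivariate patterns.
    pat_a, pat_b: (n_pixels, n_vars) arrays, one column per variable.
    weights: length n_vars, how much each variable counts.
    Assumed scheme: weighted mean of per-variable mean absolute
    differences (a common MPS choice, not the only one)."""
    per_var = np.abs(pat_a - pat_b).mean(axis=0)  # one distance per variable
    w = np.asarray(weights, dtype=float)
    return float((per_var * w).sum() / w.sum())
```

Raising the weight of one variable makes the search favour candidates that match that variable, which is exactly the lever the weighting question is about.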
00:34:40
And here is a little example
00:34:45
of that. In this example we are
00:34:48
trying to model a digital
00:34:50
elevation model. So we have an area
00:34:54
where we have some
00:34:57
elevation data at a low
00:34:59
resolution. And actually, in a
00:35:01
case like this, we have the whole world
00:35:02
at low resolution; that is not a problem,
00:35:04
thirty-metre resolution is
00:35:06
available for the whole world. The problem
00:35:08
is to get the high resolution. So now
00:35:11
we're going to use a training image for
00:35:12
this. We have another location, a
00:35:15
training location, where we know the
00:35:18
coarse elevation, that's easy, but at
00:35:21
this location we also have very
00:35:23
detailed measurements, so we have the
00:35:24
fine resolution as well. So we have both
00:35:26
the high and the low resolution, and we know
00:35:28
how they relate. And now what we want,
00:35:30
at this location, our target area,
00:35:33
where we only know the coarse resolution, is to find
00:35:36
what the high resolution could be. This is
00:35:38
relatively easy to do, because we are
00:35:40
going to simulate the fine-scale
00:35:46
patterns
00:35:47
conditionally to the coarse values. For
00:35:50
example, I am going to take patterns in
00:35:52
the training image guided by the
00:35:54
coarse values that I see, and paste
00:35:57
the fine-scale content in.
00:35:59
The general shape is given by the coarse data, and the
00:36:02
details are borrowed from the training area. We have been
00:36:04
developing this kind of thing for digital
00:36:06
elevation models, and we can get
00:36:08
pretty good results: we can downscale
00:36:11
elevation by a large factor
00:36:13
while preserving lots of features,
00:36:16
assuming that we have a training area
00:36:18
that is reasonable, ideally
00:36:21
from the same process, of course.
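A toy version of this downscaling idea can be sketched as follows, assuming a paired training area (coarse and fine grids of the same place): each coarse cell of the target is matched to the training coarse cell with the closest value, and the corresponding fine-scale block is pasted in. This drastically simplifies the conditional simulation in the talk (no neighbourhood matching, no overlap handling, no stochasticity):

```python
import numpy as np

def downscale(target_coarse, train_coarse, train_fine, factor):
    """Analogue-based downscaling sketch: for every coarse cell of the
    target, find the training coarse cell with the closest value and
    paste its fine-scale block into the output."""
    th, tw = target_coarse.shape
    out = np.zeros((th * factor, tw * factor))
    flat = train_coarse.ravel()
    ch, cw = train_coarse.shape
    for i in range(th):
        for j in range(tw):
            # Closest coarse analogue in the training area.
            k = int(np.argmin(np.abs(flat - target_coarse[i, j])))
            ti, tj = divmod(k, cw)
            block = train_fine[ti * factor:(ti + 1) * factor,
                               tj * factor:(tj + 1) * factor]
            out[i * factor:(i + 1) * factor,
                j * factor:(j + 1) * factor] = block
    return out
```

The real method also matches the pattern of neighbouring coarse values and blends overlapping patches, which is what preserves continuous terrain features across block boundaries.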
00:36:24
Another example here is geophysics. As
00:36:27
some of you may be familiar with, in geophysics we
00:36:29
have some kind of electrical or radar
00:36:31
measurements, some tomography, that
00:36:34
give a sort of view of what's in the
00:36:36
subsurface, but it's often very biased,
00:36:39
very noisy, and we don't really know what
00:36:42
it corresponds to exactly. But in this
00:36:45
case there was some ground-penetrating radar that was
00:36:48
done in a quarry in Germany, and after
00:36:50
that people actually excavated this
00:36:53
thing, looked at what was there, and
00:36:56
compared the two. So we know the radar
00:36:58
image and we have the
00:36:59
reality, and you can superimpose them,
00:37:02
and okay, the radar makes some
00:37:04
sense: what we see is sort of
00:37:06
elongated shapes, there are interfaces
00:37:08
between layers. And so now we take
00:37:12
this pair as a bivariate training image. We
00:37:14
have another location, a similar setting,
00:37:17
where we only have the radar, and we wonder what
00:37:19
the geology could look like. So we use
00:37:21
the radar there to guide the right
00:37:23
patterns, and this is an image we get
00:37:25
of what the reality could look like. So
00:37:28
I don't know if this is the real one, but
00:37:30
it looks pretty realistic; if you asked a
00:37:32
geologist, they would say okay, this looks like
00:37:35
the right setting. And it's
00:37:38
also correlated with the geophysics: you can still
00:37:39
see hints of the interfaces that are
00:37:42
in the geophysical
00:37:44
image, though of course the two are
00:37:46
not identical, they have
00:37:48
some real differences in
00:37:50
how the patterns look. So it's
00:37:51
not just a copy of the radar; it is guided by the geophysics.
00:37:56
Now, non-stationarity. So often the users
00:38:01
of these methods, the first time they
00:38:03
use them, will take some kind of image
00:38:06
they have that looks very nice, with very
00:38:09
complex patterns, and say okay, this will be
00:38:11
my model. And this is a quite
00:38:14
dramatic example, where we take an image
00:38:17
of a delta, a
00:38:19
satellite image, and once in the
00:38:21
algorithm, because there is
00:38:23
non-stationarity in it, with all these rivers,
00:38:26
the simulation
00:38:28
degenerates. Everything
00:38:30
is mixed: all these channels
00:38:33
go everywhere, the topology of
00:38:35
the image is not respected, and we don't
00:38:36
know what is what, of course,
00:38:39
because we don't indicate that there is
00:38:41
non-stationarity in it.
00:38:43
So again, if we use the multivariate
00:38:46
framework, we can create a sort of
00:38:49
auxiliary variable that describes the
00:38:51
non-stationarity. So in this case this is
00:38:54
my training image and this is my
00:38:57
non-stationarity variable: I say
00:39:00
down here is something different than up
00:39:02
there, and in the simulation domain I impose the
00:39:05
same. So I say what goes down stays
00:39:08
down and what goes up stays up, and then I
00:39:10
can recreate something with the same
00:39:12
kind of non-stationarity. But I can
00:39:15
also change the non-stationarity by
00:39:18
putting in a trend map that says where
00:39:20
whatever corresponds to the white
00:39:22
patterns should go, and out of that the image
00:39:24
on the right is generated, with a
00:39:26
different trend. So by
00:39:28
using the multivariate framework, we can
00:39:30
handle non-stationarity. Then there was
00:39:38
some discussion about training images: how do
00:39:39
we get the training image? So we
00:39:41
looked into the computer science and
00:39:44
computer vision literature, and there are
00:39:47
very nice ways of editing images in
00:39:51
a systematic way. So here you start with
00:39:55
an image, which is the one
00:39:56
from before, and let's say that we want to
00:39:59
add everywhere a little red dot on top
00:40:04
of each of these structures. A simple
00:40:07
way is what I would call template
00:40:08
editing: you are going to just put one
00:40:11
edit by hand here, and then
00:40:14
we go and look at all the similar
00:40:16
locations, you do the convolution, and
00:40:19
all the ones that are sufficiently similar
00:40:21
are going to get the same
00:40:22
edit. So it's a way of
00:40:25
making one manual change systematic.
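The convolution-style editing can be sketched as a template match followed by a stamp. Grayscale arrays and an L1 similarity threshold are assumptions of this illustration, not details given in the talk:

```python
import numpy as np

def edit_similar(image, template, patch, threshold):
    """Systematic training-image editing: slide a template over the
    image, and wherever the local window is similar enough to the
    template, stamp `patch` on top. Comparisons are made against the
    original image so one edit cannot trigger another."""
    h, w = template.shape
    out = image.copy()
    for i in range(image.shape[0] - h + 1):
        for j in range(image.shape[1] - w + 1):
            window = image[i:i + h, j:j + w]
            if np.abs(window - template).mean() <= threshold:
                out[i:i + h, j:j + w] = patch
    return out
```

Raising the threshold makes the edit propagate to looser matches; setting it to zero restricts it to exact copies of the template.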
00:40:28
Now we can be a bit more sophisticated:
00:40:31
instead of doing just a copy-paste, people
00:40:33
looked at morphing. So we start from
00:40:37
an image, and we take a
00:40:41
location, in this case this one, and
00:40:44
we can decide that at this location we're
00:40:45
going to make a transformation, an
00:40:47
expansion or a contraction. So
00:40:50
from the original you can have
00:40:52
bigger bricks or smaller bricks, everything
00:40:55
that is similar to this one. So
00:40:58
this is nice, because you can change the
00:40:59
statistics of your training image, increase
00:41:01
the proportion of a certain type of
00:41:04
value, change transition probabilities,
00:41:06
all while keeping the
00:41:08
structural properties. This approach
00:41:12
has worked very well. Okay, I will
00:41:15
accelerate a bit, because time is
00:41:19
running short. The last topic I'm
00:41:21
going to speak of is where we find
00:41:23
training images. So this has been a
00:41:28
criticism, because at the beginning
00:41:30
people tended to talk about training
00:41:33
images as something you find in
00:41:35
your old textbooks, for example geology
00:41:37
sketches, cross-sections, things like that,
00:41:39
and of course it doesn't work that way,
00:41:40
because one little image is not
00:41:43
enough anymore. Another
00:41:46
criticism was how to get real 3D
00:41:48
training images for 3D
00:41:49
applications. So one thing that has been
00:41:52
done successfully is to get
00:41:53
3D training images from several 2D
00:41:55
sections. And many ways have been
00:41:59
tested, but one of the most
00:42:00
efficient is this: we have a
00:42:03
quarry here at a particular site, and in this
00:42:06
quarry we have perpendicular faces,
00:42:08
so we can see
00:42:11
the structures on the two orientations. So we can see
00:42:14
this as a succession: the
00:42:18
method to generate the 3D volume
00:42:20
is essentially a succession of 2D
00:42:23
simulations. So every time we take
00:42:27
one 2D section, we are going to
00:42:29
generate several 2D models in
00:42:31
this direction, and then in the other direction also,
00:42:34
alternating, one time this direction, one
00:42:36
time that direction, and in the end
00:42:38
the whole volume is filled. So you can get
00:42:40
pretty good 3D blocks for
00:42:44
training images where you only have, in
00:42:46
general, 2D information. Another way
00:42:50
that has been explored recently is to
00:42:51
use field data or remote sensing.
00:42:54
So here for example we have an outcrop,
00:42:56
and with
00:42:58
lidar technology you can
00:43:02
acquire millions of points in a
00:43:04
couple of seconds, and you can
00:43:06
really base yourself on that:
00:43:10
we keep the correlated
00:43:11
spatial information, so we can
00:43:13
build very large databases with
00:43:16
much more of the
00:43:19
complex information in them.
00:43:21
Another example: do we need it in
00:43:26
caves? In some cases industry
00:43:28
also uses this kind of scanning,
00:43:31
and we could observe, for instance,
00:43:34
the roof of a cave,
00:43:37
to model the 3D infiltration,
00:43:40
all within minutes. Okay, another solution to
00:43:44
get training images is process-based
00:43:46
models. For geology
00:43:49
we have physical
00:43:52
models: you have differential
00:43:54
equations for the flow of sediment in
00:43:57
rivers, and then a simulation really
00:43:59
physically models the deposition of
00:44:01
sediments. You get a nice big 3D
00:44:03
block; you have parameters for how much
00:44:06
water you put in, how much sediment you put in, what
00:44:09
the geological setting is, and you
00:44:11
generate physically realistic 3D
00:44:13
volumes to use as training images. And of course
00:44:18
there is a lot more I could speak of
00:44:20
along these lines: we have
00:44:22
satellite images; in rock physics
00:44:24
people take a tiny sample of
00:44:27
rock and image it
00:44:30
with micro-CT, and you
00:44:32
have a complete 3D picture of it,
00:44:34
which you can use in many
00:44:35
applications when you want to know
00:44:37
the properties of these rocks. Now the
00:44:40
last case is when we don't have a
00:44:41
training image, but we have some
00:44:43
knowledge of the process we are trying to
00:44:45
model. In that case we can start with
00:44:49
a simple training image and sort of
00:44:51
expand it. So let's say the
00:44:55
object we expect is quite simple; we can
00:44:58
apply to it rotations and affine
00:45:00
transformations that will modify it.
00:45:04
So for example a rotation: if I rotate by, say,
00:45:07
eighty degrees, it doesn't change
00:45:10
fundamentally the shape of the
00:45:12
pattern's properties. So if I accept that,
00:45:15
I can consider that all
00:45:17
those other patterns, this one and this one, are
00:45:20
just different versions of one
00:45:22
pattern, rotated or transformed versions of it,
00:45:25
as long as I account for the
00:45:26
transformation. So the idea is that I
00:45:30
can apply the same algorithms as before,
00:45:33
but instead of only considering the
00:45:34
patterns I have in the training image, I
00:45:36
consider all their transformed versions.
00:45:39
And since rotation is a
00:45:41
continuous transformation, I have
00:45:42
essentially an infinity of patterns from
00:45:45
a single one. So the distance between
00:45:49
two patterns becomes something, for me,
00:45:51
like this: to compare two
00:45:54
patterns, I look at all the possible
00:45:56
rotations, in theory all of them,
00:45:59
and the one that best matches
00:46:02
gives me my distance. The same goes
00:46:05
for the affine transformations: if I
00:46:08
stretch my ellipse, then I'm going to match a
00:46:11
bigger feature here. The algorithm in practice works
00:46:15
like this: I look for the same
00:46:17
pattern as before, and the
00:46:19
difference is that now, okay, I look for
00:46:22
this one and I don't find it, but then the
00:46:24
next time I look for it, I apply a
00:46:26
random transformation, in this case
00:46:29
a rotation, and it doesn't
00:46:32
match, of course. But if I continue, we
00:46:35
find something that matches with the
00:46:37
rotation applied. Every time I apply a random
00:46:40
rotation, or stretching, or whatever
00:46:42
transformation, and when it
00:46:45
matches, shown in blue, I copy it and paste it.
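The transformation-invariant distance can be sketched for the special case of quarter-turn rotations; arbitrary angles would need interpolation, and the talk samples transformations at random rather than enumerating them, so this is only the simplest illustration of the idea:

```python
import numpy as np

def invariant_distance(pat_a, pat_b):
    """Rotation-invariant pattern distance: the distance between two
    square patterns is the best match over candidate rotations of the
    second pattern (here only 0, 90, 180, 270 degrees)."""
    candidates = [np.rot90(pat_b, k) for k in range(4)]  # k quarter-turns
    return min(float(np.abs(pat_a - c).mean()) for c in candidates)
```

Two patterns that are rotated copies of each other then have distance zero, even though a plain pixel-wise distance would report a mismatch.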
00:46:52
So compared with the traditional way,
00:46:53
where before we only had the patterns literally
00:46:56
present in the training image, now we can
00:46:59
match much more, because we allow all the
00:47:00
possible transformations. And we can
00:47:03
parameterise this a little bit by
00:47:06
imposing ranges of transformation: we
00:47:08
only allow rotations of plus or minus thirty
00:47:10
degrees, for example, or only stretches
00:47:12
up to a given factor. So with this
00:47:16
we can expand a
00:47:20
small training image. Here is an example:
00:47:22
this tiny image is my training image.
00:47:25
If I
00:47:28
generate one model
00:47:31
without any transformation, I get just
00:47:34
more of the same. Now if I allow some random
00:47:37
transformation, say plus or minus
00:47:41
ninety degrees of rotation, I get
00:47:45
channels that can rotate. So
00:47:47
from the simple image, I now
00:47:51
have something richer, and then
00:47:53
I can also add stretching, and
00:47:55
the thickness starts to change, and of
00:47:58
course I can combine things
00:48:00
and have both variable thickness and
00:48:02
orientation at once. The interesting
00:48:05
thing is that the properties of my small
00:48:07
training image, here just
00:48:09
connected elongated features, are the
00:48:12
only properties that are kept. Another
00:48:14
interesting example here is that my
00:48:17
training image consists of
00:48:19
aligned ellipses and squares, and based on
00:48:22
just these properties, white
00:48:25
things that are connected and elongated, and dark
00:48:28
things inside that are disconnected, I can
00:48:30
have all these possibilities just by
00:48:32
changing essentially two parameters, one
00:48:34
for the rotation and one for the affinity.
00:48:36
And this also allows us to have
00:48:40
very big models from small training images:
00:48:42
a very big simulation that has a lot of
00:48:43
variability, or doing
00:48:46
3D models quite easily. Like here, I
00:48:49
have a small training image that
00:48:51
is very easy to generate, and I can get
00:48:54
a big model
00:48:55
with realistic structures. This looks like
00:48:56
a sensible model, we still have the channels, and by
00:49:00
changing the parameters I get things that are
00:49:02
more blocky or more elongated, so we
00:49:04
can really investigate some structural
00:49:06
properties. Okay, I'm going to skip the
00:49:10
next part, because I don't have
00:49:15
much time left, and jump to the
00:49:17
conclusion directly. I tried to
00:49:21
introduce multiple-point statistics.
00:49:23
It's a general toolbox of several
00:49:26
algorithms, and the aim is to do
00:49:30
spatial modelling, or even spatio-temporal
00:49:33
modelling. In any case, the first
00:49:37
developments came from computer graphics: it
00:49:39
started with a PhD student who first
00:49:42
heard about the methods and just wanted to
00:49:43
copy what the graphics people did, got some ideas,
00:49:46
and came up with these methods. And
00:49:48
since then it has been developed
00:49:51
by geostatisticians, and recently we
00:49:54
have started to go back to computer
00:49:55
graphics and see what they have done since
00:49:57
those first exchanges fifteen years ago.
00:50:00
What has been done since then is
00:50:02
a lot of new work, and
00:50:04
a lot of the things that have been
00:50:05
developed there would be really
00:50:07
useful for these kinds of applications.
00:50:08
The
00:50:11
computer graphics community is extremely
00:50:13
skilled in algorithm development,
00:50:16
more than what we have, but we still have
00:50:21
practical applications where this
00:50:24
idea of using training images is very
00:50:26
useful. So there are a lot of
00:50:28
applications: the first applications
00:50:29
were in geology and geosciences, and now
00:50:32
this kind of methodology is used
00:50:34
in remote sensing and all sorts of things.
00:50:37
And the innovation compared to
00:50:39
classical geostatistics was not only
00:50:41
the use of training images; it was the idea to
00:50:44
go beyond the two-point covariance
00:50:46
used in geostatistics and really move
00:50:49
to non-parametric models. And of course
00:50:53
there is ongoing research being
00:50:54
done in many places on these things:
00:50:58
the usability of the codes, the
00:51:00
documentation of these codes, where to get
00:51:02
the training images, how to edit the
00:51:04
training images, how to do
00:51:06
inversion, how to put all that into usable form.
00:51:08
So I thank you for your attention, and
00:51:12
I'm happy to take questions. Yes?
00:51:45
Right, yeah, so typically it depends.
00:52:18
If it's a prediction in time, where you
00:52:21
project a system forward, that's going to be
00:52:23
different, because
00:52:24
the future is very hard to predict:
00:52:27
you don't know
00:52:28
the physics exactly, you only see it as a model,
00:52:30
and you don't have a way of learning it. If you have,
00:52:34
for example, geological
00:52:36
models, you can very easily use
00:52:39
a cross-validation approach: keep a part of
00:52:41
your data aside, predict it with the rest,
00:52:44
and evaluate against that.
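The hold-out procedure described in this answer can be sketched generically; `simulate` is a hypothetical callable standing in for whatever conditional simulation is being validated:

```python
import random

def holdout_validation(data, simulate, frac_test=0.2, seed=0):
    """Keep a fraction of the conditioning data aside, simulate from
    the rest, then measure the mean absolute error at the held-out
    locations. `data` maps location keys to observed values;
    `simulate` maps a dict of known points to a full prediction dict."""
    rng = random.Random(seed)
    keys = sorted(data)
    test_keys = set(rng.sample(keys, int(len(keys) * frac_test)))
    train = {k: v for k, v in data.items() if k not in test_keys}
    prediction = simulate(train)
    # Error only on points the simulator never saw.
    errors = [abs(prediction[k] - data[k]) for k in test_keys]
    return sum(errors) / len(errors)
```

For a stochastic simulator one would average this score over several realisations and hold-out splits rather than a single run.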
00:53:12
So typically you predict something for
00:53:13
the held-out locations, and the validation is
00:53:17
a cross-validation that is done with
00:53:19
the data I left aside. We have
00:53:21
done this, and a nice case where we could do it
00:53:23
properly is the downscaling application,
00:53:25
like the one I showed
00:53:27
where we
00:53:28
put in the digital elevation model. We
00:53:30
did this completely for the case
00:53:33
of the output of climate
00:53:34
models at different resolutions. So we
00:53:36
know the coarse scale and we know what
00:53:37
the fine scale is, we know everything,
00:53:39
and we can plug it in and see how it
00:53:42
works, and there we do have a reference,
00:53:44
we know
00:53:46
the reality. And you can see what the
00:53:48
probability of simulating
00:53:51
your reference is, and how close you get.
00:53:53
We can do this. For geology it is
00:53:56
much harder, again because it's
00:53:58
completely static: you can rarely
00:54:00
validate in real cases, you never know
00:54:02
the geology, except in cases like
00:54:05
the quarry, where what you predicted
00:54:07
is excavated and you see what
00:54:09
is inside. In general, in geosciences, you
00:54:12
just observe one-millionth of
00:54:14
your volume and that's it. But broadly
00:54:17
speaking, cross-validation or some sort of
00:54:20
evaluation is done. The problem here is
00:54:23
that it is very hard to validate the model
00:54:25
itself, because there is only one reality: if you
00:54:28
quantify uncertainty, that doesn't really
00:54:31
validate the model. In a way we don't have a
00:54:33
model, we just have one truth. So it's
00:54:36
not easy to validate, as I said before. Yes, that is one way
00:54:37
one of these before yeah S is one way
00:54:54
of doing or often what is done in
00:54:56
geology that people have a very strong
00:54:59
prior knowledge of what it looks like
00:55:01
it's geologist that the you know the
00:55:03
the very naturalist. So they have to
00:55:05
have this knowledge. But in the model
00:55:07
is to get typically don't get this this
00:55:10
complex judges. So we want to wait to
00:55:12
inject this prior model in some kind of
00:55:16
volume framework so this is how it all
00:55:18
started question for for think about
00:55:25
these problems as inverse problems, where you have a field at very high
00:55:27
resolution, you have some way of going
00:55:33
from the field to data that you generate,
00:55:35
and you want to get back
00:55:42
to the unknown pixels
00:55:45
from what you actually measure. A typical
00:55:53
problem like this, to simplify
00:55:55
things, would be that you measure some
00:55:57
kind of contaminant coming out at an outlet. So
00:56:00
you measure what comes out, and you
00:56:02
want to generate models such
00:56:04
that, when you put the model into a
00:56:07
flow simulator, you get this kind
00:56:09
of curve. So then, you know, you are
00:56:12
in the whole world of inverse
00:56:14
problems: you could just randomly generate
00:56:15
models and select the ones that fit the
00:56:18
data. But of course that is not
00:56:21
sufficient, so we have
00:56:22
strategies to perturb the models: you
00:56:24
start by generating an initial model, and
00:56:26
then you apply small perturbations
00:56:28
to it that still honour the training
00:56:30
image. So for example you can move a
00:56:31
channel, or create a new channel, or
00:56:35
erase things, but always respecting the
00:56:37
overall structure. And you can run
00:56:39
a Metropolis algorithm, for example, where you have
00:56:42
a rule to accept or reject each perturbation,
00:56:43
and then you converge to some posterior
00:56:46
distribution. This can be done.
00:56:49
But the problem is that this is much more
00:56:50
computationally demanding than
00:56:52
approaches based on a tight
00:56:54
formulation, where the forward model
00:56:56
is analytical, where you have the
00:56:58
complete analytical formulation of
00:57:00
a misfit that is nice and smooth, so it's
00:57:02
much easier to optimise, to
00:57:04
use gradient-based
00:57:05
approaches, for example. With
00:57:08
these fields, which
00:57:10
are often just binary fields with very
00:57:13
high contrast of properties, gradients are
00:57:16
mostly useless, because of what a single
00:57:18
change can do. If we go back
00:57:21
to this kind of image, in this kind
00:57:29
of field, for example, you have a well
00:57:32
here and you want to compute something here:
00:57:34
the value of this pixel in between is
00:57:36
not important in itself, it is a small
00:57:39
change in the parameters, but sometimes
00:57:41
changing that one pixel
00:57:44
can disconnect everything.
00:57:46
So with that, gradients are
00:57:48
very hard to use; you have to use gradient-less
00:57:49
methods, and the computation is more
00:57:51
difficult, but it can be done.
00:57:53
The problem is that if you don't use a
00:57:56
training-image approach like this, you
00:57:57
may find a very nice smooth
00:57:59
model that actually fits the data but is
00:58:01
completely wrong geologically; you could fit
00:58:04
the data with something meaningless. Yes, the question about supervised and unsupervised?
00:58:16
Right, yeah, well, it depends what you mean by
00:58:28
supervised. In this sense it is
00:58:31
supervised, because there is a user that
00:58:34
provides a training image, which is a choice,
00:58:38
but we can also use just the data. So
00:58:42
in the case of the DEM, speaking of
00:58:46
the topography, it is just data; there is no
00:58:48
expert, or the only
00:58:50
expert input is to say this location is
00:58:52
similar to this one. So there is an
00:58:53
analogue, and in a way the choice of the
00:58:56
analogue has to be guided by an expert.
00:58:58
But there are cases we are working on where,
00:59:03
with time series, it is the same: for example you
00:59:05
want to, let's say, look at the,
00:59:08
to avoid being complicated, a simple
00:59:10
scenario where you have a satellite
00:59:12
image and you want to add information to it,
00:59:15
and you use the satellite image of the
00:59:16
previous day, the same
00:59:19
location, just earlier. So
00:59:20
then you automatically select
00:59:22
the data you need, and there is no expert;
00:59:24
it can be completely automatic. Thank you.

Training models with images: algorithms and applications
Gregoire Mariethoz, University of Lausanne, Institute of Earth Surface Dynamics, Switzerland
22 June 2016 · 11:02 a.m.