Transcriptions

Note: this content has been automatically generated.
00:00:00
So, hello everyone. I'm really happy to
00:00:03
talk about how to combine deep learning and rheumatology today,
00:00:08
because over the last couple of years I kept telling Thomas that deep learning is a really
00:00:12
hot topic and a powerful tool, and that he should try to apply it to rheumatology.
00:00:18
Of course, somehow this "you should try" became "we should try", and basically that's why I'm standing here right now.
00:00:24
So at the end of last year we started a project together with a master's student, who is also here today
00:00:30
and is writing his master's thesis about how to predict the disease
00:00:34
progression of arthritis with machine learning methods, and especially deep learning methods.
00:00:39
So in my talk I want to introduce you to deep learning:
00:00:43
what is deep learning, how does it work in principle,
00:00:46
and how could you use it, now and in the future, for rheumatology?
00:00:51
And in the second part of my talk I will show you some preliminary results from this ongoing work.
00:00:59
So our motivation for trying to predict the disease progression of arthritis is this:
00:01:05
for a specific patient, we don't know how, and how fast, the disease will develop.
00:01:11
So a better prediction model would help us to choose
00:01:15
a personalised, optimised treatment for patients.
00:01:20
And we're interested in trying to predict the clinical progression of arthritis,
00:01:26
so things like pain or swelling, and there we use the DAS28 score.
00:01:31
But we are also interested in predicting the radiographic
00:01:36
progression, like bone erosion, where we use the Ratingen score,
00:01:40
and degenerative changes, where the Kellgren score is quite interesting, and
00:01:45
we want to do this with machine learning methods.
00:01:48
So what is machine learning? Machine learning means that you allow
00:01:53
computers to act without any hard-coded rules. You give the computer
00:01:58
a lot of training examples, and it is able to learn from these
00:02:02
examples, to improve, and to generalise to new, unseen data.
00:02:08
Machine learning consists of different subfields, and one of them is called supervised learning.
00:02:14
There you try to learn a function from labelled examples. For
00:02:18
example, if you want to do disease detection on X-ray images,
00:02:22
then you need, let's say, a couple of thousand images, and you need an expert, a
00:02:26
doctor, who labelled these images, so whether there is a disease visible or not.
00:02:31
And if you have these labelled examples, then you can learn
00:02:36
this function, and you can generalise to new X-ray images you never saw before.
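The supervised-learning loop described here (learn a function from labelled examples, then generalise to unseen data) can be sketched in a few lines. The data below is synthetic and merely stands in for the labelled X-ray images, and scikit-learn's logistic regression is just one convenient choice of learner:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# 200 labelled training examples: two features, label = 1 if their sum > 0
X_train = rng.normal(size=(200, 2))
y_train = (X_train.sum(axis=1) > 0).astype(int)

# Learn the function from the labelled examples
model = LogisticRegression().fit(X_train, y_train)

# New, unseen examples: the learned function should generalise to them
X_new = rng.normal(size=(50, 2))
y_true = (X_new.sum(axis=1) > 0).astype(int)
accuracy = model.score(X_new, y_true)
print(f"accuracy on unseen data: {accuracy:.2f}")
```

The key point mirrors the talk: nothing about the decision rule is hard-coded; it is recovered from the labelled examples alone.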
00:02:43
Another subfield of machine learning is called unsupervised learning,
00:02:47
where you try to learn functions and structures and patterns from data,
00:02:52
but you don't have a teacher who tells you beforehand what these patterns are, so you want to find them in the first place.
00:02:59
And a further interesting area, which I'm usually working on,
00:03:04
is reinforcement learning; you heard about that already earlier today.
00:03:09
There you try to learn a strategy for how to interact in a
00:03:13
certain situation in order to maximise your expected future return.
00:03:18
And what this return is you have to define yourself, so you
00:03:21
need feedback on how good or how bad your action was.
00:03:25
And this feedback can also be quite late: for example, as in chess, where you
00:03:31
only get one piece of feedback at the end, whether you won or lost the game.
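The reinforcement-learning idea just described (choose actions to maximise the expected future return, given only a win/lose feedback signal) can be illustrated with a minimal two-armed bandit. The win probabilities below are invented for illustration and are hidden from the agent:

```python
import random

random.seed(0)
true_win_prob = [0.3, 0.7]   # hidden from the agent
value = [0.0, 0.0]           # the agent's estimated return per action
counts = [0, 0]

for step in range(2000):
    # explore sometimes, otherwise take the action with the best estimate
    if random.random() < 0.1:
        a = random.randrange(2)
    else:
        a = value.index(max(value))
    # the only feedback is a win/lose reward, as in the chess example
    reward = 1.0 if random.random() < true_win_prob[a] else 0.0
    counts[a] += 1
    value[a] += (reward - value[a]) / counts[a]  # running average of returns

print(value.index(max(value)))  # the agent ends up preferring the better action
```

This is the simplest possible setting (no states, immediate reward); the point is only that the strategy emerges from the reward signal, which you define yourself.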
00:03:36
And deep learning is another subfield of machine learning, and you can see it as
00:03:40
a tool that you can apply in all these areas I showed you.
00:03:45
In deep learning you use artificial deep neural networks in
00:03:49
order to represent these functions you want to learn;
00:03:54
more about that later. So let's first look at some applications of machine learning.
00:03:59
One famous one is, for example, speech recognition, like in Siri, Alexa, or Google Now.
00:04:06
But of course there are a lot more. What you can do is classification,
00:04:11
so that's supervised learning, where you try to classify what is in an
00:04:15
image: is it a cat, is it a dog, is it a person?
00:04:18
But you can also do face recognition, as in the iPhone X, when you try to unlock the phone.
00:04:25
And you can learn not only categories, you can also
00:04:29
learn real values: for example, you can predict
00:04:32
the traffic flow of tomorrow, or you can predict the stock price of tomorrow.
00:04:37
And there are also unsupervised learning methods, like
00:04:42
clustering, so you can try to build a recommender system, like Amazon's,
00:04:47
or you can try to do outlier detection, so you can, for
00:04:50
example, try to find anomalies in your data or something like that.
00:04:55
Or you can try to find the most important information you have in your data. So,
00:05:02
for example, you can try to find which is the most important input feature
00:05:07
responsible for the decision of your network or of your classifier.
00:05:12
And finally, in reinforcement learning, you can learn strategies:
00:05:17
you can learn driving strategies, you can learn
00:05:20
to control robots, as we heard already, or you can
00:05:23
learn to play chess, or even more complicated games like Go.
00:05:28
But how about rheumatology, what could we do here? So,
00:05:34
for example, we could try to detect arthritis on X-ray images,
00:05:39
or we could try to predict the Kellgren score, or rather
00:05:44
to compute the Kellgren score from X-ray images automatically.
00:05:49
But we go a step further, and this is the project we're currently working on:
00:05:54
we try to predict the disease progression, so we try to
00:05:57
predict how these target scores will develop in the future.
00:06:02
And there are a lot of other methods you could apply, also unsupervised
00:06:07
learning, but today I want to focus on supervised learning. And if you are
00:06:12
able to predict how the disease will develop, maybe then you are also
00:06:18
able to predict how the disease will develop under a certain medication.
00:06:23
And then it's not far from learning a
00:06:28
good treatment strategy, an optimised treatment strategy, with reinforcement learning, and
00:06:33
this would be the goal, actually. However, this is hard to
00:06:37
evaluate, because you would need a real control group for that.
00:06:43
So, coming back to deep learning: we heard already
00:06:46
that in deep learning you use deep neural networks
00:06:50
to learn functions from input data. So it's nothing else but
00:06:54
a function that maps some input to some output,
00:06:57
and it consists of many layers of artificial neurons. These artificial neurons
00:07:03
are loosely motivated by the neurons in the human brain:
00:07:08
you have some inputs, you weight these inputs, then you
00:07:12
compute the weighted sum of all inputs, and,
00:07:16
let's say you are using a binary step function as activation function, then the neuron fires
00:07:23
if the weighted sum exceeds a certain threshold, and if not, it doesn't fire.
00:07:28
Today we are using more complicated activation functions that are
00:07:32
nonlinear and differentiable, but you can
00:07:36
see the principle like this: the more input comes in, the more the neuron fires.
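The neuron just described (weighted sum of the inputs, binary step activation, fires above a threshold) can be written out directly; the input and weight values below are arbitrary examples:

```python
import numpy as np

def neuron(inputs, weights, threshold):
    """Artificial neuron with a binary step activation: it fires
    (returns 1) iff the weighted sum of its inputs exceeds the
    threshold, and otherwise stays silent (returns 0)."""
    weighted_sum = np.dot(inputs, weights)
    return 1 if weighted_sum > threshold else 0

x = np.array([0.5, 1.0, 0.2])  # inputs
w = np.array([0.4, 0.3, 0.9])  # weights

print(neuron(x, w, threshold=0.6))  # weighted sum 0.68 > 0.6, so it fires: 1
print(neuron(x, w, threshold=1.0))  # 0.68 <= 1.0, so it does not fire: 0
```

A modern network would replace the step function with a nonlinear, differentiable activation such as ReLU, but the weighted-sum structure stays the same.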
00:07:42
And then you arrange these neurons in a neural network, which you can see on the right side. This
00:07:48
is a fully connected neural network, where every neuron is connected to every neuron in the next layer.
00:07:55
And then we make these networks learn from data by adapting
00:08:01
the weights w at every connection, which you can see here,
00:08:06
and we try to minimise an error function. This error function depends
00:08:10
on your problem, and you have to define it yourself.
00:08:14
And you can see that you have a lot of weights here, especially if your
00:08:19
network gets deep, and this is the reason why deep learning became successful only this late:
00:08:24
because we need a lot of computing power for that, and really good, cheap GPUs.
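A minimal sketch of such a fully connected network is its forward pass, with randomly initialised weights W at every connection. A real network would then adapt these weights to minimise the chosen error function; that training step is omitted here:

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(z):
    # a modern activation: nonlinear and differentiable (almost everywhere)
    return np.maximum(0.0, z)

# Weights at every connection: in training, these get adapted.
W1 = rng.normal(size=(4, 8))  # input layer (4 features) -> hidden layer (8 neurons)
W2 = rng.normal(size=(8, 1))  # hidden layer -> output (one predicted score)

def forward(x):
    """Forward pass: because the network is fully connected, every
    neuron's output feeds every neuron of the next layer (a matrix
    multiplication), followed by the activation function."""
    hidden = relu(x @ W1)
    return hidden @ W2

x = rng.normal(size=(1, 4))   # one example with 4 input features
print(forward(x).shape)       # one predicted value per input row
```

Counting the entries of W1 and W2 already gives 40 weights for this tiny network, which is why deep versions of it need serious computing power.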
00:08:31
So how can we apply this to disease progression prediction in
00:08:36
arthritis? We use the patient's data collected in former visits,
00:08:41
we feed this into a neural network, and we train it to output the target score. This can be
00:08:46
the Kellgren score, the Ratingen score, or the DAS28 score of the next visit.
00:08:51
So for every visit we use, for example, the age of the patient,
00:08:57
the weight, the rheumatoid factor, the number of swollen joints, and so
00:09:01
on, for all the visits we have information about.
00:09:05
And we can also take the medication into account, so which drugs were taken and for how long,
00:09:12
and so on. Then we feed this into a neural network and train
00:09:16
it to output the DAS28 for the next visit.
00:09:22
The data we use is the SCQM database,
00:09:26
and this consists of a lot of data, about
00:09:31
thousands of patients and forty-five thousand visits.
00:09:37
For training and evaluation we use five-fold cross-validation. This means
00:09:42
that we train our network, let's say, five times; of course you could also
00:09:46
say ten times. We test it on the first twenty percent of the
00:09:52
patients, this is our testing set, and as training set we use the rest.
00:09:56
In the next round we use the next twenty percent as the testing
00:10:00
set and train on all the other patients, and so on.
00:10:06
This avoids overfitting to a very small testing set, and then
00:10:11
we just use the mean result over all these different folds.
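The five-fold scheme described above, where the patients are split so that each fold serves exactly once as the testing set, might look like this (the patient count and the per-fold metric are placeholders):

```python
import numpy as np

def five_fold_indices(n_patients, n_folds=5):
    """Yield (train, test) patient-index splits: each fold of patients
    is the testing set exactly once, and all remaining patients form
    the training set. Splitting by patient keeps one patient's visits
    from appearing in both sets at the same time."""
    folds = np.array_split(np.arange(n_patients), n_folds)
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        yield train, test

scores = []
for train, test in five_fold_indices(100):
    # ...train the model on `train`, evaluate it on `test`...
    scores.append(len(test) / len(train))  # placeholder for the fold's metric
print(np.mean(scores))  # the mean result over all folds is what gets reported
```

With 100 patients, each test fold holds 20 patients and each training set holds the other 80, so every patient is tested on exactly once.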
00:10:15
As baselines we use a very naive
00:10:18
baseline, where we simply assume that the target score does not
00:10:22
change and stays the same, so that the disease does not develop,
00:10:28
and, as you also heard about today,
00:10:31
a random forest, which is another machine learning method.
00:10:37
So now I want to show you some preliminary results. Again, this is
00:10:42
ongoing work, and we still have to work a lot
00:10:45
on it, but, to be honest, we had a really hard time
00:10:49
trying to predict the disease progression of arthritis;
00:10:55
it's not that easy. But for the DAS28
00:11:00
we transformed it into a classification problem,
00:11:04
so we can predict whether the disease will get better or worse,
00:11:09
right now with around seventy percent accuracy, so
00:11:12
in seventy percent of cases our predictions are correct.
00:11:16
For the score itself, we currently use a fixed number of visits, let's say the last five,
00:11:22
and the last two medications, and we trained on over twenty thousand visits.
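The fixed-window input described here (the last five visits plus the last two medications, flattened into one vector) could be assembled roughly like this; the function name and the feature shapes are illustrative, not the project's actual code:

```python
import numpy as np

def build_input(visit_features, medication_codes, n_visits=5, n_meds=2):
    """Flatten the last `n_visits` visits and the last `n_meds`
    medications into one fixed-length input vector for the network
    (a sketch of the fixed-window setup; names are hypothetical)."""
    visits = np.asarray(visit_features)[-n_visits:].ravel()
    meds = np.asarray(medication_codes)[-n_meds:].ravel()
    return np.concatenate([visits, meds])

# e.g. a history of 7 visits with 3 features each, and 4 past medication codes
history = np.arange(21).reshape(7, 3)
meds = np.array([2, 5, 1, 3])
x = build_input(history, meds)
print(x.shape)  # 5 visits * 3 features + 2 medications = (17,)
```

Fixing the window size is what lets a simple feed-forward network consume histories of varying length, at the cost of discarding older visits.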
00:11:29
Then we evaluated the random forest and the
00:11:33
network and compared them to the naive baseline by computing the mean squared error,
00:11:40
and this is nothing else but the mean squared deviation of
00:11:43
our predictions from the true values.
00:11:48
And you can see that the network is outperforming the others, with a mean squared error of zero point eight.
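The mean squared error and the naive stays-the-same baseline can be made concrete with a toy example (the score values below are invented, not SCQM data):

```python
import numpy as np

def mse(predictions, targets):
    """Mean squared error: the mean squared deviation of the
    predictions from the true values."""
    predictions = np.asarray(predictions, dtype=float)
    targets = np.asarray(targets, dtype=float)
    return float(np.mean((predictions - targets) ** 2))

# Naive baseline: assume the score does not change between visits,
# i.e. predict the previous visit's score for the next one.
previous   = np.array([3.1, 4.0, 2.5, 5.2])  # scores at the last visit
truth      = np.array([3.3, 3.8, 2.5, 4.9])  # true scores at the next visit
model_pred = np.array([3.2, 3.9, 2.6, 5.0])  # a model's predictions

print(mse(previous, truth))    # baseline error
print(mse(model_pred, truth))  # a useful model should beat the baseline
```

Because most scores change only slightly between visits, this baseline is surprisingly hard to beat, which matches the difficulties reported in the talk.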
00:11:56
This is good, but of course we want to get better,
00:12:00
and I hope we will. On the right side you can see
00:12:05
the differences between the prediction and the true DAS28 value
00:12:09
of the next visit, for the baseline and
00:12:13
for the network. What we can see is that
00:12:17
we have trouble detecting small changes in the value, but
00:12:23
larger changes in the DAS28 score we can detect correctly.
00:12:30
And then we did the same for the Kellgren score;
00:12:34
there we had a lot fewer training examples, only four thousand.
00:12:38
I say "only" because for deep learning methods that's not that much.
00:12:44
There we are actually outperforming the naive
00:12:48
baseline with both methods, but the random forest showed the best results.
00:12:53
But yeah, we're working on that, and again here we had some trouble, because
00:12:58
for most of the patients the Kellgren score doesn't change at all
00:13:02
or only changes very slightly, so it's hard to predict anything else.
00:13:07
One other thing I want to show you is that
00:13:11
if you're using machine learning methods, for example a random forest,
00:13:15
you can compute how important your input features are for the decision of your classifier.
00:13:21
So we don't yet know which medication is effective or not,
00:13:27
or which one could be the best, but what we know is
00:13:31
that when you look at the feature importances for different medications,
00:13:35
they play different roles for the next visit, and our goal is to go
00:13:40
on from there and find out which one could be the best.
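Reading feature importances out of a random forest, as mentioned here, looks roughly like this in scikit-learn; the synthetic features below merely stand in for past scores and medications:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
# the target depends strongly on feature 0, weakly on feature 1, not at all on the rest
y = 3.0 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(scale=0.1, size=300)

forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# importances sum to 1; higher means the feature mattered more for the decisions
names = ["feat0", "feat1", "feat2", "feat3"]
for name, imp in zip(names, forest.feature_importances_):
    print(f"{name}: {imp:.3f}")
```

The forest correctly ranks the dominant feature highest, which is the same mechanism used in the talk to compare how much different medications contribute to the prediction for the next visit.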
00:13:46
So finally, I just want to show you some examples of our predictions. You can
00:13:52
see the disease development of patients
00:13:56
over about six to twelve years:
00:13:59
in one colour you can see the actual
00:14:04
development of the patient's disease, and in the other our predictions.
00:14:10
We have trouble, as I said, detecting small changes,
00:14:14
but in general the larger trends we are already predicting correctly.
00:14:22
You can also see that the predictions are sometimes a bit late, but I think in principle we showed
00:14:29
that this is a good approach and that it has potential.
00:14:34
So, just to conclude my talk: I think we took
00:14:38
the first steps towards a personalised, optimised treatment strategy
00:14:42
in rheumatology with machine learning methods, and with the
00:14:46
SCQM database we have a large
00:14:50
enough and, I hope, high-quality dataset on which to apply machine learning and deep learning methods.
00:14:56
Of course there is still a lot of missing data we could also consider,
00:15:02
like additional scores or patient reports, and so on.
00:15:08
But we still have a lot of work to do with the data we have: we have to improve
00:15:14
our models, our network; we have to find the right input features, maybe together with Thomas;
00:15:21
we want to include X-ray images; and finally we want to use reinforcement learning.
00:15:27
So I think this is a really promising approach, and I'm
00:15:31
really looking forward to seeing how far we can get with it.
00:15:36
Thank you very much for your attention.
00:15:43
[Question, partly inaudible]
00:15:54

00:15:56
You said it's a high-quality dataset, but how many data do you
00:15:59
actually have, and is it enough to start with,
00:16:04
I mean, more related to the things that you listed? Yeah, so I think for the DAS28 prediction we could
00:16:10
actually train on twenty thousand samples, or a bit more, so
00:16:15
this is quite enough, I would say. Of course we have the same problems that
00:16:21
we mentioned before: there are missing values, missing features, and we have to deal with that.
00:16:29
And another point is, for example, that for every time
00:16:33
point, for every patient, you have a different number of
00:16:36
visits you have to consider. These are all problems where
00:16:40
we have to find the right model that can deal with this,
00:16:43
and the right network architecture. And yeah, I hope,
00:16:48
I hope that we have enough data, actually.
00:16:55
[Question] Thank you for the introduction to
00:16:59
machine learning and deep learning, which was very easy to
00:17:02
understand. I just have a question regarding
00:17:06
the training, because with machine learning and deep learning algorithms
00:17:11
you often have a huge amount of hyperparameters, right? Yeah, exactly.
00:17:15
[partly inaudible]
00:17:18
Did you try completely different models, or
00:17:23
did you do some hyperparameter optimisation? Actually, not yet.
00:17:28
We just started, and we are using a very simple
00:17:32
feed-forward neural network, so a very simple architecture, where we can deal with a variable number
00:17:37
of visits and of medications. But we just wanted to
00:17:42
get first results and see how far we can get; now we have to work on that.
00:17:47
And then, of course, hyperparameter optimisation will be another topic
00:17:52
we have to deal with. Yes. Thank you very much. Any other questions?
00:18:00
[Question, largely inaudible]
00:18:06
[inaudible]
00:18:10
[inaudible]
00:18:15
[inaudible]
