Transcriptions

Note: this content has been automatically generated.
00:00:00
Yeah.
00:00:05
Yes.
00:00:11
I... I...
00:00:22
Oh.
00:00:30
Oh.
00:00:32
My answer is: I hope yes, I think yes.
00:00:39
In my case, I'm dealing with another kind of signal.
00:00:44
We are working on a project dealing with genomic sequences.
00:00:49
In the end, these are signals encoded with four levels, like,
00:00:56
well,
00:00:59
two-bit, four-level signals, and
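A minimal sketch of the two-bit, four-level encoding the speaker mentions, assuming the usual A/C/G/T alphabet. The mapping and helper names are illustrative, not the project's actual code; the one-hot variant is the form typically fed to convolutional models over genomes.

```python
# Illustrative encoding of a DNA sequence as a numeric signal.
# Each of the four letters fits in two bits (four levels), or can be
# expanded to a one-hot matrix for a convolutional network.
import numpy as np

TWO_BIT = {"A": 0, "C": 1, "G": 2, "T": 3}  # four levels, two bits each

def encode_levels(seq):
    """Map a sequence to an integer signal with four levels (0-3)."""
    return np.array([TWO_BIT[base] for base in seq])

def encode_one_hot(seq):
    """Map a sequence to a (length, 4) one-hot matrix."""
    idx = encode_levels(seq)
    one_hot = np.zeros((len(seq), 4))
    one_hot[np.arange(len(seq)), idx] = 1.0
    return one_hot

signal = encode_levels("GATTACA")   # array([2, 0, 3, 3, 0, 1, 0])
matrix = encode_one_hot("GATTACA")  # shape (7, 4)
```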
00:01:05
what we were doing was much the same: identifying which are the filters
00:01:12
that are more important for diagnosing somebody
00:01:16
with that kind of disease. And these filters,
00:01:22
if you follow close to the sequence, some of these filters are connected directly to the
00:01:27
to the sequence, and they are reacting to parts of the sequence. So if you can...
00:01:35
That would be the next step. The first step was, for the cases that were activating the filter,
00:01:42
we tried to do some kind of alignment and define which are the letters
00:01:48
of the sequence that are closest
00:01:51
to that, and with the biological databases
00:01:55
you define that it belongs to that kind of protein, and these are active because this is
00:02:02
a protein in, say, a virus, and because
00:02:06
of that, that's the way we are extracting some explanations.
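A hedged sketch of the first step described above: for a given convolutional filter, find the stretch of the sequence that activates it most strongly. The filter weights here are toy values and the helper names are hypothetical; a real pipeline would use the model's learned weights and then align the recovered subsequence against a biological database (e.g. with a tool like BLAST) to see which protein it belongs to.

```python
# Slide a filter over a one-hot-encoded sequence and recover the
# subsequence that gives the largest activation.
import numpy as np

def one_hot(seq, alphabet="ACGT"):
    m = np.zeros((len(seq), len(alphabet)))
    for i, base in enumerate(seq):
        m[i, alphabet.index(base)] = 1.0
    return m

def max_activating_window(seq, filt):
    """Return the subsequence (and its score) that most activates
    the (width, 4) filter `filt`."""
    x = one_hot(seq)
    width = filt.shape[0]
    scores = [np.sum(x[i:i + width] * filt)
              for i in range(len(seq) - width + 1)]
    best = int(np.argmax(scores))
    return seq[best:best + width], scores[best]

# Toy filter that "prefers" the motif TATA (columns: A, C, G, T)
tata = np.array([[0, 0, 0, 1],   # T
                 [1, 0, 0, 0],   # A
                 [0, 0, 0, 1],   # T
                 [1, 0, 0, 0]])  # A
motif, score = max_activating_window("GGTATACC", tata)  # → ("TATA", 4.0)
```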
00:02:13
And that's something that we plan to do as soon as we have a student or an assistant
00:02:20
able to do that: why not use the same... use, let's say,
00:02:26
the adversarial network in that case to generate
00:02:31
the sequence that will maximise that filter, those filters that
00:02:35
we really know are very important for classifying, for example, a disease.
00:02:40
Then
00:02:42
that filter we activate, we maximise, and the sequence
00:02:47
that maximises it would be composed of some kind of letters,
00:02:52
and then we can look, for that matter, at which are the real-world proteins that are the most similar.
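The idea proposed here is essentially activation maximisation: synthesise an input that maximally drives a chosen filter, decode it back to letters, and then search for the most similar real protein or motif. The speaker suggests a generative/adversarial network for this; the sketch below is a deliberately simplified stand-in that gradient-ascends a continuous input against a single linear filter (all names and values are illustrative). With a real model, the same loop would use autodiff gradients instead of the closed-form one.

```python
# Activation maximisation, toy version: the "network" is one linear
# filter, so the gradient of the activation w.r.t. the input is the
# filter itself. Decode the optimised input by per-position argmax.
import numpy as np

def maximise_filter(filt, steps=100, lr=0.1, seed=0):
    """Gradient-ascend a continuous (width, 4) input to maximise the
    filter's activation, then decode each position to its best base."""
    rng = np.random.default_rng(seed)
    x = rng.normal(scale=0.01, size=filt.shape)  # small random start
    for _ in range(steps):
        # activation = sum(x * filt); its gradient w.r.t. x is filt
        x += lr * filt
    return "".join("ACGT"[i] for i in np.argmax(x, axis=1))

tata = np.array([[0, 0, 0, 1],   # filter that prefers T
                 [1, 0, 0, 0],   # A
                 [0, 0, 0, 1],   # T
                 [1, 0, 0, 0]])  # A
print(maximise_filter(tata))  # prints TATA
```

The decoded string would then play the role of the "composed" sequence in the talk: the query one aligns against real proteins to get an explanation.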
00:02:59
For us, that's a signal. It would be similar to, say, the parts of speech, for example, that
00:03:05
could maximise some of the filters that will detect... but this is not my domain, so I'm not sure.
00:03:13
I presented here images that are
00:03:17
common images, because they are easier for all of us; we identify patterns and things.
00:03:23
If you are speaking about biological images,
00:03:28
not everyone is able to read them, so because of that
00:03:32
we really need an expert to annotate, and we cannot be so abstract,
00:03:39
because we have noses, dog noses, everywhere in any image, and we are happy with that.
00:03:46
It's very hard to say, okay, we have filters detecting certain patterns, but perhaps,
00:03:53
if there are too many, they correspond to another phenomenon. So it's not as
00:04:00
straightforward to say, okay, the same thing we did with these images we are going to do with,
00:04:05
say, radiological images. It is going to be hard work; certainly we would need to
00:04:12
go further there and be much, much more careful.
00:04:17
I hope that you will consider it.
00:04:21
Uh-huh.
00:04:26
If not, I intend to still be here for the whole morning.

Conference Program

Methods for Rule and Knowledge Extraction from Deep Neural Networks
Keynote speech: Prof. Pena Carlos Andrés, HEIG-VD
3 May 2019 · 9:10 a.m.
349 views
Q&A - Keynote speech: Prof. Pena Carlos Andrés
Keynote speech: Prof. Pena Carlos Andrés, HEIG-VD
3 May 2019 · 10:08 a.m.
Visualizing and understanding raw speech modeling with convolutional neural networks
Hannah Muckenhirn, Idiap Research Institute
3 May 2019 · 10:15 a.m.
Q&A - Hannah Muckenhirn
Hannah Muckenhirn, Idiap Research Institute
3 May 2019 · 10:28 a.m.
Concept Measures to Explain Deep Learning Predictions in Medical Imaging
Mara Graziani, HES-SO Valais-Wallis
3 May 2019 · 10:32 a.m.
What do neural network saliency maps encode?
Suraj Srinivas, Idiap Research Institute
3 May 2019 · 10:53 a.m.
Transparency of rotation-equivariant CNNs via local geometric priors
Dr Vincent Andrearczyk, HES-SO Valais-Wallis
3 May 2019 · 11:30 a.m.
Q&A - Dr Vincent Andrearczyk
Dr Vincent Andrearczyk, HES-SO Valais-Wallis
3 May 2019 · 11:48 a.m.
Interpretable models of robot motion learned from few demonstrations
Dr Sylvain Calinon, Idiap Research Institute
3 May 2019 · 11:50 a.m.
Q&A - Sylvain Calinon
Dr Sylvain Calinon, Idiap Research Institute
3 May 2019 · 12:06 p.m.
The HyperBagGraph DataEdron: An Enriched Browsing Experience of Scientific Publication Databases
Xavier Ouvrard, University of Geneva / CERN
3 May 2019 · 12:08 p.m.
Improving robustness to build more interpretable classifiers
Seyed Moosavi, Signal Processing Laboratory 4 (LTS4), EPFL
3 May 2019 · 12:21 p.m.
Q&A - Seyed Moosavi
Seyed Moosavi, Signal Processing Laboratory 4 (LTS4), EPFL
3 May 2019 · 12:34 p.m.