Note: this content has been automatically generated.
I will take a bit of a different approach compared to my colleagues, who presented how to make research reproducible: I will actually take you through my own journey when I tried to reproduce someone else's research.
The topic is remote photoplethysmography. Photoplethysmography basically consists in measuring your heart rate. The non-remote way, the way it is measured in a hospital, is that you have this little clip on your finger, a light is projected through it, and it measures the amount of blood in your finger: when your heart is pumping, the volume of blood in your finger changes. Then, based on the light that is either transmitted or reflected, you get a waveform like the one you see here, and that is actually your heart rate.
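To make that concrete, here is a minimal sketch (my own illustration, not something shown in the talk) of turning such a waveform into a heart rate by counting pulses; the sampling rate and the synthetic signal are assumptions for the demo.

import numpy as np
from scipy.signal import find_peaks

fs = 64.0                              # sensor sampling rate in Hz (assumed)
t = np.arange(0, 30, 1.0 / fs)         # 30 seconds of data
ppg = np.sin(2 * np.pi * 1.2 * t)      # synthetic 72-bpm pulse, for the demo only

# one peak per heart beat; impose a refractory period of about 0.3 s
peaks, _ = find_peaks(ppg, distance=int(0.3 * fs))
bpm = len(peaks) / (len(ppg) / fs) * 60.0
print(f"estimated heart rate: {bpm:.0f} bpm")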
Now, remote photoplethysmography, which I will call rPPG from now on because it is simpler, is doing the same thing but using a regular camera and from a distance. There is one work that showed that there are actually small changes in the colour of your skin when this same process happens: when the heart pumps blood that goes through to your face, there is some variation in colour. So various algorithms were proposed to try to recover the heart rate from these colour variations. The very first one is the one I show here; you cannot really see it, but basically they showed that in the green channel you can see variations corresponding to the heart rate.
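As a rough illustration of the idea (mine, not the authors' code), this is what extracting such a green-channel signal from a video could look like; the input file name and the fixed face box are assumptions, and a real system would track the face instead.

import cv2
import numpy as np

cap = cv2.VideoCapture("face_video.avi")   # hypothetical input video
x, y, w, h = 200, 100, 120, 160            # assumed, fixed face bounding box
green = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    roi = frame[y:y + h, x:x + w]
    green.append(roi[:, :, 1].mean())      # channel 1 is green (OpenCV uses BGR)
cap.release()

green = np.asarray(green)
green = (green - green.mean()) / green.std()   # the pulse lives in these tiny variations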
Right now you have a lot of different applications for smartphones where you just look at your phone and it says "okay, your heart rate is this". Why are we interested in this? We are doing biometrics, and, as was mentioned before, we work on spoofing attacks; we are trying to address this for face recognition, and that is what presentation attack detection actually is. As you can see in the picture, if you want to trick a face recognition system you can present it a photo or wear a mask, and what we hope is that when you try to get the heart rate from a photograph or from a mask, you will not get one, or at least not one that makes sense.
The problem, or rather the two problems, we have are the following. First, all the algorithms that compute the heart rate filter the signal at the end of the process to make sure the heart rate falls in a plausible range, basically from forty bpm to two hundred and something, so whatever you feed them, you get a result that makes sense. The second problem is that the face recognition databases do not come with a ground-truth heart rate synchronised with the video sequences.
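A toy experiment (my illustration of the first problem, not from the talk) shows why this is worrying: band-limit pure noise to the plausible range and take the spectral peak, and you still get a "heart rate" that looks fine.

import numpy as np

rng = np.random.default_rng(0)
fps = 20.0                                   # camera frame rate (assumed)
noise = rng.standard_normal(int(60 * fps))   # one minute of pure noise

spectrum = np.abs(np.fft.rfft(noise))
freqs = np.fft.rfftfreq(noise.size, d=1.0 / fps)

band = (freqs >= 40 / 60.0) & (freqs <= 220 / 60.0)    # keep roughly 40-220 bpm
peak_hz = freqs[band][np.argmax(spectrum[band])]
print(f"'heart rate' of pure noise: {60 * peak_hz:.0f} bpm")   # plausible by construction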
So we did a bit of a survey on what was going on in rPPG, and it is really a trending topic: the first paper appeared around 2008, and since then the number of publications has really been increasing; I think in 2015 alone more than fifty papers were published on the subject. But the main problem is that results are usually published on private, proprietary databases, which are rather small, something like ten subjects, with not much variability, and people just say "okay, we have twenty video sequences, we apply our algorithm to these twenty video sequences", with no strict protocol for evaluation.
Actually, there is only one publicly available database that has both video sequences of the face and the synchronised heart rate, and in the literature, at least at the time, we found only one algorithm that was reporting results on this database.
So, before going into what we did, let me summarise the problems we had: this research area is not really reproducible at the moment because, as I said, there is no standard protocol for evaluation; people just take their video sequences and report either the average heart rate or the instantaneous heart rate; and the main problem is that there is almost no publicly available data.
The solution we tried to implement was, first, to use Bob to develop all the algorithms that we tried; we also collected a database that captures video sequences of a face together with the heart rate; and we also devised some experimental protocols.
So that is the first effort we made: we recorded a database with forty subjects, which to my knowledge makes it the largest publicly available one in terms of number of subjects. Each individual was recorded in four sequences: two with the lights turned on, and two relying only on the light coming from the window, so you have this side illumination. Each sequence lasts for around one minute.
Then we selected this algorithm as a baseline because, as I told you before, it is the only one that was presenting results on the publicly available database, which we also downloaded.
Basically, the way it works is that you track a region of the face and you compute the mean colour over this area. You also extract the background and use it to correct the colour, to remove the influence of the global illumination, so that the variations that remain are only due to the blood flow. Then you compensate for the motion that can occur when people are speaking and that kind of thing, you apply a bunch of different filters, and you get a signal like this one, which you Fourier transform; you then detect the highest frequency, which is supposed to correspond to the heart rate. So that is basically how the algorithm works.
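Here is a much simplified sketch of that kind of pipeline (my approximation, not the authors' code): it assumes the per-frame mean green value of the face and of the background are already available, and a plain least-squares rectification stands in for the adaptive filtering and motion compensation of the original method.

import numpy as np
from scipy.signal import butter, filtfilt

def estimate_bpm(face_g, bg_g, fps):
    face_g = np.asarray(face_g, dtype=float)
    bg_g = np.asarray(bg_g, dtype=float)

    # 1. illumination rectification: remove the part of the face signal
    #    explained by the background (least squares here, adaptive
    #    filtering in the original paper)
    gain = np.dot(face_g, bg_g) / np.dot(bg_g, bg_g)
    pulse = face_g - gain * bg_g

    # 2. band-pass to the plausible heart-rate range (0.7-4 Hz, i.e. 42-240 bpm)
    b, a = butter(4, [0.7, 4.0], btype="bandpass", fs=fps)
    pulse = filtfilt(b, a, pulse - pulse.mean())

    # 3. Fourier transform and take the strongest frequency as the heart rate
    spectrum = np.abs(np.fft.rfft(pulse))
    freqs = np.fft.rfftfreq(pulse.size, d=1.0 / fps)
    return 60.0 * freqs[np.argmax(spectrum)]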
We reached out to the authors, and they were kind enough to share some parts of the code, basically three building blocks. They were not able to give us the code for the tracking and the background extraction, because it came from a former colleague who had left, that kind of thing, but they also provided us with some data: the mean colour of the face and of the background regions along some of the video sequences. So, as I said, the tracking and background extraction part was missing, and we implemented it ourselves based on what was described in the paper. Their code was in MATLAB, which was not ideal because it is proprietary software.
So basically what we did was to re-implement the code they gave us in Python, and with their code and their data we could check that what we translated from MATLAB to Python was doing exactly the same thing. That was a good thing.
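The kind of check meant here could look like this (a sketch with made-up file and module names): run the Python port on the inputs the authors shared and compare it against the output of their MATLAB code.

import numpy as np
from my_rppg_port import illumination_rectification   # hypothetical Python port

face_g = np.load("authors_face_colour.npy")       # data shared by the authors (assumed file names)
bg_g = np.load("authors_background_colour.npy")
reference = np.load("matlab_output.npy")          # what their MATLAB code produced

ours = illumination_rectification(face_g, bg_g)
assert np.allclose(ours, reference, atol=1e-6), "Python port does not match MATLAB"
print("Python port reproduces the MATLAB output")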
Now, when we actually ran the code we wrote on the data, you can see that there is a difference in performance. Here is the root mean square error between the true heart rate and the detected heart rate, and here is the correlation, over the whole database, between what was detected by the algorithm and the ground truth. The published results are a bit better than the ones we obtained, and we can say fairly confidently that the difference comes only from the tracking and background extraction procedure, because it is the only difference we had in the code.
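For reference, the two figures of merit are the standard ones; the numbers below are made up just to show the computation.

import numpy as np
from scipy.stats import pearsonr

def rmse(estimated, reference):
    estimated, reference = np.asarray(estimated), np.asarray(reference)
    return np.sqrt(np.mean((estimated - reference) ** 2))

# hypothetical per-sequence heart rates, in bpm
hr_true = [72.0, 61.5, 88.0, 70.2]
hr_est = [75.1, 59.0, 91.3, 68.7]
print("RMSE:", rmse(hr_est, hr_true))
print("Pearson r:", pearsonr(hr_est, hr_true)[0])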
But an important question to ask ourselves is: how good are those results anyway? The algorithm was tested on only one database, and since the source code was not fully provided, it could be that ours, and especially the tracking part for instance, is wrong.
So we did other experiments, with other algorithms that we also implemented, to have a basis for comparison; we also tested on other databases, including the one we recorded; and we devised some strict experimental protocols. Basically, we divided every database into a training set and a test set, so that you can tune the parameters on your training set and then assess the performance on your test set. The way it was done before was just: I have a bunch of video sequences, I run my algorithm, I tweak the parameters, and when I reach the best results I am done and I am happy.
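In code, the stricter protocol amounts to something like the following sketch (the helper functions and database name are hypothetical): the free parameters are chosen on the training set only, and the test set is touched once with that single configuration.

from itertools import product

from my_rppg_port import evaluate_rmse, load_split   # hypothetical helpers

train_set, test_set = load_split("cohface")           # assumed database name

# toy grid over two free parameters: filter order and cutoff frequency
best_params = min(
    product([2, 4, 6], [3.0, 4.0, 5.0]),
    key=lambda p: evaluate_rmse(train_set, order=p[0], cutoff=p[1]),
)

# the test set is used once, with the parameters chosen on training data
order, cutoff = best_params
print("test RMSE:", evaluate_rmse(test_set, order=order, cutoff=cutoff))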
Now you can see that the results, I just put the correlation here, are really different. In particular, if you go from the database the results were published on, where they are quite okay, to our database, you can see that they do not mean anything: basically this number means that there is no correlation whatsoever between the detected heart rate and the ground truth. On the other hand, if you take another algorithm, you get completely different results: on one database it is not as good as the first one, while on the other one the two are quite comparable. So the basic question is: which algorithm would you use? Well, you still do not know.
To conclude, I would just make two points. First, reproducing published material is really not trivial, even when you have the source code, and even when you can discuss with the authors; we had some really nice exchanges, and still, being able to reproduce exactly what they did was quite difficult. There are a lot of details to figure out: in the paper you cannot find all the hidden parameters and implementation details, and some functions simply have different default modes of operation depending on whether you are in MATLAB, in Python, or whatever else.
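One concrete example of such a diverging default (my illustration, not one cited in the talk): MATLAB's std() normalises by N-1 by default, while NumPy's normalises by N unless you ask otherwise.

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
print(np.std(x))           # 1.118..., divides by N (NumPy default)
print(np.std(x, ddof=1))   # 1.290..., divides by N-1, matching MATLAB's default std(x)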
And, most importantly, the conclusion is that what you see in papers should not be blindly trusted, because when you read the paper you think "oh well, the results sound good", and then when you try to reproduce them, as well and as honestly as you can, you do not necessarily find the same results. And that's it.

Conference program

Welcome
Sébastien Marcel, Senior Researcher, IDIAP, Director of the Swiss Center for Biometrics Research and Testing
24 March 2017 · 9:17 a.m.
Keynote - Reproducibility and Open Science @ EPFL
Pierre Vandergheynst, EPFL VP for Education
24 March 2017 · 9:20 a.m.
Q&A: Keynote - Reproducibility and Open Science @ EPFL
Pierre Vandergheynst, EPFL VP for Education
24 March 2017 · 9:54 a.m.
VISCERAL and Evaluation-as-a-Service
Henning Müller, prof. HES-SO Valais-Wallis (unité e-health)
24 March 2017 · 11:35 a.m.
Q&A - VISCERAL and Evaluation-as-a-Service
Henning Müller, prof. HES-SO Valais-Wallis (unité e-health)
24 March 2017 · 12:07 p.m.
