Transcriptions

Note: this content has been automatically generated.
00:00:01
Hi, I'm an ESR based at Philips Research, and I'm jointly supervised there and at my university. The title of my PhD is acoustic speech monitoring for health applications.
00:00:23
I work on Philips Lifeline. Philips Lifeline is a huge business in the US: it is essentially a tele-health monitoring system in which the subjects wear a pendant-type device. When they have a problem, they press a button and it automatically connects them to a call centre. They describe the problem, and the call centre agents then help them with whatever is necessary. We had access to this conversation data between the call centre agent and the subject.
00:00:58
It has almost 1.5 million subscribers and has been running for many years, so there is a lot of speech data. I started working on the pre-processing part of it, which I won't be covering in these slides. I also had some problems getting the data moved from the US, because I'm working on the speech data of real people, so the data had to be de-identified before we could access it.
00:01:27
The aim of my project is to detect changes in the speech properties that reflect the respiratory behaviour of these subjects. We are initially focusing on respiratory problems, and later we will move on to cognitive problems. One reason for selecting respiratory problems is that breathing is the primary source of speech; another is that, in the data we have, thirty percent of the subjects have respiratory problems.
00:01:56
By monitoring respiration from conversational speech, we can get information about the breathing problems these people have, such as the respiratory rate and the lung capacity. We can keep analysing these for each subject over time and see how their health is deteriorating or improving. That is the concept behind it.
00:02:17
However, we don't really have this respiratory information for the speakers in the call centre data. To obtain that respiratory information, we designed our own experiments to collect ground-truth values. In these experiments we use respiratory inductance plethysmography belts, one over the abdomen and one over the chest. The summation of these two belt signals serves as an indicator of the breathing signal, and we record the speech at the same time.
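As a minimal sketch of how the two belt channels could be combined into a single breathing reference, assuming each channel is z-normalised before summing (the normalisation, variable names and sampling rate below are illustrative assumptions, not details from the talk):

```python
import numpy as np

def breathing_reference(abdomen: np.ndarray, chest: np.ndarray) -> np.ndarray:
    """Combine the two RIP belt channels into one breathing reference signal.

    Each channel is z-normalised so that neither belt dominates the sum; this
    normalisation choice is an assumption, not necessarily what the study used.
    """
    abdomen = (abdomen - abdomen.mean()) / (abdomen.std() + 1e-8)
    chest = (chest - chest.mean()) / (chest.std() + 1e-8)
    return abdomen + chest  # summation of the two belts ~ breathing signal

# Example with synthetic data: two noisy sinusoids standing in for belt traces.
fs = 100  # Hz, hypothetical belt sampling rate
t = np.arange(0, 60, 1 / fs)
abdomen = np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.random.randn(t.size)
chest = 0.7 * np.sin(2 * np.pi * 0.25 * t + 0.3) + 0.1 * np.random.randn(t.size)
breath = breathing_reference(abdomen, chest)
```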
00:02:43
We also record the belt sensor data so that we can analyse the respiratory motion of each person later. Essentially, the recording protocol has five parts. The first is a general conversation with the subject, which mostly gives us spontaneous speech. The second is a paragraph read-out: we have a phonetically balanced paragraph that the subject reads for about five minutes, and we collect the data for that. Then there are prolonged vowels, to estimate lung volume and that kind of thing, and then normal breathing, so that we have comparison data. The final part is a paragraph read-out after exercise, because we also want to simulate a respiratory-problem-like situation for normal, healthy subjects. This is the setup we are using to collect the data.
00:03:26
If you look at this plot of the audio and the sensor signal: this is the speech data, and whenever there is a stretch of speech you can see that the sensor registers an intake of breath, so the sensor value increases and then keeps declining while the person speaks. That is how it looks.
00:03:45
We propose to use deep learning to predict the breathing signal directly from speech: since breathing is the primary source of speech, the breathing information should be embedded in it. We want a deep learning model that can estimate the breathing pattern from unseen speech. Using the data we have collected from the subjects, we map the speech data to the respiratory belt sensor signal and train the model; then we give it the test data, and the goal is to estimate the breathing pattern with the trained model and see how accurately it tracks the true breathing pattern.
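To make the speech-to-breathing mapping concrete, here is a minimal sketch of a recurrent regression model in PyTorch. The architecture, layer sizes and feature dimensions are illustrative assumptions; only the overall idea (acoustic feature frames in, one breathing value per frame out, trained with a regression loss against the belt signal) comes from the talk.

```python
import torch
import torch.nn as nn

class BreathingEstimator(nn.Module):
    """Map a sequence of acoustic feature frames to a breathing signal.

    Hypothetical architecture: a bidirectional LSTM over log-mel frames
    followed by a linear layer that outputs one breathing value per frame.
    """
    def __init__(self, n_mels: int = 40, hidden: int = 128):
        super().__init__()
        self.rnn = nn.LSTM(n_mels, hidden, num_layers=2,
                           batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, frames, n_mels) -> (batch, frames) breathing estimate
        out, _ = self.rnn(feats)
        return self.head(out).squeeze(-1)

# Training step sketch: regress onto the belt-derived breathing signal (MSE loss).
model = BreathingEstimator()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

feats = torch.randn(8, 400, 40)   # dummy batch: 8 windows x 400 frames x 40 mels
target = torch.randn(8, 400)      # breathing reference resampled to the frame rate
optimiser.zero_grad()
loss = criterion(model(feats), target)
loss.backward()
optimiser.step()
```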
00:04:29
Here are a few examples: this is the breathing signal measured by the belt sensor, and this is the output we got from our RNN-based model, shown together for comparison; the match is quite good.
00:04:47
We have been investigating which model works best, which representation of speech carries all the required information, and what the right time window would be. To see how well our models are working, we use these metrics: the mean squared error between the estimated breathing signal and the actual breathing signal, the correlation between them, and the error in the estimated breathing rate.
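These metrics could be computed roughly as follows; in this sketch the breathing rate is obtained by peak counting, and the sampling rate and minimum peak distance are assumptions made only for illustration:

```python
import numpy as np
from scipy.signal import find_peaks

def evaluate(estimated: np.ndarray, reference: np.ndarray, fs: float) -> dict:
    """MSE, Pearson correlation, and breathing-rate error between two signals.

    `fs` is the sampling rate of the breathing signals (assumed equal for both).
    Breathing rate is estimated by counting peaks; the minimum peak distance of
    1.5 s is an illustrative choice, not a value from the talk.
    """
    mse = float(np.mean((estimated - reference) ** 2))
    corr = float(np.corrcoef(estimated, reference)[0, 1])

    def rate_bpm(signal: np.ndarray) -> float:
        peaks, _ = find_peaks(signal, distance=int(1.5 * fs))
        return 60.0 * len(peaks) / (len(signal) / fs)

    return {
        "mse": mse,
        "correlation": corr,
        "breathing_rate_error_bpm": abs(rate_bpm(estimated) - rate_bpm(reference)),
    }
```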
00:05:11
This was also the framework that we used for the Interspeech conference: we submitted a paper based on it and it got accepted.
00:05:21
These are the results. We found that a log-mel spectrogram representation of the speech data, with a four-second time window and RNN models, gave the best prediction.
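For concreteness, the log-mel representation over four-second windows could be computed along these lines with librosa; the sampling rate, FFT size, hop length and number of mel bands here are illustrative assumptions rather than the values used in the paper:

```python
import numpy as np
import librosa

def logmel_windows(path: str, win_seconds: float = 4.0,
                   sr: int = 16000, n_mels: int = 40) -> np.ndarray:
    """Cut an audio file into four-second chunks of log-mel spectrogram frames.

    Returns an array of shape (n_windows, frames_per_window, n_mels).
    All parameter values are assumptions made for illustration.
    """
    audio, _ = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_fft=400,
                                         hop_length=160, n_mels=n_mels)
    logmel = librosa.power_to_db(mel).T              # (frames, n_mels)
    frames_per_window = int(win_seconds * sr / 160)  # 10 ms hop -> 400 frames
    n_windows = logmel.shape[0] // frames_per_window
    return logmel[: n_windows * frames_per_window].reshape(
        n_windows, frames_per_window, n_mels)
```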
00:05:34
From that we get the breathing signal estimate, and we also estimated the breathing rate from it. We found that the mean breathing rate during conversation is almost half that of the normal breathing rate, and we also have information on the normal breathing rate of the individual subjects.
00:05:51
That is one interesting result we got. Also, the specificity of breath event detection was almost 91.2 percent, which is pretty good, and the error in the breathing-rate estimation was also low, at just over one breath per minute.
00:06:04
After selecting the four-second window and finalising the model that worked best for the prediction, we then ran the analysis over time, and we have submitted this work as well.
00:06:14
We have also been working on improving the models: we have started working on attention-based models, in order to also take the gradient of the sensor data into account in the model.
00:06:24
That is one thing we are working on. We also have recordings of normal speech and of the same speech after exercise, which is a kind of simulated respiratory distress, and we are trying to build a classifier to detect it; in effect, a classification of the normal-speech data versus the respiratory-problem-like data. We are also trying to quantify, on an effort scale, how much the respiratory effort is going up, and that is another thing we are working on.
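As a sketch of what such a classification might look like, assuming simple summary features computed from the estimated breathing signal and an off-the-shelf classifier (both the features and the classifier below are assumptions for illustration, not the approach described in the talk):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def breathing_features(breath: np.ndarray, fs: float) -> np.ndarray:
    """Simple summary features of one window of an estimated breathing signal.

    A rate proxy (zero crossings per second), an amplitude spread and a mean
    absolute slope; these particular features are illustrative assumptions.
    """
    centered = breath - breath.mean()
    crossings = np.sum(np.abs(np.diff((centered > 0).astype(int))))
    return np.array([
        crossings / (len(breath) / fs),    # crossings per second ~ rate proxy
        breath.std(),                      # breathing depth proxy
        np.mean(np.abs(np.diff(breath))),  # slope / effort proxy
    ])

# Dummy example: X holds one feature row per speech window,
# y is 0 for normal speech and 1 for speech recorded after exercise.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = rng.integers(0, 2, size=100)
print(cross_val_score(LogisticRegression(), X, y, cv=5).mean())
```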
00:06:58
Regarding the secondments: I am planning to do my secondment at a research institute from November to January, and the topic there will again be the relationship between respiratory parameters and physiological signals such as the ECG; we are still getting into that topic. The other secondment I have planned is at the University of Oxford in the fall of 2020.
00:07:24
Regarding training and conferences: I am of course attending all the training given within the project, and we also have corresponding trainings at Philips. I also attended a GDPR-related course and went through the internal committee for biomedical ethics, because once we started our own recordings we had to go through all of these procedures. I will be attending the Interspeech conference, and we also submitted a paper to an internal conference within Philips, which got accepted, so that is another good outcome.
