Transcriptions

Note: this content has been automatically generated.
00:00:04
[Audience question] ... all the problems related to privacy in machine learning. But what I am curious about is: practically speaking, are any of these big companies, like DeepMind, OpenAI, or Facebook, actually doing anything to preserve privacy or to deploy these frameworks, or do they just talk about it on their web pages and that is as much as they do? That is what I wonder, because it feels like the second.
00:00:29
Okay, well, thank you very much; let me answer your question. In fact, it is a very important question and a very important trend nowadays. Unfortunately, we observe more and more that these companies are not like five or ten years ago, when they published what they were doing; they are becoming more and more closed, and it is very difficult to say what will happen in the next years or how things are actually done. A lot of papers, especially from Google, were about federated learning, and they wanted to introduce it on different devices, so that is done. As far as I know, some groups in these companies are working on homomorphic encryption and on whether it is possible to do machine learning on encrypted data. But okay, I am not an expert there, just to mention all the issues related to complexity, et cetera.
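The federated learning idea mentioned here — keep the raw data on each device and share only model updates with a central server — can be sketched in a few lines. This is a toy illustration, not Google's actual protocol: the one-parameter least-squares model, the client data, and all function names are invented for the example.

```python
import random

def local_update(weights, data, lr=0.01):
    """One gradient-descent step on a 1-D least-squares model w*x ≈ y."""
    grad = sum(2 * (weights * x - y) * x for x, y in data) / len(data)
    return weights - lr * grad

def federated_round(global_w, client_datasets):
    """Each client trains locally; the server averages the resulting weights.
    Only the weights travel — the raw (x, y) pairs never leave the clients."""
    local_ws = [local_update(global_w, d) for d in client_datasets]
    return sum(local_ws) / len(local_ws)

# Three clients, each holding private (x, y) pairs from y = 3x plus noise
random.seed(0)
clients = [[(x, 3 * x + random.gauss(0, 0.1)) for x in range(1, 6)]
           for _ in range(3)]

w = 0.0
for _ in range(100):
    w = federated_round(w, clients)
print(round(w, 2))  # converges near the true slope 3
```

Averaging the locally updated weights is the core of federated averaging; production systems add secure aggregation and differential privacy on top of this skeleton.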
00:01:16
The general trend right now, of course, is papers on the security of AI-generated data, and there is a strong reason for that, if I may answer very quickly. As you know, many companies, especially OpenAI and Stability AI, produce a lot and a lot of data. From one side it is very cool, AI as a service; but at the same time, the data that is created gets fed back into these engines. Me personally, I believed that if you create more synthetic data, you can engage more and more data into the training of these systems; but apparently that is not the case, especially for the denoising diffusion models that are the hype right now.
00:01:56
Actually, if you include synthetic data into the training process along with the old data, you observe a distribution shift. That means that if you, first, include the generated data, then, second, the system's performance deteriorates. This can be explained from different perspectives, especially from the perspective of the diffusion model's manifold: by adding generated data, you add samples that, for example, do not represent the real world well.
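The drift described here — retraining a generative model on its own synthetic output until the distribution shifts away from the real data — can be simulated with the simplest possible "model": a Gaussian repeatedly refitted to its own samples. All parameters (sample size, generation count) are arbitrary choices for the demo.

```python
import random
import statistics

def refit_generation(mean, std, n):
    """Sample n synthetic points from the current model, then refit it to them.
    The real data is never seen again after generation 0."""
    samples = [random.gauss(mean, std) for _ in range(n)]
    return statistics.mean(samples), statistics.stdev(samples)

random.seed(1)
mean, std = 0.0, 1.0     # generation 0: the "real" data distribution
stds = [std]
for _ in range(200):     # each generation trains only on synthetic data
    mean, std = refit_generation(mean, std, n=50)
    stds.append(std)

# Finite-sample refitting lets estimation error accumulate generation after
# generation, so the fitted std drifts away from the true value of 1.0.
print(f"initial std: {stds[0]:.2f}, final std: {stds[-1]:.2f}")
```

Each generation's fitting error becomes the next generation's "ground truth", which is exactly the feedback loop that makes web-scraped training sets contaminated with synthetic data a concern.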
00:02:21
So the general trend we observe right now in these companies is how to detect, and how to prevent, the inclusion of AI-generated synthetic data into their models. And the solution being used right now returns to a technology that is some twenty years old, to which I also contributed before: digital watermarking. What they try to do is add invisible marks to the generated content, in order to distinguish AI-generated content from real content.
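The invisible-mark idea can be illustrated with the classic least-significant-bit scheme on a toy "image". Real deployed watermarks are far more sophisticated (they must survive compression, cropping, and editing); this sketch only shows the embed/extract principle, and all pixel and bit values are made up.

```python
def embed_bits(pixels, bits):
    """Hide a bit string in the least significant bit of each pixel value."""
    marked = [(p & ~1) | b for p, b in zip(pixels, bits)]
    return marked + pixels[len(bits):]

def extract_bits(pixels, n):
    """Read the watermark back from the first n pixels."""
    return [p & 1 for p in pixels[:n]]

image = [137, 200, 54, 90, 33, 17, 250, 64]   # toy 8-pixel grayscale "image"
mark = [1, 0, 1, 1, 0, 1, 0, 0]               # hypothetical provenance tag
stamped = embed_bits(image, mark)

assert extract_bits(stamped, len(mark)) == mark
# Each pixel changes by at most 1 grey level, i.e. visually invisible:
assert all(abs(a - b) <= 1 for a, b in zip(image, stamped))
```

Detectors at training time could then extract such a tag to flag content as AI-generated and keep it out of the training set.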
00:02:49
In terms of applying cryptographic principles, personally I am not aware of a lot of efforts being done along those lines. [unclear] ... basically a lot of the research is on how to fight, say, adversarial examples, so security; but privacy, in this sense, is completely not addressed, to the best of my knowledge. Okay, I don't know what exactly is going on inside.
00:03:15
[Moderator] Thank you very much. I think that's all the questions we have time for, so if you're...

Conference Program

(Keynote Talk) Privacy-Preserving Machine Learning : theoretical and practical considerations
Prof. Slava Voloshynovskiy, University of Geneva, professor with the Department of Computer Science and head of the Stochastic Information Processing group
Oct. 11, 2023 · 7:55 a.m.
2514 views
5 minutes Q&A - Privacy-Preserving Machine Learning : theoretical and practical considerations
Prof. Slava Voloshynovskiy, University of Geneva, professor with the Department of Computer Science and head of the Stochastic Information Processing group
Oct. 11, 2023 · 8:40 a.m.
Enabling Digital Sovereignty With ML
Vladimir Vujovic, Senior Digital Innovation Manager, SICPA
Oct. 11, 2023 · 8:44 a.m.
5 minutes Q&A - Enabling Digital Sovereignty With ML
Vladimir Vujovic, Senior Digital Innovation Manager, SICPA
Oct. 11, 2023 · 8:58 a.m.
Privacy-Enhanced Computation in the Age of AI
Dr. Dimitar Jechev, Co-founder and CTO of Inpher
Oct. 11, 2023 · 9:01 a.m.
139 views
5 minutes Q&A - Privacy-Enhanced Computation in the Age of AI
Dr. Dimitar Jechev, Co-founder and CTO of Inpher
Oct. 11, 2023 · 9:20 a.m.
Privacy by Design Age Verification & Online Child Safety
Dr. Onur Yürüten, Head of Age Assurance Solutions and Senior ML Engineer in Privately
Oct. 11, 2023 · 9:26 a.m.
5 minutes Q&A - Privacy by Design Age Verification & Online Child Safety
Dr. Onur Yürüten, Head of Age Assurance Solutions and Senior ML Engineer in Privately
Oct. 11, 2023 · 9:41 a.m.
(Keynote Talk) Biometrics in the era of AI: From utopia to dystopia?
Dr. Catherine Jasserand, KU Leuven (Belgium), Marie Skłodowska-Curie fellow at Biometric Law Lab
Oct. 11, 2023 · 11:06 a.m.
5 minutes Q&A - Biometrics in the era of AI: From utopia to dystopia?
Dr. Catherine Jasserand, KU Leuven (Belgium), Marie Skłodowska-Curie fellow at Biometric Law Lab
Oct. 11, 2023 · 11:42 a.m.
AI and Privacy
Alexandre Jotterand, CIPP/E, CIPM, attorney-at-law, partner at id est avocats
Oct. 11, 2023 · 11:48 a.m.
5 minutes Q&A - AI and Privacy
Alexandre Jotterand, CIPP/E, CIPM, attorney-at-law, partner at id est avocats
Oct. 11, 2023 · 12:06 p.m.
Preliminary Perspectives on the Ethical Implications of GenAI
Julien Pache, A Partner at Ethix and Venture Partner at Verve Ventures
Oct. 11, 2023 · 12:12 p.m.
5 minutes Q&A - Preliminary Perspectives on the Ethical Implications of GenAI
Julien Pache, A Partner at Ethix and Venture Partner at Verve Ventures
Oct. 11, 2023 · 12:30 p.m.
AI & Media: Can You Still Trust Information
Mounir Krichane, Director of the EPFL Media Center
Oct. 11, 2023 · 12:32 p.m.
5 minutes Q&A - AI & Media: Can You Still Trust Information
Mounir Krichane, Director of the EPFL Media Center
Oct. 11, 2023 · 12:54 p.m.
(Keynote Talk) Unlocking the Power of Artificial Intelligence for Precision Medicine with Privacy-Enhancing Technologies
Prof. Jean Louis Raisaro, CHUV-UNIL, assistant professor of Biomedical Informatics and Data Science at the Faculty of Biology and Medicine and the head of the Clinical Data Science Group at the Biomedical Data Science Center
Oct. 11, 2023 · 1:22 p.m.
5 minutes Q&A - Unlocking the Power of Artificial Intelligence for Precision Medicine with Privacy-Enhancing Technologies
Prof. Jean Louis Raisaro, CHUV-UNIL, assistant professor of Biomedical Informatics and Data Science at the Faculty of Biology and Medicine and the head of the Clinical Data Science Group at the Biomedical Data Science Center
Oct. 11, 2023 · 1:50 p.m.
Genomics, AI and Privacy
Julien Duc, Co-Founder and Co-CEO of Nexco Analytics
Oct. 11, 2023 · 2:01 p.m.
5 minutes Q&A - Genomics, AI and Privacy
Julien Duc, Co-Founder and Co-CEO of Nexco Analytics
Oct. 11, 2023 · 2:18 p.m.
How trust & transparency lead the success of an Idiap student's Master's project in fraud detection
Raphaël Lüthi, Machine Learning and AI Lead at Groupe Mutuel
Oct. 11, 2023 · 2:22 p.m.
5 minutes Q&A - How trust & transparency lead the success of an Idiap student's Master's project in fraud detection
Raphaël Lüthi, Machine Learning and AI Lead at Groupe Mutuel
Oct. 11, 2023 · 2:38 p.m.