Oh, I just wondered: how long did it take you to get into machine learning and all that world?

So, I really like maths, and that was okay, but when I decided to go to university I chose computer science because I wanted to do something more applicable. Even though I really love maths, I didn't like that you prove something and then just leave it there; that felt kind of unsatisfactory. So I thought, okay, I'm just going to do computer science. In second year I had my first machine learning course, and I was hooked. That was it. Since then I've stuck with it: I worked in Zurich for a while and now I'm at DeepMind. So I guess it was love at first sight, if you believe in that.
Are there some other questions? Okay, you first.
So, thank you for the presentation. I would like to ask: could you tell us at least some of the coolest pieces DeepMind is currently working on?

I would say: just look at our publications. One of the really nice things about DeepMind is that we publish a lot of the work we do. There's also NIPS coming up next week, and a lot of our publications will be out there, so follow that. There's also the really nice DeepMind blog, where they try to make things very visual. For example, I don't know if you know WaveNet, the text-to-speech system developed by DeepMind; it's absolutely mind-blowing, and there are samples on the website where you can hear how it works, plus a lot of other information. So I think the DeepMind blog is a pretty good place to look, and all the publications are on the DeepMind pages.
But I was thinking, with this, to take a bit more questions at the end. So this is done training. Okay, I've created the classifier, fit it, and now let's see how well it does: it's wrong less than two percent of the time. This is not state of the art; far from it, actually. On this problem we exceed human accuracy: we're better than humans at recognising digits on this dataset, and I think the best error is something ridiculously small, like zero point zero zero something. But I think two percent is pretty good for something that just ran on my computer right now, while I was answering a couple of questions.

Let's see what it actually does. I have here some code that first shows a random example, so on average it will probably get it right, given that it's not often wrong. So here is a one, and the model says: okay, this is a one. This is the image, so I think it's right. Here is another one: okay, this is a two. But now let's also look at an example where it gets it wrong, where the model says something but it's actually something else. So here is the image, and the model says it's a one, but it's actually a two. But, I mean, come on. So yes, it does get things wrong, but the data is not perfect either; for some of these images I would not have known myself what the digit is.
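A rough sketch of that demo flow, with scikit-learn's small 8x8 digits dataset standing in for MNIST (the dataset, the classifier, and the resulting few-percent error rate are illustrative stand-ins, not the exact model from the talk):

```python
# Train a digit classifier, measure its error rate, then look at one
# example it gets wrong, mirroring the demo above. Uses scikit-learn's
# built-in digits dataset so it runs anywhere without a download.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=2000)
model.fit(X_train, y_train)

preds = model.predict(X_test)
error_rate = (preds != y_test).mean()
print(f"error rate: {error_rate:.3f}")  # typically a few percent

# Find one example the model gets wrong, like in the demo.
wrong = [(p, t) for p, t in zip(preds, y_test) if p != t]
if wrong:
    pred, truth = wrong[0]
    print(f"model says {pred}, actually a {truth}")
```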
And another way to use neural networks with TensorFlow, the last one, which I will not show but just briefly mention: if you want to define your model at the matrix-multiplication level, kind of how I did it for the linear regression, you can, and there's really good support for a lot of the things people use in machine learning. This is really ideal for researchers. Obviously it will not be a couple of lines; it will take you way more time. But if you want to do machine learning research, if you really want to play with things, then you can go there, and a lot of people do that. It's up to you; there's no right and wrong in where you sit on that spectrum.
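What "defining the model at the matrix-multiplication level" can look like, sketched with NumPy standing in for the TensorFlow ops so the example stays self-contained (the data, weights, and learning rate are made up for illustration):

```python
# Linear regression written out by hand: the forward pass is a single
# matrix multiplication, y_hat = X @ W + b, and the gradients of the
# mean squared error are derived and applied explicitly.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_W = np.array([[2.0], [-1.0], [0.5]])
y = X @ true_W + 3.0 + rng.normal(scale=0.01, size=(200, 1))

W = np.zeros((3, 1))
b = 0.0
lr = 0.1
for _ in range(500):
    y_hat = X @ W + b              # forward pass: one matmul
    err = y_hat - y
    grad_W = X.T @ err / len(X)    # gradient of mean squared error w.r.t. W
    grad_b = err.mean()            # gradient w.r.t. the bias
    W -= lr * grad_W
    b -= lr * grad_b

print(W.ravel(), b)  # close to [2, -1, 0.5] and 3.0
```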
And just to conclude: I really think machine learning is changing the world. There are a lot of problems that we thought were not solvable five years ago; just think about AlphaGo. It's so impressive, and not only because it beat someone we thought machines would not be able to beat: it did it without handcrafted features. This is the big difference from Deep Blue, which beat Garry Kasparov at chess. Deep Blue knew about chess; a lot of human chess experts fine-tuned it. AlphaGo learned, kind of like we learn: it saw a lot of examples, played a lot of Go, and got really good at it. I think it's really mind-blowing to see that we live in a time when all these problems we said were never going to happen are actually happening.

So when you try to solve a problem, just think: could I be solving this with machine learning? And TensorFlow is a really good tool for that, because there's a great community out there; if you have a problem, you can just report it and people will fix it for you. It's really a great ecosystem to be part of, and a great community, and you can do so much with it, and with machine learning in general. That's it for me. Thank you very much.
So, thank you. Are there some questions?
Hello, thank you, it was very impressive. I just wanted to ask about errors. We humans learn from errors, so I was wondering: could you build a grammar-based detector, something that reads a text and tells you where the errors are? Would you try to build a machine learning model that learns the rules of the language, so to say?

Can you repeat that? I'm not sure I got it.

So, we have examples of classification, and what I want to ask is in terms of grammar: can a model learn the grammar of a language and use it to detect errors? Would you try to build a model for that?

Well, okay, so the question is about grammar, about human language.
So I think this has changed a lot in the last couple of years. Look at language models and what they generate: you can train a neural network, especially a recurrent network, one that deals with time, to learn the distribution of words, and you can ask it to predict. If you look at what such a model actually learned and what it predicts, it doesn't make that many grammatical mistakes at all. There are some really nice blog posts out there where people trained such a neural network on the Linux source code, and it actually learned to open and close brackets and all these kinds of things; it definitely learned that by itself. It's the same with language: yes, these models make some mistakes, but they're getting incredibly good at it. So we're getting quite far from what people had before, where it was: okay, this is the noun, and in German you have to put it there, and these things have to agree. Now you put the data in, and there are a lot of really good language models right now that will learn this for you.
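The recurrent networks mentioned here learn a distribution over the next token given the context. As a toy stand-in for that idea (far simpler than an actual recurrent network, and the corpus below is made up), a character bigram model already picks up local patterns from data:

```python
# Count which character follows each character in a corpus, then use
# the counts to predict the most likely next character.
from collections import Counter, defaultdict

corpus = "the model says this is a one. the model says this is a two."
follows = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    follows[cur][nxt] += 1

def predict_next(ch):
    """Most likely character to follow `ch` under the learned counts."""
    return follows[ch].most_common(1)[0][0]

print(predict_next("t"))  # 'h': it has picked up "th" from the data
```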
Thanks a lot for your presentation. Does TensorFlow have support for second-order optimisation methods?

Yes. I've never used that myself, but I'm pretty sure it's there; I would be surprised if it isn't. Check on GitHub, and if it's not, file a bug and probably someone will look at the issue, or send a pull request or something. So please do: this is one of the reasons it's such a good tool. If something is not there, file a feature request, and if people want it, it's going to be there.
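For context on what "second-order" means here (this is the general idea, not a claim about any specific TensorFlow API): second-order methods use curvature as well as the gradient, as in this Newton's method sketch on a toy function.

```python
# One Newton update: step by the gradient divided by the curvature.
# On a quadratic, a single step lands exactly on the minimum, where
# plain gradient descent would need many small steps.
def newton_step(x, grad, hess):
    return x - grad(x) / hess(x)

f = lambda x: (x - 3.0) ** 2 + 1.0   # quadratic with minimum at x = 3
grad = lambda x: 2.0 * (x - 3.0)     # first derivative
hess = lambda x: 2.0                 # second derivative (constant curvature)

x = 10.0
for _ in range(3):
    x = newton_step(x, grad, hess)
print(x, f(x))  # 3.0 1.0
```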
Are there more questions? Yes, you.
Let's assume I'm a complete beginner and I don't know anything about machine learning. What would you recommend? Where is the best point to start? That's the hardest part for most people, right?

Yeah, so the question is about how to start with machine learning. I think it depends a lot on your background: I would recommend different things for people who, let's say, know the formal maths and for people who don't know that much. So I think it depends.
I think there are a lot of courses out there, online courses, that are becoming pretty good, and there are also some very introductory blog posts. But something I really want to stress: there is nothing like playing with it yourself. Maybe just train one of these models, see that it doesn't work, and then analyse why not. For example, if you really want to learn, you can start with something like the model I showed here, but implement it yourself; it will go wrong the first time, and just try to understand why it goes wrong. There are some subtleties, and sometimes, for example, there are issues with the loss, or there are issues with numerical stability. I remember the first time my neural network didn't do anything at all, and I thought: oh god, I chose the wrong career, this is not going to work out for me. It turned out there was actually a known issue, one that I now know about and a lot of people know about, a numerical stability issue. And I know to avoid it now precisely because I learned it the hard way.
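A classic example of the kind of numerical stability issue described here (assumed for illustration; the talk doesn't say which issue it actually was): the softmax overflows for large inputs unless you shift by the maximum first.

```python
# Naive softmax overflows for large logits; the stable version shifts
# every input by the maximum, which leaves the result mathematically
# unchanged but keeps the exponentials small.
import math

def softmax_naive(xs):
    exps = [math.exp(x) for x in xs]      # overflows for large x
    s = sum(exps)
    return [e / s for e in exps]

def softmax_stable(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]  # largest exponent is exp(0) = 1
    s = sum(exps)
    return [e / s for e in exps]

print(softmax_stable([1000.0, 1000.0]))   # [0.5, 0.5]
# softmax_naive([1000.0, 1000.0]) raises OverflowError instead
```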
So I'm a really big fan of: okay, look at all these online resources that are out there. There are some really amazing things; the online courses on Coursera are a great example, and some well-known people in the field have online courses that are also quite good. It depends again where you are on the spectrum, but there is really nothing like getting your hands dirty and seeing: okay, I'm trying this and it doesn't work. And there are datasets out there: MNIST is available, you can play with that. There are some language datasets too. One of the nice things you can do is take one of the movie sentiment analysis datasets, get some movie reviews, and try to predict whether they're positive or negative: did the person like the movie or not? You can just play with that after you've learned a little bit. But, and I personally really believe this, there is nothing like getting it wrong and learning from that.
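A tiny sketch of that movie-review exercise (the reviews below are made up; a real sentiment dataset would have thousands): turn the text into word counts and fit a classifier on them.

```python
# Bag-of-words sentiment classification: vectorise reviews as word
# counts, then fit a logistic regression to predict positive/negative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

reviews = ["a wonderful, moving film", "great acting and a great story",
           "boring and far too long", "a terrible waste of time",
           "wonderful story, great film", "terrible, boring acting"]
labels = [1, 1, 0, 0, 1, 0]  # 1 = positive, 0 = negative

vec = CountVectorizer()
X = vec.fit_transform(reviews)
clf = LogisticRegression().fit(X, labels)

print(clf.predict(vec.transform(["a great, wonderful story"])))  # [1]
```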
Okay.
Where do you see the limit with all of this? Is there a limit?

I don't know. I think it's very hard to say, because we're moving very, very fast, and some of the things that I would not have thought possible a couple of years ago are possible now. Recently there was this paper that showed how to do what we call zero-shot learning in translation: basically, the model learns how to translate between Korean and Japanese even though it has never done that; it has only learned to translate Korean to English and English to Japanese, or some combination like that. And it actually works remarkably well. Two years ago I don't think I would have thought that was possible. So I think it's very, very hard to say, and I think we're still far, very far, from the limit of what we can do.

In particular, I think things like reinforcement learning will grow a lot, because the only thing most models do now is: okay, I give you this set of examples, learn from that. But that's not what we do, right? I go through life, I get rewards when I do something well, and I get punished when I don't do something well. My personal belief is that we will see a lot of changes come from that.
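The reward-and-punishment loop described here, in miniature (a two-armed bandit, the simplest reinforcement learning setting; the reward numbers are made up): the agent acts, observes a reward, and updates its value estimates from experience.

```python
# Epsilon-greedy two-armed bandit: mostly exploit the action that looks
# best so far, occasionally explore, and keep a running-mean estimate
# of each action's reward.
import random

random.seed(0)
true_reward = [0.2, 0.8]   # action 1 pays more, but the agent doesn't know
values = [0.0, 0.0]        # learned estimates
counts = [0, 0]

for step in range(2000):
    if random.random() < 0.1:                    # explore sometimes
        action = random.randrange(2)
    else:                                        # otherwise exploit
        action = 0 if values[0] > values[1] else 1
    reward = true_reward[action] + random.gauss(0, 0.1)
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]

print(f"learned values: {values[0]:.2f}, {values[1]:.2f}")
```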
That was a good question.
In the presentation, I think you mentioned hosted models, the Cloud Machine Learning API, which can serve models we build locally, even complicated models. Does it use TensorFlow under the hood?

Yes, so there is a Cloud Machine Learning API out there now, and as far as I know they do use TensorFlow under the hood. So the answer is yes.
Sorry, I lost track of whose question it was... oh, there we go.

What about doing different things with the same AI? Image recognition and also text to speech, for example?

So the question is about doing different things with the same AI. Definitely that needs to happen, and it is happening, slowly. But I definitely agree that we're not there yet. There are some things, for example people playing different games with the same model, completely different games. But it's still far from having one model that can do multiple things at once. I'm talking to you right now, but at the same time my brain is pumping blood and doing everything our bodies do, while our models are still at: okay, this is a cat, this is a dog. We're getting there, but definitely, I think these are the two things to stress: reinforcement learning, and having models that do multiple things.
It also depends what you mean by a model, because you can have a very simple model that says: if the problem is X, delegate to the model that knows how to do X very well; if the problem is Y, delegate to the model for Y. I agree with you that ideally you'd have one model that figures these things out on its own, and we are getting there. One of the other examples is translation: I do think we're going to have one model that knows how to translate between multiple languages, like the one I mentioned. That's not quite doing different tasks, but it is getting to the point where one model does more things than we initially envisioned.

Okay, let's take one follow-up question. Combining different models into one: do you think we can get to the point where we can replicate human behaviour, where computers act like real humans?
That's a hard question. Can we replicate it? Human behaviour is also a bit undefined, so it's very hard to say what you mean by human behaviour. I think we can get very far, but I don't know exactly what you mean by it. I do think we should strive for that; we should strive for general learning, and I think that would help us with a lot of things. I'm reasonably confident it will happen, but it's very hard to say, right? We're in a fast-moving field, and that's what gives me confidence that it will happen, but no one can tell you for sure. It's going fast, but we're still...

Conference program

Keynote
Jean-Baptiste Clion, Coordinator DevFest Switzerland
26 Nov 2016 · 9:40 AM
How to convince organization to adopt a new technology
Daria Mühlethaler, Swisscom / Zürich, Switzerland
26 Nov 2016 · 10:14 AM
Q&A - How to convince organization to adopt a new technology
Daria Mühlethaler, Swisscom / Zürich, Switzerland
26 Nov 2016 · 10:38 AM
Animations for a better user experience
Lorica Claesson, Nordic Usability / Zürich, Switzerland
26 Nov 2016 · 11:01 AM
Q&A - Animations for a better user experience
Lorica Claesson, Nordic Usability / Zürich, Switzerland
26 Nov 2016 · 11:27 AM
Artificial Intelligence at Swisscom
Andreea Hossmann, Swisscom / Bern, Switzerland
26 Nov 2016 · 1:01 PM
Q&A - Artificial Intelligence at Swisscom
Andreea Hossmann, Swisscom / Bern, Switzerland
26 Nov 2016 · 1:29 PM
An introduction to TensorFlow
Mihaela Rosca, Google / London, England
26 Nov 2016 · 2:01 PM
Q&A - An introduction to TensorFlow
Mihaela Rosca, Google
26 Nov 2016 · 2:35 PM
Limbic system using Tensorflow
Gema Parreño Piqueras, Tetuan Valley / Madrid, Spain
26 Nov 2016 · 3:31 PM
Q&A - Limbic system using Tensorflow
Gema Parreño Piqueras, Tetuan Valley / Madrid, Spain
26 Nov 2016 · 4:04 PM
How Docker revolutionized the IT landscape
Vadim Bauer, 8gears AG / Zürich, Switzerland
26 Nov 2016 · 4:32 PM
Closing Remarks
Jacques Supcik, Professor, Filière Télécommunications, Institut iSIS, HEFr
26 Nov 2016 · 5:11 PM
Rosie: clean use case framework
Jorge Barroso, Karumi / Madrid, Spain
27 Nov 2016 · 10:05 AM
Q&A - Rosie: clean use case framework
Jorge Barroso, Karumi / Madrid, Spain
27 Nov 2016 · 10:39 AM
The Firebase tier for your app
Matteo Bonifazi, Technogym / Cesena, Italy
27 Nov 2016 · 10:49 AM
Q&A - The Firebase tier for your app
Matteo Bonifazi, Technogym / Cesena, Italy
27 Nov 2016 · 11:32 AM
PERFMATTERS for Android
Hasan Hosgel, ImmobilienScout24 / Berlin, Germany
27 Nov 2016 · 11:45 AM
Q&A - PERFMATTERS for Android
Hasan Hosgel, ImmobilienScout24 / Berlin, Germany
27 Nov 2016 · 12:22 PM
Managing your online presence on Google Search
John Mueller, Google / Zürich, Switzerland
27 Nov 2016 · 1:29 PM
Q&A - Managing your online presence on Google Search
John Mueller, Google / Zürich, Switzerland
27 Nov 2016 · 2:02 PM
Design for Conversation
Henrik Vendelbo, The Digital Gap / Zurich, Switzerland
27 Nov 2016 · 2:30 PM
Q&A - Design for Conversation
Henrik Vendelbo, The Digital Gap / Zurich, Switzerland
27 Nov 2016 · 3:09 PM
Firebase with Angular 2 - the perfect match
Christoffer Noring, OVO Energy / London, England
27 Nov 2016 · 4:05 PM
Q&A - Firebase with Angular 2 - the perfect match
Christoffer Noring, OVO Energy / London, England
27 Nov 2016 · 4:33 PM
Wanna more fire? - Let's try polymerfire!
Sofiya Huts, JustAnswer / Lviv, Ukraine
27 Nov 2016 · 5:00 PM
Q&A - Wanna more fire? - Let's try polymerfire!
Sofiya Huts, JustAnswer / Lviv, Ukraine
27 Nov 2016 · 5:38 PM
Closing Remarks
Panel
27 Nov 2016 · 5:44 PM
