Note: this content has been automatically generated.
It's really great to be back in Switzerland and to talk about one of my favourite tools out there, so I'm definitely going to enjoy this and I hope you will too. Feel free to interrupt me if you have questions; there's plenty of time for questions at the end, but if there's something really strange, just go for it.
So, for those of you who don't know what TensorFlow is, as opposed to what I'm going to spend the next half an hour talking about: TensorFlow is an open source machine learning library, and it's primarily aimed at deep learning, this new branch of machine learning that has basically been sweeping the world with its ability to solve new problems. A lot of problems that were not solvable five years ago have already been solved using deep learning. One of the really core features of TensorFlow, and I really cannot stress this enough, is that it is meant both for research and production. The same framework works very well for research, but you can also take the model that the researcher made and just deploy it. And it comes with a very flexible license, so you can take it and use it in your product or your project however you like.
Just as a brief introduction, I'm going to talk about what machine learning has become, and especially how that relates to what you could potentially solve with machine learning, with a couple of examples. As you probably know, Google's mission is to organise the world's information and make it accessible and useful, and it turns out that deep learning has become very crucial towards that mission. If we look at the number of projects that have been using deep learning in the last couple of years, this has grown so much, across a lot of areas and products that you probably really like.
I'm just going to go through a couple of these briefly. I have to mention AlphaGo; you all probably know it, and it's really amazing. For a while, people did not expect a machine to be able to beat a 9-dan player without being given a handicap; a handicap means that you are actually favouring the machine when you're playing. But this was a fair game, and AlphaGo beat one of the best players in the world four to one, so that's really impressive.
Moving on to the product side, I hope you all use Smart Reply in Inbox. I really like it; Smart Reply is a really nice feature that saves you time, especially when you're on the go. When I'm on mobile, typing an email while hopping onto the train, almost missing it, or onto the tram, I just want to press something, and then, OK, the email is sent. Smart Reply leverages machine learning and NLP to predict, from the incoming email, firstly whether it should suggest an automated reply, and secondly what that reply could be. This feature has been really popular, accounting for around ten percent of responses sent on mobile.
Clearly, recommendation and clustering are a really good fit for machine learning, and Google Play Music makes use of that. Again, one of my favourites is Google Photos. We have more and more powerful phones; I remember, when I was younger, always having to copy my photos onto my computer because I never had space, but now I have thousands and thousands of pictures on my phone, and I just want to search for a picture. For example, I took some really nice pictures when I travelled to Japan, and when I want to show my friends, I don't have to scroll through thousands and thousands of pictures; I can just say, OK, show me the pictures I took in Japan, and it will. This is really cool, and it's made possible by machine learning, and deep learning in particular. Also related to travel, and to vision, is this idea that when you're travelling, let's say I have no clue how to read Japanese script, so I can't even type it in to ask what something means; instead, I can just point my phone at it and it will automatically translate.
These are just a couple of examples; just think, when you're trying to solve a problem, whether machine learning is applicable to what you're trying to solve. If it is, and you decide to use machine learning, then you have a couple of sources of complexity to deal with. Firstly, you're going to train your model on maybe a couple of CPUs, or maybe GPUs, or maybe some specialised hardware, but your users are not going to use the same platforms: you might want to deploy on mobile, and there are multiple platforms there, Android, iOS and so on. These models are also getting really big these days, so you might want to take advantage of a cluster to train your model. You also want the expressivity to be able to specify rich models. The image that I have here is a state-of-the-art, or almost state-of-the-art, image recognition model, and it has a lot of layers, as you can see. You want to be able to specify this kind of model easily; you don't want to have to take months to write such a thing, because then you're really slowed down. The really good thing is that TensorFlow handles all of this for you, so that you can focus on what you're trying to solve. You don't have to care about making sure your model is portable everywhere; you just want to solve your problem and share it with users, or play around with it, or whatever you want to do. TensorFlow deals with all these sources of complexity.
How does it do that, all these really nice things for us? One of the core things to know about TensorFlow, and to always think about, is that there are two main concepts. One is the concept of a tensor; a tensor is just a multi-dimensional array. If you have a vector, that's a tensor; a matrix is a tensor; if you go to three dimensions, that's also a tensor; if you go to n dimensions, that's also a tensor.
And these tensors actually flow through a computational graph of operations, hence the name TensorFlow. This is how computation is described in TensorFlow, so when you're programming in TensorFlow, or structuring your problem, it's really important to think about what your computational graph looks like. Most of the time, when you define a machine learning problem, you want to optimise a loss, and you define that loss in terms of mathematical optimisation; that will become part of the graph. The general procedure when working with TensorFlow is to define your graph, as I said, usually in a high-level language, so just Python; then this graph is compiled and optimised by the TensorFlow execution system and executed, either the entire graph or only part of it. For example, you might have training and testing graphs, or different things that you might want to do; at serving time you don't want to execute the part that does the training. The graph is then executed on the available devices.
How does this work? The core of TensorFlow is in C++, but if you don't want to deal with that, you don't have to: you can specify your computation either in C++ or in Python. These are the officially supported front ends, but I think there are some others available right now. This is one of the really nice things about being open source: there's a great community out there that really helps with whatever you're trying to do; probably someone is also thinking about doing it, and there are a lot of TensorFlow users out there who can help each other, and this is building a really vibrant community.
Now we're getting into one of the sources of complexity that I mentioned before, and that is distribution. These models that you're training, for example for image recognition, can be quite big, as they should be, and you might want to take advantage of the two or three machines that you have, or maybe the twenty in your cluster, or of multiple GPUs. You want a flexible platform that is able to deal with that; you don't want to spend months trying to scale up your training from one machine to multiple machines. TensorFlow deals with this quite nicely: you can switch from the CPU to the GPU just with the change of a flag, without having to change your code. GPUs are really crucial to deep learning, because the kind of computation that we do when we train neural networks is very well suited to GPUs, and this is actually one of the reasons why we have all these breakthroughs: training a model on a GPU is much faster than on a CPU, and without having to be a hardware expert you can just change where your neural network runs and speed up your training. You can also use multiple cores or multiple graphics cards, and if you have a cluster of multiple machines you can take advantage of that too, and really scale down your experiment time. Now let's look at some numbers.
I know I'm selling it here, but I want to prove that it actually does this. This is a graph of training Inception, the really big model that I showed, which does image classification: we give it an image and it recognises whether a dog, or some other object, is in that image. Let's say I want to train this model until I reach a precision of 0.5, so I'm right half of the time, and let's see how long that takes with one GPU versus ten or fifty GPUs. If I do this with one GPU, it's going to take around eighty hours to train this model, which is quite some time; but if I use fifty GPUs, then it's only 2.6 hours. That lets people experiment a lot faster, advance the field faster, and test their ideas much faster, so that's a really big difference in scale. Between ten and fifty GPUs you get around a four times improvement at a fixed precision, because that's what you're trying to get: as high a precision as possible.
As you can see here, you don't get linearly more performance as you scale the number of machines, because there is an overhead of communication. You would expect this, especially if you have a cluster, where the communication is limited by bandwidth; but even with two GPUs on the same machine there is a little bit of overhead. This graph is just trying to show you how distributed training scales: as you increase the number of workers, if you have around a hundred workers, you get around a fifty-six times speed-up over using only one worker. So it's definitely way faster, but expect this distribution overhead, which comes from the fact that you have only one network, with one set of parameters, that you have to synchronise: you have to communicate between the workers what the current set of parameters of the model is.
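A quick back-of-the-envelope check of the numbers quoted above (the hours and worker counts come from the talk; the arithmetic is just illustrative):

```python
# Training Inception to 0.5 precision: ~80 hours on 1 GPU,
# ~2.6 hours on 50 GPUs, per the figures in the talk.
hours_1_gpu = 80.0
hours_50_gpus = 2.6

speedup = hours_1_gpu / hours_50_gpus   # ~30.8x, not a perfect 50x
efficiency = speedup / 50               # ~0.62: the communication overhead

# With ~100 workers the talk quotes a ~56x speed-up, i.e. ~0.56 efficiency:
print(speedup, efficiency, 56 / 100)
```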
Another source of complexity that I mentioned previously is this idea of heterogeneous systems and complex architectures. I mentioned GPUs just now, and distributed systems, but you might also want to put your model on a Raspberry Pi, why not, or we also have certain machine-learning-specialised hardware which you might want to use. You don't want to have to tweak and change your code every time to be able to run your model on all these platforms. TensorFlow does that for you: if you have a model that you trained on ten or a hundred machines, you can just take that model and put it on an iPhone or an Android device, and that's really good. Especially for mobile support, there are a lot of resources out there; there's even a Google blog post about how to take an already trained model, load it, and create an app that recognises what is in an image. This is really nice, because for this model you don't even have to train the network: TensorFlow comes with some trained networks, so you can just download the network, plug it into your app, and I think in half an hour you can have an app that is able to recognise things, so that's pretty cool.
So why are all these things so important? In the end, what we care about is deep learning bringing these great features to users. But the way it usually goes is this: a researcher has a really great idea, it works in research, but then you have to port it to production code, and then you have to make sure it works on multiple platforms, and that takes so much time that users are always behind compared to the deep learning research, and they're not benefiting from the best possible things they could benefit from. That's the really cool thing about TensorFlow: because you have all these things available, you can go very fast from research to production. One thing that I find really nice is that it only took four months to take something that people weren't even sure would work to actually launching it to users, and that's a pretty good thing to have in mind when you're thinking about these things.
Now I'm going to show you a very simple example of how to do linear regression in TensorFlow. If you want to do linear regression, maybe this is not the best way; it's a pedagogical example. It's not that it doesn't work, but there are better ways of doing it, just as a disclaimer; I'm just trying to show the basic features of TensorFlow. Feel free to interrupt again if you have any questions about this.
This is my problem statement: I have a couple of points, and I'm trying to fit a line to these points. What's the best line that I could fit? In this case I know what the line is, because I cheated: I drew the line first and then I generated the data from it. But let's assume we don't know the line; we have the points, and I'm trying to find this line. You then want to use this, for example, for testing: if I give you a new point on the x axis, what should you predict at that point? If I tell you there should be a point at this coordinate, well, I have learned this line, so I'm going to predict this point here on the line. In order to learn the line, you have to find the slope and the intercept of the line, so it's defined by two variables. This is the line equation; W and b are the things that we're going to learn. x is the input, the thing on the x axis, and y is what you predict. At training time you know what y should be, because you have the points: for this x you know that your y should be here, for that x you know that your y should be here, and so on. At test time you would not know what your y should be; you just use the line that you learned to predict it. Now, going back to one of the core things that I mentioned before: the computational graph.
Any TensorFlow program, be it something really trivial like this, or be it a neural network with a hundred layers doing something really fancy, always has a computational graph behind it that you define, and that powers the computation in TensorFlow. At its core it looks something like this. This actually reminds me a lot of when I learned operator precedence in school, that multiplication comes before addition when you evaluate something; the way the graph is built always reminds me of that. If you think about what you're trying to do, you're trying to learn W and b. These are variables: you don't know them beforehand, you just start by initialising them with something. I tell TensorFlow, well, these are things that I will learn, and I choose to start with one; I just chose one at random here, but as we go through learning, we will improve these values. There's another concept here, which is a placeholder. This is the value that I had before on the x axis; this is the x here. With this I'm telling TensorFlow: this will be an input to my program; I don't know the value now, but when you define the graph, just know that there will be a value here. So it's basically holding a place for a value that will come later.
And then y is very simple, what you'd expect: W times x plus b. The graph looks like this; this is the graph that we defined. Note that we're not doing any learning right now; I'm just showing a very simple example of how you can compute W times x plus b in TensorFlow. One important thing to know is that the graph is static: the graph doesn't compute anything. And you're doing this because you want an output, otherwise we wouldn't be doing anything, right? The graph is there because that's what the computation looks like, but it doesn't actually do the computation. If you want to do computation, you have to create a session and run things through the graph. This is very simple to create, just one line of Python, and this is how the entire program looks. This is really important to stress, because all TensorFlow programs will look like this, more or less, whether it's neural network training or something very simple. You first create the graph (in the case of a neural network you would replace this part with something more complex; here it's the graph that we created before, for W times x plus b), you initialise the session, and you tell TensorFlow to initialise all the variables, which gives W and b the value of one. Then I'm using the session to run things through the graph. Something important to mention: notice that I didn't have to tell TensorFlow what the value of x was before; I just told it, there's going to be something here, wait until I ask you for something. And here I am asking for something: I want the output, and I'm actually going to feed you something. So what I'm actually doing here is computing three times one plus one, which is four, and you get your output back. The output is a NumPy array; for those of you familiar with Python, both the output and what you can feed in the dictionary are NumPy arrays. That's just the standard way to go about these things.
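The feed-and-run pattern just described can be sketched in plain Python (this is a stand-in for illustration, not TensorFlow itself; `run_graph` is a made-up name):

```python
# W and b are "variables" initialised to 1; x is a "placeholder"
# whose value is only supplied when we run the graph.
W, b = 1.0, 1.0

def run_graph(feed):
    """Evaluate y = W*x + b, with x fed in at run time."""
    x = feed["x"]
    return W * x + b

print(run_graph({"x": 3.0}))  # 3*1 + 1 = 4.0
```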
So that was very simple: we just computed y knowing W and b. But we don't know W and b; that's why we're doing some learning. When you're doing learning, you have this error that you're trying to minimise, and it's up to you to define it. In the case of classification, I want to tell the network: well, if this image is of a cat and you're saying that it's a dog, then you're wrong, and there are ways to define that. But here, for a simple example, I'm just choosing the Euclidean distance, these things that I plotted in yellow, and I'm using that over my entire data set. If my hypothesis for the line was this one, what would my error be? It would be the sum of these distances, because this line is off: when I give my model this x, it predicts this point here, but actually the true point is here, so it's off by this much; the same for this point, and for all the other points in my training set. I'm averaging that, and this is my error, and now I want to minimise it.
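As a small sketch of that error (the talk averages the distances between predictions and true points; here I use the common squared-distance variant, and the data points on the line y = 2x + 1 are made up for illustration):

```python
# Average the squared vertical distances between the model's
# predictions w*x + b and the true y values.
def mse(w, b, points):
    return sum((w * x + b - y) ** 2 for x, y in points) / len(points)

# Points generated from the "true" line y = 2x + 1:
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]
print(mse(2.0, 1.0, data))  # 0.0: the true line has no error
print(mse(1.0, 1.0, data))  # > 0: a wrong slope is penalised
```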
How do I do that? Well, very often in machine learning we use this gradient descent method. There are mathematical guarantees, not exact, that if you go in the direction of the negative gradient, you are going towards a local minimum, at least at a very small scale. These are iterative algorithms, and this is something important to stress: the algorithms are iterative, so they tell you, go in this direction a little bit, then compute the gradient again, then go in this direction a little bit, and you hope that at the end you will end up in the minimum. And the nice thing is that you don't have to care about the gradient; you don't even have to write the optimiser.
You just tell TensorFlow: well, I want to use this optimiser for my error. So, putting it all together: this is what we had before. I'm defining the distance between the labels, which is what I know to be the true y, and what I've predicted with the computational graph that we defined before. This is the cost, the average error that the model is making, and this is the whole optimisation step: it performs the gradient computation for you, and when run as an operation, it does one update step. Again, this does not run anything; this is defining the computational graph. It just tells TensorFlow, OK, this is what we will be doing, and this is how the graph looks; under the hood it computes the gradients with respect to the variables that we have defined. What are we trying to optimise? W and b; those are the things with respect to which you need to compute the gradient, and TensorFlow does that for you.
Now you're defining the session, as always; any TensorFlow program will have that. As I said, this is an iterative algorithm, so I have to iterate; here I chose a hundred times. You can iterate more; there's a whole machine learning theory about how long you should iterate, so don't take this hundred for granted, it's just an illustrative example. Then I'm running this update step, and again I have to tell TensorFlow what my input and my output were; these are basically my points from the graph that I had before. Each time I run the update step, W and b change such that my cost gets smaller, so the line changes every time I run this. In the first loop W and b are one, as initialised; after I run this step, they're not going to be one anymore, they're going to move towards the values that minimise my cost, and my cost is a bit smaller now. I repeat that until my cost doesn't decrease anymore, or until the loop finishes, and that's all.
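TensorFlow derives those gradients automatically; as a sketch of what the hundred update steps are doing, here is the same loop written out by hand in plain Python (the data on the line y = 2x + 1, the learning rate, and the initial values are made up for illustration):

```python
# Gradient descent for the linear model y = w*x + b with squared error.
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]  # points on y = 2x + 1
w, b = 1.0, 1.0                               # initialise to one, as in the talk
lr = 0.1                                      # learning rate

def cost(w, b):
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

before = cost(w, b)
for _ in range(100):
    # Gradients of the mean squared error with respect to w and b.
    dw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    db = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * dw   # step against the gradient
    b -= lr * db

print(before, cost(w, b))  # the cost has shrunk
print(w, b)                # close to the true slope 2 and intercept 1
```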
So this is how a very simple example of linear regression looks. Now, on to neural networks. I'm not going to go into how to implement a neural network in TensorFlow from scratch, because it might take some time, but I do want to show you the options that you have when implementing neural networks, because there are quite a few, and I think this is also one of the really core strengths of TensorFlow; it's what allows it to be a great platform both if you want to do research and in production. So what's a neural network?
A neural network is a very nice model that allows feature learning in a very nice and efficient way. Before, in machine learning, there was a lot of feature engineering: if you had an image as input to a model, you had to somehow transform the image from its sequence of pixels, which is what an image is, into something that the model would understand. What people did is handcraft these features for every problem: they had human experts saying, OK, for cat versus dog I think it should be this, and then they redid it for every kind of problem. That's very expensive, because firstly you need human experts to do it, and secondly it doesn't scale, because you're doing this again and again. The real core thing about neural networks is that they learn the features themselves. You're actually feeding the raw pixels into the model, so you don't have to do feature engineering; you don't have to be an expert in cat-versus-dog recognition to be able to do this. One of the core reasons why they can do this is their hierarchical structure: the first layer looks at the input, the pixels; the next looks at, for example, edges or strokes; the next looks at even higher-level features, and so on. It turns out that not only do they learn this quite well, they also do it in an intuitive manner; this is what people would expect. And this is what people were doing before: they were trying to extract edges from images, but instead of doing it automatically, they had handcrafted Gabor filters or something similar, and then from there they tried to get further; now you have one model that learns it all together. You do something very similar to what we did before with gradient descent: you define your cost, which in my earlier example was the distance between the points and here is defined a bit differently, and then you backpropagate the gradient; it's the same core idea, the gradient, and you learn the weights between these layers. Now, if you want to do neural networks in TensorFlow, you have multiple options, and there's no one right choice; it depends where you are on the scale of "I just want to play with this", or "I want something that works", or "I'm willing to spend two months on this" versus "I'm willing to spend a week on this". The options are the following. Firstly, you can use already trained models. For the image example that I've shown, you can actually go to the website and download the model, and you're good to go: you just load it, you don't train it, you don't have to care that there's this complicated beast behind it and that someone spent a lot of time figuring out how to define it; you just use it for your purposes. And I actually have an example of that; it works.
Oh, not this one; there we go. I don't know if you can see this; I have two examples here. This is just using a script that comes with TensorFlow (it even comes with all these nice examples of how to use it) that will classify this image of a panda. While this runs (it's downloading the model right now), I'm just going to talk about the script. The script is two hundred lines, and it does quite a few things: it downloads the model, it does a prediction, and it's very clean, so it has a lot of comments and things like that. It also transforms the output of the network into something human readable, because the network is giving you a probability distribution; the network says 0.1, 0.4, 0.5, and the script gives us something really nice back. And this is only two hundred lines, and there are plenty of other examples out there.
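That last step, turning the network's probability distribution into a readable answer, amounts to picking the most likely class (the labels below are made up for illustration; the 0.1/0.4/0.5 values are the ones quoted above):

```python
# The network outputs a probability per class; the script maps the
# highest-probability class back to a human-readable label.
probs = {"giant panda": 0.1, "tiger cat": 0.4, "tabby cat": 0.5}
best = max(probs, key=probs.get)
print(best)  # "tabby cat"
```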
So if you really just want something that recognises images quickly, there is something for that. And here, did this work? Yes: it says giant panda, and a panda was indeed in there, so this looks correct to me. This is the same picture at a different resolution, which also works, just to show that it works at different scales, and again it's correct. For those of you who don't know this (and I have to confess, I did not know this until I saw the result), this is correct: it is a tabby cat because of the stripes; it actually is a tabby cat because it has these stripes here. So that's what I learned from this.
So this is one example, and of course there isn't only the trained model for images; there's also a model called Parsey McParseface, which actually does parsing for you in English, so you can kind of see the syntax tree: this is a noun, this is an adjective, and so on.
What if you're not lucky enough that there's already a model out there? Well, even if it's not in core TensorFlow or our libraries of models, TensorFlow is open source, and there's a huge number of people actually open-sourcing their models. So even if it's not in TensorFlow itself, just have a look online and see: OK, maybe someone actually had this problem, or a problem similar to mine, and I can still just load their model. But if you don't find that anyone else solved your problem, and you're implementing your own model, there's another option that's very high level, and it implements the scikit-learn API. For those of you who have done a bit of machine learning, in Python scikit-learn is a very popular framework for doing machine learning: it's very nice, clean, and easy to use. As part of TensorFlow we have TF Learn, which builds on top of this API, and you actually don't even have to worry about the graph,
because it creates the graph for you. So here, if I want to create a neural network with this somewhat arbitrary set of units, this means that I have three layers: in the graph, as before, I have the input, which would be the pixels, then I have the layers going up, and the output is three classes. Let's say I want to distinguish between, say, cat or rabbit; I can define this classifier.
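TF Learn mirrors scikit-learn's fit/predict interface. As a toy stand-in for that interface (this is not a neural network and not the TF Learn API; the class and data are invented purely to show the pattern), here is a classifier that memorises one centroid per class and predicts the nearest one:

```python
# A scikit-learn-style classifier: construct, fit on data, then predict.
class ToyClassifier:
    def fit(self, xs, ys):
        # Average the inputs seen for each label.
        sums = {}
        for x, y in zip(xs, ys):
            s, n = sums.get(y, (0.0, 0))
            sums[y] = (s + x, n + 1)
        self.centroids = {y: s / n for y, (s, n) in sums.items()}
        return self

    def predict(self, x):
        # Return the label whose centroid is closest to x.
        return min(self.centroids, key=lambda y: abs(self.centroids[y] - x))

clf = ToyClassifier().fit([0.0, 0.2, 1.0, 1.2], ["cat", "cat", "rabbit", "rabbit"])
print(clf.predict(0.1))  # "cat"
print(clf.predict(1.1))  # "rabbit"
```

The point is only the shape of the API: you declare the model, call `fit` on your data, and call `predict`, without touching graphs or sessions.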
And then I can fit my data with it, so this is also pretty simple; and again, here you don't necessarily have to worry about "OK, this is the computational graph, this is the session", because it creates the graph for you, so this is all that you have to specify. I actually have an example of that too: I'm going to train one of these models to recognise digits, right now, using TF Learn. This is going to run on my computer, live, on a CPU; my laptop does not have a GPU. It will take a couple of minutes, so maybe we can take questions in the meantime, but this is basically all it takes, and I made it longer because I wanted to showcase some things. This is how much it takes to learn to recognise digits using TF Learn. First the imports; I'm going to get the data via scikit-learn, and I'm doing a bit of preprocessing, because the data comes in between 0 and 255, but neural networks actually work better when the input is roughly between zero and one, or relatively small.
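That preprocessing step is just a rescale (sketched here on a few made-up pixel values):

```python
# Pixel intensities arrive as integers in [0, 255]; dividing by 255
# rescales them into [0, 1], which neural networks train on more easily.
pixels = [0, 64, 128, 255]
scaled = [p / 255.0 for p in pixels]
print(scaled)  # from 0.0 up to 1.0
```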
So I'm just doing that here. Now let's visualise the problem a bit: this is what the digits look like. This is a six; this is, I would say, a seven, though it's not clear. What we're going to do now is train the model so that it sees a picture like this and says, OK, there's a two in this picture. This is, by the way, the hello world of machine learning: any algorithm or any benchmark will try this; it's overdone to the maximum, but I had to do it.
this is defining the network engineer can i make it a bit more interesting just to show
you that even if it's very high level you're actually not losing a lot of
um of the flexibility that you get with your flirt so here i'm actually saying
i define this classifier in use this op demise are so if
you remember before i showed you about gradient descent and adam
and there are others that come with TensorFlow. I'm not going to use the default, which I think is Adagrad; I want to use momentum instead. And this is four lines of code, and it's just defining the model. Also notice there's no graph or session here: this doesn't actually run any computation, it just defines the graph.
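The momentum optimizer chosen here in place of the default can be sketched in a few lines of plain Python; this is a minimal toy version minimizing f(x) = (x - 3)², not TensorFlow's implementation:

```python
# Gradient descent with momentum on the toy objective f(x) = (x - 3)^2,
# whose gradient is 2 * (x - 3). The velocity term accumulates past
# gradients, so steps build up speed along consistently downhill directions.

def grad(x):
    return 2.0 * (x - 3.0)

x, velocity = 0.0, 0.0
learning_rate, momentum = 0.1, 0.9
for _ in range(200):
    velocity = momentum * velocity - learning_rate * grad(x)
    x += velocity
print(x)   # ends up close to the minimum at x = 3
```

With momentum close to 1 the iterates overshoot and oscillate before settling, which is the price paid for faster progress along shallow directions.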
And now I'm actually telling it: with this graph, fit the data. I'm feeding in all these images, they're sixty thousand in total, and I'm actually training on them right now, live on my machine. It's going to take a bit; it turns out it takes more than two minutes to learn how to recognise digits. So if you have any questions, feel free to ask while this is training.

An introduction to TensorFlow
Mihaela Rosca, Google / London, England
26 Nov. 2016 · 2:01 p.m.