Okay, perfect. So, as a friend pointed out, Torch was actually written at Idiap. I was having dinner with some folks, some old-timers at Idiap, and as we were walking back from the town square I passed by the old building of Idiap, this really tiny brown building, and it was said that Torch is rooted in that tiny brown building, where Ronan was basically working, here in this beautiful Swiss town.

So today I'm going to give three lectures on Torch. First, some logistics. The first talk I'm going to give is an overview of Torch: basically a full, high-level view of what Torch is, what the community is like, what it is like to work in Torch, and some high-level details of the philosophy of Torch and so on. The second talk is going to be a deep dive into Torch: we're going to go into the inner workings of Torch, basically looking at how tensors and storages work in Torch, how to use the neural networks package and the optimisation package, and so on. That is going to be useful to start getting into Torch. And then the third talk is basically going to be extensions of Torch, interesting packages, new paradigms of computation, and some showcases. After Yoshua's talk on generative modelling, I will go directly into an implementation in Torch of one of the generative models he is going to talk about, and also cover some extensions of Torch and so on.

During the breaks, as well as during lunchtime, you can chat with some of us who are here. There is a core Torch contributor here from ParisTech who came just to be able to chat with any of you about deeper questions on Torch: if you have issues with Torch, how to get them fixed up, and so on. We also have two excellent local experts; both of them are PhD students of Ronan Collobert, and they can also be a good source of people to chat with. And I am available as well, probably not during the lunch break, but definitely in the other breaks.

Okay, let's get started. This particular talk will have the following structure: what is Torch; the community of Torch; common uses of Torch, how people use Torch in the community; the core philosophy behind Torch, the things that we wouldn't change regardless of how we move forward in the future; and the key drivers of growth, the reasons why Torch is popular, which should be helpful in general to get a perspective on what the main value additions of Torch are. And also a little bit about the future, what we're planning next, a very, very high-level view of our future plans.
So, what is Torch? Torch is a scientific computing framework; you can think of it as similar to MATLAB, or Python with SciPy and NumPy. At the core of Torch is an N-dimensional array; we call them tensors. And it's an interactive, REPL-based environment: in Torch you can open an interpreter, you can execute commands, you can see what happens, exactly like when you work with MATLAB. You can plot something, you can train a network, and so on.
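As a rough sketch of what that interactive workflow looks like, here is a hypothetical session in the th interpreter:

    $ th
    th> a = torch.rand(3, 4)          -- a random 3x4 tensor
    th> b = torch.rand(4, 5)
    th> c = a * b                     -- matrix multiply for 2D tensors
    th> print(c:size())
     3
     5
    [torch.LongStorage of size 2]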
It's very, very simple to use. It has one-based indexing, to emulate MATLAB; from my understanding (this predates me), it was written as something that MATLAB users could move towards when they wanted to do more serious computation from a systems perspective. It has plotting, it has all the bells and whistles you'd expect from a scientific computing package. It's definitely not as rich as MATLAB in certain aspects; MATLAB has certain toolboxes that are not available in any other community. But the same can be said for Torch: we are strong in, for example, the neural networks packages and optimisation algorithms based on gradients, and we'd like to keep that focus, at least for the near future.
One of the key values of Torch is that we are based on this language called Lua, and there's LuaJIT that runs it. LuaJIT is a very, very performant JIT engine, a just-in-time compilation engine, which takes your high-level code, your Lua code, and compiles it dynamically in very smart ways. What that translates to is that you can write high-level code, like you would write in MATLAB or Lua, where you don't have type safety and the other features you would get from a compiled language, but it will be fairly fast. If you write a for loop in Torch, you won't be thinking "oh my god, it's taking days to run". The difference between writing stuff in Lua and writing stuff in C is bearable; there are obviously performance differences, clear performance differences, but while you're prototyping it's very, very efficient compared to other interpreted languages like Python or MATLAB.
The second key feature we have in Torch, which we use all the time, is really, really easy integration into C. In Lua there's something called the FFI, which is now common in other languages, for example Python. But Lua was always meant to be a very thin layer on top of C: you write your heavy processing in C, and then you have a little interpreted language on top to do quick prototyping. To wrap C code in Torch you don't actually have to write complicated bindings; you basically just call the C code as-is from within your Lua program. As an example, we wrapped the NVIDIA cuDNN library in Torch, and we never actually had to write our own C code of any sort. All we have to do is, in Lua, directly call the C function, in this case cudnnConvolutionForward, with the arguments that it expects, and it just works out of the box. This saves a lot of time when you are wrapping existing libraries, or when you're writing your own C code and you want to have an interface between them.
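A minimal sketch of what the LuaJIT FFI looks like; this is the canonical printf example, not the actual cuDNN binding:

    local ffi = require 'ffi'

    -- Declare the C signature once; LuaJIT parses the declaration directly.
    ffi.cdef[[
    int printf(const char *fmt, ...);
    ]]

    -- Call straight into C: no binding code, no compilation step.
    ffi.C.printf("Hello from C, %s!\n", "Torch")

    -- For a library like cuDNN you would ffi.load the shared object
    -- and call its functions the same way.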
The second key value that Torch provides, and this is one of the big reasons that people come to us and use Torch, is the strong GPU support. Torch has a large tensor library of over a hundred and fifty functions, and all of them work on both the CPU and the GPU. Especially on the GPU, these functions are extremely performant; quite a few CUDA engineers spent time optimising most parts of the code. Apart from the core tensor library, we have a neural network library that specialises just in neural networks and their performance, and that has also been fairly optimised for GPUs. A complete GPU tensor library is actually, as of today, unique to Torch, especially one that's performant. There are alternatives, like cudamat and the Eigen library, but they're very limited in their GPU support; they just weren't written with GPU tensors first in mind. That's something we have been focusing on since about 2011 or so. Initially we started off with whatever we could do on the GPU support, but now it's fairly complete, and our users find it very natural to transition between CPU and GPU without knowing the difference.
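To give a flavour of that, here is a small sketch; the same functions run on either device, and moving between them is a cast (this assumes the cutorch package is installed):

    require 'torch'
    require 'cutorch'                    -- CUDA backend for tensors

    local a = torch.randn(1000, 1000)    -- CPU tensors
    local b = torch.randn(1000, 1000)
    local c = torch.mm(a, b)             -- runs on the CPU

    local ga, gb = a:cuda(), b:cuda()    -- copy to GPU memory
    local gc = torch.mm(ga, gb)          -- same call, runs as a CUDA kernel
    local back = gc:float()              -- copy the result back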
The next thing I want to talk about is who uses Torch. There's actually a large number of users of Torch; these are only a subset of them, and Idiap is right there in the middle. There are quite a few people using Torch, including really large companies like Facebook, where we use Torch for all of our research, all the FAIR research, and there's about fifty of us. Twitter uses Torch not only for research but also in production environments: Cortex covers their image recognition, video recognition and language use cases in production. There are quite a few schools that use Torch; I've put some of the most active ones up there. And other large companies, like IBM: I visited them a couple of weeks ago, and they said pretty much all of their speech and NLP pipelines are now in Torch. And there are some interesting companies there. TeraDeep, for example, on the bottom right: they do Torch for FPGAs and specialised chips, so they basically have these FPGAs and chips, and then you can just use Torch with an FPGA backend, or a GPU or CPU backend, for example. Moodstocks, the company at the bottom there, uses Torch to train their networks and runs them on mobile; they're basically a mobile company for image recognition.

Other examples in the community are packages; I'm going to go over a few popular ones. One of the strong points of some other communities, for example the Caffe community, has been the model zoo, where people, after they do research, share their trained models for other researchers to use. What we wanted was to leverage the value that the Caffe community provides, so we have a package called loadcaffe that basically loads Caffe models into Torch pretty seamlessly, and you can then use these models to do all of your research in Torch. If a new paper comes out and there's a pretrained Caffe model, you can just pull that off, extract features, and plug it into your existing Torch code.
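Usage is essentially a one-liner; a sketch, where the file names are placeholders for whatever a paper releases:

    require 'loadcaffe'

    -- Load a Caffe definition plus weights as a regular nn model.
    local model = loadcaffe.load('deploy.prototxt', 'weights.caffemodel', 'nn')

    -- From here it behaves like any Torch module: extract features, fine-tune, etc.
    local features = model:forward(torch.randn(1, 3, 224, 224))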
For example, at Facebook: there's this class of convnets that appeared recently called residual networks; if you are in the line of computer vision or deep learning you will have heard of them. As soon as that paper appeared, we were really interested in it, and we released code for training residual networks from scratch. These are very, very deep networks, up to a thousand layers deep, and training them is, from a systems perspective, not that simple, because a thousand layers want to take as many GPUs as you can give them. We did this mostly for ourselves, but we thought it would be interesting to open source it for the community: a complete example that's fairly simple to follow, where you train convnets, in this case especially over multiple GPUs, and get an end result that is state of the art. We also released pretrained models for that. Google released their Inception models, which are another set of pretrained models for vision, and we have those models as well, ported to Torch. One thing that keeps happening is that if any pretrained model appears in another framework, someone in the community just ports it into Torch within a week or two, so our users never have the feeling of being left out from the state of the art. I think this is one of the important aspects of Torch: the fact that we have a large enough community that people don't feel like they're just working by themselves; they feel like they are leveraging a lot of value from the community itself.
Over the past year we've been looking at the logs for how popular Torch is, and it has about a thousand downloads a day; we basically track the number of installs over GitHub. And it's fully community-driven, which is one of the interesting parts of Torch: Torch itself is not backed by a single company. There is a non-profit that runs Torch, and all the companies involved in using Torch also contribute back to the open-source Torch in various ways: with engineering, with performance optimisations, with new packages, and so on.

Some of the interesting packages and examples: examples form a huge driver of user adoption for Torch. If you have high-quality examples of how to use Torch, people find that very, very useful for getting into Torch, rather than, for example, reading tutorials, which might not cover the use case you want.
Some of the interesting examples that appeared: this is NeuralTalk2, the captioning network, where you send in an image and it spits out an image caption. It was written by the Stanford guys, Andrej Karpathy and Justin Johnson. This is one of the nice examples where you have a convnet plugged into an LSTM, plus the whole training glue; and training these things is obviously not just putting them together with some learning rate, there are subtleties in there, and examples like these are interesting.

Another example is the neural style project, another really popular project, on top of which many people have built art installations and so on. This is also from Stanford, by Justin Johnson. You give an image from the real world and some painting, and it does an optimisation to match the statistics of both of them at different layers of a pretrained network. You actually get back a picture that looks like the style of the painting but still has the content of the image you gave. This has been one of the most popular projects, and there are other variants of it, feed-forward variants, where you can actually do neural style in real time. Someone in the community converted that and plugged it into a video stream, so you can actually have stylised video streams on the fly.

Another popular computer vision application that has been appearing recently is visual question answering; Yoshua Bengio yesterday showed a demo of Facebook's system that does visual question answering. There is an open source implementation from Virginia Tech doing visual question answering, and that's the university where one of the popular datasets for VQA comes from.

One more interesting example is this one here, called neural doodle: you just doodle the painting that you want, giving a rough sketch of what you want to paint, then you give some other painting, and it will actually turn your doodle into a really artistic painting.
Coming to the more practical aspects, some things that Torch does really well: Lua itself is a language with very, very light overhead, and one of the reasons Lua was popular before, regardless of Torch, is that it is used in game engines a lot. It's a very small language, about twelve thousand lines of C code, and game engines use it a lot to embed Lua into really complex and high-performance C++.

At Facebook, one of the things we've been looking at is how to learn physics from the world, and we wanted to start off with virtual worlds. So we plugged Torch into one of the most popular game engines available, which is Unreal Engine, and we released this integration into open source. You can basically plug Torch into Unreal Engine and, with a very high-performance, low-latency pipeline, you basically get to interact with the Unreal Engine world. You can then, for example, do various reinforcement learning and computer vision research, or a hybrid of those. The example here is from a paper that was published at ICML this year; my colleague Adam Lerer and some others learned the physics of blocks: they want to predict whether blocks are falling, or how blocks fall, and if they do fall, where they fall. Given a picture, for example the picture on top, can you predict whether the blocks in that picture would fall over or stay stable, and questions like these. One of the interesting things here is that the network was trained fully in the Unreal Engine environment, but at validation time, at test time, when they wanted to see if the network actually generalises to real-world block falling, they constructed a small set of wooden blocks against a white background. The network actually does really well in this real-world environment, even though it was trained completely in this Unreal Engine based virtual world, so you could see how that might extend to other applications as well.
Another big thing these days is reinforcement learning, especially with Atari games. There are a couple of projects that set up all the reinforcement learning environments for you, including implementing all of the popular algorithms in reinforcement learning, so you can basically just go and use those as your baselines and do further research on improving your reinforcement learning algorithms. This is one of those Lua-based environments; it has all of the popular recent reinforcement learning algorithms implemented, like DQN, Double DQN, and so on. Apart from this, there is the company called OpenAI that released an environment for doing reinforcement learning research; it's called the Gym, and it's something that has been really well written, with a lot of environments. Examples of using the Gym with Torch appeared recently as well, also completely open source.
Coming to the NLP side of things, there are several good open-source projects for NLP in Torch: training language models, training sequence-to-sequence models, maybe for translation, for example. There's also one interesting project where you have a conversational model, basically a chatbot, based on the Google paper that appeared not too long ago, in November 2015. After that paper appeared, someone from the community quickly implemented that model in Torch, and this is another good project if you want to do NLP research in Torch, or just to take a look at the internals.

And lastly, Yoshua is probably going to be talking a little bit about generative modelling, and I will cover a little bit of generative modelling for images in the third lecture. There is a project that I wrote that produces pretty pictures: if you have your own set of images and you want to train a generative model that generates images similar to the ones you gave it, the model is fairly stable; you take the images that you have, probably about ten thousand plus images, you give it these, and it tries to build a generative model of them. People in the community used this code to generate eighteenth-century art, to generate anime characters, and so on.
So that's basically an overview of the community and the very good examples that you have in the Torch community. Next I will go through the basic packages that form the core of Torch; I will go into deeper dives of some of these packages in the next two lectures. The main packages: for neural networks we have a core package called nn, which just stands for neural networks. nn is built on the concept that if you want to compose complicated neural networks, you build them the way you build a system out of Lego blocks: you basically put them together one after the other, and you have containers where you can stack blocks on top of each other, or put blocks in parallel as well. This helps compose really, really complicated neural networks.
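A small sketch of the Lego-block style; the layer sizes here are made up for illustration:

    require 'nn'

    local net = nn.Sequential()                   -- a container: stack blocks in order
    net:add(nn.SpatialConvolution(3, 16, 5, 5))   -- 3 input planes -> 16 feature maps
    net:add(nn.ReLU())
    net:add(nn.SpatialMaxPooling(2, 2, 2, 2))
    net:add(nn.View(16 * 14 * 14))                -- flatten (assumes 3x32x32 input)
    net:add(nn.Linear(16 * 14 * 14, 10))          -- classifier on top

    local output = net:forward(torch.randn(3, 32, 32))

Other containers, like nn.Concat or nn.Parallel, put blocks side by side instead of in sequence.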
The nn package is powerful enough to have captured a lot of architectures without writing a lot of code. One example that comes to mind: when Oxford released the VGG network, they did their research in Caffe, and the network definition of the VGG network in Caffe was about two thousand lines of protobuf. In Torch you could basically write that within sixty or seventy-five lines, or less. That's because in Torch the network definition is not data, it's code, so you can write very flexible structures, and the neural networks package powers all of that.
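For instance, because the definition is ordinary code, a VGG-style stack of convolutions collapses into a loop; a sketch, where the channel configuration is illustrative rather than the exact VGG one:

    require 'nn'

    local cfg = {64, 64, 'M', 128, 128, 'M', 256, 256, 'M'}  -- 'M' = max-pooling
    local net, nIn = nn.Sequential(), 3
    for _, v in ipairs(cfg) do
      if v == 'M' then
        net:add(nn.SpatialMaxPooling(2, 2, 2, 2))
      else
        net:add(nn.SpatialConvolution(nIn, v, 3, 3, 1, 1, 1, 1))  -- 3x3, pad 1
        net:add(nn.ReLU(true))
        nIn = v
      end
    end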
When you want to build really complicated networks, for example if you have to create an LSTM cell, or some new kind of fancy memory, or just crazy networks that you dreamt up last night, we have another package that extends the nn package, called the nngraph package. We'll be going into both nn and nngraph in the deep dive. The nngraph package lets you construct really complicated neural networks with a graph API, similar to Theano and TensorFlow, but it's constructed at a granularity that's slightly higher: instead of building graphs on top of every tensor operation, you build graphs on top of modules that pack more compute than a single tensor operation; we call these layers.
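A minimal nngraph sketch, with made-up sizes: each module is called on its parent node(s), and the resulting graph behaves like one module:

    require 'nngraph'

    local input = nn.Identity()()                     -- a graph input node
    local h1 = nn.Tanh()(nn.Linear(100, 50)(input))   -- one branch
    local h2 = nn.Linear(100, 50)(input)              -- a second branch
    local out = nn.CAddTable()({h1, h2})              -- join them

    local g = nn.gModule({input}, {out})              -- wrap the graph as a module
    local y = g:forward(torch.randn(100))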
Now I want to go into another interesting paradigm that appeared recently. This is a package that was contributed by Twitter, called autograd, and it's a slightly different way in which people can do gradient-based learning. It's unlike any of the other packages available for doing deep learning, except for the autograd package that people also wrote in Python. The way it works is not new, it has been well understood; the idea is a tape-based mechanism to record what is going on in the forward pass. Basically, you can write your neural network as a bunch of tensor operations; you can even have if conditions, for loops, and whiles. You can write a function that is just like the standard kind of code you write, with for loops and whiles and ifs. When you execute that function, what autograd does is go deep into the Lua language itself and record every operation that happens in the forward phase. Autograd defines a backward operator for every operation in the forward phase, with very few restrictions. When you want to compute the gradient with respect to your function, this arbitrary function that you just defined, it basically plays the tape backwards: the tape that it recorded during the forward phase is played backward, and for every operation it computes the gradient with respect to each variable involved. This is really useful when you're training networks with dynamic graphs, where it's not the same computation that you do every time: your computation might be conditional on, for example, the current norm of the gradients; it can be very dynamic, dependent on any arbitrary thing, and autograd can do efficient gradient computation using that.
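A rough sketch in the style of torch-autograd; the calling convention is from the twitter/torch-autograd README as I recall it, so treat the details as assumptions:

    local grad = require 'autograd'

    -- An ordinary Lua function: tensor ops plus normal control flow.
    local function loss(params, x, y)
      local h = torch.tanh(x * params.W1)
      local pred = h * params.W2
      return torch.sum(torch.pow(pred - y, 2))
    end

    local dloss = grad(loss)             -- derivative of the whole function
    local params = { W1 = torch.randn(10, 20), W2 = torch.randn(20, 1) }
    local grads, l = dloss(params, torch.randn(1, 10), torch.randn(1, 1))
    -- grads.W1 and grads.W2 were computed by replaying the recorded tape backwards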
Another package, also released by Twitter, which is very important in today's world, is a package for distributed learning, to train Torch models over multi-machine and multi-GPU pipelines. To actually do different kinds of distributed learning using this distlearn package, you don't have to write a lot of code or understand a lot of complicated infrastructure. The distlearn package frames the whole of distributed learning like this: your neural network model has a bunch of parameters that you're trying to optimise (your network consists of parameters and activations, and it basically ignores the activations part), and these parameters can be passed to certain functions that will update them according to either a synchronous SGD, asynchronous SGD, or elastic averaging SGD algorithm. Even extending distlearn to do your own new kind of distributed research, where you invent a new algorithm for distributed optimisation, is really easy as well, because it's all in Lua, in a few lines of code; in fact, if you go look at the synchronous and elastic averaging implementations, they're actually in a single file with very few lines of code. distlearn takes the MPI paradigm, where it basically has certain operations like AllReduce and scatter/gather implemented, and you can build your distributed computation on top of that.
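To make the AllReduce style concrete, here is a hypothetical sketch of synchronous SGD. This is not the actual distlearn API: allReduce, nextShardBatch, nWorkers, nIterations, and learningRate are stand-ins invented for illustration.

    -- Every worker runs this same loop on its own shard of the data.
    local params, gradParams = model:getParameters()

    for i = 1, nIterations do
      local x, y = nextShardBatch()          -- hypothetical: this worker's mini-batch
      gradParams:zero()
      local out = model:forward(x)
      local loss = criterion:forward(out, y)
      model:backward(x, criterion:backward(out, y))

      allReduce(gradParams)                  -- hypothetical: sum gradients over workers
      gradParams:div(nWorkers)               -- average them
      params:add(-learningRate, gradParams)  -- identical SGD step on every worker
    end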
This is unlike the TensorFlow or MXNet paradigm, where you look at your whole computation, your whole neural network and the optimisation, as a computation graph, and you try to do dependency analysis to find how to optimally collect or distribute certain tensors or variables when appropriate. It's a simplified model, but it works as well and has fairly good performance. I haven't done benchmarks of this against either of the other packages in the distributed setting, so I can't actually say how it plays out in terms of this-versus-that performance.
Now, coming to the core philosophy of Torch. There are a few things we really care about and really like about Torch; we want to keep them and not move away from any of these aspects. The first is interactive computing. We strongly care about a researcher being able to open an interpreter and keep it open for days, doing very nonlinear paths of computation, where they might execute whatever function they think of next. We feel this is one of the most powerful ways in which research is carried out, and we do not want to go to some kind of compiled environment where debugging, or changing what you do, is harder, where you have to go into a file, change it, and rerun your program, and so on.

As part of the interactive computing paradigm, one thing we care about is having no compilation time at runtime when you're using Torch itself. Something that packages like Theano or Chainer do, for example, is invoke the compiler at runtime to basically optimise their code better: they put together code that's specific to your computation graph and then compile it with, say, nvcc at runtime. This is something that incurs a lot of overhead, cognitive overhead, for the researcher. I've seen Theano programs compile for two or three minutes or even more; I've heard of Theano programs compiling for hours, for example. This is something that simply eats research time: you're sitting there in front of the computer to do research, and we strongly believe that having no compilation time, not even two seconds of compilation time, is really important in a research setting.
The next thing is imperative programming. What I mean by that is that you want to write your code as naturally in the language as possible; you want to write your code the way you always write code, with for loops and whiles. You wouldn't want to write part of your code, for example defining a neural network, in some other data-like language, like a JSON configuration or some protobuf, or, in the case of TensorFlow for example, in a paradigm where you have to use special operators to do while loops or if conditions. We strongly believe that imperative programming is the path of least resistance for new researchers: researchers just get used to programming and doing research, and feel very little cognitive overhead, not having to think about how to do certain things and map them back onto the framework.
The third thing is minimal abstraction. We keep an emphasis that whenever you want to find the Torch code that does the actual computation, for example if you want to find out where the softmax operation is being computed, let's say somewhere inside C, the number of hops you have to take to find that code should be as small as possible. Probably, if you jump through one or two functions, you'll find the code, in C or CUDA, that actually runs the computation you care about. This is something we think is very important when people want to write new modules or contribute back, because if you have too many abstractions, you can't think linearly: you always have to think through those abstractions, and it's an overhead, where after three or four abstractions you're pretty much lost, and you don't know, if you change a particular part of the code, which pieces start moving. That's something that's really hard when you're doing development, especially when you're not an engineer trained for several years. So we think that having as minimal an abstraction as possible over the code that actually runs the computation you just defined is very important, and we design all of our packages with that philosophy.
And lastly, we have this notion of maximal flexibility, and Lua kind of plays into what we need there. In Torch you don't have any constraints on what you can or cannot do. Our class system doesn't have a tightly defined interface where you have to implement certain functions or cannot implement certain interfaces. We wrote our own class system and our own type system, and that's something Lua really gives you the power to do: Lua doesn't actually ship with any of these fundamental systems that you take for granted in a more strongly typed language; it lets you define them yourself. We designed our own systems to be as flexible as possible in this respect, so that users can do arbitrary things that we never expected them to do when we were designing the package. We think that adds a lot of power for the users, especially the hacker kind. It does get us into a lot of trouble, though: if we want to write a package now, we have to think about all the possibilities, all the ways in which users will use it, and make sure that the package we're writing doesn't break in all these cases. It's kind of harder on us as core developers, but we think it's really important, from a hacker-culture perspective, to keep this maximal flexibility.
Lastly, I'll talk a little bit about the key drivers of growth for Torch, why people use Torch. We've seen that having tutorials, and more importantly support (more than tutorials, having a lot of really fast support) is really important when you're building your framework, because most users are not covered by a tutorial: most users want to do something other than what your tutorial covers, and they will ask questions on the forums or on Stack Overflow, for example. You want to answer back as soon as possible, probably within four hours, or within twenty-four hours, because otherwise users just never come back to using your package, especially the new ones. Another important thing is pretrained models and high-quality open source projects, as I showed earlier in the slides. And GPUs: GPU support is something people come to Torch for. These days the situation around GPUs has actually improved a lot, especially with TensorFlow coming in; a lot of other frameworks have basically decided they can't be substandard anymore. But for quite some time, a lot of users who came to Torch came for the fact that we had really strong GPU support and were very proactive in our development. Minimal abstractions, as I explained in the last slide, seem to be one of the key drivers of growth, and zero compile time is something a lot of our users say they find really awesome in Torch. And one of the big, big things is community: community is one of the key reasons people use Torch, stick to Torch, and are pretty happy overall, because when you have a lot of other people doing the same thing, you have people to chat with, to debug with, to ask for help, and so on.
And lastly, I quickly added a couple of slides, because I think someone asked me yesterday whether I would do a comparison of Torch with other frameworks. I'm not going to do a comparison of Torch with other frameworks, but my colleague Yangqing Jia, who wrote Caffe, made this slide where he places all the frameworks on a linear axis. On one side you have the properties where you want stability and speed, basically production-ready, never breaks, easy to understand for production engineers, and so on. The other side is what researchers want: flexibility, fast iteration cycles, and so on. As the slide indicates, Torch sits somewhere closer to the research side. What we don't compromise on is speed: because of the choices we made early on, sticking to Lua and being a very, very close interface to C, we are actually one of the fastest frameworks. I do benchmarks on the side just out of interest, and Torch maintains its position as one of the fastest frameworks, without compromising the flexibility, the debuggability, and the whole research aspect of it.
Coming to the future of Torch. One of the trends we've seen is that a lot of goodness comes from fusing computation. For example, if you have a convolutional layer followed by a ReLU and then a batch norm, or a convolution followed by a ReLU, and you actually fuse these operations into a single CUDA kernel that does all of them together, it ends up being much faster than doing them one after the other, even though it's easier to understand and easier to implement them separately. We are looking into doing this kind of fusion. One of the packages that came out of ParisTech is optnet, which takes your existing neural network in Torch and optimises the network for memory consumption: it starts sharing certain buffers that can be shared, and the overall memory consumption of your network is drastically reduced. You can see, in the future, this kind of automation being done by a compiler at runtime.
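Usage of optnet, as I recall its README, so treat the exact names as assumptions; buildModel stands in for however you construct your network:

    local optnet = require 'optnet'

    local net = buildModel()                   -- hypothetical: your existing nn model
    local input = torch.randn(1, 3, 224, 224)  -- a representative input

    net:forward(input)                         -- populate the buffers once
    local before = optnet.countUsedMemory(net)
    optnet.optimizeMemory(net, input)          -- share whatever buffers can be shared
    local after = optnet.countUsedMemory(net)  -- typically much smaller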
We are continuing, and will continue, to break down the barriers of entry for new users, especially for them to start developing their own modules rather than just using what we provide, because that's the only way we scale. We strongly believe in that: you cannot have a team of five really strong engineers that will do everything for the deep learning world; it just doesn't scale. So we want to empower people to start writing their own modules and contribute back, and that's always been our emphasis; we are going to continue making design choices that break down barriers of entry. One thing we want to keep doing, and never forget, is keeping focus on the long tail, and by the long tail I mean all the institutions that cannot afford three hundred GPUs per researcher. We understand that most of our users, most deep learning researchers in the world, have a machine under their desk that they're probably sharing with another researcher, with one to four GPUs, and this is something we never want to forget while we're writing new stuff or making performance improvements.
Lastly, an important point to make, to be honest about the world, is that the Python ecosystem is much larger than, for example, the Lua ecosystem, which on the scientific computing side is pretty much just Torch. To bridge this gap we have extensions; I will talk about them in the third lecture. We have a big bridge to Python: we can call any arbitrary Python function or package, including packages that return numpy arrays, and they will seamlessly be converted into Torch tensors, and vice versa.
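One such bridge is the fb.python module from Facebook's fblualib; a sketch from memory, so treat the exact calls as assumptions:

    local py = require 'fb.python'

    py.exec('import numpy as np')               -- run arbitrary Python statements
    local t = py.eval('np.random.rand(3, 4)')   -- numpy array comes back as a tensor
    print(torch.type(t))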
We are also looking into some deeper Python integration, maybe some Python bindings, but that's just an ongoing thought. That's the end of the first talk; feel free to ask questions.
Question: Thank you for the great talk; really looking forward to what comes next. But one question that seems obvious to me is about the composability of models: we always seem to start from scratch, and there are so many great models out there that we don't know about. Is there any plan for creating a marketplace, to actually know what others are doing, so that I don't have to know myself, as a human, which would be a good model, but can actually get suggestions?

Answer: For Torch itself that actually happened to get created, and it's not even that much work: it's one single GitHub repo with a readme, and people can send in pull requests. But it becomes a larger theme if we want a universal marketplace, where you want to have model definitions from Caffe, Torch, TensorFlow, and everything. Right now, the way we propagate information on what's available is mostly via Twitter: every time a new paper is implemented in Torch, every time new pretrained models are released in Torch, we just tweet it out, and most of our users follow us there. That's it for now, but I don't really know how to get all the frameworks together, because they all have their own strong opinions on the marketplace and on the common format to follow, and so on. At least for Torch, we're doing what we can to keep all the information central.
Question: Thank you. Could you discuss a little bit the API stability of Torch, and why you don't have release cycles?

Answer: That's something that gets asked a lot. We don't have release cycles because we don't have enough maintainers. If any of you are willing to become a maintainer of Torch, just as a release engineer who cuts release branches, feel free to reach out to me. We do want to start doing more stable and structured release cycles; there are no technical limitations there. If there are no more questions, maybe we can just go for the coffee break.
