Player is loading...

Embed

Copy embed code

Transcriptions

Note: this content has been automatically generated.
00:00:00
welcome ten years colour days who's excited about this
00:00:07
yes a few alright well at least you're here that's most important thing so my name is chris
00:00:15
um i work for internal company that's here on my shirt you might
00:00:19
know it that's called torture and um i do a lot of conferences
00:00:25
and i've already done a lot of conferences this year giving this presentation uh most of the time and
00:00:30
they're all very java sentry conferences right but this one
00:00:35
scholar this is actually the most important one for me
00:00:38
because eighty you know twenty uses a lot of colour would be
00:00:42
crowding all the technology we're using is still
00:00:46
good for scholar called that it's in unbelievable almost
00:00:51
so you guys you need to you know pay attention really the right now
00:00:56
uh and then you go home and try the same thing as as but what we did
00:01:01
so i know it's already the second day or the third but i really
00:01:06
wanted to tweed about this event right it's ten years so tweed about it ah
00:01:10
put a bunch of hash taxing it with with about all the stations that you liked
00:01:15
maybe it with about a one c. didn't like don't do that with mine thank you um if it would about my talk at the has
00:01:22
checked with i. b. m. thing because it would actually have the empty
00:01:25
and we're not that small anymore we're now i think eight people or something
00:01:30
um so we have a bunch of j. v. m.
00:01:32
engineers we have three g. c. engineers and they're constantly busy
00:01:37
doing you know she sees support you know all the issues that you have with g. c. everyone in the world does
00:01:43
then i'm a compiler engineers why duke chip compiler work um
00:01:47
i caught of man main business lobby oh he helps me out a little bit it does
00:01:52
and that might be interesting to you he does colour specific optimised stations for
00:01:57
crawl so and some of them already upstream to have one in the pipeline
00:02:02
it's a little more complicated and convoluted but that will bring another performance boost um then
00:02:07
we have some people work on infrastructure you know testing bob i would build our own cheating
00:02:11
case we have to do that um and then we have some people work on the
00:02:17
tool called all the to an opportune is the thing that i'm going to talk about today
00:02:22
so i'm a little bit about me ah i'm working on t. v.
00:02:26
and for a very long time it's i think it's fifteen years now
00:02:29
and all these fifteen years i worked on chicken parts so i'm going to ask a bunch of questions because i wanna know what you know
00:02:36
please raise your hands if i ask uh do people know what the j. v. m. it's
00:02:42
well yeah i mean you know we start well um people know half but
00:02:46
yes the j. v. m. of open cherry cake who knows what the chip competitors
00:02:50
ah that's a lot of people alright so just for the few who don't um basically when you have to
00:02:57
abide code if it comes from java or scholar doesn't matter right the j. v. m. takes that java byte code
00:03:02
interprets it which is really slow and then there are compilers
00:03:07
that compiled the java byte code into native code on the fly well you run your
00:03:10
application we call it just internet right that's uh on the other hand would be a u.
00:03:16
t. ahead of time compilation which would be something like g. c. c. where you compile
00:03:20
something into a native executable so does the chicken part of our country compilers uh still do
00:03:27
i worked at sun uh and or call and ask what compiler team
00:03:31
and mostly on c. to who knows what c. one and c. two are
00:03:36
right lessons so hotspot has to chip compilers
00:03:40
one is called c. one or klein compile and the other one is called c. to our server compile
00:03:45
you might have heard the dashed line tester before i don't use that anymore
00:03:49
we don't need it anymore but they're too chicken parts and i mostly worked on
00:03:53
c. too so see one the purpose of c. one it's high throughput compiler
00:03:57
so the purpose is to get away from interpreting code as quickly as we can
00:04:02
does a little bit of optimisation is not too crazy does some in lining some other stuff but nothing fancy
00:04:09
uh we just wanna run a native code and then see to as highly optimising compiler
00:04:13
so it it does all the fancy um optimisation is you can think of certainly more in line intense
00:04:19
one escape analysis lupin rolling all rectory say you
00:04:22
know all that stuff so that's it to 'em and
00:04:28
these three projects uh up basically the biggest ones i've worked on at
00:04:32
my son at my turn but something or call chase are two ninety two
00:04:36
invoke dynamic you might know what i'm not sure how many
00:04:39
people do job here but if you use java eat lamb does
00:04:43
you actually using invoked dynamic on the thought that's how it's implemented um
00:04:48
clear we had to wind implementations for chase are tonight to that was
00:04:51
jobless seven long time ago um we had two implementations the first one was a lot of handwritten assembly code
00:04:58
pages and pages of assembly code which we have to part all the architectures that we support it
00:05:03
was a major pain in the ass to write was major pain to support maintain and then on top of it we had a performance issue
00:05:10
so we completely redefined the whole thing and moved all the handwritten assembly logic into java
00:05:16
into a package called toppling invoke and a lot of that java code in
00:05:21
travelling invoke i wrote so if it doesn't work you could technically bring me
00:05:26
um but i always say the code i wrote was perfectly fine other people touch after me that broke it that's how i look at it
00:05:34
chapter two forty three we introduced into k. nine a chopper level che
00:05:38
b. m. compile interface also j. v. m. c. i. so brawl is
00:05:44
written in java right oh maybe i should ask this now who knows what robert's not probably yeah
00:05:51
all hands went down and everyone's confused so crowding am
00:05:57
is um an umbrella term for three different technologies basically
00:06:02
um brought all the chip compiler who knows whether to compilers all hands would be up i just explained it
00:06:09
so i'll crawl the chip compiler then truffle which is
00:06:13
of frameworks to implement language one times and substrate korean
00:06:17
which you might know was native image that's what everyone so excited about right now to compile things to native
00:06:22
so rather chip compiler um that's basically what i'm talking about that's what we're using it without the
00:06:28
uh if you saw stews talk yesterday and danny they're using native image to speed up
00:06:33
source compilations right but we in the b. m. thing we only use scroll the chip compiler
00:06:38
and it's written in java and crawl has an interface because it needs to talk to hotspot which
00:06:43
is written in c. plus plus so there's an interface the crowd uses and we just extracted an interface
00:06:48
put it in the java nine module and called it chapter forty three j. v.
00:06:53
m. c. i. it's kind of stable ish it's not an official supported a. p. i.
00:06:58
but it hasn't changed a lot since nine so basically if you have a brawl jar file
00:07:03
you can just take chicken nine ten eleven twelve thirteen and run it with it
00:07:08
so and then to forty three if we did for chapter two ninety five ahead
00:07:12
of time completion also angelica nine this is not needed in which does something different
00:07:18
um this is basically a small command line utility where you pass in
00:07:21
class files or char files and then it takes all the methods in there
00:07:26
sends it off to brawl compiles and spits out the shared library at the other end
00:07:29
and then possible can pick up the shared library and basically use keeping the interpreting step
00:07:35
and that might help if the application is really really big and runs a lot
00:07:39
of different java code when it starts up this might help you with start up
00:07:43
very tricky the main difference between this one and native image is just wanted actually java
00:07:50
like java specification even which is just a subset of job so yeah
00:07:56
and now work at the greatest company on the planet whatever and this is what what it looks like
00:08:02
we have hundreds and hundreds of micro services and thousands of instances of these right
00:08:08
and then on top of it we run on heterogeneous hardware we
00:08:12
own our own data centre so we actually know what hardware we own
00:08:16
you run probably in the cloud you don't even know what machines you know so one problem
00:08:21
with this is when you wanna do performance tuning you don't even know what you're tuning for
00:08:27
so performance tuning or optimisation channel can't you think doesn't skip but we
00:08:33
all know that the the the way we do performance tuning to day is
00:08:38
use it at work at your desk your know it by the performance of something and then you
00:08:42
decide okay i'll spend a couple hours into this and this happens every three years five years never
00:08:49
but that's how we do it today and then you can only to the few
00:08:52
parameters because you have to build up a mental model in your had to understand what's
00:08:56
going on if you change this one and this happens and the two you know very
00:09:01
time consuming labour intensive error prone most important point on the slightest upgrades make optimal fleeting
00:09:08
so we all live in this simple cool agile world and you deploy
00:09:12
hundred transit they because it's so cool but the code is constantly changing right
00:09:17
so even we deployed not multiple times a day but we have some services
00:09:22
we deploy multiple times a week so if you do performance tuning to day
00:09:27
it's not valid think tomorrow anymore because the code change but the code
00:09:30
is constantly changing so an important point is many services operate below them out
00:09:37
and that's certainly true footwear the services right we on the same boat as
00:09:40
you right now it's not there's not much difference this kind of theoretical part
00:09:46
what is performance unit right you have a function have defined over the main x. and then
00:09:51
what do you wanna do is you wanna find the configuration a that maximise f. whatever ass yes
00:09:57
could be throughput could be read to reduce latencies could be
00:10:00
reduced memory consumption could be anything right one important point is
00:10:06
it's always subject to some constraints and the number one most important constraint is it still has to work
00:10:14
and you will see with the experiments i did that actually you can tune too far so it doesn't work anymore
00:10:22
papers performance you write as we all know what we've done it probably at least once in a lifetime
00:10:27
you pick a parameter test run it on the system right
00:10:31
you measure us and then you get it back and then you
00:10:35
the performance engine yeah a person highly paid i hope um has
00:10:40
to decide if the value got back is better or worse than before
00:10:45
what is ridiculous right i mean maybe we could get him wanting to decide
00:10:49
if it's better or not but well we really want is some black box
00:10:54
we don't need to know all what the black box is what it's doing the only thing we need to know
00:11:00
is if the thing that we just measured is better or worse than before and what to try next
00:11:05
right that's all we need and we use something corporation optimisation so it's a method to learn potentially
00:11:13
noisy objective function iteratively efficiently that's important because as
00:11:17
i said earlier we can wait forever for result
00:11:20
get the code is constantly changing right we need you know or something some
00:11:25
reasonable time finds near to my nephew durations works well with
00:11:28
nonlinear multimodal high dimensional functions that the last point is important
00:11:33
because if you think of the j. v. m. or anything else really the hundreds of parameters you could tune right
00:11:39
especially on the chip some people ask me after my talk so put it to my database of course you can't
00:11:48
doesn't mean it i'm just using the g. even because i know it but you could do anything you want to get my craft one
00:11:56
so if you wanna know more about the speech an optimisation thing um my colleague wrong key he's the expert
00:12:02
so he if he's given presentations about this to use it that works for example um
00:12:07
yes very soothing voice uh listen to is your troops and you see the
00:12:12
slides he's using and basically the only thing i did is a still that's let's
00:12:17
and and really fully explaining to you how patient optimisation works i'm not an expert in that's
00:12:23
if you innovation optimisation still call me up don't ask questions um
00:12:28
this is just for you to understand the data using later a little bit better
00:12:33
and how we get there alright so we have a parameter that that affects performance cars from negative six to six um
00:12:39
then on the y. axis we have performance hires better we have three actual evaluation
00:12:44
run so that we data points you seen true results this is the actual performance curve
00:12:50
which we don't know right we wanna figure that one out and imitation optimisation does
00:12:55
just assumes a performance curve with some certainty and the blue
00:13:00
area above and below the curve is the circuit you'd see what we
00:13:04
actually have a data point the circuit least zero or the uncertainty um
00:13:09
because we know that's correct right and then overlaid with the actual performance curve looks like that
00:13:15
and i know what it looks like but not really but it doesn't matter right and then what it would be asian optimisation that
00:13:21
puts in a line with the best result you have right now and the blue area above the red line
00:13:28
is the curve at the bottom that's you expected improvement and the
00:13:31
highest point of you expect improvement curve is the value you try next
00:13:37
that's basically what it does so and then we take the one the highest one which is the the second one from the from the left
00:13:44
does want we run into the evaluation we get the data point right and
00:13:47
so it goes through these one of the left now and then there's one
00:13:52
and then we have exhausted that space you're right we we know the performance curve
00:13:57
we have the data points we've exhausted that space we move on to the right
00:14:01
we do that one that was not very good so we do the one on the barry left was also worse and then
00:14:06
we do the one over there we do this one down one and this one and then we found our global up to
00:14:14
that's all what beijing optimisation dust what's it basically what you would do as
00:14:18
a human as well just a machine does it better and faster and cheaper
00:14:24
right that's all that's really all so we lighted non parametric
00:14:28
robust extensible sure but the last point is the most important
00:14:32
ones belt has too many if types of real world high impact problems this thing is around for awhile and it works
00:14:39
and that's what we need so we are not here yet but what
00:14:43
we want is this all between thing to be always on in production
00:14:48
we want our services our hundreds of services to tune themselves constantly all the time and then we need
00:14:56
to rely on on the system we're using right so we using this one because we know what works
00:15:02
okay that was of the robot now let's go to do experiments the numbers in the grass
00:15:08
so what is the other two it's a beige and optimisation of the services like all
00:15:12
that well last please get that have tech training uh try that every conference never works
00:15:18
don't disappoint it this time so uh it it's kind of a joke
00:15:23
but it's actually true it runs as a service inside of tutor and other
00:15:28
other things i actually the basic just vision optimisation services well and we use it for for order
00:15:33
to it's um it's a thing called white lab which is the company we're quite a couple years ago
00:15:39
and it's unfortunately um clothes stores and we can't open source it for whatever legal reasons and it's an enhancement of
00:15:46
a framework all spearman transparent is still as available as open source and get up so if you're interested in that
00:15:53
that's the one part asian optimisation but and then opportune is just the driver to run the experiments you can think of
00:16:00
it as a very you know complicated script that starts and
00:16:04
stops instances of measures things that's that's really all it is um
00:16:10
when right now um it's very hard to use opportunity or there's no
00:16:16
nice user experience or anything but if we have that at one point in time we planning to open source other too
00:16:22
and then you could take that make it work with spirit into
00:16:27
exactly what we're doing right now we run in over on may source
00:16:32
that's what we do right now so if we would open source opportune and you run on
00:16:37
talk our company does whatever whatever you would have to write a let's call it back and
00:16:42
to start and stop instances really that's all you would have to do right
00:16:48
so what is god we talked about this before it's a java virtual machine just in time
00:16:52
compiler actively developer order last as a stand out there walk over an estimate trillion questions um
00:16:58
as an official intriguing project for it but all the work is actually don't get have
00:17:03
used to be on site as i said earlier and it's written job the fact that it's written in java is not important for this talk
00:17:10
but if you're interested in running drawl as off to day
00:17:14
you might wanna watch this talk to them doing how to
00:17:17
use the new j. v. m. to compile in real life because they a few things to know if you run on
00:17:23
jake eleven and i think everyone here is running eleven in production today
00:17:28
yes no right or not um it would be very easy
00:17:33
because craw is as you've seen before because with a u. t.
00:17:37
chip two ninety five we introduce crawling to open to to get so it's in open changes since
00:17:42
not mine says chain get ten that was a chip um that or cassette okay it now with
00:17:48
supported in quotes as experimental should compile so you can just along with a few command line flags
00:17:54
and as off today there are few things you should
00:17:58
know number one is something called bootstrapping because growers within java
00:18:04
but it's the chip compiler so it has to compile itself you know it's gonna matter circle of thing it's it takes a little
00:18:11
bit of start up you know time but it's it's not that bad if you watch that talking we'll see it's not a big issue
00:18:16
uh and then the other thing to know is and that that will change actually in the future there's a project called lip crawl
00:18:23
where um or collapse and or call too peachy you are working on to a. o. t. compound rather self
00:18:29
and link it into lots but then all these things will be gone but as of today that's the state um
00:18:36
since cross written in java member allocations of different right see once and see two written c. plus plus of memory allocations
00:18:43
wanted to check compilations happened with malcolm the native heap but
00:18:47
draw i'll allocates memory on the job you know it's just
00:18:51
things you should know if you run benchmarks it's good to know that
00:18:54
you interpret the numbers correctly uh okay so which parameters that i too
00:19:00
i tuned i picked re in blinding related parameters who knows what a blinding us
00:19:06
alright so in lining is so when you write code and you have as a piece of code
00:19:11
that you know have multiple places that thing you do is you factor this into small met right
00:19:18
the perfectly sane thing to do for maintenance perspective because you're human
00:19:23
but what the to compile it does it does what you did
00:19:26
it may take that piece of code input to back in the places where it's called that's for performance reasons and also to
00:19:33
why even to the view of our compilation unit to make it bigger more didn't know all
00:19:40
as a compiler the more optimal stations we can apply so in line is the mother of all optimisation just because
00:19:47
it widens the view the world view of what you see and then you can apply other optimisation is much better
00:19:55
especially ah escape analysis kick the noses goes hand in hand
00:20:00
with uh uh in line it i'm not going too much
00:20:03
into detail housekeeping houses work could basically it can get rid
00:20:07
of object allocations if the compiler can prove you don't need it
00:20:12
that's the most important thing wide probably so good was calico because colours
00:20:17
check it's allocating all these temporary options that you don't need but it's happening
00:20:22
and the need you and you can control it but draw this taken care of that right so i i tune in
00:20:27
line first one is called trivial in lining size default value
00:20:30
of ten and this means ten notes in the compiler grass
00:20:35
so what compiler does its parses source code whatever that source code is
00:20:39
in our case it's java byte code prices the source code into into
00:20:43
a graph and if that graph as less than ten notes we know it's very trivial method and we just in mind it all the time
00:20:50
right maximum inland says is the other end of the spectrum it's three hundred nodes
00:20:54
if it's three hundred notes we don't invite and then small complied low level graphs ice
00:21:00
it's similar to the second one um but almost every optimising compiler s. c.
00:21:06
two has that brought has it um i. b. m. j. nine some searches
00:21:11
see incurring has something like this as well maybe even more different levels of
00:21:15
compiler grass is usually high level intermediate representation on the lever we intend intermediate
00:21:22
representation the difference is the high level one is high and the low level ones
00:21:29
you understand that's how explain things
00:21:32
no just getting the high level one is closer to the source language
00:21:37
an artist java byte code but could be i don't know basic or whatever you
00:21:41
write of the little level one is closer to the actual machine instructions for the architecture
00:21:47
in our case that would be intel or could be or miss parker whatever
00:21:50
right so that's why what's also three hundred so these are the ones that too
00:21:56
i have another talk uh i gave it will last year it's colour days in new york uh it's
00:22:00
called twist quest for holy crow granted it's basically the story of me working the first idea tweed or
00:22:07
and how we started running for the services on crawl and all the box we found out when all the box we found
00:22:13
if you're interested in that and also how much c. p.
00:22:16
u. we're saving by running on route and so during this talk
00:22:20
um well i was preparing the slides i did the thing that i told you earlier you shouldn't be doing i hand tuned
00:22:28
right so i sat down a few hours on friday afternoon and i tuned it and so i'm
00:22:33
just going to show you two slides out of that talk uh as as as a perspective so
00:22:39
whenever when it did this talk i i still thought change in that would be coming up
00:22:44
and you know that was wrong um but we're looking at t. c. cycles here
00:22:48
this is twenty four hours of the tweed service by the way cheesy cycles on and
00:22:53
by just one unfortunate than with the numbers are really ought
00:22:57
i'm not sure why did it that way but i just running
00:23:00
on grovel and crawl would be the orange one where the green ones
00:23:05
you can see we've reduced easy cycles by about two point seven percent i think
00:23:11
and then i hand two to three parameters you just saw and the
00:23:15
reduced to buy another one point five percent of total of four point two
00:23:19
which is pretty cool but the main thing that i can almost about this user
00:23:23
c. p. u. time but just wanting to twitch service on broad instead of c. too
00:23:28
we save eleven percent of c. p. u. t. o. u. two s. action
00:23:33
that's huge right eleven percent just imagine and then hand tuned it and i
00:23:38
got another two out of it and i was very proud of that too
00:23:41
a little with a few hours two percent very cool probably drink appeared at night
00:23:47
because i deserved that um but you'll see that all that you will kick my ass
00:23:52
to remember to get so this is these are snippets
00:23:56
of mike configuration file um i gave my parameters ranges
00:24:01
you don't necessarily have to do that right you saw operation optimisation
00:24:05
works you could say from wood from him one to one million
00:24:10
but it will figure it out anyway if you have to write constraints
00:24:15
you can do it that way reason why give it ranges is because i
00:24:18
need to finish it in a certain amount of time to fit on slides
00:24:23
it's basically just for the stock and then noticed service really well so i know what ranges of good ranges so i gave it ranges
00:24:31
the test set up that i used um i use a dedicated
00:24:34
machine for instance configuration is very important if you do performance analysis
00:24:40
and you expect improvements in the single digit percentage range you need dedicated machines
00:24:46
because if we do it in the data centre and others things are running on your machine
00:24:52
all the results you seeing as noise so use dedicated
00:24:55
machines and all my instances instances receive the exact same request
00:25:00
not only the same number of requests but the exact thing request it's very important for let's
00:25:06
say servers as the tweed service because that we could be one character longer two hundred eighty
00:25:12
and the memory allocation pattern for the two would be very different but we
00:25:15
wanna compare apples to apples on instances received exact same request and these are
00:25:20
um read only car traffic request so they're not replace or anything it's just like to be that it comes in
00:25:26
if you do it right now that would be running my experiment i would handle your your request
00:25:30
i run this version of t. v. n. z. i. this version of problem not
00:25:33
that important uh we're running before cleared see one cross that who knows what your compilations
00:25:40
yeah that's what i expected um subject you remember when i talked about c. one c. two that there to check compilers
00:25:46
to buy a hotspot does is and i. b. m.'s open g. nine does that to uh you started
00:25:51
interpreting code right and then be compatible c. one as i explained earlier and we can pilot in the way
00:25:57
that we add we compile in additional code that collects profiling information it collects
00:26:03
how often was the method call how often did we you know execute the loop
00:26:09
if else princes it it counts often they were taken a it collects types
00:26:13
that we were seeing at you know bob interface call sites or whatever and then
00:26:18
we take that profiling information and we compile with c. to later after bunch of iterations and
00:26:26
so we stepping through the tears that's your compilation and the thing we're doing
00:26:31
if you turn on brought in let's say open typically lemon we run an eight but
00:26:36
we do that we just replace inter compilation c. two with well we still you see one
00:26:42
because we need that's that um but then the performance will achieve with well are good experiment want to each service
00:26:49
my favourite one affinity goldfish service you guys maybe even know within a ways
00:26:54
um it's inexpensive or p. c. system for the t. v. and used to construct construct high conference's strips
00:27:01
what what i really don't but it doesn't matter because the the thing
00:27:05
that's important to me is that bottom left ninety two percent of stock
00:27:09
right as explained earlier brow can optimise colour could really well so
00:27:15
this works great and almost all of our services are written scholarly only just the
00:27:19
and to to handful of services the written job um
00:27:23
and most of our services up basically navy at the
00:27:27
bottom then finn ego and then the logic of the servers on top that's kind of what they look like
00:27:33
okay my objective our f. what are we trying to do you see that the end user c. p. u.
00:27:40
time so we're trying to reduce user c. p. u.
00:27:43
time and if you remember patient optimisation always look for maximum
00:27:48
but since we all know really smart computer scientist and we know math
00:27:52
really well you know how to solve the problem with one divided by
00:27:56
makes it and then we have constraints or at least one um
00:28:03
you see there's something called robert and if you doing something really really wrong
00:28:08
mazes with throttle you and basically tell you your your bets it is um
00:28:14
if we're going to have all the to always on the production as i said earlier for all
00:28:19
of our services we certainly need more constraints every service on uh have some metrics he looks at
00:28:26
and then you can tell yes most servers runs fine right all the
00:28:29
things that the service on the house would be needed here as well
00:28:35
but only use that one because i know the service really well i was actually monitoring it while
00:28:39
it was running you know so i only use that one um and then you'll see what happens
00:28:44
there's the outcome that's what the run looks like twenty four hours of
00:28:47
tweed requests 'cause the second uh you see lose experiment and or just
00:28:53
control and this is just request the second is utterly they received exact
00:28:57
same request so it's the same curved the slices as thirty minute evaluations
00:29:03
thirty minutes is not very long but again i needed this to finish in twenty four
00:29:09
hours prefer please select i can fit on the slide and then in thirty minutes i know
00:29:15
that the service has compiled all the methods it actually reached
00:29:18
a steady state and the results are meaningful so trust me about
00:29:23
this is the actual outcome so this is a user c. p. u. time you
00:29:28
see every time the little line is the little the orange one that's an improvement
00:29:34
the spikes you see are when the j. v. m. restarts it's
00:29:37
basically the spikes are cheap compilations when we compile the whole thing
00:29:42
we do a bunch of thick and then it it runs for like thirty minutes and then
00:29:47
what auditing gives you is what set the table basically in the stables sort of by objective
00:29:53
and the top one is one point zero eight three eight what that
00:29:57
means is that we reduced user c. p. u. twenty eight point four percent
00:30:02
remember i had two very proud of it eight what we more it is really get remembered the um
00:30:11
the uh improvement we got but just running on
00:30:14
her all instead of c. two was eleven eleven ish
00:30:19
so we get another eight out of it then we have an eight point one six point four six point four or five point eight
00:30:25
right so let's assume the first one of the first two are outliers and
00:30:30
we could expect a six point four ish improvement i would gets at the bottom of
00:30:35
the table looks like that and you can actually see that reevaluation ones by little bit constraint
00:30:41
so we too too far that it didn't work anymore as a set it actually happens and then
00:30:47
does the table and then you see at the top we can also look at the charts and these other charts that's trivially lining size so
00:30:54
take this with a grain of salt because we two three parameters so we're
00:30:59
exploring a three dimensional space right it's very hard to put that on slides
00:31:03
to every data point in here depends on two other values that are not the same
00:31:09
so but at least we get an idea of what's going on
00:31:12
too if you look at that there's a there's a tendency going upwards
00:31:17
before his tan and our best results twenty one and if you look at the curved
00:31:21
we can say yeah twenty one twenty two twenty three maybe that would be a good value
00:31:27
so this one mapped maximum inland size it's that
00:31:32
almost flat they might yes that's like tendency up words
00:31:36
but it's almost flat so it doesn't affect your performance too much that two applies at the top right
00:31:42
but again they might be outliers or not it could be that the two other values
00:31:47
that are for just the the point it's just the perfect a configuration we we don't know
00:31:52
but we don't need to know right we just take whatever vision optimisation gives us and then we use
00:31:59
and this one it's very obvious what's going on here the default value for
00:32:02
this one was three hundred i actually went down to two hundred i think
00:32:06
to see how it affects performance negatively and it certainly does
00:32:09
and our best values with five hundred eighty whatever that is um
00:32:15
certainly should be six hundred almost double of what did the the default is for this particular service right
00:32:21
okay so what i did next was i wanted to see if what ought to
00:32:26
and found is actually true in the real world so i ran a verification experiment
00:32:32
i ran that we'd service for twenty four hours that's it i compared c. to route and then read is crawl with
00:32:39
the bells we just found without just request the second this is cheesy
00:32:45
seconds so that we'd service uses parallel juicy so we use we look
00:32:49
at p. s. karen cycles here and you can see yes when we
00:32:52
run on probably can reduce juicy cycles by roughly three point four percent
00:32:57
you seen similar things before right in this particular one particular run three point four
00:33:03
three point four might not seem a lot but it's actually intact
00:33:07
if you can read do remember was still processing the same amount of
00:33:12
request first for you know request the second basically we reduce the memory consumption
00:33:18
and if you can uh a boy she sees that always means you're latencies improve
00:33:24
so freeman for this is all too and you can already tell well that's
00:33:29
a good improvements three point five up to the tool of almost seven percent
00:33:34
very important here yes very nice run on drawl you will save a lot of money in our case
00:33:41
we do but if you're not tuning your stuff you throwing out a lot of money out of the wind
00:33:46
this particular 'cause we tuned twice as much as we would actually get it up but you thought
00:33:53
okay uses of um no different side um that's the same data just uh in a different graphics be maybe
00:34:00
people understand some people might understand this better it's allocated lights between right how many bytes we allocate but we'd
00:34:06
you see it's pretty flat over the day it fluctuates a little
00:34:09
bit but suites of different lengths but the improvement is the same
00:34:13
right three but five and then with all the tune it's another three point four up to tools that it's it's the same data right
00:34:21
use of you to this particular around twelve percent to imagine the tweed service and we have thousands
00:34:27
of instance of this service we can use twelve percent less machines to process the same amount of treats
00:34:36
just imagine how much money that is i cannot tell you how much it is but it's a lot of weight
00:34:41
wouldn't accept it i don't know how much you get paid but it's more like a so twelve percent or the two
00:34:49
strictly a good improvement remember we expect that that's six point four ish maybe with the two
00:34:54
outliers six point two exactly what what we want to see up to to look at it
00:35:00
so eighteen 'cause unless machines ridiculous that's like we improve the banana fifty percent
00:35:07
this is when you add scale it's just so much money the next thing i looked at was a p. ninety nine latencies
00:35:15
for that which is um you can already see that crawl certainly gives us we better p. ninety nine latencies thing and see too
00:35:22
well hard to tell how much it really is um i only look at two nines here and not
00:35:26
three or four because if we look at three or four you only look at you want us to see
00:35:32
right if you look at two nine c. actually get a like a rough idea what
00:35:36
the real world looks like and this is ninety nine percent of the twitch anyways it's fine
00:35:41
this is all too certainly better than the regular before crawl hard to tell how much of this what i did was
00:35:48
i integrated over the twenty four hours and that's that raft and then we can actually tell what the improvement is
00:35:54
but just running on overall instead of c. to that we'd service we not only reduce
00:35:59
c. p. u. to the station with twelve percent we reduced p. ninety nine latencies by twenty
00:36:05
it's runs better and faster
00:36:09
and then another eight percent on top if we tune it twenty eight percent that means
00:36:16
if you look at twenty right now and i encourage you to do it you will see
00:36:20
it we twenty percent that's a if you scroll really fast you could read twenty percent more treats
00:36:26
i would also appreciate if you would do that alright experiment to social graph um so social graphs also cynical for
00:36:36
service it's a an abstraction for managing maybe too many relationships
00:36:40
that what it's basically involves you and who are you following
00:36:43
and the reason i'm also a do an experiment with this one is because it's basically the same as
00:36:49
as it may be funny go and then a little bit of different logic on top and you'll see
00:36:54
the logic actually uh influences the outcome so we'll
00:36:59
see that although they are very similar objective thing thing
00:37:03
we try to reduce uses you you'd an excuse me um constraints we don't wanna get followed
00:37:09
and this is the wrong different day different graph right you see
00:37:14
nothing new like thirty yeah but i did again thirty minute evaluation once 'cause this
00:37:19
service i know really well i'm like that with service so i'd do it um
00:37:25
that's the result you see improvements you see some that are worse i'm gonna
00:37:29
spike subject compilations and this the outcome the top one is seven point six
00:37:36
that's pretty good improvement then we have a seven point six to seven point two was six point eight in the six point four
00:37:42
so i would guess maybe we get a seven percent improvement that would be really cool if we get that oh
00:37:48
yeah on on the right i haven't mentioned is or you can actually see the values for the parameters that often found
00:37:56
at the bottom of the table we had one run that violate the constraint to look at
00:38:01
the three that are still running that's a bach um but i think we fixed or what
00:38:06
and these other charts it it's not as obvious as with that which service
00:38:10
especially this one that really lining size but you can see a tendency up
00:38:14
and then the best one is again around the twenty three ford was like twenty one
00:38:19
the twenty one twenty two twenty three for the copies we have is
00:38:22
probably a good value this guy again kind of flat slight tendency up
00:38:28
our best ones around the four hundred mark whatever this one
00:38:34
also will not as clear as with that which are was but
00:38:37
certainly a tendency upwards right the important thing to point out in
00:38:40
this slide is the best one we have to six hundred forty
00:38:44
nine or whatever it's and my range goes to six hundred fifty
00:38:48
so i might redo this experiment with either a bigger ranger no range because we might actually get a better result
00:38:55
so again same thing i did a verification experiment france oceanographic twenty four hours
00:39:02
this is the outcome social pressures running c. m. s. and we're looking apart news cycles here and
00:39:09
we reduce but just running on crawl uh instead of
00:39:12
c. too cheesy seconds but don't only one point six percent
00:39:17
was roughly half what we had for that which service and then which one it that's the important thing and you can really see it
00:39:25
three point five we tune it twice as much better than
00:39:31
we get by default so performance tuning is really really important
00:39:35
and the the compute power which is why just everyday use is ridiculous
00:39:40
right so but when we do this we reduce what let's say
00:39:45
that we service what was it eighty percent right eighty percent let's machines
00:39:49
we own our machines that meets eighty percent less machines lotta money
00:39:52
means less electricity means let's cooling i'm trying to save the world will
00:39:59
i'm glad it's warm outside today but still carnage is real um
00:40:04
user c. p. u. time for the social graph service a is five point five it's roughly half
00:40:10
of what we had for that which service that kind of goes hand in hand
00:40:14
a little bit with the have like fifty like half of the a reduction of
00:40:19
cheesy cycles right we had three point i can't remember the number not three point
00:40:23
four i think for that which i wasn't one point six for the social graph
00:40:27
although like half of it the if we can eliminate the object allocations um
00:40:34
we don't a we don't have to allocate it right which takes a lot
00:40:38
of c. p. u. time but more importantly we don't have to collect the garbage
00:40:42
and collecting garbage takes a lot of c. p. u. but if we can't reduce as much
00:40:47
member allocations what happens for that we'd service but can also not save
00:40:51
as much c. p. did you see the kinda go hand in hand above
00:40:56
this is all to an important very important things although we began to me more other
00:41:01
crawl that it gave us by default important to point out he up to toll of thirteen
00:41:06
you might see it if you look at the blue and orange curve
00:41:11
the improvement is actually the same uh it doesn't matter if
00:41:14
the little this little or the lotus high but with opportunity
00:41:18
well it's high the improvement is more as when the load slow this shot and this should not be
00:41:23
the case and i actually haven't had time to get to go back and look why that's the case
00:41:28
but it doesn't really the seven point it a bit of from up here right um that's to pick a pick the best one but it it actually it makes
00:41:35
sense because would be if the little just lowell we have enough machines anyway we
00:41:39
need improvement but a lot of high it that's that's how we size or instance let's
00:41:47
questions
00:41:49
this is a choked like everyone right now should have this question to remember this is
00:41:53
actually an opportune talk gonna talk about crawl all the time you should have this question
00:41:59
right
00:42:01
of course it it right i couldn't come up here on stage and rave about how the wright brothers and then not do an
00:42:07
opportune experiment with c. too so i did that i i try
00:42:11
to pick we're not doing questions right now well huh no we're not
00:42:18
not to them
00:42:21
there was those for dramatic effect
00:42:26
but i'll refer you you'll be first after
00:42:30
um so i think three d. similar parameters for c. to
00:42:36
um one is complex in line level role doesn't have that
00:42:41
but next in line that was very important to talk to someone
00:42:43
yesterday about this um it's basically how to be your in line goes
00:42:48
and this is very important today the value the default values nine
00:42:52
and to day in our world everyone's using a trillion of frameworks right so framework is calling
00:42:58
for a might be is calling framework c. and at nine it stops like does hard stuff
00:43:04
it been sometimes it doesn't even get to the code you wrote and you can you doesn't him and the whole thing so
00:43:09
next in line level next in line size is the same one as
00:43:14
the trillion landing sites the difference here is the thirty five is quite concise
00:43:19
remember for brought it was noted the compiler graft adheres actually bite size if you ever
00:43:25
added a search statements to your code and then suddenly you performance went to shit it's because of this guy
00:43:32
because this one but see to does when you add a search statements
00:43:35
you buy put size increases but see to doesn't discount for such segments
00:43:40
so then suddenly doesn't in line it anymore i know very embarrassing we never sixty them very sort you scroll um
00:43:48
in line small code very all one i was not or cause
00:43:52
someone introduced this guy two thousand means two thousand bytes of native code
00:43:59
so basically the reasoning behind it i think is
00:44:04
if c. to wants to in line method and we have already compile cold for that method
00:44:09
and it's bigger than two thousand bytes the method is very big so we'd rather not in line
00:44:14
that's reasoning behind it it's kinda ought sure it makes a little bit of sense but
00:44:19
um if you know how in learning works if you in line it can a lot
00:44:24
of things can collapse in your compiler graph and you actually get less code than if
00:44:28
you would just come parts and a little to nothing but i try to tune it
00:44:32
okay i gave it ranges again for the same reason um then just run yeah exactly
00:44:41
good job workable slide snow doesn't matter so this to run again thirty minute evaluation runs with c. too
00:44:48
that's the outcome you know we see some uh better some of course
00:44:51
that's the table to the best result we have is five point one
00:44:56
that's pretty cool five point one percent improvement with the compiler that's been around
00:45:00
for many many many years and and highly optimised for all the code out there
00:45:05
i'll certainly take it the next twenty three point eight three point five three point three and a three
00:45:11
so i would argue that the five as an outlier you know especially with with the numbers we've seen before so
00:45:17
let's say we get a three point five percent improvement still that i'd all the tune and remember
00:45:24
in order to talk opportune did it's job it to a compiler that's been around for fifteen years
00:45:32
so well that we get another three point five percent of i would certainly take it if they wouldn't be crawl right because we're
00:45:39
all we really get twelve but before them than another six on
00:45:42
top up to a total of eighteen compared to three e. and half
00:45:47
so that's the bottom of the table uh no constraints well it it doesn't really matter uh and these other charts
00:45:56
but i was very obvious it that's the next line in line level thing i talked about how the inland goes um
00:46:03
you see all the others are not really important perfectly fine
00:46:07
curved default is nine but it should be sixteen seventeen o. eight
00:46:12
and i'm arguing that it should be eighteen or whatever for all the cold up there
00:46:18
because we take nine i don't even know when we would have to go back into the curly history
00:46:23
and see when we did that but let's see was ten years ago and i'm sure it was more
00:46:29
the call ten years look very different than it looks today right it's way more cool way more
00:46:33
frames and stuff so if you if there's one thing you wanted to tune that and increase it
00:46:41
this one next in line size is completely flat that surprised me
00:46:46
because it's basically the same thing as the trivial in line says and if you
00:46:49
remember the charts before they have really uh influence performance but this one completely flat
00:46:55
doesn't affect performance of all this one is also flat didn't
00:46:59
surprise me too much because that's the one that was talking about
00:47:03
so basically the next in line there was the one you you wanna
00:47:06
change if you want to i did not do a verification experiment fussy too
00:47:12
but let's assume we get a three point five percent improvement sure cool
00:47:16
but compared to eighteen inches that oh especially for scholar dot so my summary for all my talks i think i'm
00:47:23
kind of out of print something i'm very simple and it's it's it's the same summary for all of my talks
00:47:30
that's how you turned well if you run on cheeky ten or later basically
00:47:36
level uh then the only thing you have to do is this and then
00:47:40
you replace the two with well and if you're running scholar code and i'm
00:47:44
sure everyone in this room is your an idiot if you're not turning on grow
00:47:49
really a big i've never scenes colour code that ran morsel
00:47:53
of crawl than seem to never would always see improvements um
00:48:00
how long can rent on or i don't even know he you your point i'm fine good still i
00:48:07
i don't think we're the only ones anymore but we're kind of the biggest
00:48:11
company twenty that uses crawling production oh i don't think i even mention this before
00:48:16
the tweed service the source professor is the user service these are
00:48:20
kind of our biggest services in terms of its in size um
00:48:24
and a bunch of like twenty other services run
00:48:27
one hundred percent on probably production for over two years
00:48:31
so every tweak you've seen in the last two years a tweed yourself was you process by code compiled by crawl
00:48:39
any works fine it never trashed it just works fine did you lose any treats things you wouldn't even know if you lost all but you did not
00:48:48
no you would you would you would blame why side let's should
00:48:51
what actually yes the off we'll find him so if it weren't really
00:48:59
really well for us and when we moved to brawl we found a
00:49:04
few bucks in my other talk all talk about them as a set
00:49:07
but we haven't found a bug in more than two and a half years so what i need is
00:49:13
you to run crawl and five bucks it's we
00:49:17
need does the shady all production code that you have
00:49:21
that would take tell you know that the corner cases of the compiler as well we need to all the better um
00:49:28
it doesn't have to be big right um doesn't have to be a huge application whatever it is it
00:49:34
can be a small pet project you're working on give it a shot if it runs better for you
00:49:40
very good if you if it weren't as well as for us maybe at your company and you save
00:49:45
a lot of money i'm very happy for you i'm a nice person i want you to save money uh
00:49:50
especially when i come back next year then just are we save too much money
00:49:53
i'll buy you a beer is that cool thank you cannot sleep at your house so
00:49:59
that's our our role but i i mean that right i want you to save money if you can save it
00:50:05
sure you're probably just is not as biggest weather but your
00:50:09
computer expense is usually a fraction of your company revenue right
00:50:14
so if you compute expensive let's say ten thousand dollars here and you can save ten percent of that a thousand dollars
00:50:20
it's an amazing christmas party you know to do that uh if it crashes for you excellent that's exactly what i want
00:50:29
i wanted to find a but because if it crashes for you and we find a bargain with fix it means
00:50:34
to it doesn't crash and it but it doesn't go down we all would be sad if what would be down
00:50:39
especially i would be set um because then i would have to go back to work and fix it so
00:50:45
please find parts filer don't get have if you find one um if it runs on worse for you
00:50:51
performance wise who also let us know um we wanna know where groused electing compared to
00:50:56
see to it might be difficult if it's um code from your company it's probably proprietary
00:51:02
but maybe you can extract a small test cases something send it to us and we
00:51:06
probably the brawl team will take a look at it to please please please i'm one it's
00:51:12
i'm doing these comedies talks and um yeah talks and other things that i'm doing moving
00:51:19
to it wouldn't run it with on route i do this for reason because i'm want
00:51:23
crawl to become the new defaulted compiler in open g. ticket for
00:51:28
many reasons but the number one reason is you've just seen it
00:51:33
right the improvement especially this community gets there's also improvements
00:51:36
was for java but not as much it's special effects colour
00:51:41
where what rogers chimes and want this to become the new compiler like all like
00:51:46
flower you could mention at the very beginning who's reading this a specific scalloped musicians
00:51:52
he had a little bit of compile experience but he never button chilean were to compile
00:51:58
right c. two is very very complex when i started working on c. to it took me
00:52:04
for years to fully understand our works no one's doing this today anymore
00:52:08
everyone downloads of fucking framework from give up every day and i expect
00:52:13
to understand it in a week right that's not the case with something
00:52:16
like c. too but floppy was able to write compiler optimisation upstream it
00:52:21
two of them already upstream in about three or four months so we need a new
00:52:27
framework basic uh compiler for what we can do these things and it's especially important for
00:52:34
other languages than job i worked that's not important for many years and have a competitive on c. to
00:52:40
another single day not a single hour or single men it did we optimise for any other language and job
00:52:47
right so that's what we're all can give you so much move was colour um
00:52:54
give it a shot let me know how it works weeded out we had nice
00:52:59
ingredient with that that we're all b. m. cache tag or uh put a handle whatever
00:53:04
please give it a shot you can always reach out to me or liberal the n. team
00:53:08
they have an evangelistic will take care of you actually have to one is he old enough
00:53:12
uh ask a trillion of questions if you need help to set it up
00:53:15
or if you don't understand anything ask us what whatever you'll get an answer in
00:53:20
two minutes that's right that's all i had a tweed
00:53:25
about everything you see in here and thank you very much
00:53:29
hi
00:53:36
and since
00:53:37
i was so
00:53:37
rude earlier you can ask a question
00:53:41
or did i answer it in the meantime i don't think it was just a curiosity question uh are
00:53:46
have you ah i'll teach you a
00:53:49
experiments alms yawned ah ah m. c.
00:53:54
ah right um so ought to do as you as you see the wrong key my calling
00:54:00
his talk was already to use it goes all the tomb is already a few years old
00:54:04
and it was developed to tune the cheesy um you can as i
00:54:09
said earlier you can tune anything i've not done it i've actually never try
00:54:13
to tune that you see i might do that next but i'm a compile engine yeah that's what i understand you know what i mean i'm still
00:54:20
and actually the answers and all but i'm might have yeah
00:54:27
other questions
00:54:30
yes
00:54:33
oh oh oh
00:54:38
do you know why is ah expert yeah and it always is and i'm not speaking louder right now
00:54:49
like like a scroll saw factor what's kyle
00:54:52
my style gel um it mainly has to
00:54:58
with what i said earlier with that these temporary objects that are being created i so
00:55:05
i i didn't see it early i wanted to get out of this room without saying it that i knew nothing about so
00:55:12
hi i i cannot if you would force me right now to write hello well
00:55:17
that's how like and you know that's not job i mean i get i i
00:55:22
all i need to speak is java biker right so but what i can tell you
00:55:27
and you know all the whatever nomads and bob about things you use um
00:55:34
there's a lot of immune ability or whatever going on so it has to create all these
00:55:39
new objects all the time right but if you have a big enough view of your source code
00:55:44
then you you can't prove i don't need these objects right and then the compiler can say i'm not allocating these
00:55:51
that's basically and if you watch mike what is quest for all the grow run time talk you see that
00:55:56
that's basically fifty percent of the improvement the other fifty percent cons
00:56:02
mostly, I would guess (I don't have exact data to back this up),
00:56:06
from the better inlining, plus a few other optimizations on the side.
00:56:11
But I think it's mostly inlining, because the inlining implementation in Graal is
00:56:17
just better; it's not as restricted as it is with C2. You've seen it here, right?
00:56:22
You've seen that the things we tune actually affect the performance, and the other things don't do it so
00:56:27
much; that MaxInlineLevel one is a very important flag. That's kind of the reason why Graal is so good for Scala.
00:56:38
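To illustrate the temporary-object point, a minimal, hypothetical Scala sketch of the pattern being described; with deep enough inlining, a compiler with (partial) escape analysis can prove the objects below never leave the method and can drop the allocations entirely:

    // Hypothetical hot method showing the temporaries typical Scala code creates.
    // If map/getOrElse are inlined, escape analysis can see that the Tuple2 and
    // the Somes never escape this method and can replace them with plain locals,
    // so no heap allocation happens at all.
    def midpoint(xs: Array[Int]): Int = {
      val bounds = (xs.head, xs.last)              // allocates a Tuple2
      Some(bounds)                                 // allocates a Some
        .map { case (lo, hi) => (lo + hi) / 2 }    // allocates another Some
        .getOrElse(0)                              // unwraps; nothing escapes
    }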
Other questions?
00:56:40
Are we
00:56:42
over time? We're running out, but we have time for one more.
00:56:47
It's only lunch after this, so...
00:56:51
[audience] Have you experimented with the Zing VM?
00:56:57
Uh, no. You mean Azul's, right? Yeah. So, no,
00:57:06
I've not experimented with it. I promise every time I see the J9 engineers to
00:57:11
try J9, and I've not done that yet either. But you're saying... are you talking about Falcon?
00:57:17
Yes. So, that was before Falcon, but no, we never tried it. I think if you download Zing now,
00:57:26
they got rid of all the other compilers, and Falcon is the default right
00:57:29
now. So Falcon, for the people who don't know: Falcon is the
00:57:33
JIT compiler that Azul Systems is working on, and it's based
00:57:38
on LLVM; it uses LLVM to do compilations. The problem with Falcon,
00:57:42
and I don't want to bash Azul too much here, the problem with Falcon is LLVM.
00:57:49
When I used to work at Oracle and was looking at replacing C2, I was also looking at
00:57:54
LLVM; it would have been an obvious choice, right? But LLVM was always designed as a static compilation framework.
00:58:00
It has some support for JIT compilation, but it was never designed to be a fast compiler.
00:58:06
So the problem Falcon has is that it's very slow; slow in terms of compile throughput, not
00:58:13
the throughput of the code it generates. So Azul has to, you know, jump through
00:58:18
some hoops. They have the ReadyNow technology, they've had this for a while, and they're working
00:58:22
on another technology, they have a marketing term for it, called Compile Stashing,
00:58:27
which is basically where they save the JIT-compiled code, because Falcon is so slow. So I always just...
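A conceptual sketch of the caching idea behind that kind of feature (purely illustrative, not Azul's implementation): persist compiled code under a fingerprint key, so an expensive compile is paid at most once across runs:

    // Purely illustrative sketch of "stashing" compiled code across runs;
    // not Azul's implementation. Artifacts are keyed by a caller-supplied
    // fingerprint so the slow compile path runs only on a cache miss.
    import java.nio.file.{Files, Path}

    final case class CompiledCode(bytes: Array[Byte])

    class CodeStash(dir: Path) {
      private def fileFor(key: String): Path = dir.resolve(key + ".bin")

      def getOrCompile(key: String)(compile: => CompiledCode): CompiledCode = {
        val f = fileFor(key)
        if (Files.exists(f))
          CompiledCode(Files.readAllBytes(f))   // fast path: reuse stashed code
        else {
          val code = compile                    // slow path: run the compiler
          Files.createDirectories(dir)
          Files.write(f, code.bytes)            // stash it for the next run
          code
        }
      }
    }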
