Transcriptions

Note: this content has been automatically generated.
Thank you — there are a lot of people in the audience, and I'm very thankful for that, given how many good talks are in the same slot as this one. Welcome! I have a lot of material that I want to get through, and I'm always pessimistic about how much material I have given how much time I have, so we'll see how things end up. I want to get through as much as possible. As always, if you have a question, I'd rather you raise your hand so we can answer it immediately, because some other person here could have the same problem or would like an explanation as well. That helps everybody, so that as many as possible are up to speed and have the same shared context. So, cool — we have a few more people arriving.
This presentation is about making our Future better, and I think whenever there is a session on Futures you have to have some half-good, half-bad pun in the title, so this is my attempt at doing that. How many of you know who I am? Okay, a fair amount. I've been doing concurrency on the JVM, and in Scala, for basically the better part of a decade, and it's been a fun ride.
I would like to share with you today the latest improvements for Scala 2.13, but before that I think we need to go through how we got here. While doing this presentation I had a really scary thought: that I might be the one who knows the most about Future in the world. So please get involved — have a look at the implementation of things; that would be awesome. I don't want to be the one who knows the most about Future.
I work for Lightbend, and I've been working at Lightbend since before the dawn of time, basically. I was a part of Scalable Solutions, with Jonas Bonér here, and then we fused with Scala Solutions, so we're a Swiss-Swedish fusion, which is pretty cool — and now we're Lightbend. So let's have a look at what Futures are. How many of you have never used Futures in Scala? Not a lot — we've got one or two. Cool. Alright, so then you know this already.
A Future in Scala represents the capability to read a value at some point in time. So it's actually a value detached from time, which is pretty cool, because if we can remove time from the question, then we can program with things that we do not have yet, which is also pretty cool. And a Promise, of course, represents an obligation to write some value at some point in time. This is important: if you have a reference to a Promise, how do you know what you're supposed to do with it? Well, if you have a reference to the Promise, somebody has given you the obligation to complete it. So unless you hand off that obligation to something else, it's very likely that if you do not complete that thing, something will not happen.
One question that arises quite frequently on Stack Overflow, on Twitter, and everywhere else is: why aren't Futures cancellable? Well, if Futures were cancellable, it would mean that anybody having the capability to read a value would also have the capability to write to it — they would have the capability of cancelling it. So if you've created a Promise and handed out Futures to that Promise, and then somewhere somebody with one of those Futures decides to cancel it, they cancel it for everybody. That would make it really hard to reason about what you could be doing with your Future — would you need to, like, copy it somehow and pass a copy around? It would be really hard to reason about, so it's really good that Futures are for reading and Promises are for writing.
A long while ago now, we introduced Futures and Promises in Scala 2.10, via Scala Improvement Proposal 14. You can read the link down below if you want to have more background on it, but I think the main idea was to get a standardised set of interfaces, so that we could have maximum interoperability for Scala applications. There were numerous promise-and-future implementations at the time, and everybody would essentially have to write converters between all of them, and that was not feasible. Also, of course, it is not enough to just have the interfaces — you need to have some sort of canonical implementation, otherwise you would end up not knowing which implementation to use. So there is an implementation in the standard library, so that as many as possible can use Futures and Promises without going outside the Scala standard library to do so.
I don't know if you've thought about it, but essentially with Futures you have three distinct APIs rolled into one. There is a polling API: you can check whether a Future has been completed, or whether there is a value at this moment — but that's essentially doing polling. There's also a callback API. We've deprecated onSuccess and onFailure, mainly because you can do future.failed.foreach if you want to just react to the failure, and foreach is basically onSuccess, so you can just use that instead. But beyond polling and callbacks, we also have transformations — transformations that you have probably used many, many times, like mapping over a Future, or recovering from a failure in the Future. So, three distinct APIs in one. Sometimes you need one of them, other times you need another, so they're all necessary — some for interoperability, and some for just having clean code, for having something that you can reason about.
So let's have a very trivial example of using Futures and Promises. First we need to create some sort of Promise, and we need to obtain the Future associated with that Promise. Then, if we are to be detached from time, we need to be able to operate on the value even though we don't have it yet — map, in this case, will upper-case the value once it's available. Then we complete the Promise with a string, and we return the Future that represents the transformed value. In this case it's a very roundabout way of doing Future.successful of an upper-cased string.
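In code, the example looks roughly like the minimal sketch below (the function name and string are illustrative, not the slide's exact code):

```scala
import scala.concurrent.{ExecutionContext, Future, Promise}

// A roundabout Future.successful(s.toUpperCase):
def roundaboutUpperCase(s: String)(implicit ec: ExecutionContext): Future[String] = {
  val p = Promise[String]()              // the obligation to write a value
  val f = p.future                       // the capability to read that value
  val transformed = f.map(_.toUpperCase) // operate on the value before it exists
  p.success(s)                           // fulfil the obligation
  transformed
}
```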
But there are quite a few things involved here that we haven't talked about yet: ExecutionContexts. How many of you know what an ExecutionContext is and does? Basically two thirds, if not more. That's great, because we need to figure out where things are executed, when they're executed, and how they're executed.
If we want to decouple the description of some transformation, then we also need to decouple what executes it. An ExecutionContext is a way of doing that: we detach things from time, and we detach things from the underlying implementation, so you only need to worry about having an ExecutionContext in scope. There have been quite a few discussions about when you should be passing an ExecutionContext, or about there being so much overhead in passing ExecutionContexts around. But the thing is: if you aren't able to say where things are executed, then how can you reason about the program? It's just magically executed somewhere, as if there were infinite resources — and that's not reality. So we want a very disciplined way of saying: this is what I want to do, and here is where it happens. Because otherwise it will happen either on the thread that completes the Promise, or — if it was already completed at the time you add the callback — on your own thread, and you don't know which: when I add a callback, am I also going to be running this code? Maybe I don't want to be running this code. Or perhaps the thing that is completing the Promise is not equipped to do the heavy lifting that this transformation is doing. So there is a single thing that controls this, and if you want to think about it, it's really an interpreter: the ExecutionContext interprets the transformation at some point. It's just that the thing to be interpreted is a Runnable — it's not super duper advanced.
On the topic of ExecutionContexts, I think it's very common that people don't know how they should be spreading their ExecutionContexts in their code. Like: should I be passing one in, or should I create one? How do I manage ExecutionContexts? So I thought I'd just throw this in here, so that you know what my recommendation is. If the component itself should control execution, then you have some method within that component that returns the ExecutionContext — if it's under the component's own control, it can instantiate it or whatever; it's its own thing, and it shouldn't be passed in. If, on the other hand, the instantiator of the component should control execution, then of course it needs to be some sort of constructor parameter, so that you can pass it in. And last but not least, it's not uncommon at all that you want the caller of the component to specify the ExecutionContext — then, of course, you make it an implicit parameter of that method. The sketch below shows all three. These are my recommendations, and they've served me quite well: it's rather easy to reason about, and there's not much conflict.
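Here is a minimal sketch of those three placements (class and method names are illustrative):

```scala
import scala.concurrent.{ExecutionContext, Future}

// 1. The component itself controls execution: a method returns the ExecutionContext.
class OwnsExecution {
  private def executionContext: ExecutionContext = ExecutionContext.global
  def doWork(): Future[Int] = Future(42)(executionContext)
}

// 2. The instantiator controls execution: a constructor parameter.
class InstantiatorControls(ec: ExecutionContext) {
  def doWork(): Future[Int] = Future(42)(ec)
}

// 3. The caller controls execution: an implicit method parameter.
class CallerControls {
  def doWork()(implicit ec: ExecutionContext): Future[Int] = Future(42)
}
```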
The global ExecutionContext is the only one that comes out of the box — the only one that is already there — and it just works. It's currently based on the ForkJoinPool in java.util.concurrent, and what's good about that is that it has a means of doing managed blocking. It's also fairly performant, and it handles most cases rather well. There is no generic, optimal solution to thread scheduling or task scheduling, but it's a really good compromise, and it can take evasive action when there is blocking involved, in order to keep liveness and throughput up — so you don't necessarily make everything grind to a halt if you use blocking.
What you might not know is that it's possible to configure the global ExecutionContext. How many of you have changed this configuration in your applications? A fair number of people. For those who haven't: I really recommend having a look at this, because it's really hard to provide the best default — there's no such thing as the best default, there's just a compromise — and the standard library doesn't know what your application is doing. You are, of course, going to know more about what your application is doing than I am, so if you tweak those parameters you might actually get better performance, or better fairness, in your program. It's worth having a look at.
Something which is fairly common as well: if you have poor performance — how many of you have increased the number of threads? I will raise my hand too. It's very infrequent that that will actually help anything, because you're increasing contention, you're increasing scheduling overhead, and if you're essentially blocking, then you're not doing anything useful. So if you have poor performance and you might be CPU-bound, it might actually be better to lower the number — to, say, a 0.6 factor, so you use sixty percent of your available cores. The reason is fairly simple: when you run on a platform that does garbage collection, there are garbage-collection threads that also need to be able to run, so if your application is starving the garbage collector, then of course things are going to take longer. So it might actually make sense to lower the number of threads, just to see how the application behaves with that change. I've seen cases where there were something like six hundred threads, and I don't think that performs as well as doing something else.
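The global pool is configured via JVM system properties. A hedged example — these property names match the 2.13 standard-library sources, and the `x` prefix multiplies by the number of available cores, but check your Scala version's documentation:

```
# Size the global pool at ~60% of available cores, and cap the extra
# threads that managed blocking may spawn (256 is the current default):
java -Dscala.concurrent.context.numThreads=x0.6 \
     -Dscala.concurrent.context.maxThreads=x0.6 \
     -Dscala.concurrent.context.maxExtraThreads=256 \
     -jar app.jar
```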
How many of you know what the maxExtraThreads setting does? Anybody? Yes — it limits the number of threads that are spawned when you do managed blocking. Sometimes, if you have lots of blocking occurring, you want to be able to maintain the system's responsiveness, and that means you have to spawn a new thread that sort of takes over for the current thread for as long as it's blocking. Of course, you can't just keep on doing that, because then you would run out of threads and the application would completely crash. So there needs to be a ceiling to this, and the current ceiling is 256 threads. Once you go beyond that, it will not create more threads for you. But it's a good thing to have — it helps smooth out some of the situations where there are a lot of concurrently blocking things.
So how do we know that something is blocking? Well, we don't — you'll have to tell us. There are two ways of doing that: one is Await, and one is to use the blocking construct, and blocking is defined in scala.concurrent. You can just write blocking, then a block, and do the thing that blocks within it, as in the sketch below. The reason we did this — and there have been both complaints and also some appreciation — is that when blocking is syntactically obvious, it's easier to reason about the program. Because if you call something and you don't know that it's blocking, then how will somebody who does the code review know? If you can tell that it's blocking by reading the code, then it's much easier to reason about, and somebody can say: hey, I'm really sorry, you can't block here — it just won't work. Or you disallow it: say we don't allow calling Await.result in our code base — then there's a single method you essentially check for. So it's also a means of doing more analysis on your code.
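A minimal sketch of the construct (the file path is an arbitrary stand-in for any blocking call):

```scala
import scala.concurrent.{blocking, ExecutionContext, Future}

implicit val ec: ExecutionContext = ExecutionContext.global

// Marking the call as blocking is syntactically obvious, and it lets the
// global pool's managed blocking spawn a temporary replacement thread.
val contents: Future[String] = Future {
  blocking {
    scala.io.Source.fromFile("/tmp/data.txt").mkString
  }
}
```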
How is this information — that something is blocking — transferred to the ExecutionContext? Well, we have something called BlockContext. I won't go through the code here, but I think it's worth knowing about. For the global ExecutionContext we implement BlockContext, and what that BlockContext does is wrap everything in a ForkJoinPool.ManagedBlocker, which is the construct that the ForkJoinPool uses to do managed blocking. And what you could do — I mean, I've done it — is create a BlockContext that does a lot of fun things, because you're able to intercept a chunk of code that is about to block, and instead of running it you could just say: nope. I haven't seen many uses of this in practice, but I think it's useful to know that it's there.
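For example, here's a hedged sketch of a BlockContext that vetoes blocking within a region of code (BlockContext.withBlockContext and blockOn are the real scala.concurrent API; the object and method names are mine). Note it only intercepts code that announces itself via blocking or Await:

```scala
import scala.concurrent.{BlockContext, CanAwait}

// Intercept would-be blocking sections and refuse to run them.
object NoBlocking extends BlockContext {
  override def blockOn[T](thunk: => T)(implicit permission: CanAwait): T =
    throw new IllegalStateException("blocking is not allowed here")
}

def withoutBlocking[T](body: => T): T =
  BlockContext.withBlockContext(NoBlocking)(body)
```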
However, sometimes blocking is unavoidable. Of course, my recommendation is always: don't block. But sometimes you have no choice: you're calling some third-party API that you're not in control of, or you have to return a strict result because you're implementing some sort of interface that only deals with strict results, so you can't return a Future. On the other hand, I think there is a class of blocking that doesn't get any attention at all, and that is very CPU-intensive operations — not blocking in the sense of switching the thread's state; it's just going to take a long while to get through. From the perspective of the runtime, for all intents and purposes, it is blocking: it is preventing progress on this thread of execution, and nothing else can run while this thing is running on this thread. So for some things that you know are expensive, it might make sense to have a look at what the application looks like if you add a blocking construct around them — or even just for documentation: this thing can be extremely costly, and we don't want it to risk tanking the entire system.
The old implementation of Future and Promise, in Scala 2.12 and earlier, relied really heavily on this combination of having onComplete callbacks and having Promises complete things. It's a very powerful model — you can implement all the combinators using it. But if you do that, which is what we had, then you have callbacks completing promises all over the place, and you get some issues. So let me go through this.
Here's how map — at least in 2.12 — used to be implemented. We use transform. transform takes a function from Try to Try, and in this case the Try will be either the Success or the Failure of the current Future; map, of course, only applies the function if it's a Success, and then it returns another Try. Roughly as in the sketch below. So this is how map is implemented — well, it doesn't really do anything, it just delegates, right? You can't really say "oh, it's super easy to implement map", because this is not implementing map, it's just using something else to implement map. So let's really implement map — which means implementing transform.
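A sketch of that shape (a simplified stand-in trait, not the real scala.concurrent.Future source):

```scala
import scala.concurrent.ExecutionContext
import scala.util.Try

trait SimplifiedFuture[+T] {
  def transform[S](f: Try[T] => Try[S])(implicit executor: ExecutionContext): SimplifiedFuture[S]

  // map delegates to transform; Try.map only applies f on Success,
  // so Failures pass through untouched.
  def map[S](f: T => S)(implicit executor: ExecutionContext): SimplifiedFuture[S] =
    transform(_.map(f))
}
```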
So now we need to implement transform, and here it gets a bit more difficult than the previous example. We need to create some new Promise — DefaultPromise is the default implementation in the code base. Then, on the current Future: when the current Future gets completed, we take the result, and we complete the Promise that we created with the result of the function f having been applied to it. And of course f can throw exceptions, or even Errors, so we need to wrap this in a try-catch, and we need to make sure that we only capture failures that are non-fatal — because if there are fatal errors, we don't want to hide them somewhere.
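A sketch of that pre-2.13 shape (Promise() stands in for the internal DefaultPromise, and it's written as a free function rather than a method on Future):

```scala
import scala.concurrent.{ExecutionContext, Future, Promise}
import scala.util.Try
import scala.util.control.NonFatal

// A callback on `future` completes a separately allocated promise.
def transform[T, S](future: Future[T])(f: Try[T] => Try[S])
                   (implicit executor: ExecutionContext): Future[S] = {
  val p = Promise[S]()
  future.onComplete { result =>
    try p.complete(f(result))
    catch { case NonFatal(t) => p.failure(t) } // only capture non-fatal throwables
  }
  p.future
}
```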
So — any spontaneous reactions to why this is not a good idea in practice? How do you deal with the fact that there is no direct connection between the callback and the thing that gets completed by the callback? What happens when it's interrupted? A Runnable only has run — you can't really say "oh, I was interrupted"; there's no way to pass that information onwards. And when you complete the Promise and you're trying to submit the callback to the ExecutionContext — what if there's a RejectedExecutionException? What do you do? I got an exception, and I have a Runnable — what do I do?
But also: what is the overhead of this implementation? Clearly you need to allocate several closures, or thunks, when you do this. You need to allocate the Promise, you need to allocate the callback, and you probably need to allocate the function as well, because we implement map via transform, so we need to create the transformation function. So there are quite a few allocations. And when you look at that closure in your callback: how easy is it to reason about what gets captured within it? Not very easy, right? Since we use a closure, it becomes harder to reason about what gets captured by the compiler. As you're doing this, you might not want to capture everything, or you might want to store things away beforehand, because you don't want to hold on to memory longer than you need to. This way of implementing it is actually tricky from that perspective. There have been a couple of cases where it was not apparent that something was being held onto — not necessarily memory leaks, but a lot of memory just hanging around until things got completed, as well.
So, for Scala 2.13: what should we do? What has changed? Let's just start with something really light. How many of you have used the apply method on Future — like, Future, parenthesis, do something? A lot of people. It's a super useful thing; people use it even in cases where they shouldn't be using it. But it's interesting to know, I think, that the apply method is implemented in terms of Future.unit, which is a statically allocated Future that is essentially a successful Unit. We map over that, do nothing with the Unit — because we don't use it for anything — and just evaluate the body expression. This is how apply is implemented now.
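That shape, as a sketch (named futureApply here to keep it standalone; in the library it's Future.apply):

```scala
import scala.concurrent.{ExecutionContext, Future}

// Map over the statically allocated, already-completed Future.unit,
// ignore the Unit, and evaluate the by-name body on the executor.
def futureApply[T](body: => T)(implicit executor: ExecutionContext): Future[T] =
  Future.unit.map(_ => body)
```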
Now, how does that work when you have an expression that yields a Future? Future(expression) is fine for plain expressions, but if you have an expression that returns a Future and you do this, you get a nested Future, and then you'd have to flatten it, for instance. So in 2.13, Future.delegate has been added to handle this case properly — because sometimes, when you have an expression that evaluates to a Future, you don't know whether that expression is going to throw exceptions. If you want to defer — essentially delegate — the execution of that expression to an ExecutionContext, and get the response as a Future[T] instead of a Future[Future[T]], then delegate will do that for you. It's interesting to note that this is just map and flatMap being used, but it's super useful: I've seen a lot of cases where expressions returning Futures are evaluated in place rather than being transferred somewhere else.
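A short usage sketch (lookup is a hypothetical function that returns a Future and might throw before producing it):

```scala
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

def lookup(key: String): Future[String] =
  if (key.isEmpty) throw new IllegalArgumentException("empty key")
  else Future.successful(key.toUpperCase)

// Future(lookup("a")) would give a Future[Future[String]];
// delegate defers the whole expression to the executor and flattens it,
// turning a thrown exception into a failed Future.
val result: Future[String] = Future.delegate(lookup("a"))
```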
However, going through the stuff that we wanted to fix: we want to be able to handle interruptions really nicely. We want to have a consistent path for fatal exceptions, so that there aren't different paths that handle fatal things differently — you don't want that to be special-cased all over. We need to be able to deal with RejectedExecutionExceptions: it's very common, when you shut down a thread pool, that work submitted to it starts to generate RejectedExecutionExceptions, so you might have a phase where certain things just get dropped if you don't handle them. And I've seen a lot of cases where people implement their own ExecutionContexts that execute synchronously, and they're always buggy, so I wanted to fix that. And of course — who doesn't want to improve the performance of stuff? We just want to make it fast.
We can solve three of these things with something called Transformation. It takes the onComplete callback and the Promise, and essentially makes a Transformation, which is both a callback and a Promise at the same time. Now we have that link between the two, so we can deal with the interruptions, we can deal with fatal exceptions in the same way all the way through, and we can deal with the RejectedExecutionExceptions. So that's nice.
But also: I've seen a lot of implementations of calling-thread ExecutionContexts over the years. At first I tried to discourage it — "actually, don't do that" — but people want to do it. Then I started ignoring it, and then I figured: well, if they're doing it anyway, I might as well add something which isn't broken, at least. So I want to warn you: don't use it unless you really know what you're doing. But if you do know what you're doing, use this rather than rolling your own. It's called parasitic. I think it has a negative enough connotation — when you review somebody's code and it's all "parasitic", that doesn't sound nice — and I hope it gives the right weight to the thing. Yeah, it's here, and I'm dead serious, one hundred percent serious. This is what happens when I'm involved in naming things.
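ExecutionContext.parasitic is part of the 2.13 API; usage looks like this sketch (the transformation must be cheap and non-blocking, since it runs on whichever thread completes the future):

```scala
import scala.concurrent.{ExecutionContext, Future}
import scala.concurrent.ExecutionContext.Implicits.global

val f: Future[Int] = Future(21)

// Run a cheap, non-blocking transformation on the completing thread,
// avoiding a context switch and a submission to a pool.
val doubled: Future[Int] = f.map(_ * 2)(ExecutionContext.parasitic)
```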
So we fixed the first three, plus the buggy things. Now, if we want a faster Future implementation, what do we need to do? First we need to go through performance optimisation 101. We want to reduce allocations, we want to reduce indirections, and we want to reduce contention — especially since Future is a concurrency-coordinating data structure, we want to reduce contention on shared resources and on the memory bus. And of course, when you optimise, you don't want to optimise too much for the really rare case, because that would just be a waste of time — you want to optimise for the really common cases. And you don't know whether you've succeeded if you don't measure and verify, so I've been using JMH for doing the verification, and sbt-jmh to actually run it.
And we have some constraints: we can't just optimise however we want — like, make it super fast and not care about anything else. We want to preserve liveness and we want to preserve fairness, so that things keep executing and nothing can stall everything else for as long as it wants. We want to preserve compatibility — this is a standard-library thing, so of course we want to maintain compatibility. But we also want to make life easier for ourselves and for our users, to keep both the implementation and the use of the code maintainable. Those are the constraints that we need to work within.
And we have some limitations that we've chosen ourselves. Future, Promise, and ExecutionContext are traits — they're interfaces — because we allow you to implement your own; we allow there to be multiple implementations. And invokeinterface is always going to be slower than invokestatic, so that's a performance limitation that we've chosen from the outset. There's also our use of atomics for doing the thread-safe memory management. If a Future didn't need to deal with concurrency, then it could be much faster — but then it wouldn't be a Future, it would just be something else, like a var, and that wouldn't be super useful. Also, we can't do any automatic op-fusion, because we have a strict, eager thing: you could have handed out a reference to your future before we add a transformation, so we can't just automatically fuse operations. However, you can do manual fusion: if you transform, you have a Try, and you can do map, filter, collect on the Try itself, and then those operations are fused — by you. It isn't even more code, because it's basically the same; see the sketch below.
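A minimal sketch of that manual fusion (illustrative values):

```scala
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

val f: Future[Int] = Future.successful(21)

// Unfused: two transformations, so two scheduled steps and two promises.
val unfused: Future[Int] = f.map(_ * 2).filter(_ > 0)

// Manually fused: one transform, with map and filter applied to the Try itself.
val fused: Future[Int] = f.transform(_.map(_ * 2).filter(_ > 0))
```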
And of course we have feature stability: we don't just want to drop features for the sake of performance — like, "oh, Future would be so much faster if it didn't have flatMap". That's a no-go, right? We can't sacrifice features for performance. So how do we do this, if we want to increase performance? Well, here's the resulting recipe — and I mean, I know chefs... ask anybody, I am no chef; I will go to a restaurant, I will order in — but this is the recipe that I used to do the performance optimisation.
We need one bucket of these Transformations. We need one tablespoon of improved linking — a really important piece, very unknown, but necessary; it's like one of those ingredients where you don't know what it does. Like bay leaves: what does the bay leaf do? Nobody knows — just add a bay leaf. We also need the improved BatchingExecutor — that's also an implementation detail, but I'm going to get to why it's important next. And of course we need to improve the callback management, clearly. And once you go through the code, you season generously with all the micro-optimisations you can come up with — that's just what you do. And of course you need to bake the thing; remember, it can get hot, so I do it in a well-ventilated area with AC.
So: with the Transformation, we have something which is both a callback and a Promise. That reduces the number of allocations — we don't need to allocate two things, we just allocate one — and it also reduces indirection, because it's the same thing; you don't need to jump between objects. And I found out that it's better for performance, in some cases, to manually encode your own dispatch table. If you have lots of permutations of transformations — like a map Transformation, a flatMap Transformation, all these things as their own classes — then all the invocations become megamorphic. It's actually faster to do the dispatch yourself, for this specific case.
So I did that. And then linking: linking is when you link a future to another future. Essentially, if you're doing recursive future completion — this future will be the result of that future, which will be the result of that other future, that sort of nested thing — then we link the futures together, so that you can complete the first thing that you created. Because when you create futures, remember, you need to hand something back before the thing that you created is done, so there's always a link between something that you handed out first and something that will be completed at some time. Those are called links, and once you have a long chain of links, clearly you don't want to keep all of that in memory — you want to compress those links as fast as possible. So we're doing very aggressive link compression, and once a future is completed and has a value, we want to remove the link entirely, so that it behaves as if it had been a normal promise in the first place. So we unlink things as soon as the link is no longer needed.
In order to do this — and here the parasitic thing comes back — for a synchronous ExecutionContext, we want to execute things synchronously but safely. The reason the typical calling-thread ExecutionContext is not safe is that it will blow the stack: it doesn't do any trampolining, so if you use the same thing recursively, it will blow the stack. If you want it stack-safe, but you also want it to be fast, then you need some way of doing that, and BatchingExecutor is that thing. parasitic uses BatchingExecutor to make it as fast as possible for the synchronous-execution use case, and the strategy is: up to a certain number, just execute on the call stack, and once you go beyond that threshold, you allocate a heap-based stack that you then fill up and drain. You grow it lazily, because you want to avoid doing allocations when you're in the sort of hot loop that spins on the CPU, and of course you want to optimise the hot path all the way through. So that's what it does.
For the asynchronous ExecutionContexts, like global, it's also extending BatchingExecutor now. You want to do almost the same thing — grow capacity lazily and optimise the hot path — but what it does is submit locally, on the current thread, so it avoids going through the submission queue of the ForkJoinPool unless it has to. And there is a bit of a resubmit interval: if you process X number of things, it will schedule itself onto the submission queue — it's just like the throughput setting in Akka's dispatcher. And it's possible to essentially check whether something should be batched or whether it should be submitted to the submission queue directly, steering control over where stuff runs. It also hooks in a BlockContext: if you do blocking in this thing, where you've batched work onto your local thread, you would prevent the progress of everything that you've kept in memory — even though the ForkJoinPool does work-stealing, it can't steal the current batch. So what that BlockContext does is: if you do blocking — Await, or the blocking construct — while you're running one of these batches, it will take the batch and say, okay, we'll put it onto the submission queue, so someone else can continue. So it provides a little bit of safety around this.
And of course we need to store callbacks, so we improved that too. We have a custom list, because you don't want to be allocating the list separately from the callbacks, so we have our own. And we found out that it's very common for a Future to only have a single callback, and you don't want to allocate anything unless you have to — so in the case of one callback, it's just the single Transformation, and you can just run it; there's no extra structure there. And we can exploit some facts about callbacks — they don't have any interdependencies while they're on the same Future, so you can reorder them however you want. So, let's get to the fun stuff — we have five minutes left. Are you ready? (Yeah!)
Come on, come on! Alright. So, for clarity: this is the machine, and this is the software it was running on, and the settings — just for understanding. If you look at the link below, there's a link to the benchmarks that I ran, so you can run them on your own machine. So let's have a look. Of course, all the caveats in the world apply here: this is a synthetic benchmark, your mileage may vary, and so on. But both sides are synthetic, so it's interesting from that perspective — we're comparing something synthetic against something else synthetic. Alright.
So, onComplete: "post" is when you complete the future after you've added the callback — you've already added everything, and you complete at the end — and "pre" is when it's already completed when you start. That way we can separate out any performance differences between the two. onComplete is between 100 and 200 percent faster, give or take, and if you need more than twenty to forty million operations per second, you should send me an email and we can look at it. filter: basically between 120 and 240 percent faster. flatMap has been very much improved in raw performance over the old version: between three and five times faster. map: a more moderate 124 to 200 percent faster. recover: almost five times faster — well, between three and five times faster. recoverWith used to be rather slow, but it's now between four and seven times faster. transform, which is a very important operation — because you can do a lot of stuff with transform — is between 130 and 200-ish percent improvement; note that's not 24 percent faster, it's 124 percent faster. transformWith, also really useful when you do composition with stuff that returns Futures: between three and five times faster. And zipWith, extremely useful when you want to combine Futures and apply some transformation: it's not that much faster, because it involves more work — you first have to join them, then you need to apply something — but it's 77 percent faster.
And what I think is most important is what I call the "various" benchmark. The various benchmark does a map, followed by a flatMap, followed by a filter, followed by a zipWith, followed by a transform, followed by a recover, as its operation chain — something like the sketch below. These are very common things to do: not necessarily in exactly this order, but doing that sort of sequence of different transformations. And that's been improved by approximately 200 percent for common stuff. Remember that this entire chain can be executed more than three million times a second — and that's running on one single core, so imagine you're running on sixteen or so.
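The shape of such a chain, as an illustrative sketch (the real benchmark is the one linked from the slides; the values here are arbitrary):

```scala
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

// map → flatMap → filter → zipWith → transform → recover, in one chain.
val chained: Future[Int] =
  Future(1)
    .map(_ + 1)
    .flatMap(n => Future(n * 2))
    .filter(_ > 0)
    .zipWith(Future(10))(_ + _)
    .transform(identity)
    .recover { case _: ArithmeticException => 0 }
```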
So, just in closing: I think these are some really encouraging results, and I think the future is looking really bright. I hope you have an awesome conference — I'm really happy to have you all here. I hope you learned something today, I hope I got you interested in researching this more, and I would love your feedback on all of this. If you have something that uses Future — something which is performance-sensitive, or anything else — try Scala 2.13.0 out and give me feedback. If there are any regressions or bugs that we haven't caught, just let us know. Tell me what you think. Thank you all for coming.
Thank you for the talk — and maybe we have time for some questions. One minute? Just right.
Q: First of all, great work, thank you. I'm just wondering about how predictable these Futures are to build upon. Just last week I was diving into a Java application using CompletableFutures, and I tried to create a simple application with composed futures, and even in unit tests it was completely non-deterministic — it even changed behaviour if some computation took longer than usual. So I was just wondering: how predictable can this be?
A: It's one hundred percent predictable: it will use the ExecutionContext that you pass to it, period. You pass something in, and it will run there. And if you then map or transform on that future, using the same ExecutionContext, it will run on that one as well. So it's really predictable. And if you have a batching version, like global, it will actually run things nested underneath, so we can take advantage of stuff like that. It's really one hundred percent predictable — which is why the parasitic one is not recommended, because there it becomes a timing question: who completes the promise by which time. But this is one hundred percent reliable in terms of where stuff runs.
Q: I would like to go back to the question of cancellation. I take your point about what happens if you keep handing around the promise — the point about writes is very well taken. But a common pattern I've seen in applications in practice is that there's only one thread, one holder, that takes a reference to the future, and it just wants to abandon the computation. Could a construct like a cancellable future — separate from the standard future, but with an additional cancel operation — be something that would be useful?

A: I'd look at the different options.
If you go to viktorklang.com/blog, there's a blog post on why we ended up with cancellation as we have it, and there are some proposals for how you could do it. When you have more specific knowledge of how something is used, then of course you can say "I can rule out these problems", and then you can use something else. It's really possible to implement your own cancellation construct — I think the rule is: you pass along something that has, like, a function that cancels the thing, and then you can separate the two; we actually ended up doing just that. So have a look there if you're interested in the back story, what we considered, and why we ended up where we are. I think especially something like a Task — a lifted representation where you essentially describe some sort of transformation process — is easier to do cancellation with, because you have much more control over how it is going to be executed. It might also be one of those things where cancellation makes sense on a logical level rather than on an implementation level. So have a look; a sketch of the "pass a cancel function" idea follows below.
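A hedged sketch of that "pass a cancel function alongside the future" idea (all names are mine, and note that cancelling only completes the future early — it does not stop a body that is already running):

```scala
import java.util.concurrent.CancellationException
import scala.concurrent.{ExecutionContext, Future, Promise}
import scala.util.Try

// Returns the future plus a function that tries to fail it with
// a CancellationException before the computation completes it.
def cancellable[T](body: => T)(implicit ec: ExecutionContext): (() => Boolean, Future[T]) = {
  val p = Promise[T]()
  ec.execute(() => p.tryComplete(Try(body)))
  val cancel = () => p.tryFailure(new CancellationException("cancelled"))
  (cancel, p.future)
}
```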
Q: Sorry, just one other question: have you recently had a look at the changes to futures in Java? My understanding is that in Java...

A: I haven't looked at the newest Java versions, because there's a new version every six months.

Q: Are you aware of the difference in speed between the latest Java versions and Scala?

A: I haven't really compared the two, because they have fundamentally different design constraints, so I haven't pitted them against each other — it is a hard thing to do. There are things you can do in terms of shielding yourself from the blocking parts, by having CompletionStage rather than CompletableFuture as your dependency type, because then you don't get the blocking methods. But I'll have a look, if I can come up with some sort of good way of comparing the two.

Q: Okay, thank you.

A: Thank you.
Thank you for answering the questions. — Well, thank you, and again, thank you all for coming. I'm super psyched that you're still here. Have a great conference — I'm happy to be here; I was here at the first Scala Days.
