Transcriptions

Note: this content has been automatically generated.
00:00:00
Cool, let's kick this off. Thanks for coming to my talk, "Fast, Simple Concurrency with Scala Native."
00:00:06
Scala Days is extra amazing this year, and it's an honour to be here with all of you.
00:00:12
So this is a talk about concurrency on Scala Native,
00:00:16
and what that really means to me gets to the heart of what the native platform is,
00:00:22
what makes it different from Scala on the JVM, or even Scala on JavaScript.
00:00:28
Because this is a platform where you start off with less,
00:00:33
to me it's really about Scala as a platform itself. If you're doing Scala
00:00:37
all the way down to the OS level, you have nothing else
00:00:41
in your stack, which can be intimidating, but can also be really exciting. And
00:00:47
to me it's especially about sustainable libraries and communities: how do
00:00:51
we build enough functionality on this new platform to sort of
00:00:55
bootstrap a workable ecosystem, where you can start using it in
00:00:58
production? Concretely, though, this is a talk about scala-native-loop, which
00:01:03
is the new concurrency library for Scala Native.
00:01:07
It's an extensible event loop and I/O system.
00:01:11
It's backed by libuv, a C library that provides the event loop, which I'll talk a lot more about shortly,
00:01:17
and it works great with other C libraries, things you've heard of like libcurl,
00:01:27
and it's really easy to extend. It's here on GitHub; it lives in the
00:01:34
main scala-native organisation, but it's not merged into the core yet; it's still provided as a library.
00:01:37
And with the techniques I'll show, anyone could build their own
00:01:41
concurrency system if they wanted to, which is one of the things that's
00:01:47
most exciting about Scala Native, for me anyway. So what's in this talk? We're gonna start with,
00:01:50
like, a slide or two about concurrency in Scala, just for
00:01:53
background; we're gonna talk about Scala Native, also for background; and
00:01:58
then libuv. Then we're gonna do an overview of the high-level API, and then we're
00:02:03
actually gonna do a deep dive into the execution context itself,
00:02:06
and then conclude by just talking about where we go from here.
00:02:06
So, about me: I'm Richard Whaling. I've been writing Scala for
00:02:15
about ten years or something, but full-time for, like, four or five.
00:02:19
I'm a data engineer at M1 Finance in Chicago. If you haven't heard
00:02:24
of us, we're a fintech startup; we provide banking and brokerage services. We have
00:02:29
about a hundred thousand happy users and manage about half a billion dollars in assets.
00:02:33
We're hiring backend Scala engineers in the US, and if you wanna hear more about it, just
00:02:36
come up and talk to me afterwards; I would love to tell you about
00:02:41
it. I'm also the author of this book, Modern Systems Programming with Scala Native,
00:02:47
which is available in digital preview now from Pragmatic,
00:02:52
and not only that, I actually have a discount code right here for it. And also,
00:02:57
if you buy the beta release, you get the full book: all ten chapters are there now,
00:03:03
and probably around the end of the summer, when I finish the book and revise it for 0.4, you'll
00:03:08
get all the updates in your copy, and you get a huge discount on a combined paper copy.
00:03:14
I'll tweet this out, and I'll show the code again at the end. Yeah, let's get started.
00:03:14
So yeah, let's talk about concurrency really quickly. I'll just
00:03:19
throw out some definitions: programs that do one thing at a time we'll call synchronous,
00:03:24
and programs that do more than one thing at a time we can call asynchronous, or concurrent.
00:03:28
Synchronous programs are generally easier to write, but they do bottleneck, or block, on certain workloads,
00:03:34
whereas concurrent programs can perform better, but can also be fiendishly complex: something
00:03:40
I'd argue Java and C++ more or less got wrong, and
00:03:45
JavaScript probably got right, in some sense, although they
00:03:49
certainly continue to evolve, as do we all. Scala, of course, famously has made
00:03:55
huge advances in concurrency for the whole lifetime of
00:03:59
Scala, really. A big part of it is the
00:04:02
scala.concurrent package, in particular the Future and the ExecutionContext
00:04:06
it provides; it lets you do things more or less like this,
00:04:10
where a Future represents the asynchronous result of a computation.
00:04:15
Ideally there's no mutable state involved; everything's passed around in these Futures,
00:04:21
and you have this implicit ExecutionContext that lets you abstract over different
00:04:24
backends: that can be a thread pool, that can be a dedicated background
00:04:28
thread, that can be your main thread. And this abstraction
00:04:32
over the exact mechanism of execution is something that I still think makes
00:04:37
Scala really special; it's something that Go and JavaScript don't let you do.
00:04:42
And that sort of modularity is what makes this whole talk even possible.
00:04:48
That said, concurrency is still difficult. If you do run blocking
00:04:52
code, it's really easy to accidentally starve the execution context of threads;
00:04:57
it's easy to run into race conditions if you do have mutable state. I've
00:05:01
certainly accidentally closed over sender in an actor probably dozens of times. And
00:05:08
people try to solve this by introducing higher-level models: actors, streams,
00:05:12
a lot of the newer stuff people are talking about here at this conference.
00:05:16
But the problems persist; it's still hard. And I'm gonna argue that the
00:05:21
JVM's threading and memory model is sort of fundamentally hostile to closures
00:05:27
in general, and to some extent that's true of Scala too. And, like,
00:05:33
I don't think it's controversial to just point out that a huge proportion of JVM
00:05:37
libraries will happily block your threads, and there are good reasons not to rely on those.
00:05:42
I saw Rob Norris's talk yesterday; he made this case much more strongly than I ever
00:05:47
could, and I highly recommend it if you didn't get to see it. So the design questions
00:05:52
for us, going into what concurrency should look like on Scala Native:
00:05:57
you know, we can't piggyback on what worked for the JVM,
00:06:01
but we also aren't constrained by it; we can start over from scratch.
00:06:05
So the question for me is whether Scala Native can provide
00:06:10
a model of I/O and concurrency that's true to Scala. But I
00:06:13
kind of hope that it can provide a model of I/O and concurrency
00:06:17
that's more true to Scala than the JVM ever was;
00:06:21
that Scala without the Java-isms of the JVM can be more Scala.
00:06:25
So let's talk about Scala Native for a bit before we go any deeper.
00:06:29
Scala Native is Scala. It's not a
00:06:33
fork; it's a compiler plugin, just like Scala.js.
00:06:36
Instead of targeting JVM bytecode, it targets
00:06:40
the LLVM compiler backend, the same compiler framework that Rust and
00:06:44
Clang use. So it produces
00:06:49
basically compact, optimised native binaries with a small footprint and little memory overhead.
00:06:55
You don't have a JVM, but you do have pretty good coverage of JDK classes, because
00:07:00
we've reimplemented a good portion of the JDK in pure Scala, so ordinary Scala code, for
00:07:06
the most part, just works. But you get other
00:07:10
cool things with Scala Native too that I think are really special.
00:07:13
You get full control over memory allocation, like you have in a C program;
00:07:18
you get structs and arrays and off-heap memory, like you have in a
00:07:22
C program; and you get a C FFI, really, a foreign function interface:
00:07:27
the ability to call into C libraries, and also to pass your own code into them.
00:07:31
I started my career as a C programmer, and I've been doing
00:07:34
C FFIs from a lot of languages for a long time, and
00:07:38
Scala Native's is the best I've ever seen; I really fell in love with it the first time I started using it.
00:07:44
All of this adds up to basically an embedded Scala DSL
00:07:49
with the capabilities of C, with all the power and danger that go
00:07:53
with that, and to me that's what makes it so exciting and powerful.
00:07:57
That also comes with a warning: this is very low-level computing. It's
00:08:02
powerful, but it's also dangerous. And you don't need any unsafe functionality to use Scala Native;
00:08:08
sort of the best practice that we've settled on as a community over the last two years is that
00:08:14
what we want to provide are safe, idiomatic Scala APIs
00:08:18
on top of the low-level code. So really,
00:08:22
library authors, and sort of language maintainers, are really the people
00:08:26
who should have to worry about the unsafe stuff, not end users;
00:08:30
end users should really just see ordinary Scala. But what even is idiomatic
00:08:36
Scala, when you really open this up? Scala Native is
00:08:39
single-threaded, and the JDK isn't complete: there
00:08:43
are things that are missing, like reflection, like runtime class loading.
00:08:48
So you fill in the gaps by using the
00:08:51
C FFI. There are C libraries for everything under
00:08:55
the sun; they cover just about everything the JVM can do, plus a lot of cool things the JVM can't do.
00:09:02
So then the question is, do we take these off one at a time, or
00:09:05
do we try to make a bolder move, to provide the essential capabilities that we need
00:09:10
to write productive Scala code? From there, I think, originally, about
00:09:15
two years ago, I just started experimenting with a C library called libuv.
00:09:19
This is a cross-platform C library that provides an event loop. It's famously
00:09:25
used by Node.js, and was originally spun out of the Node.js project, but
00:09:28
it's also used as a library in Ruby and
00:09:33
Python projects, all kinds of things. It
00:09:37
has great non-blocking I/O capabilities,
00:09:41
and it supports Windows, Mac, and BSD, which is really great. So I think, just
00:09:48
from looking at the label on the can, it's a great fit for Scala Native.
00:09:53
What it does is abstract both over different kinds of I/O
00:09:59
and over the different operating system mechanisms that underlie them. Like,
00:10:03
Linux famously has this epoll mechanism for async
00:10:06
I/O that's quite fast but quite painful; BSD
00:10:09
has kqueue; Windows has I/O completion ports.
00:10:13
But even then, on Linux, epoll works on TCP sockets and
00:10:18
pipes, but it actually doesn't work on file I/O on almost all file systems;
00:10:23
file I/O is always blocking on Linux, which is kind of fiendish. So
00:10:28
what libuv does is it actually has a thread pool, separate
00:10:32
from the main single-threaded event loop, where it will run any
00:10:36
blocking system calls: for things like file I/O, for things like DNS resolution.
00:10:41
In theory it can also run user code, so you can get that thread-pool
00:10:47
functionality for free. And it provides all this with a really consistent API, where a resource you
00:10:53
can do something to, whether it's an I/O resource or just sort of a
00:10:58
lifecycle, is represented by a handle, and then you just
00:11:02
pass in a C function that the C library calls
00:11:05
as a callback when different events happen, essentially.
00:11:10
So the way that works with Scala Native is that our
00:11:13
single-threaded code just runs on a single thread, and we'll be able
00:11:16
to use epoll, et cetera, for high-performance I/O on whatever
00:11:22
operating system we're using; it's totally abstracted for us. We get this pretty good callback API, and we get sort
00:11:28
of production-grade performance for free, and also a lot of
00:11:33
really good patterns for extending this with other C libraries with async capabilities.
00:11:38
I've done curl so far, but there's even, like, sample code out there
00:11:42
for getting, like, Postgres clients and Redis clients and other things integrated.
00:11:48
But the big question is one of design: if we're
00:11:52
using this sort of low-level C library, can we
00:11:56
avoid the callback hell, or pyramid of doom,
00:12:01
pattern that JavaScript code had, you know, five, ten years ago?
00:12:05
So yeah, let's actually talk about what the API looks like, and then we'll cut back to how
00:12:11
we implement something that's more idiomatic. So, first of
00:12:15
all, we're gonna have a real ExecutionContext and real Futures;
00:12:21
I'm not comfortable with a Node.js-style API in Scala. And then pretty comprehensive
00:12:27
I/O capabilities: get all the basics handled. We're
00:12:31
gonna provide an HTTP and HTTPS
00:12:34
client and server, and honestly those are more important to me than
00:12:37
file I/O; when was the last time I wrote a program that output to
00:12:42
a file from a Scala program? I don't even know. I think they're really, like, the price of entry nowadays.
00:12:49
But the goal is to provide all this with a sense of
00:12:51
minimalism and sustainability: the functionality we're talking about here is potentially enormous,
00:12:57
and just trying to get something that works, and gets us to sort of useful, production-grade programs
00:13:03
as quickly as possible, is the most important thing here. The goal for me is not to
00:13:10
design, like, a state-of-the-art API; it's to provide an unopinionated base for the
00:13:16
other ideas that are always rapidly evolving: things like streams,
00:13:20
like the actor model, like the various IO monad implementations.
00:13:25
Which turns out to be pretty tricky, but I'll show you what I've got so far.
00:13:30
So, for the event loop: basically we have a trait
00:13:33
for event loops, which just extends ExecutionContext and adds two capabilities.
00:13:40
It adds this loop extension mechanism, which I'll show you more about
00:13:43
shortly, but that's basically how you register other modules and other C libraries with the loop,
00:13:48
so even third-party libraries can integrate with the event
00:13:51
loop and synchronise with it. But it also exposes the
00:13:55
low-level libuv loop primitive. And then it has this loop.run method, which is how we
00:14:01
actually yield to the event loop and start actually doing
00:14:05
I/O. Unlike a traditional Scala single-threaded
00:14:08
execution context, where we're eagerly executing Futures
00:14:12
as soon as they're ready, instead we basically defer them to
00:14:16
the point in our lifecycle where we sort of run all Future
00:14:20
execution. So this is being, like, medium intrusive, but
00:14:24
we can also talk about ways to streamline that, and I've got a
00:14:28
Scala Native issue open to
00:14:31
basically let the Scala Native runtime hook in
00:14:35
us, or other supplied libraries, that can provide an event loop.
00:14:40
So, for an example of some of these loop extensions and capabilities,
00:14:44
the simplest one would just be, like, a timer: a delay or a scheduler.
00:14:49
So we could have a really simple API like this, where you just give it a duration,
00:14:54
and it returns a Future of Unit after that duration. It can do repeating schedules,
00:15:00
so it can be a really nice primitive for building your own scheduler. And of course, with this
00:15:05
we're not even touching the Future class; the only thing we're doing is supplying an
00:15:10
execution context, so regular map, flatMap, all of the
00:15:14
different callbacks and combinators just work for free.
00:15:19
This was my first attempt at streams; this is what's in the book right now,
00:15:24
but I actually consider this design a failure. I
00:15:27
tried to do something very reactive-streams style,
00:15:31
and what I found was that it was really tricky to model raw sockets, or files, well.
00:15:36
Files in particular have this notion of position
00:15:40
and seekability that reactive streams doesn't model, where
00:15:43
demand doesn't really make sense; whereas sockets are bidirectional, and sort of demand can
00:15:49
flow in both directions and change,
00:15:52
sort of based on the specific protocol implementation.
00:15:56
So that turned out to be really tricky
00:15:59
to do with a sort of purely reactive-streams mechanism.
00:16:03
So when I redesigned this for 0.4, the goal was actually to streamline
00:16:08
it, to simplify, and again to provide this sort of unopinionated base layer
00:16:13
for things like reactive streams, Monix, Cats Effect,
00:16:17
ZIO, and to sort of leave that part to the experts.
00:16:21
So what we've got is much, much simpler; it's much closer to
00:16:26
just a sort of direct exposure of all of libuv's capabilities,
00:16:32
including pausing, which is really important. The idea is,
00:16:36
for now, I'm actually not implementing back pressure on my side, in
00:16:39
the sense that I'm just exposing a pause method that would allow
00:16:44
a layer on top of this to implement back pressure.
00:16:47
But the cool thing is, because this is a totally safe, idiomatic Scala API,
00:16:52
with no C functions or pointer arithmetic, it's a lot easier for someone else to
00:16:57
come in and implement any I/O idiom they want. I think it's quite likely that after
00:17:03
we've gone through the experience of porting some of, like, the IO monads to this, we might
00:17:09
decide to have a higher-level API baked in down the road,
00:17:13
but for now I think this is the right base to move forward rapidly.
00:17:18
It's also basically not typed: strings in and strings
00:17:22
out is sort of what I/O speaks, and
00:17:26
for now I think that's actually safer than
00:17:29
trying to define complicated type signatures.
00:17:33
Likewise, the real question I had here is whether I should even, sort of, bake
00:17:38
Futures into the low-level API signatures, or whether I should work entirely with callbacks,
00:17:44
to allow us to implement, like, an IO monad without having to
00:17:49
bother allocating Futures at all. I think, again, that's something that'll come
00:17:54
out in the wash once we've actually had more experience implementing these things, but it's
00:17:58
a really interesting, and definitely the most experimental, part of what I'm gonna show you today.
00:18:03
In contrast: things like curl, the famous libcurl C library.
00:18:09
It's incredibly comprehensive; ours is pretty, pretty simple by comparison.
00:18:14
It does what it says: what we get is a really awesome, highly
00:18:18
scalable HTTP and HTTPS client. It
00:18:21
does a great job integrating with libuv's polls and timers, and has
00:18:25
great HTTPS support. I don't know if anyone's ever
00:18:28
tried to implement HTTPS from scratch, but it's hard,
00:18:31
so the fact that we get it for free is amazing. It also supports, like,
00:18:35
twenty other protocols: FTP and SCP and IMAP and so on.
00:18:40
My goal though, again, is not to, like, design an API for this myself, but to get support for
00:18:46
sttp or requests, or things like that; I think they do a really, really great job of this,
00:18:52
and then I can just punt on the API design. The harder problem there was whether there is, like,
00:18:57
a primitive, low-level API that we could all agree on, which I don't think we have yet.
00:19:03
In contrast, for the server, the HTTP server
00:19:08
API, we actually have two clear layers
00:19:13
here. We have this imperative server API, where you just serve on a
00:19:17
port and then have a handler, and every time a request comes in, the handler gets called with
00:19:23
both the request and the connection object. The idea is, the connection object is basically a
00:19:28
proxy that provides the capability to respond, and
00:19:32
by providing that capability, rather than taking, say,
00:19:35
a function from request to response, or a function from request to
00:19:41
a Future of response, we can totally abstract over that
00:19:44
and allow a middleware layer above this
00:19:48
to decide what types to plug in, entirely.
00:19:52
And, as I'll show you on the next slide,
00:19:55
if you wanna get started writing a server DSL on top of this, with,
00:19:58
like, routers and stuff like that, that's, like, fifty lines of code or something, with, like, JSON
00:20:03
and stuff like that. Again, in the interest of punting on API design,
00:20:08
I'd be really interested to see if we can even avoid imposing a request and response type.
00:20:14
The ideal thing would be if Scala had a lightweight HTTP
00:20:18
server middleware standard, like Ruby has Rack and Python has WSGI.
00:20:23
Scala has sort-of equivalents, but frankly I think they're way too heavyweight; a lightweight,
00:20:28
Scala-first middleware API design, I think, would be a really healthy thing for the whole ecosystem.
00:20:33
I suggest we get started on it. Any takers?
00:20:38
No? Okay. Right, so yeah: if you have, like,
00:20:43
forty-five minutes, you can build a simple server DSL on top of that that looks more like this.
00:20:48
And I'm actually gonna ship this in the first published version of the library. So you can just get, like,
00:20:54
synchronous and asynchronous request handlers. Again, baking in JSON support
00:20:59
is pretty easy, because we've got
00:21:01
JSON libraries working on Scala Native now, I think.
00:21:07
This router is rudimentary, but there are really good ones we could port over.
00:21:14
Play's routing DSL, for instance, is surprisingly compact and isolated; I
00:21:20
think you could port it over pretty, pretty quickly, and that would be really fun, a huge win.
00:21:25
And because all of the capabilities I've shown you over these last five
00:21:29
slides are all running on the same event loop and coordinated by libuv,
00:21:33
you can mix and match all of these seamlessly. And once you
00:21:37
have a web server that can take asynchronous request handlers,
00:21:41
and a web client that returns Futures, you have the basics
00:21:46
for distributed systems in a modern environment, and it just works,
00:21:51
which gets really exciting. But if we're gonna talk about
00:21:55
distributed systems, we should also talk about performance. So, like, the
00:22:00
first time I gave a talk about Scala Native, two years ago, it strangely
00:22:03
was all about having, like, land-speed-record performance graphs and stuff like that.
00:22:09
What I found is that those aren't super representative of actual application behaviour: in reality, modern
00:22:15
server-side backend applications are much more likely to bottleneck on their backing data store, not on
00:22:21
pushing through HTTP requests. That said, I am load
00:22:26
testing this codebase quite regularly, and I'm seeing
00:22:30
high hundreds to low thousands of requests per second on benchmarks, with a quality of service
00:22:36
trying to do a little bit better than Node.js. But the
00:22:39
real impact is actually service density. Scala Native is seriously lightweight:
00:22:46
all of these instances can be running on less than one CPU core; they take
00:22:50
a hundred or two hundred megabytes of RAM; the binary footprint is often less than ten megabytes.
00:22:55
Versus, like, a realistic JVM, like a Dockerized Play microservice:
00:22:59
you're talking two to four CPU cores or more; you're
00:23:03
talking at least a gigabyte of RAM, realistically, if not two to
00:23:07
four, which is what a lot of my services run at in prod,
00:23:10
and, like, a ten-times-larger disk footprint, if not worse.
00:23:16
And if you're in a real-world scenario, like, let's say you have, like,
00:23:19
clustered microservices, and you have three to five instances of each of those,
00:23:24
then maybe, at worst, two of those services actually run at, like, a hundred percent
00:23:29
saturation under peak load. You've got a system where you have a lot of idle resources,
00:23:34
and you're paying for all of this overhead, the
00:23:38
tax that the JVM sort of takes out of you, and that really adds up.
00:23:42
And I think Scala Native's, like, tiny footprint really makes it
00:23:46
suited, economically, to the modern
00:23:49
style of, like, small distributed services.
00:23:53
Don't even get me started about how great it is for serverless, where you pay for,
00:23:58
basically, megabytes of memory times seconds; it's outstanding in serverless. I wish I could give a whole talk on that.
00:24:05
But yeah, so that's the high level; let's
00:24:08
go deep. We're doing right on time, so let's do it.
00:24:12
So, to do this, we're gonna need two unsafe techniques from
00:24:16
systems programming: we're gonna need pointers, and we're gonna need unsafe
00:24:20
casts. So, at a high level, a pointer is a representation of
00:24:26
the location of a piece of data. It basically represents an address in memory:
00:24:30
an unsigned integer, equivalent to a Long. So, like, a Ptr[T] is the address of a value of some type T.
00:24:37
You can think of it as being like a mutable cell for a value.
00:24:42
It can feel a little ungainly, but there's something about it that's
00:24:45
almost more elegant than Scala vars: if you look at
00:24:49
other strict functional programming languages, like Standard ML and OCaml,
00:24:54
they tend to use, like, mutable cell containers for things like this, and it can be really elegant, actually.
00:25:00
And then an unsafe cast is really just a C programming technique where you have a pointer
00:25:05
to T, and you just make a compile-time claim: this isn't a pointer to T, this is a pointer
00:25:10
to X, I know what I'm doing. And it often works. So you can treat a pointer as any
00:25:17
other pointer type, and when necessary you can just cast the pointer to a Long, and, sort of, you know,
00:25:22
which can be scary, but powerful.
00:25:28
C famously has no generic programming
00:25:32
mechanisms, unless you count macros, and I definitely don't trust C macros. But C
00:25:37
has void pointers: basically, pointers to "whatever", or to "any". And it's really common
00:25:43
for C libraries that are designed for generic programming to just provide
00:25:49
sort of a wildcard void-pointer field in their data structures, where you, the user,
00:25:54
can just sort of throw in a pointer to whatever data structure you
00:25:57
want. And Scala Native will represent these void pointers as just a
00:26:02
pointer to Byte. It's just the address of a binary blob somewhere: we don't
00:26:06
know how many bytes it is, but we know where it is, and that,
00:26:09
as we'll see, is enough. So in practice it looks
00:26:14
like this. If you wanna get a pointer, there's a few ways
00:26:17
you can get one, but the best way is malloc. That's not
00:26:21
a Scala Native intrinsic; that's me calling the C standard library malloc,
00:26:25
literally linked in. And the way it works is, you tell it how many bytes you want; I'm
00:26:29
asking for sizeof[Long] bytes, which is eight, for the record, and what it returns,
00:26:35
a void pointer in C, just says, okay, here's a pointer back to a Byte somewhere.
00:26:41
malloc doesn't even track the type that you've asked for.
00:26:47
So on line 3, this is the first time we do
00:26:49
an unsafe cast: just allocating typed memory requires a cast.
00:26:54
We're doing rawData.asInstanceOf[Ptr[Long]], which then just
00:27:00
asserts, with no computation associated with it, that that's actually a pointer to Long.
00:27:06
The other thing, if anyone's used it: malloc returns uninitialised memory, which is quite scary
00:27:11
to Scala programmers. It might be zeroed, or it
00:27:14
might just have garbage data left over from
00:27:18
the last piece of code to use it. So we wanna
00:27:22
initialise it, and the way you initialise it is just
00:27:26
by setting the value to zero, which is what we're doing on line 6.
00:27:30
The way we both set and dereference pointers in Scala Native is with the prefix
00:27:35
exclamation mark, the "bang" operator. If anyone's done this in Standard ML, it's pretty similar
00:27:41
syntactically. So the idea is, if you use the exclamation mark on the left-hand
00:27:46
side of an assignment, it's an update operator, and it just stores the value into the pointer;
00:27:52
and if you use it on the right-hand side, or just in an expression position, then it's a dereference: it returns the
00:27:58
value stored in the pointer. So, like, on line
00:28:03
10, I'm starting to, like, print these; I can print both
00:28:07
the pointer itself, the Long pointer, as well as its value, which is a Long.
00:28:13
So the ability to distinguish data from its address is something that's basically impossible
00:28:20
when you have a garbage collector to deal with, or in Java,
00:28:24
but it's also immensely powerful. And then, let's say we
00:28:27
wanna update it: we can do that, again, on line 11, and we print it again.
00:28:31
Then, of course, with malloc you're responsible for the memory you're using: no garbage collector will claim
00:28:37
it back for you, so if you don't free it, it leaks. We let it go with free.
00:28:43
But then, if we mistakenly try to read it again after we free it,
00:28:48
we could get a segfault, which will just blow up our program
00:28:52
without a stack trace; or you could accidentally corrupt your data, if you're
00:28:56
not lucky enough to segfault. Pointers are genuinely dangerous, and
00:29:02
while this is incredibly powerful, and great for performance-critical code,
00:29:06
it's also worth being really careful about where in your code
00:29:09
base this lives, and not letting it sort of spread out
00:29:12
over everything; you want this isolated to a few critical sections
00:29:17
where it can make a huge impact. And it's also
00:29:20
worth being very cautious about having this in a large codebase
00:29:23
with a bunch of people working on it um so the other thing you need to uh we need to actually get would be
00:29:30
as we need to use the c. f. f. five and clear bindings so that
00:29:35
Again, the thing that's really amazing about Scala Native for me is how easy it is
00:29:40
to just link to either C standard library functions or
00:29:43
third-party C libraries. You literally just have this @extern annotation on an object,
00:29:48
and then on line four we just have a def qsort, which is just going to have
00:29:52
a signature whose types align with the C standard
00:29:57
library's qsort function, and then say "= extern",
00:30:00
and that's really all there is to it. I'm going to clean it up
00:30:05
a little bit by declaring a type called Comparator, because, if anyone's used qsort,
00:30:10
it basically takes an opaque byte array, and then
00:30:13
it takes a function pointer, a C function pointer,
00:30:17
for the actual function to use to compare items in it, which is how it sort of abstracts over the
00:30:22
structure of the array. And the really cool thing that Scala Native can do, that Go can't, and that
00:30:28
is really nice, is we can pass Scala functions
00:30:33
into this just like C functions. There are also some
00:30:37
limitations on that, though. C functions are unlike Scala
00:30:40
functions in that they're static, right? They don't have lexical scope.
00:30:44
They can access static variables, like object members, but they can't access a member of
00:30:48
a class, for example. So that constrains your design patterns a little bit; you end up
00:30:54
with objects instead of classes in a lot of places, which might feel a little
00:30:58
funky, but that's mostly to make them work safely with C functions.
00:31:04
So, for example, if we want to declare
00:31:07
a struct, a data structure that's legacy style, like
00:31:12
a C data structure, we can declare it like on line two:
00:31:14
this MyStruct is basically declared like a tuple,
00:31:18
and then we can declare an instance of the Comparator function, which
00:31:23
basically works as a single-abstract-method class. Of course, with Scala 2.12,
00:31:29
coming in a month or two for Scala Native, the syntax for this will get cleaner; it'll
00:31:32
just be a lambda, right? But this is how it works in 2.11.
00:31:37
And then once we've done that, like you see on line fifteen, you
00:31:40
can just call qsort and pass it your Comparator, and it just works. And
00:31:46
if anyone wants to see my talk last year for how crazy fast the C standard library
00:31:51
quicksort is: I get really excited about that, but I don't have time to go into it today, unfortunately.
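For reference, here's a hedged sketch of what such a binding can look like, written with the Scala Native 0.4-style CFuncPtr API (the talk's Scala 2.11-era slides used an older spelling); the comparator and helper here are illustrative, not the speaker's exact code:

```scala
import scala.scalanative.unsafe._

@extern
object libc {
  // void qsort(void *base, size_t nitems, size_t size,
  //            int (*compar)(const void *, const void *));
  def qsort(base: Ptr[Byte], nitems: CSize, size: CSize,
            compar: CFuncPtr2[Ptr[Byte], Ptr[Byte], CInt]): Unit = extern
}

object SortDemo {
  // the comparator must be static: it can read object members,
  // but it cannot capture local variables or class fields
  val longComparator: CFuncPtr2[Ptr[Byte], Ptr[Byte], CInt] =
    CFuncPtr2.fromScalaFunction { (a: Ptr[Byte], b: Ptr[Byte]) =>
      val l = !a.asInstanceOf[Ptr[Long]]
      val r = !b.asInstanceOf[Ptr[Long]]
      if (l < r) -1 else if (l > r) 1 else 0
    }

  // sort n Longs in place by handing qsort our Scala comparator
  def sortInPlace(xs: Ptr[Long], n: CSize): Unit =
    libc.qsort(xs.asInstanceOf[Ptr[Byte]], n, sizeof[Long], longComparator)
}
```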
00:31:57
So all of this reminds me of an ancient piece of programmer wisdom, Greenspun's tenth
00:32:02
rule, which is to say that any sufficiently complicated C or Fortran program contains an ad
00:32:09
hoc, informally-specified, bug-ridden, slow implementation
00:32:14
of half of Common Lisp. And the way I
00:32:17
unpack that is to say that the techniques
00:32:20
I've shown you, these void pointers, function arguments, and casts,
00:32:25
they're C's idioms for generic programming, but they're also just how you implement a dynamic language in
00:32:30
C. And the errors you get when you do this wrong are a lot closer to
00:32:34
the kind of errors you get in a Python or JavaScript function, where you just get
00:32:38
a field on an object that isn't there, or call a function that isn't there.
00:32:44
So you don't have the kind of safety Scala normally guarantees you, but
00:32:51
this suffices, right? This is enough to make it work and to provide
00:32:55
a safer wrapper on top of libuv's C
00:32:58
API surface. So let's actually get into the real implementation now.
00:33:04
So to start: Scala Native actually
00:33:07
already has an ExecutionContext.
00:33:12
It's literally just these twenty-two lines of code. Before I saw Viktor's talk yesterday I would have said this is
00:33:17
the smallest one possible, but now I've seen one that's half the size. The idea is, it
00:33:23
does something a little unusual, in that the way it
00:33:25
implements the execute method is that it doesn't immediately
00:33:30
execute Runnables when they're ready to run. Instead it just appends them onto a queue and defers execution
00:33:36
until this private loop method gets invoked, and this is
00:33:40
where the Scala Native runtime kicks in: it basically just
00:33:43
calls this loop after the main function of the class returns, and then it just
00:33:50
polls that queue until it's exhausted. And of course each one of those Runnables on the queue
00:33:56
could be a future, which can spawn more, so the queue can get
00:33:59
repopulated, so maybe it won't get exhausted, or maybe eventually it will,
00:34:03
but the implementation of Future takes care of all that. If you
00:34:07
implement your own execution context, you don't
00:34:09
worry about dispatching or linking or
00:34:13
callbacks; you just worry about running the raw Runnables the Future
00:34:17
class gives you. So it's surprisingly lightweight to implement one of these.
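The pattern described here can be sketched in plain Scala (this is a simplification in the spirit of Scala Native's built-in ExecutionContext, not its exact source):

```scala
import scala.collection.mutable
import scala.concurrent.ExecutionContext

// execute() never runs anything immediately: it only enqueues.
// loop() drains the queue; in Scala Native the runtime calls it
// after the program's main() returns.
object QueueExecutionContext extends ExecutionContext {
  private val queue = new mutable.Queue[Runnable]

  def execute(runnable: Runnable): Unit = queue.enqueue(runnable)
  def reportFailure(t: Throwable): Unit = t.printStackTrace()

  def loop(): Unit =
    while (queue.nonEmpty) {
      val runnable = queue.dequeue()
      try runnable.run()
      catch { case t: Throwable => reportFailure(t) }
    }
}
```

Each Runnable the loop runs can enqueue more work, which is how Future chains keep the queue populated; loop() only returns once no task schedules a successor.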
00:34:23
So we're going to use a very similar technique to make this work on libuv.
00:34:28
The libuv event loop has quite a few different lifecycle hooks that we can attach to.
00:34:34
From my experience, the best time to run
00:34:38
these things has been immediately prior to polling for I/O:
00:34:42
the idea is, before we stop running our code
00:34:46
and start polling for I/O, we will exhaust all work that's available,
00:34:51
and then we'll just do I/O until we have more work to do, effectively. Now the one catch with that
00:34:57
is, if you recall some of
00:35:01
the API slides I showed you in the last section,
00:35:04
we have tasks that we can do, like reading from a socket, that are represented by a future,
00:35:10
right? So this execution context has to do one
00:35:14
extra task: it has to track non-future I/O tasks
00:35:18
and delay termination of the loop until all I/O is complete, and
00:35:24
there's no more I/O work it can do, as well as no more futures.
00:35:28
That's sort of the one complication you get from building I/O into your event loop, and the way
00:35:34
we do that is with this LoopExtension trait that we
00:35:42
briefly saw earlier. It's just a trait, and it allows any code,
00:35:44
either code within the library or third-party
00:35:50
library code, to register with the event loop that some
00:35:53
number of I/O tasks are ongoing, and the loop can just keep
00:35:57
checking on this and see if there's work being done. It's
00:36:00
a really great way to keep the code really modular, rather than have
00:36:05
one giant class that sort of models this very large C library.
00:36:05
So, the way we actually implement this trait: we start out with something very
00:36:10
close to the initial code. We have a list of deferred Runnables,
00:36:14
we get an instance of the uv_default_loop,
00:36:19
and again we'll defer running our Runnables until later, when a callback fires,
00:36:26
and that gets implemented like this dispatch callback.
00:36:33
All it does is, again, it just walks through the queue, runs the
00:36:37
tasks that it can, and then the one check is, before actually stopping:
00:36:42
it checks to see if the
00:36:45
task queue is empty and no extensions have outstanding work;
00:36:50
then, and only then, it'll actually stop the handle, which will allow the libuv event loop to terminate,
00:36:56
and then our program can exit. Otherwise it just goes
00:36:59
back through the loop and polls for more I/O and does more work.
00:37:04
So all the actual interesting work gets implemented as
00:37:08
these loop extensions that get built onto it. The whole execution
00:37:12
context is, I think, less than a hundred lines of code; all the fun stuff is in the extensions.
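A sketch of the two pieces just described: the extension hook, and the dispatch callback that decides when the loop may stop. The names here are illustrative and the real library differs in detail (in particular, the actual stop would call into libuv):

```scala
import scala.collection.mutable

// an extension reports how many I/O requests it still has in flight
trait LoopExtension {
  def activeRequests: Int
}

object EventLoopSketch {
  private val taskQueue  = new mutable.Queue[Runnable]
  private val extensions = mutable.Set.empty[LoopExtension]

  def execute(r: Runnable): Unit = taskQueue.enqueue(r)
  def addExtension(e: LoopExtension): Unit = extensions += e

  // invoked by libuv just before it polls for I/O; returns true
  // when the loop may shut down
  def dispatch(): Boolean = {
    while (taskQueue.nonEmpty) taskQueue.dequeue().run()
    val ioPending = extensions.exists(_.activeRequests > 0)
    val shouldStop = taskQueue.isEmpty && !ioPending
    // if shouldStop, this is where the handle would be stopped so the
    // libuv loop can terminate; otherwise libuv keeps polling for I/O
    shouldStop
  }
}
```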
00:37:18
And then basically whenever we add one in,
00:37:21
we'll just register it with this addExtension method. It's
00:37:24
really simple, so let's implement the delay-timer loop extension
00:37:28
now, because, yeah, we're doing really well on time.
00:37:32
So the trick here is: the timer extension extends LoopExtension,
00:37:37
and activeRequests is just going to be the size of this mutable
00:37:42
hash map we're going to keep around. The keys of this hash map are
00:37:47
going to be Longs, and the values are going to be Promises. Have
00:37:52
people seen the Promise class, mostly? I think Viktor gave
00:37:56
a better summary of it than I possibly could: I think what
00:37:59
he said is, a Promise is an obligation to provide a value
00:38:04
at some point in the future. It lets us spawn off futures
00:38:08
that aren't attached to a Runnable, and then lets
00:38:11
our code supply values or failures to them
00:38:14
whenever we choose. So it's a great way to
00:38:17
actually implement I/O and return futures safely.
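In plain Scala, the pattern looks like this (a generic illustration, not the library's code):

```scala
import scala.concurrent.{Future, Promise}

// a Promise is the write side of a Future: the I/O layer keeps the
// Promise, hands out promise.future, and fulfills it when I/O completes
object PromiseDemo {
  def startWork(): (Promise[Int], Future[Int]) = {
    val promise = Promise[Int]()
    (promise, promise.future) // the Future goes to the caller
  }
}
```

The caller only ever sees the Future; whoever holds the Promise decides when, and with what, it completes.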
00:38:22
and and uh we'll just have um two additional methods will have the delay method which just is what we saw
00:38:28
on the public a. p. i. signature takes iteration returns a future animal have this um time or call back function
00:38:34
so the signature of that time recall that function you can see online eighteen um it's as simple
00:38:40
as it can be it takes a timer handle returned the unit so um it does almost nothing
00:38:45
um but there's also not a lot of arguments you can throw into it um
00:38:49
we're gonna model the timer handle as a pointer of
00:38:52
long even though this is a large opaque data structure
00:38:56
right um if you look at the c. m. includes
00:38:59
it's got lots of macros is different on um various platforms
00:39:03
so would really be a pain to model this data structure field by field um term
00:39:08
not treated just as a pointer long is cool and is sort of what makes this work
00:39:12
i'll show you how that works but it's um it's a fun little track and then
00:39:16
that to see functions need to call our we need 'em timer and and timer start um
00:39:22
and um they do what what what they say timer in it initialise is a
00:39:27
a timer handle pointer um and attaches it to loop um and then timer start
00:39:32
takes a call back and a time out an optional we are repeat count um
00:39:36
and then starts starts running it um whatever the event loop is is ready to go
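A hedged sketch of what those declarations can look like. The signatures follow libuv's C headers (`int uv_timer_init(uv_loop_t*, uv_timer_t*)` and `int uv_timer_start(uv_timer_t*, uv_timer_cb, uint64_t timeout, uint64_t repeat)`); modeling the opaque handle as Ptr[Long] is the trick described above, and the object name is just illustrative:

```scala
import scala.scalanative.unsafe._

@extern
object LibUVSketch {
  // uv_loop_t* and uv_timer_t* are opaque; the timer handle is punned
  // as Ptr[Long] so we can stamp an id into its leading eight bytes
  def uv_default_loop(): Ptr[Byte] = extern
  def uv_timer_init(loop: Ptr[Byte], handle: Ptr[Long]): CInt = extern
  def uv_timer_start(handle: Ptr[Long],
                     cb: CFuncPtr1[Ptr[Long], Unit],
                     timeout: CUnsignedLongLong,
                     repeat: CUnsignedLongLong): CInt = extern
  // size_t uv_handle_size(uv_handle_type t): how many bytes to malloc
  def uv_handle_size(handleType: CInt): CSize = extern
}
```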
00:39:41
So the one gotcha for folks is that it's very
00:39:46
often required, in C or in Scala Native, to work with a data structure without knowing its internal layout.
00:39:52
That makes it really hard to allocate or initialize it yourself programmatically,
00:39:56
but if the library you're working with gives you helper functions,
00:40:00
it can often allocate and initialize it for you, so you don't even have to know how many bytes wide this
00:40:06
thing is. And then there's this amazing C technique of
00:40:10
type puns, which allows us to pass data between unrelated types.
00:40:14
So you can sort of cast a struct with three
00:40:19
fields to sort of a prefix of its fields, and as long
00:40:22
as you don't modify or touch the trailing fields, nothing goes wrong, right?
00:40:27
And then you can cast, say, a struct containing a pointer of
00:40:30
byte to a struct containing a long, because they're the same size. And then
00:40:34
this is the really cool one: a pointer to a one-field
00:40:38
struct containing a T is equivalent to a pointer to T.
00:40:43
There's no padding in there; it's all totally static layout.
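The leading-field pun is the one the timer code relies on. A sketch, assuming the 0.4-style unsafe API (the buffer and sizes here are stand-ins, not libuv's real layout):

```scala
import scala.scalanative.unsafe._
import scala.scalanative.unsigned._
import scala.scalanative.libc.stdlib

object PunDemo {
  def demo(): Long = {
    // oversized opaque buffer: a stand-in for something like uv_timer_t
    val raw: Ptr[Byte] = stdlib.malloc(128.toULong)
    // pun: a pointer to the buffer is a pointer to its leading Long field
    val leadingLong = raw.asInstanceOf[Ptr[Long]]
    !leadingLong = 7L      // writes only the first eight bytes
    val id = !leadingLong  // reads them back; trailing bytes untouched
    stdlib.free(raw)
    id
  }
}
```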
00:40:47
So it can feel scary and kind of ugly, but it also works, and it can be
00:40:52
really fast. Because what we'll do is, we're going to
00:40:55
use that Long cell basically to store serial numbers,
00:41:00
and that will allow us to keep
00:41:02
enough tracking data about which timer instances we've created
00:41:07
in this mutable hash map, but allow us to avoid having to do an extra dereference and memory
00:41:13
load, right, to find out the contents, like if we had a larger custom data
00:41:19
structure attached there. So once we do that, the delay method works like this: we take the duration
00:41:25
in milliseconds. We instantiate a promise,
00:41:28
we generate a serial number, and we store the promise in our map,
00:41:32
and then on line nine we use malloc to allocate
00:41:35
a timer handle, and we use this helper function libuv gives
00:41:39
us, uv_handle_size, to actually get a correctly sized chunk
00:41:43
of data for this on our platform. Then we initialize it,
00:41:47
and then on line eleven we just use the
00:41:49
dereference operator to assign our timer id to
00:41:54
the timer handle, which feels really scary and destructive, except it's only going to do an update on the leading eight
00:42:00
bytes of this data structure, which, happily for
00:42:04
us, was designed for us to do this: all of the
00:42:07
scary private data fields are the trailing fields, so we can
00:42:11
just chop them off. It's really awesome that that works. And
00:42:15
then we start the timer and return the future that we spun off the promise. And then likewise, the actual
00:42:20
callback that we pass in, for when this is done, is even simpler, right? We get the timer
00:42:27
handle back as the sole argument, and then we just dereference it, which gives us back the timer id we just
00:42:33
stored in that one leading eight-byte field, and
00:42:36
then we pull the promise out of our map using
00:42:41
the timer id we just dereferenced, we remove the promise from the
00:42:46
map, and then we succeed the promise with unit. And that's
00:42:51
all it takes, and this just works. The first time it did, it felt a
00:42:56
little magical, but this is all it takes, and then,
00:43:01
with all the scary stuff going on underneath, you have idiomatic, safe Scala that's just usable.
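Putting the whole section together, a self-contained sketch of the timer extension just described. The binding names mirror libuv's C API, but the UV_TIMER constant's value is an assumption you'd look up in uv.h for your platform, and the real library's code differs in detail:

```scala
import scala.collection.mutable
import scala.concurrent.{Future, Promise}
import scala.scalanative.unsafe._
import scala.scalanative.unsigned._
import scala.scalanative.libc.stdlib

@extern
object uvTimer {
  def uv_default_loop(): Ptr[Byte] = extern
  def uv_handle_size(handleType: CInt): CSize = extern
  def uv_timer_init(loop: Ptr[Byte], handle: Ptr[Long]): CInt = extern
  def uv_timer_start(handle: Ptr[Long], cb: CFuncPtr1[Ptr[Long], Unit],
                     timeout: CUnsignedLongLong,
                     repeat: CUnsignedLongLong): CInt = extern
}

object TimerSketch {
  import uvTimer._
  val UV_TIMER: CInt = 13 // assumption: check uv.h's uv_handle_type enum

  private val timers = mutable.HashMap.empty[Long, Promise[Unit]]
  private var serial = 0L
  def activeRequests: Int = timers.size // for the LoopExtension check

  // static callback: recover the id from the leading eight bytes,
  // then complete and discard the promise stored for it
  val timerCB: CFuncPtr1[Ptr[Long], Unit] =
    CFuncPtr1.fromScalaFunction { (handle: Ptr[Long]) =>
      val timerId = !handle
      TimerSketch.timers.remove(timerId).foreach(_.success(()))
    }

  def delay(millis: Long): Future[Unit] = {
    val promise = Promise[Unit]()
    serial += 1
    timers(serial) = promise
    // let libuv tell us how big a uv_timer_t is on this platform
    val handle =
      stdlib.malloc(uv_handle_size(UV_TIMER)).asInstanceOf[Ptr[Long]]
    uv_timer_init(uv_default_loop(), handle)
    !handle = serial // stamp our id into the handle's leading eight bytes
    uv_timer_start(handle, timerCB, millis.toULong, 0L.toULong)
    promise.future
  }
}
```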
00:43:05
So, where do we go from here?
00:43:12
Like I said, I'm trying to improve support in Scala Native for
00:43:15
user-supplied event loops, to make the loop-run invocation less intrusive.
00:43:21
Getting good integration with sttp is a really high priority for me; it's one of my
00:43:27
favourite libraries, and I think it does a great job
00:43:29
of this. There's already a great sttp
00:43:33
native curl binding, actually, for the blocking HTTP
00:43:37
API, so I'd love to get these consolidated.
00:43:41
I'd like to spin out the high-level server API: I think it's great to have something
00:43:46
like that in here for now, but in the long run I don't think
00:43:49
a server DSL belongs in a core Scala Native package.
00:43:54
It makes sense, once we have it in a good place, to spin that out entirely and let the community take it over.
00:44:00
And then I really want to get feedback on the design of these APIs, not just
00:44:06
design critiques, but really the effort and the
00:44:10
time of implementing, say, ZIO and Cats Effect
00:44:13
on top of these, and seeing if that gives us any
00:44:17
guidance towards sort of the next round of iteration on this.
00:44:21
All of that said, we would love to have a lot more contributors to make this work.
00:44:28
There's so much low-hanging fruit out there for Scala Native, and there are
00:44:33
a lot of weekend-size projects that can have a really, really huge impact.
00:44:38
It's a really exciting moment, where there's a lot of
00:44:40
things that are possible, and our existing contributor community is amazing and
00:44:46
passionate, and they've done phenomenal things; none of this would be possible without them.
00:44:51
And, yeah, if anyone's here and thinks, "oh gee, I could write
00:44:55
a better server DSL than that in forty-five minutes," please,
00:44:58
please do come talk to me, or just get it out there, get involved.
00:45:02
We would love to have you. We have a great Gitter chat, drop by. And, yeah, that's what I've got. Thank you.
00:45:16
Any questions for our speaker?
00:45:19
[inaudible]
00:45:23
Audience: a couple of questions, actually.
00:45:27
First question: you mentioned garbage collection.
00:45:31
Yeah, how does that work? So, ordinary Scala values: Scala Native
00:45:36
has a state-of-the-art garbage collector that performs a little bit
00:45:40
better than HotSpot's. There's a really nice paper on
00:45:45
that, which would be the reference; I'm not an expert on garbage collection,
00:45:48
but that exists: Scala values will be marked and GC'd exactly the way you'd expect, whereas pointers are not.
00:45:54
However, the new thing in Scala Native is that you can actually box a
00:45:58
pointer, right? So the contents of the pointer won't get garbage collected, but
00:46:04
you can actually store a pointer as a value in a hash map, like I did.
00:46:08
So it allows you to sort of mix the domains of unsafe values and Scala values in a way that's
00:46:14
actually new; I'm basically rewriting a lot of the book around this technique,
00:46:19
but it's definitely the state of the art.
00:46:24
Audience: and would it be easy to use it
00:46:32
for compiling? For compiling: I know Denys has,
00:46:38
like, an experimental fork of scalac that runs on Scala Native,
00:46:42
so that's been demonstrated; I don't think it's
00:46:45
merged yet. I think Scalafmt actually does run on Scala Native now,
00:46:50
so having tools like that around on Scala Native, where you
00:46:54
get that really quick startup time, is one area of immediate impact, right?
00:46:58
Not having to pay the JVM startup cost; obviously, the work on
00:47:01
Bloop makes it a lot less painful to deal with sbt and stuff like
00:47:05
that for developer tooling, but even just the lower memory overhead
00:47:10
of Scala Native really makes an impact there. So that's an exciting area already, yeah.
00:47:17
Mm-hm.
00:47:24
Is that it? OK.

Fast, Simple Concurrency with Scala Native
Richard Whaling, data engineer based in Chicago
June 13, 2019 · 11:16 a.m.