Transcriptions

Note: this content has been automatically generated.
00:00:06
Alright, welcome everyone, thanks for coming. This is Serverless Scala:
00:00:11
Functions as SuperDuperMicroServices. So, right, yeah, services, micro services,
00:00:17
API services, yeah, services, there we
00:00:21
go. Yeah. I'm James Ward, a Developer Advocate on Google Cloud,
00:00:25
and I'm Josh Suereth, I'm an engineer, and I do stuff with Scala, of course. So let's get started.
00:00:34
So let's start by talking about serverless. What is serverless? Um, so
00:00:40
my simple definition of what serverless is, is that you pay for what you use, and it's managed hosting.
00:00:46
Um, so we'll pull that apart more in just a second, but a few ways that you might have
00:00:53
interacted with serverless: one is through what's called Functions as a Service,
00:00:57
and this is where you deploy a function and it is serverless; you only pay for it when it's being used,
00:01:04
and it's in a managed environment, so you don't have to deal with any infrastructure.
00:01:09
And then the other place where you might encounter serverless is
00:01:13
when you want to deploy not just a function but an entire application,
00:01:17
and this allows you to have multiple functions together in a single package.
00:01:23
So with serverless, when we say you pay for what you use,
00:01:28
what that means is that there's usually some way to charge
00:01:31
based on the number of requests or the amount of processing time. So if, um,
00:01:36
there are multiple instances, or multiple requests can be handled by a single instance,
00:01:42
then you may only have to pay for the CPU time during which those requests are being handled, not the whole day, that sort of thing.
00:01:48
Um, so this means that if you're not using it you don't pay, which is great, and then as you use more, you pay more.
00:01:56
And then managed means that you don't have to worry about patching underlying operating systems, or dealing with
00:02:02
distributing an application across multiple servers and data centres,
00:02:06
and redundancy, and restarting processes when they die.
00:02:09
All that stuff is just managed for you, so you don't have to think about it.
00:02:14
Or, like, pre-allocating for spikes: if you expect a traffic spike at a particular time and you
00:02:19
want to make sure you have enough server capacity, that's not your responsibility any more, that's the responsibility of your provider. Yeah.
00:02:25
So in some ways it's just a new buzzword for some things that have been around for a while,
00:02:30
um, but I think the really new idea around serverless is
00:02:35
the idea of scaling to zero. So we've had this:
00:02:39
we've been trying to march towards utility computing, only paying for what we use,
00:02:43
but it turned out that the granularity of that pay-for-what-
00:02:47
you-use still left us with a lot of unused resources that we were paying for.
00:02:52
And so with serverless, not thinking about servers and
00:02:55
instances, we really only pay for usage and can scale to zero.
00:03:01
So oftentimes when we talk about serverless we also talk about containers,
00:03:05
and this is like Docker containers, or the newer Open Container Initiative.
00:03:10
And so containers are an important piece of what we're going to be
00:03:14
using today, so we'll give you just a brief overview of what containers are.
00:03:18
Um, so essentially it's all in a file; you can think of a container image kind of as a
00:03:24
file that you put some stuff into. What you put into it is like a mini operating system,
00:03:30
and there are lots and lots of different base operating systems that you can use for this. Alpine Linux is
00:03:36
popular, you can use standard Ubuntu, there are a lot of different options, so you have this sort of mini operating system in your container.
00:03:42
And then you also put your application and any dependencies that it needs to run inside it; that goes in the file as well.
00:03:49
And then the container has a startup command, so that when something runs this container
00:03:54
it knows how to actually start the process you want, to start up your application.
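(For illustration only, not from the talk's repo: a generic Dockerfile showing the three parts just described, a base OS, the application with its dependencies, and a startup command. Names and paths are made up.)

```dockerfile
# 1. a mini operating system as the base image
FROM ubuntu:18.04
# 2. your application and its dependencies (path is illustrative)
COPY target/myapp /opt/myapp
# 3. the startup command the container runtime will execute
CMD ["/opt/myapp"]
```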
00:03:59
So that's the basics of a container. From Google, what we're using today is
00:04:05
something called Cloud Run, which is serverless for containers. So you take your containers
00:04:09
and you upload them to Google Cloud, to Cloud Run, and then we'll
00:04:14
run those serverlessly. So let me give you a quick little demonstration of that.
00:04:20
So I have a repo here on GitHub, and this is
00:04:23
about the simplest it can possibly get. So I'll show you the Dockerfile here.
00:04:28
Um, what we do is we're going to start with the Alpine Linux distribution base
00:04:33
image, and then our command that we're going to run
00:04:36
to start our process is actually using netcat.
00:04:40
And so this is actually, I think, about the shortest web server you could possibly write.
00:04:45
Um, so this is handling requests on whatever port is defined by the PORT environment variable,
00:04:51
and then it's going to respond with "hello world". So let's go launch the thing on Cloud Run.
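(A minimal sketch of the kind of netcat Dockerfile being described, assuming busybox nc on Alpine; the exact contents of the demo repo may differ.)

```dockerfile
FROM alpine
# Loop forever: answer each connection on $PORT with a fixed HTTP response.
CMD while true; do \
      echo -e "HTTP/1.1 200 OK\r\n\r\nhello world" | nc -l -p "$PORT"; \
    done
```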
00:04:57
So we push this "Run on Google Cloud" button, and that's going to verify, yeah, in effect it asks:
00:05:01
do you want to connect your repo, and do you want to deploy that on Cloud Run?
00:05:07
So what it's going to do is walk through a few steps: it's
00:05:10
going to run a Docker build on that GitHub repo,
00:05:14
which will assemble the Docker container, and it's going to ask me here in a second
00:05:18
what project I want to deploy this to. Yes, my demo project.
00:05:24
So it's building the container, storing the container image up on Google Container Registry,
00:05:30
and then it's going to use Cloud Run to deploy that application up on the cloud.
00:05:36
So it should just take a second here to deploy that service and then we'll be able to go check it out.
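(For reference, roughly the gcloud equivalent of what that button automates. The project, service, and region names here are illustrative, and older gcloud versions needed `gcloud beta run`.)

```sh
# build the image from the repo's Dockerfile and push it to Container Registry
gcloud builds submit --tag gcr.io/my-demo-project/hello-netcat

# deploy the image to Cloud Run (fully managed)
gcloud run deploy hello-netcat \
  --image gcr.io/my-demo-project/hello-netcat \
  --platform managed --region us-central1 --allow-unauthenticated
```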
00:05:44
So let's go over to the Google Cloud console. I use it all
00:05:52
the time, but I'll admit I'm usually the bottleneck here, uh...
00:05:57
okay, right, there we go, we're up and running. So now you'll see we've got an
00:06:01
HTTPS endpoint and the service is available. Let's see if "hello world" works. There we go, hello world.
00:06:07
So we just took that container that we told it to deploy, and now it's
00:06:11
running, and Google will scale this to as many requests as we want to throw at the thing.
00:06:17
And so it'll handle those requests, and there we go, we're running up on the cloud.
00:06:23
So that's our really basic intro there to, um,
00:06:27
Cloud Run and containers and serverless. Cloud Run
00:06:31
is a managed service, so we manage everything for you; but
00:06:36
if you want to do everything on your own, there's a project:
00:06:39
Cloud Run is based on an open source project called Knative,
00:06:43
which gives you the same functionality on top of Kubernetes, so you don't have to use us; you can use Knative on your own.
00:06:51
So let's talk about cold starts real quick, because this is something that comes up very quickly as you get into serverless.
00:07:03
So what a cold start is: it's really a consequence of being able to scale down resources.
00:07:05
Typically what we do in computing is that we over-provision
00:07:10
our resources or servers so that we can handle spikes in things,
00:07:14
and as the service provider, even if you're running your own stuff,
00:07:18
you probably want to be able to throttle down those resources and not
00:07:21
over-provision them, because if you're over-provisioned then you're not
00:07:25
paying for what you use; you're paying for things beyond what you use.
00:07:31
And so what happens with serverless is that things are scaled down so you're only paying for them when you're using them,
00:07:34
but then there's some amount of time that it takes to scale things
00:07:38
back up: to start up the processes, pull containers out of the registry,
00:07:44
start the processes, warm those processes up. And so what happens is, if a request
00:07:49
comes in and it has to go through that startup process, that's called a cold start,
00:07:53
and that can be a bad experience for
00:07:58
users if they're sitting there waiting for a large application to start up.
00:07:58
So it's a real challenge in the serverless world, and one of the ways that we can deal
00:08:03
with this, particularly with JVM applications, which is what we're dealing with, is to use GraalVM.
00:08:09
So GraalVM does ahead-of-time compilation: it takes our Scala application that we're going to show
00:08:16
you, and it ahead-of-time compiles that down into a native image, which is much smaller and much faster
00:08:22
to start up. Yeah, so you might be asking yourself, why am I doing ahead-of-time compilation?
00:08:26
Maybe, you know, we've invested so much in the JIT, so much in this runtime optimisation,
00:08:31
so much in the JVM, in Java. But really you need to think of it as a trade-off, right: I'm trading, in terms of consistency
00:08:38
in my RPC response latency and in terms of startup time for
00:08:41
these services, against throughput. I'm trading peak speed for consistency.
00:08:46
So I might not make this one server completely optimal, but I might be able to get
00:08:50
better throughput utilisation across the machines that I'm
00:08:54
making use of under the covers, by having more consistency.
00:08:57
It's easier to kind of understand load, it's easier to understand how long
00:09:01
it takes to bring one of these services up and handle incoming traffic.
00:09:05
So you're making that trade-off, and it's not a zero-sum game, right? So
00:09:09
sometimes you want the JVM, sometimes you don't, and this is one of the things
00:09:13
where serverless kind of lets you go the other way and re-evaluate those trade-offs. Yep.
00:09:17
Yeah, exactly. So that's one way to work on addressing the cold start issues.
00:09:24
Okay, so we've learned a bit about serverless, and now we're going to
00:09:27
take a step to the other side and talk about building serverless apps.
00:09:31
So, um, I've done a lot of Play Framework development
00:09:34
in my day, and I think Play is a great framework.
00:09:39
And the way that you handle requests in Play is really simple: you create this Action, which
00:09:44
is really just a wrapper around a function that goes from a Request to a Response, or a Future of a Response.
00:09:51
Um, it's a really nice API. Yeah, is that supposed to be covers on the slide? What is that? There's our covers,
00:09:57
the Scala covers. I mean, that's what I have on my bed. That's what covers look like, I'm sure.
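(For illustration, a minimal Play-style action in the shape being described. This is generic Play 2.6+ code, not the exact code from the talk.)

```scala
import javax.inject.Inject
import play.api.mvc._
import scala.concurrent.Future

class HelloController @Inject()(cc: ControllerComponents) extends AbstractController(cc) {
  // An Action is essentially a wrapper around Request => Future[Result]
  def hello: Action[AnyContent] = Action.async { request =>
    Future.successful(Ok(s"hello, you asked for ${request.path}"))
  }
}
```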
00:10:02
Yeah, but underneath those covers, all these things are hiding,
00:10:06
right? So under them I'm hiding access to my database, I'm
00:10:10
hiding the general serialisation mechanism I use to go in and out of HTTP,
00:10:15
and all of that is actually kind of encoded in this function, and I'm actually
00:10:19
locked to it to some extent. Yes, in the class that you extend, and in what comes in via dependency injection.
00:10:25
Yeah, there's a lot of stuff that's actually needed to be able to handle requests,
00:10:30
and as you said, it's working under your covers;
00:10:33
it's not something that you see in the function definition.
00:10:37
So it can be a little tricky sometimes to test these things, because the functions
00:10:43
don't really work independently; they need all of that context around them in order to work.
00:10:49
Yeah, and this is where you see it: when you start writing tests, right? When I write tests I have to think about these functions,
00:10:54
and for some of these frameworks they have to give you an affordance, a hook,
00:10:58
to be able to override that behaviour in a test, so that you can stub out some production service, so that
00:11:04
I can run a local test-double instance of it, right? I have this thing I'm talking to, like a database,
00:11:09
but I don't really want to talk to a real database in my test; I want to talk to something close enough that I can make sure I'm doing the right thing.
00:11:15
You know, and how do I plug in a testable component and make that work?
00:11:20
Right, all these frameworks basically say "here, we provide you a hook", yeah, and then "oh, we have to do this security thing,
00:11:25
let me provide you a hook for that too", and you're constantly fighting this battle. Yep.
00:11:30
Yeah, and you can really end up binding your logic to the actual protocol
00:11:35
that you're implementing, in this way. So that's one of the things that I found
00:11:39
with Play: it's so easy, but if my protocol changed, or I wanted to implement
00:11:44
a different protocol, I had to do a lot of refactoring in order to support that.
00:11:50
Okay, so the environment is effectively changing, though, as time
00:11:56
goes on, right? Instead of the plain old servers, or the
00:11:59
J2EE model where you throw a WAR somewhere and there's this big server that handles all these web applications, we're moving
00:12:05
back to a little bit of that: I throw these functions at the cloud and it runs them,
00:12:09
or I have this big Kubernetes cluster that spins up a whole bunch of instances on a machine.
00:12:14
But we're not in a write-once-run-anywhere,
00:12:18
in the sense of, like, we're not write-once-run-everywhere,
00:12:24
right? So not only do I need to run my application on the cloud, I need
00:12:29
to run it on my internal servers, I need to run it in many different environments. If I
00:12:33
have a microservice architecture and I make a change to something here, I need to check that the downstream
00:12:39
still works, so I need to find a way to run my service and its dependencies with my service changed,
00:12:45
and then make sure all of the other production things that talk to it don't break.
00:12:48
Right, and so I might make a staging instance of this, I might find ways to,
00:12:52
you know, redo my graph. I'm actually running this
00:12:55
little block of code in many, many, many places, not just a
00:12:59
single place, right? So it's about understanding and adapting to
00:13:03
your environment, more so than just being able to run on any environment.
00:13:09
Yeah, so do we go back to the WAR? WAR? WAR?
00:13:13
No, no, no, not WAR. I would say that we don't
00:13:16
use that word. Yeah, that's a whole other complete talk. Okay, um,
00:13:23
so anyway, we want to make migrating really, really, really simple. So the idea is, when you're at a
00:13:28
particular application level, you want to focus on the logic
00:13:33
appropriate to that level, to the thing that you're doing. So
00:13:35
if I am in development of a feature, I should be focusing on the business logic of that feature.
00:13:41
If I am worried about the production environment that I'm in, I'm dealing with
00:13:45
latency, I'm dealing with load, I'm dealing with my 99th-percentile latency, which is always
00:13:51
fun, that's my favourite problem right now, you know, p99, p999, yeah, the hard ones,
00:13:56
so, like, there's that one request that's slower and all the rest of them are fine, right?
00:14:02
I want to be thinking about concurrency, I'm thinking about, you
00:14:04
know, all the different monitoring and observability, you know, I'm
00:14:09
tracing, I need to figure this stuff out. That's when I'm in my production layer, and that's
00:14:13
when I want to be talking that language. When I'm doing my features, I don't want to be speaking that language
00:14:17
in code; that's not the intent of that code. So we want to kind of divorce these things.
00:14:22
And then when I'm testing, it's all about what's going in and what's going out, right?
00:14:26
Am I abiding by these protocols that I'm telling other people I implement?
00:14:30
So I want to be able to have those focuses. Yeah, and we test the
00:14:34
protocol as it would be implemented in different environments. Exactly. Yeah.
00:14:42
So we have an example that we built for this presentation that uses a chatbot, and
00:14:47
we'll show you just a little bit of that, and then at the end we'll see the whole thing.
00:14:51
Um, so just a little bit of background on our chatbot: we wanted
00:14:55
to illustrate different environments by providing different protocols that our chatbot can talk through.
00:15:00
So we've got a protocol for standard in/standard out, telnet, and then an HTTP POST coming
00:15:05
from Google Cloud for a Google Action, which gives us some state and then returns some speech back.
00:15:11
So the logic that we want to program against is really just the things
00:15:16
we want to say, and the questions we want to ask, and receiving answers back.
00:15:21
So that's our little chatbot. I'm going to show you the stdin/stdout version of it, so this
00:15:27
is all running locally here, so you see
00:15:30
it's just standard in, standard out. Let's ask the audience, yeah:
00:15:33
what would you say is your favourite, no, the best, what's the best programming language? Not your favourite, the best.
00:15:40
Well, I heard Python. Python... oh, wrong, wrong. So, try again.
00:15:47
Java? Okay, Java... wrong. No, no. Oh yeah, we've got to move along. Alright, yeah.
00:15:56
So there's our very, very simple chatbot running on standard in, standard out.
00:16:01
Um, so the protocol that we're using here is stdin/stdout, but the logic that you'll see in
00:16:07
just a little bit is really just describing the conversation; it's not tied into the protocol.
00:16:15
Okay, so that's our basic chatbot. Well,
00:16:20
let me reload it... so, where were we? Here we go.
00:16:26
Right, so in code, the way this looks is: you say, when I
00:16:30
get something that the user has spoken to me, whether via typing or talking,
00:16:34
or actually we have a Google Home here, so we're going to relay that, but if
00:16:38
the user talks to me, I need to figure out what they said
00:16:42
and then, you know, do something based on what they said. So in
00:16:45
this case, if they say that they like Scala, of course that's correct,
00:16:49
and if they say any other language, yeah, yeah, it was wrong, and then,
00:16:53
in all other instances... and we made a really fun syntax here with underscores; it's the most beautiful Scala I've ever seen.
00:16:59
You do not know how long it took to get that on the slide, just for the underscores. But anyway, you have to pick the
00:17:04
words precisely so you get the right padding, so you're not off by one. Yeah, yeah, tricky. We're going to ask
00:17:10
you: what's the best language? So the idea here is, the speech is done through some sort of
00:17:17
intent recogniser, and that's a module that's going to run, and this
00:17:21
code is just running after that particular module. Okay, actually, did you colour that text yourself,
00:17:26
or did you use machine learning? I applied it myself. It's
00:17:30
beautiful, great job, the colours match perfectly. Thank you.
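(A rough, hypothetical sketch of the side-effecting version shown on the slide. Here say and ask are println-based stand-ins, not the real implementations from the repo.)

```scala
object NaiveChatBot {
  def say(text: String): Unit     = println(text)
  def ask(question: String): Unit = println(question)

  def onUserSpoke(utterance: String): Unit =
    utterance.toLowerCase match {
      case "scala" =>
        say("Correct!")
      case _______ =>                 // underscore-padded wildcard, as joked about on the slide
        say("Wrong!")
        ask("What is the best programming language?")
    }
}
```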
00:17:35
Yeah, so what's the problem with this? It said...
00:17:40
the signature returns Unit, yeah. So, um, what is wrong with this?
00:17:47
What's wrong with it is that the say method, right, and
00:17:50
the ask method are tied to a particular implementation of say and ask,
00:17:55
right? And I can't change that. So if I move this code somewhere else, say and ask have to deal with
00:18:01
all the complexity of figuring out how to implement themselves in
00:18:05
all these different environments, and sometimes that's not realistic. Yeah, right.
00:18:09
Yeah, and so these are effects, right? In this code we have effects that are happening,
00:18:15
things that are affecting something somewhere else, and so that
00:18:19
means we've now tied ourselves to a particular protocol, a particular environment.
00:18:24
So how did we solve this in the past? Does anyone remember
00:18:26
old-school C++ and C, when we had multiple architectures?
00:18:31
We'd write code that we wanted to compile for this CPU and that CPU. What do you do? What...
00:18:37
yeah, you would abstract it, right? You'd create an API,
00:18:41
that was, you know... you'd have this API
00:18:46
thing which abstracts out the environment you're going to run in, and you would develop
00:18:49
against that, with the different implementations. And yes, they were done with pound-defines
00:18:54
and ifdefs, but the idea was I had the same
00:18:57
API regardless of what environment I'm on. And there's a pattern for that:
00:19:02
yeah, well, the interpreter pattern. I don't know how it got started, it was, you know...
00:19:07
and this is the functional version of it, which I think is the more elegant version of it. Yeah, it's beautiful.
00:19:13
Anyway, so here, you know, you can see, and I do want to make this point:
00:19:20
we have a sealed trait hierarchy, which is just an ADT. The user can say
00:19:24
something, they can ask something, or we can have an output which is some combination of telling people
00:19:29
things and asking them questions at the same time. Right, I don't know why you'd ask two questions
00:19:33
at the same time, but you might. I've done it in conversation, I'm like, "how? ...like, when?"
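(A hypothetical sketch of the kind of sealed-trait ADT being described. The names are illustrative, not necessarily the ones in the repo.)

```scala
sealed trait Response
final case class Say(text: String)                          extends Response
final case class Ask(question: String)                      extends Response
final case class SayAndAsk(text: String, question: String)  extends Response
```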
00:19:40
Exactly. Okay, so then we have our implementation, which
00:19:45
returns this staged program to operate on, you know, and this allows
00:19:50
us to have this meta-program. This is something that you've probably heard a zillion times in the Scala community, and if this is the first time you're hearing it,
00:19:57
you'll hear it a bazillion times in your future in the Scala community.
00:20:01
Yeah. So we've already separated our effects from the interpreter:
00:20:06
we are just describing what our program should do, instead of actually having the
00:20:11
program do something, and then the interpreter is what comes along and actually does something.
00:20:17
Now, Scala has had a very, very long history of trying to figure out how to do this correctly.
00:20:23
Okay, in 2012 there was this free
00:20:26
monad thing Scala was experimenting with, that had come from, you know,
00:20:30
Haskell and some other things, and then in 2015 there's a paper on the freer monad,
00:20:35
very appropriately named to be, you know, slightly better than the free monad, because it's even more free.
00:20:41
It turns out, if you looked at the details of the freer monad
00:20:44
versus the free monad, the first encoding wasn't, you know, a hundred percent there anyway.
00:20:49
But then in 2017 we had this thing called tagless final,
00:20:52
which is: hey, this is faster than the freer monad, it can do the interpreter pattern even better.
00:20:57
And in 2018 we had this thing called ZIO come in, which is: here's another way of doing it, and it's even better.
00:21:04
So, pulling out the crystal
00:21:07
ball, yeah, what's next? We're making a prediction
00:21:11
that by the time we have colonised Mars, we'll
00:21:15
have a way of doing interpreters that is the standard way.
00:21:18
Or slightly after that, just a little bit after, yeah, sometime around then. Yep, sometime
00:21:21
around then. Yeah. So for this talk we'll be covering ZIO, as the latest one.
00:21:25
There are pros and cons. I do want to say, if you
00:21:29
look at the website, it has a bunch of marketing, which we don't appreciate.
00:21:35
There are, you know, some divisive things across
00:21:38
segments of the Scala community, and ZIO is included in that, but
00:21:42
we just wanted to use it, to give it a try, not a specific endorsement. Yeah. But it has...
00:21:48
it has an interesting take, which is slightly different from tagless final, that I
00:21:51
think is worth looking at, on the interpreter pattern. But
00:21:55
this talk applies to any use of the interpreter pattern, not just ZIO's, just so you know.
00:22:01
Yeah, I still think the Scala ecosystem is figuring out the
00:22:04
right API here, and you've seen churn because
00:22:08
nothing has really hit the mark yet, and this one has potential. So, yeah.
00:22:14
Okay, so just a quick little overview of ZIO and what it looks like. So
00:22:19
in ZIO, what we do is we create programs as values;
00:22:22
that's where we wrap our effects, into this
00:22:26
ZIO, and there are three different aspects to a ZIO.
00:22:30
The first type parameter is the environment type,
00:22:34
so this is all of the resources that the ZIO,
00:22:38
this particular instance of ZIO, needs to actually run or do something.
00:22:42
So what we do is, with our
00:22:45
interpreter, we provide the implementations,
00:22:49
the ones that are going to go in and be used as the environment, to then actually execute that thing.
00:22:55
So when it executes, when a particular ZIO executes, it can either return
00:23:01
an error, and so we have a type parameter there for the error type,
00:23:04
or it can return a result, and so we parameterise the result type as well.
00:23:10
And there's a good blog post that goes into more detail on this; it's called a bifunctor IO, kind of this pattern
00:23:16
for it. But the way we can think about it is that it takes an environment, and
00:23:20
it provides the ability to return either an error or a result.
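(A minimal sketch of the ZIO[R, E, A] shape in ZIO 1.x style; the talk predates 1.0, so exact method names may differ slightly.)

```scala
import zio._

object ZioShape {
  // R = environment required, E = how it can fail, A = what it succeeds with
  val hello: ZIO[Any, Throwable, Unit] =
    ZIO.effect(println("hello world"))   // only a description; nothing runs yet

  def main(args: Array[String]): Unit =
    Runtime.default.unsafeRun(hello)     // a runtime "interprets" the description at the edge
}
```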
00:23:26
So that's the basic structure of a ZIO. There are a lot of
00:23:28
convenience functions for how to assemble and link ZIOs together and create
00:23:32
that meta-program that then will be executed by the interpreter.
00:23:39
So this is how we would abstract the environment for how we talk. So, um,
00:23:45
there are a few interesting parts here that we have to call out,
00:23:48
which allow you to kind of weave an environment through, as you read the program.
00:23:54
Here we create TalkService. TalkService is a single value that implements this trait, and the
00:24:00
trait is your API: this is the thing that you call, and it looks like
00:24:04
an immutable trait. So I have a service that has a say method in it that takes in the
00:24:07
message, just like I had in my mutable interface, right? It's going to return something wrapped in a ZIO.
00:24:13
I'm going to define a trait TalkService, which is my environment; this describes,
00:24:18
in my application, any time I see something that takes TalkService as its environment,
00:24:23
it means I have to provide an implementation of this API to run that code.
00:24:28
And then I have this convenience method down here, which is what users of TalkService will call,
00:24:35
and so the way that code reads is basically: I have a TalkService,
00:24:38
and any time someone calls say, it's going to look up the implementation and actually run it.
00:24:44
Right, so this is boilerplate; this is what I would say is possibly the ugly
00:24:50
corner of ZIO. Yeah, it's kind of the thing I think you have to do to
00:24:54
get the benefits on the other side, and
00:24:58
if you've ever done, you know, interpreter abstractions, you kind of have this somewhere.
00:25:02
I haven't seen one that fully gets rid of this stuff, but yeah, this is what
00:25:07
it looks like. Yeah, maybe Scala 3 will fix it for us. Yes, of course, of course. Yeah.
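(A hypothetical sketch of the ZIO environment / module pattern being described, in ZIO 1.x style; the real TalkService in the talk's repo may look different.)

```scala
import zio._

trait TalkService {
  def talkService: TalkService.Service
}
object TalkService {
  trait Service {
    def say(message: String): ZIO[Any, Nothing, Unit]
    def ask(question: String): ZIO[Any, Nothing, String]
  }
  // Accessor methods: look up the implementation in the environment and call it.
  def say(message: String): ZIO[TalkService, Nothing, Unit] =
    ZIO.accessM[TalkService](_.talkService.say(message))
  def ask(question: String): ZIO[TalkService, Nothing, String] =
    ZIO.accessM[TalkService](_.talkService.ask(question))
}
```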
00:25:14
Right, so then when I write that previous method that takes
00:25:17
the user's speech and decides what to do, it looks very similar now,
00:25:21
right? The cool thing is, because of some type inference and the
00:25:25
way these things return, let's see, the ZIO takes a TalkService
00:25:30
as its environment. It's telling me, when I read this: oh, I need to provide an implementation of TalkService to be
00:25:36
able to test this function. It's in the type system, so I can't forget to provide it when I run it;
00:25:41
I need to make sure that I've provided it, or I can't run this function.
00:25:46
But everything is kind of staged. Yeah. Did you notice something on our slide, though?
00:25:52
The last one, it was supposed to say ask, how does ask... yeah,
00:25:56
it's wrong on the slide. The slides are not tested, these things, right?
00:26:00
Anyway, just reinforcing that TalkService is called out; it's in the type system, I need this thing.
00:26:06
Okay, so when I want to provide an environment and run the same application, right, I can call next
00:26:13
with the user intent, and it's going to give me a staged program that hasn't run yet, so I can decide to provide
00:26:19
the service that will work on the cloud, or I can provide the service that
00:26:23
works in the console. And this is actually, if you look at our repo, which we'll show later,
00:26:27
or you'll get a link later, you can see the console application and the
00:26:31
cloud application are the exact same implementation, and we're just providing different services and environments for them.
00:26:38
And so that's where we implement the different protocols that are going to be used. So we have a protocol for telnet, one
00:26:44
for the web server, and we wrote our own web server for this, which... yeah, I don't recommend it, yeah.
00:26:50
So that's how we then abstract away the meta-program from the interpreter. Yeah.
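(A hypothetical continuation of the TalkService sketch above: the same staged program, interpreted here by a console environment. A web or telnet environment could be swapped in instead.)

```scala
import zio._

object ConsoleMain {
  val program: ZIO[TalkService, Nothing, Unit] =
    TalkService.say("Hi! Let's do a quick survey.") *>
      TalkService.ask("What is the best programming language?").flatMap {
        case ans if ans.equalsIgnoreCase("scala") => TalkService.say("Correct!")
        case _                                    => TalkService.say("Wrong!")
      }

  // stdin/stdout interpreter for the TalkService API
  val consoleEnv: TalkService = new TalkService {
    val talkService: TalkService.Service = new TalkService.Service {
      def say(message: String): UIO[Unit]    = UIO(println(message))
      def ask(question: String): UIO[String] = UIO { println(question); scala.io.StdIn.readLine() }
    }
  }

  def main(args: Array[String]): Unit =
    Runtime.default.unsafeRun(program.provide(consoleEnv))
}
```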
00:26:58
Right. The other cool thing is, if I make use of other services, other APIs...
00:27:04
the reason you had that trait with that val is that these environments can compose. Oh, right.
00:27:10
We have this notion of, it's kind of Scala two's version of a
00:27:15
union type; there's a way better one in Scala 3. But I can basically have all these traits composed together,
00:27:21
and so if I have an API that requires TalkService, and I'm using my Monitoring API,
00:27:28
then that program will require TalkService with
00:27:32
Monitoring, and it'll just work, right? So I can actually
00:27:36
compose these APIs and limit the type of any program to just the services it makes use of.
00:27:43
And if I start to see my type explode real big, maybe that's a sign to me to refactor.
00:27:49
Right, so you have this natural way of, when
00:27:52
you're programming, kind of limiting the amount of environment you require in any one function.
00:27:57
Awesome. Okay, so here's an example where we throw in monitoring. We have a Monitoring service, and so
00:28:03
we define a Monitoring service that allows us to
00:28:07
monitor what language people vote for when they don't choose Scala,
00:28:10
so we can figure out whether we should add recognition for other languages
00:28:15
or ignore them, I don't know, one of the two. Through all of this, we
00:28:19
didn't change the actual effects here, we just changed the meta-program:
00:28:24
we added in monitoring, so there was no impact on anything else, and then we just have to add that Monitoring trait
00:28:31
into our environment to make this work. Yeah, so the signature changed
00:28:35
to require TalkService with Monitoring instead of just TalkService. Yep.
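(A sketch of that composition, building on the TalkService sketch above, in ZIO 1.x style with hypothetical names: the environment type becomes an intersection of every service the program uses.)

```scala
import zio._

trait Monitoring {
  def monitoring: Monitoring.Service
}
object Monitoring {
  trait Service {
    def record(language: String): UIO[Unit]
  }
  def record(language: String): ZIO[Monitoring, Nothing, Unit] =
    ZIO.accessM[Monitoring](_.monitoring.record(language))
}

object SurveyLogic {
  // Requires both services; providing a `TalkService with Monitoring` value satisfies it.
  def handleLanguage(lang: String): ZIO[TalkService with Monitoring, Nothing, Unit] =
    lang.toLowerCase match {
      case "scala" => TalkService.say("Correct!")
      case other   => Monitoring.record(other) *> TalkService.say("Wrong!")
    }
}
```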
00:28:40
Okay, so why have we been talking about ZIO and effects and interpreters?
00:28:46
Where I think this gets really interesting is that what we can
00:28:50
do is abstract away from my environment, which is what the effect-and-interpreter
00:28:54
separation allows us, and then with serverless we can abstract away from our
00:28:58
operations, and then what we get is a beautiful big cake in the cloud.
00:29:04
Yeah, there's the cake in the cloud here. And so those two
00:29:08
allow us to have an application that is portable
00:29:12
across environments, and then I don't have to think about the operational side when it's running in the cloud.
00:29:17
So it's a nice combination of things. So for this particular application,
00:29:23
the way that we set it up was that our dev interpreter, the one
00:29:26
you saw earlier, uses standard in/standard out for our protocol interpreter;
00:29:30
for our test interpreter we actually used a mutable test console, and so this
00:29:35
allows us then to assert on the values that have been
00:29:40
emitted, on the statements that have been said, and feed in different inputs, and that sort of thing.
00:29:46
And then for the prod interpreter we have the web server, which serves the requests from the Google Action.
00:29:51
Yeah, and the reason we did this, and kept this demonstration nice and simple,
00:29:56
was to show you, you know, your
00:29:59
dev iteration cycle across environments. If that cycle requires prod,
00:30:05
it becomes a problem over time, right? Like, when I have to actually go to prod
00:30:10
and push things to prod, test in prod, and test against prod instances of other microservices,
00:30:16
that can actually cause a lot of developer friction and a lot of slowdown in your iteration cycle, and
00:30:21
you want to try as best as you can to find ways around that. Yep, and we think this is a good way.
00:30:30
Okay, so let's see a demo here.
00:30:34
Let's start with... hang on... go up here, there you go.
00:30:39
So what we've done is create an Action on Google, and maybe
00:30:44
people haven't built Actions on Google before, so:
00:30:48
this is a dialogue system, and you can
00:30:53
do more with it, but the idea behind Actions on Google is: you have some task you
00:30:56
want users to do, and you want to collect data, so you define all the things the user
00:31:00
has to give you, in this case what their favourite language is, or, you know, they
00:31:04
have to say a time of day, and you collect all these pieces of data, and
00:31:08
then it's going to make a webhook call to your endpoint, and you have to take
00:31:12
that data, that intent, and figure out how to fulfil it and tell the user
00:31:17
what to say back, right? It's a relatively simple API.
00:31:22
Cool. So for an Action on Google we have an invocation, and
00:31:26
we give it a name, this one is our Scala survey action,
00:31:30
and then we have an action, and we use something
00:31:33
called Dialogflow to actually define our action, and
00:31:37
with Dialogflow we define intents. So first
00:31:40
we've created one that handles a welcome event,
00:31:43
and then there's a parameter called language, and so that's all we have to set
00:31:48
up in the intent. You'll see that we wire it up to call out to a webhook,
00:31:52
and so if we go to fulfilment you'll see that here's my app, running
00:31:56
up on Cloud Run with our ZIO application, and that's what's actually going to get those requests.
00:32:01
So when the welcome event happens, it's going to make a request; when the user enters something,
00:32:05
it's going to make a request for that. Okay, so that's our intent and our action.
00:32:12
Anything else? You could show the simulator. Oh yeah, so in the simulator you can actually hit
00:32:18
your live webhook, right? But again, it's your live webhook, so
00:32:22
if you're testing it and you replace that webhook with something else,
00:32:28
and you accidentally have that be prod, like, "oh, I just broke the whole world", right?
00:32:33
Yeah. So what often happens when people are doing local dev on this type of webhook
00:32:39
is they use something like ngrok to expose their local machine to the internet, so that
00:32:44
then people can make a call to that thing, which then routes to their development machine, and
00:32:50
it's a process that we're trying to move away from, and we think effects and interpreters can help.
00:32:57
Okay, so let's just take a look at a little bit of code here.
00:33:01
So here is kind of our main application. Actually, let's start a little higher,
00:33:06
where we can see our types here. So we have a survey intent, which can be
00:33:11
"start survey" or "give us a language choice", and then the survey either asks the next question or is done.
00:33:16
And then what we're doing is we're building up a ZIO
00:33:19
that essentially knows how to handle an incoming intent and
00:33:24
then produce the survey state out of that. So you
00:33:28
can see that what it returns is that survey state,
00:33:32
so inputs are intents, output is survey state. You'll see that we're using
00:33:36
a ZIO, so we give it the environment that it needs, and that includes the
00:33:40
monitoring, and that includes the console, which provides the protocol,
00:33:45
and the error type can be an exception. One thing you'll notice is that this looks different from the slide we showed before,
00:33:51
and the primary reason is this first piece, handle-
00:33:53
language, where if you said the exact language we're going to say "correct",
00:33:58
and if you didn't say the exact language, it was wrong, and it will re-prompt you: hey,
00:34:02
what was the best language, right? You saw that in the console application.
00:34:06
So this was a little bit more code than would fit on the slide, but that's exactly what you see here. Yep.
00:34:12
Yeah, so this is just us kind of putting the ZIOs together:
00:34:17
we're linking different ZIOs together to create the kind of outer ZIO,
00:34:22
and at any ZIO level we can test that thing. We can provide the environment that it needs, and we can
00:34:27
actually test it with a unit test, or in a dev instance, or however we want to.
00:34:34
So let me show you the actual unit tests that we wrote for some of this. You can see in this one we assert that the
00:34:40
composition must reject non-Scala languages, and we just call that
00:34:45
method to get our ZIO, we provide the environment that it needs,
00:34:49
and then we do this unsafeRun on that.
00:34:52
So "result" here is the ZIO of
00:34:56
the talk environment, exceptions, and survey state, and so we actually run that with the interpreter
00:35:02
that we've provided, and then we make sure that what we get back is the question,
00:35:08
the state we should get back with a wrong answer. Yeah. And if you look further down...
00:35:15
If you look further down, we actually check monitoring, where we have an implementation of Monitoring that all it does
00:35:21
is record all the calls made into the monitoring, and here we can assert that when you said C#,
00:35:25
the recorder also recorded C#, to make sure that our monitoring calls are in the right place. Yeah.
00:35:32
If we were to scroll up higher in the file you'd see where we have the
00:35:36
mutable part of the environment that allows us to store those things in the test environment,
00:35:42
but we would never want to do it that way in production. You could
00:35:45
imagine that, given how trivial our business logic is, it doesn't
00:35:49
look that impressive, but when you actually do one of these dialogues for real you end up with much more
00:35:54
complicated logic, and so making sure that you log something in
00:35:57
these little tiny instances is a lot harder and more important to verify.
00:36:01
Yeah, what's nice is that we can test ZIOs
00:36:04
at any level, because they're just ZIOs;
00:36:08
we can combine them together, chain them together, but then we can also
00:36:11
test them at any point as well, any point that we expose from our application. Yep.
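(A hypothetical sketch of the style of test described, building on the TalkService sketch above: a recording interpreter is provided as the environment, the staged program is unsafeRun, and then we assert on what was recorded.)

```scala
import zio._

object SurveySpecSketch {
  class RecordingTalkService extends TalkService.Service {
    var said: List[String] = Nil
    def say(message: String): UIO[Unit]    = UIO { said = message :: said }
    def ask(question: String): UIO[String] = UIO(question)   // echo back, enough for a test
  }

  def main(args: Array[String]): Unit = {
    val recorder = new RecordingTalkService
    val env      = new TalkService { val talkService: TalkService.Service = recorder }

    Runtime.default.unsafeRun(TalkService.say("C# is wrong").provide(env))
    assert(recorder.said == List("C# is wrong"))
  }
}
```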
00:36:16
That's nice. Okay. The Dockerfile that is used: it uses GraalVM
00:36:20
to create the native-image build of this application. So I'm using
00:36:26
the Oracle GraalVM CE image, and then running this
00:36:31
install for native-image, which is the tool that does the ahead-of-time compilation,
00:36:35
and then sbt-native-packager, thanks Josh, and many
00:36:38
others; it's basically everyone else's work now, my code is long gone.
00:36:43
So graalvm-native-image is the task provided by native-packager, which is going
00:36:49
to run GraalVM and then do the process to create that ahead-of-time compiled image.
00:36:55
So that's how we build our artifact, our application, our native application.
00:37:01
So then, to actually run it, we're going to use the Alpine Linux image, and then we're going to copy in
00:37:07
the app that was compiled with sbt and GraalVM, and then here's the command that we use to start it;
00:37:14
it's in here, this is taking our application and running the GraalVM-compiled binary.
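(A rough sketch of the two-stage Dockerfile described. The image tags, paths, and how sbt gets onto the builder image are illustrative; the real Dockerfile in the repo will differ.)

```dockerfile
# Stage 1: ahead-of-time compile with GraalVM native-image
FROM oracle/graalvm-ce:19.1.1 AS builder
RUN gu install native-image               # the GraalVM AOT compiler
WORKDIR /app
COPY . /app
# assumes sbt is available in this stage; sbt-native-packager's plugin drives native-image
RUN sbt graalvm-native-image:packageBin

# Stage 2: a tiny runtime image with just the compiled binary
FROM alpine
COPY --from=builder /app/target/graalvm-native-image/zio-lang-survey /app/server
ENV PORT=8080
CMD ["/app/server", "-web"]
```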
00:37:19
And on Cloud Run we can have this be, like, the production deployment, but again, if we
00:37:24
wanted to have a non-GraalVM image, a non-GraalVM way
00:37:27
of starting this for development, one that has a faster iteration cycle,
00:37:31
you just make a different Dockerfile for that thing and you build it
00:37:35
that way, right? You should be able to have different environments:
00:37:39
this is production, this is the native one, this is the fast-to-start one, this is the one where I want, you know,
00:37:44
to know exactly what my latency is. But that doesn't mean this is good for your dev loop, right?
00:37:50
How long did it take us to build this GraalVM image? And this is, like,
00:37:53
a very, very small app; it's like seven minutes. Seven minutes, right? That's fine for
00:37:56
pushing to prod, that's a beautiful push to prod, but that is not a good dev
00:38:00
iteration cycle time, right? I don't want to GraalVM things for dev iteration cycles. Yep.
00:38:06
Yes. So with ZIO we need to have some point where we pick the environment,
00:38:09
and the way that we've done that is we've used command-line flags, kind of hard-coded,
00:38:14
to do that. And so when you specify -web, that's going to assemble the web server into the environment,
00:38:20
put the web server protocol provider into that
00:38:25
environment, so that then we can handle web requests, versus
00:38:28
when you do standard in/standard out, or the
00:38:32
telnet one. Yeah, there's also -telnet, and I think
00:38:36
no flag defaults to the console. Yes, yeah. Yep. Okay, so we
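(A hypothetical, self-contained sketch of that flag handling: the flag only decides which protocol interpreter the shared program gets provided with.)

```scala
object Flags {
  sealed trait Mode
  case object Web     extends Mode   // serve the Google Action webhook over HTTP
  case object Telnet  extends Mode   // speak the telnet protocol
  case object Console extends Mode   // default: stdin/stdout

  def parseMode(args: Array[String]): Mode = args.headOption match {
    case Some("-web")    => Web
    case Some("-telnet") => Telnet
    case _               => Console
  }
}
```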
00:38:41
have already deployed this application up on the cloud, so
00:38:45
let's go check it out; we can go see it here in the
00:38:49
Cloud Run console, and here it is, zio-lang-survey.
00:38:55
You can see there's our endpoint, and we can check out the logs and see revisions
00:38:59
and that sort of thing, and there are some errors in there, but that's okay.
00:39:04
Okay, so that's our application. Now we've gone through the
00:39:07
process of building the Docker image and deploying it on Cloud Run,
00:39:11
so now we should be able to actually talk to it from our Google Home. What could go wrong?
00:39:17
It's the little Google Home Mini, right by the mic. I mean, no one can see it from here.
00:39:23
Oh yeah, we're supposed to show it. Okay, so let's give it a try and
00:39:28
see if it actually works. Hey Google, talk to the Scala survey... oh, oh...
00:39:38
what is the best programming language? Scala. Correct. Yeah! Okay.
00:39:43
Alright, first time, first try, yeah.
00:39:47
Good job on that. Yeah, nice. So anyway, that's our little application.
00:39:52
Okay, let's jump back to the slides... sure.
00:39:59
Okay, these are the demos putting all those different things together. We didn't show the telnet environment, but you got to see
00:40:04
the standard in/out, and the unit tests, and then the web server.
00:40:11
So the concept we really, really, really want to emphasise here is
00:40:16
the idea of making things portable through staging, and understanding that my production environment and my
00:40:22
dev iteration environment and my test environment are different,
00:40:26
and any time I write a program I actually have more than one environment target, no
00:40:31
matter what. Like, even if I only have
00:40:33
one production environment, I'm still targeting many different environments
00:40:37
over the lifecycle of this project, right? Write once, run everywhere:
00:40:43
right, like, more than one place over the lifecycle of that thing.
00:40:48
Yeah, it's the future; it's the feature. Yeah, and we do expect, you know, these tools and things
00:40:53
will get better over time; this is just what we think is pretty cool right now. Yeah.
00:40:58
Ship your meta-program. Yeah, and staging, you know, that's going to make it better, that one too.
00:41:05
Alright, so all the code is up on GitHub; it's zio-lang-survey, on James Ward's GitHub,
00:41:12
and Josh did most of the work. So it's me taking all the credit while Josh actually does all
00:41:20
the work. So yeah, that's true. Anyway, okay, I think we have
00:41:24
a few minutes for questions or comments. For questions, yeah. Well, thank you.
00:41:32
We have people to run mics around, so just raise your hand if you have questions.
00:41:38
It's a big room, don't be shy.
00:41:43
Well, we'll be around afterwards if anybody has questions.
00:41:48
Okay, this means it was flawless, yeah? Perfect.
00:41:51
Okay, alright, well thank you all... oh, we do have a question after all. Thanks. Alright, actually...
00:41:56
well, yeah, sorry, sorry, yeah, we can't see where... over there, let's see.
00:42:01
Hello, I have a question: isn't there a specific
00:42:06
scenario where the interpreter... well, I am able to
00:42:12
write an application, so let's
00:42:15
assume that my application runs perfectly in
00:42:21
a test, but with a different interpreter it has different effects, different bugs?
00:42:26
Yeah, so that's a great question. That does happen,
00:42:31
and the idea, one of the things you're trying to do,
00:42:34
is that the person who writes the interpreter might not be the same person who writes the business logic,
00:42:39
and the people who can debug production bugs might not be the same people who are efficient
00:42:46
at reading domain code. And so this allows you to
00:42:48
kind of specialise, where someone kind of owns the interpreter
00:42:53
for production and they work on those bugs and specialise in those bugs,
00:42:58
and they're more efficient at those bugs than if your entire team had to deal with them at the
00:43:02
same time, right? It's a specialisation trade-off. Now, if your team is small enough,
00:43:08
it might just be you: you own the domain logic and the interpreter and this
00:43:13
other thing, but it lets you divide your own time in the same way,
00:43:17
and it's up to you whether or not you think that's useful.
00:43:19
But especially as things get bigger, we find there tends to be specialisation;
00:43:24
certain people deal with certain production components,
00:43:30
so the better we can route those bugs to that person and optimise their work, the
00:43:34
better off we are, as opposed to forcing everyone to learn it, even people who never will.
00:43:40
Yeah. And to add on to that: in this example
00:43:45
we actually had a situation where a request came in that couldn't be parsed,
00:43:50
and so it was actually crashing our server, because there was an exception that was not being handled correctly.
00:43:55
And so I looked at the logs, and luckily the application had been restarted automatically, so
00:44:02
I didn't have to think about that; but then I was able to go look at the logs and, like, oh,
00:44:05
there's an exception that actually leaked all the way out and crashed my process, and my
00:44:11
logs told me about it, so I could handle that, and the interpreter was where
00:44:14
that actually was. Yeah. What's interesting for this application: I implemented the console version and
00:44:20
James implemented the cloud version, independently; like, we didn't even really have to look at each other's code. Yeah.
00:44:26
Well, I copied it... or else... it's fine, yeah, don't tell anybody, I copied it from somewhere else.
00:44:32
Okay, Stack Overflow, whichever. Anyway, we're out of time, so thank
00:44:37
you so much... oh, one more question? Sorry, yeah, I keep giving you false endings, and...
00:44:44
One more question, sorry, just a quick one. I saw you mentioned...
00:44:52
you were showing GraalVM. Is that because Scala support on its own is not there? I wasn't sure
00:45:04
whether it supports Scala out of the box, or
00:45:10
whether it supports the language itself.
00:45:14
Yeah, so, because we're using Cloud Run,
00:45:18
you can run anything that runs in a Docker container, essentially, and so there
00:45:22
isn't specific support, but in the case of Cloud Run we don't actually need any specific support.
00:45:26
And that's definitely one reason why you'd choose Cloud Run over App Engine:
00:45:30
you can run anything with a container. Yeah, although there are trade-offs;
00:45:40
there are some limitations with Cloud Run today. Like, for instance, it doesn't
00:45:44
support WebSockets yet, which has been a bummer for some things I've worked on, and
00:45:49
App Engine just added WebSocket support, and so
00:45:53
there are some variations in what you can do between the two. But
00:45:56
for me, I like to just deploy a Docker container, and I like that Cloud Run is based on Knative,
00:46:02
so that I can port out of that environment if I need to, onto, you know, my own Kubernetes. So
00:46:07
for me, Cloud Run is my default choice for execution, but there are some limitations, I think.
00:46:12
I think about it this way: most applications, most remote services, are very simple,
00:46:17
and Knative and Cloud Run give you a really good way to bootstrap and kick-start.
00:46:22
If you get more complicated, you have to pick something a little more
00:46:25
complicated, but Knative should be able to handle a lot,
00:46:30
like, just those two things. Okay, alright, we're really done this time; we'll be around if there are any other questions. Thank you. Thank you.
