Anatomy of a Dockerfile
In this episode we will dockerize an existing Phoenix application while looking into the anatomy of a Dockerfile.
Transcript (auto-generated):
[0:03] Hello, and welcome to Deploying Elixir. In this episode, we will be dockerizing an existing Phoenix application, which I have pre-created here so we have something to work with. Before we begin, we need to talk a little bit about how Docker constructs the final image. It is constructed of something called layers. We start with a base layer, which we'll see in a moment. Then we add files, add more files, run some compilations, run more stuff, and we end up with a final image, which is really the base image with all of the layers added on top as overlays. So let's start. I just generated a very brief Phoenix application without Ecto for the sake of simplicity. Let's start by running the application so we can see it's actually working. So mix phx.server, and yes, it's a default Phoenix application, so everything is working fine. So let's start writing our Dockerfile. As I said before, first comes our base layer, so: FROM elixir,
[1:15] dash alpine, which is the officially maintained Elixir image of the newest version, using Alpine, a very light Linux distribution that is very often used in Dockerfiles. Then we need to add some packages, so let's just add build-base, git, and tini, because we'll need tini in a moment. And let's just say we are using /app as the path for our work. It will change to that working directory, and it will allow us to do everything in the scope of that workdir. And then let's set the MIX_ENV variable to prod, which means we'll be using our Phoenix application in production mode.
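Roughly, the top of the Dockerfile described here looks like the following sketch; the exact Elixir image tag is an assumption, so adjust it to the version you actually use:

```dockerfile
# Base layer: an official Elixir image on Alpine (the exact tag is an assumption)
FROM elixir:1.15-alpine

# Build tools, git, and tini, which we will use as the entrypoint later
RUN apk add --no-cache build-base git tini

# Everything from here on happens inside /app
WORKDIR /app

# Build and run the Phoenix application in production mode
ENV MIX_ENV=prod
```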
[2:11] And as you may remember from your first journey with Elixir, you will need to add local Hex and local Rebar, because, as you may know, the Docker environment in which we'll be building our application is different from your local machine, so it won't have everything you have. We need to do everything from scratch, as if you were installing Elixir for the very first time. And then this very important step here. Let's just say COPY, which copies files from the local directory into the Docker image: let's copy mix.exs and mix.lock, and then run mix deps.get --only $MIX_ENV. Why is this important? If something changes, for example in mix.exs, it will invalidate all of the layers underneath, which means on a rebuild they will need to be rebuilt. If they stay the same, Docker can reuse the cached layers, and that matters because you change your code more often than you change your dependencies. Then just add mkdir config here, because we need that directory, and then copy config/config.exs and config/
[3:41] prod.exs into the config directory. From now on, if we change those config files, everything below will be invalidated, and this copies the existing config files into the config directory of our Docker image. Then we can compile our dependencies, because they depend on the configs. If we did it the other way around, it wouldn't make sense; it would invalidate more of your compilation. We want to reuse cached layers as much as possible here.
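In code, this middle part of the Dockerfile is approximately the following sketch, assuming the standard Phoenix config file names:

```dockerfile
# The build environment starts from scratch, so install Hex and Rebar
RUN mix local.hex --force && mix local.rebar --force

# Copy only the dependency manifests first; this layer is invalidated
# only when the dependencies themselves change
COPY mix.exs mix.lock ./
RUN mix deps.get --only $MIX_ENV

# Compile-time configuration has to be in place before compiling the deps
RUN mkdir config
COPY config/config.exs config/prod.exs config/
RUN mix deps.compile
```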
[4:14] Then copy all of the remaining files which are needed for our application to work. Since we have now copied the assets as well, we can compile our assets and compile our application, which will create a build of our app ready to be released, together with our assets. Then we can copy the runtime file into our config and run mix release. This copies config/runtime.exs, which, as you may remember, contains the runtime configuration for the application. But, on the other hand, it is evaluated only when you start your application, so changing that file doesn't need to cause recompilation, because it is only evaluated during the startup of your Erlang virtual machine. And then we run mix release. And then something interesting. This is actually a different kind of command, because it basically doesn't add a new layer. Remember the layers? Every COPY and every RUN creates a new layer to be overlaid, but some commands only change metadata, and this is exactly such a command: EXPOSE port 4000, which basically means we're saying that this container is listening internally on port 4000.
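The remaining build steps look roughly like this; the copied directory names and the mix assets.deploy task are assumptions based on a standard Phoenix project layout:

```dockerfile
# Copy the rest of the application source (directory names assume a
# standard Phoenix project layout)
COPY priv priv
COPY lib lib
COPY assets assets

# Build the static assets, then compile the application itself
RUN mix assets.deploy
RUN mix compile

# runtime.exs is only evaluated when the VM starts, so copying it here
# does not invalidate the compilation layers above
COPY config/runtime.exs config/
RUN mix release

# Metadata only: no new layer, just documents that the app listens on 4000
EXPOSE 4000
```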
[5:43] And you can either tell Docker to bind that to a random port on your local machine, or you can forward a port directly to your running Docker container.
[5:55] It's up to you, but you know to expect that, inside the container, port 4000 will be open for business. Then just add the entrypoint, which will be tini.
[6:08] And this is like a Swiss army knife tool for being an entrypoint in containers, because it handles system signals correctly and does all of that stuff that you want there. Basically, if you want to avoid a lot of issues with zombie processes and things like that, tini is your weapon of choice, because it handles all of it for you and you don't need to worry about it. And then let's add the command, and it will be /app, because we used /app as the workdir, then _build, because this is the directory the build lands in, prod, because we're running the production environment, rel, because that's where the release lives, then the app name, bin, the app name again, and start. Cool, so let's just check if that works. So let's say docker build -t hello . Excellent. We have our image ready. Let's just run a container out of it with docker run.
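Before moving on, the tail of the Dockerfile we just described looks roughly like this; the application name hello is an assumption standing in for the real release name:

```dockerfile
# tini forwards signals and reaps zombie processes for the release
ENTRYPOINT ["/sbin/tini", "--"]

# Start the release built by mix release; "hello" stands in for the
# actual application name
CMD ["/app/_build/prod/rel/hello/bin/hello", "start"]
```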
[7:17] But we need to provide a secret key base, and since we don't need to have traffic going from one instance to another, we can just use a randomly generated value here. So let's go with docker run -e, and let's say SECRET_KEY_BASE should be a secret generated locally with mix phx.gen.secret. If we look at runtime.exs, we see that there is exactly a SECRET_KEY_BASE here which we need to provide. And then we also want to provide PHX_SERVER=true here, in order to have that specific instance listening for traffic. You may ask yourself, what are examples of a server you don't want listening for traffic? For example, let's say you're running a background job processor, so some of the instances are serving traffic and some of them are actually processing background jobs. That would be one case. Or if you're forwarding traffic to that endpoint from some other endpoint, which we can do, that would be another example. So let's just run it here.
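Put together, the build and run commands from this part look roughly like the sketch below; the -P flag (publish the exposed port to a random host port) is an assumption based on the random port we see in a moment:

```shell
# Build the image from the Dockerfile in the current directory
docker build -t hello .

# Run it, passing a locally generated secret and enabling the HTTP server;
# -P publishes the exposed port 4000 on a random host port (an assumption)
docker run \
  -e SECRET_KEY_BASE="$(mix phx.gen.secret)" \
  -e PHX_SERVER=true \
  -P hello
```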
[8:45] Oh, I just typed run two times. Amazing. And let's just check what the port is. So, 55003. Amazing! Everything works. That's fine.
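To check which host port Docker assigned, something like the following works; the container name here is hypothetical, so use the name or ID shown by docker ps:

```shell
# Show the host port mapped to the container's exposed port 4000
# (replace the container name with the one docker ps reports)
docker port happy_hopper 4000
```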
[9:04] And there, the final image is a conglomerate of layers. We can actually see that. As I said before, this image consists of layers, and you can actually see them: if you go docker inspect hello, you can see that there are layers specified here. Let's do that for one second, and then compare it with the base image, because you may see that there are quite a lot of these layers, right?
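Inspecting the two images side by side could look roughly like this; the Elixir tag is the same assumption as before:

```shell
# List the layer digests of our image and of the base image it was built on
docker inspect --format '{{json .RootFS.Layers}}' hello
docker inspect --format '{{json .RootFS.Layers}}' elixir:1.15-alpine
```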
[9:41] Oh, there's also the exposed port here, which is the information that the application will be served internally on port 4000, and you want to push traffic to that port if you want to do something with it. So let's say we're inspecting the elixir image, and we have three layers. And if we look here, you can clearly see that these three layers are the same as those three layers, because we used our elixir image as a base, which basically means the bottom three layers of our image are taken from there. And if we count, we have fourteen layers in total. This Dockerfile is far from optimal: for example, it uses the build image as the runtime image, so it's not production ready. But we'll look into that in the next episode, in which we'll rebuild this Dockerfile to be production friendly, for example by using things like a multi-stage build and doing some other tweaks. So thank you very much for listening, and see you in the next episode.