0:02 Hey guys, this is Christian and in this
0:04 video I'm going to show you how to run
0:06 Traefik, my absolute favorite reverse
0:09 proxy and load balancer, in Docker Swarm.
0:11 This is the setup for production-ready,
0:13 highly available load balancing with
0:15 trusted TLS certificates for all of your
0:18 applications in Docker Swarm. But I have
0:20 to be honest with you guys, it wasn't an
0:22 easy task, because when I first tried to
0:24 combine Traefik and Docker Swarm, I ran
0:27 into some practical challenges. For
0:29 example, how do you handle and maintain
0:31 a consistent configuration across all of
0:33 the different nodes, how do you store
0:36 the TLS certificates, and also what happens
0:38 when the node where Traefik is running
0:40 goes down? Today I'm going to show you
0:42 exactly how I've solved all of these
0:44 problems following two different
0:46 approaches: one that is a bit
0:48 simpler and in my opinion perfect for
0:50 smaller home labs and setups, using local
0:52 Docker volumes, and another one that is
0:55 truly production-ready with zero single
0:57 points of failure, using shared storage.
1:00 Plus, to make your life easier, I've
1:02 built all of this using my new
1:04 Boilerplates CLI tool, so you can generate
1:06 everything, the Docker Compose stack and
1:08 all of the Traefik configuration files
1:11 with proper security settings, in just a
1:13 few commands. Now before we jump right
1:14 in, I also want to mention that if
1:17 you're managing containers, clusters,
1:19 and maybe other infrastructure services
1:21 in your environment, it is also
1:23 important to monitor all of these
1:25 systems and get notified about any
1:27 bottlenecks or problems. So that's why
1:29 you definitely should use Checkmk, the
1:32 sponsor of today's video. Checkmk is a
1:34 comprehensive IT monitoring platform
1:37 that is scalable, automated, and highly
1:39 extensible. With over 2,000 maintained
1:41 plugins, it can monitor nearly all of
1:44 your network components across all the
1:46 different manufacturers, and it has many
1:48 advanced capabilities like auto
1:50 discovery with preconfigured thresholds
1:53 and rules, custom plugins. The features
1:55 are just insane. And furthermore, it
1:57 creates stunning visualizations, has a
2:00 dynamic dashboard with logs and event
2:01 monitoring that really allows you to
2:03 drill into all of the details and get
2:05 sophisticated alerts and notifications.
2:08 It is really super cool. I'm personally
2:10 using the free and open-source Raw
2:12 Edition of Checkmk in my home lab, which I
2:15 already did some videos on. So check
2:16 them out if you like. And if you want to
2:18 use Checkmk for your company's
2:20 infrastructure to monitor your
2:22 production systems, they have many
2:24 options to deploy a self-hosted solution
2:27 or use their cloud service offerings. Of
2:28 course, I will put the link to Checkmk
2:31 and my tutorials in the description box
2:33 down below.
2:35 All right, so before we start setting up
2:36 Traefik in Docker Swarm, let me quickly
2:39 recap what we've done in the past and
2:40 what are some of the challenges that I
2:43 needed to solve. A few videos back, I
2:45 showed you how to set up Traefik on a
2:48 single Docker instance. So we got TLS
2:51 certificates and HTTP and HTTPS reverse
2:52 proxying for local applications and
2:55 services. And we also covered how to
2:57 store the TLS certificates in a
2:59 persistent Docker volume, as well as
3:01 mounting the configuration files and
3:03 connecting Let's Encrypt, and it really
3:06 worked great. Yeah, for a single server,
3:08 of course. Then in another video I
3:11 showed you how to set up a Docker Swarm
3:12 cluster, which brings high
3:15 availability and some other advanced
3:18 capabilities. But we are also facing a
3:20 few challenges with this, because as you
3:22 might have learned from my Docker Swarm
3:25 tutorial, local Docker volumes are only
3:27 persistent on the one Docker host where
3:29 they are created; they are not
3:32 automatically replicated in the cluster
3:34 the way configs or secrets are. So
3:37 that's a problem we need to solve. And
3:39 also, what actually happens when we
3:41 deploy Traefik on one of the Docker
3:43 hosts and that one Docker host goes
3:45 down? How can we make sure that there is
3:48 always a Traefik instance running on each
3:50 Docker host? So we will also take a
3:52 closer look at that. To tackle these
3:55 challenges, there are two different
3:57 possible setups. There is one with
3:59 local Docker volumes on a single node:
4:01 we're just deploying a Traefik instance
4:04 on one of the Docker hosts, and we can
4:06 use the Docker overlay network to
4:09 connect that one Traefik instance to any
4:11 application in the entire cluster. That
4:13 is really a great setup in smaller home
4:15 lab environments where you might not
4:17 have a shared storage volume. The only
4:20 downside this setup has is when node
4:22 1 goes down, so the node where Traefik
4:24 is running, we can't access any other
4:27 applications in our cluster, even if they
4:28 might be running on the other two
4:30 remaining nodes. So if you really want
4:32 to have a reliable and solid
4:35 production-ready setup, you have to do it this way:
4:37 you need a shared volume on all of the
4:39 nodes. And that, for example, you can
4:42 achieve with an NFS server. So that can
4:44 be a TrueNAS or an Unraid server that
4:46 is running somewhere outside of the
4:48 Docker cluster. Then you switch the
4:50 deployment mode of your stack from
4:53 replicated to global. That means that
4:55 the Docker Swarm cluster will make sure
4:58 that each Docker node has one
5:01 instance of that container running, so
5:02 that Traefik is running on all of the
5:04 hosts, and when one of them goes down,
5:06 the applications might be restarted on
5:08 the other remaining nodes, but a
5:09 Traefik instance is always up and
5:11 running. So these are the two deployment
5:13 methods that, at least in my humble
5:15 opinion, make the most sense. It is up to
5:17 you what you plan to use: if you want to
5:19 have a more sophisticated setup for true
5:22 high availability, or if you are okay in
5:24 your small home lab with maybe needing
5:27 to restart a Docker host. That is again
5:28 up to you and how complicated you want
5:31 this setup to be. Now, if we take a look
5:33 at the Traefik documentation, honestly
5:35 guys, I'm not the biggest fan of the
5:37 Traefik documentation, because there are
5:39 a lot of different fragmented areas. It
5:42 has improved a lot over time, yeah,
5:44 don't get me wrong. But of course, it is
5:46 quite a complicated setup with many
5:47 different things you need to keep in
5:50 mind. And Traefik in general is, in my
5:53 opinion, not a really intuitive solution.
5:54 So it really takes some time to get up
5:56 to speed and to understand all the
5:58 things that you need to set up and how
6:00 they work in combination, especially on
6:02 Docker Swarm. So that's why I thought,
6:04 instead of just going through this
6:06 documentation, I'm going to create my
6:09 own boilerplate, which is like a template
6:11 that contains all of the best practices
6:13 and things that we're covering in my
6:15 tutorials. And you can use my newly
6:17 created Boilerplates CLI tool to easily
6:20 create a Docker Compose template for
6:22 Traefik running on a single-node server
6:25 or on Docker Swarm, using the one or the
6:26 other deployment method. It is
6:28 completely up to you. It's highly
6:30 customizable and flexible, and you can
6:32 install it with a one-line command that you
6:34 can just copy from my Git repository, of
6:36 course linked in the description. Put this
6:38 in the terminal, and it will install
6:40 the latest version of the Boilerplates
6:42 tool, and then we can use it to create
6:44 a new deployment Docker Compose file for
6:46 Traefik. Now let's take a closer look at
6:48 how that tool actually works. Once you
6:50 have installed it, you can just execute
6:52 it with boilerplates. Always make sure
6:55 to run a repo update before. That
6:57 will download and synchronize the latest
6:58 versions of the boilerplate templates
7:01 from my Git repository. And then you can
7:03 just use the compose list command to get
7:05 a list of all the boilerplate templates
7:08 that I've created for Docker Compose.
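To keep the workflow in one place, these are the two commands from this section as I understand them from the video; check the Boilerplates repository's README for the exact, current syntax:

```shell
# Sync the local template cache with the Git repository
boilerplates repo update

# List all available Docker Compose boilerplate templates
boilerplates compose list
```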
7:11 So here we find Traefik. Again, this
7:14 is using the latest schema 1.1; I'll
7:16 probably update this in the near future
7:18 when I include more features in the
7:20 boilerplates. And here you can create
7:23 preconfigured versions of the Traefik
7:25 Compose stack, including Authentik
7:27 middleware presets, and also make it
7:29 ready for Docker Swarm. Okay. So let's
7:32 now take a closer look at the
7:33 actual files that it creates and the
7:36 variables we can use to customize this
7:38 template. First of all, it creates a
7:39 bunch of different files, like some
7:42 environment variables and an environment
7:44 secret variable that later contains our
7:46 token for the DNS challenge, which is
7:47 necessary to get trusted TLS
7:49 certificates from Let's Encrypt. And it
7:52 creates a Compose file. Depending on
7:54 whether you are using Docker Swarm or
7:56 Docker standalone, it will generate this
7:59 file with preconfigured settings, and it
8:01 will also create a static configuration
8:03 file for Traefik. This is where you
8:05 configure your general settings. And it
8:07 also creates some placeholders for
8:10 dynamic configuration files, where
8:11 you can add your middlewares, your
8:13 routers, and service definitions, and
8:16 it puts them in a config directory. In
8:18 Docker standalone, that just gets mounted
8:20 through the config files. In Docker
8:23 Swarm, we are creating Docker configs
8:26 that are created automatically from the
8:28 content that is in these config
8:31 files. And of course, I also enabled
8:32 some preconfigured settings like
8:35 enabling or disabling the dashboard,
8:37 enabling or disabling the access log,
8:39 Prometheus metrics, and I even added
8:41 some preconfigured security settings.
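For orientation, a Traefik static configuration with the options described here typically has roughly this shape. This is a sketch based on the Traefik documentation, not the exact file the tool generates; the email, paths, and names are placeholders:

```yaml
# traefik.yml (static configuration) -- illustrative sketch
entryPoints:
  web:
    address: ":80"
  websecure:
    address: ":443"

certificatesResolvers:
  cloudflare:
    acme:
      email: "you@example.com"          # your Let's Encrypt account email
      storage: "/letsencrypt/acme.json" # lives on the persistent volume
      dnsChallenge:
        provider: cloudflare

providers:
  swarm:
    exposedByDefault: false    # only expose services with explicit labels
    network: proxy             # the shared overlay network
  file:
    directory: /etc/traefik/config  # dynamic configuration files
    watch: true
```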
8:45 So, I really tried to take as many of the
8:47 reasonable configuration settings that
8:49 I've explained in some of my older
8:51 tutorials as possible and put them all together in
8:53 one comprehensive template where you can
8:56 easily enable or disable certain flags,
8:58 and that will automatically put the
9:01 necessary configuration values in the
9:03 actual files. So, first of all, let me
9:05 show you the test setup that I have
9:07 created for this. I've deployed three
9:09 new virtual machines where I installed
9:11 the Docker engine and connected them to
9:15 a Docker Swarm cluster. All of them are
9:17 managers, and I'm not running any
9:19 containers or service applications. I
9:21 can also show you the networks. This
9:23 only has the default ingress overlay
9:25 network from Docker Swarm and the
9:27 default bridge networks that you might
9:29 be familiar with. So this is really a
9:31 newly created Docker Swarm cluster where
9:33 we want to deploy Traefik. All right, so
9:36 now let's start generating a new
9:38 template with the boilerplates generate
9:40 command, then the name of the boilerplate
9:41 or the template that we want to
9:43 create. And now we can also add a
9:45 project directory. I'm just going to put
9:48 this in a temporary traefik-single-node
9:50 directory. So this is what we
9:52 will use for the single-node
9:55 deployment. And this again prints out
9:58 all of the variables and the files,
10:00 and asks if we want to customize any of
10:03 these settings. You can see by default
10:06 this template will work great in a
10:08 standalone Docker container, so with
10:09 just docker compose up on a single
10:11 Docker host, but of course we can
10:14 customize all of these variables, enable
10:16 TLS settings, and enable Docker Swarm. So
10:18 let's go through all of these options
10:21 one by one together. So let's say yes.
10:23 I'm not going to customize the
10:25 service container or internal hostname
10:28 here, just go with the default. Set my
10:30 container time zone and the container
10:32 log level that I want to use. I think
10:35 info is fine, and the restart policy
10:37 unless-stopped also should be fine. Now,
10:40 the HTTP and HTTPS ports you should not
10:43 need to customize, because usually we
10:45 want to use the default ports 80 and
10:47 443. And the Traefik dashboard is by
10:49 default running on port 8080. Note, this
10:51 is only used when the dashboard is
10:54 enabled. I personally would not
10:56 recommend you to do that if you haven't
10:58 protected it with strong
11:00 authentication. Now, the next thing that
11:02 is important to set is the Traefik
11:05 network name. My boilerplate will also
11:07 automatically create an overlay network
11:09 that is attachable. That means you
11:12 can attach other projects or other
11:16 container stacks to that Docker network,
11:18 and then the Traefik reverse proxy will
11:19 automatically be connected with all the
11:21 applications running on the same Swarm
11:24 cluster, no matter which node they are
11:26 running on. So this is really important:
11:28 connect the Traefik reverse proxy and
11:30 all your other application deployments
11:33 to the same Docker network. All right,
11:35 so long story short, let's continue with
11:37 the default value. The name of the
11:40 HTTP entry point, I'm always using web.
11:42 I've seen other tutorials use different
11:44 names, but that's probably the
11:46 most common one. Now, if you're
11:49 attaching the Traefik reverse proxy,
11:52 or the Compose stack, to an existing
11:54 Docker network, then you can enable that
11:56 here. But if you follow the default
11:59 values, it will create a new network and
12:01 manage it in the Compose stack that
12:03 we're creating. Now, of course, I
12:05 don't want to enable the Traefik
12:06 dashboard. Again, don't use this in
12:08 production unless you have
12:10 specifically protected it. And here
12:12 you can also enable the Traefik access
12:14 log. Whenever somebody accesses the
12:16 reverse proxy, it will log
12:18 this as a new entry, depending on what
12:20 the HTTP status code was, and so on. This
12:22 produces many, many logs, but if you
12:24 need them, if you want to have more
12:26 granular control over who is actually
12:28 accessing what URL on your reverse
12:30 proxy, you can enable this access log.
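In the static configuration, the access log toggle usually comes down to a small block like this (a sketch using field names from the Traefik documentation; the path and filter values are assumptions):

```yaml
# Static configuration -- enable the access log (sketch)
accessLog:
  filePath: "/var/log/traefik/access.log"  # omit to log to stdout instead
  filters:
    statusCodes:
      - "400-599"   # optionally keep only client and server errors
```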
12:32 By the way, I'm also thinking about some
12:36 upgrades to the Boilerplates CLI tool for
12:37 advanced Prometheus metrics
12:39 configuration. In the current
12:42 implementation, you can enable this to
12:45 enable a /metrics endpoint. And here
12:47 you can also decide if you want to
12:50 create a production-ready security-headers
12:53 middleware file. This enables HSTS,
12:55 XSS protection, frame denial, and some
12:57 other stuff. We will later go through
12:59 some of these headers. So, you can
13:01 enable this; it will just generate a new
13:02 middleware in the configuration. It
13:04 doesn't automatically attach it to
13:06 your service applications. And now we
13:08 want to decide if we want to enable
13:11 HTTPS with TLS and the ACME protocol. Of
13:13 course, this is probably the most
13:15 critical setting and why you want to use
13:17 my templates. Yes, of course I want to
13:19 do this. Name it websecure. And
13:21 now we can give the certificate
13:24 resolver a name. This is also using
13:26 the same variable names as the other
13:28 boilerplates use. I will set this to
13:29 Cloudflare, but you can customize it.
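The security-headers middleware file mentioned a moment ago might look roughly like this, as a dynamic configuration sketch (the exact header set in the template may differ; these option names come from the Traefik headers middleware):

```yaml
# Dynamic configuration -- security headers middleware (sketch)
http:
  middlewares:
    security-headers:
      headers:
        stsSeconds: 31536000        # HSTS max-age of one year
        stsIncludeSubdomains: true
        browserXssFilter: true      # X-XSS-Protection
        frameDeny: true             # X-Frame-Options: DENY
        contentTypeNosniff: true    # X-Content-Type-Options
```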
13:32 And here is the DNS challenge provider.
13:34 Currently, I'm only supporting
13:35 Cloudflare, as I said in the beginning.
13:37 If you want me to add another one, just
13:39 raise it as an issue on GitHub. But
13:41 currently, this is the only thing that's
13:44 supported. And now we can add the DNS
13:46 provider API token. By default in
13:48 Cloudflare, this is your personal access
13:50 token. I think I've explained this
13:52 multiple times, but just in case you
13:54 haven't followed my Traefik tutorial:
13:56 just go to your profile, go to API
13:58 tokens, and then create a new API token
14:01 with the zone DNS template. And then you
14:03 also need to add your email address. In
14:06 my case, I'm using info@clcreative.de.
14:08 And now you can also enable a
14:10 redirection rule. This definitely
14:13 would be recommended in production, to
14:16 redirect all HTTP traffic to HTTPS, to
14:18 prevent any of the users from
14:21 accidentally using an unencrypted HTTP
14:23 connection, because without a redirection
14:25 rule this would be possible. And also
14:27 the next setting is quite important for
14:31 security. This is enforcing TLS 1.2 as
14:34 the minimum version. I think we're
14:37 now at TLS 1.3, which adds maximum
14:39 security features, but not all backend
14:42 services and not all clients are using
14:44 or supporting it. So by default you
14:47 would enforce TLS 1.2. If you enforce
14:50 TLS 1.3, it might be that some older
14:53 laptops or some older phones are not
14:54 able to connect to your Traefik
14:56 instance. So therefore the default
14:58 setting should be fine; just if you want
15:00 to enforce the highest security possible,
15:02 then you can switch it to TLS 1.3 if
15:04 you want. Now, what I also would
15:06 recommend you to do, I haven't enabled
15:08 this by default because in the past it
15:11 caused some problems in some very rare
15:13 edge cases, but in general it is also
15:16 recommended to use strict cipher
15:18 suites for TLS. Now, if there are older
15:20 devices that might not support this, or
15:22 older backend services, you might have to
15:25 disable this. Sometimes, for backwards
15:27 compatibility, it might cause problems,
15:30 but in general it is recommended, so that
15:33 you're not accidentally using insecure
15:35 ciphers when you're initiating the TLS
15:37 handshake with the clients. There's also
15:40 another setting, skip TLS verification
15:41 for backend servers. Now, this is
15:43 only relevant if you're running
15:46 applications inside your Docker Swarm
15:48 cluster, so behind the Traefik reverse
15:51 proxy, that are using HTTPS
15:54 connections but are using self-signed
15:56 certificates. By default, I'm not
15:58 enabling this because it might be a
16:00 security risk, so it should not be best
16:02 practice. All right. And now comes the
16:04 most important setting for this
16:06 tutorial. This enables the Docker Swarm
16:08 mode. Of course, we want to enable this;
16:10 otherwise it will generate a standalone
16:12 Docker Compose file. And here comes the
16:15 important part for our first setup.
16:17 This is really important to change if
16:19 you want to go with the single-node
16:23 setup or with the global setup. By
16:26 default, I've set this to replicated.
16:28 That means it will create a specific
16:30 number of replicas of Traefik inside
16:33 your Docker Swarm cluster, and by default
16:36 the number is one. So that means it will
16:38 create one Traefik instance in the
16:41 entire cluster. And with the
16:44 target hostname placement constraint,
16:46 you can specifically define on which host you
16:48 want this Traefik instance to be created.
16:50 Note, this is really important,
16:53 because if you don't set this, it might
16:56 happen that your Traefik instance is
16:58 restarted on another Docker host that
17:01 does not have the same Docker volume for
17:03 the TLS store. I think in the case of
17:06 Traefik it's not that dramatic, honestly,
17:08 but it is not really best practice. So
17:10 therefore, I think it is necessary to pin
17:12 it to a specific host. And then we
17:14 want to configure the Swarm volume
17:16 storage backend. By default it is
17:18 local, so it uses a local Docker
17:20 volume. In case we're using a shared
17:22 volume later, we have to customize this
17:25 option. But for now, we will just run with
17:26 the defaults. And now we can also
17:28 customize the Docker Swarm secret name,
17:31 so where the API token that we've
17:34 added here is stored. Actually, it's
17:36 not really that important to change
17:39 this name; only if you're using this
17:41 name for another secret value, you have
17:43 to change it, obviously. And here you
17:45 can also enable an Authentik middleware,
17:47 for when you use the free and open-source
17:50 identity provider Authentik and
17:52 combine it with the Traefik reverse
17:55 proxy. Then you can enable the
17:57 Authentik SSO integration. This will
18:00 just generate the middleware
18:02 connected to your Authentik outpost. If
18:03 you're not using Authentik, just ignore
18:06 this. If you're interested in it,
18:08 again, refer to my Authentik tutorial.
18:09 There I've explained everything.
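Based on the options chosen so far, the Swarm part of the generated Compose file should come out roughly like this for the single-node variant (a sketch, not the tool's exact output; the image tag, hostname, and volume name are placeholders):

```yaml
# compose.yml excerpt -- single-node (replicated) variant, sketch
services:
  traefik:
    image: traefik:v3.1   # assumed image tag
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - node.hostname == vm-test-1  # pin to the node that owns the volume

volumes:
  traefik-certificates: {}   # local volume, only exists on that node
```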
18:11 Okay. So, now at the end, we get a
18:13 summary of what files are being created
18:15 by the Boilerplates tool. I know
18:18 that was a lot, but I tried
18:19 to make the boilerplates as
18:22 customizable as possible while still
18:25 enforcing or preconfiguring some
18:27 reasonable values. And before I deploy
18:29 this to my Docker Swarm cluster, let's
18:32 have a quick run through the files
18:34 that have been created. Here you can
18:35 see we're creating a new Traefik
18:38 service. We're exposing ports 80 and
18:41 443. In Swarm, these will be
18:44 exposed in ingress mode, so
18:45 this is using Docker Swarm's
18:48 internal load balancing feature, which is quite
18:50 nice. Then we are creating a new Docker
18:52 volume for the Traefik certificate
18:55 storage, and we are including the .env file,
18:58 which really just contains the name of
19:01 the CF API token file where the actual
19:03 secret is located. And then I'm also
19:05 attaching four configs, so Docker
19:08 configs that are automatically created
19:12 here from the content of these files
19:14 in the project directory. And
19:16 when we take a closer look at these
19:17 config files: here's the general
19:20 Traefik configuration, where the email
19:23 address and the DNS challenge for the
19:27 certificate resolver are configured, as
19:30 well as the TLS ciphers that have been
19:33 enabled by this setting here. And I also
19:36 configured two providers here: the Swarm
19:39 provider, not exposing any services by
19:42 default, connected to the proxy overlay
19:44 network, which is by the way managed in
19:47 this Compose file here. So here you can
19:48 see it will create this overlay network
19:51 as attachable. All right, and then I'm
19:54 also adding another provider that
19:56 will automatically watch any dynamic
19:59 configuration files in this directory.
20:02 So this is where the other files are
20:04 passed through. When you take a look at
20:05 the middleware file, this is adding
20:08 the security-headers middleware that
20:10 enables some security settings, which you
20:13 can just add to any applications using the
20:16 Traefik labels, and some placeholders for
20:19 adding custom routers for external
20:21 services, whatever you want to use this
20:24 for. And I also added a simple health
20:27 check using a ping request. And of
20:29 course, here you can also see the
20:32 Docker Swarm deployment options: the
20:34 mode is replicated with one replica, and
20:36 the placement constraint is set to hostname
20:40 VM test 1. So let's go into the
20:42 temporary directory where
20:44 everything is located. You can see I'm
20:46 connected to my test environment, and
20:48 currently there's nothing running. So
20:50 let's start deploying this. Now, in
20:53 Docker standalone mode you would just
20:55 use docker compose up, but of course, if
20:56 you're using Docker Swarm, if you've
20:58 watched my Swarm tutorial, you will use
21:02 the stack deploy command, set a specific
21:05 Compose file with -c (this is needed, otherwise
21:07 it will not work), and then give this a name,
21:09 something like traefik-single-node. I
21:12 think that's fine, right? So here it will
21:14 create the necessary resources, like the
21:17 proxy network, the secret,
21:21 the Docker configs, and the service. And
21:24 with docker stack ls we can see the
21:26 stack is up and running. Let's also
21:30 execute docker service ps, sorry, it's
21:33 docker service ls, and then docker
21:36 service ps, where we have to use the name
21:38 of the service. You can see it's
21:41 replicated, one replica is up and running
21:43 in our cluster. But just to get some more
21:45 details about what's happening here:
21:47 you can see the Docker container is running
21:49 on VM test 1. So if we would execute
21:51 docker ps, we would not see anything,
21:54 because I'm connected to the Docker
21:57 context on server 2. Actually, if we
22:00 open an SSH connection to the first
22:02 server and then execute docker ps, we
22:05 should see the Docker container created.
22:06 It's also healthy. So, that means the
22:09 ping request works. So, all should be
22:10 good. And yeah, let's run a simple
22:14 HTTPS request against the IP address:
22:15 page not found. This is, by the way,
22:18 coming from the Traefik reverse proxy.
22:21 So here you can see the Traefik
22:23 default certificate is the issuer of the
22:25 TLS certificate on that location. So
22:27 that means, yes, we are definitely
22:29 reaching the Traefik service. And what I
22:31 also want to show you, which is pretty
22:34 cool: if you're running
22:37 docker service ps, you can see that the
22:39 Traefik reverse proxy is currently only
22:42 started on that one server. But what
22:47 happens if we ping the IP address of
22:49 the second node, for example? So it's
22:50 10.20.30.4.
22:54 And if we also make a curl request
22:58 to that, you can see that this still gets
23:01 us the Traefik default certificate. So
23:03 even though we are deploying the Traefik
23:05 container on just one Docker node in the
23:10 cluster, we can still access it by any
23:12 IP address of the cluster. So that
23:14 actually means you don't have to run
23:16 Traefik in your cluster multiple times.
23:19 It's enough to run this one instance of
23:21 Traefik somewhere, and you can still
23:24 achieve high availability with this. And
23:26 let me also demonstrate how that would
23:28 work, to connect multiple service
23:30 applications. Again, if we return to the
23:32 boilerplates, it would be nice to just
23:34 deploy the whoami boilerplate.
23:37 So, this is just a very simple test
23:40 container that is used with Traefik to
23:42 verify that it is working. This is
23:44 just deploying a very simple Docker
23:46 Compose project with just one Docker
23:48 Compose file. But you can still
23:51 automatically add Swarm mode and
23:53 Traefik TLS and Traefik labels to it,
23:56 just to test how this would work. Yeah.
23:59 And let's also generate a new
24:03 boilerplate for whoami in the /tmp/
24:06 whoami directory. Yeah. So let's do
24:08 this, and let's customize some of these
24:10 settings here. All right. Just assume
24:12 this would be a production-ready
24:13 application that we want to deploy.
24:17 Yeah. Let's just use the default
24:19 values for the hostname, the restart
24:21 policy, and stuff like that. Now comes
24:23 the important part: we want to
24:25 connect this to the proxy network. So
24:28 this is the name of the Traefik network,
24:30 the overlay network that we've created.
24:32 So now we want to set the domain name
24:34 for our application. Of course, we're
24:36 not using localhost, but we can use
24:39 any name that we created on our DNS
24:42 server. So in my case, I created one
24:44 that is whoami.home.clcreative.de.
24:46 So, making sure that we're using trusted
24:48 TLS certificates, and the certificate
24:51 resolver name is Cloudflare. So now this
24:53 variable name matches
24:56 the Traefik template variable as well. And
24:59 because we're using Docker Swarm, we
25:02 also need to deploy this container as a
25:05 Swarm stack, because, if you remember, in
25:07 the Traefik configuration file that
25:11 we've created, I didn't enable the
25:13 Docker provider here; I enabled the
25:16 Swarm provider. So that means we cannot
25:18 expose standalone containers with this
25:20 template. Actually, there is a trick:
25:23 if you enable the Docker provider
25:25 here and configure it, you can check
25:28 this in the documentation, and it is
25:31 kind of possible. So you can run Traefik
25:34 itself in Docker Swarm but
25:37 expose standalone containers, if they are
25:40 attached to the same proxy network. It
25:41 will work. However, it will also
25:44 generate a warning inside the Traefik
25:46 logs, and I think it is not really a
25:49 best practice or a supported deployment
25:51 method. So, if you're running Traefik
25:54 in Docker Swarm, just assume that you
25:57 can only expose Swarm services and
25:59 not standalone containers. Yeah, it is
26:00 technically possible, but it's probably
26:03 not recommended. All right, so yeah,
26:05 that's about that. So, yes, enable Swarm
26:08 mode. And now I want to place this,
26:10 I want to run this as replicated,
26:12 with one replica of course, but I want to
26:15 target another hostname, for example
26:18 VM test 2, just to demonstrate how it
26:21 would work on a single-node deployment,
26:23 with Traefik running on VM test 1 and
26:25 our application container running on VM
26:27 test 2. Okay. So let's generate these
26:31 files, and then let's go into this
26:35 directory. I think I've used whoami
26:38 as the name. Oh yeah. And this is our
26:39 Compose file. You can see this just
26:42 generates a very, very simple test
26:44 container, but it has our deployment
26:46 configuration for Docker Swarm, running
26:49 on VM test 2, and it also has attached
26:51 the Traefik labels that are necessary to
26:54 expose this with our Swarm reverse
26:56 proxy. And of course, I've also attached
26:58 it to the proxy network, the overlay
27:00 network where Traefik is running. These
27:03 are the important parts that you need.
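For reference, the important parts of such an application stack, Traefik labels included, look roughly like this (a sketch; the domain, network, and resolver names must match your own setup):

```yaml
# compose.yml excerpt for a test application -- sketch
services:
  whoami:
    image: traefik/whoami
    networks:
      - proxy
    deploy:
      mode: replicated
      replicas: 1
      labels:  # in Swarm, Traefik reads labels from the service, not the container
        - "traefik.enable=true"
        - "traefik.http.routers.whoami.rule=Host(`whoami.example.com`)"
        - "traefik.http.routers.whoami.entrypoints=websecure"
        - "traefik.http.routers.whoami.tls.certresolver=cloudflare"
        - "traefik.http.services.whoami.loadbalancer.server.port=80"

networks:
  proxy:
    external: true  # the attachable overlay network created by the Traefik stack
```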
27:05 And then you can just use docker stack
27:09 deploy -c compose.yml.
27:10 Of course, I want to use a different
27:13 name, just whoami, I think
27:15 that's fine. And then it's creating the
27:19 service. All right. So now let's just
27:21 run docker stack ls. There we have our
27:22 reverse proxy and our application
27:25 container. And now, of course, it's also
27:29 important that you have set the DNS name
27:32 and pointed it to the IP address of
27:34 any of the nodes in the cluster. So here
27:36 you can see that's the IP address of the
27:40 VM test 1 server, so where the Traefik
27:41 reverse proxy is running, but actually it's
27:43 not important which of the IP addresses
27:46 you choose. And then, if we open a
27:48 connection, here you can see it's "moved
27:50 permanently". So that's the
27:54 redirection to the HTTPS location. So
27:56 let's follow this. Sometimes it can take
27:58 a few seconds, or maybe up to a minute or
28:01 two, until the certificate is issued by
28:04 the Traefik reverse proxy. Let's just
28:06 try it again. And oh yeah, so now
28:08 you can see the connection is working,
28:10 and it should have a trusted TLS
28:12 certificate. So when we are enabling
28:14 verbose mode, we should see some of the
28:16 details here. You can see that we're
28:19 using TLS 1.3. The minimum version was
28:22 configured to 1.2, but as this is an
28:25 up-to-date client and the reverse proxy
28:27 is up to date, it automatically uses the
28:31 highest version possible, and also
28:33 one of the most secure ciphers that the
28:36 server and the client both support. You
28:38 can also see the uh subject of the
28:42 certificate who mihome.creative.de
28:44 And this is a trusted TLS certificate
28:47 that was issued by let's encrypt. So the
28:50 AKMA protocol is working. We get real
28:53 certificates and also the connection
28:54 from the traffic reverse proxy running
28:57 on server one to the application running
29:00 on server 2 is also working because both
29:02 are connected to the same Docker overlay
29:03 network. All right. And that is
29:06 basically how you run this very simple
29:08 setup to achieve high
29:11 availability and load balancing in your
29:13 home lab using just a single instance
29:16 of Traefik in your Docker Swarm cluster.
29:18 However, as I told you in the beginning of
29:22 this video, that has a small downside,
29:25 and that is: what if we shut down the server
29:28 VM test 1, where Traefik is actually
29:30 running? I can just demonstrate this to
29:33 you. When I open an SSH connection
29:36 and shut down this server, and then we
29:39 try to connect again, you can see it's
29:41 not working anymore. It's running
29:44 into a timeout. So how do we achieve the
29:47 second setup with a shared volume,
29:49 where we store the TLS certificates on
29:53 a separate NFS server and deploy Traefik
29:55 on all three nodes in the cluster, so we
29:58 can actually achieve high availability?
29:59 All right. So first of all we have to
30:02 remove the Traefik stack. That
30:04 will again not work, because there is an
30:06 application connected to the proxy
30:08 overlay network. So you first
30:10 need to make sure that all the services
30:13 using this proxy network are down. So we
30:16 need to remove the whoami stack as
30:19 well, and only then can we remove the
30:21 traefik-single-node stack, so that
30:23 there's nothing running anymore on that
30:25 cluster. All right, let's return to
30:28 the boilerplates CLI and generate
30:32 another deployment of Traefik,
30:34 but now let's set the name to "global". I
30:37 will just rush through the first items
30:38 here because we've covered them before.
30:41 Just use the same settings. So here
30:43 comes the difference: now we need to
30:45 switch the placement mode from
30:47 "replicated", which is the default, to
30:49 "global". As I explained
30:51 in the beginning, replicated means it
30:53 will create a specific number of
30:56 replicas in your entire cluster. So it
30:58 will decide, depending on the CPU
31:00 and memory load of the Swarm nodes, on
31:03 which node to put the replicated service.
31:06 If you switch to global, it will
31:08 always make sure that there is one
31:10 instance of the service on each Docker
31:12 Swarm node in the cluster. So if we have
31:14 three nodes in the cluster, we also get
31:17 three Traefik instances. Now
31:18 for this we need a shared volume.
31:20 Therefore I added an option here to
31:24 change the local volume to a mount
31:26 point. This could be a shared volume
31:29 using a Ceph-backed mount point
31:32 or whatever, or you can use NFS. When
31:34 using NFS you need to put in the IP
31:36 address of the NFS server. In my case it
31:39 is 10.20.0.7,
31:41 which is by the way the Unraid NAS
31:44 that I'm running in my home lab, and it
31:48 has a shared volume under /mnt/user/app-testing.
31:52 Make sure that this
31:54 directory exists on the NFS server,
31:57 otherwise it might run into some issues.
31:59 Then you can also customize the
32:02 mount options. And the API token
32:04 name, I think that's fine, and
32:06 Authentik we don't need as well.
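The placement change itself boils down to a single setting in the deploy section of the compose file. A minimal sketch (image tag and the manager constraint are assumptions, not taken from the generated file; the constraint is a common choice so Traefik can talk to the Swarm API):

```yaml
services:
  traefik:
    image: traefik:v3.1            # version illustrative
    deploy:
      mode: global                 # one instance per Swarm node, instead of "replicated"
      placement:
        constraints:
          - node.role == manager   # assumed: keep Traefik on nodes that can reach the Swarm API
```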
32:09 All right, so now the files are stored
32:11 here.
32:14 Let's just take a look at the "global"
32:17 directory and open this in VS
32:19 Code. So again, this is very
32:22 similar to the previous setup, except
32:24 it uses the deployment mode "global" and
32:27 a configuration for creating an NFS
32:30 volume inside the Docker Swarm
32:32 cluster. Ah yes, it's adding NFS
32:34 version 4. I haven't made this
32:36 optional yet, because I think NFS version
32:38 4 is actually the most recommended,
32:40 but all the other stuff should
32:43 be exactly the same. So we can just
32:46 return here. I think that's correct.
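The generated NFS volume definition presumably looks something like this (volume name and export path are reconstructed, not copied from the video; the option keys follow Docker's local volume driver syntax for NFS mounts):

```yaml
volumes:
  traefik-certificates:        # volume name illustrative
    driver: local
    driver_opts:
      type: nfs
      o: "addr=10.20.0.7,nfsvers=4,rw"   # NFS server IP and version 4, as configured above
      device: ":/mnt/user/app-testing"   # export path on the NAS (assumed)
```

Because every node mounts the same export, all three Traefik instances read and write the same ACME certificate storage.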
32:49 Yes. And then just deploy the Compose
32:54 stack as "traefik-global". Okay,
32:56 so that should work, and of course we
32:58 also need to redeploy the whoami
33:01 container. Okay. So let's quickly
33:04 check if the stacks are up and running.
33:07 Yes. And if the services are up and
33:09 running. Yes. So here you can see a
33:11 difference between the first deployment
33:13 and the second one.
33:15 If we take a look here, you can see that
33:17 there are three replicas actually
33:21 running: on each of the nodes is one
33:23 instance of the Traefik container
33:24 running. And of course, we can check if
33:26 we can still connect to this. Again,
33:29 it probably takes some time until the
33:31 certificate is issued.
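The failover behavior this setup relies on, a client that receives several A records falling back to the next IP when one node is down, can be sketched like this (host and port values are illustrative):

```python
import socket

def connect_any(addresses, timeout=1.0):
    """Try each (host, port) in order and return a socket
    to the first address that accepts the connection."""
    last_error = None
    for host, port in addresses:
        try:
            # create_connection raises OSError if the node is down or unreachable
            return socket.create_connection((host, port), timeout=timeout)
        except OSError as exc:
            last_error = exc  # remember the failure and try the next address
    raise OSError(f"no address reachable: {last_error}")
```

Browsers and curl do something similar across DNS A records, but the per-address timeout on a dead node is exactly the delay discussed at the end of this video.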
33:34 So yes, we got the trusted TLS
33:36 certificate. Now let me also
33:38 demonstrate what happens when I shut
33:41 down the first node. The one
33:43 problem that we still have is that when
33:46 we try to connect to this service,
33:49 although there is a separate Traefik
33:51 instance running on VM test 2 and
33:53 3 and the application container is
33:55 running, the problem that we are now
33:57 facing is that the connection is still
34:00 initiated to the first IP address only.
34:02 And the reason for this is
34:06 that I've configured the DNS name
34:08 for the whoami service in my
34:11 local DNS server to only resolve to
34:15 the first IP address. So when I go to my
34:17 DNS project, you can see the
34:19 DNS record: it still only has this one
34:22 IP address. But if we add the
34:26 other IP addresses to it as well,
34:27 then we still get a connection from at
34:30 least one of these IP addresses where
34:32 Traefik is running. This way you can
34:35 have a reliable and truly highly available
34:38 deployment of the Traefik reverse proxy on
34:40 Docker Swarm. Now, there's still one
34:42 more thing that I need to address at the
34:44 end, and these are the final thoughts, or
34:47 my ideas for future videos. This
34:50 might introduce some problems: if the client
34:53 still resolves the IP address of a node
34:55 that is down, it still tries to
34:57 connect to that IP address. Now, in our
34:59 case we are just picking a different IP
35:01 address, but if we connected to the dead
35:04 IP address we would still run into a
35:07 timeout, and it might take some
35:09 time until the client tries to connect
35:13 to a different IP address, and that still
35:15 is a problem. So although the Traefik
35:17 reverse proxy setup is truly highly
35:19 available, our Docker Swarm cluster is
35:21 not. This requires an additional
35:23 video about keepalived, because I think
35:25 otherwise it would be too much for this
35:28 one tutorial. However, I still hope that
35:30 this helped you to get a reliable and
35:32 truly highly available setup of Traefik
35:34 running in Docker Swarm. And if you
35:35 really want to make this production
35:37 ready, you can use an external load
35:40 balancer to check if these nodes are up
35:41 and running and route the incoming
35:44 client requests to the online servers, or
35:46 you can use a floating IP address with
35:49 something like keepalived. But again, we'll
35:51 cover that in a future video. All right,
35:53 guys. So now it's your turn. Please
35:56 tell me: did you like this video? Did it
35:57 help you? And what do you think about
36:00 the boilerplates CLI tool? I
36:02 would really love to hear your opinion
36:03 about this. And if you feel this is a
36:05 promising project that helps you get
36:07 up to speed quickly, then also
36:09 consider supporting it, or at least
36:11 give it a star on GitHub. That always
36:13 helps. Thank you so much for
36:15 watching, thanks a lot to the people
36:17 supporting this project already, and of
36:19 course, I'm going to catch you in the
36:20 next video tutorial. So, have a nice