How to run Traefik in Docker Swarm? | Christian Lempa | YouTubeToText
Core Theme
This content explains how to set up the Traefik reverse proxy and load balancer in Docker Swarm, offering two distinct approaches for different environments and detailing a custom CLI tool to simplify the configuration process.
Hey guys, this is Christian, and in this video I'm going to show you how to run Traefik, my absolute favorite reverse proxy and load balancer, in Docker Swarm. This is the setup for production-ready, highly available load balancing with trusted TLS certificates for all of your applications in Docker Swarm. But I have to be honest with you guys, it wasn't an easy task, because when I first tried to combine Traefik and Docker Swarm, I ran into some practical challenges. For example, how do you handle and maintain a consistent configuration across all of the different nodes? How do you store the TLS certificates? And what happens when the node where Traefik is running goes down? Today I'm going to show you exactly how I've solved all of these problems, following two different approaches: one that is a bit simpler and, in my opinion, perfect for smaller home labs and setups, using local Docker volumes, and another one that is truly production-ready with zero single points of failure, using shared storage.
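The difference between the two approaches comes down to the Docker Swarm deploy mode and where the certificate store lives. As a rough sketch (the service, volume, and hostname values here are illustrative, not taken from the generated files):

```yaml
# Approach 1: a single Traefik replica pinned to one node, local volume
services:
  traefik:
    image: traefik:latest
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - node.hostname == vm-test-1   # pin to the host that owns the TLS store
    volumes:
      - traefik-certs:/letsencrypt       # local Docker volume on that host

# Approach 2: one Traefik instance per node, backed by a shared (e.g. NFS) volume
#   deploy:
#     mode: global                       # Swarm starts one task on every node
```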
Plus, to make your life easier, I've built all of this using my new boilerplates CLI tool, so you can generate everything, the Docker Compose stack and all of the Traefik configuration files with proper security settings, in just a few commands. Now, before we jump right in, I also want to mention that if you're managing containers, clusters, and maybe other infrastructure services in your environment, it is also important to monitor all of these systems and get notified about any bottlenecks or problems. That's why you definitely should check out Checkmk, the sponsor of today's video. Checkmk is a comprehensive IT monitoring platform that is scalable, automated, and highly extensible. With over 2,000 maintained plugins, it can monitor nearly all of your network components across all the different manufacturers, and it has many advanced capabilities like auto-discovery with preconfigured thresholds and rules, and custom plugins. The features are just insane. Furthermore, it creates stunning visualizations and has a dynamic dashboard with log and event monitoring that really allows you to drill into all of the details and get sophisticated alerts and notifications. It is really super cool. I'm personally using the free and open source Raw Edition of Checkmk in my home lab, which I've already done some videos on, so check them out if you like. And if you want to use Checkmk for your company's infrastructure to monitor your production systems, they have many options to deploy a self-hosted solution or use their cloud service offerings. Of course, I will put the link to Checkmk and my tutorials in the description box down below.
All right, so before we start setting up Traefik in Docker Swarm, let me quickly recap what we've done in the past and what some of the challenges were that I needed to solve. A few videos back, I showed you how to set up Traefik on a single Docker instance. We got TLS certificates for HTTP and HTTPS reverse proxying for local applications and services, and we also covered how to store the TLS certificates in a persistent Docker volume, as well as mounting the configuration files and connecting Let's Encrypt, and it really worked great. Yeah, for a single server, of course. Then, in another video, I showed you how to set up a Docker Swarm cluster, which brings high availability and some other advanced capabilities. But we are also facing a few challenges with this, because as you might have learned from my Docker Swarm tutorial, local Docker volumes are only persistent on the one Docker host where they are created; they are not automatically replicated across the cluster the way configs or secrets are. So that's a problem we need to solve. And what actually happens when we deploy Traefik on one of the Docker hosts and that host goes down? How can we make sure that there is always a Traefik instance running? We will also take a closer look at that.

To tackle these challenges, there are two different possible setups. The first uses local Docker volumes on a single node: we just deploy one Traefik instance on one of the Docker hosts, and we can use a Docker overlay network to connect that one Traefik instance to any application in the entire cluster. That is really a great setup in smaller home lab environments where you might not have a shared storage volume. The only downside of this setup is that when node one goes down, so the node where Traefik is running, we can't access any of the applications in our cluster, even if they might still be running on the other two remaining nodes. So if you really want a reliable and solid production-ready setup, you have to do it the second way. You need a volume that is shared across all of the nodes, which you can achieve, for example, with an NFS server; that can be a TrueNAS or an Unraid server running somewhere outside of the Docker cluster. You then switch the deployment mode of your stack from replicated to global, which means the Docker Swarm cluster will make sure that each Docker node has one instance of that container running. So Traefik is running on all of the hosts, and when one of them goes down, the applications might be restarted on the other remaining nodes, but a Traefik instance is always up and running. These are the two deployment methods that, at least in my humble opinion, make the most sense. It is up to you what you plan to use: whether you want a more sophisticated setup for true high availability, or whether you are okay, in your small home lab, with possibly needing to restart a Docker host. That is again up to you and how complicated you want this setup to be. Now, if we take a look
at the Traefik documentation, honestly guys, I'm not the biggest fan of it, because there are a lot of fragmented areas. It has improved a lot over time, don't get me wrong, but it is a quite complicated setup with many different things you need to keep in mind, and Traefik in general is, in my opinion, not a really intuitive solution. It really takes some time to get up to speed and to understand all the things you need to set up and how they work in combination, especially on Docker Swarm. So that's why I thought, instead of just going through this documentation, I'm going to create my own boilerplate, which is like a template that contains all of the best practices and things that we're covering in my tutorial. And you can use my newly created boilerplates CLI tool to easily create a Docker Compose template for Traefik running on a single-node server or on Docker Swarm, using one or the other deployment method; it is completely up to you. It's highly customizable and flexible, and you can install it with a one-line command that you can just copy from my Git repository, of course linked in the description. Put this in the terminal, and it will install the latest version of the boilerplates tool, and then we can use it to create a new deployment Docker Compose file for Traefik. Now let's take a closer look at how that tool actually works. Once you have installed it, you can just execute it with boilerplates. Always make sure to run a repo update first; that will download and synchronize the latest versions of the boilerplates templates with my Git repository. Then you can use the compose list command to get a list of all the boilerplate templates that I've created for Docker Compose.
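The basic workflow just described looks roughly like the following (the subcommand names are my paraphrase of what is said in the video, so double-check them against the repository's README):

```shell
# sync the local template cache with the Git repository
boilerplates repo update

# list all available Docker Compose templates
boilerplates compose list

# generate the Traefik template into a project directory
boilerplates compose generate traefik /tmp/traefik-single-node
```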
So here we find Traefik. Again, this is using the latest schema, 1.1; I'll probably update this in the near future when I include more features in the boilerplates. And here you can create preconfigured versions of the Traefik Compose stack, including Authentik middleware presets, and also make it ready for Docker Swarm. Okay, so let's now take a closer look at the actual files it creates and the variables we can use to customize this template. First of all, it creates a bunch of different files: some environment variables, an environment secrets file that later contains our token for the DNS challenge, which is necessary to get trusted TLS certificates from Let's Encrypt, and a Compose file. Depending on whether you are using Docker Swarm or Docker standalone, it will generate this file with preconfigured settings, and it will also create a static configuration file for Traefik, which is where you configure your general settings. It also creates some placeholders for dynamic configuration files, where you can add your middlewares, your routers, and your service definitions, and puts them in a config directory. In Docker standalone, the config files just get mounted. In Docker Swarm, we create Docker configs that are generated automatically from the content of these config files. And of course, I also included some preconfigured settings, like enabling or disabling the dashboard, enabling or disabling the access log, Prometheus metrics, and I even added some preconfigured security settings. So I really tried to take as many of the reasonable configuration settings that I've explained in some of my older tutorials as possible, and put them all together in one comprehensive template where you can easily enable or disable certain flags, and it will automatically put the necessary configuration values into the actual files.

First, let me show you the test setup that I have created for this. I've deployed three new virtual machines, installed the Docker engine on them, and connected them to a Docker Swarm cluster. All of them are managers, and I'm not running any containers or service applications yet. I can also show you the networks: there is only the default ingress overlay network from Docker Swarm and the default bridge networks that you might be familiar with. So this is really a newly created Docker Swarm cluster where we want to deploy Traefik. All right, so now let's start generating a new template with the boilerplates generate command, followed by the name of the boilerplate, or template, that we want to create. We can also add a project directory; I'm just going to put this in a temporary traefik-single-node directory, which is what we will use for the single-node deployment. This again prints out all of the variables and the files, and asks if we want to customize any of these settings. You can see that by default, this template will work great in a standalone Docker container, so with just docker compose up on a single Docker host, but of course we can customize all of these variables, enable TLS settings, and enable Docker Swarm. So let's go through all of these options one by one together. So let's say yes.
I'm not going to customize the service container or internal hostname here; I'll just go with the defaults, set my container time zone and the container log level that I want to use (I think info is fine), and the restart policy unless-stopped should also be fine. The HTTP and HTTPS ports you should not need to customize, because usually we want to use the default ports 80 and 443, and the Traefik dashboard is by default running on port 8080. Note that this is only used when the dashboard is enabled, and I personally would not recommend enabling it unless you have protected it with strong authentication. The next important setting is the Traefik network name. My boilerplate will automatically create an overlay network that is attachable, which means you can attach other projects or container stacks to that Docker network, and then the Traefik reverse proxy will automatically be connected with all the applications running on the same Swarm cluster, no matter which node they are running on. So this is really important: connect the Traefik reverse proxy and all your other application deployments to the same Docker network. All right, long story short, let's continue with the default value. For the name of the HTTP entrypoint, I'm always using web; I've seen other tutorials use different names, but that's probably the most common one. Now, if you're attaching the Traefik reverse proxy, or rather the Compose stack, to an existing Docker network, you can enable that here. But if you follow the default values, it will create a new network and manage it in the Compose stack that we're creating. Now, of course, I don't want to enable the Traefik dashboard; again, don't use this in production unless you have specifically protected it. And here you can also enable the Traefik access log. Whenever somebody accesses the reverse proxy, it will log this as a new entry, along with the HTTP status code and so on. This produces many, many logs, but if you need them, if you want more granular control over who is actually accessing which URL on your reverse proxy, you can enable this access log.
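In Traefik's static configuration, the entrypoints and access log described above end up looking something like this (a minimal sketch of a traefik.yml, not the exact file the tool generates):

```yaml
# traefik.yml (static configuration), illustrative only
entryPoints:
  web:
    address: ":80"        # HTTP entrypoint
  websecure:
    address: ":443"       # HTTPS entrypoint

accessLog: {}             # enable request logging with default settings

log:
  level: INFO

# the dashboard stays disabled unless explicitly protected
# api:
#   dashboard: true
```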
By the way, I'm also thinking about some upgrades to the boilerplates CLI tool for advanced Prometheus metrics configuration; in the current implementation, you can enable this to expose a /metrics endpoint. And here you can also decide whether to create a production-ready security headers middleware file. This enables HSTS, XSS protection, frame denial, and some other things; we will go through some of these headers later. You can enable this; it will just generate a new middleware in the configuration. It doesn't automatically attach it to your service applications. And now we need to decide whether we want to enable HTTPS with TLS and the ACME protocol. Of course, this is probably the most critical setting and the main reason to use my templates. Yes, of course I want to do this. I'll name the entrypoint websecure. Now we can give the certificate resolver a name; this uses the same variable names as the other boilerplates. I will set it to cloudflare, but you can customize it. And here is the DNS challenge provider. Currently, I'm only supporting Cloudflare, as I said in the beginning; if you want me to add another one, just raise it as an issue on GitHub, but currently this is the only one that's supported. Now we can add the DNS provider API token. For Cloudflare, this is your personal access token; I think I've explained this multiple times, but just in case you haven't followed my Traefik tutorial: go to your profile, go to API tokens, and then create a new API token with the zone DNS template. Then you also need to add your email address; in my case, I'm using info@clcreative.de.
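The ACME settings being configured here map to a static-config section roughly like the following (an illustrative sketch: the resolver name matches the walkthrough, but the storage path and email are placeholders, and the Cloudflare token itself is supplied via the secret, never written into the config):

```yaml
# traefik.yml (static configuration), illustrative only
certificatesResolvers:
  cloudflare:
    acme:
      email: "you@example.com"           # ACME account email
      storage: "/letsencrypt/acme.json"  # persisted in the Traefik volume
      dnsChallenge:
        provider: cloudflare             # token read from CF_DNS_API_TOKEN
```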
Now you can also enable a redirection rule. This would definitely be recommended in production: redirect all HTTP traffic to HTTPS, to prevent any user from accidentally using an unencrypted HTTP connection, because without a redirection rule, that would be possible. The next setting is also quite important for security: enforcing TLS 1.2 as the minimum version. We're now at TLS 1.3, which adds the strongest security features, but not all backend services and not all clients support it yet. So you should definitely enforce at least TLS 1.2; if you enforce TLS 1.3, some older laptops or phones might not be able to connect to your Traefik instance. Therefore, the default setting should be fine; only if you want to enforce the highest security possible can you switch it to TLS 1.3. What I would also recommend, though I haven't enabled it by default because in the past it caused problems in some very rare edge cases, is to use strict cipher suites for TLS. If there are older devices or older backend services that might not support this, you may have to disable it; sometimes it causes problems for backwards compatibility. But in general it is recommended, so that you're not accidentally using insecure ciphers when initiating the TLS handshake with clients. There's also another setting, skip TLS verification for backend servers. This is only relevant if you're running applications inside your Docker Swarm cluster, behind the Traefik reverse proxy, that use HTTPS connections with self-signed certificates. By default, I'm not enabling this, because it can be a security risk and is not best practice. All right. And now comes the
most important setting for this tutorial: this enables the Docker Swarm mode. Of course we want to enable this; otherwise it will generate a standalone Docker Compose file. And here comes the important part for our first setup. This is really important to change depending on whether you want to go with the single-node setup or with the global setup. By default, I've set this to replicated, which means it will create a specific number of replicas of Traefik inside your Docker Swarm cluster, and by default that number is one. So it will create one Traefik instance in the entire cluster, and with the target hostname placement constraint you can specifically define on which host you want this Traefik instance to be created. Note that this is really important, because if you don't set it, your Traefik instance might be restarted on another Docker host that does not have the same Docker volume for the TLS store. In the case of Traefik, I honestly think it's not that dramatic, but it is not really best practice, so I think it is necessary to pin it to a specific host. Then we want to configure the Swarm volume storage backend. By default it is local, so it uses a local Docker volume. When we use a shared volume later, we will have to customize this option, but for now we will just run with the defaults. Now we can also customize the Docker Swarm secret name, which is where the API token that we added is stored. It's not really that important to change this name; only if you're already using the name for another secret value do you obviously have to change it. And here you can also enable an Authentik middleware, for when you use the free and open source identity provider Authentik combined with the Traefik reverse proxy. If you enable the Authentik SSO integration, it will just generate the middleware connected to your Authentik outpost. If you're not using Authentik, just ignore this; if you're interested in it, refer to my Authentik tutorial, where I've explained everything.
Okay. At the end, we get a summary of the files that are being created by the boilerplates tool. I know that was a lot, but I tried to make the boilerplates as customizable as possible while still enforcing, or preconfiguring, some reasonable values. Before I deploy this to my Docker Swarm cluster, let's have a quick run through the files that have been created. Here you can see we're creating a new Traefik service and exposing ports 80 and 443; in Swarm, these will be published in ingress mode, which uses Docker Swarm's internal load balancing feature, which is quite nice. Then we are creating a new Docker volume for the Traefik certificate storage, and we are including the .env file, which really just contains the name of the CF API token file where the actual secret is located. And then I'm also attaching four configs, Docker configs that are automatically created from the content of the files in the project directory. When we take a closer look at these config files: here's the general Traefik configuration, where the email address and the DNS challenge for the certificate resolver are configured, as well as the TLS ciphers that have been enabled by that setting. I also configured two providers here: the swarm provider, not exposing any services by default, connected to the proxy overlay network, which is, by the way, managed in this Compose file; here you can see it will create this overlay network as attachable. And then I'm also adding another provider that automatically watches for dynamic configuration files in this directory; this is where the other files are passed through. When you take a look at the middleware file, it adds the security-headers middleware that enables some security settings, which you can attach to any application using the Traefik labels, plus some placeholders for adding custom routers for external services, whatever you want to use them for. I also added a simple health check using a ping request. And of course, here you can also see the Docker Swarm deployment options: the mode is replicated with one replica, and the placement constraint is set to the hostname vm-test-1. So let's go into the
temporary directory where everything is located. You can see I'm connected to my test environment, and currently there's nothing running, so let's start deploying this. In Docker standalone mode, you would just use docker compose up, but of course, if you're using Docker Swarm, as you know if you've watched my Swarm tutorial, you will use the stack deploy command, set a specific Compose file (this is needed, otherwise it will not work), and then give it a name, something like traefik-single-node; I think that's fine. Here it will create the necessary resources: the proxy network, the secret, the Docker configs, and the service. With docker stack ls, we can see the stack is up and running. Let's also execute docker service ps, sorry, it's docker service ls first, and then docker service ps with the name of the service, of course. You can see it's replicated, and one replica is up and running in our cluster. To get some more details about what's happening here: you can see the Docker container is running on vm-test-1, so if we executed docker ps, we would not see anything, because I'm connected to the Docker context on server two. But if we open an SSH connection to the first server and then execute docker ps, we should see the Docker container created.
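The deployment and verification steps just walked through correspond to commands along these lines (the stack name matches the one used in the video; the service name follows Docker's usual stack_service convention and is an assumption):

```shell
# deploy the generated stack to the Swarm cluster
docker stack deploy -c compose.yml traefik-single-node

# list stacks and services, then inspect where the Traefik task landed
docker stack ls
docker service ls
docker service ps traefik-single-node_traefik
```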
It's also healthy, so that means the ping request works and all should be good. And yeah, let's run a simple HTTPS request against the IP address: 404 page not found. This is, by the way, coming from the Traefik reverse proxy; here you can see the Traefik default certificate is the issuer of the TLS certificate at that location, so that means yes, we are definitely reaching the Traefik service. And what I also want to show you, which is pretty cool: if you run docker service ps, you can see that the Traefik reverse proxy is currently only started on that one server. But what happens if we ping the IP address of the second node, for example 10.20.30.4, and also make a curl request to it? You can see that this still returns the Traefik default certificate. So even though we are deploying the Traefik container on just one Docker node in the cluster, we can still access it through any IP address of the cluster. That actually means you don't have to run Traefik in your cluster multiple times; it's enough to run this one instance of Traefik somewhere, and you can still achieve high availability with this. And let me also demonstrate how that would work when connecting multiple service applications. If we return to the boilerplates, it would be nice to just deploy the whoami boilerplate. This is just a very simple test container that is commonly used with Traefik to verify that everything is working. It deploys a very simple Docker Compose project with just one Compose file, but you can still automatically add Swarm mode, Traefik TLS, and Traefik labels to it, just to test how this would work.
Let's generate a new boilerplate for whoami in the /tmp/whoami directory, and let's customize some of these settings. Just assume this would be a production-ready application that we want to deploy. Let's use the default values for the hostname, the restart policy, and things like that. Now comes the important part: we want to connect this to the proxy network, the name of the Traefik overlay network that we've created. Next, we want to set the domain name for our application. Of course, we're not using localhost; we can use any name that we created on our DNS server. In my case, I created one that is whoami-home.clcreative.de. Then we make sure that we're using trusted TLS certificates and that the certificate resolver name is cloudflare, so this variable matches the Traefik template variable as well. And because we're using Docker Swarm, we also need to deploy this container as a Swarm stack, because if you remember, in the Traefik configuration file that we created, I didn't enable the docker provider; I enabled the swarm provider. That means we cannot expose standalone containers with this template. Actually, there is a trick: if you enable and configure the docker provider (you can check this in the documentation), it is kind of possible. You can run Traefik itself in Docker Swarm but still expose standalone containers, as long as they are attached to the same proxy network; it will work. However, it will also generate a warning in the Traefik logs, and I think it is not really a best practice or a supported deployment method. So if you're running Traefik in Docker Swarm, just assume that you can only expose Swarm services, not standalone containers. Yeah, it is technically possible, but it's probably not recommended. All right, so that's about that. So, yes, enable Swarm mode. And now I want to run this as replicated, with one replica of course, but I want to target another hostname, for example vm-test-2, just to demonstrate how it would work in a single-node deployment with Traefik running on vm-test-1 and our application container running on vm-test-2. Okay. So let's generate these files and then go into this directory; I think I've used whoami as the name. Oh yeah, and this is our Compose file. You can see it just generates a very, very simple test container, but it has our deployment configuration for Docker Swarm, running on vm-test-2, and it also has the Traefik labels attached that are necessary to expose it through our Swarm reverse proxy. And of course, I've also attached it to the proxy network, the overlay network where Traefik is running. These are the important parts that you need.
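Put together, the essential parts of such a whoami Compose file look roughly like this (a sketch: the network name, hostnames, and domain are the ones used in this walkthrough, while the router and service names are illustrative):

```yaml
# compose.yml for the whoami test service, illustrative only
services:
  whoami:
    image: traefik/whoami:latest
    networks:
      - proxy                      # same overlay network as Traefik
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - node.hostname == vm-test-2
      labels:                      # the swarm provider reads labels on the service
        - "traefik.enable=true"
        - "traefik.http.routers.whoami.rule=Host(`whoami-home.clcreative.de`)"
        - "traefik.http.routers.whoami.entrypoints=websecure"
        - "traefik.http.routers.whoami.tls.certresolver=cloudflare"
        - "traefik.http.services.whoami.loadbalancer.server.port=80"

networks:
  proxy:
    external: true                 # created and managed by the Traefik stack
```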
Then you can just use docker stack deploy with the compose.yml. Of course, I want to use a different name; let's just use whoami, I think that's fine. Then it's creating the service. All right, so now let's run docker stack ls: there we have our reverse proxy and our application container. Now, of course, it's also important that you have set the DNS name and pointed it to the IP address of any of the nodes in the cluster. Here you can see that's the IP address of the vm-test-1 server, where the Traefik reverse proxy is running, but it actually doesn't matter which of the IP addresses you choose. Then, if we open a connection, you can see it's "moved permanently"; that's the redirection to the HTTPS location, so let's follow it. Sometimes it can take a few seconds, or maybe up to a minute or two, until the certificate is issued by the Traefik reverse proxy. Let's just try it again. And oh yeah, now you can see the connection is working, and it should have a trusted TLS certificate. When we open verbose mode, we should see some of the details. You can see that we're using TLS 1.3: the minimum version was configured to 1.2, but as this is an up-to-date client and the reverse proxy is up to date, it automatically uses the highest version possible, and also one of the most secure ciphers that both the server and the client support. You can also see the subject of the certificate, whoami-home.clcreative.de, and this is a trusted TLS certificate that was issued by Let's Encrypt. So the ACME protocol is working, we get real certificates, and the connection from the Traefik reverse proxy running on server one to the application running on server two is also working, because both are connected to the same Docker overlay network. All right, and that is basically how you run this very simple setup that you can use to achieve high availability and load balancing in your home lab, using just a single instance of Traefik in your Docker Swarm cluster.
However, as you told in the beginning of
this video that has a small downside and
this is if we would shut down the server
VM test one where traffic is actually
running. I can just demonstrate this to
you. When I just open an SSH connection
and shut down this server and then we
try to connect again, you can see this
is not working anymore. It's running
into a timeout. How to achieve the
second setup with a shared volume? So
where we store the TLS certificate onto
a separate NFS server and deploy traffic
on all three nodes in the cluster so we
can actively achieve high availability.
All right. So first of all we have to
remove the traffic uh stack. So that
will again not work because there is an
application connected to the proxy
overlay network. So you first of all
need to make sure that all the services
using this proxy network are down. So we
need to remove the who am I stack as
well and only then we can remove the uh
traffic signal node stack so that
there's nothing running anymore on that
uh cluster. All right, let's return to
the boiler plates and let's generate
another uh deployment a file of traffic,
but now let's set the name to global. I
will just rush through the first items
here because we've covered them before.
Just use the same settings. So here
comes the difference. Uh now we need to
switch the placement mode from
replicated that is the default to
global. Again as I have explained this
in the beginning, replicated means it
will uh create a specific number of
replicas in your entire cluster. So it
will try to balance depending on the CPU
and memory load of the swarm nodes on
which node to put the replicated service
on. If you switch to global, it will
always make sure that there is one
instance of the server on each docker
swarm node in the cluster. So if we have
three nodes in the cluster, it also gets
us free traffic instances. So now again
for this we need a shared volume.
Therefore, I added an option here to change the local volume to a mount point. This could be a shared volume using a Ceph-backed mount point or whatever, or you can use NFS. When using NFS, you need to put in the IP address of the NFS server. In my case it is 10.20.0.7, which is, by the way, the Unraid NAS that I'm running in my home lab, and it has a shared volume under /mnt/user/app-testing. Make sure that this directory exists on the NFS server, otherwise it might run into some issues. Then you can also customize the mount options. The API token name, I think, is fine, and authentik we don't need here either.
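For reference, an NFS-backed volume definition in a Swarm compose file can look roughly like this; the IP address is the one from this example, and the export path and volume name are placeholders you should adjust to your own NFS server:

```yaml
volumes:
  traefik-certs:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=10.20.0.7,nfsvers=4,rw"
      device: ":/mnt/user/app-testing"
```

Because the volume definition lives in the stack file, each Swarm node creates it locally on first use, all pointing at the same NFS export.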
All right, so now the files are stored in here.
Let's just take a look at the global directory and open it in VS Code. So again, this is actually very similar to the previous setup, except that it uses the deployment mode global and a configuration for creating an NFS volume inside the Docker Swarm cluster. Ah yeah, okay, it's adding NFS version 4. I haven't made this optional yet, because I think NFS version 4 is actually the most recommended, but all the other settings should be exactly the same. So we can just return here; I think that's correct.
Yes. And then just deploy the Compose stack as traefik-global. Right, okay, so that should work, and of course we also need to redeploy the whoami container. Okay. So let's quickly check if the stacks are up and running. Yes. And if the services are up and running. Yes. So here you can see a difference between the first deployment and the second one.
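The deploy and the checks just mentioned look roughly like this on a manager node (file and stack names are assumptions for illustration):

```shell
# Deploy the global Traefik stack and re-deploy the whoami test app
docker stack deploy -c docker-compose.yml traefik-global
docker stack deploy -c whoami.yml whoami

# Check that the stacks and services are up; a global service
# on a three-node cluster should report 3/3 tasks
docker stack ls
docker service ls
docker service ps traefik-global_traefik
```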
If we take a look here, you can see that there are actually three replicas running: on each of the nodes, one instance of the Traefik container is running. And of course, we can check if we can still connect to it. Again, it probably takes some time until the certificate is issued.
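Checking reachability and the certificate from the command line might look like this (the hostname is a placeholder for your own whoami DNS record):

```shell
# -v prints the TLS handshake details alongside the response status
curl -sv https://whoami.example.com -o /dev/null 2>&1 | grep -E "subject:|issuer:|HTTP/"
```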
So yeah, we got the trusted TLS certificate. So now let me also demonstrate what happens when I shut down the first node. The one problem that we still have is that when we try to connect to this service, although there is a separate Traefik instance running on VM test 2 and 3 and the application container is running, the connection is still initiated to the first IP address only. The reason for this is that I've configured the DNS name for whoami in my local DNS server to only resolve to the first IP address. When I go to my DNS project, you can see that this DNS record still only has just the one IP address. But if we start adding the other IP addresses to it as well, then we still get a connection from at least one of the IP addresses where Traefik is running. This way, you can have a reliable and truly highly available deployment of the Traefik reverse proxy on Docker Swarm. Now, there's still one more thing that I need to address at the end, and these are the final thoughts, or my ideas for future videos. This setup might still introduce some problems: if a client resolves the IP address of a node that is down, it will still try to connect to that IP address. In our case we just picked a different IP address, but if a client did connect to the dead IP address, it would run into a timeout, and it might take some time until the client tries a different IP address. That is still a problem. So although the Traefik reverse proxy setup is truly highly available, our Docker Swarm cluster is not. This requires an additional video about keepalived, because I think otherwise it would be too much for this one tutorial. However, I still hope that
it helped you to get a reliable and truly highly available setup of Traefik running in Docker Swarm. And if you really want to make this production-ready, you can now use an external load balancer that checks if these nodes are up and running and routes the incoming client requests to the online servers, or you use a floating IP address with something like keepalived. But again, we'll cover that in a future video. All right, guys. So now it's your turn: please tell me, did you like this video? Did it help you? And what do you think about the boilerplates CLI tool? By the way, I would really love to hear your opinion about this, and whether you feel this is a promising project that helps you to get up to speed quickly. And then also consider supporting it, or at least give it a star on GitHub. That always helps. And thank you so much for watching. Thanks a lot to the people supporting this project already. And of course, I'm going to catch you in the next video tutorial. So, have a nice one.