This content is a transcript of a technical interview for a DevOps engineer role, focusing on the candidate's experience with DevOps tools, cloud platforms (AWS), CI/CD pipelines, containerization (Docker, Kubernetes), infrastructure as code (Terraform, Ansible), monitoring, and security best practices.
So as I'm looking at your CV now: you've got around 2.8 years of experience, and you've worked quite a fair bit on DevOps technologies and also on AWS. All right, so why don't you start with your experience, what you have done so far?

Yeah, sure. So hello, sir. As I said, my name is Nas. I have been working at DX Technologies for the past 2.8 years as a DevOps engineer. I started my career at DX Technologies as a DevOps engineer, and here I have been dedicated to optimizing and automating the software delivery process on the AWS cloud, using mostly open-source DevOps tools. During my past two and a half years here I have had the opportunity to work on various projects where I have showcased my skills in managing and configuring infrastructure using DevOps tools and AWS cloud services. I have successfully implemented CI/CD pipelines using open-source tools like Jenkins, and I also have a little experience with AWS CodePipeline. Apart from that, I have experience containerizing applications using Docker and managing multiple containers using the Kubernetes orchestration tool, and a little experience with Ansible and Terraform. I have also worked with many of the AWS services, like EC2, VPC, S3, and a few others. And yeah, that's about my professional experience and the skills that I possess right now. Currently I'm looking for an opportunity in another organization where I can grow my skills further. Yep.
Sure. All right. So, Shaik, you said that you have worked on Jenkins for CI/CD. Can you elaborate on a specific project where you used Jenkins to create a CI/CD pipeline?

So in most of my previous projects we have used the Jenkins open-source tool for CI/CD purposes, where we created multi-stage CI/CD pipelines to automate our entire continuous integration and continuous delivery/deployment process by integrating many services into it. That's what we have used Jenkins for.

All right, I would like to know the details of the project. What was the project, what was the domain, what was the business application, what were you doing in that?

So, in my previous projects,
most of them are Java-based. One of my recent projects is SKS, an e-commerce website, where we developed microservices and delivered them through CI/CD pipelines. Since it is a Java-based project, we used the Maven tool, integrated with Jenkins, to build our artifacts: we integrated Maven into the Jenkins pipeline, and after building we used a Nexus repository to store the build artifacts from Maven. We also integrated SonarQube into Jenkins to do the code analysis, so that once the code passes all the quality checks it moves forward; otherwise the pipeline gets aborted and the developer gets a notification that the code needs changes or still has bugs. After that, using the artifacts from Nexus, Docker builds the images, and those are pushed to ECR, which we used for storing images. Then, for continuous deployment, we used Kubernetes, deploying the latest images from ECR via manifest files. That is the process we worked with; it differs from project to project.
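The flow just described could be sketched as a declarative Jenkinsfile along these lines. This is a minimal sketch, not the actual project's pipeline; the stage names, registry URL, and repository paths are hypothetical placeholders.

```groovy
pipeline {
    agent any
    environment {
        // Hypothetical ECR registry/repository
        ECR_REPO = '123456789012.dkr.ecr.us-east-1.amazonaws.com/sks-app'
    }
    stages {
        stage('Build') {
            steps { sh 'mvn clean package' }   // Maven build produces the JAR/WAR
        }
        stage('Code Analysis') {
            // SonarQube scan; a failed quality gate aborts the pipeline
            steps { sh 'mvn sonar:sonar' }
        }
        stage('Publish Artifact') {
            steps { sh 'mvn deploy' }          // push the artifact to Nexus
        }
        stage('Docker Build & Push') {
            steps {
                sh 'docker build -t $ECR_REPO:$BUILD_NUMBER .'
                sh 'docker push $ECR_REPO:$BUILD_NUMBER'
            }
        }
        stage('Deploy') {
            steps { sh 'kubectl apply -f k8s/' }  // apply Kubernetes manifests
        }
    }
}
```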
Okay. Can you walk me through a scenario where you used Docker to troubleshoot a containerized application?

Sure. We have faced issues where we tried to pull images from ECR but were not able to, because the pod was throwing errors like ImagePullBackOff. At that point we need to make sure that all the configurations are right: the image name, the tag we are giving, and that the image actually exists in ECR. We verify all of these, and if the image is there and we are still not able to pull it, then we check the logs to see what the error message is, and depending on that we troubleshoot further.
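The checks described can be run with commands like the following. This is an illustrative sketch; the pod and repository names are placeholders, and it assumes the cluster pulls from ECR.

```shell
# Find pods stuck in ImagePullBackOff and see why
kubectl get pods
kubectl describe pod <pod-name>   # the Events section shows the pull error

# Verify the image and tag actually exist in ECR (repository name is a placeholder)
aws ecr describe-images --repository-name sks-app \
  --query 'imageDetails[].imageTags'

# Review recent cluster events for registry/auth errors
kubectl get events --sort-by=.metadata.creationTimestamp
```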
Okay. Can you tell me about a few error messages that you can recall? What sort of error messages came up when the containerized application had issues?
When we have used Kubernetes to manage containerized applications, we have faced issues like performance symptoms, configuration problems (for example, incorrect environment variables), and CrashLoopBackOff errors, where the container tries to start but keeps failing, Kubernetes restarts it, and it fails again. These are almost daily tasks. To troubleshoot, we list the pods using kubectl get pods, see which pod is stuck in CrashLoopBackOff, check its logs, describe the pod, and exec into the container to check the various logs inside it. We also check the monitoring tools, where we can visualize the logs. By checking all these things we get to know where the exact issue is, and we troubleshoot and find the root cause.
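The CrashLoopBackOff workflow just described maps onto a handful of kubectl commands; a sketch, with the pod name as a placeholder:

```shell
# Locate the failing pod
kubectl get pods                      # look for CrashLoopBackOff in STATUS

# Inspect why it keeps restarting
kubectl describe pod <pod-name>       # events, exit codes, probe failures
kubectl logs <pod-name> --previous    # logs from the last crashed run

# If the container stays up long enough, look around inside it
kubectl exec -it <pod-name> -- sh
```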
Where exactly would you check the Kubernetes logs?

First we check on the virtual machine itself, using commands, to find the error message. We can also check with monitoring tools like Prometheus and Grafana, where we can see the dashboards.
Okay, all right. So in one of the projects you said that you measured a 35% increase in the code delivery rate after implementing the CI/CD pipelines. How do you measure that 35%?

Sorry, can you
please repeat your question?

I think in your CV I can see that you have written that you measured a 35% increase in the code delivery rate. How do you measure that 35? Why not 30, why not 40, why not 50?

It's not like we define the 35% exactly. But at the same time, with the help of some other teams, we can make sure of the time we saved: we optimized delivery using optimization tricks implemented in our CI/CD projects, like tuning resource limits and CPU requests, and reducing image size with techniques like multi-stage builds.
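Multi-stage builds, mentioned above as one of the image-size optimizations, look roughly like this for a Java service. A sketch only; the base images and artifact name are illustrative, not taken from the project.

```dockerfile
# Stage 1: build with the full Maven/JDK toolchain
FROM maven:3.9-eclipse-temurin-17 AS build
WORKDIR /app
COPY . .
RUN mvn clean package -DskipTests

# Stage 2: ship only the JRE and the built artifact,
# leaving the build toolchain out of the final image
FROM eclipse-temurin:17-jre
WORKDIR /app
COPY --from=build /app/target/app.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]
```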
Okay, all right. So when I was looking at this 35, I thought that probably you had looked at some sort of metrics tool. There are many metrics tools which can produce these sorts of numbers, so I was expecting that you might have something like a Jira dashboard report, or some other deployment reports, which you could refer to. But that's all right.

Yeah, we have also used that.

Okay, all right.
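For context on how a figure like "35% increase in delivery rate" could be derived rather than estimated: compare deployment counts per sprint before and after the pipeline change. A minimal sketch with entirely hypothetical numbers (the real figure would come from a metrics tool or deployment reports):

```python
# Hedged sketch: deriving a percent improvement in delivery rate
# from per-sprint deployment counts. All numbers are hypothetical.

def percent_increase(before: float, after: float) -> float:
    """Percent change from a baseline value to a new value."""
    return (after - before) / before * 100.0

# Hypothetical data: changes delivered per two-week sprint,
# before and after introducing the CI/CD pipeline.
deploys_before = [10, 12, 11, 9]   # manual process
deploys_after = [14, 15, 16, 13]   # Jenkins pipeline

avg_before = sum(deploys_before) / len(deploys_before)
avg_after = sum(deploys_after) / len(deploys_after)

print(round(percent_increase(avg_before, avg_after), 1))  # → 38.1
```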
So another number which is fascinating: a 50% reduction in deployment times achieved through Docker, Kubernetes, and Ansible. What do you understand by a reduction in deployment time?

The deployment time depends on the deployment strategy we use to deploy our application into Kubernetes. We are always eager to reduce the downtime that deploying the application to Kubernetes takes, so we have implemented deployment strategies like rolling updates for zero downtime, and I have also used canary deployments to reduce downtime without affecting the application.

Okay. Can you tell me the different
types of deployment strategies?

Deployment strategies include blue-green deployments, rolling updates, canary deployments, and A/B testing. I'm familiar with three of them: blue-green, rolling updates, and canary deployments.

All right, can you tell me the
difference between each one of them?

Yeah, sure. Blue-green deployment is a continuous deployment process used to reduce downtime. Here we create an environment identical to the production one; the one running the latest version is called the green environment, and we gradually route traffic to it. If users don't face any issues, we continue with that version; the old version is kept as a backup (the blue environment), and on the next release the roles switch again. That's blue-green deployment. With rolling updates, the deployment takes place with zero downtime, but the rollout takes some time: the old pods are gradually deleted and new pods are created in their place. In the case of canary deployments, we first release the new version to a small set of users so that the impact is limited, and if that works well, we roll out the version to all users.
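A rolling update, as described, is configured on the Deployment object itself. A minimal sketch; the app name, replica count, and image are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sks-app
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # keep full capacity during rollout (zero downtime)
      maxSurge: 1         # bring up one new pod before removing an old one
  selector:
    matchLabels:
      app: sks-app
  template:
    metadata:
      labels:
        app: sks-app
    spec:
      containers:
        - name: sks-app
          image: <ecr-repo>/sks-app:v2
```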
All right. Now tell me: what is the role of an artifact repository when you're doing deployments? What happens if an artifact repository is missing? Why is it important, and what do you do with it?

It plays a crucial role if you want to deploy your latest version into Kubernetes, because the artifact is what we get from the source code: in the case of a Java-based project, a WAR or JAR artifact gets created by the build, which we store in a Nexus artifact repository and use to create Docker images. If the artifact itself is missing, then the latest version can't be deployed into Kubernetes, so it is important to keep it safe and secure, and to version it in case of rollbacks.
Okay, all right. Now I'll give you a scenario; you tell me how you would troubleshoot it.

Yes, sir.

You have a binary running in production right now. The team which was running all the deployments and all the builds has been completely replaced by new people, and they have no idea what is running in production, including the source code and everything else. You are given the task of identifying which exact source code version is running in production in the binaries. You've got all these containers running: how would you troubleshoot it, how would you trace back to which source code is exactly deployed? What are the steps?

Okay, so — to know which source code is deployed in which container, what is running in production?

Yeah. All you have is the production environment and the set of binaries running in production. How would you trace back to the source
code?

Okay. We have multiple pipelines, and the artifact coming from each service gets stored in a separate repository in a repository manager like Nexus. There we can get to know which source code, from which pipeline, belongs to which service. We have also used different tags for each repository to know which artifact is from which service, and while deploying we had tags like the commit ID. From that we get to know which artifact, from which service, is deployed into production, dev, and test.

Okay. And what exactly are all
the places where you will be looking for the respective tags?

The tagging step is added in the Jenkins pipeline itself: the artifact is stored in Nexus under that tag name, and the tag is also applied to the Docker image that is stored in ECR.
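Tagging images with the commit ID, as described, can be done in the pipeline roughly like this. An illustrative sketch; the registry URL and service name are placeholders:

```shell
# Tag the image with the Git commit that produced it
GIT_SHA=$(git rev-parse --short HEAD)
docker build -t <ecr-repo>/sks-app:$GIT_SHA .
docker push <ecr-repo>/sks-app:$GIT_SHA

# Later, trace a running container back to its source revision:
# the tag on each image is the commit ID to check out
kubectl get pods -o jsonpath='{.items[*].spec.containers[*].image}'
```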
Okay. And about the source code?

The source code we already have in a repository, with separate branches for each service. I think that's the way.

Okay. Tell me about — you said that you have worked on Prometheus?

Yes, yes. I have
monitored using Prometheus and Grafana.

So tell me: how did you configure Prometheus for monitoring?

There are many ways to configure Prometheus and Grafana. The easiest way, which we used in our application, is Helm charts: we deployed them directly into the cluster to monitor our pods and applications with Prometheus and Grafana. In Grafana we set up Prometheus as the data source; Prometheus collects the metrics and logs from our application, and those are fed to Grafana to create dashboards and visualize the data.
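Deploying Prometheus and Grafana via Helm, as described, typically looks like this. The kube-prometheus-stack chart bundles both; the release and namespace names here are arbitrary, and the exact chart used in the project is an assumption.

```shell
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# Installs Prometheus, Grafana, and the standard exporters in one release
helm install monitoring prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace
```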
Okay, all right. Now coming to some of the security measures: how do you ensure security in your DevOps pipelines?

Security is a key concern in CI/CD pipelines, and the images and artifacts that you create are very important. We have employed various security measures in our CI/CD pipeline. Before building the code it undergoes unit testing, and after that passes we use static code analysis (SAST) with SonarQube, where the code is checked and scanned for vulnerabilities; only if it passes does the pipeline move forward, otherwise the pipeline gets aborted and the developer gets to know about the vulnerabilities. When it comes to the Docker image, we have used Trivy to scan the images for vulnerabilities, and in the Docker stage we make sure the container runs as a non-root user, to minimize the damage in case of vulnerabilities, and we expose only the ports that are absolutely necessary for communication. On the same principle, we also implement network policies in Kubernetes that allow only the users and resources that are absolutely necessary for our application's communication.
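The image-hardening measures mentioned (non-root user, minimal exposed ports, Trivy scanning) can be sketched as follows. Illustrative values only; the base image, port, and user name are assumptions.

```dockerfile
FROM eclipse-temurin:17-jre
WORKDIR /app
COPY target/app.jar app.jar

# Run as a dedicated non-root user
RUN useradd --system --no-create-home appuser
USER appuser

# Expose only the single port the service needs
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
```

In the pipeline, the built image would then be scanned before pushing, e.g. `trivy image --exit-code 1 --severity HIGH,CRITICAL <image>`, so that serious findings abort the build.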