YouTube Transcript: 3 Years Experienced DevOps Cloud Engineer Live Interview #devopsinterview #devopsengineer #devops | DevOps Cloud and AI Labs
Summary
Core Theme
This content is a transcript of a technical interview for a DevOps engineer role, focusing on the candidate's experience with DevOps tools, cloud platforms (AWS), CI/CD pipelines, containerization (Docker, Kubernetes), infrastructure as code (Terraform, Ansible), monitoring, and security best practices.
Transcript
So as I'm looking at your CV now, you've got around 2.8 years of experience, and you've worked quite a fair bit on DevOps technologies and also on AWS. All right, so why don't you start with your experience, what you have done so far.

Yeah, sure. So hello sir, as I said, my name is Nas. I have been working at DX Technologies for the past 2.8 years as a DevOps engineer; I started my career there as a DevOps engineer. Here I have been dedicated to optimizing and automating the software delivery process on the AWS cloud, using mostly open-source DevOps tools. During my past two and a half years here I have had the opportunity to work on various projects where I have showcased my skills in managing and configuring infrastructure using DevOps tools and AWS cloud services. I have successfully implemented CI/CD pipelines using open-source tools like Jenkins, and I also have a little experience with AWS CodePipeline. Apart from that, I have experience containerizing applications using Docker and managing multiple containers using the Kubernetes orchestration tool, and a little experience with Ansible and Terraform. I also have experience working with many of the AWS services like EC2, VPC, S3, and a few others. Yeah, that's about my professional experience and the skills that I possess right now. Currently I'm looking for an opportunity in another organization where I can grow my skills further.

Yep, sure. All right, so Shaik, you said that you have worked on Jenkins for CI/CD. Can you elaborate on a specific project where you used Jenkins to create a CI/CD pipeline?
So in most of my previous projects we have used the Jenkins open-source tool for CI/CD purposes, where we created multi-stage CI/CD pipelines to automate our entire continuous integration and continuous delivery or deployment process by integrating many services into it. That's what we have used Jenkins for.

All right, I would like to know the details of the project: what was the project, what was the domain, what was the business application, what were you doing in that?

In my previous projects, most of them are Java-based. One of my recent projects is SKS, an e-commerce website where we developed that microservices project using CI/CD pipelines. Since it is a Java-based project, we used the Maven tool integrated with Jenkins to build our artifacts; we integrated Maven into the Jenkins pipeline. After building, we used a Nexus repository to store the build artifacts from Maven, and we also integrated SonarQube into Jenkins to do the code analysis, so once the code passes all the quality checks it moves forward, or else the pipeline gets aborted and the developer gets a notification that the code needs changes or still has modifications or bugs to address. After that, using the artifacts from Nexus, Docker builds the images, and those are pushed to ECR, which we used for storing the images. Then for continuous deployment we used Kubernetes, deploying the latest images from ECR to Kubernetes using the manifest files. That's the process we worked with; it differs from project to project.
Okay, can you walk me through a scenario where you used Docker to troubleshoot a containerized application?

Sure. We have had issues where we tried to pull some images from ECR but were not able to, because it was throwing errors like ImagePullBackOff and that kind of thing. At that time we need to make sure that all the configurations are right: the image name, the tags that we are giving, and that the image actually exists in ECR. We check all these things, and if the image is still there and we are still not able to pull it, then we check the logs to see what the error message is, and depending on that we troubleshoot further.

Okay, can you tell me about a few error messages that you can recall? What sort of error messages came up when the containerized application had issues?

For containerized applications, when we used Kubernetes to manage the containerized applications, we faced issues like performance symptoms, configuration issues such as environment variables not being correct, and CrashLoopBackOff issues, where the container tries to start but keeps failing, Kubernetes tries to restart it again, and it still fails. We faced that kind of issue almost daily. To troubleshoot it, we find the pods using kubectl get pods, see which pod is in CrashLoopBackOff, check its logs, describe the pod, and exec into the container to check the various logs inside it. We also check the monitoring tools where we visualize the data, to see the visualization of these logs. Checking all these things we get to know where the exact issue is, and we troubleshoot and find the root cause.

Where exactly would you check the Kubernetes logs?

First we check on the virtual machine itself, using the commands, and find the error message; we can also check using monitoring tools like Prometheus and Grafana, where we can see the dashboards.
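For reference, a minimal sketch of that triage loop, assuming kubectl is already configured against the cluster; the namespace is a placeholder, and the checks mirror the steps described above (find failing pods, describe them, read their logs):

```python
import json
import subprocess

NAMESPACE = "default"  # hypothetical namespace; point this at your own

def run(cmd):
    """Run a command and return whatever it printed (stdout, or stderr on failure)."""
    p = subprocess.run(cmd, capture_output=True, text=True)
    return p.stdout if p.returncode == 0 else p.stderr

# List pods as JSON so the status fields can be inspected programmatically.
pods = json.loads(
    subprocess.run(
        ["kubectl", "get", "pods", "-n", NAMESPACE, "-o", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
)

for pod in pods["items"]:
    name = pod["metadata"]["name"]
    for cs in pod["status"].get("containerStatuses", []):
        waiting = cs.get("state", {}).get("waiting") or {}
        # The failure modes mentioned above: image pulls failing or the container crash-looping.
        if waiting.get("reason") in ("ImagePullBackOff", "ErrImagePull", "CrashLoopBackOff"):
            print(f"{name}: {waiting['reason']}")
            # Same triage as described: describe the pod, then read its recent logs.
            print(run(["kubectl", "describe", "pod", name, "-n", NAMESPACE]))
            print(run(["kubectl", "logs", name, "-n", NAMESPACE, "--tail=50"]))
```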
Okay, all right. So in one of the projects you said that you measured a 35% increase in the code delivery rate after implementing the CI/CD pipelines. How do you measure that 35%?

Sorry, can you please repeat your question?

I think in your CV I can see that you have written that you measured a 35% increase in the code delivery rate. How do you measure that 35? Why not 30, why not 40, why not 50?

It's not like we define the 35% exactly, but with the help of some other teams we can make sure of the time we saved; we optimized the delivery rate using some optimization tricks that we implemented in our CI/CD projects, by tuning resource limits and CPUs and optimizing the image size with techniques like multi-stage builds. So we employed some tricks of that kind.

Okay, all right. So when I was looking at this 35, I thought that you had probably looked at some sort of metrics tool; there are many metrics tools which can produce these sorts of numbers. So I was expecting that you might have something like a Jira dashboard report or some other deployment reports which you could refer to, but that's all right.

Yeah, we have also used Jira.
Okay, all right. So another number which is fascinating: a 50% reduction in deployment times achieved through Docker, Kubernetes, and Ansible. What do you understand by a reduction in deployment time?

The deployment time depends on the deployment strategy that we apply to deploy our application into Kubernetes. The main thing is that we are always eager to reduce the downtime that the deployment of the application to Kubernetes takes, so we implemented the best deployment strategies, like rolling updates for zero downtime, and I also used canary deployments to reduce the downtime without any effect on the application.

Okay. Can you tell me different types of deployment strategies?

Deployment strategies like blue-green deployment, rolling updates, canary deployments, A/B testing; I'm familiar with these three: blue-green, rolling updates, and canary deployment.

All right, can you tell me the difference between each one of them?

Yeah, sure. Blue-green deployment is a continuous deployment process used to reduce downtime. Here we create an environment identical to your production one with the latest version; that is called the green environment. Then we gradually route the traffic to the green environment, which has the latest version. If users don't face any issues we continue with that version, and the old version, the blue environment, is kept as a backup; the new environment in turn becomes the blue environment. That's blue-green deployment. When it comes to rolling updates, the deployment happens with zero downtime, but it takes time to roll the deployment out across the application; here the old pods are gradually deleted and the new pods get created. In the case of canary deployments, first we release the new version to some set of users so that the impact is less, and if that portion of users is fine with it, we roll out the version to all of the users.
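To illustrate the rolling-update mechanics described above, here is a minimal sketch using the Kubernetes Python client; the deployment, namespace, container, and image names are hypothetical, and it assumes a local kubeconfig:

```python
from kubernetes import client, config

# Hypothetical names used for illustration; the container name must match the one
# declared in the Deployment's pod template.
DEPLOYMENT = "ecommerce-web"
NAMESPACE = "prod"
NEW_IMAGE = "123456789012.dkr.ecr.us-east-1.amazonaws.com/ecommerce-web:v2.1.0"

config.load_kube_config()  # use the local kubeconfig / current context
apps = client.AppsV1Api()

# Patching the pod template's image triggers Kubernetes' built-in rolling update:
# old pods are replaced gradually by new ones, so the service keeps serving traffic.
patch = {
    "spec": {
        "template": {
            "spec": {"containers": [{"name": "web", "image": NEW_IMAGE}]}
        }
    }
}
apps.patch_namespaced_deployment(name=DEPLOYMENT, namespace=NAMESPACE, body=patch)
```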
All right. Now tell me, what is the role of an artifact repository when you're doing the deployments? What happens if an artifact repository is missing, why is it important, and what do you do with it?

It plays a crucial role if you want to deploy your latest version into Kubernetes, because the artifact is what we get from the source code; in the case of a Java-based project, a WAR or JAR artifact gets created from the build, and we store it in a Nexus artifact repository and use it to create the Docker images. If the artifact itself is missing, then the latest version can't be deployed into Kubernetes, so it is important to keep it safe and secure, and to version it in case of rollbacks as well.

Okay, all right.
Now I'll give you a scenario; you tell me how you would troubleshoot it.

Yes sir.

You have a binary running in production right now. The team which was running all the deployments, all the builds and everything, has been completely replaced by new guys, and they have no idea as to what is running in production, including your source code and everything else. You are given the task to identify which exact source code version is running in production in the binaries. You've got all these containers which are running; how would you troubleshoot, or how would you trace the route back to which source code is exactly deployed? What are the steps?

Okay, so to know which source code is deployed in which container, what is running in production?

Yeah. All you have is a production environment and the set of binaries which are running in production. How would you trace back to the source code?

Okay. We have multiple pipelines, so each artifact coming from each service gets stored in a separate repository in a repository manager like Nexus. There itself we can get to know which source code is from which pipeline and which service. We have used different tags for each repo to know which artifact is from which service, and while deploying we also had tags like the commit ID, so from that we get to know which artifact is from which service and which is deployed into production, dev, and test.

Okay, and what exactly are all the places where you would be looking for the respective tags?

The tagging step is added in the Jenkins pipeline itself; the artifact is stored in Nexus with that tag name, and the tag is also given to the Docker image that is stored in ECR.

Okay, and what about the source code?

For the source code we have a repository with separate branches for each service, I think; so that's the way.
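As a sketch of that trace-back, assuming images really are tagged with the git commit SHA as described, a boto3 lookup against ECR ties a running image tag back to its digest and push time; the repository name and tag are placeholders:

```python
import boto3

# Hypothetical repository name and tag; assumes the pipeline tags each image
# with the git commit SHA, as described above.
REPOSITORY = "ecommerce-web"
RUNNING_TAG = "3f9c2ab"  # e.g. read from the Deployment's image reference

ecr = boto3.client("ecr", region_name="us-east-1")

# Look the tag up in ECR: the matching image's tags and digest tie the running
# container back to the exact commit (and therefore the source branch) it was built from.
resp = ecr.describe_images(
    repositoryName=REPOSITORY,
    imageIds=[{"imageTag": RUNNING_TAG}],
)
for detail in resp["imageDetails"]:
    print("digest:", detail["imageDigest"])
    print("tags:  ", detail.get("imageTags", []))
    print("pushed:", detail["imagePushedAt"])
```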
Okay, tell me: you said that you have worked on Prometheus?

Yes, yes, I have monitored using Prometheus and Grafana.

So tell me how you configured Prometheus for the monitoring.

There are many ways to configure Prometheus and Grafana. The easiest way, which we used in our application, is using Helm charts: we directly deployed them into the clusters to monitor our pods and applications with Prometheus and Grafana. In Grafana we set up the data source as Prometheus, which collects the metrics and logs from our application, and that is given to Grafana to create dashboards and to see the visualized dashboards of the logs.
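A minimal sketch of that Helm-based setup, using the public prometheus-community/kube-prometheus-stack chart; it assumes helm and kubectl are already pointed at the cluster, and the release and namespace names are placeholders:

```python
import subprocess

# Add the community chart repository, then install (or upgrade) the stack, which
# bundles Prometheus and Grafana, into a dedicated "monitoring" namespace.
subprocess.run(
    ["helm", "repo", "add", "prometheus-community",
     "https://prometheus-community.github.io/helm-charts"],
    check=True,
)
subprocess.run(["helm", "repo", "update"], check=True)
subprocess.run(
    ["helm", "upgrade", "--install", "monitoring",
     "prometheus-community/kube-prometheus-stack",
     "--namespace", "monitoring", "--create-namespace"],
    check=True,
)
```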
Okay, all right. Now coming to some of the security measures: how do you ensure security in your DevOps pipelines?

Security of the pipeline, and of the images and artifacts that you create, is very important, so we employed various security measures in our CI/CD pipeline. Before building the code we undergo some unit testing, and after that passes we also use static code analysis with SonarQube, where the code is checked and scanned for vulnerabilities; only if it passes does the pipeline move forward, or else the pipeline gets aborted and the developer gets to know about the vulnerabilities. When it comes to the Docker image, we have used Trivy to scan the Docker images for vulnerabilities, and in the Docker stage we make sure it is running as a non-root user to minimize the problems in case of vulnerabilities, and we only expose the ports which are absolutely necessary for the communication. We also implement network policies in Kubernetes so that only the resources and users that are absolutely necessary for the communication of our application are allowed. To monitor the whole application we have also employed OWASP checks. These are the practices we have used in our application as security best practices.
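As an illustration of the image-scanning gate, a minimal pipeline step that runs Trivy and aborts on HIGH or CRITICAL findings; it assumes the Trivy CLI is installed on the build agent, and the image reference is a placeholder:

```python
import subprocess
import sys

# Hypothetical image reference; in the pipeline this would be the image just built.
IMAGE = "123456789012.dkr.ecr.us-east-1.amazonaws.com/ecommerce-web:3f9c2ab"

# "--exit-code 1" makes trivy fail the step when HIGH/CRITICAL findings exist,
# which is what aborts the pipeline before the image is pushed or deployed.
result = subprocess.run(
    ["trivy", "image", "--severity", "HIGH,CRITICAL", "--exit-code", "1", IMAGE]
)
if result.returncode != 0:
    print("Vulnerabilities found: aborting the pipeline and notifying the developer.")
    sys.exit(1)
```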
Okay. Tell me, if you are to secure a Docker container which is highly sensitive for an application, how do you visualize the traffic coming from the external internet reaching your Docker container? Where will you place your Docker container, and what are the different layers of security that the traffic should pass through in order to reach your container?

We'll deploy our containers, based on the communication that is needed, into different kinds of Docker networks, like the Docker host network, the Docker overlay network, and the Docker bridge network, which is the default. If external communication is needed and your container needs to communicate with various other containers on other Docker hosts, we employ the overlay network; this is for complex multi-host Docker container setups, and here the containers will communicate with containers on other Docker hosts. If you just need Docker containers to communicate within only your one Docker host, then we employ the Docker bridge network, which is always the default one. Based on this, the containers will communicate. As for security best practices, we can use Docker secrets to store our sensitive information. Those are the security best practices when it comes to the container.
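A minimal sketch of that container placement using the Docker SDK for Python; the image name is a placeholder, it assumes a local Docker daemon, and the non-root user must already exist in the image:

```python
import docker

client = docker.from_env()  # talks to the local Docker daemon

# A bridge network scoped to this application: only containers attached to it can
# reach each other, instead of everything sharing the default bridge.
client.networks.create("app-net", driver="bridge")

# Hypothetical image name; expose only the single port that is actually needed.
container = client.containers.run(
    "ecommerce-web:latest",
    detach=True,
    network="app-net",
    ports={"8443/tcp": 8443},  # publish just one port to the host
    user="appuser",            # run as a non-root user, as described above
    read_only=True,            # read-only root filesystem as an extra hardening step
)
print(container.short_id)
```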
Okay, all right. So you said that you have worked on microservices; you have deployed all these different services and monitoring capabilities. Have you ever worked on a monolithic architecture?

Initially, when I started my career, I worked on a monolithic one.

Okay, can you tell me the difference between a monolithic and a microservice?

Sure. Microservices is a design approach where we split a single service into a set of services so that the operational load is reduced and many other benefits are gained; the operational load is considerably reduced if you employ microservices, due to its simpler architecture, though it takes more deployment time. In the case of monolithic, if you want to scale down or scale up, the entire application needs to be scaled up or down, but in the case of microservices, whatever service needs to scale, you can scale it independently of the other services. So comparing monolithic and microservices, microservices has many advantages over it.

And is there any trade-off that you can think of when you are converting a monolithic application into microservices? Something which... when I say a trade-off, something which was better in monolithic?

Yeah, simple deployment is there in monolithic, whereas a more complex deployment is there with microservices. That's the only thing that I can identify.

Okay, all right. Tell me the difference between a stateful and a stateless application.
Yeah, sure. Stateful applications are where a StatefulSet is used: it is for applications where you need unique network identities and data consistency with persistent storage. In those cases we use stateful applications, and this is best suited for databases and any other applications where data consistency and a unique network identity are key. Here, scaling and updates happen only in an ordered, scheduled manner. When it comes to a stateless application, a Deployment is the kind of object used: if you need scalable, deployable, and replaceable containers, then it is the best choice. Here rolling updates and rollbacks are possible; you just need to give the desired state of your application, and Kubernetes will work on your cluster to maintain that desired state.
Okay, have you heard of this word, immutable?

Yes, unchangeable.

Can you tell me the concept of immutable infrastructure? What does it mean?

Immutable infrastructure is something that you cannot change once you have created it.

What's the benefit of it?

I don't think so... I don't know, I haven't had experience with that.

That's all right.
Okay, have you ever experienced a major service outage in any application?

Yeah, recently we had a critical incident, with users saying that the whole application had crashed because of the latest version. When we saw that, we quickly implemented our incident rollback plan, where we rolled back to the stable previous version to restore the application service. Then, working with the other teams, we found the root cause: there were database misconfigurations in the latest version. After that, the corresponding team fixed the database configuration setup, we tested it multiple times in a staging environment, and then we deployed it into production again without any issues.

Okay.

Yeah, that's one issue that I have faced.
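A minimal sketch of that rollback step, assuming the service runs as a Kubernetes Deployment and kubectl is configured; the deployment and namespace names are placeholders:

```python
import subprocess

# Hypothetical deployment and namespace names.
DEPLOYMENT = "ecommerce-web"
NAMESPACE = "prod"

# Roll back to the previously deployed revision, then wait until the rollback
# has fully rolled out before declaring the service restored.
subprocess.run(
    ["kubectl", "rollout", "undo", f"deployment/{DEPLOYMENT}", "-n", NAMESPACE],
    check=True,
)
subprocess.run(
    ["kubectl", "rollout", "status", f"deployment/{DEPLOYMENT}", "-n", NAMESPACE,
     "--timeout=5m"],
    check=True,
)
```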
So can you tell me the difference between these two words, RTO and RPO?

Sorry?

Have you heard of RTO and RPO?

No, I think... in terms of?

In terms of your disaster recovery and resiliency.

No, I don't have it.

All right. So RTO is Recovery Time Objective and RPO is Recovery Point Objective; maybe you can just go and read about them.

Yeah, all right.

Have you heard of this term, chaos engineering?

I have heard about it, but I don't have much on it; I think I need to look into that.

Okay, no problem. Have you heard about GitOps?

Yes.

Explain GitOps to me.

GitOps... it is a similar tool to Jenkins, where you can implement your pipelines and manage your project. I'm not... I don't have experience working with GitOps; so far I have used only Jenkins as an open-source CI/CD tool.

Yeah, I know that, Jenkins. That's all right.
Okay, all right. In terms of security, what do you understand by a threat?

Threads are like processes which will...

No, not threads; threat.

Okay, threat. A threat is some kind of issue that will crash your application, and vulnerabilities, which come from attackers, which you can resolve using some kind of firewall; like we have AWS Shield, we can make use of firewalls to protect your application from these kinds of things.

Yeah, I'm not sure if you're really clear on these, but okay.
Explain to me the role of identity and access management.

Yeah, Identity and Access Management is basically a service from AWS which we use to manage users and their permissions, users and groups and their permissions, because various users need varying levels of access to the services. So we make use of this IAM service in AWS: using it we can create users, manage users, create groups, create policies, and manage the users' credentials.

Can you tell me the difference between a group and a role?

A role is an IAM entity that defines a set of permissions for making AWS service requests, whereas a group is a combination of a few users... what to say... like supplemental things which you can group and give to a group of users, so that the whole group of users can access those particular things.

Okay, can you explain roles a bit more in detail? What would you use a role for?

To give access to another service. For example, there is an S3 bucket and I want to access it from another AWS account; in that case we will create a role, give it some permission policies like GetObject and ListObjects, and then assign it to the IAM user so that the user can access the contents of the bucket from the other account.
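To illustrate the cross-account pattern just described, a boto3 sketch that assumes the role in the bucket-owning account and reads with the temporary credentials; the role ARN and bucket name are hypothetical:

```python
import boto3

# Hypothetical ARNs/names: a role in the account that owns the bucket, trusted by
# this account, with s3:GetObject / s3:ListBucket permissions attached.
ROLE_ARN = "arn:aws:iam::111122223333:role/cross-account-s3-read"
BUCKET = "example-shared-bucket"

sts = boto3.client("sts")
creds = sts.assume_role(RoleArn=ROLE_ARN, RoleSessionName="s3-read-session")["Credentials"]

# Use the temporary credentials returned by AssumeRole instead of long-lived keys.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
for obj in s3.list_objects_v2(Bucket=BUCKET).get("Contents", []):
    print(obj["Key"])
```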
So is such a role limited only to AWS services, or can you also give a role to a manual console user?

Mm, yeah, we can give it; IAM roles can be given to services.
Okay, all right. So let's say that you have some sensitive data that you want to store. Tell me, what are the different ways in which you can protect your sensitive data?

In the case of any particular tool, or for the entire application? So in AWS we have Secrets Manager to store our sensitive data; in Kubernetes we have Kubernetes Secrets; in Docker we have Docker secrets; in Ansible we have Ansible Vault. Like that, in every tool we have a secrets mechanism where we can store sensitive data without exposing sensitive information like passwords and access keys in our main code. So we put this sensitive information into our secrets.

Okay, so secrets is one way. What are the others?

Environment variables are another way where we can store the sensitive information without exposing it.
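As a sketch of the Secrets Manager approach, a boto3 call that fetches a secret at runtime instead of hard-coding it; the secret name and its JSON fields are hypothetical:

```python
import json
import boto3

# Hypothetical secret name; the value is fetched at runtime instead of being
# hard-coded in the application code or the pipeline.
SECRET_NAME = "prod/ecommerce/db-credentials"

sm = boto3.client("secretsmanager", region_name="us-east-1")
secret = json.loads(sm.get_secret_value(SecretId=SECRET_NAME)["SecretString"])

# Only non-sensitive fields are printed here; passwords stay out of logs.
print("db user:", secret["username"])
```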
Okay, and how would you protect the information? So far you have told me only one, which is using secrets. What are the others you would use?

That's it; that's what I know and have used in my previous roles, the secrets.

You told me about these things in your answers previously; you need to summarize the different security tools or practices that you can use in order to protect sensitive data.

So to protect sensitive data... we have used secrets only... I'm not recalling it.

Encryption?

Yes, encryption, we can encrypt the data. We have TLS, we have certificate bundles in AWS, TLS and SSL, which we use to encrypt. In S3 we have default encryption options like server-side encryption, server-side encryption with KMS, and dual-layer server-side encryption; we have encryption options like that, so we can encrypt the data.
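A minimal sketch of the SSE-KMS option mentioned here, uploading an object with server-side encryption under a KMS key; the bucket, key, and KMS alias are placeholders:

```python
import boto3

# Hypothetical bucket, object key, and KMS key alias.
BUCKET = "example-sensitive-data"
KMS_KEY = "alias/example-data-key"

s3 = boto3.client("s3")

# Server-side encryption with a KMS-managed key (SSE-KMS): S3 encrypts the object
# at rest and KMS manages the key on our behalf.
s3.put_object(
    Bucket=BUCKET,
    Key="reports/2024-01.csv",
    Body=b"example,payload\n",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId=KMS_KEY,
)
```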
Okay, can you tell me about CKMS?

KMS is used to manage your encryption keys on your behalf, so it will rotate your keys. CKMS, I think, does the same work, but I have not used it so far, so I don't have much on it.

What does the C stand for in CKMS?

Not sure.

That's all right.
Okay, explain to me the concept of zero trust security. What does it mean?

Zero trust security... from the name I can tell that a high level of security can be implemented with this zero trust.

Yeah, the name says it a little bit. All right, so it's all to do with micro-segmentation of networks, authentication and authorization, monitoring and threat detection, and least-privilege access. So all right, you can go and have a read about it. Next question.
All right, tell me some of the common cloud misconfigurations that are made, which can be prevented easily.

Like, in recent... recent changes?

No, in the cloud. In the cloud there are lots of different configurations which can be misconfigured, and that would make your application very insecure, right? So I'll give you an example: publicly exposed storage buckets; that is one common misconfiguration. What I'm trying to get at with this question is still about security, but the specific things. If you have an application... all right, so let's say you have a scenario where somebody is asking you to look at an existing application which is deployed, and asking you, okay Shaik, can you please confirm that this application conforms to all the different security requirements and is really secure? What are the things that you would go and look into in that application?

Okay, I'll check the various security measures that it has followed so far. Like, whether it is under a VPC, and whether the various instances are deployed into public or private subnets; I will check that. After that I'll check the various security groups, which ports this application is listening to, and the NAT gateways, internet gateways, and routing policies. I will check these basic things first, and then the firewalls and the other security measures. Coming to the CI/CD pipelines, I'll check what it is doing in terms of checking the code quality, and I'll also make sure that it is following vulnerability checks for the Docker images, using Trivy or whatever other tool it is using. Then the various network policies that it uses, and the Kubernetes services it uses to expose the application. I'll check all these things to figure out the security practices that were followed earlier and whether they are best practices or not; then we can decide.

Okay, all right.
Now, have you heard of this term, distributed denial of service?

No, sorry... distributed... no, I didn't.

Okay. Tell me about... so you used S3 before, right? So tell me about the S3 lifecycle policies; what do you understand by that?

A lifecycle policy is a feature in S3 with which you can transition your data from one storage class to another storage class. It is a process of automating the movement of your objects from one storage class to another; it is basically used to reduce your cost, so that objects you are rarely accessing can be moved to a cheaper storage class to optimize the cost. We have S3 Standard, S3 Standard-IA, S3 One Zone-IA, S3 Intelligent-Tiering, and S3 Glacier, and we move our objects from one storage class to another by automating it with the lifecycle rules in S3.
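As an illustration of those lifecycle rules, a boto3 sketch that transitions objects to Standard-IA after 30 days and Glacier after 90, then expires them after a year; the bucket name, prefix, and day counts are placeholders:

```python
import boto3

BUCKET = "example-archive-bucket"  # hypothetical bucket name

s3 = boto3.client("s3")

# Transition rarely accessed objects to cheaper storage classes automatically,
# which is the cost-optimization use of lifecycle rules described above.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-objects",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```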
Okay. Now let's say you have a scenario, and I want you to actually problem-solve this. You have a customer who wants to place a file somewhere, okay? I want you to design the solution and tell me about it.

Sure.

He wants to place a file somewhere; it's an image file. So what he has done is he has just placed the file. Now, from your end, you need to develop a system which can take that file as soon as it is placed, collate all the different files together, stitch them up, make a video, and store it again somewhere else, right? The customer will keep on pumping in all the different files, let's say into an S3 bucket. So I need you to design a solution where all these different files are put together, automatically made into a video, and then placed somewhere else. How would you design the solution? What are the different AWS services that you will use? What attributes would you put on... sorry, listen to the question: what are the different services that you will use, what attributes will you put on your S3 bucket, how will you make it secure so that only the customer can put files into it, and how would you design a solution where all these things happen afterwards?

So this will be like an entire mini project where we design the architecture. Whenever he uploads something, we will automate this process. First we'll create an S3 bucket, and we'll make it private so that only the customer can access it and upload the files. Then, after creating the video, we'll store it in some kind of data store, somewhere like S3 only, and this entire thing can be put under some VPC, and we will protect it. To convert it, we need to use some kind of third-party application to convert all the images into a video and then put it into S3.

Can you do all of this within AWS itself?

Yeah, I think most of the things can be done with AWS itself, because...

No, no, I'm asking you the names of the services.

Okay, okay.
For the image collection, S3: the customer will upload the files, the images, into the S3 bucket.

Okay, now let's drill down a little bit deeper into it. The customer uploads to S3, and you said that you have provided private access, yeah? Explain me that private access. Private access to this bucket, yes? So if it is an S3 private bucket and the customer is uploading the file into it, how will the customer get access to that bucket?

We'll give the customer access to the S3 bucket with bucket policies, by creating a bucket policy like a GetObject or PutObject permission policy on that bucket, and assign an IAM role to that user so that the user can put objects into that S3 bucket.
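A minimal sketch of that private-bucket setup with boto3: block public access and attach a bucket policy that lets only the customer's role upload; the bucket name, role ARN, and prefix are hypothetical:

```python
import json
import boto3

# Hypothetical bucket name and customer role ARN.
BUCKET = "example-customer-uploads"
CUSTOMER_ROLE_ARN = "arn:aws:iam::444455556666:role/customer-uploader"

s3 = boto3.client("s3")

# Keep the bucket private: block every form of public access.
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Bucket policy that lets only the customer's assumed role upload objects.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCustomerUploadsOnly",
            "Effect": "Allow",
            "Principal": {"AWS": CUSTOMER_ROLE_ARN},
            "Action": ["s3:PutObject"],
            "Resource": f"arn:aws:s3:::{BUCKET}/uploads/*",
        }
    ],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```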
Right, so once the file is available, using the role the customer can put up a file. Once the file is available, what happens next?

So then we'll need to use some kind of AWS service; I don't know whether we have one, or a third-party application service...

No, you can use any service.

Which one do we have... Transcoder, I guess, which will transform media files into different versions...

Not media. When a file comes into the S3 bucket, which service will determine that something has come up?

A Lambda function we can use. The event will trigger the Lambda function, and I think it will convert all the images into some kind of video. I don't know whether it's possible, but a Lambda function can do this work: based on the events, it will run the function code and do the conversion. Yes, I think it's possible, because in Lambda we can write the code, and that code will convert these images into the video, and then it can be published or put somewhere.
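To illustrate the event-driven trigger described here, a sketch of a Lambda handler wired to S3 ObjectCreated notifications; the output bucket is a placeholder, and the actual image-to-video stitching (for example a container job or AWS Elemental MediaConvert) is out of scope:

```python
import urllib.parse
import boto3

s3 = boto3.client("s3")

# Hypothetical output location for the finished video.
OUTPUT_BUCKET = "example-video-output"

def handler(event, context):
    """Triggered by an S3 ObjectCreated event when the customer uploads an image.

    The event carries the bucket and key of the new object; from there the function
    can collect the uploaded frames and hand them to whatever stitching step is used.
    """
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        print(f"new upload: s3://{bucket}/{key}")

        # List everything uploaded so far under the same prefix, so the frames can
        # be stitched together in order once enough of them have arrived.
        prefix = key.rsplit("/", 1)[0] + "/" if "/" in key else ""
        frames = s3.list_objects_v2(Bucket=bucket, Prefix=prefix).get("Contents", [])
        print(f"{len(frames)} objects collected under '{prefix}'")

    # The assembled video would then be written to OUTPUT_BUCKET by the stitching job.
    return {"status": "ok"}
```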
All right, I think I can go on and on with my questions, but I think we are just running short of time. So yeah, all right Shaik, I think we'll just conclude the interview here, and I'll give you my feedback. Thanks for attending it today, and yeah, wish you all the best.