0:22 hello and welcome to the CompTIA Storage+
0:25 course in this module we will look
0:27 at the syllabus of the CompTIA Storage+
0:30 course and then we will look at the
0:33 CompTIA Storage+ exam
0:35 overview this course is based on the
0:37 objectives of the CompTIA Storage+
0:44 SG0-001 exam businesses' digital data is growing
0:46 at an exponential rate so it becomes
0:48 necessary to efficiently store and
0:50 manage this data but there will be a
0:53 shortage of IT staff with storage
0:55 administration skills CompTIA Storage+
0:58 certification will help IT
1:00 professionals be prepared to work with
1:02 multivendor technologies that support
1:04 storage now let's talk about the modules
1:06 that we're going to cover in this course
1:09 the modules are mechanical disk drives
1:13 solid-state storage storage arrays RAID
1:16 Fibre Channel SAN IP SAN converged
1:19 networking replication backup and
1:22 recovery storage management storage
1:24 performance and
1:26 troubleshooting in the module titled
1:28 mechanical disk drives we will introduce
1:30 you to the hard disk drive
1:32 and then we will talk about the hard
1:34 disk drive interfaces and
1:36 protocols we'll also talk about the
1:38 geometry and characteristics of the hard
1:41 drive in the solid-state storage module
1:43 we will talk about solid-state storage
1:46 such as flash memory in the storage
1:48 arrays module we will introduce you to
1:50 storage arrays and then we will talk
1:52 about the storage array architecture in
1:54 the RAID module we will introduce you to
1:56 RAID and then we will discuss RAID
1:59 organization in the Fibre Channel SAN
2:01 module we will go in depth into the
2:03 Fibre Channel SAN we will start off the
2:05 module by introducing you to storage
2:07 area networking and then we will talk
2:09 about the Fibre Channel architecture
2:11 we'll also cover the components of a Fibre
2:13 Channel SAN and the topologies that are
2:16 involved we will also talk about the
2:17 characteristics of a Fibre Channel
2:20 switch followed by a discussion on N_Port
2:24 ID Virtualization in the IP SAN module
2:27 we will introduce you to IP storage area
2:30 networking with a focus on iSCSI SAN
2:32 in the converged networking module we
2:33 will introduce you to converged
2:35 networking which combines Ethernet
2:38 technology and Fibre Channel technology
2:40 with specific coverage of Fibre
2:43 Channel over Ethernet or FCoE in the
2:45 same module we will also focus on
2:47 data center operations in the
2:49 replication module we will introduce you
2:52 to replication technologies and then we
2:54 will discuss the types of replication in
2:56 the backup and recovery module we will
2:58 introduce you to backup and recovery
3:01 processes methods and implementation we
3:03 will also talk about backup targets and
3:05 in the same module we will also talk
3:07 about content-addressable storage and
3:09 archives in the storage management
3:11 module we will talk about capacity
3:13 optimization methods LUN provisioning
3:16 techniques storage virtualization and
3:19 then we will talk about monitoring and
3:22 alerting in the storage performance
3:24 module we will discuss how latency and
3:26 throughput affect storage
3:28 performance in the last module called
3:30 troubleshooting we will explain how to
3:32 troubleshoot common Fibre Channel
3:35 problems and LAN problems now we will
3:37 take a look at the certification exam
3:39 overview the certification exam consists
3:42 of 100 theory questions and you have to
3:44 complete the exam in 90
3:47 minutes the passing score is 720 out of
3:52 a thousand and the exam code is
3:55 SG0-001 you can take the exam at
3:56 pearsonvue.com
3:59 that brings us to the end of this module
4:02 in this module we looked at the syllabus
4:04 of the CompTIA Storage+ course and
4:06 then we looked at the exam overview of
4:09 the CompTIA Storage+ exam in the next
4:11 module we will learn about hard disk
4:15 drives so let's get started
4:36 hello and welcome to unit 1 introduction
4:38 to the disk
4:40 drive in this lesson you will learn
4:43 about the disk drive and specifically you
4:46 will learn about hard disk drive
4:48 technologies we're going to start this
4:52 lesson by looking at what a disk drive is
4:53 and then we will take a look at the
4:55 hard disk
4:57 drive we will then briefly talk about
5:00 the history of the hard disk drive
5:02 the next thing we will do is take a
5:03 detailed look at the mechanical
5:05 components of the hard disk
5:08 drive you'll be curious to know how
5:10 data is written to the hard disk drive
5:12 so we will talk about the read/write
5:15 process of the hard disk drive we will
5:17 then talk about the hard disk controller
5:18 board that interfaces the hard disk
5:24 drive with the rest of the host computer when talking about the hard
5:27 disk controller we will also touch upon
5:29 the addressing scheme called logical
5:32 block addressing or
5:35 LBA last but not least we will also take
5:37 a look at the hard disk drive interface
5:39 that connects the hard disk drive to the
5:40 host
5:44 computer let's begin with the disk drive
5:47 so what is a disk drive a disk drive is a
5:50 storage device that uses a disk for
5:52 storing and retrieving
5:54 data the term disk drive popularly
5:56 refers to the hard disk drive even
5:58 though there are other types of disk
6:00 drives such as a floppy disk drive or
6:03 an optical disc
6:06 drive a hard disk drive is a storage
6:08 device that contains one or more rigid
6:12 disks for storing and retrieving
6:15 data before we go in depth talking about
6:17 the mechanical components let's spend
6:19 some time on the history of the hard
6:21 disk
6:23 drive it may be surprising to learn
6:25 that hard disk drive technology is
6:28 more than 55 years old and we're still
6:30 relying on it for our storage
6:32 needs the world's first hard drive was
6:35 named the IBM 350 disk drive and was
6:39 invented by IBM in
6:41 1956 it was the first storage device
6:44 with random access to data and was a key
6:48 component of the IBM 305 computer
6:51 system the IBM 305 data processing
6:54 system combined with the 350 disk drive
6:57 became known as the RAMAC which stands
6:59 for Random Access Method of Accounting
7:01 and
7:04 Control the IBM 350 disk drive consisted
7:07 of a stack of 50 magnetic disks that
7:08 were 2 feet in
7:11 diameter data was recorded on each side
7:14 of the disks in circumferential
7:17 tracks the IBM 350 disk drive stored 5
7:19 million characters which in today's
7:21 computing terms is less than 5
7:24 megabytes of data the hard disk drive is
7:26 the slowest part of a modern-day
7:28 computer because it contains mechanical
7:31 components with physical movements
7:33 the graphics on the slide show the
7:36 internal components of a hard disk drive
7:38 the major mechanical components of the
7:42 hard disk drive are the platter
7:43 spindle
7:46 actuator arm and
7:49 head these components are enclosed in a
7:52 dust-free compartment called a head disk
7:54 assembly now let's talk about these
7:58 components one by one so what is a
8:00 platter a platter is a disk that stores
8:04 the data it looks like a CD or a DVD and
8:06 is usually made of aluminum alloy
8:08 which is rigid in
8:11 nature in fact the hard disk gets its
8:13 name from the rigid nature of the
8:16 platters the most common form factor or
8:20 size of the platter is 3.5
8:22 inches the top and bottom surfaces of
8:24 the platter are coated with a magnetic
8:27 material that allows data to be recorded
8:30 magnetically on its surface
8:32 the data is actually stored on the
8:34 magnetic media of the platter by
8:35 aligning the magnetic field of the media
8:37 particles on the
8:40 surface the data thus stored is
8:42 nonvolatile meaning that it is retained
8:45 even when there is no power
8:48 supply a hard drive is composed of
8:49 multiple platters that are stacked on
8:53 top of each other as you can see in the
8:55 diagram the platters are held together
8:58 by a central axis called a spindle which
9:00 in turn is directly attached to a
9:03 rotating motor called the spindle
9:06 motor so when the spindle rotates all
9:08 the platters rotate at the same time
9:09 at the same
9:12 speed the platters of the fastest hard
9:15 disk drives rotate at a speed of 15,000
9:18 revolutions per minute now let's look at
9:21 the actuator the actuator is responsible
9:24 for moving the read/write arm in and out
9:27 across the surface of a rotating
9:30 platter it positions the read/write heads
9:32 precisely at a specific location on the
9:34 spinning platter there is only one
9:37 actuator in a hard disk assembly an
9:40 extension of the actuator is the
9:41 read/write
9:44 arm the read/write arm holds the read/write heads at its
9:51 ends now let's talk about the read/write
9:54 head the term read/write head is a
9:56 mouthful so it is sometimes referred to
9:57 as the
10:00 head a head is an interface that reads
10:02 and writes data to the
10:04 platters there are two heads for each
10:07 platter one head is mounted on the top
10:09 side of the platter and the other one is
10:12 mounted on the bottom side of the
10:14 platter since the heads are attached to
10:17 the actuator they all move at the same
10:20 time the read/write head never touches the
10:21 surface of the platter while it is
10:24 writing or reading data but it floats
10:27 extremely close to the surface of the
10:29 platter the minute distance between the
10:31 head and the surface of the platter is
10:34 referred to as flying height or floating
10:38 height or head gap and it is measured in
10:40 terms of
10:43 nanometers in modern drives the flying
10:46 height is generally about 3
10:48 nanometers if you're wondering how small
10:51 a nanometer is here it is 1
10:54 nanometer is 1 million times smaller
10:55 than a
10:57 millimeter it should be noted here that
10:59 if the head accidentally touches the
11:01 spinning platter then it will not only
11:03 damage both the head and the surface of
11:06 the platter but it will also damage the
11:09 data stored on the
11:11 disk this type of hard disk failure is
11:14 called a head crash however when powered
11:17 off the head will rest in contact with
11:18 the
11:20 platter since hard drives are
11:22 susceptible to disk failures it is very
11:24 important that we back up the data
11:29 stored on them during the write process the heads
11:31 write the data to the platters by
11:33 aligning the magnetic field of the
11:35 magnetic particles that pass under
11:38 them the polarity of the magnetic field
11:40 of the particles will depend on whether
11:44 a bit 1 or a bit 0 is
11:46 written an unchanged polarity is
11:49 read as bit 0 and a changed
11:51 polarity is read as bit
11:55 1 during the read process the heads
11:57 detect the polarities of the already
11:59 aligned magnetic particles that pass
12:01 under them and convert them into
12:03 electrical signals the head then transmits
12:05 these electrical signals to the hard disk
12:10 controller now let's talk about the hard
12:11 disk
12:13 controller the hard disk controller
12:16 board is a printed circuit board PCB
12:18 that is attached to the chassis of all
12:20 modern hard disk
12:23 drives such hard disk drives are also
12:26 called integrated drive electronics or
12:28 IDE because they have the hard disk
12:31 controller attached to
12:34 them the hard disk controller contains a
12:37 microprocessor memory and firmware the
12:39 firmware of the hard disk controller
12:42 controls the internal parts for example
12:44 it controls the positioning and movement
12:46 of the actuator
12:49 arm the hard disk controller interfaces
12:51 the hard disk drive with the rest of the
12:53 computer it allows the movement of
12:56 data in and out of the hard disk drive
12:58 the hard disk controller also provides
13:00 the important functionality of mapping
13:02 the physical addressing structure of the
13:05 disk to a logical addressing scheme that the
13:07 computer system can
13:10 understand logical block addressing
13:14 or LBA is a simplified addressing scheme
13:15 that hides the complexity of the
13:21 physical addressing structure of the disk now let's look at the hard
13:23 disk drive
13:26 interface the hard disk drive interface
13:28 is a part of the hard disk
13:30 controller it contains a physical
13:32 connector that connects the hard disk
13:33 drive to the host
13:36 computer the primary function of the
13:38 hard disk drive interface is to provide
13:40 a standard protocol for the hard disk
13:42 drive to talk to the host
13:44 computer there are many types of hard
13:47 disk drive interfaces and each one
13:49 supports a particular protocol for
13:53 example the SATA drive uses the SATA
13:55 protocol the major hard disk drive
14:00 interfaces are SCSI SATA SAS and
14:03 FC we will discuss more about them in
14:05 the upcoming videos and that brings us
14:07 to the end of this
14:09 lesson let's summarize what you learned
14:13 in this lesson in this lesson we saw
14:15 what a disk drive is and then we looked
14:17 at the hard disk
14:19 drive we spoke briefly about the
14:21 history of the hard disk drive and then
14:23 we looked in detail at the mechanical
14:26 components of the hard disk
14:28 drive while talking about the read/write
14:30 head we also looked into the read/write
14:33 process of the hard disk drive we then
14:35 talked about the hard disk controller
14:36 board that interfaces the hard disk
14:39 drive with the rest of the computer we
14:41 also touched upon an addressing scheme
14:45 called logical block addressing or LBA
14:47 at the end of this lesson we looked at
14:49 the hard disk drive interface that
14:52 connects the hard disk drive to the host
14:54 computer in the next lesson we will talk
14:57 about the hard disk drive interfaces and protocols
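The logical block addressing idea covered in this lesson can be sketched in a few lines of Python. This is an illustrative sketch, not part of the course; it assumes the traditional 512-byte sector size.

```python
# Illustrative sketch (not from the course): how a host turns a logical
# block address into a byte position. The drive's controller performs
# the real logical-to-physical translation internally.
SECTOR_SIZE = 512  # the traditional sector size in bytes

def lba_to_byte_offset(lba: int) -> int:
    """Return the byte offset on the device where the given block starts.

    The host sees only this flat, linear address space; the physical
    platter, track, and sector layout stays hidden behind the controller.
    """
    if lba < 0:
        raise ValueError("LBA must be non-negative")
    return lba * SECTOR_SIZE
```

So block 0 starts at byte 0, block 1 at byte 512, and so on; the host never needs to know which platter, track, or sector those bytes actually live on.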
22:16 hello and welcome to unit 2 hard disk
22:19 drive interfaces and protocols in this
22:20 lesson you will learn about the hard
22:23 disk drive interfaces and
22:25 protocols we will begin this lesson by
22:27 recalling what a hard disk drive
22:29 interface is and then we will see what a
22:32 protocol is we will then take a detailed
22:34 look at the major types of hard disk
22:36 drive interfaces and protocols such as
22:41 ATA SATA SCSI SAS and Fibre
22:44 Channel let's begin with the hard disk
22:45 drive
22:47 interface the hard disk drive interface
22:49 is a part of the hard disk drive
22:51 controller that connects the hard disk
22:53 drive to the rest of the
22:55 computer it provides a standard protocol
22:57 for the hard disk drive to talk to the
23:00 host computer
23:02 we can think of the standard protocol as
23:03 a language that is used for
23:05 communication and we can define it as a
23:09 set of rules and communication
23:11 standards the major types of hard disk
23:17 drive interfaces are ATA SATA SCSI SAS
23:18 and Fibre
23:21 Channel ATA and SATA are widely used in
23:23 the low-end computing market whereas
23:27 SCSI SAS and Fibre Channel are used in
23:29 the high-end enterprise market we
23:33 will first look at the ATA interface ATA
23:35 stands for Advanced Technology
23:37 Attachment it is the most common
23:39 interface that is used to connect the
23:42 hard disk drive to the host
23:45 computer the hard disk drives with ATA
23:47 interfaces are called integrated drive
23:50 electronics or IDE because the
23:52 controller board is integrated with the
23:55 hard disk drive there are two types of
23:58 ATA PATA and SATA PATA stands for
24:02 parallel ATA it is the older ATA and it
24:05 is characterized by a slower data transfer
24:09 rate PATA uses a 40-pin ribbon cable that
24:11 connects the IDE drive to the computer's
24:14 motherboard and transfers 16 bits of
24:16 data in parallel over the cable the
24:20 recommended cable length is 18 inches SATA
24:24 stands for serial ATA it is the newer ATA
24:26 and it is characterized by a faster data
24:28 transfer rate SATA
24:31 uses a thinner seven-pin data cable that
24:32 connects the SATA drive to the
24:35 computer's motherboard and transfers
24:37 data in a serial manner that is one bit
24:40 at a time the length of the internal
24:43 SATA cable can be up to 1 meter SATA
24:44 supports hot
24:47 swapping hot swapping is a feature that
24:49 allows plugging or unplugging a SATA
24:51 drive while the computer is
24:55 running this feature is also called hot
24:57 plugging hot swapping does not mean we
24:59 can unplug the drive while data is
25:01 being written to it because such an action
25:03 will only corrupt the
25:06 data now we will look at eSATA or
25:09 external SATA the internal SATA bus can
25:11 be extended to connect external hard
25:14 disk drives through external SATA the
25:16 computer's motherboard or an expansion
25:18 card in the computer can provide an
25:21 external SATA or eSATA port to plug in
25:23 external SATA
25:26 drives the external SATA drives use a
25:28 special external shielded cable which
25:31 can be up to 2 meters
25:34 long SATA drives come in three versions
25:37 SATA 1 SATA 2 and SATA
25:41 3 SATA 1 has a data transfer rate of 1.5
25:43 gigabits per second with a maximum
25:48 throughput of 150 megabytes per
25:51 second SATA 2 has a data transfer rate
25:53 of 3 gigabits per second with a maximum
25:57 throughput of 300 megabytes per
26:00 second SATA 3 has a data transfer rate
26:02 of 6 gigabits per second with a maximum
26:10 throughput of 600 megabytes per second SATA is backward compatible with
26:13 PATA drives using a SATA bridge this means
26:15 we need to have a SATA bridge attached
26:17 to the PATA drive to make it compatible
26:20 with SATA hard disk drives implement a
26:23 technique called queuing for disk
26:25 optimization the queuing technique
26:27 implemented by SATA drives for faster
26:28 read/write operations
26:32 is called native command queuing or
26:35 NCQ now let's talk about SCSI which
26:37 really is not a hard disk drive
26:40 interface we have included it under this
26:42 topic because it is mainly used for hard
26:45 disk drives so what exactly is SCSI
26:48 SCSI stands for small computer system
26:50 interface SCSI is a parallel
26:53 system-level interface that connects various
26:55 devices to a single common cable called
26:58 the SCSI bus the SCSI bus is
27:00 connected to the SCSI host adapter
27:02 card and all the devices on the SCSI
27:05 bus communicate with the host computer
27:08 through the SCSI host adapter card so
27:10 in the SCSI architecture the host computer
27:12 communicates with the devices via the
27:14 parallel SCSI
27:17 bus the SCSI host adapter card is
27:18 plugged into an expansion slot of the
27:21 host computer the SCSI host adapter
27:23 and the devices connected to the SCSI
27:27 bus form a single daisy chain the number
27:28 of devices that can be connected to a
27:31 single SCSI bus can be either 7 or
27:34 15 devices based on the SCSI standard
27:35 that is being
27:38 implemented these devices for example
27:41 can be hard disk drives CD/DVD drives
27:44 tape drives printers and
27:47 scanners each device on the SCSI bus
27:49 including the SCSI host adapter is
27:52 assigned a unique number between 0 and
27:54 15 the number assigned to the device is
27:57 called the SCSI ID the SCSI host
27:59 adapter manages all the devices on the
28:02 SCSI bus using their SCSI
28:05 IDs the SCSI host adapter itself is
28:07 assigned the number 7 which has the
28:09 highest priority over all the other
28:11 devices in order to avoid signal
28:14 interference on the SCSI bus we must
28:16 have terminators at both ends of the
28:18 SCSI bus some devices will have
28:21 built-in termination that can be enabled
28:23 in the absence of built-in termination a
28:25 hardware device called a terminating
28:28 resistor can be used SCSI is an
28:30 interface developed for high-performance
28:32 servers there are three major versions
28:36 of SCSI SCSI-1 SCSI-2 and SCSI-3
28:38 but they are commonly known as regular
28:42 SCSI fast SCSI and ultra SCSI
28:44 respectively earlier we saw native
28:47 command queuing NCQ the queuing
28:49 technique implemented by SATA drives
28:51 SCSI also uses a powerful queuing
28:54 technique called command tag queuing it
28:58 queues multiple commands to a SCSI drive
29:00 command tag queuing improves
29:02 performance because it establishes an
29:04 efficient way of ordering and processing
29:07 the I/O commands to the SCSI
29:09 drive let's look at the communication
29:11 that takes place on the SCSI
29:14 bus at any given time communication
29:16 takes place only between two devices on
29:20 the SCSI bus the device that initiates
29:22 or sends a command to another device is
29:23 called the
29:25 initiator the device that performs the
29:28 requested command is called the target
29:30 now let's look at the protocols that are
29:32 used for data transfers in SCSI there
29:35 are two SCSI protocols the asynchronous
29:38 SCSI protocol and the synchronous SCSI
29:41 protocol in the asynchronous SCSI protocol
29:43 an acknowledgement is required for each
29:45 data transfer and cannot be delayed
29:47 which results in propagation
29:50 delays in the synchronous SCSI protocol an
29:52 acknowledgement is required for each
29:54 data transfer but it can be delayed
29:57 hence there is no propagation delay now
30:00 let's talk about SAS SAS stands
30:03 for serial attached SCSI serial
30:05 attached SCSI is the serial version of
30:07 the SCSI interface it uses a
30:09 point-to-point serial protocol and the
30:12 SCSI command set second-generation
30:15 SATA disk drives are compatible with SAS
30:18 and can be connected to SAS
30:22 backplanes on a SAS backplane SATA 2 and
30:25 SAS drives can exist side by side
30:27 however SAS drives cannot be connected
30:29 to SATA backplanes in serial
30:32 attached SCSI the devices are connected
30:34 directly to the SCSI port rather than
30:36 connecting to the SCSI bus the
30:38 advantage of this is that the bandwidth
30:40 is not shared with other devices and as
30:42 a result higher transfer speeds are
30:45 achieved SAS drives have dual ports that
30:48 provide redundancy and that makes them an
30:49 ideal option for building external
30:52 storage arrays the redundancy results
30:54 from the fact that each port on the SAS
30:56 drive can be connected to a different
30:59 controller of the external storage array
31:01 so if one controller fails the other one
31:03 can take over the SAS
31:06 drive it should be noted that the ports
31:08 in a dual-ported SAS drive work in
31:11 active-passive mode this means that only
31:13 one port can be active at a
31:15 time now let's look at Fibre
31:18 Channel Fibre Channel is a serial
31:20 interface that allows data to be
31:22 transferred serially over fiber optic
31:25 or copper cable it was developed for
31:27 enterprise storage systems to provide
31:29 performance scalability and
31:32 redundancy in addition to providing a
31:34 high-speed serial data transfer rate the
31:36 Fibre Channel interface also provides
31:38 built-in redundancy through
31:41 dual-port access to the Fibre Channel
31:44 disks the Fibre Channel protocol or FCP
31:46 is the transport protocol that can
31:49 transport SCSI commands and IP
31:52 commands over Fibre Channel networks the
31:54 Fibre Channel protocol permits a huge
31:55 number of hard disk drives to be
31:58 connected in a fabric topology you will
32:00 learn more about Fibre Channel in a
32:02 module dedicated to the Fibre Channel
32:05 storage area network that brings us to
32:07 the end of this lesson let's summarize
32:10 what you have learned in this lesson in
32:12 this lesson you recalled what a hard
32:14 disk drive interface is and then you
32:16 learned what a protocol is we then
32:18 covered in detail the major types of
32:20 hard disk drive interfaces and protocols
32:25 such as ATA SATA SCSI SAS and Fibre
32:27 Channel in the next lesson you will
32:29 learn about the hard disk drive geometry
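The line-rate and throughput figures quoted for SATA in this lesson can be cross-checked with a short Python sketch. This is an illustration rather than course material, and it assumes SATA's 8b/10b line encoding, in which every 8-bit byte travels as a 10-bit symbol on the wire, which is why the divisor is 10 rather than 8.

```python
def sata_throughput_mb_per_s(line_rate_gbit_s: float) -> float:
    """Usable throughput in MB/s for a given SATA line rate in Gbit/s.

    With 8b/10b encoding, 10 line bits carry one 8-bit data byte, so
    the byte rate is the line rate divided by 10.
    """
    bits_per_byte_on_wire = 10  # 8 data bits + 2 encoding bits
    return line_rate_gbit_s * 1000 / bits_per_byte_on_wire

# Reproduces the figures quoted in the lesson:
print(sata_throughput_mb_per_s(1.5))  # SATA 1 -> 150.0
print(sata_throughput_mb_per_s(3.0))  # SATA 2 -> 300.0
print(sata_throughput_mb_per_s(6.0))  # SATA 3 -> 600.0
```

This is why each doubling of the line rate from 1.5 to 3 to 6 Gbit/s doubles the quoted throughput from 150 to 300 to 600 MB/s.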
32:55 hello and welcome to unit 3
32:58 hard disk geometry in this lesson
32:59 you will learn about the hard disk drive
33:02 geometry we will begin this lesson by
33:03 recalling what you've learned about the
33:06 platters of the hard disk drive and then
33:08 with the help of a diagram we will see
33:10 how to identify the sides of the
33:12 platters on which the data is
33:14 stored we will also see how the disks
33:17 are organized into tracks and sectors
33:19 and then we will look into zoned data
33:21 recording next we will see what a
33:24 cylinder is and then we will look into
33:25 low-level
33:27 formatting last but not least we will
33:29 look into the data addressing schemes
33:33 such as CHS addressing and logical block
33:35 addressing we know that the hard disk
33:37 drive contains one or more platters and
33:39 the data is stored on the magnetic media
33:41 coated on the surface of a
33:44 platter a platter has two sides that are
33:46 used for recording
33:48 data on the slide you can see the
33:50 diagram of the platters that are stacked
33:53 on top of each other each platter has
33:55 two sides if we want to point out where
33:57 the data is located we will have to
33:59 first mention the side of the platter on
34:01 which the data is
34:04 stored in our diagram the top side of
34:06 the first platter is depicted as side
34:09 0 and its bottom side is depicted as
34:12 side 1 similarly the top side of the
34:14 second platter is depicted as side 2
34:17 and its bottom is depicted as side
34:20 3 each platter has two read/write
34:22 heads to read and write data on both of
34:25 its sides in our diagram we have head
34:28 0 to read and write data to side
34:30 0 and head 1 to read and write data
34:33 to side 1 of the first platter
34:35 similarly head 2 reads and writes data
34:38 to side 2 and head 3 reads and
34:41 writes data to side 3 of the second
34:43 platter the surface of each side of the
34:45 platter is divided into concentric
34:47 circles called
34:49 tracks on the slide you can see a
34:52 platter with the spindle at its center
34:54 the surface of this platter is divided
34:57 into tracks each track is of a different
34:59 size and is shown in a different color
35:00 in the
35:02 diagram the tracks in the outer rings
35:04 are longer the outermost track is
35:07 numbered 0 and the numbers increase
35:10 as we move towards the inner rings a
35:11 typical hard disk drive can have more
35:13 than 2,000 tracks per inch on the
35:14 recording
35:17 surface the tracks are further divided
35:20 into smaller addressable units and each
35:22 such unit is called a sector a single
35:26 sector holds 512 bytes of data a byte is
35:28 a unit of memory size that can hold a
35:29 single
35:31 character when you look at the diagram
35:33 you will notice that the inner tracks
35:35 have fewer sectors compared to the outer
35:37 tracks this is because the outer tracks
35:39 are bigger and as expected they have
35:41 more area to squeeze in more sectors
35:44 compared to the inner tracks in our
35:46 diagram the inner tracks have eight
35:48 sectors and let's say we want to further
35:51 divide them equally into 16 sectors this
35:53 is just not possible because the size of
35:55 the sectors would become smaller and as a
35:58 result they could not hold 512 bytes of
36:00 data the outer tracks in our diagram
36:03 have 16 sectors and let's say we want to
36:06 go down from 16 sectors to eight sectors
36:08 in that case we are just wasting space
36:10 because each sector would have the
36:13 capacity to store more than 512 bytes of
36:15 data in order to prevent the wastage of
36:17 storage space we divide the
36:19 surface of the platter into two zones
36:22 the inner zone and the outer zone in our
36:25 diagram the inner zone consists of three
36:27 inner tracks and the outer zone consists
36:29 of four outer tracks the tracks in the
36:32 outer zone can squeeze in more sectors
36:34 than the tracks in the inner zone
36:36 resulting in maximum utilization of the
36:38 storage space on the surface of the
36:40 platter this technique of squeezing in
36:42 more sectors into the outer zone than into
36:44 the inner zone is called zoned data
36:47 recording or
36:50 ZDR so far in the lesson we discussed
36:52 tracks and sectors we will now talk
36:55 about cylinders so what is a cylinder we
36:57 know that a hard disk drive is made up of
36:59 platters that are stacked on top of each
37:02 other the platters consist of tracks and
37:04 if we look at a particular track let's
37:06 say the third track on all the platters
37:08 then they are stacked directly on top of
37:10 each other and together they appear to
37:13 form a cylinder as shown in the
37:15 diagram so the locations of a specific
37:18 track on all the platters form a
37:20 cylinder we saw how the surface of the
37:22 platter is divided into tracks and
37:24 sectors the process of forming the
37:26 tracks and sectors on the surface of the
37:29 platter is called low-level formatting
37:31 and it is typically done by the hard
37:33 disk drive
37:34 manufacturers during the low-level
37:37 formatting the address is written into
37:39 the sectors and the starting and
37:41 ending points of each sector are marked
37:42 on the
37:45 platter when the low-level formatting is
37:47 complete the platters become ready to
37:50 hold 512 bytes of data in each
37:53 sector now let's look at how data is
37:55 addressed on a hard disk drive there are
37:58 two methods to address data CHS
38:00 addressing and logical block addressing
38:03 the first method is CHS addressing CHS
38:07 stands for cylinder head sector each CHS
38:09 address is composed of a cylinder number
38:12 a head number and a sector number in
38:14 this type of addressing the value of the
38:16 cylinder tells us on what track the data is
38:17 is
38:20 located the value of the head tells us
38:22 on which platter the data is
38:25 located the value of the sector points
38:26 to the sector to where the data is located
38:28 located
38:31 so in a CHS addressing the combination
38:34 of cylinder head and sector tells us
38:36 exactly where the data is located on the
38:39 dis Drive the second method is LBA
38:42 addressing LBA stands for logical block
38:44 addressing in The Logical block
38:46 addressing scheme the sectors of the
38:48 hard dis Drive are sequentially numbered
38:50 starting from zero with each sector
38:53 getting a unique logical address for
38:55 example the first sector will be
38:57 addressed as sector zero the second
39:00 sector will be addressed as sector 1 and
39:02 this continues until the last physical
39:04 sector it's interesting to note that the
39:07 first logical sector addressed in LBA
39:09 that is sector zero is the same as the
39:12 first logical sector addressed in CHS
39:15 addressing that is cylinder 0 head Zer
39:18 and sector 1 both CHS addresses and LBA
39:21 addresses are logical addresses of the
39:23 sectors the hard disk controller does
39:24 the conversion of The Logical address
39:26 into a physical address through a
39:29 translation process proc though CHS is a
39:32 logical address it was conceptualized
39:33 based on the physical parameters of the
39:36 hard disk drive such as cylinder number
39:39 head number and sector number cylinders
39:43 are numbered from zero to a maximum of
39:46 65,535 heads are numbered from 0 to a
39:48 maximum of 15 sectors are numbered from
39:51 0 to a maximum of
39:53 255 these maximum values are the
39:55 arbitrary numbers that were chosen when
39:58 the CHS addressing was established
39:59 assuming that it would be sufficient
40:02 for the future as well that
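The CHS-to-LBA relationship described above can be sketched in code. This is a minimal illustration: the geometry constants (heads per cylinder, sectors per track) are assumed values for the example, not figures quoted in the lesson.

```python
# CHS -> LBA translation (a minimal sketch). The geometry constants
# below are illustrative assumptions, not values from the lesson.
HEADS_PER_CYLINDER = 16
SECTORS_PER_TRACK = 63

def chs_to_lba(cylinder, head, sector):
    # CHS sectors are numbered from 1, LBA sectors from 0
    return (cylinder * HEADS_PER_CYLINDER + head) * SECTORS_PER_TRACK + (sector - 1)

# cylinder 0, head 0, sector 1 is the same sector as LBA 0
print(chs_to_lba(0, 0, 1))  # -> 0
```

This is the translation the hard disk controller performs internally between the logical address and a physical location.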
40:04 brings us to the end of this unit let's
40:06 summarize what you have learned in this
40:08 lesson in this lesson we recall that the
40:10 hard disk drive will have one or more
40:13 platters and that data is stored on the
40:15 magnetic media coated on the surface of
40:17 the platter then with the help of a
40:20 diagram we saw how to identify the sides
40:21 of the platter on which the data is
40:24 stored we also saw how the disk was
40:26 organized into tracks and sectors and
40:28 then we looked into zoned data
40:31 recording next we saw what a cylinder is
40:33 and then we looked into low-level
40:36 formatting last but not least we looked
40:38 into the data addressing schemes such as
40:41 CHS addressing and logical block
40:43 addressing in the next lesson you will
40:45 learn about the hard disk drive characteristics
41:11 hello and welcome to unit 4 hard disk
41:12 drive
41:14 characteristics in this lesson you will
41:16 learn about the characteristics of the
41:17 hard
41:20 drive we're going to start by looking at
41:22 what a hard disk drive size is and then
41:25 we will look into the data storage
41:27 metrics next we will look at the speed
41:30 of the rotating platters we will also
41:32 look into the performance metrics of the
41:34 hard disk drive such as seek time
41:40 rotational latency average latency and
41:42 iops we will then look into the
41:44 sequential operations and random
41:46 operations of the hard disk
41:48 drive we will also look at what is meant
41:49 by
41:51 throughput we will then look into a disk
41:54 optimization technique called queuing
41:56 last but not least we will touch on the
41:58 hard disk drive
42:00 market let's start with the size of the
42:03 hard disk drive like all other things
42:04 you might think that size is referring
42:06 to the physical dimensions of the hard
42:09 disk drive yes it does but it also can
42:11 refer to the capacity of data the hard
42:13 disk drive can
42:16 hold so a hard disk drive size could
42:17 either be referring to the physical
42:20 dimension of the disk drive or it could
42:23 be referring to the capacity of the disk
42:26 drive the physical dimension of the hard
42:29 disk drive is also referred to as hard
42:32 disk form factor and it's described in
42:34 inches the most common form factors of
42:38 hard disk drives are 2.5 in and 3.5
42:41 in these measurements refer to the
42:43 diameter of the platter inside the hard
42:44 disk
42:46 drive there is something more that we
42:48 should know about these measurements
42:50 when we measure the size of the platter
42:52 inside a 3.5 in hard disk
42:55 drive you will notice that the actual
42:58 diameter of the platter is 3.74 in not
43:00 3.5
43:03 in so the 3.5 in size that we are
43:05 talking here is really an approximate
43:08 value and it has a historical reason
43:09 behind
43:13 it the name 3.5 in hard disk drive comes
43:15 from the fact that it can easily fit
43:17 into the slot meant for the 3.5 in
43:20 floppy disk drive in our
43:23 system the 3.5 in drives are generally
43:26 low performance high-capacity drives
43:28 that come with SATA
43:31 interfaces on the other hand 2.5 in
43:33 drives are generally high performance
43:38 low capacity drives that come with SAS
43:40 interfaces that's all about the form
43:42 factor of hard disk drives now let's
43:44 talk about the size of the hard disk
43:45 drive with regard to the capacity of
43:47 data it can
43:49 hold we don't get the full capacity that
43:51 is mentioned on the hard disk drive why
43:54 is this so should we not be getting the
43:57 full capacity that is mentioned
44:01 let's say that we have a 500 GB hard
44:03 disk drive but we don't get the 500 GB
44:05 to
44:07 use one of the reasons for this
44:09 disproportion is because of the way hard
44:12 disk manufacturers Express storage
44:16 capacities computers treat 1 kilobyte as
44:19 1,024 bytes whereas the hard disk drive
44:22 manufacturers treat it as 1,000
44:25 bytes the difference of 24 bytes seems
44:27 to be negligible when we are talking in
44:29 terms of kilobytes but when we talk in
44:32 terms of megabytes we see a considerable
44:35 difference in the
44:38 capacity in order to provide accurate
44:40 data storage metrics the international
44:43 electrotechnical commission IEC
44:46 introduced new metrics such as kibibyte
44:52 mebibyte gibibyte and tebibyte
44:55 1 kibibyte equals 1,024 bytes
45:03 1 mebibyte equals 1,024 kibibytes
45:08 1 gibibyte equals 1,024 mebibytes
45:14 and 1 tebibyte equals 1,024 gibibytes
45:16 there are also other reasons why
45:19 we are not getting full capacity such as
45:21 the overhead associated with formatting
45:23 of the disks and the disk space consumed
45:25 by the file
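The manufacturer-versus-computer counting described above is easy to check. A minimal sketch for the 500 GB example:

```python
# Why a "500 GB" drive shows less space in the OS (a minimal sketch):
# manufacturers count 1 GB as 1,000^3 bytes, while operating systems
# traditionally report in binary units, 1 GiB = 1,024^3 bytes
marketed_bytes = 500 * 1000**3           # 500 GB as sold
reported_gib = marketed_bytes / 1024**3  # what the OS reports
print(round(reported_gib, 2))            # -> 465.66
```

So roughly 34 GiB of the marketed figure "disappears" purely from the difference in units, before formatting and file system overhead are counted.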
45:28 system now let's look at the speed of
45:30 the rotating platters of the hard disk
45:33 drive we quantify the speed of the hard
45:35 disk drive based on the number of
45:37 rotations it makes in a second and the
45:39 metric that denotes this is called
45:42 revolutions per minute or RPM most
45:48 common RPMs are 5400 7200 10,000 and
45:51 15,000 as you might have guessed more
45:53 RPMs increase the efficiency of the hard
45:56 disk drive as it provides faster read
45:58 and write operations on the drive
46:01 so hard disk drives with more RPMs are
46:03 expensive compared to the ones that have
46:04 lesser
46:08 RPMs RPM also impacts the capacity hard
46:10 disk drives with more RPMs have lower
46:13 capacity and ones with lesser RPMs have
46:15 higher capacity you might be wondering
46:18 how increase in RPMs results in a lower
46:21 capacity let's do some simple math to
46:23 understand
46:25 this if the platters of a hard disk drive
46:30 rotate at 5,200 RPM then it takes 11.54
46:33 milliseconds for a single complete
46:36 rotation however if we rotate the same
46:39 Platters at 7200 RPM then the time drops
46:41 to 8.33
46:43 milliseconds though the drive has become
46:45 faster the amount of time it takes to
46:47 write a single track is reduced from
46:52 11.54 milliseconds to 8.33
46:55 milliseconds the drive now has 8.33
46:57 milliseconds to write the dat data
47:00 instead of 11.54 milliseconds so it can
47:03 write only what it can during the 8.33
47:07 milliseconds resulting in a lower
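The rotation times quoted above follow directly from the RPM. A minimal sketch of that math:

```python
# Time for one full platter rotation at a given RPM (a minimal sketch):
# 60,000 ms per minute divided by rotations per minute
def rotation_time_ms(rpm):
    return 60_000 / rpm

print(round(rotation_time_ms(5200), 2))  # -> 11.54
print(round(rotation_time_ms(7200), 2))  # -> 8.33
```

The faster platter simply has less time per track pass, which is why the read/write frequency must rise with the RPM to preserve capacity.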
47:10 capacity so whenever manufacturers
47:11 increase the rotation speed of the hard
47:14 disk drive it is necessary to increase
47:16 the read write frequency to avoid the drive
47:19 losing
47:21 capacity the performance of the hard
47:25 disk drive is also affected by seek time
47:27 seek time is the time it takes to
47:28 move the read write head to the correct
47:30 location on the surface of the
47:34 platter it is measured in milliseconds 1
47:36 second is made of 1,000
47:39 milliseconds the faster seek improves
47:41 the performance the seek time has a
47:43 significant bearing on performance when
47:45 the read write head has to randomly read
47:49 the data random reads require the read
47:50 write head to move to different
47:52 positions of the platter the faster seek
47:54 time will improve the performance and
47:56 the slower seek time will be a hit on
47:58 the performance the seek
48:00 time changes depending on the number of
48:02 tracks that the head must navigate so we
48:04 consider the average seek time to
48:07 measure the performance the average seek
48:09 time is calculated as 1/3 of the full
48:12 seek time further we have the average
48:14 seek time for read and write operations
48:15 mentioned
48:19 separately for example if we take a hard
48:22 disk drive with 7200 RPM the average
48:24 seek time for the read operation will be
48:28 mentioned as less than 8.5 milliseconds
48:30 and the average seek time for the write
48:32 operation will be mentioned as less than
48:33 9.5
48:35 milliseconds right after the head
48:37 positions itself on a track it has to
48:39 wait for the platter to rotate in order
48:42 to place the sector beneath the
48:46 head this waiting time or time taken for
48:47 the rotation of the platter to position
48:49 the sector beneath the head is called
48:52 rotational
48:54 latency rotational latency varies with
48:57 the position of the sector if the sector
49:00 is near then less rotation is needed but
49:02 if it is far then a full rotation might
49:05 be required so an average latency is
49:07 considered for depicting the rotational
49:10 latency of the hard disk drives and it's
49:12 expressed in
49:15 milliseconds average latency is the time
49:18 it takes for the platter to do a half
49:20 rotation this means that the average
49:22 latency is calculated directly from the
49:25 RPM of the hard disk
49:27 drive let's do some simple math to find
49:31 the average latency of a 7200 RPM hard
49:32 disk
49:36 drive we know that 7200 RPM is equal to
49:40 120 revolutions per second and hence for
49:43 a full rotation it takes 8.33 milliseconds
49:47 and for a half rotation it takes
49:49 4.17
49:52 milliseconds so the average latency for
49:57 a 7200 RPM hard disk drive is 4.17
50:01 milliseconds since average latency is
50:03 directly related to the RPM the more the
50:06 RPMs are the less the average latency or
50:08 the waiting time will
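The half-rotation math above can be sketched as:

```python
# Average rotational latency = time for half a rotation (a minimal
# sketch of the calculation described in the lesson)
def average_latency_ms(rpm):
    full_rotation_ms = 60_000 / rpm  # one complete rotation
    return full_rotation_ms / 2      # on average, half a rotation of waiting

print(round(average_latency_ms(7200), 2))  # -> 4.17
```

Doubling the RPM halves the average latency, which is why 15,000 RPM drives wait only about 2 ms per request.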
50:10 be it should be noted that while seek
50:12 times are significant for random
50:14 workloads the average latency is
50:17 significant for both random and sequential
50:21 workloads we will discuss sequential and
50:24 random workloads in a few
50:27 minutes the next performance metric that
50:29 we will discuss is input output
50:32 operations per second or
50:34 iops it is used to measure the maximum
50:37 read or write operations in a second the
50:40 read or write operations are performed
50:42 by the storage device in response to a
50:45 request from the host computer iops are
50:47 typically used to measure the random
50:49 workloads there are several types of
50:52 input output operations such as read
50:55 operation write operation random
50:57 operation and sequential operation
50:59 so the iops measurement
51:01 corresponds to the type of input output
51:04 operation occurring at that
51:07 time iops also depends upon the size of
51:09 the input output
51:11 operation it is obvious that larger
51:14 input output operations take longer time
51:15 than the smaller
51:18 ones input output operations per second
51:20 are complex because they involve more
51:23 than one type of operation such as read
51:26 write sequential and random
51:28 now let's look at the sequential and
51:31 random operations in sequential
51:33 operations large number of sectors such
51:36 as 120 kilobytes is accessed in a
51:39 contiguous manner without jumping around
51:41 for example in a sequential operation
51:45 the alphabet A B C D and E are located
51:49 next to each other such as a then B then
51:53 C then D then
51:56 e in a random operation a small number of
51:59 sectors such as 4 kilobytes are accessed in a
52:03 non-contiguous manner for
52:06 example in a random operation the
52:11 letters A B C D and E are spread
52:15 out in hard disk drives the random iops
52:17 is mainly based on the seek time whereas
52:20 in solid state drives the random iops is
52:22 based on its internal controller and
52:24 memory interface
52:27 speeds a quick way to calculate the iops
52:29 of a hard disk drive is by using the
52:35 formula 1,000 / (x + y) where X is the
52:38 average seek time and Y is the average
52:41 rotational latency for example if we had
52:43 a hard disk drive whose average seek
52:46 time is 8.5 milliseconds and the average
52:48 rotational latency is 4.17
52:53 milliseconds then the iops will be 78
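The IOPS formula above can be sketched in code, using the 8.5 ms and 4.17 ms figures from the lesson's example:

```python
# Theoretical HDD IOPS from the formula in the lesson:
# IOPS = 1000 / (average seek time + average rotational latency), in ms
def hdd_iops(avg_seek_ms, avg_latency_ms):
    return 1000 / (avg_seek_ms + avg_latency_ms)

# 7200 RPM example: 8.5 ms average seek, 4.17 ms average latency
print(int(hdd_iops(8.5, 4.17)))  # -> 78
```

This is a theoretical ceiling for small random I/O; real measured IOPS also depend on the I/O size and the mix of reads and writes.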
52:55 iops now that we have a good
52:57 understanding of the iops it's time to
53:00 look at the throughput of the disk drive
53:01 throughput is the rate at which the data
53:03 gets transferred in and out of the hard
53:06 disk drive without failure we will also
53:08 hear the term maximum throughput from
53:09 the
53:12 vendors the maximum throughput is also
53:15 called maximum transfer rate sustained
53:17 transfer rate is also another term that
53:20 vendors use it is the rate at which the
53:22 hard disk drive can transfer sequential
53:24 data from multiple tracks on the
53:27 disk now we will look at the queuing
53:29 technique implemented by hard disk
53:31 drives queuing is a disk optimization
53:33 technique that is used by hard disk
53:36 drives to create faster reads and writes
53:38 typically when a hard disk drive
53:40 receives multiple input output commands
53:43 at a time these are placed in a queue
53:45 and are reordered based on the disk
53:47 layout to carry out the read and write
53:48 operations
53:51 efficiently as a result queuing improves
53:53 the performance of hard disk
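A toy sketch of the reordering idea (real drives reorder based on the physical disk layout; the elevator-style pass below is only an illustration):

```python
# Toy sketch of command queuing: pending requests are reordered by
# logical block address so the head sweeps in one direction (an
# elevator-style pass) instead of jumping back and forth between
# requests in arrival order.
def reorder_queue(pending_lbas, head_position):
    # serve the blocks ahead of the head first, then sweep back
    ahead = sorted(lba for lba in pending_lbas if lba >= head_position)
    behind = sorted((lba for lba in pending_lbas if lba < head_position),
                    reverse=True)
    return ahead + behind

print(reorder_queue([95, 180, 34, 119, 11], head_position=50))
# -> [95, 119, 180, 34, 11]
```

Serving requests in sweep order minimizes total head movement, which is exactly why queuing makes reads and writes faster.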
53:55 drives now let's look at the hard disk
53:57 drive Market
53:58 the hard disk drive Market is
54:01 characterized by two important things
54:03 the first thing is that there is a high
54:05 performance-based hard disk drive Market
54:07 where the hard disk drives are built to
54:10 deliver high performance for example SAS
54:12 and fiber channel drives are high
54:14 performance drives that come with high
54:16 cost the second thing is that there is a
54:19 capacity-based hard disk drive Market
54:20 which gives importance to the cost per
54:23 bite aspect and hence these drives are
54:25 slower and squeeze in as many bits as
54:28 possible increasing the capacity for
54:30 example SATA drives are low performance
54:33 high-capacity drives and that brings us
54:35 to the end of this unit let's summarize
54:37 what we have learned in this lesson we
54:40 looked at what a hard disk drive size is
54:42 and then we looked at data storage
54:44 metrics next we looked at the speed of
54:46 the rotating platters we also looked
54:48 into the performance metrics of the hard
54:51 disk drive such as seek time rotational
54:55 latency average latency and iops we then
54:57 looked into the sequential operations
54:59 and random operations of the hard disk
55:01 drive we also looked at what is meant by
55:04 throughput we then looked into a disk
55:06 optimization technique called queuing
55:08 and last but not least we covered the
55:10 hard disk drive Market in the next
55:12 lesson you will learn about Solid State Storage
55:40 hello and welcome to unit 1 Solid State
55:43 Storage in this lesson you will learn
55:45 about Solid State
55:48 Storage we're going to start by looking
55:50 at the brief history of solid state
55:52 storage and then we will look at what
55:54 solid state storage
55:57 is next we will look at the two forms of
56:00 solid state storage that is solid state
56:04 card SSC and solid state drive
56:08 SSD we will look at what flash memory is
56:09 and then we will look at the two types
56:12 of flash memory that is nand flash
56:16 memory and nor flash memory we will also
56:18 look at the types of nand flash memory
56:21 in detail and these are called single
56:25 level cell SLC multi-level cell mlc
56:30 Enterprise mlc or E mlc and triple level cell
56:31 cell
56:33 TLC we will then look at the data
56:36 organization in nand flash memory next
56:39 we will see how data is stored in flash
56:42 memory using the erase write cycle we
56:44 will also look at a phenomenon called
56:48 Write amplification in flash memory and
56:49 then we will look at a technique called
56:52 garbage collection which is employed by
56:54 The Flash controller to free up
56:56 space we will look at what wear leveling
56:59 is and then we will look at the types of
57:01 wear leveling such as Dynamic wear
57:05 leveling and static wear leveling lastly
57:09 we look at what flash over provisioning
57:11 is now let's begin with the brief
57:13 history of solid state
57:16 storage solid state technology is not
57:18 new and it has been around since the
57:21 late 1970s the early solid state devices
57:23 were very expensive and provided less
57:26 capacity around the 1980s the
57:28 flash-based solid state technology came
57:32 into existence in 1987 the EMC
57:34 Corporation introduced solid state
57:37 drives for the mini computer
57:39 Market the presence of solid state
57:41 technology in the Enterprise storage
57:43 world was not felt until recently
57:46 because they had issues related to cost
57:47 performance and
57:50 reliability the traditional Enterprise
57:51 storage Market is now seeing the
57:54 emergence of the solid state
57:56 technology Solid State Storage is a
57:58 non-volatile storage that is built from solid
57:59 solid
58:02 semiconductors it includes any storage
58:04 that has no moving
58:07 components nonvolatile storage refers to
58:09 storage that can retain its stored data
58:12 even when the power is Switched
58:15 Off the term Solid State Storage was
58:17 created by SNIA and is seldom used in
58:18 the real
58:20 world there are two forms of solid state
58:24 storage solid state card SSC and solid
58:26 state drive
58:31 SSD solid state card or SSC is a pcie
58:33 card with the solid state media embedded
58:36 on it since it is a pcie card it can be
58:40 plugged into the pcie slot of the host
58:42 computer they are commonly used in
58:45 servers and are characterized by high
58:49 performance the solid state drive or SSD
58:51 is a solid state device that resembles a
58:54 hard disk drive it comes in form factors
58:59 3.5 in 2.5 in and 1.8 in solid state
59:01 drives have standard hard disk drive
59:04 interfaces and protocols such as SATA
59:06 SAS and fiber
59:08 channel they do not give high
59:11 performance like solid state cards
59:12 because the interfaces that connect them
59:15 to the host computer are not fast enough
59:18 to fully utilize their potential however
59:21 they are still faster than the hard disk
59:23 drives there are different kinds of
59:26 solid state media such as flash memory
59:29 phase change RAM or PCRAM magneto
59:33 resistive RAM or MRAM resistive RAM or
59:36 RRAM and many
59:38 more we will focus our discussion on
59:41 flash memory because of its widespread
59:44 acceptance so what is flash memory flash
59:47 memory is a solid state storage that is
59:50 rewritable in flash memory data is
59:52 stored in a collection of cells that is
59:54 made from floating gate
59:56 transistors flash memory offers
59:58 persistent storage because the data
60:00 stored on the chips are retained even when the power is Switched
60:02 Off there are two types of flash memory
60:05 nand flash memory and nor flash memory
60:08 nand flash memory is commonly used
60:10 because it is affordable durable and
60:13 faster than nor flash
60:15 memory nand flash memory is made up of
60:18 very small sized cells and the data is
60:21 stored in each
60:22 cell the cells can be written only for a
60:25 limited number of times once the limit
60:27 is reached the cells start to degrade
60:29 resulting in loss of
60:31 data this restriction on the number of
60:34 times that we can safely write the data
60:35 to a cell is called write endurance and
60:38 is expressed in program erase cycles or
60:41 in short PE
60:47 cycles there are four types of nand
60:48 flash memory single level cell SLC
60:51 multi-level cell mlc Enterprise mlc emlc
60:57 and triple level cell
61:00 TLC we will look at these one by one in
61:04 the single level cell or SLC type of
61:06 nand flash memory a single cell can
61:08 store one bit of data it can either be a
61:11 binary number zero or a binary number
61:14 one it typically rates a write endurance of
61:17 100,000 PE cycles SLC is characterized
61:22 by high performance high write endurance
61:24 and high cost
61:27 in multi-level cell or mlc type of nand
61:30 flash memory a single cell can store two
61:33 bits of data it can be a combination of
61:36 binary numbers such as 00 01 10 and
61:41 11 it typically rates a write endurance
61:44 of 5,000 PE cycles mlc is characterized
61:48 by medium performance low write
61:50 endurance and low
61:53 cost Enterprise mlc or emlc is an
61:57 improved version of mlc that was
62:00 developed to handle Enterprise
62:02 workloads the enhanced error correction
62:04 in emlc has improved its reliability and
62:07 write endurance compared to the standard
62:10 mlc
62:11 flash it rates a write endurance of
62:14 30,000 PE
62:20 cycles emlc is characterized by higher
62:22 performance than mlc high write
62:24 endurance and low
62:26 cost in triple level cell or TLC type of
62:27 nand flash memory a single cell can
62:30 store three bits of data it can be the
62:32 combination of binary numbers such as
62:35 000 001 010 011 100 101 110 and
62:46 111 it typically rates a write endurance
62:49 of 1,000 PE
62:51 cycles TLC is characterized by higher
62:54 density lower write endurance and lower
62:58 cost now let's look at how data is
63:00 organized in nand flash memory nand
63:03 flash memory is accessed in blocks each
63:06 block is composed of
63:08 pages we know that the flash memory is
63:10 composed of cells cells are organized
63:13 into groups called
63:15 pages the pages are then organized into
63:17 groups called blocks in simple words a
63:21 group of cells make a page and a group
63:24 of pages make a block
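The cells → pages → blocks hierarchy can be sketched with illustrative numbers; the pages-per-block count below is an assumption for the example, not a figure from the lesson:

```python
# Toy model of NAND flash data organization: a group of cells makes a
# page, and a group of pages makes a block. Sizes are illustrative;
# PAGES_PER_BLOCK is an assumed value for this sketch.
PAGE_SIZE_BYTES = 4 * 1024          # a 4K page
PAGES_PER_BLOCK = 64                # assumption for this sketch
BLOCK_SIZE_BYTES = PAGE_SIZE_BYTES * PAGES_PER_BLOCK

print(BLOCK_SIZE_BYTES // 1024)     # -> 256, i.e. a 256K block
```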
63:33 a page size can be 2K 4K 8K or 16k the
63:38 size of the blocks can be 16k
63:40 128k 256k or
63:41 512k data is stored on the flash memory
63:44 using the erase write cycle the erase
63:47 operation sets the value of the cell to
63:49 binary one and the write operation sets
63:52 the values of the cell to
63:54 zero what is important to know is that
63:56 the write operation works at page level but
63:59 the erase operation works at block
64:02 level let's explain this concept in
64:05 detail when the flash memory is fresh
64:07 out of the box it is in an erased
64:10 condition that is all the cells come
64:12 preset with binary
64:14 ones it is very easy to write the value
64:17 of a cell from binary 1 to binary 0 but
64:20 changing it back to one requires an
64:22 entire block containing the cell to be
64:24 erased
64:26 this is because the erase operation of
64:28 the erase write cycle works only at the
64:31 block
64:32 level now let's look at write
64:34 amplification in flash
64:36 memory we know that one or more bits of
64:39 data are stored in each cell of the
64:41 flash memory if you recall cells are
64:44 grouped into pages and pages in turn are
64:46 grouped into
64:47 blocks while data is written at the page
64:50 level it can only be erased at the block
64:54 level in flash memory data cannot be
64:57 overwritten directly as is done in hard
64:59 disk drives instead the entire block
65:02 must be erased before a page can be
65:04 written to it as the flash memory is
65:07 being used the blocks get filled up with
65:09 initial
65:11 writes when there is a write request to
65:13 change or rewrite the data it cannot be
65:15 overwritten to the block just like in
65:18 the hard disk drive instead the changed
65:21 data has to be either written to empty
65:23 pages in the existing block or to a new
65:25 block in the absence of empty
65:28 pages in our example the changed data is
65:31 written to empty pages now that the
65:33 empty pages of block one have been
65:35 filled for any subsequent rewrites the
65:38 valid pages of block one have to be
65:40 moved to a free block in our example the
65:43 valid pages are relocated to block two
65:46 since the valid data of block one is
65:48 moved to block two block one is erased
65:52 as a result we have an empty block which
65:55 is block one to which the data can be
65:57 written
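The relocation just described is what produces write amplification, which the lesson defines next. A minimal sketch of the bookkeeping (the page counts are illustrative, not figures from the lesson):

```python
# Write amplification (a minimal sketch): rewriting a few pages can
# force the controller to also relocate the block's remaining valid
# pages before erasing it, so the media sees more page writes than
# the host actually issued.
def write_amplification(host_pages_written, valid_pages_relocated):
    media_pages_written = host_pages_written + valid_pages_relocated
    return media_pages_written / host_pages_written

# e.g. the host rewrites 4 pages, but 60 still-valid pages had to be
# relocated to a free block first (illustrative numbers)
print(write_amplification(4, 60))  # -> 16.0
```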
65:58 written to the necessity to relocate the valid
66:01 to the necessity to relocate the valid data and to erase the block for writing
66:03 data and to erase the block for writing new data causes right
66:11 amplification so WR amplification occurs when the actual number of right
66:12 when the actual number of right operations executed on the media by The
66:14 operations executed on the media by The Flash controller is greater than the
66:16 Flash controller is greater than the number of right operations that was
66:18 number of right operations that was requested by the host
66:20 requested by the host computer now let's look at garbage
66:23 computer now let's look at garbage collection in flash memory garbage
66:25 collection in flash memory garbage collection is a technique employed by
66:27 collection is a technique employed by The Flash controller to free up blocks
66:29 The Flash controller to free up blocks that contain invalid data by invalid
66:32 that contain invalid data by invalid data we mean the older data that was
66:35 data we mean the older data that was replaced when a block is full of data it
66:38 replaced when a block is full of data it usually contains a mix of valid and
66:40 usually contains a mix of valid and invalid
66:41 invalid data so what the flash controller does
66:44 data so what the flash controller does is it copies the valid data of the block
66:47 is it copies the valid data of the block to an empty block and skips the invalid
66:50 to an empty block and skips the invalid data it then erases the original block
66:53 data it then erases the original block as a whole making it available for
66:55 as a whole making it available for future rights this results in effective
66:58 future rights this results in effective consolidation of multiple blocks into
67:00 consolidation of multiple blocks into fewer blocks now let's look at where
67:03 fewer blocks now let's look at where leveling wear leveling is the process of
67:06 leveling wear leveling is the process of Distributing the data rights across all
67:08 Distributing the data rights across all the memory locations so that the same
67:10 the memory locations so that the same memory location is not exploited While
67:13 memory location is not exploited While others remain sparingly
67:15 others remain sparingly used this is essential because writing
67:18 used this is essential because writing to the same memory location eventually
67:20 to the same memory location eventually wears out the
67:21 wears out the memory there are two types of wear
67:24 memory there are two types of wear leveling dynamic wear leveling and
67:26 leveling dynamic wear leveling and static wear
67:28 static wear leveling in Dynamic wear leveling the
67:30 leveling in Dynamic wear leveling the incoming WR requests are directed to the
67:33 incoming WR requests are directed to the sparingly used memory
67:36 sparingly used memory locations in static Weare leveling the
67:38 locations in static Weare leveling the content of the memory locations that
67:40 content of the memory locations that store static data are moved periodically
67:43 store static data are moved periodically so that the original memory locations
67:45 so that the original memory locations can be used to store other data that
67:47 can be used to store other data that changes
67:49 changes frequently now let's look at what flash
67:52 Now let's look at what flash over-provisioning means. Flash over-provisioning is a technique in which the flash controller reserves a portion of the capacity for various background activities such as garbage collection and wear leveling. The reserved capacity is hidden by the flash controller and is not made known to outside applications. Though flash over-provisioning reduces the usable capacity, it increases the performance and write endurance of the flash memory.
68:21 An example of this would be a flash drive that has 128 GB, but only 120 GB is available to the user, and the remaining 8 GB is over-provisioned.
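A quick check of the figures from the example, assuming over-provisioning is quoted as reserved capacity over user-visible capacity (a common convention, stated here as an assumption):

```python
# Over-provisioning example from the lesson: a 128 GB flash drive
# exposes only 120 GB to the user; the rest is reserved.
raw_capacity_gb = 128
usable_capacity_gb = 120
reserved_gb = raw_capacity_gb - usable_capacity_gb  # 8 GB reserved

# Over-provisioning is often expressed as reserved / usable capacity.
op_ratio = reserved_gb / usable_capacity_gb
print(reserved_gb, round(op_ratio * 100, 1))  # 8 GB, about 6.7% OP
```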
68:35 That brings us to the end of this unit. Let's summarize what we have learned. In this lesson we looked at the brief history of solid state storage, and then we looked at what solid state storage is. Next we looked at the two forms of solid state storage, that is, the solid state card (SSC) and the solid state drive (SSD). We looked at what flash memory is, and then we looked at the two types of flash memory, that is, NAND flash memory and NOR flash memory. We also looked at the types of NAND flash memory, and these are single-level cell (SLC), multi-level cell (MLC), enterprise MLC (eMLC), and triple-level cell (TLC).
69:18 We then looked at the data organization in NAND flash memory. Next we saw how data is stored in flash memory using the erase-write cycle. We also looked at a phenomenon called write amplification in flash memory, and then we looked into a technique called garbage collection. We looked at what wear leveling is, and then we looked at the types of wear leveling, that is, dynamic wear leveling and static wear leveling. Lastly, we looked into what flash over-provisioning is. In the next lesson you will learn the basics of storage arrays. Thank you for watching.
70:19 Hello and welcome to Unit 1, Introduction to Storage Arrays. In this lesson you will learn the basics of storage arrays. We're going to start by looking at what a storage array is, and then we will look at the key components of the storage array, and these are the controller, controller enclosure, drives, and drive enclosure. Next we will take a look at the enclosure addressing scheme, and then we will look at what SCSI Enclosure Services, or SES, is. After looking at the components of the storage array, we will see how these components work together. We will then look at the benefits of storage arrays. We will also look at what direct attached storage, or DAS, is, and then we will look at the advantage of storage arrays over direct attached storage.
71:12 Next we will look at the types of storage arrays, and these are the network attached storage (NAS) array, the storage area network (SAN) array, and the unified array. When we cover the NAS array, we will touch upon the file-based protocols, such as NFS and SMB/CIFS, that are used by NAS arrays to communicate with the host computers, and then we will look at what a storage network is. When we cover the storage network, we will also touch on block-based protocols. Lastly, when we cover the SAN array, we will also look at what LUNs and volumes are.
71:49 Now let's begin with the storage array. So what is a storage array? A storage array is a storage system that provides data storage to the computers connected to it through a shared network. Storage arrays come in various sizes, from ones that can be kept on a computer table to ones that are extremely large and need to be kept in data centers. Let's look at the physical components of a storage array.
72:16 The typical components of a storage array are the controller, controller enclosure, drives, and drive enclosure.
72:26 A controller is a printed circuit board that contains a processor, memory modules, and firmware. It is often called a head, a storage processor, or a node. The controller acts like a specialized computer that manages all the functions of a storage array, including I/O operations, data recovery in the event of disk failures, and management of disk capacity.
72:58 A controller enclosure is an enclosure that contains one or more controllers, power supply units, fans, and other miscellaneous components. Drives can be either hard disk drives or solid state drives, or a combination of both. A drive enclosure is an enclosure that contains an enclosure controller, a monitoring card, multiple drives, power supply units, fans, and other supporting components.
73:29 The enclosure controller is a printed circuit board with a CPU that typically manages the components of the drive enclosure, such as the hard disk drives, fans, power supplies, and so on. The monitoring card is a dedicated hardware monitoring device with sensors, such as for temperature, voltage, and current, that reports on the status of the drive enclosure and its components and on environmental conditions such as temperature. The drives in the drive enclosure are hot swappable; this means that we can unplug a drive when it fails and then plug in a new drive.
74:11 Now we will talk about the enclosure addressing scheme and the SCSI Enclosure Services implemented in the drive enclosure. The enclosure addressing scheme is used to identify the devices in the drive enclosure by assigning each and every device a unique address. Each hard disk drive is assigned an address based on its location in the drive enclosure. Enclosure addressing is important for troubleshooting and parts replacement.
74:39 SCSI Enclosure Services, or SES, is a technology that provides the means to monitor and manage the health of the drive enclosure and its components. SES can be used to detect and manage the state of power supplies, fans, drives, displays, indicators, and locks. For example, we can set a threshold limit on the power supply's voltage, and if the 12-volt output of a power supply varies by plus or minus 5%, then an alert can be sent to the administrator. SCSI Enclosure Services is implemented either on the enclosure controller or on the monitoring card that is installed in the drive enclosure.
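The 12-volt threshold check mentioned above boils down to a single comparison. Here is a toy sketch in Python; the tolerance band and the alert logic are illustrative, not taken from the SES specification:

```python
# Toy sketch of the voltage-threshold alert described in the lesson:
# raise an alert if the 12 V rail drifts more than +/-5% from nominal.
NOMINAL_VOLTS = 12.0
TOLERANCE = 0.05  # 5%

def needs_alert(measured_volts):
    # True when the reading falls outside the 5% tolerance band
    # (i.e. outside 11.4 V to 12.6 V).
    return abs(measured_volts - NOMINAL_VOLTS) > NOMINAL_VOLTS * TOLERANCE

print(needs_alert(12.3))   # within 11.4-12.6 V -> False
print(needs_alert(11.2))   # below 11.4 V -> True
```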
75:25 Now let's see how the components work. In the diagram we have a controller; it has a processor, usually an Intel processor, and memory modules. The ports on the controller that connect to the storage network are called front-end ports, and "front-end" doesn't necessarily mean they are on the front side of the storage array.
75:45 The storage array receives incoming read/write requests from the host computers through these front-end ports. The storage controller processor, with the intelligence provided by the firmware, processes these requests for internal action. The controller also uses high-performance memory modules called DRAM. DRAM retains the data until it is written to the hard disk, and it is also known as cache. Since DRAM loses its data when there is a power loss, the data in the DRAM is protected through battery-backed power in the event of an unexpected power failure.
76:26 The DRAM cache accelerates the performance of the storage array through read cache and write-back cache technologies. Read cache is implemented by keeping frequently accessed data in cache, so that a read request from a host computer for frequently accessed data can be served immediately, without accessing the disks every time. In write-back cache technology, whenever an incoming write from the host computer reaches the cache, an acknowledgement is sent to the host computer without waiting for the data to be written to the hard disk. Both read cache and write-back cache increase the responsiveness of input/output operations, which would otherwise be slower because of the mechanical nature of the hard disk drives.
77:14 The back end of the controller consists of ports that connect to the dual-ported hard disk drive trays through their interfaces, such as SATA, SAS, and Fibre Channel. The connections to the dual-ported hard drives are in active-passive mode, which means that only one of a hard disk drive's two ports is active at a time.
77:39 The major benefits of using storage arrays are high availability, increased capacity utilization, and increased performance. Storage arrays provide high availability by providing more than one of the same component, such as drives, controllers, power supplies, and fans. Storage arrays increase capacity utilization by efficiently managing the allocation of storage resources.
78:10 Let's find out what happens in the absence of a storage array. In the absence of a storage array, each server has to depend on its own storage, which is usually one or more hard disk drives directly attached to it, and such storage cannot be shared with other servers. This direct attachment of storage to a single server, either internally or externally, is called direct attached storage, or DAS. The disadvantage of direct attached storage is that while one server may have plenty of free storage capacity available, another server may be running out of space, and there is no option to share the free capacity available in one of the servers.
78:57 A storage array addresses this problem by having storage as a single pool that can be shared among the servers.
79:05 An example will help explain this concept better. Let's say we have two servers, X and Y. Server X has 500 GB as its storage capacity, and server Y has 500 GB as its storage capacity. Let's say server X has utilized only 100 GB of space out of its 500, so it has plenty of free space, that is, 400 GB. On the other hand, let's say server Y has utilized 490 GB of space out of its 500; obviously, it is running out of space. It would help if we could borrow some free space from server X and give it to server Y, but this cannot be done, because the storage is attached to each server and therefore cannot be shared.
79:58 As a result of direct attached storage, the overall capacity utilization is low. The solution is to ensure that storage is no longer attached to individual servers but rather belongs to a storage array, which has a single pool of storage. So in our example, let's say we have moved the storage capacity to the storage array. The storage array now has a total capacity of 1,000 GB, and the two servers X and Y can access their storage from the storage array.
80:33 The storage array can now determine the storage allocation of each server connected to it. It can reclaim the surplus space from server X and replenish server Y with some additional space. Overall, the storage array has increased capacity utilization by efficiently managing the allocation of storage capacity. Storage arrays also increase performance, because some of the host computers' workload can now be handled by the storage array controller.
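The capacity arithmetic from the example can be verified in a few lines of Python, using the server names and sizes from the lesson:

```python
# Capacity example from the lesson: two servers with direct attached
# storage versus one shared pool in a storage array.
server_x = {"capacity_gb": 500, "used_gb": 100}
server_y = {"capacity_gb": 500, "used_gb": 490}

# With DAS, server X's free space cannot help server Y.
free_x = server_x["capacity_gb"] - server_x["used_gb"]   # 400 GB idle
free_y = server_y["capacity_gb"] - server_y["used_gb"]   # only 10 GB left

# With a storage array, capacity becomes one shared pool.
pool_total = server_x["capacity_gb"] + server_y["capacity_gb"]
pool_free = pool_total - server_x["used_gb"] - server_y["used_gb"]
print(free_x, free_y, pool_total, pool_free)  # 400 10 1000 410
```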
81:03 Now let's discuss the types of storage arrays. There are three types of storage arrays: the network attached storage (NAS) array, the storage area network (SAN) array, and the unified NAS-and-SAN array. Let's begin with network attached storage. A NAS array is a storage array that connects to host computers through an IP network and communicates with them using file-based protocols such as NFS and SMB/CIFS.
81:33 Let's understand a little bit more about NFS and SMB/CIFS. NFS stands for Network File System. It was designed to support the Unix file system, in which host machines can mount a disk partition on the storage array as if it were a local disk, and NFS allows file sharing over a network. SMB stands for Server Message Block; it is Microsoft's protocol for the Windows file system that allows file sharing over a network using a client-server model. CIFS stands for Common Internet File System; it is a public version of SMB.
82:18 In short, NFS is for Unix or Linux based operating systems, whereas SMB/CIFS is for the Windows operating system. Most NAS storage arrays support both NFS and SMB/CIFS. But for the sake of argument, let's say a NAS storage array supports only SMB/CIFS. Then the Unix or Linux based host computers can access the NAS storage array using the Samba client; Samba is an SMB/CIFS file server that runs on a Unix or Linux based operating system. Alternatively, if a NAS storage array supports only NFS, then the Windows operating system can access the NAS storage array using the NFS client.
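The practical upshot of these file-based protocols is that, once a share is mounted, it looks like a local directory to applications: ordinary file operations just work. In the sketch below a temporary directory stands in for a hypothetical mount point such as an NFS or SMB share:

```python
# Once a NAS share (NFS or SMB/CIFS) is mounted, applications use it
# like any local directory. A temp dir stands in for the mount point
# here; a real one might be /mnt/nas_share (hypothetical path).
import tempfile
from pathlib import Path

share = Path(tempfile.mkdtemp())          # stand-in for a mounted share
report = share / "report.txt"
report.write_text("quarterly numbers")    # an ordinary write...
print(report.read_text())                 # ...and an ordinary read
```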
83:04 Before we define the SAN storage array, let's explain what a storage network is. A storage network is a network that was developed for transporting block-based protocols. A block-based protocol is a protocol that transports an entire block of data. A file-based protocol, on the other hand, transports only one byte of data at a time, and it relies on the lower-level block protocol to reorder the bytes into blocks.
83:36 A SAN storage array is a storage array that connects to the host computers through a storage area network and communicates with them using block-based protocols such as Fibre Channel, iSCSI, and FCoE.
83:52 When talking about a SAN storage array, it is important to mention how the storage is presented to the host computers. The storage capacity of a SAN storage array has to be shared among the host computers, so it is divided into logical disks that are assigned to the hosts. These logical disks appear to the hosts as local disks. The logical disks, or logical units as they are usually called, are identified by a unique number called a logical unit number, or LUN. LUNs play a vital role in the management of storage in the SAN storage array; they provide a logical abstraction between the host computers and the hard disk drives of the storage array.
84:37 The terms LUN and volume should not be confused with each other. While a LUN is a unique number that is assigned to a logical unit, a volume is a broad term that denotes a contiguous area on a storage device and includes LUNs and partitions.
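The LUN abstraction can be pictured as a mapping from a number to a slice of the array's pooled capacity. Here is a toy sketch in Python; the LUN ids, sizes, and host names are made up for illustration:

```python
# Toy sketch of LUNs: the array's pooled capacity is carved into
# logical units, each identified by a logical unit number (LUN)
# and presented to a host as if it were a local disk.
pool_gb = 1000
luns = {}          # LUN id -> {"size_gb": ..., "host": ...}

def create_lun(lun_id, size_gb, host):
    global pool_gb
    assert size_gb <= pool_gb, "not enough free capacity in the pool"
    pool_gb -= size_gb                    # carve the slice out of the pool
    luns[lun_id] = {"size_gb": size_gb, "host": host}

create_lun(0, 300, "server-x")            # host names are illustrative
create_lun(1, 500, "server-y")
print(pool_gb, sorted(luns))              # 200 GB left; LUNs 0 and 1 exist
```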
84:55 Now let's look at the unified storage array. A unified storage array is a storage array that supports both file-based protocols and block-based protocols. Host computers can access unified storage arrays either using file-based protocols or using block-based protocols. They are also called multi-protocol storage arrays.
85:18 And that brings us to the end of this lesson. Let's summarize what you've learned. In this lesson you learned what a storage array is, and then we looked at the key components of the storage array, and these are the controller, controller enclosure, drives, and drive enclosure. Next we looked at the enclosure addressing scheme, and then we looked at what SCSI Enclosure Services, or SES, is. After looking at the components of the storage array, we saw how these components work together. We then saw the benefits of storage arrays. We also looked at what direct attached storage, or DAS, is, and then we looked at the advantages of the storage array over direct attached storage. Next we looked at the types of storage arrays, and these were the network attached storage (NAS) array, the storage area network (SAN) array, and the unified array.
86:12 When we covered the NAS array, we also touched upon the file-based protocols, such as NFS and SMB/CIFS, that are used by NAS arrays to communicate with the host computers, and then we looked at what a storage network was. When we talked about the storage network, we touched on block-based protocols. Lastly, when we covered the SAN array, we also looked at what LUNs and volumes were. In the next lesson you will learn about the architectures of the storage array. Thank you for watching.
87:09 hello and welcome to unit 2, storage
87:11 array architecture. In this lesson you
87:13 will learn about the architectures of
87:15 the storage array. We're going to start
87:17 by looking at what a dual controller
87:20 architecture is, and then we will look at
87:22 what a mid-range storage array is. We
87:24 will also look at the two types of modes
87:26 in which controllers can work, and these
87:28 are active-active mode and active-passive
87:31 mode. We will then talk about how
87:33 DRAM cache accelerates the performance
87:35 of the storage array. We will also look
87:36 at the disadvantage of the dual
87:39 controller storage array. Next we will
87:41 look at the grid architecture, and then
87:43 we will look at the scalability in the
87:44 grid
87:46 architecture. Now let's begin with the
87:47 classification of storage array
87:49 architectures. The two common
87:51 architectures of storage array are dual
87:53 controller architecture and grid
87:56 architecture. Let's talk about dual
87:58 controller architecture. In the previous
88:00 unit we explained the functioning of a
88:02 storage array with a single controller,
88:04 but a typical storage array comes with
88:06 two controllers, and this type of storage
88:09 array architecture is referred to as
88:10 dual controller
88:13 architecture. A dual controller storage
88:15 array provides high availability, with
88:17 either controller having access to the
88:19 pool of storage. The underlying
88:21 architecture of dual controller storage
88:23 arrays doesn't allow them to have more
88:24 than two
88:27 controllers; however, it allows more hard
88:29 disk drives to be added to an existing
88:31 storage array, and because of this, dual
88:34 controller architecture is also called scale-up
88:36 architecture. On the slide we have a
88:38 diagram of a dual controller storage
88:40 array with two controllers. The purpose
88:43 of having two controllers is to provide
88:45 redundancy; that is, if one controller
88:47 fails, the other one can provide
88:49 uninterrupted
88:50 service. In dual controller storage
88:53 arrays, each controller is connected to
88:55 each and every drive enclosure in the
88:56 storage array. The drive enclosure
88:59 contains the hard disk drives, which are
89:01 dual-ported, and each hard disk drive
89:03 will have two controllers connected to
89:06 it. This arrangement ensures that in the
89:09 event a controller fails, the other
89:11 controller can still access all the hard
89:13 disk
89:14 drives. Mid-range storage arrays are dual
89:16 controller storage arrays with a single
89:19 active-passive pair of controllers
89:21 combined with a modest level of cache
89:24 capacity and performance, and they are
89:26 best suited for small and medium-sized
89:28 businesses. Now let's talk about the two
89:30 types of modes in which the controllers
89:32 can work: active-active mode and
89:35 active-passive mode. In an active-active array,
89:37 both the controllers can perform the
89:39 read and write operations on a logical
89:41 unit at the same time. In other words,
89:44 both the controllers are said to own a
89:45 logical unit, and either of them can
89:47 service the I/O meant for that logical
89:50 unit. Let's explain this with the help of
89:52 an example.
89:54 On the slide you can see a dual
89:56 controller array with controllers 1 and
89:58 2. In an active-active array,
90:01 controller 1 and controller 2 can
90:03 service the host I/O operations at the
90:05 same time on LUN 0, and both these
90:08 controllers are said to be active for
90:10 LUN 0. Now let's explain the active-passive
90:13 array. In an active-passive array,
90:16 only one of the two controllers can
90:18 perform the read/write operations on a
90:20 logical unit at a given time, even though
90:22 both the controllers can access
90:24 it. In other words, only one of the
90:26 controllers is said to own a logical
90:28 unit, and it alone can service the I/O
90:30 meant for that logical unit. In this case,
90:33 the controller that owns the logical
90:35 unit is said to be active, while the
90:37 other controller is said to be
90:39 passive. Let's also explain this with the
90:41 help of an example. On the slide you can
90:44 see a dual controller array with
90:46 controller 1 and controller 2, like
90:48 the one we showed before, but this time
90:50 it is for an active-passive array. In an
90:53 active-passive array,
90:54 both these controllers can access LUN
90:56 0, but only one controller can perform
90:58 the read/write operations on it, and
91:01 let's say that controller is controller
91:02 1. In other words, controller 1 is said
91:05 to own LUN 0, and it alone can
91:07 serve the host I/O meant for that logical
91:09 unit. In this case, controller 1 is
91:12 said to be active, while controller 2
91:14 is said to be
91:16 passive. When controller 1 fails or
91:18 loses access to LUN 0, the ownership
91:21 of LUN 0 is transferred from
91:23 controller 1 to controller 2.
91:25 Controller 2 now owns LUN 0, and it
91:28 services the host I/O meant for LUN 0.
91:31 Storage vendors may call their dual
91:33 controller storage arrays active-
91:35 active because both the controllers are
91:38 actively servicing input/output
91:40 operations at a given
91:42 time. However, it is important to note
91:44 that both these controllers may not be
91:46 servicing a particular
91:48 LUN. As a matter of fact, each of the
91:51 controllers will actively service a
91:53 different LUN, and therefore the storage
91:55 array is really an active-passive
91:58 array. This unequal way of accessing the
92:01 LUNs is called asymmetric logical unit
92:04 access, or
92:07 ALUA. In a dual controller storage array,
92:10 the workloads are normally uniformly
92:11 distributed across the two controllers
92:14 by assigning the even-numbered LUNs to
92:15 controller 1 and the odd-numbered LUNs
92:18 to controller
92:24 2. As a result, the even-numbered LUNs will be active in controller 1 and
92:26 passive in controller 2, whereas the odd-numbered
92:29 LUNs will be active in controller
92:31 2 and passive in controller
92:35 1. If a controller fails, the surviving
92:38 controller will become active for all
92:40 the LUNs, including the LUNs that were
92:42 assigned to the failed controller, and
92:44 will process the host I/Os meant for all
92:47 the
92:49 LUNs.
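The even/odd LUN assignment and the failover behavior just described can be sketched in a few lines of Python. This is a simplified, illustrative model with made-up function names; real arrays implement this logic in controller firmware:

```python
# Simplified model of LUN ownership in a dual controller array.
# Names and structure are illustrative, not from any vendor's firmware.

def assign_luns(lun_ids):
    """Even-numbered LUNs are active on controller 1, odd-numbered on controller 2."""
    return {lun: 1 if lun % 2 == 0 else 2 for lun in lun_ids}

def fail_over(ownership, failed_controller):
    """After a controller failure, the survivor becomes active for every LUN."""
    survivor = 2 if failed_controller == 1 else 1
    return {lun: survivor for lun in ownership}

owners = assign_luns(range(4))
print(owners)                          # {0: 1, 1: 2, 2: 1, 3: 2}

owners = fail_over(owners, failed_controller=1)
print(owners)                          # {0: 2, 1: 2, 2: 2, 3: 2}
```

After the failover, the surviving controller services the host I/Os for all four LUNs, which is exactly why its load can double when its peer fails.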
92:52 In the previous unit we saw that the DRAM cache in the controller
92:54 accelerates the performance of the
92:55 storage array through read cache and
92:58 write-back cache technologies. The DRAM
93:01 cache is protected with a battery-backed
93:03 power source to prevent data loss in the
93:05 event of an unexpected power
93:08 failure. But how can the dual controller
93:10 array hold the write cache when the DRAM
93:13 memory itself
93:14 fails? The dual controller array does
93:17 this by mirroring the contents of the
93:19 write cache on the two
93:21 controllers. So each controller will have
93:24 its own write cache and also a mirrored
93:26 write cache of the other
93:28 controller. In essence, there are two
93:30 copies of the write cache that are
93:32 physically isolated from each other to
93:34 prevent data loss should a controller
93:37 fail. If a controller fails, the dual
93:39 controller array will no longer be able
93:41 to mirror the write cache, and it will
93:44 resort to write-through caching
93:46 mode. In write-through caching mode, the
93:49 surviving controller doesn't acknowledge
93:51 the write operation to the host computer
93:53 until the data is written to the hard
93:56 disk. The major problem with the dual
93:59 controller storage array architecture is
94:01 the irrelevance of the write-back cache when
94:03 one of the two controllers
94:05 fails. In such a situation, the surviving
94:08 controller switches from the write-back
94:10 caching mode to the write-through
94:12 caching
94:13 mode. This is because if the surviving
94:15 controller also fails, then the data
94:17 written to the cache memory will be lost
94:19 forever without being written to the
94:21 hard
94:22 disks. Writing to disk without using the
94:25 write-back cache takes a significantly
94:27 longer time, resulting in degraded
94:30 performance. The worst is yet to come
94:32 when the surviving controller just can't
94:34 handle the additional I/O load, because it
94:36 may be fully utilized by its own I/O
94:39 load. This is possible because I/O load
94:42 increases over time along with the
94:44 capacity that is added to the storage
94:45 array. So making a surviving controller
94:48 handle the workload of two controllers
94:50 is not effective, and scheduling
94:53 downtime becomes necessary to restore the
94:55 failed
94:56 controller.
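The fallback from write-back to write-through caching described above can be modeled with a short Python sketch. This is purely illustrative; the class and its behavior are simplified assumptions, not vendor code:

```python
# Simplified model of write-back vs write-through acknowledgment
# in a dual controller array. Purely illustrative, not vendor code.

class DualControllerArray:
    def __init__(self):
        self.controllers_alive = 2
        self.disk = []      # data that has reached the hard disks
        self.cache = []     # data held only in the (mirrored) DRAM cache

    @property
    def mode(self):
        # With both controllers up, the write cache can be mirrored, so
        # write-back is safe; with one controller, fall back to write-through.
        return "write-back" if self.controllers_alive == 2 else "write-through"

    def write(self, block):
        if self.mode == "write-back":
            self.cache.append(block)   # acknowledge now, destage to disk later
        else:
            self.disk.append(block)    # acknowledge only after the disk write
        return "ack"

    def controller_failure(self):
        self.controllers_alive -= 1
        # The survivor can no longer mirror the cache: flush it to disk.
        self.disk.extend(self.cache)
        self.cache.clear()

array = DualControllerArray()
array.write("A")                 # fast: lands in the mirrored cache
array.controller_failure()       # cached data destaged, mode switches
array.write("B")                 # slower: goes straight to disk
print(array.mode, array.disk)    # write-through ['A', 'B']
```

The write of "B" is acknowledged only after it reaches disk, which is the degraded-performance situation the lesson warns about.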
94:59 Now let's look at the grid storage architecture. Grid storage
95:01 architecture consists of more than two
95:03 controllers controlled by management
95:05 software. The grid storage architecture
95:08 functions as a single system even though
95:10 it has storage units that are
95:11 geographically distributed across
95:13 multiple
95:15 locations. Grid-based architectures are
95:17 characterized by the ability to provide
95:19 scalability of both performance and
95:22 capacity. These architectures are
95:24 often referred to as scale-out
95:27 architecture. In grid storage
95:29 architecture, the controllers can be
95:31 heterogeneous in nature, meaning
95:33 controllers from different vendors
95:34 should fit into the
95:36 system. Grid architecture has the ability
95:39 to recover from failures without
95:41 degrading
95:43 performance. Unlike dual controller
95:45 storage arrays, grid-based storage
95:47 arrays function in an active-active mode,
95:50 where more than one controller can own a
95:51 particular LUN and write to it.
95:58 A grid-based storage array can also be described as a cluster storage array
96:00 because of its scalability in both
96:02 performance and capacity, but a cluster
96:05 storage array may not be a grid storage
96:08 array, because it may not be distributed
96:12 geographically. Let's try to explain the
96:14 concept of scalability in the grid
96:16 storage architecture with the help of an
96:18 example. In our diagram we have a grid
96:21 storage array initially with two
96:22 controllers.
96:24 Though it looks like a dual controller
96:25 storage array, we can scale its
96:27 performance by adding more controllers
96:29 and also scale its capacity by adding
96:31 more hard disk
96:33 drives. As the number of controllers
96:35 increases, so does the processing power
96:37 and cache of the storage array. As we
96:40 mentioned before, regardless of the
96:42 number of controllers added to the grid
96:44 architecture, it operates as a single
96:47 system. One of the key advantages of the
96:49 grid architecture is that there is no
96:51 degradation of performance even
96:53 when a controller fails, because the
96:55 surviving controllers will have
96:56 sufficient cache memory to mirror the
96:58 write cache, and as a result they will
97:01 service the host I/O using write-back
97:04 caching
97:05 mode. That brings us to the end of this
97:07 lesson. Let's summarize what you have
97:09 learned in this lesson. In this lesson
97:12 you learned what a dual controller
97:14 architecture is, and then we looked at
97:16 what a mid-range storage array is. We
97:18 also looked at the two types of modes in
97:20 which the controllers can work: active-
97:22 active and active-passive mode. We
97:24 then talked about how the DRAM cache
97:26 accelerates the performance of the
97:28 storage array. We also looked at the
97:30 major disadvantage of the dual
97:32 controller storage array, which is the
97:34 irrelevance of the write-back cache when one
97:36 of the two controllers
97:38 fails. Next we looked at the grid
97:40 architecture, and then we looked at the
97:42 scalability in grid storage
97:45 architecture. In the next lesson you will
97:47 learn about the basics of RAID. Thanks
97:49 for watching.
98:15 hello and welcome to unit 1, introduction to
98:16 RAID. In this lesson you will learn the
98:19 basics of RAID. We're going to start by
98:21 looking at what RAID is, and then we will
98:23 take a look at its brief
98:26 history. Next we will look at why there
98:28 is a need for RAID. We will then look at
98:31 RAID concepts, and these are RAID group,
98:35 parity, striping, mirroring, hot spare, and
98:39 hot
98:40 swap. Now let's begin with what RAID
98:44 is. RAID is an acronym that originally
98:47 meant redundant array of inexpensive
98:50 disks, but now it commonly represents
98:53 redundant array of independent
98:56 disks. So what exactly is a redundant
98:59 array of independent disks, or
99:02 RAID? RAID is a storage virtualization
99:05 technology that combines multiple drives
99:08 to create one or more logical drives for
99:11 providing redundancy and enhancing
99:14 performance. Let's explain this with the
99:16 help of an
99:19 example. On the slide you can see there
99:22 are nine physical drives. The RAID
99:24 technology is what virtualizes the
99:26 performance and capacity of these drives
99:29 to create logical
99:31 drives. In our example it creates two
99:34 logical drives of varying
99:37 capacities. There cannot be a better
99:39 preface to RAID than to find out how it
99:41 came into
99:44 existence. RAID came into existence in
99:47 the late 1980s, and it was invented by
99:50 David A. Patterson, Garth Gibson, and Randy H.
99:54 Katz at the University of California,
99:57 Berkeley. The acronym then stood for
100:00 redundant array of inexpensive disks, and
100:03 it was based on the inexpensive magnetic
100:05 disk technology that was developed for
100:07 personal
100:13 computers. Behind every invention there's
100:16 a need; this applies to RAID as
100:19 well. The purpose of RAID then was to
100:22 provide an attractive alternative to the
100:25 expensive mainframe drives in terms of
100:28 performance, reliability, scalability,
100:30 power consumption, and
100:33 cost. At that time it was felt that there
100:35 was an I/O crisis looming, because the performance
100:37 of the mainframe drives was modest
100:40 compared to the increasing performance
100:42 of CPUs and
100:45 memory. So the solution of the RAID
100:47 inventors was to have a collection, or an
100:51 array, of inexpensive magnetic disks,
100:53 which not only provided higher bandwidth
100:56 than the mainframe drives but also
100:58 included extra disks for storing
101:00 redundant data to recover the original
101:02 data when a disk
101:06 failed. Thus RAID was born. Now let's look at the concepts that
101:08 are fundamental to the understanding of
101:12 RAID. Let's start with the RAID
101:14 group. A RAID group is also known as a
101:17 RAID set or a RAID array. It is a group of
101:21 two or more physical drives that
101:23 are configured to work together in order
101:25 to provide data redundancy and increased
101:29 performance. Now let's look at parity. Parity
101:33 is a technique used for detecting errors
101:35 and correcting
101:37 them. In this technique, the parity data
101:40 is computed from the actual data in the
101:42 RAID
101:43 set. When a drive fails, the parity data
101:47 is used with the existing data to
101:49 reconstruct the lost
101:51 data. Parity data is constructed using
101:54 the Boolean operator called exclusive OR,
101:58 or
102:00 XOR. What XOR does is take two
102:03 inputs and produce one output. The
102:06 output of the XOR is based on the rule
102:09 that if the two inputs are identical,
102:12 then the output is
102:14 zero; otherwise, the output is
102:17 one. Let's explain how it is done with
102:20 the help of an
102:21 example. In our example on the slide we
102:24 have a RAID set with three drives: A, B,
102:28 and C. Our parity data will be computed
102:31 from the actual data that is stored in
102:34 drive A and drive B, and it will be
102:37 stored in drive
102:38 C. In the first row, drive A has bit 0
102:43 and drive B has bit 0. Since they have
102:46 identical data, applying XOR will give
102:49 us the parity data as bit 0, and it is
102:53 stored in the first row of drive
102:56 C. In the second row, drive A has bit 0
103:00 and drive B has bit 1. Since they don't
103:03 have identical data, applying XOR will
103:06 give us the parity data as bit 1, and
103:10 it is stored in the second row of drive
103:13 C. Similarly, in the third row, drive A and
103:17 drive B don't have identical data bits,
103:20 and XORing the bits will give us the
103:23 parity data as bit 1, and it is stored
103:26 in the third row of drive
103:28 C. The last row of our example has
103:31 identical data bits in drive A and drive
103:34 B, and XORing the bits will give us the
103:37 parity data as bit 0, and it is stored
103:41 in the last row of drive
103:43 C. If one of our drives fails, let's say
103:46 drive B fails, then we can reconstruct
103:49 the data in drive B using the existing
103:52 data in drive A and the parity data
103:54 that is available in drive
103:56 C. This is done by applying XOR to the data
104:00 bits in drive A and drive
104:02 C. In the first row of the table, drive A
104:06 has bit 0 and drive C has bit
104:09 0. Since they have identical data,
104:12 applying XOR will give us the
104:14 reconstructed data as
104:16 0. In the second row, drive A has bit
104:19 0 and drive C has bit 1. Since
104:23 they don't have identical data, applying
104:25 XOR will give us the reconstructed data
104:28 as
104:29 1. We can repeat these steps to
104:32 reconstruct all the data of drive B.
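The parity computation and rebuild walked through above take only a few lines of Python. The bit values below follow the slide's pattern; the exact bits in rows three and four are assumptions chosen to be consistent with the parity results described:

```python
# XOR parity for a three-drive RAID set, following the example above.
# Rows 3 and 4 use assumed bit values consistent with the described parity.
drive_a = [0, 0, 0, 1]
drive_b = [0, 1, 1, 1]

# Parity on drive C is the XOR of the data bits on drives A and B.
drive_c = [a ^ b for a, b in zip(drive_a, drive_b)]
print(drive_c)                # [0, 1, 1, 0]

# If drive B fails, rebuild it by XORing drive A with the parity on drive C.
rebuilt_b = [a ^ c for a, c in zip(drive_a, drive_c)]
print(rebuilt_b == drive_b)   # True
```

The same XOR trick works for any number of data drives: the parity bit is the XOR of all data bits in a row, and any single lost drive can be rebuilt by XORing together everything that survives.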
104:34 There are two types of parity in RAID:
104:38 dedicated parity and distributed
104:41 parity. In dedicated parity, the parity
104:44 data is stored on a separate drive,
104:47 whereas in distributed parity, the parity
104:50 data is spread across all the physical
104:51 drives.
104:54 Now let's look at
104:55 now let's look at striping striping is the technique of
104:58 striping striping is the technique of writing data by Distributing it equally
105:00 writing data by Distributing it equally across all the physical drives in a raid
105:04 across all the physical drives in a raid set when data is stored on all the
105:06 set when data is stored on all the physical drives it is said to be
105:09 physical drives it is said to be striped striping is done by dividing
105:12 striped striping is done by dividing data into chunks or
105:14 data into chunks or Stripes chunks can be bites or
105:17 Stripes chunks can be bites or blocks let's explain striping with the
105:20 blocks let's explain striping with the help of an
105:21 help of an example in our diagram we have two
105:24 example in our diagram we have two physical drives these two physical
105:27 physical drives these two physical drives support a single logical
105:29 drives support a single logical Drive the data to be written is split
105:32 Drive the data to be written is split into blocks let's say block zero block 1
105:36 into blocks let's say block zero block 1 block 2 and block
105:39 block 2 and block three the first block that is block zero
105:42 three the first block that is block zero is written to drive a the second block
105:46 is written to drive a the second block block one is written to drive
105:48 block one is written to drive B since there are no additional drives
105:51 B since there are no additional drives the third block that is block two is
105:54 the third block that is block two is written to drive a and the fourth block
105:57 written to drive a and the fourth block block three is written to drive B this
106:01 block three is written to drive B this ensures that the blocks are equally
106:03 ensures that the blocks are equally spread across all of the
106:05 spread across all of the drives when the same data needs to be
106:07 drives when the same data needs to be read the blocks can be accessed from all
106:10 read the blocks can be accessed from all the
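The round-robin placement in this example can be sketched as follows (an illustrative model only; a real controller stripes raw device sectors, not Python lists):

```python
# Minimal sketch of block-level striping: blocks are distributed
# round-robin across the drives of a RAID set.
def stripe(blocks, num_drives):
    drives = [[] for _ in range(num_drives)]
    for i, block in enumerate(blocks):
        drives[i % num_drives].append(block)
    return drives

# Four blocks striped across two drives, as in the example above:
drive_a, drive_b = stripe(["block0", "block1", "block2", "block3"], 2)
# drive A holds block0 and block2; drive B holds block1 and block3
```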
106:11 Striping improves the performance of a RAID array. When data is written in small chunks across all the physical drives, it makes the physical drives work in parallel to service the write operation. It also allows the data to be read in parallel.
106:28 The combined I/O performance of all these physical drives improves the performance of the RAID array.
106:35 Let's explain this with the help of an example. On the slide you can see a RAID set that comprises four physical drives. Let's say that each of these drives offers a throughput of 100 IOPS.
106:50 So in our example, the RAID set with four physical drives will give a total throughput of up to 400 IOPS.
106:59 It is worth noting that the size of the chunk affects the performance of the RAID set.
107:05 When the chunk size is small, many chunks will be striped across many physical drives, so there will be many drives working in parallel to service read and write operations. However, the downside is that as the number of chunks increases, the positioning time to access the chunks across all the drives increases.
107:26 On the other hand, when the chunk size is big, there will be a few chunks striped across a few drives. Though the positioning time to access the chunks reduces, there will be many simultaneous I/O operations across the few drives.
107:42 Choosing the chunk size should depend on the characteristics of the workload that the RAID set is required to handle. The determining factor is the average I/O size of the workload.
107:55 A rule of thumb is that if the average I/O size is big, then the chunk size should be small, and if the average I/O size is small, then the chunk size should be big.
108:05 For example, transaction environments such as those running databases involve a huge number of small read and write operations, and for such use cases the preferred chunk size is big. A big chunk size could be anything starting from 64 kilobytes.
108:25 On the other hand, applications such as video editing require a small number of big files to be read promptly, and for such use cases the preferred chunk size is small. A small chunk size could be anything between 512 bytes and 8 kilobytes.
108:45 Stripe width is the number of stripes, or chunks, that can be written or read simultaneously. It is equal to the number of physical drives present in the RAID set, so we can say stripe width equals the number of physical drives in the RAID set.
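As a sketch, the stripe-width rule and the earlier four-drive throughput example boil down to two small helpers (the function names are ours, not a standard API):

```python
# Back-of-the-envelope helpers for the two formulas in this lesson.
def stripe_width(num_drives):
    # Stripe width equals the number of physical drives in the RAID set.
    return num_drives

def aggregate_iops(num_drives, iops_per_drive):
    # Upper bound when all drives service I/O in parallel.
    return num_drives * iops_per_drive

# The earlier example: 4 drives at 100 IOPS each give up to 400 IOPS.
```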
109:03 Now let's discuss mirroring. Mirroring is a technique in which all data written to a physical disk drive is also written to another physical disk drive.
109:14 Mirroring provides redundancy, because if one physical disk drive fails, the data can still be recovered from the other physical disk drive. However, it is expensive, because it involves duplication of physical disk drives, which results in using only 50% of the total storage capacity.
109:38 In mirroring, the write operation can be marginally slow, because the data has to be written to two physical disk drives. However, mirroring provides improved read performance, as the data can be read in parallel from the physical disk drives.
109:53 Let's explain mirroring with the help of an example. In our diagram we have two physical disk drives, and these two physical drives support a single logical drive. Let the two physical drives be drive A and drive B.
110:06 When data is written to drive A, the same data will also be written to drive B. So when data 1 is written to drive A, it will also be written to drive B. Similarly, when data 2 is written to drive A, it will also be written to drive B, and so on.
110:26 If each of these two physical disk drives has a capacity of 500 GB, the logical disk drive will not have an aggregated capacity of 1 terabyte; it will only have a capacity of 500 GB. This is because there is a loss of 50% capacity due to data duplication.
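The mirrored-write behaviour and the capacity loss can be sketched like this (drive lists stand in for physical disks; names are illustrative):

```python
# Sketch of mirroring: every write goes to both drives of the mirrored
# pair, and usable capacity is that of a single drive.
drive_a, drive_b = [], []

def mirrored_write(data):
    drive_a.append(data)   # write to drive A...
    drive_b.append(data)   # ...and the identical copy to drive B

for item in ["data1", "data2", "data3"]:
    mirrored_write(item)
# drive_a and drive_b now hold identical data

def usable_capacity_gb(per_drive_gb):
    # Two mirrored 500 GB drives expose 500 GB, not 1 TB.
    return per_drive_gb
```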
110:46 Now let's see what hot spare and hot swap mean. A hot spare is a spare physical disk drive used to automatically replace a failed physical disk drive in a RAID array. Hot swap is the ability to replace the failed physical disk drive with a good drive without disrupting the functioning of a RAID array.
111:07 And that brings us to the end of this lesson.
111:11 Let's summarize what we have learned in this lesson. In this lesson we learned what RAID is, and then we looked at its brief history. Next we looked at why there is a need for RAID. We then looked at RAID concepts, and these are RAID group, parity, striping, mirroring, hot spare, and hot swap.
111:34 In the next lesson you will learn about the RAID levels. Thank you for watching.
112:04 Hello and welcome to unit 2, RAID levels. In this lesson you will learn about the RAID levels.
112:08 We're going to start by looking at what a RAID level is, and then we will be looking at the important RAID levels. These are RAID 0, RAID 1, RAID 10, RAID 01, RAID 5, and RAID 6.
112:26 Lastly, we will look at the two types of RAID implementation, and these are software RAID and hardware RAID.
112:35 Now let's look at RAID levels. So what is a RAID level? The array of drives in RAID can be arranged in various ways to provide a variety of choices that differ in performance and reliability. Each such arrangement is referred to as a RAID level.
112:55 There are many RAID levels, but we will focus on the important ones, and these are RAID 0, RAID 1, RAID 10, RAID 01, RAID 5, and RAID 6.
113:09 We will start with RAID 0. RAID 0 doesn't offer redundancy. The only thing that RAID 0 does is stripe the data equally across the RAID set.
113:23 RAID 0 requires a minimum of two physical drives. As we have seen before, striping improves the performance of the RAID set.
113:31 RAID 0 is not recommended for any critical data, as it doesn't offer data protection. However, it can be used along with other RAID levels that do offer data protection.
113:44 Now let's look at RAID 1. RAID 1 is based on mirroring. It requires at least one pair of physical drives. As we saw before, the data that is written to a physical drive is also written to another physical drive.
114:00 This means that the second physical drive contains an identical copy of the data on the first physical drive. Such a pair of physical drives is referred to as a mirrored pair.
114:13 So RAID 1 requires at least one pair of physical drives in which one drive acts as a duplicate of the other.
114:27 RAID 1 provides absolute redundancy, because even if a physical drive fails, the other physical drive continues to function, and the data in it is safe.
114:35 Write performance in RAID 1 is marginally affected, because data must be written twice. In contrast, the read performance improves, as data can be read in parallel from the physical drives.
114:52 RAID 1 is expensive, because twice the storage capacity must be purchased to provide the required storage space. For example, if the required storage capacity is 1 terabyte, then we need to purchase two physical drives, each with a capacity of 1 terabyte. So to provide 1 terabyte of storage capacity, we are actually purchasing a total capacity of 2 terabytes.
115:16 When a physical drive fails, the RAID set goes into a degraded mode. In degraded mode the performance of the RAID set is reduced, and there is a risk of losing data if another disk also fails.
115:32 RAID 1 allows a faster rebuild of the failed drive, because the complete data exists in the surviving physical drive.
115:45 Now let's look at RAID 10. RAID 10 is the combination of RAID 1 and RAID 0. In this arrangement, RAID 1 is first applied to the RAID set, followed by RAID 0. This provides the best of both levels.
115:57 When RAID 1 is applied first, we create a mirrored pair, providing redundancy for our data. On top of this mirrored pair, when we apply RAID 0, we take advantage of striping, which improves performance.
116:13 RAID 10 requires a minimum of four physical drives. In our example on the slide, the first two mirrored physical drives hold half of the striped data, and the second two mirrored physical drives hold the other half of the striped data.
116:31 In a nutshell, RAID 10 provides high performance with absolute redundancy. It is also expensive, because mirroring demands that we purchase twice the storage capacity needed.
116:49 Now let's look at RAID 01. RAID 01 is the combination of RAID 0 and RAID 1. It is not the same as RAID 10. In RAID 01, the RAID 0 level is applied first, and on top of that, RAID 1 is applied.
117:04 RAID 01 requires a minimum of four physical drives. The first two physical drives hold the striped data, and the second two physical drives mirror the first pair.
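One way to see the practical difference is a toy failure model, assuming the groupings shown on the slide: mirrored pairs (A, B) and (C, D) for RAID 10, and striped sets {A, B} and {C, D} mirrored against each other for RAID 01. This is a sketch of fault-tolerance behaviour, not a real array:

```python
# Toy model of two-drive failures across four drives A-D.
from itertools import combinations

MIRROR_PAIRS = [{"A", "B"}, {"C", "D"}]   # RAID 10: stripe of mirrors
STRIPE_SETS  = [{"A", "B"}, {"C", "D"}]   # RAID 01: mirror of stripes

def raid10_survives(failed):
    # Survives unless both drives of the same mirrored pair fail.
    return all(not pair <= set(failed) for pair in MIRROR_PAIRS)

def raid01_survives(failed):
    # One failure takes down its whole stripe set, so the array survives
    # only while at least one stripe set is completely intact.
    return any(not (s & set(failed)) for s in STRIPE_SETS)

two_drive_failures = list(combinations("ABCD", 2))
raid10_ok = sum(raid10_survives(f) for f in two_drive_failures)
raid01_ok = sum(raid01_survives(f) for f in two_drive_failures)
```

Under these assumptions, RAID 10 survives four of the six possible two-drive failure combinations, while RAID 01 survives only two, which is one reason the two layouts are not interchangeable.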
117:17 Now let's look at RAID 5. RAID 5 is block-level striping combined with a single distributed parity. It requires a minimum of three physical drives.
117:29 RAID 5 stripes blocks of data, along with parity, across three or more physical drives. So the chunks used for striping are blocks. The parity data is computed and spread across the physical drives as a single chunk of parity data.
117:49 RAID 5 is known for good performance, redundancy, and storage efficiency. The advantage of RAID 5 is that it can survive a single disk failure without losing data, because of the single distributed parity.
118:09 RAID controllers that support RAID 5 usually feature hot sparing and automatic rebuilding of the failed physical drive.
118:16 RAID 5 does have a capacity overhead, and it is calculated as 1 divided by the number of drives, multiplied by 100.
118:27 We saw earlier that the parity data, along with the existing data, is used to rebuild the lost data. But if there are many physical drives, then the rebuild of the failed drive takes more time.
118:44 Now let's look at RAID 6. RAID 6 is block-level striping with double distributed parity. As with RAID 5, the drives are striped with blocks of data, but in this case there is dual parity data distributed across the drives.
119:00 RAID 6 requires four or more physical drives. The advantage of RAID 6 is that it can survive two drive failures without losing data, because of the double distributed parity.
119:19 When it comes to performance, RAID 6 is marginally slower compared to RAID 5, but RAID 6 offers additional redundancy.
119:26 RAID 6, like RAID 5, does have a capacity overhead, and it is calculated as 2 divided by the number of drives, multiplied by 100.
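Both overhead formulas can be written as tiny helpers (a sketch; the function names are ours):

```python
# The capacity-overhead formulas from this lesson, as percentages of raw
# capacity consumed by parity.
def raid5_overhead_pct(num_drives):
    # One drive's worth of parity: (1 / number of drives) * 100
    return 1 / num_drives * 100

def raid6_overhead_pct(num_drives):
    # Two drives' worth of parity: (2 / number of drives) * 100
    return 2 / num_drives * 100

# e.g. with four drives, RAID 5 spends 25% of raw capacity on parity
# and RAID 6 spends 50%; adding drives shrinks the overhead.
```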
119:42 Now let's look at how RAID is implemented. There are two types of RAID implementation: software RAID and hardware RAID.
119:49 Software RAID is implemented at the operating system level, and all the functions of the RAID are carried out by the processor of the host computer.
120:03 Software RAID is best suited for simple RAID levels such as RAID 0, 1, and 10. There is no additional cost that needs to be incurred for software RAID, since it is a part of the operating system.
120:17 On the downside, a software RAID impacts system performance. The impact on overall system performance is substantial when it requires processing complex RAID levels such as RAID 5.
120:33 A software RAID is specific to an operating system, meaning its functionality depends on the specific operating system. This also creates compatibility issues, because a RAID set up in one operating system cannot generally be shared or accessed using other operating systems.
120:57 Hardware RAID is implemented using dedicated hardware that performs all the functions of the RAID. The dedicated hardware is called the hardware controller, because it controls the RAID array.
121:12 The hardware controller is like a miniature computer, because it has its own processor and memory.
121:20 A hardware RAID that is implemented in the host computer's hardware is either integrated into the motherboard of the host computer or provided as a controller card that is plugged into an expansion slot of the host computer.
121:37 All RAID operations are processed by the hardware controller, and as a result, a hardware RAID does not impact the performance of the host computer.
121:48 A hardware RAID offers advanced features, and these are write-back cache mode, hot spares, and hot swapping. Write-back cache dramatically improves the performance of the RAID array.
122:01 And that brings us to the end of this lesson.
122:05 Let's summarize what you have learned in this lesson. In this lesson you have learned what a RAID level is, and then we looked at the important RAID levels, which are RAID 0, RAID 1, RAID 10, RAID 01, RAID 5, and RAID 6.
122:24 Next we looked at the two types of RAID implementation, and these were software RAID and hardware RAID.
122:30 In the next lesson you will learn the basics of a storage area network, or SAN. Thank you for watching.
123:03 hello and welcome to unit 1 introduction to
123:04 to San in this lesson you will learn the
123:07 San in this lesson you will learn the basics of
123:08 basics of sand we're going to start by looking at
123:11 sand we're going to start by looking at what a San is and then we will take a
123:14 what a San is and then we will take a look at why we need a
123:16 look at why we need a sand next we will look at the evolution
123:19 sand next we will look at the evolution in data storage technology starting with
123:22 in data storage technology starting with the direct attached storage or
123:24 the direct attached storage or Das we will then look at the network
123:27 Das we will then look at the network attached storage or Nas which is the
123:29 attached storage or Nas which is the next stage of advancement in storage
123:33 next stage of advancement in storage technology after Nas it is the storage
123:35 technology after Nas it is the storage area network or san that has marked the
123:38 area network or san that has marked the next stage of evolution in storage
123:41 next stage of evolution in storage technology so we will see how San solves
123:44 technology so we will see how San solves the limitations of Das and then we will
123:47 the limitations of Das and then we will look at the media that connects the
123:48 look at the media that connects the servers to the storage devices in a
123:51 servers to the storage devices in a storage area area
123:53 storage area area network we will also talk about the
123:55 network we will also talk about the fiber channel technology that has become
123:58 fiber channel technology that has become extremely popular as media used for
124:02 extremely popular as media used for sand we will then introduce you to fiber
124:04 sand we will then introduce you to fiber channel sand or FC sand which is a sand
124:08 channel sand or FC sand which is a sand built using fiber channel
124:11 built using fiber channel technology lastly we will look at the
124:13 technology lastly we will look at the benefits of fiber channel
124:15 benefits of fiber channel sand now let's look at what a sand is so
124:19 sand now let's look at what a sand is so what is a sand sand stands stand for
124:22 what is a sand sand stands stand for storage area
124:23 storage area network storage area network is a
124:26 network storage area network is a high-speed Network whose main function
124:28 high-speed Network whose main function is to allow data transfer between the
124:31 is to allow data transfer between the computer systems and the storage devices
124:34 computer systems and the storage devices and also among the storage
124:36 and also among the storage devices why do we need San let's answer
124:40 devices why do we need San let's answer this question San represents the
124:42 this question San represents the evolution in data storage
124:46 evolution in data storage technology there has been progress in
124:48 technology there has been progress in the data storage technology from Das to
124:50 the data storage technology from Das to San
124:53 San in the traditional client server systems
124:55 in the traditional client server systems each server had its own storage and that
124:58 each server had its own storage and that storage was directly attached to the
125:00 storage was directly attached to the server either internally or
125:04 server either internally or externally such server attached storage
125:07 externally such server attached storage is referred to as direct attached
125:09 is referred to as direct attached storage or
125:12 storage or Das Das provides the servers with
125:14 Das Das provides the servers with high-speed exclusive access to the
125:17 high-speed exclusive access to the storage and it is preferred by small
125:19 storage and it is preferred by small companies considering cost and
125:26 performance however the disadvantage of Das is that it creates pockets of
125:29 Das is that it creates pockets of isolated storage that are not
125:30 isolated storage that are not efficiently
125:37 utilized for example when one server has plenty of free storage capacity
125:39 plenty of free storage capacity available another server may be running
125:41 available another server may be running out of space and by the direct attached
125:44 out of space and by the direct attached storage design the free capacity of the
125:47 storage design the free capacity of the servers cannot be
125:49 servers cannot be shared in addition to this when a
125:51 shared in addition to this when a business deploys more servers in a
125:53 business deploys more servers in a network not only is there an increase in
125:56 network not only is there an increase in the waste of storage capacity but there
125:58 the waste of storage capacity but there is also an increase in the complexity of
126:01 is also an increase in the complexity of managing isolated storage
126:07 resources the next stage of advancement in storage technology was the network
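the stranded-capacity problem just described is easy to see with a little arithmetic; the sketch below is purely illustrative, with hypothetical server names and capacities that are not from the course

```python
# Illustrative only: hypothetical servers with direct attached storage (DAS).
# Each tuple is (used_tb, total_tb) for one server's private disks.
servers = {"web1": (9, 10), "db1": (2, 10), "mail1": (7, 10)}

# With DAS, free space on one server cannot help another:
# web1 is nearly full even though plenty of capacity sits idle elsewhere.
das_free_per_server = {name: total - used for name, (used, total) in servers.items()}

# With a SAN, the same disks form one shared pool, so any server
# can draw on the combined free capacity.
pooled_free = sum(das_free_per_server.values())

print(das_free_per_server)  # {'web1': 1, 'db1': 8, 'mail1': 3}
print(pooled_free)          # 12
```

with DAS, web1 is stuck with 1 TB free while 11 more TB are stranded on its neighbors; pooled behind a SAN, all 12 TB are usable by whichever server needs them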
126:09 in storage technology was the network attached storage or
126:12 attached storage or Nas the concept of Nas is to decouple
126:15 Nas the concept of Nas is to decouple storage from the servers and make it a
126:17 storage from the servers and make it a centralized pool of shared storage that
126:20 centralized pool of shared storage that can be accessed by all the servers
126:22 can be accessed by all the servers connected to the
126:24 connected to the network technically speaking Nas is not
126:26 network technically speaking Nas is not a network but a storage array that is
126:29 a network but a storage array that is hooked up to an existing Network to
126:31 hooked up to an existing Network to provide shared
126:36 storage Nas can provide centralized shared storage with terabytes of storage
126:39 shared storage with terabytes of storage capacity however it doesn't provide the
126:42 capacity however it doesn't provide the high-speed data protection needed in
126:44 high-speed data protection needed in Enterprise environments because it
126:46 Enterprise environments because it typically sits on an existing shared
126:48 typically sits on an existing shared corporate Network and a complete data
126:50 corporate Network and a complete data backup will not only take a considerable
126:53 backup will not only take a considerable time but it will also bog down the
126:56 time but it will also bog down the network after Nas it was San that marked
126:59 network after Nas it was San that marked the next stage of evolution in storage
127:02 the next stage of evolution in storage technology San is a dedicated Network
127:05 technology San is a dedicated Network that transfers blocks of data at a high
127:08 that transfers blocks of data at a high speed to a storage device and it offers
127:10 speed to a storage device and it offers low latency for the io requests to
127:13 low latency for the io requests to access the storage
127:15 access the storage device in addition to that San allows
127:19 device in addition to that San allows several servers to connect to several
127:21 several servers to connect to several storage devices in order to share data
127:24 storage devices in order to share data and also allows the storage devices to
127:27 and also allows the storage devices to communicate with each
127:29 communicate with each other San helps to solve many of the
127:32 other San helps to solve many of the challenges faced in the traditional
127:34 challenges faced in the traditional server attached storage for example the
127:37 server attached storage for example the traditional server attached storage
127:39 traditional server attached storage cannot satisfy the ever increasing
127:41 cannot satisfy the ever increasing demands for storage capacity as it is
127:43 demands for storage capacity as it is not scalable due to the restrictions in
127:46 not scalable due to the restrictions in the number of storage devices that can
127:47 the number of storage devices that can be added to the
127:49 be added to the servers such problems can be solved by
127:52 servers such problems can be solved by San as it is scalable San allows many
127:55 San as it is scalable San allows many new storage devices to be added to the
127:57 new storage devices to be added to the network without adding new
127:59 network without adding new servers the storage devices in San can
128:02 servers the storage devices in San can be aggregated into a central pool of
128:04 be aggregated into a central pool of shared storage that can be accessed by
128:06 shared storage that can be accessed by the
128:08 the servers the server attached storage also
128:10 servers the server attached storage also doesn't provide High availability of
128:12 doesn't provide High availability of data because if a server goes down the
128:15 data because if a server goes down the data becomes inaccessible since the
128:17 data becomes inaccessible since the storage is tied to the server
128:21 storage is tied to the server San overcomes this problem because it
128:24 San overcomes this problem because it segregates the storage devices from
128:27 segregates the storage devices from servers so if a server goes down in a
128:29 servers so if a server goes down in a storage area network data is still
128:33 storage area network data is still accessible though San connects a
128:35 accessible though San connects a multitude of servers and storage devices
128:37 multitude of servers and storage devices the performance doesn't suffer because
128:40 the performance doesn't suffer because the network is characterized by both
128:42 the network is characterized by both highspeed and low latency
128:44 highspeed and low latency features the high-speed data transfer
128:47 features the high-speed data transfer and low latency of the io requests in
128:49 and low latency of the io requests in San can be compared to the high
128:51 San can be compared to the high performance of storage directly attached
128:54 performance of storage directly attached to a server as in direct attached
128:57 to a server as in direct attached storage so the io requests in San access
129:01 storage so the io requests in San access storage devices similar to
129:04 storage devices similar to das so in a nutshell San is a dedicated
129:07 das so in a nutshell San is a dedicated Network that is scalable and highly
129:10 Network that is scalable and highly available with the primary purpose of
129:12 available with the primary purpose of providing highspeed and low latency
129:14 providing highspeed and low latency access to storage
129:16 access to storage devices the next thing that we will talk
129:19 devices the next thing that we will talk about is the media that connects the
129:20 about is the media that connects the servers to the storage devices in a
129:22 servers to the storage devices in a storage area
129:24 storage area network media is the actual cables and
129:27 network media is the actual cables and physical wiring that connects the
129:29 physical wiring that connects the storage devices to the
129:31 storage devices to the servers the media is associated with a
129:34 servers the media is associated with a unique protocol and it is always managed
129:37 unique protocol and it is always managed by that
129:38 by that protocol the protocol specifies the
129:40 protocol the protocol specifies the format and sequence of data exchange
129:43 format and sequence of data exchange between the servers and the storage
129:46 between the servers and the storage devices fiber channel is a technology
129:49 devices fiber channel is a technology that has become extremely popular as a
129:51 that has become extremely popular as a medium used for sand the actual media
129:54 medium used for sand the actual media used in fiber channel technology can be
129:57 used in fiber channel technology can be different types of optical and
129:58 different types of optical and electrical transmission media such as
130:01 electrical transmission media such as fiber and copper
130:07 SANs are typically built using the fiber channel technology based
130:09 on a family of standards developed by the American National
130:13 Standards Institute or ANSI these standards define high-speed network technology
130:22 that provides fast data transfer rates that are commonly over
130:24 2 gigabits per second the standards also define the properties
130:30 of the media and how data is transmitted across the
130:34 media fiber channel has become a de facto standard in
130:37 SANs for connecting client computers and server computers to a
130:43 highly scalable voluminous data storage
130:46 a SAN built using fiber channel technology is called a
130:49 fiber channel SAN or FC SAN the goal of fiber
130:54 channel SAN is to increase the accessibility of data across
130:59 the organization since organizations have heterogeneous combinations of operating systems
131:05 such as Windows Unix Linux and OS/390 fiber channel was
131:09 designed to accommodate these operating systems and the applications that
131:15 run on them fiber channel SAN solves a fundamental problem
131:20 of reliably making terabytes of information available to hundreds of
131:27 servers for making data available to servers applications and users
131:33 across the enterprise FC SANs provide a network of storage
131:35 resources that decouple storage from individual servers
131:41 while direct attached storage and network attached storage may be
131:46 appropriate for small networks fiber channel SAN is most appropriate
131:50 for large storage networks
131:53 the benefits of fiber channel SAN are speeds up backup
131:56 and restore processes provides business continuity increases high availability and
132:06 provides storage consolidation let's look into these benefits one by one
132:12 speeds up backup and restore processes the growth of data
132:18 and its criticality in organizations has made it a valuable
132:20 business asset that needs protection and stability
132:25 FC SAN can speed up and simplify data backup processes
132:30 fiber channels are designed to transport large blocks of data
132:35 with great efficiency and reliability two popular SAN-based backup and
132:39 restore models are the LAN-free and server-free models we will
132:44 talk more about LAN-free and server-free in a separate module
132:50 titled backup and recovery
132:52 titled backup and Recovery provides business continuity
132:55 Recovery provides business continuity the distributed Network approach of FC
132:57 the distributed Network approach of FC Sands helps to recover data and bring it
133:00 Sands helps to recover data and bring it online in the event of a
133:03 online in the event of a disaster many organizations cannot
133:06 disaster many organizations cannot afford downtime of even a few minutes in
133:09 afford downtime of even a few minutes in such cases sand protects against
133:11 such cases sand protects against downtime in the following Ways by
133:13 downtime in the following Ways by ensuring that there is no single point
133:15 ensuring that there is no single point of failure by integrating failover
133:18 of failure by integrating failover software by rationalizing data backup up
133:21 software by rationalizing data backup up and recovery and enabling mirroring of
133:24 and recovery and enabling mirroring of data in remote
133:26 data in remote locations increases High
133:29 locations increases High availability the key availability
133:31 availability the key availability benefits of San include built-in
133:34 benefits of San include built-in redundancy Dynamic failover
133:37 redundancy Dynamic failover protection and automatic rerouting
133:41 protection and automatic rerouting capabilities with flexible connectivity
133:43 capabilities with flexible connectivity options a sand can be developed with no
133:46 options a sand can be developed with no single point of
133:47 single point of failure FC Sands also provide hot
133:51 failure FC Sands also provide hot pluging features that allow storage to
133:53 pluging features that allow storage to be plugged into the network without
133:55 be plugged into the network without experiencing any server
133:58 experiencing any server downtime provide storage
134:01 provides storage consolidation fiber channel storage area networks have become
134:05 a stronghold for companies that are looking to increase their
134:07 storage utilization and manageability while at the same time cutting
134:17 costs FC SAN allows any-to-any connectivity between heterogeneous servers and
134:19 storage systems this allows efficient use of servers and storage
134:24 resources by consolidating the widely distributed underutilized storage into centralized
134:32 storage though SAN has gained massive popularity because of fiber
134:36 channel technology the concept of SAN itself is not tied
134:42 to any form of technology so SAN can also be
134:45 built using other technologies such as ethernet-based technology like internet
134:49 SCSI or iSCSI
134:52 and that brings us to the end of this lesson
134:55 he and that brings us to the end of this lesson let's summarize what you have
134:57 lesson let's summarize what you have learned in this
134:59 learned in this lesson in this lesson you learned what a
135:02 lesson in this lesson you learned what a sand was and then we looked at why we
135:04 sand was and then we looked at why we need a sand next we looked at the
135:07 need a sand next we looked at the evolution in data storage technology
135:09 evolution in data storage technology starting with direct attached storage or
135:12 starting with direct attached storage or Das we then looked at network attached
135:14 Das we then looked at network attached storage or Nas which is the next stage
135:17 storage or Nas which is the next stage of advancement in storage technology
135:21 of advancement in storage technology after Nas it was sand that marked the
135:23 after Nas it was sand that marked the next stage of evolution in storage
135:25 next stage of evolution in storage technology so we saw how sand solves the
135:28 technology so we saw how sand solves the limitations of Das and then we talked
135:31 limitations of Das and then we talked about the media that connects the
135:32 about the media that connects the servers to the storage devices in a
135:34 servers to the storage devices in a storage area
135:36 storage area network we also talked about the fiber
135:38 network we also talked about the fiber channel technology that has become
135:40 channel technology that has become extremely popular as a media used for
135:42 extremely popular as a media used for sand we then introduced you to fiber
135:45 sand we then introduced you to fiber channel sand or FC sand which is a sand
135:48 channel sand or FC sand which is a sand built using fiber channel technology
135:51 built using fiber channel technology lastly we looked at the benefits of the
135:53 lastly we looked at the benefits of the fiber channel sand in the next lesson
135:56 fiber channel sand in the next lesson you will learn about the fiber channel
135:58 you will learn about the fiber channel architecture thank you for watching
136:24 hello and welcome to unit 2 fiber channel architecture in
136:28 this lesson you will learn about the fiber channel architecture
136:32 we're going to start by looking at what a fiber
136:35 channel is and then we will talk about the channels
136:37 and the networks in the context of fiber channel technology
136:41 next we will look at the features of fiber channel
136:45 and then we will look at why there is a
136:48 need for fiber channel technology we will look at how
136:54 fiber channel works and then we will look at the
136:56 fiber channel protocol before we get into the depth of
136:59 fiber channel we will look at the commonly used terms
137:04 in fiber channel and these are node port link and
137:09 frame we will then look at how fiber channel is
137:14 logically structured into layers and then we will look at
137:16 the functions of each layer next we will look at
137:20 the flow control in fiber channel SAN and then we
137:23 will talk about the two types of flow control that
137:27 can be implemented in fiber channel SAN these are end-to-end
137:31 flow control and buffer-to-buffer flow control lastly we will look
137:38 at the classes of services in fiber channel SAN and
137:42 these are class 1 class 2 class 3 class 4 and class F
137:51 class 3 class 4 and Class f now let's look at what a fiber channnel
137:53 now let's look at what a fiber channnel is fiber channel is a set of standards
137:56 is fiber channel is a set of standards that Define a high performance data
137:58 that Define a high performance data transmission technology that can
138:00 transmission technology that can transport data of varying types and
138:02 transport data of varying types and sizes at high speeds over computer
138:05 sizes at high speeds over computer peripherals and
138:11 networks the name fiber channel can be misleading because the words fiber and
138:14 misleading because the words fiber and channel could make us think that the
138:16 channel could make us think that the technology is limited to fiber optics
138:18 technology is limited to fiber optics and channels
138:21 and channels but the fact is fiber channel also
138:23 but the fact is fiber channel also supports copper as a transmission media
138:26 supports copper as a transmission media and Carries channel and network traffic
138:28 and Carries channel and network traffic with equivalent
138:31 with equivalent efficiency let's shed some light on
138:33 efficiency let's shed some light on channel and network a channel is a
138:36 channel and network a channel is a peripheral input output interface that
138:39 peripheral input output interface that allows direct data transfer between the
138:41 allows direct data transfer between the computer and the devices attached to
138:44 computer and the devices attached to it the primary purpose of channels is to
138:47 it the primary purpose of channels is to provide error-free data delivery through
138:49 provide error-free data delivery through Hardware support
138:52 Hardware support being hardware-based channels provide
138:54 being hardware-based channels provide High data transfer speeds for large data
138:57 High data transfer speeds for large data with minimum
138:58 with minimum overhead channels are typically used for
139:01 overhead channels are typically used for the communication of peripheral devices
139:03 the communication of peripheral devices with the host computer such as dis
139:05 with the host computer such as dis drives tape units and
139:08 drives tape units and printers on the other hand networks
139:10 printers on the other hand networks connect a large number of devices to
139:12 connect a large number of devices to each other allowing any device to
139:15 each other allowing any device to communicate with any other
139:18 communicate with any other device there is a high overhead
139:20 device there is a high overhead associated with data transfer because
139:22 associated with data transfer because the packets are routed to the correct
139:24 the packets are routed to the correct destination usually with software
139:26 destination usually with software support which slows down the
139:28 support which slows down the network the network is used for a wide
139:31 network the network is used for a wide range of tasks such as delivering data
139:33 range of tasks such as delivering data that requires error-free delivery as
139:36 that requires error-free delivery as well as data that requires ontime
139:38 well as data that requires ontime delivery for example voice data transfer
139:41 delivery for example voice data transfer requires ontime
139:43 requires ontime delivery fiber channel overcomes the
139:46 delivery fiber channel overcomes the limitation of network data transfer by
139:49 limitation of network data transfer by combining the best of both Channel
139:50 combining the best of both Channel channel and network
139:52 channel and network Technologies as a result channel and
139:54 Technologies as a result channel and network protocols can share the same
139:56 network protocols can share the same transmission
139:58 transmission media fiber channel can be compared to a
140:00 media fiber channel can be compared to a telephone Network in that it provides a
140:03 telephone Network in that it provides a temporary direct connection between the
140:05 temporary direct connection between the nodes with the option of utilizing the
140:07 nodes with the option of utilizing the full bandwidth of the transmission media
140:10 full bandwidth of the transmission media as long as the connection
140:12 as long as the connection exists it is implemented as an interface
140:15 exists it is implemented as an interface that acts as a general-purpose transport
140:17 that acts as a general-purpose transport vehicle for simultaneously delivering
140:20 vehicle for simultaneously delivering command sets of various protocols such
140:22 command sets of various protocols such as IP scuzzy 3 and so
140:27 as IP scuzzy 3 and so on fiber channel allows data transfers
140:30 on fiber channel allows data transfers of varying sizes be it large data
140:33 of varying sizes be it large data transfer or small data
140:35 transfer or small data transfer as a result they are used in
140:38 transfer as a result they are used in transmitting data in Diversified systems
140:41 transmitting data in Diversified systems that range from workstations to
140:46 supercomputers the way fiber channel works is it defines the method of
140:48 works is it defines the method of transporting the data from one device to
140:51 transporting the data from one device to another device regardless of the type of
140:53 another device regardless of the type of data being
140:55 data being transmitted for example the devices can
140:57 transmitted for example the devices can be two computers on a network exchanging
141:00 be two computers on a network exchanging data or a computer sending data to its
141:02 data or a computer sending data to its peripheral device such as a dis
141:06 peripheral device such as a dis array let's discuss the important
141:08 array let's discuss the important features of fiber
141:10 features of fiber channel fast fiber channel provides
141:13 channel fast fiber channel provides high-speed data transfer rates with the
141:15 high-speed data transfer rates with the common speeds of 2 gbits per second 4
141:18 common speeds of 2 gbits per second 4 gbits per second 8 gbits per second and
141:21 gbits per second 8 gbits per second and 16 gbits per second flexible fiber
141:25 16 gbits per second flexible fiber channel allows multiple protocols to be
141:27 channel allows multiple protocols to be transported over a common transmission
141:30 transported over a common transmission media asynchronous fiber channel handles
141:34 media asynchronous fiber channel handles the transmission of both channel and
141:35 the transmission of both channel and network kinds of traffic in the very
141:38 network kinds of traffic in the very same protocol for the purpose of fully
141:40 same protocol for the purpose of fully utilizing the available
141:42 utilizing the available bandwidth long distance the cable
141:45 bandwidth long distance the cable lengths between devices can be from 30 m
141:48 lengths between devices can be from 30 m to 10
141:49 to 10 km we can have 30 m when using copper as
141:52 km we can have 30 m when using copper as the transmission medium and 10 km when
141:55 the transmission medium and 10 km when using fiber optics as the transmission
141:59 using fiber optics as the transmission medium
142:01 medium scalable more number of devices can be
142:04 scalable more number of devices can be connected a simple arbitrated Loop
142:06 connected a simple arbitrated Loop topology can have up to 126 devices
142:09 topology can have up to 126 devices connected and a switched fabric topology
142:12 connected and a switched fabric topology can support millions of
142:14 can support millions of devices reliable fiber channel provides
142:18 devices reliable fiber channel provides Superior data encoding and error check
142:20 Superior data encoding and error check checking mechanisms along with the
142:22 checking mechanisms along with the improved reliability of Serial
142:29 Communications Backward Compatible fiber channel is compatible with older
142:31 channel is compatible with older Technologies such as scuzzy and ethernet
142:34 Technologies such as scuzzy and ethernet using
142:40 Bridges standardized fiber channel is an industrywide standard so the products
142:42 industrywide standard so the products developed by different vendors will work
142:46 developed by different vendors will work together over the years the speed of
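as a side note on the scalability figures above, the 126-device and "millions of devices" limits fall out of fiber channel's address sizes; the numbers below are standard fiber channel background rather than something stated in this course, so treat them as a hedged aside

```python
# Background sketch (standard fiber channel addressing facts, not from
# the course itself): a switched fabric assigns each port a 24-bit
# address identifier, which is where "millions of devices" comes from.
fabric_addresses = 2 ** 24
print(fabric_addresses)  # 16777216

# An arbitrated loop uses an 8-bit AL_PA, but the encoding rules leave
# only 127 usable values; one is reserved for a fabric-facing port,
# giving the familiar limit of 126 loop devices.
usable_al_pas = 127
loop_devices = usable_al_pas - 1
print(loop_devices)  # 126
```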
142:48 over the years the speed of processors and peripheral devices
142:50 kept improving but the network and channel interconnect technologies such
142:55 as ethernet and SCSI couldn't cope with the improvement and
142:59 have become bottlenecks in system performance fiber channel was developed
143:04 as a solution to the growing need of an interconnect
143:08 technology that supported both high-speed and high volume data transfers
143:11 for network and storage services
143:18 fiber channel was the generic name given to the set
143:20 of standards developed by the committees accredited by the American
143:24 National Standards Institute ANSI these standards define the physical characteristics
143:30 of the transmission media and the connectors that connect the
143:32 devices and also describe the related network topologies
143:37 since fiber channel only provides a data transport mechanism the
143:42 standards define how to map the upper level protocols such
143:44 as SCSI and IP to the fiber channel data format
143:49 without replacing them the advantages of mapping the SCSI command
143:56 set to fiber channel when compared to SCSI are as
143:59 follows high-speed data transfers a greater number of devices can
144:01 be connected together and large distances are allowed between the devices
144:07 fiber channel protocol is the serial SCSI command protocol used
144:12 on fiber channel networks
144:14 Let's look at a few terms that are commonly used in Fibre Channel.
144:19 Fibre Channel devices are called nodes. A node is a generic name used to denote any device such as a disk drive, printer, workstation, scanner, etc.
144:31 Each node has at least one port that provides access to other nodes.
144:37 A port is the interface in a node used for external communication; a node will have at least one port.
144:46 Each port has two connections, designated as transmitter (TX) and receiver (RX).
144:59 Fibre Channel allows transmission of data over different kinds of electrical and optical cables.
145:03 The pair of cables that connects two nodes is called a link: one cable, plugged into TX, is used to carry information out of the node, and the other cable, plugged into RX, is used to receive information into the node.
145:18 Data is broken into frames, which are transmitted from one node to the other via a link.
145:25 The link can handle multiple types of frames; for example, a frame containing SCSI information can be sent along with a frame containing IP information, and so on.
145:36 A frame is the smallest unit of data that can be transferred across a link and is composed of a string of four contiguous encoded characters, such as data characters or special characters.
145:51 The frames are typically constructed by the node that transmits the frames.
145:55 A sequence in data transmission is one or more frames; an exchange is one or more sequences.
146:03 The main constituents of a frame are a header, a payload, and a cyclic redundancy checksum (CRC). In addition to that, a frame is surrounded by a start-of-frame delimiter and an end-of-frame delimiter.
146:21 The header contains source, destination, and other frame information; it tells from where the frame came and to where it is destined.
146:33 The start-of-frame and end-of-frame delimiters indicate the beginning and the end of a frame, respectively.
146:40 A payload contains the useful data carried by the frame and has a maximum length of 2,112 bytes.
146:51 Cyclic redundancy checksum, or CRC, is the scheme that detects errors by checking the integrity of the header and the payload.
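The frame layout just described can be sketched in a few lines of Python. This is a simplified illustration, not the real wire format: the delimiter and header widths are glossed over, and the standard library's CRC-32 stands in for the frame checksum. Only the 2,112-byte payload limit and the rule that the CRC covers the header and payload come from the lesson.

```python
import zlib
from dataclasses import dataclass

MAX_PAYLOAD = 2112  # maximum payload length in bytes, per the lesson


@dataclass
class FCFrame:
    """Illustrative frame: header + payload, with a CRC over both."""
    header: bytes    # source, destination, and other frame information
    payload: bytes   # the useful data carried by the frame

    def __post_init__(self):
        if len(self.payload) > MAX_PAYLOAD:
            raise ValueError("payload exceeds the 2,112-byte maximum")
        # the CRC checks the integrity of the header and the payload
        self.crc = zlib.crc32(self.header + self.payload)

    def is_intact(self) -> bool:
        """Re-check integrity the way a receiving node would."""
        return zlib.crc32(self.header + self.payload) == self.crc


frame = FCFrame(header=b"\x00" * 24, payload=b"SCSI data")
print(frame.is_intact())  # True
```

A receiver recomputes the checksum and compares it against the one carried in the frame; any corruption of the header or payload makes `is_intact()` return `False`.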
147:03 Fibre Channel is logically structured into five layers based on their functions, as shown in the figure.
147:08 These layers are similar to the layers of the OSI model and have interfaces for interlayer communication.
147:19 Let's discuss the functions of each layer.
147:20 FC-0 defines the physical interfaces, including the physical transmission media, connectors, and the parameters for optical and electrical data transmission.
147:34 FC-1 is the transmission protocol layer that does the serial encoding, decoding, and error control of the data.
147:44 FC-2 is the signaling protocol that breaks the data into frames for transport and reassembles them at the receiving end. It defines the framing rules for the data that is sent from one port to another, has mechanisms for managing three service classes, and controls the sequence of data transfer.
148:12 FC-3 provides a set of services that are common to multiple ports on a node.
148:16 FC-4 defines the interface mapping between the upper-layer protocols and the lower layers of Fibre Channel; examples of upper-layer protocols are SCSI and IP.
148:34 Next, we will look at flow control. Flow control restricts the flow of frames from the transmitter port to the receiver port in order to prevent overflow at the receiver port.
148:42 Fibre Channel uses a credit model to implement flow control. A credit is the maximum number of frames that can be transmitted to a recipient.
148:53 There are two types of flow control: end-to-end flow control and buffer-to-buffer flow control.
149:01 In end-to-end flow control, the credits are negotiated between the source end device and the destination end device before the data transmission takes place.
149:17 The source end device decreases its credits by one when it sends a frame to the destination end device.
149:21 On receipt of the frame, the destination end device sends an acknowledgement frame, or ACK frame, to the source end device.
149:30 When the source end device receives the acknowledgement frame, it increases its credits by one so that it can send one more frame.
149:39 Buffer-to-buffer flow control is used between an end device and a switch, or between end devices in a point-to-point topology. The credits are negotiated before data transmission takes place.
149:52 In buffer-to-buffer flow control, the receiver sends a receiver-ready frame to the sender when it is ready to receive the frames.
150:02 The sender decreases its credits by one for every frame sent and increases its credits by one for every receiver-ready frame it receives.
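Both schemes come down to the same credit bookkeeping: decrement on every frame sent, increment on every returning ACK (end-to-end) or receiver-ready (buffer-to-buffer) frame. A toy Python sketch of that bookkeeping; the class and method names are illustrative, not part of any Fibre Channel API.

```python
class CreditCounter:
    """Credit-based flow control: send only while credits remain."""

    def __init__(self, negotiated_credits: int):
        # credits are negotiated before data transmission takes place
        self.credits = negotiated_credits

    def send_frame(self) -> bool:
        if self.credits == 0:
            return False   # must wait: this is what prevents receiver overflow
        self.credits -= 1  # decrease by one for every frame sent
        return True

    def on_ack_or_rrdy(self) -> None:
        # an ACK (end-to-end) or receiver-ready (buffer-to-buffer) frame
        # restores one credit, allowing one more frame
        self.credits += 1


link = CreditCounter(negotiated_credits=2)
print(link.send_frame(), link.send_frame(), link.send_frame())  # True True False
link.on_ack_or_rrdy()
print(link.send_frame())  # True
```

The third `send_frame()` fails because both negotiated credits are in flight; the credit returned by the acknowledgement makes the next send possible.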
150:15 Next, we will look at the classes of service. Classes of service are different methods of communication between two nodes. There are five classes of service.
150:25 Class 1 is a dedicated connection between two nodes. It allows the nodes to communicate using the full bandwidth of the dedicated connection without being affected by other network traffic; as a result, the frames are guaranteed to be delivered in the order of their transmission.
150:42 End-to-end flow control is used in Class 1. Class 1 is good for high-throughput transactions.
150:51 Class 2 is a frame-switched service with no dedicated connection. Class 2 allows different nodes to share the bandwidth of the service by multiplexing frames.
151:01 Since there is no dedicated connection, a node can transmit frames to and receive frames from multiple nodes. The transmitting node receives an acknowledgement that confirms the frame delivery.
151:14 Class 2 uses both buffer-to-buffer and end-to-end flow control. Class 2 can be compared to typical LAN traffic, where order and on-time delivery are not crucial.
151:27 Class 3 is a frame-switched, connectionless service similar to Class 2, but it does not use acknowledgements to confirm frame delivery. Class 3 is also referred to as datagram service.
151:40 When frames are lost during delivery, the upper-layer protocol ensures retransmission. Class 3 uses buffer-to-buffer flow control.
151:54 Class 4 provides multiple dedicated connections from one node to many nodes at the same time. It differs from Class 1 because in Class 1 there is only one dedicated connection between two nodes, but in Class 4 a node may be connected to more than one node via separate logical circuits.
152:12 Each path provided by a logical circuit between two nodes receives only a portion of the bandwidth, because the bandwidth is shared among multiple logical circuits.
152:25 In Class 4 service, the frames are guaranteed to be delivered in the order of their transmission, and acknowledgements are used for delivered frames. End-to-end flow control is used in Class 4.
152:39 Class F is a reserved service used for switch-to-switch communications in a fabric. It is a connectionless service that provides acknowledgement for the delivery of packets.
152:51 It is used only for ports that connect two switches and for various tasks such as exchange routing, name service, and delivery of notifications between switches. Class F uses buffer-to-buffer flow control.
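The five classes can be summarized as a small table, using only the attributes mentioned above. One assumption is flagged in a comment: Class 1 is marked as acknowledged because it uses end-to-end flow control, which works via ACK frames; the lesson does not state this explicitly.

```python
# Classes of service as described in this lesson
CLASSES = {
    # Class 1: acks=True is inferred from its use of end-to-end flow control
    "1": {"connection": "dedicated",          "acks": True,  "flow_control": {"end-to-end"}},
    "2": {"connection": "frame-switched",     "acks": True,  "flow_control": {"end-to-end", "buffer-to-buffer"}},
    "3": {"connection": "connectionless",     "acks": False, "flow_control": {"buffer-to-buffer"}},
    "4": {"connection": "multiple dedicated", "acks": True,  "flow_control": {"end-to-end"}},
    "F": {"connection": "switch-to-switch",   "acks": True,  "flow_control": {"buffer-to-buffer"}},
}

# e.g. which classes confirm delivery with acknowledgements?
print(sorted(c for c, props in CLASSES.items() if props["acks"]))  # ['1', '2', '4', 'F']
```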
153:07 And that brings us to the end of this lesson.
153:11 Let's summarize what you have learned in this lesson.
153:13 In this lesson, you learned what Fibre Channel is, and then we talked about channels and networks in the context of Fibre Channel technology.
153:23 Next, we looked at the features of Fibre Channel, and then we looked at why there is a need for Fibre Channel technology. We also looked at how Fibre Channel works, and then we looked at the Fibre Channel Protocol.
153:36 Next, we looked at the commonly used terms in Fibre Channel technology: node, port, link, and frame.
153:45 We then looked at how Fibre Channel is logically structured into layers, and then we looked at the functions of each layer.
153:52 Next, we looked at flow control in a Fibre Channel SAN, and then we looked at the two types of flow control that can be implemented in a Fibre Channel SAN: end-to-end flow control and buffer-to-buffer flow control.
154:06 Lastly, we looked at the classes of service in a Fibre Channel SAN: Classes 1, 2, 3, 4, and F.
154:17 In the next lesson, you will learn about the components of a Fibre Channel SAN. Thank you for watching.
154:45 Hello and welcome to Unit 3, Components of FC SAN.
154:48 In this lesson, you will learn about the components of a Fibre Channel SAN.
154:54 We will look at the components of a Fibre Channel storage area network, and these include servers, host bus adapters, cables, hubs, switches, bridges, and storage.
155:08 When we talk about the host bus adapter, or HBA, we will look at what the worldwide name is, and then we will look at the two types of worldwide name: the worldwide node name, or WWNN, and the worldwide port name, or WWPN.
155:34 In the context of the HBA, we will look at what a gigabit interface converter, or GBIC, is, and then we will look at what a small form-factor pluggable, or SFP, is.
155:44 We will also look at the types of SFPs: electrical SFP and optical SFP.
155:52 We will cover the optical SFP in detail when we talk about the connectivity of the fiber optic cables to the Fibre Channel devices.
156:01 Next, we will look at the converged network adapter, which is a PCI expansion card that combines the functionality of both the host bus adapter and the network interface card.
156:13 When talking about the converged network adapter, we will look at what data center bridging, or DCB, is.
156:21 When talking about data cables, we will take a detailed look at the types of cables that are used in a Fibre Channel SAN: copper cables and fiber optic cables.
156:33 We will also look at what fiber optic connectors are, and then we will look at the two most popular fiber optic connectors: the Lucent connector and the subscriber connector.
156:46 When talking about the Fibre Channel switch, we will also look at what an inter-switch link, or ISL, is.
156:52 Lastly, we will look at what a Fibre Channel port is, and then we will look at the different port names: N_Port, F_Port, E_Port, G_Port, and U_Port.
157:06 Now let's start with the components of a Fibre Channel storage area network.
157:11 The components of a Fibre Channel storage area network are servers, host bus adapters, cables, hubs, switches, bridges, and storage.
157:23 Let's look at them one by one.
157:26 Servers in a SAN environment do not have storage tied to them; as data management is handled by the storage devices, such as storage arrays, the servers can efficiently handle their other tasks.
157:39 The next component we will be looking at is the host bus adapter, or HBA.
157:44 A host bus adapter is an I/O adapter, typically in the form of a PCIe expansion card or a component on the motherboard, that connects the server's memory bus to its I/O bus.
157:58 The I/O adapter accepts input and generates output in a specific format.
158:06 The I/O bus refers to the path used for transferring data and control information, either to a storage device or to a network.
158:15 An HBA typically is a standard PCI expansion card that is plugged into an expansion slot of the server.
158:23 It connects the server to a storage device, either as direct-attached storage or as networked storage.
158:32 A host bus adapter looks like a network interface card, or NIC.
158:37 Compared to an HBA, a NIC only frames packets and controls the flow of data at the data link layer; a NIC depends on the server CPU for other tasks, such as protocol processing.
158:51 Each entity in a Fibre Channel network is uniquely identified by a 64-bit address called a worldwide name.
158:59 Worldwide names are represented as eight hexadecimal pairs separated by colons, as shown in the slide.
159:06 There are two types of worldwide names: the worldwide node name, or WWNN, and the worldwide port name, or WWPN.
159:21 The worldwide node name uniquely identifies each device on the Fibre Channel network; the worldwide port name uniquely identifies each port in a device.
159:29 Each HBA has a unique worldwide node name and a unique worldwide port name.
159:36 The worldwide node name identifies the entire device, in this case the entire HBA; the worldwide port name identifies each port, in this case the HBA port.
159:50 The worldwide name is similar to the MAC address of a NIC, but unlike a MAC address, it cannot be used to transport frames on the network.
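Rendering a 64-bit worldwide name as eight colon-separated hexadecimal pairs, as just described, is a small exercise in Python. The sample value below is made up purely to show the formatting; it does not belong to any real device.

```python
def format_wwn(wwn: int) -> str:
    """Render a 64-bit worldwide name as eight hex pairs separated by colons."""
    raw = wwn.to_bytes(8, "big")  # 64 bits -> 8 bytes, most significant first
    return ":".join(f"{b:02x}" for b in raw)


# Hypothetical WWN value, chosen only to illustrate the notation
print(format_wwn(0x21000024FF3DE18A))  # 21:00:00:24:ff:3d:e1:8a
```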
160:02 An HBA contains a processor, memory, and firmware.
160:03 The HBA frees the server CPU by offloading critical tasks such as protocol processing and error detection.
160:12 During the offload process, the HBA caches the incoming data into memory to improve performance.
160:17 In addition to that, the intelligence of the HBA's firmware takes care of useful features such as load balancing and failover.
160:28 The biggest advantage of the HBA is that it gives a throughput close to the data link speed with a negligible impact on the CPU.
160:38 The BIOS in the Fibre Channel HBA allows diskless servers to boot from the storage array that holds the operating system.
160:45 When a diskless server is powered on, the HBA BIOS of that server connects to the SAN and locates the storage array that has the operating system.
160:56 Since Fibre Channel HBAs operate at 1 or 2 gigabits per second, booting from SAN shouldn't be a problem.
161:05 The host bus adapter contains the gigabit interface converter, or GBIC, into which the network cable is plugged.
161:11 These networking cables can be either fiber optic cable or copper cable.
161:17 A GBIC is a hot-swappable transceiver that takes care of the conversion between the electrical signals used by the HBA and either the electrical or the optical signals suitable for transmission over the network cables.
161:31 The purpose of the GBIC was to provide a gigabit port that can support a number of Fibre Channel media types.
161:39 The upgraded version of the GBIC is called the small form-factor pluggable, or SFP. An SFP module is half the size of a GBIC but provides double the number of ports.
161:52 SFP modules are also called mini-GBICs.
161:57 There are two types of SFPs: electrical SFP and optical SFP.
162:03 The electrical SFP connects the HBA to the copper networking cable, and the optical SFP connects the HBA to the fiber optic networking cable.
162:13 The optical SFP has a light-emitting diode or a laser diode that emits the optical signals used for transmission of data over the fiber optic cables.
162:25 The optical SFP is the type of SFP that is dominant in the market.
162:30 We will talk more about the optical SFP when we talk about the connectivity of the fiber optic cables to the Fibre Channel devices.
162:39 channel devices the next thing we will talk
162:41 devices the next thing we will talk about is the converged network
162:43 about is the converged network adapter the converged network adapter is
162:46 adapter the converged network adapter is a PCI expansion card that combin the
162:49 a PCI expansion card that combin the functionality of both the host bus
162:51 functionality of both the host bus adapter and the network interface card
162:56 the converged network adapter supports simultaneous LAN and SAN traffic over a shared Ethernet link
163:02 in addition to that it offloads the protocol processing from the server CPU
163:07 let's say we want to connect our server to both fiber channel SAN and Ethernet LAN
163:14 in a traditional setup the server would need two adapters a fiber channel host bus adapter and an Ethernet NIC
163:21 as you can see in the diagram the server can access both the storage area network and the local area network
163:26 but with the help of two different adapters that is a fiber channel HBA and a NIC respectively
163:34 we mentioned that a CNA provides the functionality of both a fiber channel HBA and a NIC in a single adapter
163:42 now in the diagram you can see that the server is using only one converged network adapter that connects to a DCB switch
163:48 DCB or Data Center Bridging refers to the enhancements made to the Ethernet protocol
163:55 in order to allow Ethernet-based LAN switching to support fiber channel traffic
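As an aside not in the original lesson, the adapter consolidation just described can be sketched as a tiny Python model. The function name and adapter labels are assumptions for illustration only:

```python
# Illustrative sketch (not from the course): how many adapters a server
# needs for combined LAN + SAN access, with and without a CNA.

def adapters_needed(traffic_types, use_cna=False):
    """Return the adapter names a server needs for the given traffic.

    traffic_types: a set containing "lan" and/or "san".
    With a CNA, both traffic classes share one converged adapter over a
    single Ethernet link to a DCB switch; without one, each traffic
    class needs its own dedicated adapter.
    """
    if use_cna:
        return ["CNA"]  # one adapter carries both LAN and SAN traffic
    mapping = {"lan": "Ethernet NIC", "san": "Fibre Channel HBA"}
    return [mapping[t] for t in sorted(traffic_types)]

# Traditional setup: two adapters, one per network.
print(adapters_needed({"lan", "san"}))                # ['Ethernet NIC', 'Fibre Channel HBA']
# Converged setup: a single CNA connected to a DCB switch.
print(adapters_needed({"lan", "san"}, use_cna=True))  # ['CNA']
```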
163:57 now let's talk about the data cables that are used in a fiber channel storage area network
164:04 we know that a cable is one or more wires with a protective casing used for transmitting either electricity or signals
164:11 a data cable is a cable that physically connects the computers and devices for the purpose of data communication
164:20 there are two types of cables that enable data transmission in a fiber channel storage area network
164:26 copper cables and fiber optic cables
164:29 the early implementations of fiber channel used copper cables
164:34 copper cables use electrical signals to transmit data
164:38 the data transmission distance of copper cables in fiber channel cabling is limited to a maximum of 30 m
164:46 so they are ideally used within buildings
164:49 let's look at a few characteristics of copper cables
164:53 they are inexpensive they use electrical pulses to transfer data and they can be used for short-distance data transmission
165:01 copper cables have their downsides
165:06 data transmission in copper cable is disrupted when exposed to electromagnetic interference
165:15 there is also a loss of signal strength when the signal travels long distances
165:17 copper cables are not immune to external noise
165:22 and copper cables have a lower bandwidth than fiber optic cables
165:26 now let's look at the types of copper cables
165:29 there are four types of copper cables coaxial twinax unshielded twisted pair or UTP and shielded twisted pair or STP
165:42 coaxial cable has a solid copper wire at its center
165:45 it is surrounded by a plastic layer of insulation and this insulation is covered by a braided metal shield that protects against electromagnetic interference
165:57 a final layer of insulation covers the braided metal shield
166:02 twinax has two solid copper wires at its center instead of one
166:07 it is the only type of cable that is widely used in fiber channel copper cabling
166:16 unshielded twisted pair cable has four pairs of twisted copper wires
166:19 each of these wires is covered by color-coded plastic insulation
166:23 all the wires are bundled inside a plastic jacket
166:27 both ends of a UTP cable are terminated using an RJ45 connector
166:33 one end of the UTP cable with the RJ45 connector is plugged into a computer's NIC
166:39 and the other end with the RJ45 connector is plugged into a female RJ45 port
166:43 there are different categories of UTP cables
166:49 at present there are six categories based on data transmission rates
166:54 we will look at the popular ones Cat 5 Cat 5e and Cat 6
167:02 here Cat is the abbreviation of the word category and the e in Cat 5e stands for enhanced
167:08 Cat 5 is capable of a data transmission rate of 100 megabits per second
167:14 Cat 5e is capable of a data transmission rate of 1 gigabit per second
167:20 Cat 6 is capable of a data transmission rate of 10 gigabits per second
167:26 a shielded twisted pair cable has four pairs of twisted copper wires with each pair shielded with foil
167:34 and they are all bundled inside a braided jacket
167:36 STP cables are costlier than UTPs and support higher transmission rates across long distances
167:44 copper cables are seldom used in fiber channel SANs
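The UTP category ratings just quoted can be captured in a small Python lookup, an illustrative sketch that is not part of the course material (the function name and table are assumptions for the example):

```python
# Hedged sketch: the UTP category data rates quoted in the lesson,
# expressed as a lookup from category to rate in megabits per second.

UTP_RATES_MBPS = {
    "cat5": 100,       # 100 Mb/s
    "cat5e": 1_000,    # 1 Gb/s ("e" stands for enhanced)
    "cat6": 10_000,    # 10 Gb/s
}

def min_category(required_mbps):
    """Return the slowest listed category that still meets a required rate."""
    candidates = [c for c, rate in UTP_RATES_MBPS.items() if rate >= required_mbps]
    return min(candidates, key=UTP_RATES_MBPS.get) if candidates else None

print(min_category(500))     # cat5e
print(min_category(10_000))  # cat6
```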
167:49 fiber optic cable is typically used for cabling in FC SANs
167:54 however fiber optic cables are costlier than copper cables
167:58 since fiber channel devices are not compatible with copper cables
168:01 an SFP transceiver is used to interface between a fiber channel device and a copper cable
168:09 now we will talk about fiber optic cables
168:12 fiber optic cables use light pulses to transmit data
168:17 depending on the type of optical cable the data transmission distance can be up to 2 km or up to 10 km
168:26 now let's look into the composition of a fiber optic cable
168:31 a fiber optic cable is made up of one or more optical fibers
168:36 an optical fiber is the medium through which light signals are transmitted from one place to another
168:42 these signals are digital pulses or continuously modulated analog streams of light
168:47 that represent information such as data video and audio
168:54 each optical fiber consists of a core a cladding layer and a protective buffer layer as shown in the diagram
169:02 the main part of an optical fiber is the core through which light travels
169:08 the core is made up of an extremely thin flexible strand of pure glass with a diameter between 8.3 microns and 10 microns
169:15 a micron is a thousandth of a millimeter
169:21 data is transmitted through the core in the form of light signals
169:26 the core is covered by a layer of a different type of glass called cladding
169:30 the cladding has a lower refractive index than the core
169:33 which keeps the light signals contained inside the core by bouncing them back and forth when the light signals hit the cladding
169:42 the protective buffer layer is made of plastic material that protects the core and the cladding from any damage
169:50 there are two types of buffer tight and loose tube
169:55 in a tight buffer the protective layer is coated on each of the optical fibers in the cable
170:01 in a loose tube buffer one or more optical fibers are placed inside a tough plastic tube
170:07 which is then filled with protective gel to provide cushioning
170:13 optical fibers inside a fiber optic cable are often designated as a ratio of core size to cladding size
170:22 the core size is the diameter of the core and the cladding size is the outer diameter of the cladding
170:29 typical core size by cladding size ratios are 9 by 125 50 by 125 and 62.5 by 125
170:46 in addition to the core size by cladding size ratio colors are used to differentiate fiber optic cables
170:51 for example the jacket colors yellow orange and slate gray are used to identify the core diameters of 9 50 and 62.5 microns respectively
171:09 the primary advantages of using fiber optic cables are
171:12 higher bandwidth more information can be transmitted in less time
171:16 less attenuation lower signal loss than copper
171:22 no electrical interference not affected by electromagnetic interference
171:31 let's see how fiber optic cable works
171:32 a laser diode or LED emits light signals into the fiber optic cable
171:39 these light signals travel through the core by bouncing back and forth on the edges of the core covered by cladding
171:50 this is because when a light signal is sent at a certain angle to hit the core
171:52 it propagates through the core by being reflected back and forth as if the edges of the core were a mirror
171:57 this process is called total internal reflection
172:07 transmission of light occurs due to total internal reflection
172:10 though there is no loss of optical power at the core-cladding interface the light is still lost while it travels through the core
172:16 this loss of light is called attenuation
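The total internal reflection just described can be put in numbers with Snell's law. This is a physics sketch not taken from the course; the refractive index values below are typical silica-fiber figures assumed for illustration. Note that it only works because the core's index is higher than the cladding's:

```python
import math

# Illustrative sketch: total internal reflection requires the core's
# refractive index to be higher than the cladding's. The critical
# angle (measured from the normal) follows Snell's law:
#     sin(theta_c) = n_cladding / n_core
# Light hitting the core-cladding boundary at more than theta_c is
# reflected back into the core instead of escaping.

def critical_angle_deg(n_core, n_cladding):
    if n_core <= n_cladding:
        raise ValueError("no total internal reflection: core index must exceed cladding index")
    return math.degrees(math.asin(n_cladding / n_core))

# Assumed example indices for a silica fiber (not from the lesson).
theta = critical_angle_deg(n_core=1.48, n_cladding=1.46)
print(round(theta, 1))  # roughly 80 degrees for these assumed values
```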
172:25 now let's discuss the types of fiber optic cables
172:27 there are two types of fiber optic cables single mode fiber cable and multimode fiber cable
172:34 this classification is based on the mode in which the light signals travel in the optical fiber
172:40 in a single mode fiber cable the light signals travel in a single path by being reflected at a consistent angle
172:50 the most common core diameter of a single mode fiber is 8.3 microns
172:55 this requires a laser as the light source since the light has to travel long distances
173:05 single mode fibers support data transmission up to a distance of 10 km
173:08 they are characterized by extremely low signal attenuation and high bandwidths
173:14 single mode fibers are primarily used for telecommunication systems
173:20 primarily used for tele communication systems in a multimode fiber cable the
173:23 systems in a multimode fiber cable the light signals travel in different paths
173:25 light signals travel in different paths by bouncing off the walls of the core at
173:27 by bouncing off the walls of the core at different
173:33 angles the most common core diameter of a multimode fiber is either 50 or 62.5
173:38 a multimode fiber is either 50 or 62.5 microns both laser and light emitting
173:41 microns both laser and light emitting diodes can be used as light sources
173:43 diodes can be used as light sources since the light travels only short
173:46 since the light travels only short distances Optical fibers with multimode
173:48 distances Optical fibers with multimode transmission experience higher
173:50 transmission experience higher attenuation than single mode
173:54 attenuation than single mode transmission multimode fibers with 50
173:56 transmission multimode fibers with 50 microns and 62.5 microns can support
174:00 microns and 62.5 microns can support data transmission up to a distance of
174:02 data transmission up to a distance of 500 M and 175 M
174:07 500 M and 175 M respectively the multimode fibers are
174:09 respectively the multimode fibers are primarily used within data
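The distance limits quoted for the three fiber types suggest a simple selection rule, sketched below in Python. This is an illustration built only from the figures in the lesson; the function name and labels are assumptions:

```python
# Hedged sketch using the distance limits quoted in the lesson:
# multimode 62.5 micron up to 175 m, multimode 50 micron up to 500 m,
# and single mode up to 10 km.

FIBER_LIMITS_M = [
    ("multimode 62.5 micron", 175),
    ("multimode 50 micron", 500),
    ("single mode 9 micron", 10_000),
]

def pick_fiber(distance_m):
    """Return the first listed fiber type whose limit covers the distance."""
    for name, limit_m in FIBER_LIMITS_M:
        if distance_m <= limit_m:
            return name
    return None  # beyond 10 km, outside the ranges discussed here

print(pick_fiber(300))    # multimode 50 micron
print(pick_fiber(2_000))  # single mode 9 micron
```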
174:12 primarily used within data centers fiber optic cables are
174:14 centers fiber optic cables are categorized based on the number of
174:16 categorized based on the number of individual Optical fibers present in the
174:18 individual Optical fibers present in the cable the use cost and size of the cable
174:22 cable the use cost and size of the cable determines the number of optical fibers
174:25 determines the number of optical fibers the three categories are as follows
174:28 the three categories are as follows Simplex cables duplex cables and
174:30 Simplex cables duplex cables and multifiber
174:33 multifiber cables a Simplex cable has only one
174:35 cables a Simplex cable has only one tight buffered optical fiber inside
174:38 tight buffered optical fiber inside it they are typically used for
174:41 it they are typically used for interconnections on the front side of a
174:42 interconnections on the front side of a patch panel a duplex cable has two tight
174:46 patch panel a duplex cable has two tight buffered Optical fibers inside it
174:49 buffered Optical fibers inside it in a duplex Cable ONE optical fiber is
174:52 in a duplex Cable ONE optical fiber is used for transmission and the other is
174:54 used for transmission and the other is used for reception for a
174:56 used for reception for a connection they are typically used as
174:59 connection they are typically used as fiber optic Lan backbone
175:02 fiber optic Lan backbone cables a multifiber cable has more than
175:05 cables a multifiber cable has more than two tight buffered Optical fibers inside
175:07 two tight buffered Optical fibers inside it they are typically used for
175:09 it they are typically used for interconnections on the back side of a
175:11 interconnections on the back side of a patch
175:13 now that we have discussed fiber optic cables we will discuss the connectivity of the fiber optic cables to the fiber channel devices
175:22 fiber optic cables connect to the fiber channel devices using small form factor pluggable or SFP transceivers
175:31 a transceiver is both a transmitter and a receiver in one module
175:37 an SFP transceiver is used to interface between a fiber channel device and a fiber optic cable
175:46 an SFP transceiver receives data from the fiber channel device in the form of electrical signals
175:50 and converts the data into light pulses which are then transmitted across the fiber optic cable
175:57 the SFP transceiver at the other end of the fiber optic cable receives these light pulses
176:01 and converts them back to electrical signals which are then used by the other fiber channel device
176:08 there are two types of SFP transceivers shortwave SFP transceivers and longwave SFP transceivers
176:14 shortwave SFP transceivers have shortwave lasers that transmit data over a short distance
176:23 they are used to transmit data through multimode fiber cables
176:26 longwave SFP transceivers have longwave lasers that transmit data over a long distance
176:35 they are used to transmit data through single mode and multimode fiber cables
176:42 now let's look at connectors
176:43 fiber optic connectors connect two fiber optic cables or a fiber optic cable and a fiber channel device
176:50 the most popular fiber optic connectors are the Lucent connector and the subscriber connector
176:55 Lucent connector or LC was developed by Lucent Technologies and hence was named the Lucent connector
177:01 LC is an RJ45-type male connector in duplex configuration and it is the popular small form factor connector
177:13 LC connectors are also available either in simplex or duplex configuration
177:19 subscriber connector or SC was developed by a company called Nippon Telegraph and Telephone
177:25 SC is a snap-in male connector that is available either in simplex or duplex configuration
177:32 SC simplex connectors are color-coded beige for multimode fibers and blue for single mode fibers
177:41 and blue for single mode fibers the next thing we will talk about
177:43 fibers the next thing we will talk about is fiber channel Hub a hub is a device
177:47 is fiber channel Hub a hub is a device that has ports for interconnecting other
177:49 that has ports for interconnecting other devices all the devices attached to the
177:52 devices all the devices attached to the hub form a circular path called a loop
177:55 hub form a circular path called a loop and they communicate with each
177:57 and they communicate with each other the bandwidth in a hub is shared
178:00 other the bandwidth in a hub is shared among all the connected
178:02 among all the connected devices fiber channel hubs were
178:04 devices fiber channel hubs were developed to solve problems associated
178:06 developed to solve problems associated with the loops created by connecting the
178:08 with the loops created by connecting the transmit link to the receive link
178:11 transmit link to the receive link between multiple devices for example if
178:14 between multiple devices for example if a new device had to be added to the loop
178:16 a new device had to be added to the loop the entire Loop must be brought to
178:18 the entire Loop must be brought to down these problems were resolved by the
178:21 down these problems were resolved by the star configuration offered by The Hub
178:24 star configuration offered by The Hub the downside of hubs is that by
178:26 the downside of hubs is that by cascading hubs a sand cannot be built
178:29 cascading hubs a sand cannot be built with more than 127 nodes now let's talk
178:33 with more than 127 nodes now let's talk about a fiber channel switch a fiber
178:35 about a fiber channel switch a fiber channel switch is a device that provides
178:38 channel switch is a device that provides Central connection points for servers
178:40 Central connection points for servers and fiber channel devices to communicate
178:42 and fiber channel devices to communicate with each
178:44 with each other this switch temporarily sets up
178:47 other this switch temporarily sets up logical connections between the fiber
178:48 logical connections between the fiber channel devices for routing data through
178:51 channel devices for routing data through the connection
178:52 the connection points the switch learns which devices
178:55 points the switch learns which devices are connected to which ports and uses
178:57 are connected to which ports and uses that information to forward traffic to
178:59 that information to forward traffic to the correct
179:01 the correct device a fiber channel switch has at
179:03 device a fiber channel switch has at least eight
179:04 least eight ports each Port of the switch has a
179:07 ports each Port of the switch has a dedicated
179:09 dedicated bandwidth a storage area network built
179:11 bandwidth a storage area network built with at least one fiber channel switch
179:13 with at least one fiber channel switch is called a
179:15 is called a fabric a fiber channel switch also
179:17 fabric a fiber channel switch also offers certain services such as
179:19 offers certain services such as Management Service name service and so
179:22 Management Service name service and so on to the devices and servers attached
179:25 on to the devices and servers attached to
179:26 to it these services are called fabric
179:29 it these services are called fabric services and they simplify the
179:31 services and they simplify the management of the devices that are
179:32 management of the devices that are interconnected on the
179:38 fabric a fiber channel switch can also be connected to another fiber channel
179:40 be connected to another fiber channel switch so that the existing fabric can
179:42 switch so that the existing fabric can be a large-scale sand the cable that
179:45 be a large-scale sand the cable that connects two fiber channel switches is
179:47 connects two fiber channel switches is called called an inter switch link or
179:50 called called an inter switch link or ISL there are three types of fiber
179:52 ISL there are three types of fiber channel switches entry-level switches
179:55 channel switches entry-level switches fabric switches and director
179:58 fabric switches and director switches entry-level switches are
180:00 switches entry-level switches are lowcost switches with limited
180:02 lowcost switches with limited scalability they are typically used as
180:05 scalability they are typically used as replacements for
180:06 replacements for hubs fabric switches are expensive
180:08 hubs fabric switches are expensive switches that are scalable they are used
180:11 switches that are scalable they are used to interconnect other switches to form a
180:13 to interconnect other switches to form a large fabric director switches are
180:16 large fabric director switches are highly expensive switches that form the
180:18 highly expensive switches that form the core of a large storage area
180:21 core of a large storage area network they are typically blade based
180:24 network they are typically blade based with each blade having up to 64 ports so
180:27 with each blade having up to 64 ports so director switches usually have a high
180:29 director switches usually have a high Port count for example brocade dcx can
180:33 Port count for example brocade dcx can handle up to 512
180:36 handle up to 512 ports director switches are known for
180:38 ports director switches are known for high performance High availability and
180:40 high performance High availability and high
180:42 high scalability they are known for high
180:44 scalability they are known for high availability because they are redundant
180:46 availability because they are redundant hot swappable components and redundant
180:49 hot swappable components and redundant power supplies and cooling systems which
180:51 power supplies and cooling systems which ensure that there is no single point of
180:55 ensure that there is no single point of failure in addition to that the director
180:58 failure in addition to that the director switches have dual controllers which
181:00 switches have dual controllers which provide active passive
181:02 provide active passive failover next we will look at Bridges
181:06 failover next we will look at Bridges bridges are the devices that connect the
181:08 bridges are the devices that connect the parallel scuzzi devices to the fiber
181:10 parallel scuzzi devices to the fiber channel storage area
181:12 channel storage area network for example they connect a
181:14 network for example they connect a scuzzy tape device to an FC sand
181:19 scuzzy tape device to an FC sand let's now talk about Storage storage is
181:22 let's now talk about Storage storage is one of the main components of the fiber
181:23 one of the main components of the fiber channel storage area network it can
181:26 channel storage area network it can either be fiber channel based or scuzzy
181:29 either be fiber channel based or scuzzy based let's look at the fiber channel
181:31 based let's look at the fiber channel based storage
181:33 based storage devices fiber channel disc drive in its
181:37 devices fiber channel disc drive in its simplest form a storage device can be a
181:39 simplest form a storage device can be a dis drive with a fiber channel interface
181:41 dis drive with a fiber channel interface that provides dual porting
181:49 capability jbod jbod stands for just a bunch of discs it is a collection of dis
181:52 bunch of discs it is a collection of dis drives connected together in a
181:55 drives connected together in a cabinet storage array a storage array is
181:59 cabinet storage array a storage array is a storage system that provides data
182:01 a storage system that provides data storage to computers connected to it
182:03 storage to computers connected to it through a shared
182:09 network a storage array contains a collection of dis drives and one or more
182:11 collection of dis drives and one or more controllers enclosed in a cabinet the
182:14 controllers enclosed in a cabinet the controllers in a storage array have the
182:16 controllers in a storage array have the intelligence to manage the dis Drive and
182:18 intelligence to manage the dis Drive and to provide redundancy of data through
182:20 to provide redundancy of data through raid let's talk about fiber channel
182:22 raid let's talk about fiber channel ports in FC
182:22 Let's talk about Fibre Channel ports in an FC SAN. A Fibre Channel port is an interface where the Fibre Channel cable is plugged in for the purpose of communication.
182:32 Ports are implemented on devices such as a Fibre Channel switch, a host bus adapter on a server, and a storage array. Fibre Channel ports have different port names based on their mode of operation, which depends on the kind of device connected to the other end of the port.
182:52 We will start with the N_Port. An N_Port is a node port available in end devices such as servers and storage arrays. It is used to connect to a port on the Fibre Channel switch in a fabric.
183:02 An F_Port is the port on a Fibre Channel switch that connects to the N_Port of an end device such as a server or a storage array.
183:10 An E_Port is an expansion port that resides on Fibre Channel switches. It is used to connect a Fibre Channel switch to another Fibre Channel switch. The Fibre Channel cable that connects two Fibre Channel switches is called an inter-switch link.
183:27 A G_Port is a generic port on a Fibre Channel switch. It becomes either an E_Port or an F_Port, depending on whether the device connected at the other end is a Fibre Channel switch or an end device.
183:42 A U_Port is a universal Fibre Channel port; all unidentified and unlisted ports are considered U_Ports.
183:54 ports and that brings us to the end of this lesson let's summarize what you
183:56 this lesson let's summarize what you have learned in this
183:57 have learned in this lesson in this lesson you learned about
184:00 lesson in this lesson you learned about the components of a fiber channel
184:02 the components of a fiber channel sand these components are servers host
184:05 sand these components are servers host bus adapters or HBA cables hubs switches
184:10 bus adapters or HBA cables hubs switches Bridges and
184:12 Bridges and storage when we talked about the host
184:14 storage when we talked about the host bus adapter we also looked at what a
184:16 bus adapter we also looked at what a worldwide name was
184:18 worldwide name was and then we looked at the two types of
184:20 and then we looked at the two types of worldwide name these are worldwide node
184:23 worldwide name these are worldwide node name or
184:24 name or wwnn and worldwide Port name or
184:41 wwpns FP is we also looked into the types of sfps and these are electrical
184:44 types of sfps and these are electrical SFP and Optical SFP
184:48 SFP and Optical SFP we covered the optical SFP in detail
184:50 we covered the optical SFP in detail when we talked about the connectivity of
184:52 when we talked about the connectivity of the fiber optic cables to the fiber
184:54 the fiber optic cables to the fiber channel
184:55 channel devices next we looked at the converged
184:58 devices next we looked at the converged network adapter which is a PCI expansion
185:01 network adapter which is a PCI expansion card that combines the functionality of
185:03 card that combines the functionality of both the host bus adapter and the
185:05 both the host bus adapter and the network interface
185:07 network interface card when talking about converged
185:09 card when talking about converged network adapter we looked at what data
185:11 network adapter we looked at what data center bridging or DCB is while talking
185:15 center bridging or DCB is while talking about data cables we took a detailed
185:17 about data cables we took a detailed look at the types of cables that are
185:18 look at the types of cables that are used in fiber channel sand these are
185:21 used in fiber channel sand these are copper cables and Fiber Optic
185:24 copper cables and Fiber Optic Cables we also looked at what fiber
185:26 Cables we also looked at what fiber optic connectors are and then we looked
185:29 optic connectors are and then we looked at the two most popular fiber optic
185:31 at the two most popular fiber optic connectors these are Lucent connector
185:34 connectors these are Lucent connector and subscriber
185:35 and subscriber connector when talking about the FC
185:38 connector when talking about the FC switch we also looked at what inter
185:40 switch we also looked at what inter switch link or ISL is lastly we looked
185:43 switch link or ISL is lastly we looked at what a fiber channel Port is and then
185:46 at what a fiber channel Port is and then we looked at the different port names
185:48 we looked at the different port names these are nport fport E port G port and
185:52 these are nport fport E port G port and U port in the next lesson you will learn
185:55 U port in the next lesson you will learn about the topologies of fiber channel
185:57 about the topologies of fiber channel storage area network thank you for
185:59 storage area network thank you for watching
186:02 watching [Music]
186:25 Hello and welcome to Unit 4, FC SAN Topologies. In this lesson you will learn about the topologies of the Fibre Channel storage area network.
186:31 We're going to start by looking at what a topology is, and then we will look at the types of Fibre Channel SAN topologies: point-to-point topology, arbitrated loop topology, and switched fabric topology.
186:52 We will also look at what a fabric is, and then at the types of switched fabric topologies: cascade topology, ring topology, mesh topology, and core-edge topology.
187:06 Then we will illustrate a simple Fibre Channel SAN using the switched fabric topology, and we will explain the terms high availability and scalability with reference to the Fibre Channel SAN. Lastly, we will illustrate a good Fibre Channel SAN using the switched fabric topology.
187:27 Now let's take a look at what a topology is. A topology is a logical layout of a computer network. It deals with the layout of the logical connections between the components of the network, such as computers, switches, storage devices, and other peripheral devices, irrespective of their actual physical locations.
187:54 Fibre Channel architecture provides three topologies: point-to-point, arbitrated loop, and switched fabric. Each topology is meant for a specific purpose; however, there is no strict rule that only one topology should be used to deliver a solution. These topologies can be joined together to provide a solution that serves a specific need.
188:17 that serves a specific need let's discuss these topologies one by
188:20 let's discuss these topologies one by one point-to-point topology is a direct
188:23 one point-to-point topology is a direct connection between two devices the
188:25 connection between two devices the transmit or TX Port of a device is
188:28 transmit or TX Port of a device is connected to the receive or RX Port of
188:30 connected to the receive or RX Port of the other device through a fiber channel
188:33 the other device through a fiber channel cable and vice versa as shown in the
188:36 cable and vice versa as shown in the diagram since there is always a
188:38 diagram since there is always a dedicated connection between the two
188:40 dedicated connection between the two devices the bandwidth of the
188:42 devices the bandwidth of the transmission media is fully available to
188:44 transmission media is fully available to these
188:46 these devices point to Point topology is
188:48 devices point to Point topology is typically used for directly connecting
188:50 typically used for directly connecting servers to storage arrays for example if
188:53 servers to storage arrays for example if a storage array has four fiber channel
188:55 a storage array has four fiber channel ports then four servers can be directly
188:58 ports then four servers can be directly connected to the storage
189:00 connected to the storage array the advantage of point-to-point
189:02 array the advantage of point-to-point topology is that it provides shared
189:04 topology is that it provides shared storage for multiple servers with
189:06 storage for multiple servers with minimum
189:12 configurations in arbitrated Loop topology the devices are connected to
189:12 In arbitrated loop topology, the devices are connected to each other to form a circular data path called a loop. The arbitrated loop topology allows up to 126 devices to be connected to each other in the form of a ring.
189:26 The ring is formed by connecting the transmit or TX port of one device to the receive or RX port of the next device through a Fibre Channel cable. This process is repeated until the receive port of the first device is connected to the transmit port of the last device in the ring, as shown in the diagram.
189:53 In this topology all the devices share the transmission media, and they communicate with each other on a time-sharing basis. Let's say two devices in an arbitrated loop want to communicate with each other: one of the devices takes ownership of the shared transmission media for the communication to take place, and during this period the other devices have to wait. There is an arbitration of control over the loop among the devices to share the bandwidth of the single shared transmission media, hence the name arbitrated loop topology.
190:28 Arbitrated loop topology is typically used to connect strings of hard drives to a server, as shown in the diagram.
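The time-sharing behavior above can be sketched in a few lines. This is a toy model, not real FC-AL signaling; the device names and the first-come-first-served policy are made up for illustration:

```python
# Toy illustration of arbitrated-loop time sharing: exactly one device
# owns the shared media at a time, and the loop has a hard device limit.

MAX_LOOP_DEVICES = 126  # FC-AL device limit mentioned in the lesson

def grant_loop(contenders):
    """Grant the shared loop to one contender at a time; return the order."""
    if len(contenders) > MAX_LOOP_DEVICES:
        raise ValueError("arbitrated loop supports at most 126 devices")
    order = []
    for device in contenders:
        # Simplistic arbitration: first come, first served. While one
        # device owns the loop, every other device waits.
        order.append(device)
    return order

print(grant_loop(["server", "drive_1", "drive_2"]))
```

The point of the model is the contrast with switched fabric below: here the loop's bandwidth is shared serially, so every extra device reduces the time each device gets.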
190:36 A fabric is a storage area network built with at least one Fibre Channel switch. In a switched fabric topology, each device is connected to a Fibre Channel switch, and the switches in turn interconnect all the devices in the network through inter-switch links.
190:55 A device in a switched fabric topology can communicate with other devices simultaneously at full bandwidth, since the switches can efficiently route traffic between them.
191:05 The advantage of switched fabric topology is that it is scalable: devices can be added to or removed from the network without disrupting other devices.
191:16 There are different kinds of fabric topologies, and we can use a combination of these fabrics to build a highly available and scalable network. The common fabric topologies are as follows: cascade topology, ring topology, mesh topology, and core-edge topology.
191:34 Cascade topology is a series of switches connected to one another through inter-switch links to form a chained network; the switches at the ends of the chain, however, are not connected to each other.
191:47 Cascade topology is unreliable because each switch is a single point of failure, so if a switch fails the network is disrupted.
191:57 It is typically used in circumstances where most of the traffic is restricted to individual switches and the management traffic can be routed through the inter-switch link.
192:08 Ring topology is similar to cascade topology, but with the switches at the ends of the chain connected to each other to form a circular network.
192:18 Ring topology has better reliability than cascade topology because the network is not disrupted even if one of the switches or ISLs in the ring fails.
192:28 However, ring topology is not scalable without disrupting the fabric, because if we want to connect a new switch to the fabric, at least one ISL must be disconnected.
192:40 Mesh topology is an interlaced fabric of interconnected switches. There are two types of mesh topology: full mesh topology and partial mesh topology.
192:51 In full mesh topology, every switch is directly connected to every other switch in the fabric. Let's explain this with the help of an example: if we have eight switches in our fabric, then each switch in the mesh will connect to the seven other switches, reducing its available ports by seven.
193:11 Full mesh topology is resilient because it provides multiple redundant inter-switch links; if an inter-switch link fails, traffic can still be routed through the other inter-switch links.
193:24 Full mesh topology is not scalable without disrupting the fabric, because if we want to connect a new switch to the fabric, the edge devices must be disconnected.
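The full-mesh arithmetic from the eight-switch example generalizes neatly. A minimal sketch (the 24-port switch size is a hypothetical value for illustration):

```python
# Full-mesh arithmetic: with n switches, each switch dedicates one port
# per peer, and the fabric as a whole carries n*(n-1)/2 inter-switch links.

def full_mesh_isls(switches: int) -> int:
    """Total ISLs when every switch connects directly to every other."""
    return switches * (switches - 1) // 2

def free_ports_per_switch(ports_per_switch: int, switches: int) -> int:
    """Ports left for servers/storage after dedicating one port per ISL peer."""
    return ports_per_switch - (switches - 1)

# The lesson's example: eight switches, each spending 7 ports on ISLs.
print(full_mesh_isls(8))             # 28 ISLs in the fabric
print(free_ports_per_switch(24, 8))  # 17 free ports on a 24-port switch
```

The quadratic growth in ISLs is exactly why full mesh trades ports for resiliency, and why partial mesh, discussed next, frees some of those ports back up.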
193:38 Partial mesh topology is similar to full mesh topology, but has some switches that are not directly connected to each other. As you can see in the diagram, two switches are not directly connected to each other, as there is no inter-switch link or ISL between them. The ISL between two switches is removed if there is no traffic flow between them.
194:02 Like full mesh topology, partial mesh topology is also resilient. Partial mesh topology is also not scalable without disrupting the fabric, but it has more free ports than full mesh topology.
194:13 In core-edge topology, two or more switches form the center of the network and are interconnected with the other switches. The switches that form the center of the network are called core switches; the switches connected to the core switches are called edge switches.
194:31 In a typical core-edge topology, director switches are used as the core switches. It is common practice to have the storage devices directly connected to the core switches and the hosts or servers connected to the edge switches.
194:49 Core-edge topology is scalable without disrupting the network; it is also reliable and offers high performance.
194:56 Now let's construct a simple SAN using the switched fabric topology. For this we will need a server with a host bus adapter, a Fibre Channel switch, Fibre Channel cables, and a storage array. In our diagram, the server is a diskless server with an HBA card.
195:14 We will connect the server to the Fibre Channel switch by plugging one end of a Fibre Channel cable into the host bus adapter of the server and its other end into one of the ports of the Fibre Channel switch.
195:27 In the next step, we will plug one end of another Fibre Channel cable into one of the free ports of the FC switch and connect its other end into a port of the storage array.
195:38 We have created a simple FC SAN using the switched fabric topology. It is a simple SAN, but definitely not a good one, because a good SAN should be highly available and scalable.
195:56 By high availability we mean the SAN should be able to survive any kind of failure. In our simple SAN, if any one of the components fails, it disrupts the entire network.
196:06 High availability can be achieved by configuring redundant components, so if one component fails, the redundant component comes to the rescue. We will add redundant components to our simple SAN and make it highly available.
196:22 By scalability we mean that more servers, more storage, and more switches can be added to the SAN without affecting performance.
196:31 Now let's construct a good SAN using the switched fabric topology. For this we will need a server with two host bus adapters, two Fibre Channel switches, Fibre Channel cables, and a dual-controller storage array.
196:44 As you can see in our diagram, the diskless server has two host bus adapters; each host bus adapter is connected to a Fibre Channel switch, which in turn is connected to the same dual-controller storage array.
196:59 If you notice, there is a dual path between the server and the storage array. If one path fails, the redundant path provides an alternate path between the server and the storage array. For example, if host bus adapter X fails, the server can still access the storage via host bus adapter Y.
197:20 Even the cables and switches can survive failures. For example, if one of the Fibre Channel cables connecting switch X and the dual-controller storage array breaks, switch X can still access the storage through the redundant Fibre Channel cable. In a worst-case scenario, if Fibre Channel switch X itself fails, the server can still access the storage through Fibre Channel switch Y.
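The dual-path reasoning above can be checked mechanically. A toy model with hypothetical component names (this is not multipathing software, just the availability logic of the diagram):

```python
# Toy model of the "good SAN": the server reaches storage as long as at
# least one complete HBA -> switch -> cable path has no failed component.

# Each path lists the components it depends on end to end.
PATHS = [
    ("hba_x", "switch_x", "cable_x"),
    ("hba_y", "switch_y", "cable_y"),
]

def storage_reachable(failed: set) -> bool:
    """True if any path survives with none of its components failed."""
    return any(all(part not in failed for part in path) for path in PATHS)

print(storage_reachable({"hba_x"}))              # True: path Y survives
print(storage_reachable({"switch_x"}))           # True: path Y survives
print(storage_reachable({"hba_x", "switch_y"}))  # False: both paths broken
```

Note the last case: redundancy protects against any single failure, but two failures that land on different paths can still take the storage offline, which is why real designs often add further redundancy (dual cables per switch, as in the lesson's diagram).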
197:48 That brings us to the end of this lesson. Let's summarize what you have learned.
197:53 In this lesson you learned about the topologies of the Fibre Channel storage area network. We started by looking at what a topology is, and then we looked at the types of Fibre Channel SAN topologies: point-to-point, arbitrated loop, and switched fabric topology.
198:12 We also looked at what a fabric is, and then at the types of switched fabric topologies: cascade topology, ring topology, mesh topology, and core-edge topology.
198:26 Next, we illustrated a simple Fibre Channel SAN using the switched fabric topology, and then we explained the terms high availability and scalability with reference to the Fibre Channel SAN. Lastly, we illustrated a good Fibre Channel SAN using the switched fabric topology.
198:45 In the next lesson you will learn about the different features and capabilities of Fibre Channel switches. Thank you for watching.
199:18 Hello and welcome to Unit 5, Characteristics of FC Switches. In this lesson you will learn about the different features and capabilities of Fibre Channel switches.
199:29 We're going to start by looking at what a domain ID is in the context of a Fibre Channel switch, and then we will look at the fabric services provided by Fibre Channel switches. These fabric services are the name service, login server, fabric controller, management server, and time server.
199:54 After talking about the fabric services, we will talk about the fabric login process and the port login process.
200:02 Next, we will look at what zoning is, and then at what a zone set is. We will also look at the types of zoning; these are worldwide name zoning and port zoning.
200:21 We will look at the advantages of zoning, and then at the implementation of zoning. We will also look at what a zone alias is, and then at the best practices in zoning.
200:31 Next, we will look at what a virtual fabric is, and then at its features. We will also look at what oversubscription is, and lastly at what the oversubscription ratio is.
200:50 now let's look at what a domain ID is each switch in a fabric is identified
200:53 is each switch in a fabric is identified by a unique ID called a domain ID domain
200:57 by a unique ID called a domain ID domain IDs are assigned to the switches by a
200:59 IDs are assigned to the switches by a switch in the fabric called the
201:01 switch in the fabric called the principal
201:02 principal switch fiber channel switches in the
201:05 switch fiber channel switches in the fabric provide a common set of services
201:07 fabric provide a common set of services to devices in the fabric these services
201:09 to devices in the fabric these services are called fabric
201:12 are called fabric Services the fabric Services provide
201:15 Services the fabric Services provide information to the devices connected to
201:17 information to the devices connected to the
201:18 the fabric we will discuss these Services
201:20 fabric we will discuss these Services one by
201:21 one by one let's start with name server the
201:21 Let's start with the name server. The name server is a central repository of information about the fabric. Whenever a device is added to the network, it registers itself with the name server. There is only one name server for the entire fabric; the name server information is shared with the name servers of all the switches, making it a distributed name server.
201:48 Just like nodes have network addresses, the fabric services also have network addresses; but these addresses do not change, and they are known as well-known addresses.
202:05 login server a device that needs to connect to
202:07 server a device that needs to connect to the fabric must send a login request to
202:10 the fabric must send a login request to the well-known address 0x FF FF Fe of
202:15 the well-known address 0x FF FF Fe of the login server
202:17 the login server server the login server processes the
202:21 server the login server processes the request now we will talk about the
202:23 request now we will talk about the fabric
202:24 fabric controller the well-known address of the
202:27 controller the well-known address of the fabric controller is 0x FF FF
202:32 fabric controller is 0x FF FF FD this address sends a state change
202:35 FD this address sends a state change notification to all the registered
202:37 notification to all the registered devices in the
202:38 devices in the fabric a state change notification is
202:41 fabric a state change notification is used when there is a change in the
202:43 used when there is a change in the fabric topology
202:49 a device registers for a state change notification by sending a state change
202:52 notification by sending a state change registration frame or scr to the
202:55 registration frame or scr to the well-known address of the fabric
202:58 well-known address of the fabric controller the fabric controller lets
203:00 controller the fabric controller lets the device know of any change by sending
203:02 the device know of any change by sending a notification called a registered state
203:05 a notification called a registered state change notification frame or rscn frame
203:09 change notification frame or rscn frame to the
203:11 to the device now we will talk about the
203:13 device now we will talk about the management
203:14 management server the well-known address of the
203:16 server the well-known address of the management server is 0 x FF
203:21 management server is 0 x FF FFA the fabric can be managed by using
203:24 FFA the fabric can be managed by using the management server from any switch in
203:26 the management server from any switch in the
203:27 the fabric now we will talk about the time
203:30 fabric now we will talk about the time server the well-known address of the
203:32 server the well-known address of the time server is 0x FF FF
203:37 time server is 0x FF FF FB the time server maintains a uniform
203:40 FB the time server maintains a uniform system time across all the devices in
203:42 system time across all the devices in the
203:44 the fabric now we will talk about about the
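The well-known addresses covered above can be collected into a small lookup table. This is an illustrative Python sketch, not part of the course material; the name server's own well-known address (0xFFFFFC) is standard but is not stated in the narration above, so treat it as an added detail.

```python
# Well-known Fibre Channel addresses of the fabric services discussed above.
# These addresses are fixed and never change, unlike the 24-bit dynamic
# addresses that are assigned to N_Ports at fabric login.
WELL_KNOWN_ADDRESSES = {
    0xFFFFFE: "login server",       # processes fabric login (FLOGI) requests
    0xFFFFFD: "fabric controller",  # sends RSCNs to registered devices
    0xFFFFFC: "name server",        # directory of devices in the fabric
    0xFFFFFB: "time server",        # uniform system time across the fabric
    0xFFFFFA: "management server",  # fabric management from any switch
}

def service_for(address: int) -> str:
    """Return the fabric service listening at a well-known address."""
    return WELL_KNOWN_ADDRESSES.get(address, "unknown")
```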
203:46 fabric now we will talk about about the fabric login
203:48 fabric login process though a server or a storage
203:50 process though a server or a storage device is physically connected to a
203:52 device is physically connected to a fabric The Logical connection is
203:54 fabric The Logical connection is established only when it executes the
203:56 established only when it executes the fabric login or F loggy
204:00 fabric login or F loggy process the F logy process is done to
204:04 process the F logy process is done to discover the existence of a
204:06 discover the existence of a fabric if a fabric is found the fabric
204:09 fabric if a fabric is found the fabric login sets up a session between an nport
204:12 login sets up a session between an nport and the participating
204:14 and the participating fport once the session is is established
204:17 fport once the session is is established the nport will send an F loggy frame to
204:19 the nport will send an F loggy frame to the well-known address of the login
204:21 the well-known address of the login server with its details such as node
204:24 server with its details such as node name and Port name and service
204:32 parameters when the f logy is successful the fport assigns a 24-bit dynamic
204:34 the fport assigns a 24-bit dynamic address to the end port of the device in
204:38 address to the end port of the device in addition to that buffert buffer credits
204:40 addition to that buffert buffer credits are also
204:42 are also negotiated now we will talk about the
204:44 negotiated now we will talk about the port login process
204:47 port login process the next step after the F loggy process
204:49 the next step after the F loggy process is the port login or P logy
204:53 is the port login or P logy process Port login is necessary for the
204:55 process Port login is necessary for the data exchange to take place between the
204:57 data exchange to take place between the end
205:00 end devices it is used to establish a
205:02 devices it is used to establish a session between the two end devices or
205:04 session between the two end devices or end
205:06 end ports during the port login process the
205:09 ports during the port login process the two end ports negotiate service
205:11 two end ports negotiate service parameters such as endtoend credit and
205:14 parameters such as endtoend credit and the information about a device is
205:16 the information about a device is entered into the name
205:23 server now let's talk about zoning the drawback of server operating systems of
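The two-step login sequence can be sketched as a toy model: FLOGI gives an N_Port its fabric address, then PLOGI opens a session between two logged-in ports. The `Fabric` class, its methods, the credit value, and the sequential address scheme are all invented for illustration; a real fabric derives the 24-bit address from domain, area, and port fields.

```python
# Toy model of the login sequence described above. FLOGI registers a port and
# returns a dynamic address; PLOGI then establishes a session between two
# ports and negotiates service parameters such as end-to-end credit.
LOGIN_SERVER = 0xFFFFFE  # well-known address the FLOGI frame is sent to

class Fabric:
    def __init__(self):
        self.name_server = {}   # address -> (node_name, port_name)
        self._next = 0x010100   # hypothetical starting address

    def flogi(self, node_name: str, port_name: str) -> int:
        """Fabric login: discover the fabric and obtain a dynamic address."""
        address = self._next
        self._next += 1
        self.name_server[address] = (node_name, port_name)
        return address

    def plogi(self, initiator: int, target: int) -> dict:
        """Port login: both ports must have completed FLOGI first."""
        if initiator in self.name_server and target in self.name_server:
            return {"session": (initiator, target), "end_to_end_credit": 3}
        raise ValueError("both ports must FLOGI before PLOGI")

fabric = Fabric()
server = fabric.flogi("server-node", "server-port")
storage = fabric.flogi("array-node", "array-port")
session = fabric.plogi(server, storage)
```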
205:23 Now let's talk about zoning. The drawback of a SAN is that the operating systems of the attached servers are not aware of each other, and this can result in uncoordinated actions on the SAN-attached storage.
205:35 Techniques were developed to give servers restricted access to the storage devices by subdividing the SAN into private networks.
205:44 Each such private network provides coordinated sharing of storage within its network.
205:51 These techniques have different names depending on where they are implemented: if implemented in a switch, it is called zoning; if implemented in a host bus adapter, it is called LUN masking; if implemented in a RAID array, it is called LUN mapping.
206:09 All the techniques work on the principle of restricting access by blocking a range of addresses.
206:18 So zoning is a technique of subdividing a SAN into private networks called zones. A zone is essentially a collection of end devices, or end ports, that are allowed to communicate with each other.
206:35 It is recommended for a zone to have a single server and one or more storage ports. This approach is called single initiator zoning.
206:47 The initiator is a device that first initiates the communication or requests an action; for example, a server is an initiator.
206:54 A target is a device that responds to the requests of an initiator by providing the required data; for example, storage is a target.
207:06 Now we will talk about the restrictions imposed by zoning.
207:10 When zoning is enabled, a device in a zone can only communicate with other devices within the same zone.
207:18 This also means that a device in the zone cannot communicate with devices outside the zone.
207:25 Devices not included in any zone are not accessible to any other devices in the fabric.
207:32 When no zoning is enabled, the fabric is said to be in a default zone.
207:39 The default zoning access can either be open or closed.
207:44 When the default zoning is open, all devices can see all the other devices; when it is closed, all devices are isolated.
208:00 Now we will talk about zone sets. A zone set is a collection of zones.
208:02 There can be hundreds of zones in a fabric.
208:05 A fabric can have a single zone set or multiple zone sets, but at any given time only one zone set can be active for a fabric.
208:17 When any changes are made to an active zone set, such as adding or removing a zone, it must be reapplied to the fabric for the changes to take effect.
208:27 There are two types of zoning: worldwide name zoning and port zoning.
208:33 Worldwide name zoning allows connectivity between the attached devices based on their worldwide names.
208:41 The benefit of this zoning is that an attached node can be moved anywhere in the fabric and it will still be in the same zone.
208:50 Port zoning allows connectivity based on the port numbers of the switches.
208:56 In this zoning, all devices connected to the switch ports that are members of a zone can communicate with each other.
209:05 The benefit of port zoning is that ports can easily be added to the zone, irrespective of whether a device is connected to a port or not.
209:14 The advantage of zoning is that it not only prevents unauthorized access to storage but also prevents unwanted host-to-host communications and fabric-wide registered state change notifications.
209:32 The reason to prevent fabric-wide registered state change notifications is that they have the potential to disrupt storage traffic.
209:39 Since we have talked about the two types of zoning, we will now cover the implementation of zoning.
209:42 Worldwide name zoning and port zoning can each be implemented either as soft zoning or hard zoning.
209:51 This is like asking whether we are going to take a soft approach or a hard approach in implementing worldwide name zoning and port zoning.
210:01 Hard zoning and soft zoning should not be confused with worldwide name zoning and port zoning.
210:06 Soft zoning is implemented by the name server: when an end device queries the name server about other devices in the fabric, the name server provides only the list of devices that are in the same zone.
210:21 As a result, the end device receives a restricted view of the fabric.
210:26 However, the downside of this zoning is that if an end device knows the address of another end device, it can still communicate with it, so it is not considered secure.
210:38 Hard zoning is implemented by switch hardware. It checks the frames that cross the fabric and allows only those frames that belong to the zone.
210:48 It is used in conjunction with soft zoning.
210:55 zoning now we will talk about Zone Alias a Zone Alias is the custom name
210:58 Alias a Zone Alias is the custom name that is assigned to a port address or
211:00 that is assigned to a port address or wwnn address in a
211:03 wwnn address in a Zone this is because the port addresses
211:06 Zone this is because the port addresses and wwnn addresses are difficult to read
211:09 and wwnn addresses are difficult to read and remember and a humanfriendly name or
211:11 and remember and a humanfriendly name or Alias is needed that simplifies the Zone
211:14 Alias is needed that simplifies the Zone Administration once the Zone Alias is
211:17 Administration once the Zone Alias is assigned the zoning operation can be
211:19 assigned the zoning operation can be performed using the
211:21 performed using the Alias now we will talk about best
211:24 Alias now we will talk about best practices in
211:25 practices in zoning it is recommended to use single
211:28 zoning it is recommended to use single initiator single Target or single
211:31 initiator single Target or single initiator multiple Target Zone sets it
211:34 initiator multiple Target Zone sets it is recommended to create a Zone with
211:36 is recommended to create a Zone with only a few
211:38 only a few members it is recommended to Define
211:41 members it is recommended to Define zones using worldwide Port
211:43 zones using worldwide Port names the default Zone set should be set
211:46 names the default Zone set should be set to no access because when Zone setting
211:49 to no access because when Zone setting is disabled the device will be
211:52 is disabled the device will be isolated zoning changes affect the
211:54 isolated zoning changes affect the entire fabric several minutes should be
211:56 entire fabric several minutes should be allowed for the changes to propagate
211:58 allowed for the changes to propagate across the entire fabric if the fabric
212:01 across the entire fabric if the fabric is
212:03 is large now let's talk about virtual
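The zone-set rules above — many zones per fabric, but only one active zone set at a time, with edits taking effect only when the set is reapplied — can be modeled in a few lines. The class and method names are invented for illustration, and the model assumes a default of no access when nothing is active, as the best practices recommend.

```python
# Small model of zones and zone sets. Activating a zone set takes a snapshot,
# so later edits to the definition do not take effect until it is reapplied,
# mirroring the behavior described in the lesson.
class ZoneSetFabric:
    def __init__(self):
        self.zone_sets = {}   # zone set name -> {zone name: [members]}
        self.active = None    # snapshot of the currently activated zone set

    def define(self, zone_set: str, zone: str, members: list) -> None:
        """Add or replace a zone inside a zone set definition."""
        self.zone_sets.setdefault(zone_set, {})[zone] = list(members)

    def activate(self, zone_set: str) -> None:
        """Reapply a zone set to the fabric; replaces any active set."""
        self.active = {z: list(m) for z, m in self.zone_sets[zone_set].items()}

    def can_communicate(self, a: str, b: str) -> bool:
        if self.active is None:
            return False      # default zone set kept at "no access"
        return any(a in m and b in m for m in self.active.values())

fab = ZoneSetFabric()
fab.define("prod", "zone1", ["server1", "array1"])  # single initiator zoning
fab.activate("prod")
fab.define("prod", "zone2", ["server2", "array1"])  # edit not yet reapplied
```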
212:05 large now let's talk about virtual fabric virtual fabric is known as
212:08 fabric virtual fabric is known as virtual San or
212:10 virtual San or vsan the vsan technology allows a large
212:14 vsan the vsan technology allows a large sand to be logically partitioned into
212:16 sand to be logically partitioned into small Sands called virtual Sands or
212:18 small Sands called virtual Sands or virtual
212:20 virtual Fabrics a virtual fabric is created by
212:23 Fabrics a virtual fabric is created by partitioning a physical switch into
212:25 partitioning a physical switch into multiple logical switches each virtual
212:28 multiple logical switches each virtual fabric is just like a separate physical
212:30 fabric is just like a separate physical fabric containing its own dedicated
212:32 fabric containing its own dedicated fabric Services data paths and
212:34 fabric Services data paths and management
212:36 management capabilities hosts and devices that
212:38 capabilities hosts and devices that belong to a virtual fabric communicate
212:41 belong to a virtual fabric communicate with each other using a virtual topology
212:43 with each other using a virtual topology implemented over a physical sand
212:47 implemented over a physical sand the virtual Fabrics cannot communicate
212:49 the virtual Fabrics cannot communicate with each other because they are
212:50 with each other because they are separate
212:52 separate entities as you can see in the diagram
212:55 entities as you can see in the diagram the fiber channel switch has 16 ports we
212:58 the fiber channel switch has 16 ports we will create two virtual Fabrics by
213:00 will create two virtual Fabrics by partitioning this switch into two
213:01 partitioning this switch into two logical switches the first six ports are
213:04 logical switches the first six ports are configured as vsan 1 and the remaining
213:07 configured as vsan 1 and the remaining 10 ports are configured as vsan
213:10 10 ports are configured as vsan 2 these vends do not communicate with
213:13 2 these vends do not communicate with each other a port in the physical fabric
213:16 each other a port in the physical fabric switch cannot belong to two VSS Sands
213:18 switch cannot belong to two VSS Sands for example Port two only belongs to VSS
213:21 for example Port two only belongs to VSS sand
213:23 sand 1 now let's talk about the features of
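The 16-port partitioning example can be sketched as a simple port-to-VSAN map. The function names are hypothetical; the two-VSAN split mirrors the diagram, with each port belonging to exactly one VSAN and traffic allowed only between ports of the same VSAN.

```python
# Sketch of the partitioning example above: a 16-port switch split into
# VSAN 1 (first six ports) and VSAN 2 (remaining ten ports). Virtual fabrics
# are isolated, so only ports in the same VSAN may communicate.
def build_vsans(port_count: int, vsan1_ports: int) -> dict:
    """Map each port number to a VSAN, mirroring the 6/10 split above."""
    return {p: 1 if p <= vsan1_ports else 2 for p in range(1, port_count + 1)}

def same_vsan(vsans: dict, port_a: int, port_b: int) -> bool:
    """Traffic is contained within a VSAN's boundaries."""
    return vsans[port_a] == vsans[port_b]

vsans = build_vsans(16, 6)   # ports 1-6 -> VSAN 1, ports 7-16 -> VSAN 2
```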
213:23 Now let's talk about the features of a virtual fabric.
213:26 Shared topology: all virtual fabrics implemented on a physical SAN share the same topology.
213:34 Isolation: virtual fabrics cannot communicate with each other. The traffic of a virtual fabric is contained within its boundaries.
213:47 Scalability: the ability to create multiple virtual SANs from a single physical SAN increases the scalability of the SAN, since it can be suited for different applications.
214:00 Redundancy: having more than one virtual SAN provides redundancy, because if one virtual SAN fails, another virtual SAN can be configured as a backup path between the server and the switch.
214:12 Ease of configuration: it is very easy to move a device from one virtual SAN to another by configuring it at the port level instead of physically moving it to another location.
214:25 Independence: a change in the fabric configuration of one virtual fabric doesn't affect the others.
214:36 other now let's look at over subscription when several devices
214:38 subscription when several devices contend for the same link there may not
214:40 contend for the same link there may not be sufficient bandwidth available over
214:42 be sufficient bandwidth available over the link to support all the devices and
214:45 the link to support all the devices and when this happens we say the link is
214:47 when this happens we say the link is over
214:52 subscribed having over subscription doesn't mean that it will result in
214:53 doesn't mean that it will result in congestion because not all the devices
214:56 congestion because not all the devices will operate at the maximum throughput
214:58 will operate at the maximum throughput at the same
215:00 at the same time if the combined throughput of all
215:02 time if the combined throughput of all the devices does not exceed the
215:04 the devices does not exceed the bandwidth of the link only then is it
215:07 bandwidth of the link only then is it safe to oversubscribe the
215:10 safe to oversubscribe the link there are some cases for which over
215:13 link there are some cases for which over subscription is not suitable
215:15 subscription is not suitable for example traffic generated by video
215:18 for example traffic generated by video streaming content that is continuous and
215:20 streaming content that is continuous and constant it is not suitable for over
215:23 constant it is not suitable for over subscription because it needs sufficient
215:29 bandwidth inter switch links are usually over
215:31 over subscribed in order to ensure that
215:34 subscribed in order to ensure that performance is not affected over the ISL
215:36 performance is not affected over the ISL the number of devices that can contend
215:38 the number of devices that can contend for an isl's bandwidth is calculated as
215:41 for an isl's bandwidth is calculated as a ratio number called an over
215:43 a ratio number called an over subscription ratio
215:46 subscription ratio over subscription ratio is the ratio
215:48 over subscription ratio is the ratio number of non- ISL ports to the number
215:51 number of non- ISL ports to the number of ISL ports on the
215:53 of ISL ports on the switch let's say our Edge switch has 16
215:56 switch let's say our Edge switch has 16 ports all operating at 4 gbits per
215:59 ports all operating at 4 gbits per second and it has two isls that are each
216:02 second and it has two isls that are each connected to two core
216:05 connected to two core switches Since There are 16 ports and
216:08 switches Since There are 16 ports and two are used for ISL the remaining free
216:10 two are used for ISL the remaining free ports or non ISL ports are
216:13 ports or non ISL ports are 14 hence the over subscription ratio is
216:16 14 hence the over subscription ratio is calculated as
216:18 calculated as follows over subscription ratio equals
216:21 follows over subscription ratio equals number of non ISL ports to number of ISL
216:26 number of non ISL ports to number of ISL ports that is equal to 14
216:30 ports that is equal to 14 to2 it reduces to
216:33 to2 it reduces to 7:1 so the over subscription ratio is
216:37 7:1 so the over subscription ratio is 7:1 over subscription is just a
216:39 7:1 over subscription is just a possibility indicating that the seven
216:42 possibility indicating that the seven devices May contend for an ISL and it
216:45 devices May contend for an ISL and it doesn't mean that all seven devices are
216:47 doesn't mean that all seven devices are really contending for an
216:49 really contending for an ISL and that brings us to the end of
216:52 ISL and that brings us to the end of this lesson let's summarize what you
216:54 this lesson let's summarize what you have learned in this lesson in this
216:57 have learned in this lesson in this lesson you learned about the different
216:58 lesson you learned about the different features and capabilities of fiber
217:00 features and capabilities of fiber channel
217:01 channel switches we started by looking at what a
217:04 switches we started by looking at what a domain ID is in the context of fiber
217:06 domain ID is in the context of fiber channel switch and then we looked at the
217:09 channel switch and then we looked at the fabric Services provided by the fiber
217:11 fabric Services provided by the fiber channel
217:11 channel switches these fabric services are name
217:14 switches these fabric services are name server login server fabric controller
217:18 server login server fabric controller management server and time
217:20 management server and time server after covering the fabric
217:22 server after covering the fabric Services we looked at the fabric login
217:25 Services we looked at the fabric login process and the port login
217:28 process and the port login process next we looked at what zoning
217:30 process next we looked at what zoning was and then we looked at what a Zone
217:32 was and then we looked at what a Zone set was we also looked at the types of
217:35 set was we also looked at the types of zoning and these are worldwide name
217:37 zoning and these are worldwide name zoning and Port
217:39 zoning and Port zoning we looked at the advantages of
217:41 zoning we looked at the advantages of zoning and then we looked at the
217:43 zoning and then we looked at the implementation of zoning
217:46 implementation of zoning we also looked at what a Zone Alias is
217:48 we also looked at what a Zone Alias is and then we looked at the best practices
217:50 and then we looked at the best practices in
217:51 in zoning next we looked at what a virtual
217:53 zoning next we looked at what a virtual fabric was and we looked at its
217:56 fabric was and we looked at its features we also looked at what an over
217:58 features we also looked at what an over subscription was and lastly we looked at
218:01 subscription was and lastly we looked at what an over subscription ratio was in
218:03 what an over subscription ratio was in the next lesson you will learn about the
218:05 the next lesson you will learn about the end port ID virtualization thank you for
218:08 end port ID virtualization thank you for watching
218:34 hello and welcome to unit 6 nport ID virtualization in this lesson you will
218:36 virtualization in this lesson you will learn about the end port ID
218:38 learn about the end port ID virtualization since nport ID
218:41 virtualization since nport ID virtualization is based on server
218:43 virtualization is based on server virtualization we're going to start by
218:45 virtualization we're going to start by looking at at what server virtualization
218:47 looking at at what server virtualization is and then we will look at what
218:49 is and then we will look at what hypervisor
218:50 hypervisor is next we will talk about the virtual
218:53 is next we will talk about the virtual machines sharing the host IO connections
218:55 machines sharing the host IO connections and the problems associated with it we
218:58 and the problems associated with it we will then look at the challenges faced
219:00 will then look at the challenges faced with server
219:01 with server virtualization next we will recall what
219:04 virtualization next we will recall what nport is and then we will talk about the
219:06 nport is and then we will talk about the fabric login or F loggy process we will
219:10 fabric login or F loggy process we will also talk about the port login or P
219:12 also talk about the port login or P loggy process we will then look at nport
219:15 loggy process we will then look at nport ID virtualization or
219:18 ID virtualization or npiv while talking about nport ID
219:21 npiv while talking about nport ID virtualization we will look at fabric
219:23 virtualization we will look at fabric Discovery and then we will look at the
219:25 Discovery and then we will look at the implementation of nport ID
219:27 implementation of nport ID virtualization we will also look at the
219:29 virtualization we will also look at the advantages of
219:31 advantages of npiv next we will look at the challenges
219:33 npiv next we will look at the challenges faced in a traditional environment and
219:36 faced in a traditional environment and then we will see how nport virtualizer
219:38 then we will see how nport virtualizer or npv addresses these challenges we
219:42 or npv addresses these challenges we will also look at how nport virtualizer
219:44 will also look at how nport virtualizer works and then we will look at the
219:46 works and then we will look at the benefits of end port virtualizer next we
219:48 benefits of end port virtualizer next we will look at what multipathing is and
219:51 will look at what multipathing is and then we will look at the implementation
219:52 then we will look at the implementation of multipathing we will also look at
219:55 of multipathing we will also look at redundancy in the context of
219:57 redundancy in the context of multipathing we will look at why there's
219:59 multipathing we will look at why there's a need for multipathing software and
220:01 a need for multipathing software and then we will look at the types of
220:03 then we will look at the types of multipathing software based on their
220:05 multipathing software based on their modes of operation and these are active
220:08 modes of operation and these are active passive mode and active active mode
220:11 passive mode and active active mode lastly we will look at The Logical Drive
220:13 lastly we will look at The Logical Drive visibility of the host operating system
220:15 visibility of the host operating system in the contents of
220:17 in the contents of multipathing nport ID virtualization is
220:20 multipathing nport ID virtualization is based on server virtualization so we
220:23 based on server virtualization so we will talk about that
220:25 The present-day trend in data centers is to use server virtualization to avoid the proliferation of physical servers. Server virtualization is the partitioning of a single physical server into multiple virtual servers; these virtual servers are called virtual machines. Each virtual machine behaves like a physical server. The software that implements server virtualization is called a hypervisor, and it runs directly on the physical server in place of an operating system. When it comes to accessing storage, these virtual machines share the I/O connections of the physical server. Sharing the I/O connections leads to bandwidth contention among the virtual machines, which in turn affects the quality of service received by the applications running in these virtual machines.
221:24 In addition to this, storage administrators don't have application-level visibility in the tools used for monitoring, troubleshooting, and securing the SAN. This is because the I/Os come from the same physical HBA.
221:38 same physical HBA before server virtualization a
221:41 HBA before server virtualization a typical practice was to create a Zone to
221:44 typical practice was to create a Zone to allow a server access to a storage LUN
221:47 allow a server access to a storage LUN this was done using worldwide name
221:49 this was done using worldwide name zoning where the wwnn of the server's
221:52 zoning where the wwnn of the server's host bus adapter was assigned to a
221:54 host bus adapter was assigned to a Lun since wwnn is a unique identifier of
221:58 Lun since wwnn is a unique identifier of HBA this method not only allowed secure
222:01 HBA this method not only allowed secure access to the Lan but it also provided a
222:03 access to the Lan but it also provided a customizable quality of service for the
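The zone-membership idea described here can be sketched in a few lines. This is a toy model, not any vendor's zoning implementation: a zone is treated as a simple set of WWNs, and the WWN values are made up for illustration.

```python
# Hypothetical sketch of worldwide-name (WWN) zoning: a zone is modeled as a
# set of WWNs that are allowed to communicate with each other. All WWN values
# below are invented for illustration.
ZONE_A = {"10:00:00:05:1e:aa:bb:cc",   # server HBA port WWN (made up)
          "50:06:01:60:3b:de:ad:01"}   # storage target port WWN (made up)

def can_access(initiator_wwn: str, target_wwn: str, zones) -> bool:
    """An initiator may reach a target only if some zone contains both WWNs."""
    return any(initiator_wwn in z and target_wwn in z for z in zones)

print(can_access("10:00:00:05:1e:aa:bb:cc", "50:06:01:60:3b:de:ad:01", [ZONE_A]))  # True
print(can_access("10:00:00:05:1e:11:22:33", "50:06:01:60:3b:de:ad:01", [ZONE_A]))  # False
```

The sketch also hints at the problem discussed next: if every virtual machine presents the same initiator WWN, the fabric cannot distinguish between them.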
222:11 With server virtualization, this practice came to a halt: a physical server may host multiple virtual machines, and each virtual machine shares access to the server's host bus adapter and, as a result, presents the same WWN to the LUN. There was no mechanism in place to identify individual virtual machines, to track their use of SAN resources, or to ensure they didn't conflict over those resources.
222:38 There is also another challenge with server virtualization, because it provides live migration of virtual machines from one physical server to another. When live migration is done, storage administrators need to remember to include the WWN of the second physical server's HBA in the zone; otherwise the migrated virtual machine will not see its LUN. This is because the virtual machine, after migration, will have the WWN of the second physical server's HBA and not that of the first, and the SAN will block its access to the LUN, treating it as an unauthorized WWN.
223:20 Lun treating it as an unauthorized wwnn one might think that adding
223:22 wwnn one might think that adding dedicated physical server hbas to each
223:25 dedicated physical server hbas to each virtual machine would solve the problem
223:27 virtual machine would solve the problem instead of having the hypervisor manage
223:29 instead of having the hypervisor manage the virtual
223:35 hbas but that's a costly Affair and it doesn't provide much value on
223:37 doesn't provide much value on investment in addition including more
223:40 investment in addition including more physical hbas would require more end
223:43 physical hbas would require more end ports in a fabric result ing in a bigger
223:45 ports in a fabric result ing in a bigger sand
223:51 In order to address these concerns, N_Port ID virtualization, or NPIV, was developed. Before we look at what NPIV is, we will first cover a few basics: the N_Port, the fabric login process, and the port login process. We know that an N_Port is a node port: it refers to a port on an end device in a fabric, and it could be either an HBA port in the server or a target port in storage. Each N_Port of an end device has a unique 64-bit identifier called the worldwide port name (WWPN), assigned by its manufacturer.
224:33 In a non-virtualized environment, when an end device is first attached to a fabric switch, it executes a fabric login, or FLOGI, process. FLOGI is a service that establishes a logical connection between the N_Port of the end device and the F_Port of the fabric switch. Once the session is established, the N_Port sends a FLOGI frame to the well-known address of the login server with details such as its node name, N_Port name, and service parameters. When the FLOGI is successful, the F_Port assigns a 24-bit dynamic address called an N_Port ID to the N_Port of the device. The N_Port ID assigned to an N_Port changes each time the N_Port is reinitialized, and it is used by the N_Port for communicating with the fabric.
225:31 The next step after the FLOGI process is the port login, or PLOGI. During the PLOGI process, the end device registers its information, such as its N_Port ID, with the name server. So in a non-virtualized environment, an N_Port has both a worldwide port name and an N_Port ID. If we take a Fibre Channel HBA as an example, its N_Port will have a single worldwide port name and a single N_Port ID associated with it.
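The contrast between the static 64-bit WWPN and the dynamic 24-bit N_Port ID is easier to see with the address layout written out. The 24-bit Fibre Channel address is commonly described as three 8-bit fields: Domain (identifying the switch), Area, and Port. A minimal sketch:

```python
# Sketch of the 24-bit Fibre Channel address (N_Port ID), treated as three
# 8-bit fields: Domain, Area, Port. Field values below are arbitrary examples.
def make_nport_id(domain: int, area: int, port: int) -> int:
    assert 0 <= domain <= 0xFF and 0 <= area <= 0xFF and 0 <= port <= 0xFF
    return (domain << 16) | (area << 8) | port

def split_nport_id(nport_id: int):
    return (nport_id >> 16) & 0xFF, (nport_id >> 8) & 0xFF, nport_id & 0xFF

addr = make_nport_id(0x01, 0x02, 0x03)
print(hex(addr))              # 0x10203
print(split_nport_id(addr))   # (1, 2, 3)
```

Unlike the manufacturer-assigned WWPN, this address is handed out by the fabric at login time, which is why it can change whenever the N_Port is reinitialized.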
226:07 N_Port ID virtualization, or NPIV, is an ANSI standard that allows a single Fibre Channel HBA port, or N_Port, to register multiple worldwide port names in the fabric. Each registered worldwide port name is assigned a unique N_Port ID. In simple words, NPIV allows a single physical N_Port to acquire multiple N_Port IDs. Each N_Port ID can be mapped to a different initiator, such as a virtual machine. This allows the creation of multiple virtual links over a single physical link by mapping many N_Port IDs to one F_Port.
226:48 with one fport an end device that supports npiv
226:52 fport an end device that supports npiv features will use the F logy process
226:55 features will use the F logy process only once to get the first nport
226:58 only once to get the first nport ID in order to acquire additional nport
227:01 ID in order to acquire additional nport IDs it will start executing a fabric
227:04 IDs it will start executing a fabric Discovery process or an Fisk process as
227:07 Discovery process or an Fisk process as many times as needed fabric Discovery is
227:10 many times as needed fabric Discovery is a service that verifies if the existing
227:13 a service that verifies if the existing login in the fabric is still valid
227:16 login in the fabric is still valid however it is also used to obtain
227:18 however it is also used to obtain additional nport
227:20 additional nport IDs in order to implement npiv the San
227:24 IDs in order to implement npiv the San itself should be npiv capable that is it
227:27 itself should be npiv capable that is it should contain npiv capable hbas and
227:30 should contain npiv capable hbas and npiv capable
227:33 npiv capable switches an npiv capable HBA can be
227:37 switches an npiv capable HBA can be virtualized into multiple virtual
227:40 virtualized into multiple virtual hbas each virtual HBA will have its own
227:44 hbas each virtual HBA will have its own virtual world worldwide Port name and
227:46 virtual world worldwide Port name and its own virtual end
227:48 its own virtual end port the virtual end port like a
227:51 port the virtual end port like a physical end port can register with the
227:53 physical end port can register with the fabric to get an nport
227:55 fabric to get an nport ID the nport ID can then be used by the
227:58 ID the nport ID can then be used by the virtual HBA to communicate with the
228:02 virtual HBA to communicate with the fabric the advantages of
228:04 fabric the advantages of npiv in a virtualized environment each
228:08 npiv in a virtualized environment each virtual machine can have a separate
228:10 virtual machine can have a separate worldwide Port name and its zoning can
228:12 worldwide Port name and its zoning can be managed independently
228:18 in addition to that there would be no need for extra physical hbas to be
228:21 need for extra physical hbas to be connected to sand so there would be no
228:23 connected to sand so there would be no need for more Edge switches one major
228:26 need for more Edge switches one major concern when it comes to designing and
228:28 concern when it comes to designing and building a sand is the number of
228:30 building a sand is the number of switches that can exist each fiber
228:33 switches that can exist each fiber channel switch in a fabric requires a
228:35 channel switch in a fabric requires a unique domain ID when a lot of switches
228:38 unique domain ID when a lot of switches join a fabric the number of domain IDs
228:40 join a fabric the number of domain IDs becomes a
228:42 becomes a concern the fiber channel standard
228:44 concern the fiber channel standard allows a maximum of 239 Port addresses
228:47 allows a maximum of 239 Port addresses that can be used for domain
228:49 that can be used for domain IDs in addition to that having more
228:52 IDs in addition to that having more domain IDs creates complexity in
228:55 domain IDs creates complexity in managing the fabric and it also impacts
228:57 managing the fabric and it also impacts the performance because of a lot of
228:59 the performance because of a lot of switch
229:01 One other design issue that administrators face is interoperability with third-party switches. This is because the vendor-specific attributes used for switch-to-switch connectivity make interoperation between different vendors' switches challenging. In order to address these concerns, the N_Port virtualizer was developed.
229:22 The N_Port virtualizer, or NPV, is based on N_Port ID virtualization but is implemented at the switch level. It allows an NPV-enabled edge switch to bundle the connections it receives from end devices, or N_Ports, into one or more connections that link to a core switch. The requirement for NPV is that the core switch support NPIV features. The NPV-enabled edge switch registers with the fabric as an end device: the N_Port virtualizer provides the edge switch with a unique port name and node name used for the fabric login, or FLOGI, process. Therefore, the edge switch is not assigned a domain. The connection from the edge switch to the fabric switch is treated as an N_Port-to-F_Port connection rather than an E_Port-to-E_Port connection. 230:26 The edge switch shares the domain ID of the core switch; it doesn't require a separate domain ID to receive connectivity from the fabric, and it doesn't participate in fabric services. The N_Ports of the end devices that log in to the edge switch each have a unique node name and port name, and they are seen by the edge switch as a group of NPIV ports behind a single physical port; the edge switch maps them to its permanent port name.
231:04 Now let's look at the benefits of the N_Port virtualizer. It makes the SAN scalable by allowing more switches to be added to the fabric without using extra domain IDs. It also solves the problem of interoperability between switches that come from different vendors.
231:27 vendors now let's talk about multipathing in a sand fabric if a
231:30 multipathing in a sand fabric if a server has only one IO paath to access
231:32 server has only one IO paath to access the storage device then in the event of
231:35 the storage device then in the event of the IOP paath failure it will not be
231:37 the IOP paath failure it will not be able to access the storage
231:40 able to access the storage device multipathing is a technique that
231:43 device multipathing is a technique that allows a server to to use more than one
231:45 allows a server to to use more than one path to access a storage
231:48 path to access a storage device it offers both redundancy and
231:50 device it offers both redundancy and performance to the fabric in order to
231:53 performance to the fabric in order to implement the multipathing technique the
231:55 implement the multipathing technique the fabric should have redundant components
231:57 fabric should have redundant components such as the following fiber channel hpas
232:01 such as the following fiber channel hpas fiber channel switches and fiber channel
232:03 fiber channel switches and fiber channel cables these components provide multiple
232:06 cables these components provide multiple physical paths from the servers to the
232:08 physical paths from the servers to the storage
232:10 storage device setting up multiple paths with
232:13 device setting up multiple paths with redundant components in ensures that
232:15 redundant components in ensures that there is no single point of failure so
232:18 there is no single point of failure so even if one of the physical paths fails
232:20 even if one of the physical paths fails owing to a component failure the server
232:22 owing to a component failure the server can still access the storage device
232:24 can still access the storage device using an alternate
232:26 using an alternate path now let's talk about multipathing
232:29 path now let's talk about multipathing software having multiple paths alone
232:32 software having multiple paths alone will not help reroute the input output
232:34 will not help reroute the input output requests from One path to another in the
232:37 requests from One path to another in the event of a component failure the server
232:39 event of a component failure the server should be aware of the existence of
232:41 should be aware of the existence of multiple paths otherwise it will not
232:43 multiple paths otherwise it will not route the IO requests to an alternate
232:46 route the IO requests to an alternate path in the event of a component
232:48 path in the event of a component failure here's where multipath software
232:51 failure here's where multipath software comes to our Aid multipathing software
232:55 comes to our Aid multipathing software differs in the way it uses multiple
232:57 differs in the way it uses multiple paths the types of multipathing software
233:00 paths the types of multipathing software are as
233:01 are as follows active passive mode active
233:04 follows active passive mode active active
233:05 active mode in active passive mode the
233:08 mode in active passive mode the multipathing software takes care of only
233:11 multipathing software takes care of only failover recovery though the
233:13 failover recovery though the multipathing software manages all the
233:15 multipathing software manages all the paths between the server and the storage
233:17 paths between the server and the storage device it uses only one path for sending
233:20 device it uses only one path for sending the io requests if that path fails the
233:23 the io requests if that path fails the multipathing software will identify and
233:25 multipathing software will identify and access an alternate path to redirect the
233:28 access an alternate path to redirect the io
233:33 In active-active mode, the multipathing software spreads the I/O requests equally across all available paths. To do this, it continuously monitors the paths to see which ones are available, and enables or disables them based on their availability. The advantage of active-active mode is that it makes the best use of the underlying hardware by combining failover recovery with load balancing.
234:01 The multipathing software also ensures that the storage device's logical drive is visible to the host operating system, or its applications, without any duplication, even though the same logical drive is presented to the host several times, once over each path. And that brings us to the end of this lesson.
234:21 Let's summarize what you have learned in this lesson. In this lesson you learned about N_Port ID virtualization. Since N_Port ID virtualization is based on server virtualization, we started by looking at what server virtualization is, and then at what a hypervisor is. Next we talked about virtual machines sharing the host's I/O connections and the problems associated with that, and then we looked at the challenges faced with server virtualization. Next we recalled what an N_Port is, and then we talked about the fabric login, or FLOGI, process and the port login, or PLOGI, process. We then looked at N_Port ID virtualization, or NPIV; while talking about NPIV we looked at fabric discovery, and then we talked about the implementation of NPIV and its advantages. 235:26 Next we looked at the challenges faced in a traditional environment and saw how the N_Port virtualizer, or NPV, addresses them; we also looked at how the N_Port virtualizer works, and then at its benefits. Next we looked at what multipathing is, at its implementation, and at redundancy in the context of multipathing. We looked at why there is a need for multipathing software, and then at the types of multipathing software based on their mode of operation: active-passive mode and active-active mode. Lastly, we looked at the logical drive visibility of the host operating system in the context of multipathing. In the next lesson you will learn about IP SAN. Thank you for watching.
236:42 Hello, and welcome to Unit 1, Introduction to IP SAN. In this lesson you will learn about IP SAN. We're going to start by looking at the evolution of IP SAN, and then at what IP SAN is. Next we will talk about combining the application network and the storage network on the same IP network. We will also talk about the IT skills required for IP SAN, and then about the cost of IP components. Next we will talk about consolidating storage using IP SAN, and then we will look at the advantages of consolidating storage. We will look at the disadvantages of using FC SAN for storage consolidation, and then at the benefits of IP SAN. We will also look at the types of IP SAN, and then at the three protocols for implementing IP SAN: Internet SCSI, or iSCSI; Fibre Channel over IP, or FCIP; and Internet Fibre Channel Protocol, or iFCP. Lastly, we will look at the types of IP SAN deployments: native, bridging, and extension.
237:57 extension now let's look at the evolution of Ip
237:59 evolution of Ip sand sand became popular because of
238:01 sand sand became popular because of fiber channel
238:03 fiber channel technology FC San a high-speed dedicated
238:06 technology FC San a high-speed dedicated network with low latency and high
238:08 network with low latency and high availability features is a perfect
238:11 availability features is a perfect choice for Mission critical applications
238:18 however its implementation is complex and requires new skills to manage
238:20 and requires new skills to manage it in addition to that the relatively
238:23 it in addition to that the relatively higher cost of FC San infrastructure
238:26 higher cost of FC San infrastructure compared to the ethernet-based networks
238:28 compared to the ethernet-based networks affected the adoption of sand in many
238:32 affected the adoption of sand in many organizations this led to the
238:33 organizations this led to the development of standards and
238:35 development of standards and technologies that used ethernet and IP
238:37 technologies that used ethernet and IP for storage
238:38 for storage networking its goal was to deliver a San
238:41 networking its goal was to deliver a San that could use the following existing
238:43 that could use the following existing Network infrastructure existing it
238:46 Network infrastructure existing it skills and leverage of lowcost ethernet
238:49 skills and leverage of lowcost ethernet to offer performance and scalability
238:51 to offer performance and scalability with the ease of simplified management
238:53 with the ease of simplified management across the
238:55 across the organization when storage is networked
238:57 organization when storage is networked over an IP network it offers many of the
239:00 over an IP network it offers many of the benefits of fiber channel San along with
239:02 benefits of fiber channel San along with cost
239:08 Fibre Channel SAN uses block I/Os to communicate with the storage devices. Block I/Os are storage protocol calls, such as SCSI, that issue I/O commands for reading and writing blocks of data to storage. With the advancement of IP technology, block I/Os can now be handled over an existing IP network, and as a result we have a storage solution that can connect servers to storage devices using block I/Os over IP networks. In simple words, a storage area network over IP is referred to as an IP SAN. What makes IP SAN attractive is that it makes storage networking possible over an IP network at low cost and with great efficiency.
239:55 IP SAN leverages the existing IP network infrastructure. The protocols that make IP SAN happen are iSCSI, FCIP, and iFCP. These run on top of TCP/IP, which makes them compatible with the components of an IP network, such as cables, switches, routers, and management systems.
240:21 Systems it is possible to combine both the application Network and the storage
240:23 the application Network and the storage Network on the same IP network by taking
240:26 Network on the same IP network by taking advantage of the existing IP network
240:29 advantage of the existing IP network infrastructure however many
240:31 infrastructure however many organizations prefer to have a storage
240:33 organizations prefer to have a storage Network separated from the application
240:35 Network separated from the application Network for reasons such as security and
240:40 Network for reasons such as security and performance organizations that have
240:42 performance organizations that have existing IP network infr structures have
240:44 existing IP network infr structures have it staffs to manage and maintain the
240:48 it staffs to manage and maintain the networks while implementing IP San the
240:51 networks while implementing IP San the skills of the existing IT staff can be
240:54 skills of the existing IT staff can be used instead of hiring new IT
240:57 used instead of hiring new IT staff the lowc costs of Ip components
241:00 staff the lowc costs of Ip components can be attributed to the high level of
241:02 can be attributed to the high level of interoperability among various vendor
241:05 interoperability among various vendor equipment for example an ethernet switch
241:08 equipment for example an ethernet switch would cost less than a fiber channel
241:11 would cost less than a fiber channel switch the real savings come from the
241:13 switch the real savings come from the decision to use NYX which offers zero
241:16 decision to use NYX which offers zero cost per host connection over fiber
241:18 cost per host connection over fiber channel
241:24 HBAs. A NIC can handle a moderate I/O workload. However, when the workload
241:26 increases, the protocol overhead will
241:28 start affecting the performance of the
241:32 server. One may tend to choose NICs with
241:35 TCP/IP offload engines, because they take
241:38 over the protocol workload from the
241:39 server's processor, but they will cost more
241:42 and will eliminate the savings of
241:44 implementing an IP
241:47 SAN. So an IP SAN would make more sense
241:50 in a low to moderate workload
241:53 environment. Now we will talk about
241:55 consolidating storage using IP SAN and
241:58 its
241:59 advantages. IP SAN is considered an ideal
242:02 solution for consolidating storage
242:04 confined to low-end
242:07 servers. The advantages of consolidating
242:09 storage are operational efficiency, improved
242:13 data security,
242:14 and lowered total cost of
242:16 ownership. Now let's look at the
242:18 disadvantages of using FC SAN for
242:20 storage
242:22 consolidation. FC SAN cannot be used to
242:25 consolidate storage of low-end servers,
242:27 since it is an expensive
242:29 solution. This is because the cost of a
242:32 Fibre Channel HBA, along with the ports
242:35 of a Fibre Channel switch used for
242:37 connectivity, may be more costly than the
242:39 low-end
242:40 servers. Now let's summarize the benefits
242:43 servers now let's summarize the benefits of Ip sand
242:45 of Ip sand IP sand provides a standard sand-based
242:47 IP sand provides a standard sand-based storage environment by providing Block
242:49 storage environment by providing Block Level storage access to the
242:52 Level storage access to the servers it offers an ease of migration
242:55 servers it offers an ease of migration from direct attached
242:58 from direct attached storage and it has the lowest total cost
243:00 storage and it has the lowest total cost of ownership when compared to FC
243:04 of ownership when compared to FC San this is because the cost associated
243:07 San this is because the cost associated with using and managing an existing
243:09 with using and managing an existing infrastructure is less than that of FC
243:12 infrastructure is less than that of FC San my ation to IP s is easy because the
243:16 San my ation to IP s is easy because the IP expertise of the existing IT staff
243:18 IP expertise of the existing IT staff can be
243:20 can be used remote data replication can be done
243:22 used remote data replication can be done using IP
243:24 using IP San use of well-managed standards and
243:27 San use of well-managed standards and management tools makes Network
243:28 management tools makes Network management
243:34 easier while FC San is used for business critical applications IP San can be used
243:37 critical applications IP San can be used for applications that are not business
243:40 for applications that are not business critical now let's look at the types of
243:43 critical now let's look at the types of Ip San
243:45 Ip San IP sand transports scuzzy block ios's
243:48 IP sand transports scuzzy block ios's over an IP network in two ways in one
243:51 over an IP network in two ways in one method a fiber channel frame is
243:53 method a fiber channel frame is encapsulated inside an IP datagram with
243:57 encapsulated inside an IP datagram with the help of one of the following fcip
243:59 the help of one of the following fcip protocol or ifcp protocol in another
244:03 protocol or ifcp protocol in another method a scuzzy frame is encapsulated
244:06 method a scuzzy frame is encapsulated inside an IP datagram with the help of
244:08 inside an IP datagram with the help of an i scuzzy
244:10 an i scuzzy protocol now we will look at the
244:12 protocol now we will look at the protocols that are used used to
244:14 protocols that are used used to implement IP
244:16 implement IP San the three protocols that are used to
244:19 San the three protocols that are used to implement IP San are as follows internet
244:22 implement IP San are as follows internet scuzzy or I scuzzy fiber channel over IP
244:25 scuzzy or I scuzzy fiber channel over IP or fcip and internet fiber channel
244:29 or fcip and internet fiber channel protocol or
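The two encapsulation paths just described can be sketched with plain Python dictionaries. This is a toy model of the layering only, with hypothetical field names; real FCIP, iFCP, and iSCSI use binary frame and PDU formats:

```python
# Toy model of the two IP SAN encapsulation methods described above.
# Each layer is a dict wrapping its payload; real protocols use binary headers.

def ip_datagram(payload):
    """Wrap a payload in TCP, then IP, since both methods run over TCP/IP."""
    return {"proto": "IP", "payload": {"proto": "TCP", "payload": payload}}

def fcip_encapsulate(fc_frame):
    """Method 1 (FCIP/iFCP): a whole Fibre Channel frame rides inside TCP/IP."""
    return ip_datagram({"proto": "FC", "payload": fc_frame})

def iscsi_encapsulate(scsi_command):
    """Method 2 (iSCSI): the SCSI command travels in an iSCSI PDU, no FC layer."""
    return ip_datagram({"proto": "iSCSI", "payload": scsi_command})

# Hypothetical SCSI read command, for illustration only
pkt = iscsi_encapsulate({"op": "READ", "lba": 0, "blocks": 8})
inner = pkt["payload"]["payload"]   # peel off IP, then TCP
```

Peeling the layers of `pkt` gives IP, then TCP, then the iSCSI PDU carrying the SCSI command; `fcip_encapsulate` would instead yield a Fibre Channel frame at that depth.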
244:31 protocol or ifcp now let's look at these protocols
244:33 ifcp now let's look at these protocols one by one we will start with the is
244:36 one by one we will start with the is scuzzi
244:37 scuzzi protocol the I scuzzi protocol functions
244:40 protocol the I scuzzi protocol functions as a native
244:42 as a native protocol it is because all devices in IP
244:45 protocol it is because all devices in IP San have ethernet interfaces and use I
244:48 San have ethernet interfaces and use I scy to communicate with each other
244:50 scy to communicate with each other without any
244:55 translation I scuzi protocol works over long distances and it uses TCP IP
244:58 long distances and it uses TCP IP headers to ensure that its frames are
245:00 headers to ensure that its frames are routed with guaranteed
245:03 routed with guaranteed delivery now let's look at
245:06 delivery now let's look at fcip FCI is a protocol that allows
245:09 fcip FCI is a protocol that allows tunneling of fiber channel information
245:11 tunneling of fiber channel information through the IP network and because
245:13 through the IP network and because because of this the fiber channel
245:15 because of this the fiber channel devices don't know the IP network
245:19 devices don't know the IP network exists since fcip does tunneling of
245:22 exists since fcip does tunneling of fiber channel information it is also
245:24 fiber channel information it is also called tunneling
245:27 called tunneling protocol fcip interconnects two isolated
245:30 protocol fcip interconnects two isolated FC SS and merges them into a single FC s
245:34 FC SS and merges them into a single FC s with only one name
245:36 with only one name server fcip also supports long-distance
245:40 server fcip also supports long-distance connectivity using an IP network
245:44 connectivity using an IP network now let's look at
245:46 now let's look at ifcp ifcp allows fiber channel devices
245:50 ifcp ifcp allows fiber channel devices to communicate with each other over the
245:52 to communicate with each other over the IP network using TCP
245:55 IP network using TCP IP it does this by replacing the lower
245:58 IP it does this by replacing the lower level transport mechanism of the fiber
246:00 level transport mechanism of the fiber channel protocol with TCP
246:03 channel protocol with TCP I ifcp uses an existing IP network to
246:07 I ifcp uses an existing IP network to interconnect isolated FC sand networks
246:10 interconnect isolated FC sand networks that are often geographically
246:11 that are often geographically distributed without merging them into a
246:14 distributed without merging them into a single sand
246:15 single sand fabric since it interconnects FC Sands
246:19 fabric since it interconnects FC Sands ifcp is referred to as a bridging
246:23 ifcp is referred to as a bridging protocol ifcp supports long-distance
246:26 protocol ifcp supports long-distance connectivity using an IP
246:28 connectivity using an IP network now let's look at the types of
246:31 network now let's look at the types of deployments in IP storage area
246:34 deployments in IP storage area networking there are three types of
246:36 networking there are three types of deployments in IP
246:37 deployments in IP sand native bridging and extension in
246:42 sand native bridging and extension in the native type of deployment everything
246:44 the native type of deployment everything is IP based this deployment uses
246:47 is IP based this deployment uses existing IP infrastructure and all the
246:50 existing IP infrastructure and all the devices in the infrastructure have
246:52 devices in the infrastructure have ethernet
246:53 ethernet devices they connect to ethernet Lan and
246:57 devices they connect to ethernet Lan and communicate using I scuzzy protocol with
246:59 communicate using I scuzzy protocol with no translations or tunneling
247:06 required this is a pure IP sand deployment which doesn't need any fiber
247:08 deployment which doesn't need any fiber channel devices such as fiber channel
247:10 channel devices such as fiber channel switches or fiber channel host bus
247:12 switches or fiber channel host bus adapters
247:18 in the bridging type of deployment devices with ethernet interfaces that
247:20 devices with ethernet interfaces that can communicate using I scuzi protocol
247:22 can communicate using I scuzi protocol are joined to the existing fiber channel
247:26 are joined to the existing fiber channel Sand Bridge deployment is used in
247:29 Sand Bridge deployment is used in situations where a lower cost solution
247:31 situations where a lower cost solution is required for consolidating storage in
247:33 is required for consolidating storage in an existing
247:38 Sand Bridge deployment uses Bridge equipment to join an IP network and FC
247:41 equipment to join an IP network and FC San
247:44 San in the extension type of deployment the
247:46 in the extension type of deployment the IP network is used to interconnect
247:48 IP network is used to interconnect existing
247:50 existing Sands this is primarily done to
247:52 Sands this is primarily done to interconnect Sands that are in different
247:54 interconnect Sands that are in different locations without having to install a
247:56 locations without having to install a dedicated Network to connect
247:59 dedicated Network to connect them that brings us to the end of this
248:01 them that brings us to the end of this lesson let's summarize what we have
248:03 lesson let's summarize what we have learned in this
248:05 learned in this lesson in this lesson you learned about
248:07 lesson in this lesson you learned about IP San we started by looking at the
248:10 IP San we started by looking at the evolution of Ip San and then we looked
248:13 evolution of Ip San and then we looked at what IP San is next we talked about
248:17 at what IP San is next we talked about combining both the application Network
248:19 combining both the application Network and storage Network on the same IP
248:21 and storage Network on the same IP network we also talked about the it
248:24 network we also talked about the it skills required for IP s and then we
248:26 skills required for IP s and then we talked about the cost of Ip
248:29 talked about the cost of Ip components next we talked about
248:31 components next we talked about consolidating storage using IP San and
248:34 consolidating storage using IP San and then we looked at the advantages of
248:35 then we looked at the advantages of consolidating
248:36 consolidating storage we looked at the disadvantages
248:39 storage we looked at the disadvantages of using FC sand for storage
248:41 of using FC sand for storage consolidation and then we looked at the
248:43 consolidation and then we looked at the benefits of Ip
248:45 benefits of Ip San we also looked at the types of Ip
248:47 San we also looked at the types of Ip San and then we looked at the three
248:49 San and then we looked at the three protocols that are used to implement IP
248:52 protocols that are used to implement IP San these are internet scuzzy or I
248:55 San these are internet scuzzy or I scuzzy fiber channel over IP or
248:58 scuzzy fiber channel over IP or fcip and internet fiber channel protocol
249:01 fcip and internet fiber channel protocol or
249:03 or ifcp lastly we looked at the types of Ip
249:06 ifcp lastly we looked at the types of Ip sand deployments native bridging and
249:11 sand deployments native bridging and extension in the next lesson you will
249:13 extension in the next lesson you will learn about I scuzzy San thank you for
249:41 watching. Hello, and welcome to Unit 2, iSCSI SAN.
249:43 In this lesson, you will learn about
249:45 iSCSI SAN. We're going to start by
249:48 looking at what iSCSI SAN is, and then
249:51 we will look at the iSCSI
249:57 architecture. We will also look at the components of iSCSI SAN. These are the
250:00 initiator, target, and IP
250:04 network. Next, we will look at the
250:06 physical iSCSI interfaces, and then we
250:08 will talk about iSCSI
250:11 networks. We will look at iSCSI naming,
250:14 and then we will look at the device
250:16 discovery process in iSCSI
250:19 SAN. We will also look at what a network
250:22 portal is, and then we will look at how
250:24 an initiator discovers a
250:28 target. Next, we will talk about the
250:30 iSCSI session that the initiator
250:32 establishes with the target, and then we
250:35 will talk about the login
250:37 process. We'll look at the full-feature
250:40 phase that begins when the login process
250:42 is complete.
250:44 Next, we'll talk about the iSCSI
250:45 payload and its
250:48 transportation. Lastly, we will look at
250:50 the two popular methods used for
250:52 implementing iSCSI
250:54 security. These are Challenge Handshake
250:57 Authentication Protocol, or
250:59 CHAP, and Internet Protocol Security, or
251:03 IPsec.
251:05 IP secc now let's begin with iuy
251:08 secc now let's begin with iuy San An isui San is a storage area
251:11 San An isui San is a storage area network implemented over a IP network
251:14 network implemented over a IP network using an i scuzi
251:17 using an i scuzi protocol I scuzi protocol is a mapping
251:20 protocol I scuzi protocol is a mapping of the scuzzy protocol over the TCP
251:24 of the scuzzy protocol over the TCP protocol it carries Block Level data
251:27 protocol it carries Block Level data over the IP network as a result the
251:30 over the IP network as a result the block storage can be accessed over the
251:32 block storage can be accessed over the IP network as if it's directly attached
251:35 IP network as if it's directly attached to the
251:36 to the server I scuzzi architecture is based on
251:39 server I scuzzi architecture is based on the client server model of
251:41 the client server model of scuzzy in an is scuzi par Lance it is
251:44 scuzzy in an is scuzi par Lance it is referenced to as the initiator Target
251:48 referenced to as the initiator Target Model basically an I scuzi sand consists
251:51 Model basically an I scuzi sand consists of three components initiator Target and
251:55 of three components initiator Target and IP
251:56 IP network in isui San the initiator is the
252:00 network in isui San the initiator is the system component that first initiates
252:02 system component that first initiates the read or write requests over the IP
252:07 the read or write requests over the IP network an example of a device that runs
252:09 network an example of a device that runs the initiator process is the server
252:11 the initiator process is the server computer
252:14 computer the target is the system component that
252:16 the target is the system component that responds to the requests of the
252:18 responds to the requests of the initiator over the IP
252:20 initiator over the IP network an example of a device that runs
252:23 network an example of a device that runs the target process is the storage
252:26 the target process is the storage array initiators and targets need
252:29 array initiators and targets need physical I scuzzi interfaces to connect
252:31 physical I scuzzi interfaces to connect to the IP
252:33 to the IP network the I scuzzy interface is
252:35 network the I scuzzy interface is available either as a PCI expansion card
252:38 available either as a PCI expansion card or it is integrated into the motherboard
252:45 There are four different kinds of iSCSI
252:46 interfaces: Ethernet NIC, Ethernet NIC
252:50 with TCP offload engine, iSCSI host
252:53 bus adapter, and converged network
252:56 adapter. We will begin with the Ethernet
253:00 NIC. Software on the server, called the
253:03 software initiator, configures the
253:05 standard Ethernet NIC as an iSCSI
253:08 initiator. Most operating systems, such as
253:11 Windows and Linux, have built-in software
253:15 initiators. The second one is the Ethernet
253:18 NIC with TCP offload engine, or TOE.
253:21 This interface is an Ethernet
253:24 NIC with a TCP offload engine, and it is
253:27 also configured by a software
253:30 initiator. The TCP offload engine
253:33 relieves the server CPU from processing
253:35 the protocol overhead of the TCP stack.
253:39 However, other activities, such as PDU
253:42 creation and encapsulation or
253:44 decapsulation of PDUs, are handled by the
253:47 server's
253:49 CPU. The third option is the iSCSI
253:52 host bus adapter. The iSCSI host bus
253:55 adapter is more powerful than the
253:56 Ethernet NIC with TOE, because it has a
253:59 CPU and memory, and it takes care of all
254:02 the iSCSI-related
254:05 processing. The iSCSI host bus adapter
254:08 has a ROM that allows diskless servers
254:10 to boot from an iSCSI SAN storage
254:14 volume. The last kind of interface that
254:17 we will discuss is the converged network
254:19 adapter, or
254:25 CNA. The converged network adapter is similar to the iSCSI HBA, but it has the
254:28 ability to support additional protocols,
254:30 such as
254:35 FCoE. We will now look at the three common types of IP networks that connect
254:38 the initiators and the
254:40 targets. These are shared IP network,
254:43 dedicated VLAN, and dedicated
254:46 physical IP
254:48 network. We will first look at the shared
254:51 IP
254:52 network. This is the existing IP network
254:54 in an
254:56 organization. When iSCSI SAN is
254:59 implemented, the IP network will carry
255:01 both storage traffic and other
255:04 traffic. This is the cheapest solution
255:07 available, but it is not
255:09 secure, and it has performance problems.
255:13 The second type of IP network is the
255:16 creation of a dedicated
255:18 VLAN. Having a VLAN dedicated to iSCSI
255:21 SAN ensures that storage traffic
255:23 is isolated from the other
255:26 traffic. A dedicated VLAN improves
255:29 performance and security when compared
255:31 to a shared IP
255:33 network. It's also the cheapest such solution
255:36 available, since it doesn't require any
255:38 additional
255:40 equipment. The last type of IP network
255:43 that we will discuss is the dedicated
255:44 physical IP network. This is the best
255:48 solution available, because having a
255:50 dedicated physical IP network provides
255:52 the best performance and
255:55 security. However, the downside is that it
255:57 is
255:59 costly. In a dedicated physical IP
256:02 network, high availability can be
256:04 achieved by having redundant paths, as in
256:07 FC
256:09 SAN. This solution is well suited for
256:11 business-critical applications.
256:14 business critical application now let's look at the isui naming in
256:17 now let's look at the isui naming in iszi San each and every device is
256:20 iszi San each and every device is identified by a unique Global name for
256:23 identified by a unique Global name for addressing
256:24 addressing purposes the popular naming convention
256:27 purposes the popular naming convention used in I scuzzy San is called is scuzi
256:30 used in I scuzzy San is called is scuzi qualified name or
256:32 qualified name or iqn the format of an iqn name is as
256:37 iqn the format of an iqn name is as follows iqn represents the naming
256:39 follows iqn represents the naming convention used in is scuzi qualified
256:42 convention used in is scuzi qualified name
256:44 name y y y y- mm generally represents the
256:48 y y y y- mm generally represents the year and month in which the company
256:50 year and month in which the company registered its domain
256:53 registered its domain name the domain name of the company
256:55 name the domain name of the company represents the naming
256:57 represents the naming Authority the optional string is any
257:00 Authority the optional string is any string in utf8 text format to specify
257:04 string in utf8 text format to specify additional information such as the
257:06 additional information such as the device model and
257:08 device model and number the dot hyphen and colon are
257:11 number the dot hyphen and colon are delimiters that separate the name ging
257:13 delimiters that separate the name ging Fields anything after the colon is
257:15 Fields anything after the colon is considered optional
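The IQN fields just listed can be captured in a short Python sketch. The regular expression and the example name are illustrative assumptions based on the format above, not taken from the course:

```python
import re

# Illustrative pattern for the IQN format described above:
#   iqn.<yyyy-mm>.<naming-authority>[:<optional-string>]
IQN_PATTERN = re.compile(
    r"^iqn\."           # literal prefix naming the convention
    r"(\d{4}-\d{2})\."  # year and month the domain name was registered
    r"([^:]+)"          # naming authority (the company's domain name)
    r"(?::(.+))?$"      # optional UTF-8 string after the colon
)

def parse_iqn(name):
    """Split an IQN into (date, naming_authority, optional_string)."""
    match = IQN_PATTERN.match(name)
    if not match:
        raise ValueError("not a valid IQN: " + name)
    return match.groups()

# Hypothetical device name built per the fields above
date, authority, extra = parse_iqn("iqn.2001-04.com.example:storage.disk1")
```

Names without a colon simply have no optional string, so the third field comes back empty.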
257:22 We will now discuss the discovery
257:24 of devices by the
257:26 initiators. For communication to take
257:29 place between an initiator and a target,
257:31 the initiator must first find the target
257:34 with which it can establish
257:36 contact. The process of finding the
257:38 target is technically referred to as
257:40 device
257:43 discovery. The initiator discovers
257:45 a target using the target's iSCSI
257:49 name, IP address, and TCP port. The
257:52 combination of a target's IP address and
257:56 its listening TCP port, 3260, is called a
257:57 network
258:00 portal. There are four ways in which an
258:02 initiator discovers a
258:06 target. These are manual discovery, SendTargets
258:09 discovery, Internet Storage
258:14 Name Service or iSNS discovery,
258:17 and Service Location Protocol
258:20 discovery. In the manual discovery method,
258:22 the initiator is manually configured
258:24 with the target's
258:27 address. This method is not scalable, and
258:30 it is suitable only for small-scale
258:33 environments. In the SendTargets discovery
258:35 method, an initiator is manually
258:38 configured with the network portal of a
258:41 target. The initiator then uses the
258:43 target's network portal to establish
258:45 contact with the
258:48 target. In this process, the initiator
258:51 issues a SendTargets command to the
258:53 target, and the target responds with the
258:55 list of names and IP addresses of the
258:58 available targets. This method is used in
259:02 small-scale iSCSI SAN
259:04 implementations. In the Internet Storage
259:07 Name Service, or iSNS, method, the
259:10 initiator can automatically discover the
259:13 target. The initiator and the target
259:15 register themselves with the iSNS
259:19 servers. The initiator can query the iSNS
259:21 for the list of available
259:23 targets. This method is used in
259:26 large-scale iSCSI
259:28 implementations. The last method we will
259:31 discuss is Service Location Protocol
259:34 discovery. In this discovery method, the
259:36 initiator issues a Service Location
259:40 Protocol, or SLP, multicast request, to
259:43 which the target responds.
259:45 This method requires the SLP user agent
259:48 to be running on the initiator, and the
259:51 SLP service agent to be running on the
259:55 target. SLP is used in medium-scale implementations of iSCSI SAN.
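As a toy illustration of the SendTargets method above: the initiator is configured with one network portal and asks it for the available targets. `MockTarget` and every name and address in it are hypothetical stand-ins; no real iSCSI traffic is exchanged:

```python
DEFAULT_ISCSI_PORT = 3260  # the well-known listening TCP port mentioned above

class MockTarget:
    """Stand-in for a storage array reachable at a network portal."""
    def __init__(self, portal, targets):
        self.portal = portal      # network portal: (IP address, TCP port)
        self._targets = targets   # iSCSI name -> IP address

    def send_targets(self):
        # Respond to a SendTargets command with names and addresses
        return sorted(self._targets.items())

def discover(target):
    """Initiator side: contact the configured portal and issue SendTargets."""
    ip, port = target.portal
    assert port == DEFAULT_ISCSI_PORT   # portals listen on 3260 by default
    return target.send_targets()

array = MockTarget(
    portal=("192.0.2.10", DEFAULT_ISCSI_PORT),  # documentation address
    targets={"iqn.2001-04.com.example:disk1": "192.0.2.10"},
)
found = discover(array)
```

The initiator only needs the one portal up front; the full target list comes back in the response, which is what makes this method workable for small-scale deployments.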
259:57 implementation of is scuzzy sand after the initiator has discovered
260:00 sand after the initiator has discovered its Target the next step is to establish
260:02 its Target the next step is to establish an i scuzi session with the
260:05 an i scuzi session with the target an i scuzzi session can have one
260:08 target an i scuzzi session can have one or more TCP connections between the
260:10 or more TCP connections between the initiator and the target
260:13 initiator and the target the initiator sets up each TCP
260:16 the initiator sets up each TCP connection and initiates the login
260:18 connection and initiates the login process for that
260:20 process for that connection the initiator starts the
260:22 connection the initiator starts the login process by connecting to the
260:24 login process by connecting to the listening TCP Port 3260 on the
260:28 listening TCP Port 3260 on the target during the login process the
260:30 target during the login process the following happens both the devices are
260:33 following happens both the devices are authenticated security parameters are
260:36 authenticated security parameters are negotiated and optional parameters are
260:38 negotiated and optional parameters are negotiated and finally the TCP
260:41 negotiated and finally the TCP connection is marked as part of the is
260:43 connection is marked as part of the is scuzi session the login process must be
260:46 scuzi session the login process must be completed before the icui data can be
260:49 completed before the icui data can be transmitted on that
260:51 transmitted on that connection when the login process is
260:53 connection when the login process is complete the ice scuzi session is said
260:55 complete the ice scuzi session is said to enter the full feature
260:57 to enter the full feature phase in this phase the scuzzy data
261:00 phase in this phase the scuzzy data transmission occurs between the
261:02 transmission occurs between the initiator and the target over the is
261:04 initiator and the target over the is scuzi
261:06 In iSCSI SAN, the iSCSI protocol is used for transporting SCSI data over the IP network. The SCSI data consists of SCSI commands and user data, and collectively it is referred to as the iSCSI payload. We will now see how the SCSI payload is transported from the initiator to the target.
261:29 The basic unit of communication in iSCSI SAN is the protocol data unit, or PDU. The iSCSI payload is encapsulated inside a PDU. It is then encapsulated inside one or more TCP segments. The TCP segments are encapsulated in the IP packet, which in turn is encapsulated inside the Ethernet frame. These levels of encapsulation are necessary to transmit the SCSI data over the IP network.
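The encapsulation chain just described, payload inside PDU inside TCP inside IP inside Ethernet, can be illustrated with placeholder byte strings. The header values here are stand-ins for illustration, not real protocol headers (Python 3.9+ for `removeprefix`).

```python
# Toy byte-level sketch of the iSCSI encapsulation chain:
# iSCSI payload -> iSCSI PDU -> TCP segment -> IP packet -> Ethernet frame.
def encapsulate(payload: bytes) -> bytes:
    pdu = b"ISCSI_HDR" + payload     # iSCSI layer: payload inside a PDU
    segment = b"TCP_HDR" + pdu       # TCP layer (one segment, for simplicity)
    packet = b"IP_HDR" + segment     # IP layer
    frame = b"ETH_HDR" + packet      # Ethernet layer
    return frame

def decapsulate(frame: bytes) -> bytes:
    packet = frame.removeprefix(b"ETH_HDR")    # Ethernet header removed at target
    segment = packet.removeprefix(b"IP_HDR")   # IP header removed
    pdu = segment.removeprefix(b"TCP_HDR")     # TCP header removed
    return pdu.removeprefix(b"ISCSI_HDR")      # PDU header removed -> payload

data = b"SCSI command + user data"
assert decapsulate(encapsulate(data)) == data  # round trip recovers the payload
```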
262:03 over the IP network the scuzzy payload passes
262:06 network the scuzzy payload passes through the different layers of the is
262:08 through the different layers of the is scuzi protocol
262:10 scuzi protocol model on the left hand side of the slide
262:13 model on the left hand side of the slide we have the scuzzy payload at the scuzzy
262:15 we have the scuzzy payload at the scuzzy layer of the
262:21 initiator at the I scuzzy layer the scuzzy payload is encapsulated inside
262:23 scuzzy payload is encapsulated inside the I scuzzy pdu it is passed down to
262:27 the I scuzzy pdu it is passed down to the TCP
262:28 the TCP layer at the TCP layer the I scuzzy pdu
262:32 layer at the TCP layer the I scuzzy pdu is encapsulated inside one or more TCP
262:36 is encapsulated inside one or more TCP segments it is then forwarded to the IP
262:38 segments it is then forwarded to the IP layer where it is further encapsulated
262:41 layer where it is further encapsulated inside an IP package
262:46 then it is passed down to the ethernet layer where it is further encapsulated
262:48 layer where it is further encapsulated inside an Ethernet
262:51 inside an Ethernet frame the I scuzzy pdu with all these
262:54 frame the I scuzzy pdu with all these layers of encapsulation is ready to be
262:56 layers of encapsulation is ready to be transmitted over the IP network to the
263:00 transmitted over the IP network to the Target when the encapsulated I scuzzi
263:02 Target when the encapsulated I scuzzi payload reaches the target it finds its
263:05 payload reaches the target it finds its way up the protocol
263:11 stack at the ethernet layer the ethernet encapsulation is
263:13 encapsulation is removed it is then passed up to the IP
263:16 removed it is then passed up to the IP layer where the IP encapsulation is
263:19 layer where the IP encapsulation is removed it is then moved up to the TCP
263:22 removed it is then moved up to the TCP layer where the TCP segments are
263:25 layer where the TCP segments are removed it is at the TCP layer that the
263:28 removed it is at the TCP layer that the reordering of frames is done if they
263:30 reordering of frames is done if they were not received in the order of
263:33 were not received in the order of transmission the ice scuzzy pdu is then
263:36 transmission the ice scuzzy pdu is then passed on to the I scuzzy layer where
263:38 passed on to the I scuzzy layer where the I scuzi pdu encapsulation is removed
263:42 the I scuzi pdu encapsulation is removed to extract the is scuzi payload which
263:44 to extract the is scuzi payload which was transmitted by the
263:46 was transmitted by the initiator now let's talk about the
263:48 initiator now let's talk about the security aspect of isui San the two
263:52 security aspect of isui San the two popular methods used for implementing
263:54 popular methods used for implementing isui security are as follows challenge
263:57 isui security are as follows challenge handshake Authentication Protocol or
263:59 handshake Authentication Protocol or Chap and Internet Protocol security
264:04 Chap and Internet Protocol security IPC challenge handshake Authentication
264:07 IPC challenge handshake Authentication Protocol or chat allows the initiator
264:10 Protocol or chat allows the initiator and the target to mutually authenticate
264:11 and the target to mutually authenticate each other
264:13 each other the authentication is based on a shared
264:15 the authentication is based on a shared secret
264:16 secret password it is highly recommended to use
264:19 password it is highly recommended to use chap with strong
264:21 chap with strong passwords chap does not secure the data
264:24 passwords chap does not secure the data as it is transmitted in clear text over
264:26 as it is transmitted in clear text over the IP network this is where Internet
264:29 the IP network this is where Internet Protocol security or IPC comes to the
264:33 Protocol security or IPC comes to the rescue IPC provides an endtoend
264:36 rescue IPC provides an endtoend encryption service for the data
264:38 encryption service for the data transferred over the IP network between
264:40 transferred over the IP network between the initiator and the target
264:47 That brings us to the end of this lesson. Let's summarize what you have learned. In this lesson you learned about iSCSI SAN. We started by looking at what iSCSI SAN is, and then we looked at the iSCSI architecture. We also looked at the components of iSCSI SAN; these are the initiator, the target, and the IP network.
265:07 Next we looked at the physical iSCSI interfaces, and then we talked about iSCSI networks. We looked at iSCSI naming, and then we looked at the device discovery process in iSCSI SAN. We also looked at what a network portal is, and then we looked at how an initiator discovers a target.
265:28 Next we talked about the iSCSI session that the initiator establishes with the target, and then we talked about the login process. We looked at the full feature phase that begins when the login process is complete. Then we talked about the iSCSI payload and its transportation.
265:48 Lastly, we looked at the two popular methods used for implementing iSCSI security; these are Challenge Handshake Authentication Protocol, or CHAP, and Internet Protocol Security, or IPsec.
266:03 In the next lesson you will learn about the basics of network convergence. Thank you for watching.
266:33 Hello and welcome to Unit 1, Introduction to Converged Networking. In this lesson you will learn the basics of network convergence. We're going to start by looking at what network convergence is, and then we will look at why there's a need for convergence. Next we will look at the drawbacks of traditional Ethernet, and then we will compare Fibre Channel SAN with Ethernet. We will also look at the limitations of Fibre Channel SAN.
267:04 We will look at the emergence of network convergence, and then we will look at what network convergence does. We will also talk about the benefits of network convergence. Next we will talk about two setups: one is a traditional setup without network convergence, and the other is a setup with network convergence. Lastly, we will look at what Fibre Channel over Ethernet, or FCoE, is.
267:34 Now let's begin by looking at what network convergence is. Network convergence concerns combining an Ethernet LAN and a Fibre Channel SAN into a single unified network that can carry both server traffic and storage traffic over a single network cable.
267:57 The primary reasons for having the network infrastructure and storage infrastructure converge into a single infrastructure are the following: to reduce the infrastructure footprint in a data center, to improve the utilization of a data center's resources, and to reduce the cost of ownership.
268:16 By having multiple types of network traffic handled in a single infrastructure, there are savings in terms of the following: cost reduction associated with the purchase, installation, operation, and management of the equipment in a data center. A single infrastructure for both network and storage eliminates the need for multiple types of equipment and cables. The reduced footprint of a data center provides savings in terms of the cost of the following: equipment, cables, power and cooling consumption, and IT staff.
268:54 Traditionally, Ethernet is used to provide local-area connections between clients and servers. However, as a protocol it is not meant to transfer block data in a storage network. When multiple computers in an Ethernet network try to send data simultaneously, it results in data collisions. As network bandwidth consumption increases, the data collisions will also increase significantly and will subsequently consume all the available network bandwidth.
269:29 When there is network congestion, Ethernet drops frames, so the higher-level protocols, TCP/IP, are used to ensure that the frames are retransmitted when they drop, based on an acknowledgement mechanism. Since Ethernet drops frames, it is considered a lossy network.
269:49 Compared to Ethernet, Fibre Channel is considered a high-speed, low-latency, and lossless network. By lossless we mean that packets are not dropped when there is network congestion. Fibre Channel has desirable traits for a storage network, but it does have some limitations. Fibre Channel is a separate network from the data center Ethernet network, and it requires additional infrastructure and costs. In addition to that, FC SAN is a different technology than Ethernet, so it requires different skill sets to install, configure, operate, and manage, adding to the cost in terms of IT staff requirements.
270:38 One of the factors that contributed to network convergence was the continuous evolution of the Ethernet network: from a transmission speed of 100 megabits per second to the widely used speed of 10 gigabits per second. At present, 40 gigabits per second and even 100 gigabits per second have become a reality. In the simple context of data transmission speeds, the speeds of Ethernet have exceeded the available speeds of Fibre Channel. Furthermore, Ethernet can now be enriched with capabilities that make it a low-latency and lossless network similar to Fibre Channel.
271:21 Network convergence combines network and storage traffic into a unified network that has the following: high performance, low latency, high scalability, and high reliability. The benefits of network convergence are as follows: it provides a single high-speed network that can support both network and storage traffic; it offers low latency, high throughput, scalability, and reliability; it provides cost reduction; and it offers simplified management and improved resource utilization.
272:00 utilization now let's look at an oversimplified traditional setup without
272:02 oversimplified traditional setup without network
272:04 network convergence as you can see on the slide
272:07 convergence as you can see on the slide the server requires ethernet Nick and
272:09 the server requires ethernet Nick and HBA adapters to connect to both the
272:12 HBA adapters to connect to both the ethernet Network and the fiber channel
272:16 ethernet Network and the fiber channel sand in addition to that each Network
272:19 sand in addition to that each Network requires different switches and
272:21 requires different switches and cables as a result purchases
272:24 cables as a result purchases installations configurations operations
272:27 installations configurations operations and management are done separately for
272:29 and management are done separately for both the ethernet Network and the FC
272:36 Now let's look at a setup with network convergence. The converged network uses enhanced Ethernet capabilities to combine both storage and network traffic on a single network. In this diagram, the server is using only one converged network adapter, and the converged network adapter connects to a DCB switch.
272:58 A converged network doesn't require different types of adapters, switches, and cables to have the clients, servers, and storage devices connected to the network. Fewer devices results in simplified management and reduces the total cost of ownership.
273:20 The converged network uses enhanced Ethernet as its physical transmission technology, and the Fibre Channel frames transmitted over Ethernet are called Fibre Channel over Ethernet, or FCoE, frames. In the next lesson we will talk more about FCoE.
273:38 And that brings us to the end of this lesson.
273:42 Let's summarize what you have learned in this lesson. In this lesson you have learned the basics of network convergence. We started by looking at what network convergence is, and then we looked at why there's a need for network convergence. Next we looked at the drawbacks of traditional Ethernet, and then we compared Fibre Channel SAN with Ethernet. We also looked at the limitations of FC SAN.
274:10 We looked at the emergence of network convergence, and then we looked at what network convergence does. We also talked about the benefits of network convergence. Next we talked about the two setups: one is a traditional setup without network convergence, and the other is a setup with network convergence. Lastly, we looked at what Fibre Channel over Ethernet, or FCoE, is.
274:36 In the next lesson you will learn about Fibre Channel over Ethernet, or FCoE. Thank you for watching.
275:08 Hello and welcome to Unit 2, Fibre Channel over Ethernet. In this lesson you will learn about Fibre Channel over Ethernet, or FCoE. We're going to start by looking at what Fibre Channel over Ethernet, or FCoE, is, and then we will look at FCoE encapsulation. Next we will look at the FCoE protocol stack, and then we will look at the FCoE infrastructure.
275:39 We will also look at the traditional setup in a data center, and then we will look at the FCoE setup. While talking about the FCoE setup, we will look at the converged network adapter, or CNA, and the FCoE switch.
275:55 Next we will look at what lossless Ethernet is, and then we will look at what Data Center Bridging, or DCB, is. While talking about DCB, we will also look at what DCB task groups are. We will look at three Ethernet enhancements, and these are Priority Flow Control, or PFC; Enhanced Transmission Selection, or ETS; and Data Center Bridging Exchange, or DCBX.
276:28 Lastly, we will look at the three common FCoE deployments, and these are I/O link consolidation, top-of-rack deployments, and end-of-row deployments.
276:42 Now let's look at what Fibre Channel over Ethernet, or FCoE, is. Fibre Channel over Ethernet is a protocol that allows Fibre Channel frames to be transmitted over Ethernet. It's used for transmitting storage traffic along with the network traffic over enhanced Ethernet.
277:00 In FC SAN, the SCSI data is encapsulated inside a Fibre Channel frame on the Fibre Channel host bus adapter of the server before being transmitted over the FC network. In a converged network, the SCSI data is also encapsulated into an FC frame, but the FCoE protocol encapsulates the Fibre Channel frames into Ethernet frames that can be transmitted along with the IP traffic.
277:34 The sizes of the FC frames are over 2 kilobytes, so it becomes necessary for the adapters and switches in the converged network to support baby jumbo frames
277:42 in order to prevent the segmentation of the FC frames.
277:50 During the encapsulation process, FCoE wraps the complete Fibre Channel frame without modifying it. The encapsulation provides one-to-one mapping of the Fibre Channel frames with the Ethernet frames, which means that a Fibre Channel frame is not segmented, nor are multiple frames placed inside a single Ethernet frame.
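The one-to-one mapping can be illustrated as follows. 0x8906 is the registered FCoE EtherType, and roughly 2112 bytes is the maximum FC data field (hence the baby jumbo frames above); the MAC placeholders and frame layout are simplified for illustration.

```python
# Sketch of FCoE's one-to-one mapping: each FC frame is wrapped,
# unmodified, in exactly one Ethernet frame.
FCOE_ETHERTYPE = 0x8906  # registered EtherType for FCoE

def fcoe_encapsulate(fc_frame: bytes) -> bytes:
    # One FC frame -> one Ethernet frame; the FC frame is never segmented.
    ethertype = FCOE_ETHERTYPE.to_bytes(2, "big")
    return b"DST6__" + b"SRC6__" + ethertype + fc_frame  # 6-byte MAC stand-ins

fc_frame = b"\x00" * 2112  # FC frames exceed the 1500-byte standard Ethernet MTU
eth_frame = fcoe_encapsulate(fc_frame)
assert eth_frame.endswith(fc_frame)     # FC frame carried without modification
assert eth_frame[12:14] == b"\x89\x06"  # FCoE EtherType field
```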
278:16 FCoE ensures that the construct of a Fibre Channel frame is preserved to provide for the following: an FCoE frame can be seamlessly integrated with the existing FC environments, and an FCoE frame can use Fibre Channel technologies such as zoning, distributed name server, registered state change notification, and management tools.
278:37 change notification and management tools now let's look at the fcoe
278:40 tools now let's look at the fcoe protocol stack
278:42 protocol stack an fcoe protocol stack is constructed by
278:46 an fcoe protocol stack is constructed by replacing the lower layers fc0 and fc1
278:50 replacing the lower layers fc0 and fc1 of the fiber channel protocol stack with
278:52 of the fiber channel protocol stack with the physical layer and the data link
278:54 the physical layer and the data link layer of the lossless
279:02 ethernet however the fcoe protocol stack retains the upper layers of the FC
279:04 retains the upper layers of the FC protocol stack that is fc4 fc3 and fc2
279:11 protocol stack that is fc4 fc3 and fc2 as you can see in the diagram the fcoe
279:14 as you can see in the diagram the fcoe protocol lies in between the FC layers
279:16 protocol lies in between the FC layers and the ethernet
279:23 Now let's look at the FCoE infrastructure. The FCoE infrastructure consists of three components: a converged network adapter, lossless Ethernet links, and an FCoE switch.
279:36 In a traditional data center approach, the server has a NIC for network traffic and a Fibre Channel HBA for storage traffic, as shown in the diagram. With FCoE, these two adapters in the server are replaced with a single converged network adapter. A single CNA connects the server to an FCoE switch, which in turn provides connectivity to LAN and SAN.
280:10 A converged network adapter, or CNA, is a network adapter that can function as a standard Ethernet NIC as well as a Fibre Channel HBA; it supports both protocols.
280:21 When a CNA is installed in a server, the server operating system will not see any FCoE device, but it will see a NIC entity and an HBA entity, so that the CNA can function as a storage adapter as well as a LAN adapter. The CNA has an FCoE entity that completes the encapsulation before sending it over the Converged Enhanced Ethernet, or CEE, link. It also completes the decapsulation of the Ethernet frames it receives from the CEE link.
280:58 when it receives from the CE link an fcoe switch is a network device
281:02 link an fcoe switch is a network device that connects both FC San and ethernet
281:05 that connects both FC San and ethernet Lan
281:11 environments it contains an fcoe entity that that extracts the FC payload from
281:13 that that extracts the FC payload from the ethernet frames and forwards it to
281:15 the ethernet frames and forwards it to the FC storage
281:24 devices it also encapsulates FC frames that need to be transmitted over
281:25 that need to be transmitted over ethernet
281:32 links fcoe switches inspect the ether type of the frames that they receive
281:34 type of the frames that they receive from the
281:35 from the servers if the ether type of the frame
281:38 servers if the ether type of the frame is fcoe then the switch recognizes that
281:40 is fcoe then the switch recognizes that it can contains the FC payload and
281:43 it can contains the FC payload and forwards it to the fcoe entity within
281:46 forwards it to the fcoe entity within the switch as you can see in the
281:49 the switch as you can see in the diagram The fcoe Entity extracts the FC
281:53 diagram The fcoe Entity extracts the FC payload and forwards it to the FC
281:56 payload and forwards it to the FC Port if the ether type of the frame is
281:58 Port if the ether type of the frame is not fcoe then as shown in the diagram
282:02 not fcoe then as shown in the diagram the switch handles the traffic as
282:04 the switch handles the traffic as Network traffic and forwards it over the
282:06 Network traffic and forwards it over the ethernet
282:08 ethernet ports since fcoe switches support both
282:11 ports since fcoe switches support both storage traffic and network traffic they
282:14 storage traffic and network traffic they can seamlessly integrate into FC sand
282:17 can seamlessly integrate into FC sand and ethernet
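The switch's forwarding decision described above boils down to one check on the EtherType field of each received frame. A hedged sketch (0x8906 is the real FCoE EtherType; the port-name strings are invented for illustration):

```python
FCOE_ETHERTYPE = 0x8906  # real EtherType for FCoE traffic

def classify(eth_frame: bytes) -> str:
    """Decide whether a received frame goes to the FC side or the LAN side."""
    # Bytes 12-13 of an Ethernet II frame hold the EtherType
    ethertype = int.from_bytes(eth_frame[12:14], "big")
    if ethertype == FCOE_ETHERTYPE:
        # The FCoE entity strips the Ethernet wrapper and forwards the
        # FC payload out an FC port toward the storage devices
        return "fc-port"
    # Anything else is ordinary network traffic, sent out Ethernet ports
    return "ethernet-port"

fcoe_frame = b"\xff" * 12 + (0x8906).to_bytes(2, "big") + b"fc payload"
ip_frame   = b"\xff" * 12 + (0x0800).to_bytes(2, "big") + b"ip packet"
assert classify(fcoe_frame) == "fc-port"
assert classify(ip_frame) == "ethernet-port"
```

This is why a single FCoE switch can stand in for both a LAN switch and an FC switch: the demultiplexing happens per frame, at the EtherType.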
282:23 FCoE frames are transmitted over an Ethernet that does not drop frames in the event of network congestion
282:27 such an Ethernet is called a lossless Ethernet
282:37 with FCoE any lost frames can be recovered only at the SCSI layer because it has no Transmission Control Protocol or TCP
282:47 fortunately a set of enhancements is available to Ethernet to support this lossless behavior
282:52 this is called Data Center Bridging or DCB
282:56 DCB is also referred to as Converged Enhanced Ethernet
283:02 the IEEE working group that provided these extensions to enable the enhanced Ethernet is the DCB task group
283:12 we will look at three such enhancements that provide a lossless transport for FCoE frames over the Ethernet
283:20 these are Priority Flow Control or PFC Enhanced Transmission Selection or ETS and Data Center Bridging Exchange or DCBX
283:33 now let's look at Priority Flow Control or PFC
283:38 Priority Flow Control is a link-level protocol that allows high priority traffic to flow while temporarily stopping the low priority traffic when network congestion occurs
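PFC extends the classic Ethernet PAUSE mechanism so that each of the eight 802.1p priority levels can be paused independently instead of halting the whole link. A toy model of that per-priority behavior (the class and queue names are invented for illustration, not part of any standard API):

```python
class PfcLink:
    """Toy model: 8 priority queues, each pausable independently."""

    def __init__(self):
        self.queues = {prio: [] for prio in range(8)}  # 802.1p priorities 0-7
        self.paused = set()                            # priorities told to pause

    def receive_pause(self, priority: int) -> None:
        # A PFC frame from the congested peer pauses only this one priority
        self.paused.add(priority)

    def transmit(self):
        """Send one frame from the highest non-paused, non-empty priority."""
        for prio in sorted(self.queues, reverse=True):
            if prio not in self.paused and self.queues[prio]:
                return self.queues[prio].pop(0)
        return None  # everything eligible is empty or paused

link = PfcLink()
link.queues[3].append("fcoe frame")    # storage traffic on priority 3
link.queues[0].append("lan frame")     # ordinary LAN traffic on priority 0
link.receive_pause(0)                  # congestion: peer pauses priority 0 only
assert link.transmit() == "fcoe frame" # storage traffic keeps flowing
assert link.transmit() is None         # the paused LAN traffic waits
```

The point of the model is the last two lines: pausing one priority leaves the FCoE traffic on another priority unaffected, which is exactly how PFC keeps the storage class lossless.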
283:51 now let's look at Enhanced Transmission Selection or ETS
283:56 Enhanced Transmission Selection is a link layer protocol that controls bandwidth management by ensuring that one kind of traffic doesn't consume too much of the overall bandwidth
284:09 ETS can be used to allocate bandwidth by prioritizing the traffic
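ETS's bandwidth management can be pictured as assigning each traffic class a guaranteed percentage share of the link. A minimal sketch, assuming a 10 GbE CEE link; the 60/30/10 split is an illustrative example, not a value from any standard:

```python
def ets_allocate(link_gbps: float, shares: dict) -> dict:
    """Split link bandwidth among traffic classes by ETS percentage shares."""
    assert sum(shares.values()) == 100, "ETS shares must total 100%"
    return {tc: link_gbps * pct / 100 for tc, pct in shares.items()}

# Example: one converged link carrying storage, LAN, and management traffic
alloc = ets_allocate(10.0, {"fcoe": 60, "lan": 30, "management": 10})
assert alloc["fcoe"] == 6.0  # FCoE is guaranteed 6 Gbps of the 10 GbE link
assert alloc["lan"] == 3.0
```

In real ETS these shares are minimum guarantees: a class may borrow unused bandwidth from the others, but no class can starve the rest below its configured share.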
284:16 bandwidth by prioritizing the traffic now let's look at data center
284:18 traffic now let's look at data center bridging exchange or
284:21 bridging exchange or dcbx data center bridging exchange is a
284:24 dcbx data center bridging exchange is a discovery and configuration protocol
284:27 discovery and configuration protocol that allows for the discovery and
284:28 that allows for the discovery and configuration of fiber channel devices
284:31 configuration of fiber channel devices on the converged
284:33 on the converged Network it is an extension of Link layer
284:36 Network it is an extension of Link layer Discovery protocol or
284:39 Discovery protocol or llddp now now let's talk about fcoe
284:43 llddp now now let's talk about fcoe deployments fcoe deployments are seen in
284:46 deployments fcoe deployments are seen in three
284:47 three areas IO link
284:49 areas IO link consolidation topof rack deployments and
284:53 consolidation topof rack deployments and end of road
284:55 end of road deployments we will first look at IO
284:58 deployments we will first look at IO link
284:59 link consolidation server IO link
285:01 consolidation server IO link consolidation occurs because of server
285:05 consolidation occurs because of server virtualization the io link consolidation
285:08 virtualization the io link consolidation results in higher IO needs because of
285:11 results in higher IO needs because of the applications in multiple virtual
285:14 the applications in multiple virtual machines CNAs with 10 GBE throughputs
285:18 machines CNAs with 10 GBE throughputs can be used to satisfy the higher IO
285:20 can be used to satisfy the higher IO needs that arise from IO link
285:24 needs that arise from IO link consolidation CNAs with 10 GBE
285:27 consolidation CNAs with 10 GBE throughputs also allow for a
285:28 throughputs also allow for a consolidation of slower ethernet and FC
285:31 consolidation of slower ethernet and FC connections onto a faster
285:35 connections onto a faster connection next we will look at top of
285:37 connection next we will look at top of rack
285:38 rack deployments a rack is a standardized
285:41 deployments a rack is a standardized frame used in data centers to mount
285:43 frame used in data centers to mount multiple Computing
285:46 multiple Computing devices in top of rack deployments the
285:49 devices in top of rack deployments the switches are placed on the top shelves
285:51 switches are placed on the top shelves of the rack while the physical servers
285:53 of the rack while the physical servers that connect to the switches occupy the
285:55 that connect to the switches occupy the shelves beneath
286:01 them most server racks and data centers connect to lands and Sands using two
286:04 connect to lands and Sands using two redundant ethernet switches and two
286:06 redundant ethernet switches and two redundant fiber channel
286:12 switches in such environments a single pair of fcoe switches can simultaneously
286:16 pair of fcoe switches can simultaneously replace all the ethernet switches and
286:18 replace all the ethernet switches and fiber channel switches in the rack that
286:20 fiber channel switches in the rack that connects to lands and
286:23 connects to lands and Sands an fcoe switch eliminates the need
286:27 Sands an fcoe switch eliminates the need for separate switches and cables for LS
286:29 for separate switches and cables for LS and Sands because both Lan and sand
286:32 and Sands because both Lan and sand traffic can travel over a CE
286:36 traffic can travel over a CE link this results in less rack space
286:39 link this results in less rack space usage and simplified C in throughout the
286:41 usage and simplified C in throughout the data
286:43 since we touched on cabling we need to mention that copper twinaxial cabling is an available option for FCoE solutions over 10 Gbit Ethernet
286:55 it uses an SFP+ interface for the copper connection
287:00 it consumes low power and is low cost
287:05 copper twinaxial cabling supports short distances less than 10 m
287:12 it is ideal for server to top-of-rack or end-of-row switch environments
287:17 end of row switch environments next we will look at end of
287:20 environments next we will look at end of row
287:21 row deployments in data centers the server
287:24 deployments in data centers the server racks are arranged in
287:26 racks are arranged in rows in end of row deployments a common
287:29 rows in end of row deployments a common switch is placed at the end of the row
287:32 switch is placed at the end of the row and the servers in the individual racks
287:34 and the servers in the individual racks of a row are connected to
287:36 of a row are connected to it there is no need for topof rack
287:39 it there is no need for topof rack switches in this kind of deployment
287:42 switches in this kind of deployment the number of fcoe switches that
287:45 the number of fcoe switches that participate in end of row deployments is
287:47 participate in end of row deployments is fewer than topof rack deployments but
287:50 fewer than topof rack deployments but they provide High availability with no
287:53 they provide High availability with no single point of failure since they have
287:55 single point of failure since they have redundant
287:57 and that brings us to the end of this lesson
288:00 let's summarize what you have learned in this lesson
288:05 in this lesson you learned about Fibre Channel over Ethernet or FCoE
288:09 we started by looking at what Fibre Channel over Ethernet or FCoE is and then we looked at the FCoE encapsulation
288:19 next we looked at the FCoE protocol stack and then we looked at the FCoE infrastructure
288:26 we also looked at the traditional setup in a data center and then we looked at the FCoE setup
288:33 while talking about the FCoE setup we looked at the converged network adapter or CNA and the FCoE switch
288:42 next we looked at what lossless Ethernet is and then we looked at what Data Center Bridging or DCB is
288:50 while talking about DCB we also looked at what the DCB task group is
288:57 we looked at the three Ethernet enhancements these are Priority Flow Control or PFC Enhanced Transmission Selection or ETS and Data Center Bridging Exchange or DCBX
289:12 lastly we looked at the three common FCoE deployments these are IO link consolidation top-of-rack deployments and end-of-row deployments
289:25 in the next lesson you will learn about environmental concerns and physical safety in data centers
289:31 thank you for watching
289:58 hello and welcome to unit three data center operations
290:00 in this lesson you will learn about the environmental concerns and physical safety in data centers
290:07 we're going to start by looking at what a data center is and then we will look at why there's a need for data centers
290:14 we will also look at the data center environment
290:18 next we will look at the heating ventilation and cooling or HVAC system of the data center and then we will look at how HVAC works in a data center
290:29 while talking about HVAC we will look at the typical hot aisle cold aisle conditions in a data center
290:37 we will also look at rack mount servers and then we will talk about rack loading
290:43 we will look at the power distribution in data centers and next we will look at the fire risk in data centers
290:51 while talking about the fire risk we will look at the impact of fire and then we will look at fire suppression agents
290:57 we will also look at the types of fire suppression agents which are wet pipe dry pipe and gaseous agents
291:07 next we will look at the lifting techniques and then we will look at what an anti-static device is
291:14 lastly we will talk about rack stabilization
291:18 now let's look at what a data center is
291:20 according to Wikipedia a data center is a facility used to house computer systems and associated components such as telecommunications and storage systems
291:33 it generally includes redundant or backup power supplies redundant data communications connections environmental controls for example air conditioning and fire suppression and various security devices
291:48 but why do we need data centers
291:52 data centers help organizations centralize the management of computing resources
291:58 they reduce the total cost of ownership by consolidating power cooling and maintenance required for running servers in a single place
292:08 now let's talk about the data center environment
292:12 center environment racks and rack mounted
292:14 environment racks and rack mounted servers are used in data
292:16 servers are used in data centers the components of a data center
292:19 centers the components of a data center such as servers storage arrays power
292:22 such as servers storage arrays power distribution units switches Etc generate
292:25 distribution units switches Etc generate a lot of
292:27 a lot of heat heat should be removed because it
292:29 heat heat should be removed because it affects the electrical equipment
292:31 affects the electrical equipment resulting in equipment malfunction or
292:35 resulting in equipment malfunction or failure since heat affects the
292:37 failure since heat affects the reliability of the electrical equipment
292:39 reliability of the electrical equipment we need to keep the data center cool
292:42 we need to keep the data center cool this requires exhausting the hot air
292:44 this requires exhausting the hot air from the machines and moving in cold air
292:47 from the machines and moving in cold air just like in a PC
292:49 just like in a PC Chassis hot air and cold air within the
292:52 Chassis hot air and cold air within the data center should not be mixed and for
292:55 data center should not be mixed and for this reason we have the hot Isle cold
292:57 this reason we have the hot Isle cold Isle
292:59 Isle Arrangement if the cold air mixes with
293:01 Arrangement if the cold air mixes with the hot air without going through the
293:03 the hot air without going through the equipment then it becomes
293:06 equipment then it becomes useless now let's talk about HVAC HVAC
293:09 useless now let's talk about HVAC HVAC stands for heating ventilation and
293:13 stands for heating ventilation and cooling an HVAC system provides Optimum
293:16 cooling an HVAC system provides Optimum temperature and indoor air quality for
293:19 temperature and indoor air quality for Data Center
293:21 Data Center Performance the HVAC system not only
293:23 Performance the HVAC system not only keeps things cool and keeps things humid
293:26 keeps things cool and keeps things humid to an extent but it also removes
293:28 to an extent but it also removes contaminants from the
293:29 contaminants from the air let's see how HVAC
293:33 air let's see how HVAC works as you can see in the diagram cold
293:36 works as you can see in the diagram cold air is pumped from the HVAC system into
293:38 air is pumped from the HVAC system into the cold aisle as an input put for the
293:41 the cold aisle as an input put for the servers servers pull in cold air from
293:43 servers servers pull in cold air from the front to cool themselves and they
293:46 the front to cool themselves and they exhaust hot air which goes into the hot
293:48 exhaust hot air which goes into the hot aisle the AC duct carries the hot air
293:51 aisle the AC duct carries the hot air from the hot aisle to the HVAC to cool
293:54 from the hot aisle to the HVAC to cool it again or exhaust it elsewhere now
293:57 it again or exhaust it elsewhere now let's look at the typical hot Isle cold
293:59 let's look at the typical hot Isle cold Isle
294:00 Isle temperatures the cold aisle temperature
294:03 temperatures the cold aisle temperature is between 55° and 78°
294:07 is between 55° and 78° F and the hot aisle temperature is
294:09 F and the hot aisle temperature is between 70 3° and 96°
294:13 between 70 3° and 96° F the amount of heat carried by the
294:15 F the amount of heat carried by the stream of air exiting the heat load
294:18 stream of air exiting the heat load should be 15° to 20°
294:22 should be 15° to 20° F now let's talk about rack mount
294:25 F now let's talk about rack mount servers racks and data centers contain
294:28 servers racks and data centers contain servers rack mount servers have a
294:31 servers rack mount servers have a different form factor than desktop
294:33 different form factor than desktop servers when it comes to Rack loading
294:36 servers when it comes to Rack loading large rack mount servers and Equipment
294:38 large rack mount servers and Equipment are installed at the bottom of a rack to
294:40 are installed at the bottom of a rack to ensure the rack doesn't
294:42 ensure the rack doesn't fall rack loading should not exceed the
294:45 fall rack loading should not exceed the weight rated capacity of the raised
294:47 weight rated capacity of the raised floor to ensure that the raised floor
294:49 floor to ensure that the raised floor doesn't collapse because of
294:52 doesn't collapse because of overweight now let's talk about power
294:55 overweight now let's talk about power distribution in data centers data
294:57 distribution in data centers data centers are usually connected to
294:59 centers are usually connected to multiple power grids for redundancy if
295:02 multiple power grids for redundancy if power on one grid is lost a data center
295:05 power on one grid is lost a data center will still continue to operate
295:08 will still continue to operate normally for devices we with redundant
295:11 normally for devices we with redundant power supplies power comes from separate
295:13 power supplies power comes from separate circuits providing
295:19 redundancy power requirements of a data center are determined by taking into
295:21 center are determined by taking into account the power requirements of all
295:23 account the power requirements of all the equipment and factoring in future
295:27 the equipment and factoring in future growth all data center equipment is
295:29 growth all data center equipment is grounded and independent of other
295:31 grounded and independent of other building
295:33 building grounds now let's talk about fire risk
295:36 grounds now let's talk about fire risk in data
295:37 in data centers the demand for power increased p
295:40 centers the demand for power increased p with an increase in the amount of
295:42 with an increase in the amount of equipment many types of equipment with
295:44 equipment many types of equipment with increased power consumption when
295:46 increased power consumption when confined in small spaces are susceptible
295:49 confined in small spaces are susceptible to fire
295:50 to fire accidents fire in a data center can have
295:53 accidents fire in a data center can have a catastrophic impact on business
295:55 a catastrophic impact on business operations Financial loss to the
295:58 operations Financial loss to the business due to fire can be staggering
296:01 business due to fire can be staggering the downtime of business operations due
296:03 the downtime of business operations due to fire can be days or
296:05 to fire can be days or weeks data centers have fire detection
296:08 weeks data centers have fire detection systems that detect fires
296:11 systems that detect fires portable fire extinguishers are placed
296:13 portable fire extinguishers are placed at critical locations in the data
296:15 at critical locations in the data center data centers have emergency power
296:18 center data centers have emergency power off switches which are big red buttons
296:21 off switches which are big red buttons on the wall that cut off all
296:23 on the wall that cut off all power emergency powerof switches should
296:26 power emergency powerof switches should be used in emergencies because servers
296:29 be used in emergencies because servers are not friendly to hard power
296:32 are not friendly to hard power shutdowns in the event of a fire fire
296:34 shutdowns in the event of a fire fire suppression agents put out the actual
296:36 suppression agents put out the actual fire selecting the right fire
296:39 fire selecting the right fire suppression agent is critical critical
296:40 suppression agent is critical critical because it can either allow quick
296:42 because it can either allow quick recovery or it can result in weeks of
296:45 recovery or it can result in weeks of recovery there are three types of fire
296:47 recovery there are three types of fire suppression agents wet pipe dry pipe and
296:51 suppression agents wet pipe dry pipe and gaseous agents a wet pipe sprinkles
296:54 gaseous agents a wet pipe sprinkles plain water in the presence of smoke or
296:57 plain water in the presence of smoke or heat the biggest disadvantage of a wet
297:00 heat the biggest disadvantage of a wet pipe is that it drenches servers storage
297:03 pipe is that it drenches servers storage devices and other equipment in water and
297:06 devices and other equipment in water and water can catastrophically damage the
297:09 water can catastrophically damage the equipment putting out a fire using water
297:11 equipment putting out a fire using water can cause the data center to be down for
297:14 can cause the data center to be down for weeks a dry pipe works just like a wet
297:17 weeks a dry pipe works just like a wet pipe but the water is not kept in the
297:19 pipe but the water is not kept in the pipes this is because a water pipe can
297:22 pipes this is because a water pipe can accumulate moisture due to condensation
297:24 accumulate moisture due to condensation and drip on the
297:26 and drip on the equipment since a dry pipe is a
297:28 equipment since a dry pipe is a water-based system the disadvantage is
297:31 water-based system the disadvantage is the same as wet
297:32 the same as wet pipe Gus agents provide immediate fire
297:36 pipe Gus agents provide immediate fire suppression by denying heat and oxygen
297:38 suppression by denying heat and oxygen to a data center fire
297:41 to a data center fire clean agents such as fm200 remove heat
297:44 clean agents such as fm200 remove heat from a fire and inert gases such as
297:47 from a fire and inert gases such as carbon dioxide deprive a fire of
297:51 carbon dioxide deprive a fire of oxygen gaseous agents provide immediate
297:54 oxygen gaseous agents provide immediate recovery of business
297:55 recovery of business operations however they are expensive
297:58 operations however they are expensive and may require training the staff to
298:00 and may require training the staff to handle a
298:01 now let's talk about physical safety techniques
298:04 we'll first talk about proper lifting techniques and weight considerations
298:10 the easiest way to injure ourselves is through improper lifting of heavy equipment
298:15 it happens constantly and almost everyone lifts improperly
298:20 the first thing is don't attempt to lift anything that is more than a quarter of your weight
298:26 for example if the equipment is 50 lb and you weigh 175 lb then don't lift it
298:34 you need to be at least 200 lb to lift something that is 50 lb
298:41 if you're not then have someone help you
298:43 you also want to lift from your legs with a straight back
298:46 this is difficult to describe but easy to do
298:49 generally speaking a lot of people will lift with their back bent over not bending their legs just their torso putting so much stress on their back
299:00 your back is not meant to take that much stress so bend your legs and stand up from your legs bending your knees and straightening them
299:08 that way the bulk of the weight is going down into your quads into your hamstrings and into your calf muscles rather than into your lower back
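The quarter-of-your-weight rule and the 50 lb / 200 lb example above translate directly into a one-line check:

```python
def safe_to_lift(equipment_lbs: float, body_weight_lbs: float) -> bool:
    """Don't lift anything heavier than a quarter of your body weight."""
    return equipment_lbs <= body_weight_lbs / 4

# The lesson's example: 50 lb of equipment needs at least a 200 lb lifter
assert not safe_to_lift(50, 175)  # a 175 lb person should get help
assert safe_to_lift(50, 200)      # a 200 lb person is within the rule
```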
299:18 we know that static electricity damages hardware components so we use anti-static devices to help us neutralize static electricity preventing damage to devices such as hard disk drives
299:32 examples of anti-static devices are anti-static bags anti-static mats and anti-static straps
299:43 last but not least we will talk about rack stabilization
299:44 rack stabilization is critical to the technician's physical safety as well as for the stability of the equipment
299:51 racks that are not stable are likely to collapse at any time
299:54 for rack stability large servers and equipment must be installed at the bottom of the rack
300:01 equipment must be installed at the bottom of the rack and that brings us to
300:04 bottom of the rack and that brings us to the end of this lesson let's summarize
300:06 the end of this lesson let's summarize what we've learned in this
300:08 what we've learned in this lesson in this lesson you learned about
300:10 lesson in this lesson you learned about the environmental concerns and physical
300:12 the environmental concerns and physical safety in data
300:14 safety in data centers we started by looking at what a
300:16 centers we started by looking at what a data center is and then we looked at why
300:19 data center is and then we looked at why there's a need for a data center we also
300:21 there's a need for a data center we also looked at the data center
300:23 looked at the data center environment next we looked at the
300:25 environment next we looked at the heating ventilation and cooling system
300:27 heating ventilation and cooling system or HVAC system of the data center and
300:31 or HVAC system of the data center and then we looked at how HVAC Works in a
300:33 then we looked at how HVAC Works in a data
300:34 data center while talking about HVAC we
300:37 center while talking about HVAC we looked at the typical hot aisle cold AIS
300:39 looked at the typical hot aisle cold AIS condition in a data
300:41 condition in a data center we also looked at rack mount
300:44 center we also looked at rack mount servers and then we talked about rack
300:46 servers and then we talked about rack loading we also looked at power
300:48 loading we also looked at power distribution in data
300:50 distribution in data centers next we looked at fire risk in
300:53 centers next we looked at fire risk in data centers while talking about fire
300:56 data centers while talking about fire risk we looked at the impact of fire and
300:58 risk we looked at the impact of fire and then we looked at the fire suppression
301:01 then we looked at the fire suppression agents we also looked at the types of
301:03 agents we also looked at the types of fire suppression agents these are wet
301:06 fire suppression agents these are wet pipe dry pipe and gaseous agents
301:10 pipe dry pipe and gaseous agents next we looked at lifting techniques and
301:13 next we looked at lifting techniques and then we looked at what an anti-static
301:15 then we looked at what an anti-static device is lastly we talked about rack
301:19 device is lastly we talked about rack stabilization in the next lesson you
301:22 stabilization in the next lesson you will learn about replication thank you
301:24 will learn about replication thank you for watching
301:52 Hello, and welcome to Unit 1, Replication. In this lesson, you will learn about replication.
301:56 We're going to start by looking at what replication is. Since replication is one of the methods employed to ensure business continuity, we will touch upon business continuity. We will also discuss the purpose of replication.
302:11 Next, we will look at the characteristics of a replica: these are recoverability and restartability.
302:19 We will look at the two important aspects of replication planning: these are recovery point objective, or RPO, and recovery time objective, or RTO.
302:31 We will then talk in detail about the two categories of replication: these are local replication and remote replication.
302:41 Lastly, we will look at site redundancy. Now, let's look at what replication is.
302:50 Replication is the process of making an exact copy of data, either locally or remotely. It's one of the methods employed to ensure business continuity.
303:03 Business continuity is the process and procedures for ensuring that an organization's critical business functions will either continue to operate in the event of a disaster or recover in a short time after a disaster has struck.
303:24 It is critical that data be continuously available for the smooth functioning of a business, but its availability can be disrupted by threats such as natural disasters, unplanned IT outages, cyber attacks, adverse weather, security breaches, and so on.
303:46 The main purpose of replication is to provide a replica, an exact copy of data, that is suitable for recovering the data in the event of data loss.
303:55 Now let's look at the important characteristics of a replica. A replica should offer both of the following: recoverability and restartability.
304:06 Recoverability is the ability to restore the data from the replica to the point of failure in the event of data loss or corruption.
304:14 Restartability means that data is in a consistent state and the application that has had its data replicated is completely aware of the replicated data.
304:30 There are two important things that need to be considered in replication planning: these are recovery point objective, or RPO, and recovery time objective, or RTO.
304:43 Now let's talk about recovery point objective, or RPO. Recovery point objective is the maximum tolerable time period before an outage during which loss of data is considered acceptable. It also specifies the time interval between two consecutive backups.
305:04 We will explain this with the help of an example. Let's say our RPO is 10 minutes, as per the service level agreement with the business. This means that the acceptable data loss can be a maximum of 10 minutes prior to an outage.
305:21 For example, if we had taken a complete replication of our production data at 9:00 a.m. and the outage occurred at 9:09 a.m., as shown in the diagram, then the production data written for the 9 minutes between 9:00 a.m. and 9:09 a.m. will be lost and cannot be recovered from the replica.
305:43 The data lost during this period is considered acceptable because of the 10-minute RPO, and it has to be recovered through other means.
305:53 It is always better to have a smaller RPO, because the smaller the RPO, the smaller the difference between the replica and the production data.
306:03 Now let's talk about recovery time objective, or RTO. Recovery time objective is the maximum tolerable time period within which a business process must be restored to an operational state after an outage.
306:19 Going back to our example, let's say our RTO is 2 hours. If the outage occurs at 9:09 a.m., then the business will be expected to be up and running before 11:09 a.m., with the service as good as it was at 9:00 a.m., before the outage occurred.
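The RPO and RTO figures from the example can be checked with a short calculation. This is a minimal sketch using the times and thresholds quoted above; the date itself is arbitrary:

```python
from datetime import datetime, timedelta

# Values from the example: last replica at 9:00 a.m., outage at 9:09 a.m.
last_replica = datetime(2024, 1, 1, 9, 0)
outage = datetime(2024, 1, 1, 9, 9)

rpo = timedelta(minutes=10)   # maximum tolerable data loss
rto = timedelta(hours=2)      # maximum tolerable downtime

# Data written since the last replica is lost and cannot be
# recovered from the replica.
data_loss_window = outage - last_replica
print(data_loss_window)            # 0:09:00
print(data_loss_window <= rpo)     # True: the loss is within the agreed RPO

# The business must be operational again no later than outage + RTO.
recovery_deadline = outage + rto
print(recovery_deadline.time())    # 11:09:00
```

If the outage had occurred at 9:11 a.m. instead, the 11-minute loss window would exceed the 10-minute RPO and the service level agreement would be violated.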
306:40 There are two categories of replication: local replication and remote replication. Local replication refers to replication that is done within a storage array or a data center. Remote replication refers to replication that is done at a remote site.
306:58 Now let's talk about local replication. Local replication concerns snapshots and clones.
307:07 The purpose of snapshot technology is to capture a copy of the data on a disk volume at a specific moment in time without affecting business operations. The created data copy is referred to as a snapshot.
307:26 Snapshots allow access to their contents regardless of the modifications done to the original data.
307:30 The primary purpose of a snapshot is to provide the ability to go back to a certain point in time to recover data. This feature helps to instantly restore business operations, for example, restoring the business operation to a time just before the data loss or data corruption occurred.
307:52 There are two ways in which a snapshot can be created: full snapshot and space-efficient snapshot.
308:01 Now let's talk about full snapshots. A full snapshot creates a full copy of the entire disk volume, and for this reason it is also called a clone. A full snapshot, or clone, requires as much space as the disk volume itself.
308:19 For example, if we are going to take two full snapshots of a disk volume, then we would need 200% of the capacity of the disk volume.
308:29 Though it is not an efficient replication model, it is still valuable for disaster recovery.
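The capacity arithmetic behind the 200% figure is simple multiplication; a small illustrative helper (the 500 GB volume size is a made-up example) makes it explicit:

```python
def full_snapshot_capacity_gb(volume_gb: float, num_snapshots: int) -> float:
    """A full snapshot (clone) needs as much space as the volume itself,
    so n full snapshots need n times the volume's capacity."""
    return volume_gb * num_snapshots

# Two full snapshots of a 500 GB volume need 1000 GB of extra capacity,
# i.e. 200% of the volume's capacity, as in the example above.
print(full_snapshot_capacity_gb(500, 2))   # 1000
```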
308:38 Recovery now let's talk about space efficient snapshot
308:40 efficient snapshot shot a space efficient snapshot creates
308:43 shot a space efficient snapshot creates the snapshot based on the changes since
308:45 the snapshot based on the changes since the last
308:46 the last snapshot as the name suggests space
308:49 snapshot as the name suggests space efficient snapshots require less space
308:52 efficient snapshots require less space allowing administrators to take frequent
308:54 allowing administrators to take frequent snapshots of the disc
308:57 snapshots of the disc volume regardless of the type of
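The idea of storing only the changes can be shown with a toy model. The block numbers and contents here are invented for illustration; a real array tracks changes at the block-map level in firmware:

```python
# Toy model: a volume as a dict of block-number -> block contents.
volume = {0: "boot", 1: "data-v1", 2: "logs-v1"}

# Take a baseline, then modify the volume.
baseline = dict(volume)
volume[1] = "data-v2"     # block 1 is overwritten
volume[3] = "new-file"    # block 3 is newly written

# A space-efficient snapshot stores only the changed or new blocks,
# not the whole volume.
delta = {blk: val for blk, val in volume.items()
         if baseline.get(blk) != val}
print(delta)   # {1: 'data-v2', 3: 'new-file'}

# Restoring the current state = baseline overlaid with the delta.
restored = {**baseline, **delta}
print(restored == volume)   # True
```

Because the delta holds two blocks instead of four, frequent snapshots stay cheap, which is exactly the space saving described above.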
308:59 Regardless of the type of snapshot taken, it is important to note that a snapshot will be usable only if it is in a consistent state. A snapshot can be in a consistent state only when the application is aware of it.
309:13 Let's look at the key benefits of snapshot technology.
309:17 Snapshots allow creating multiple recovery points that help in reducing the amount of data loss after a disaster, and thus supplement daily backups without affecting the production systems.
309:31 Snapshots provide instantaneous access to data, and thus have an RPO of seconds, which makes them suitable for supplementing backup technologies.
309:44 Snapshots allow creating data copies that can be used for business continuity, rapid application development, and regulatory requirements.
309:57 It should be noted that snapshots do not replace traditional backups but rather supplement them, because snapshots can help in the quick recovery of data at a specific point in time.
310:09 Now let's talk about remote replication. In remote replication, data is replicated from the production site to the remote site.
310:18 Since data is replicated from the production site to the remote site, the production site is referred to as the source and the remote site is referred to as the target.
310:30 Remote replication solutions are broadly categorized into the following: synchronous replication and asynchronous replication.
310:44 In synchronous replication, whenever the server writes data to the storage system in the production site, it is concurrently written to the storage system in the remote site. As a result, synchronous replication guarantees zero data loss.
311:01 Let's demonstrate synchronous replication with the help of an oversimplified example. In our example, as shown in the diagram, the production site has a server and a storage system, and our remote site has a storage system.
311:17 When the server writes data to the storage system at the production site, this data is replicated from the source storage system to the target storage system at the remote site.
311:29 Once the data is written to the target storage system, an acknowledgement is issued to the source storage system, which in turn issues an acknowledgement to the server, indicating that the write operation is complete.
311:46 Since the server's write operation is committed to both the source storage system and the target storage system, synchronous replication provides zero RPO after a disaster.
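The acknowledgement chain in this example can be sketched in a few lines of Python. The "storage systems" here are hypothetical in-memory stand-ins; a real array does this in firmware over a replication link:

```python
class StorageSystem:
    """Toy stand-in for a storage array that commits writes by block."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}

    def commit(self, block, data):
        self.blocks[block] = data
        return f"ack from {self.name}"

def synchronous_write(source, target, block, data):
    """The write completes for the server only after BOTH systems have
    committed it, which is why synchronous replication has zero RPO."""
    source.commit(block, data)
    remote_ack = target.commit(block, data)   # the server waits on this
    return f"write complete ({remote_ack})"

primary = StorageSystem("source")
remote = StorageSystem("target")
print(synchronous_write(primary, remote, 7, "payload"))
print(primary.blocks == remote.blocks)   # True: the sites can never diverge
```

The cost of that guarantee is that every server write waits for the round trip to the remote site, which is the downside discussed next.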
312:01 However, the downside of this replication is that the server's input/output operations are affected because of the time taken to replicate the data on the remote storage system.
312:11 The distance between the primary site and the remote site is dependent on the application's tolerance for latency, and is typically less than 125 miles.
312:22 In addition to that, the link between the primary site and the remote site must be able to handle peak workload bandwidth.
312:33 Now let's talk about asynchronous replication. In asynchronous replication, the server's write operation is committed to the storage system of the primary site and acknowledged to the server immediately.
312:47 Unlike synchronous replication, the write operation to the remote storage system in asynchronous replication is no longer tied to the acknowledgement sent to the server.
313:01 There is no impact on the application's response time, since the write operations are acknowledged immediately. But on the downside, the data on the remote site will lag behind that of the production site.
313:14 The RPO of asynchronous replication is nonzero. It depends on the available network bandwidth of the replication link between the primary site and the target site. RPO also depends on the workload transferred from the primary site to the target site.
313:33 While the performance is increased compared to synchronous replication, there is no guarantee that the remote site will have the current copy of the data in the event of data loss at the primary site.
313:46 The main benefit of asynchronous replication is that it allows replication over long distances, because the application's response time is not dependent on the distance between the primary site and the remote site.
314:02 Let's explain this with the help of an example. In our diagram, the production site has a server and a storage system, and our remote site has a storage system. They are all linked.
314:15 When the server writes data to the source storage system, an acknowledgement is sent immediately to the server from the storage system.
314:27 The source storage system replicates the data to the target storage system by transmitting the write data to the target storage system as it is received from the server.
314:37 If there is more write data than can be sent at once, it is buffered and sent to the remote storage system. Once the target storage system commits the data, it sends an acknowledgement to the source storage system.
314:51 In essence, asynchronous replication decouples the remote replication process from the acknowledgement sent to the server by the source storage system.
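The decoupling can be shown with a small queue: the server's write is acknowledged at once, and a buffer drains to the target later. This is a toy model of the behavior described above, not a vendor implementation:

```python
from collections import deque

source_blocks, target_blocks = {}, {}
buffer = deque()   # writes waiting to be shipped to the remote site

def async_write(block, data):
    """Commit locally, acknowledge at once, replicate later."""
    source_blocks[block] = data
    buffer.append((block, data))
    return "ack"                     # the server is not kept waiting

def drain_to_target():
    """Runs in the background; until it runs, the target lags (RPO > 0)."""
    while buffer:
        block, data = buffer.popleft()
        target_blocks[block] = data

async_write(1, "a")
async_write(2, "b")
print(target_blocks == source_blocks)   # False: the remote site lags behind
drain_to_target()
print(target_blocks == source_blocks)   # True once the link catches up
```

If the primary site were lost while writes are still sitting in the buffer, those writes would be gone, which is exactly why asynchronous replication cannot guarantee a zero RPO.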
315:01 Now let's discuss site redundancy. In earlier lessons, we saw that redundancy is achieved through redundant components. Now we will go one step further, to look at how redundancy is achieved at a higher level through redundant sites.
315:19 The redundant site is a secondary site that is ready to resume the operations of the primary site in the event of a disaster.
315:29 The redundant site ensures that business operations don't suffer when a disaster, such as an earthquake, flood, or hurricane, devastates an entire primary site.
315:41 It should be noted that under normal circumstances, the secondary site is not used for production workloads.
315:49 is not used for production workloads when configuring redundant
315:52 workloads when configuring redundant sites we can use either synchronous
315:54 sites we can use either synchronous replication or asynchronous replication
315:57 replication or asynchronous replication depending on the distances between the
315:59 depending on the distances between the primary site and the remote
316:02 primary site and the remote site we will explain this with the help
316:04 site we will explain this with the help of an example as shown in the
316:06 of an example as shown in the diagram let's say the distance between
316:08 diagram let's say the distance between between the primary site and the remote
316:10 between the primary site and the remote site S1 is less than 125
316:14 site S1 is less than 125 miles in this case we can configure
316:17 miles in this case we can configure synchronous replication between them the
316:20 synchronous replication between them the data replication between the primary
316:22 data replication between the primary site and the remote site provides a
316:24 site and the remote site provides a Disaster Recovery
316:27 Disaster Recovery Solution by this we mean if the primary
316:30 Solution by this we mean if the primary site goes down then the remote site can
316:32 site goes down then the remote site can be active and resume the operations of
316:34 be active and resume the operations of the primary site site redundancy ensures
316:38 the primary site site redundancy ensures smooth running of the business because
316:41 smooth running of the business because the remote site is now providing the
316:43 the remote site is now providing the services of the primary site even though
316:45 services of the primary site even though the primary site is
316:48 the primary site is down the downside of site redundancy in
316:51 down the downside of site redundancy in case of synchronous replication is that
316:53 case of synchronous replication is that a local disaster such as a hurricane
316:56 a local disaster such as a hurricane flood or earthquake will bring down both
316:58 flood or earthquake will bring down both sites at the same
317:00 So an alternative would be to have one more secondary site that is geographically distant from the primary site.
317:09 Asynchronous replication is configured between the primary site and the new secondary site, S2. So we have data being replicated from the primary site to two secondary sites: the data is replicated from the primary site to secondary site S1 using synchronous replication, and to secondary site S2 using asynchronous replication.
317:31 With this site redundancy setup, even if a local disaster affects both the primary site and secondary site S1, the secondary site S2 can become active and resume the primary site's operations.
317:46 Since we are dealing with asynchronous replication for the secondary site S2, the data at S2 will lag behind the primary site, and users will see a hit in the application response time because of network latency. However, a drop in performance is better than no service at all.
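The distance-based choice in this three-site design can be expressed as a simple rule of thumb. The 125-mile figure is the one quoted earlier in this lesson; real deployments size this limit from the application's actual latency tolerance, and the site distances below are made up for the example:

```python
SYNC_DISTANCE_LIMIT_MILES = 125   # typical latency-driven limit quoted above

def replication_mode(distance_miles: float) -> str:
    """Pick synchronous replication for nearby sites (zero RPO),
    asynchronous replication for geographically distant ones (nonzero RPO)."""
    if distance_miles < SYNC_DISTANCE_LIMIT_MILES:
        return "synchronous"
    return "asynchronous"

# The two secondary sites from the example:
print(replication_mode(60))    # nearby site S1  -> synchronous
print(replication_mode(900))   # distant site S2 -> asynchronous
```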
318:07 And that brings us to the end of this lesson. Let's summarize what you have learned in this lesson.
318:14 In this lesson, you learned about replication. We started by looking at what replication is. Since replication is one of the methods employed to ensure business continuity, we touched upon business continuity. We also discussed the purpose of replication.
318:31 Next, we looked at the characteristics of a replica: these are recoverability and restartability.
318:39 We looked at the two important aspects of replication planning: these are recovery point objective (RPO) and recovery time objective (RTO).
318:51 We then talked in detail about the two categories of replication: these are local replication and remote replication.
319:00 Lastly, we looked at site redundancy.
319:04 In the next lesson, you will learn the basics of backup and restore. Thank you for watching.
319:36 watching hello and welcome to unit 1 introduction to backup and Recovery
319:39 introduction to backup and Recovery in this lesson you will learn the basics
319:42 in this lesson you will learn the basics of backup and
319:43 of backup and Recovery we're going to start by looking
319:45 Recovery we're going to start by looking at what a backup is and then we'll talk
319:48 at what a backup is and then we'll talk about the purposes of
319:50 about the purposes of backup these are Disaster Recovery
319:54 backup these are Disaster Recovery operational backup and
319:56 operational backup and archive while talking about Disaster
319:59 archive while talking about Disaster Recovery we will look at tape based
320:01 Recovery we will look at tape based backup and remote
320:04 backup and remote replication next we will look at
320:06 replication next we will look at considerations for a backup strategy
320:09 considerations for a backup strategy
320:13 lastly we will talk about backup performance
320:15 now let's look at what a backup is
320:19 a backup is a copy of data that is meant for recovering the data
320:20 when it is lost or corrupted
320:23 backups are primarily done for three reasons
320:26 Disaster Recovery operational backup and archive
320:32 let's look at Disaster Recovery
320:36 we need backups to recover data in the event of a disaster
320:38 when the primary site goes down due to a disaster
320:43 the backups are used to restore services at the secondary site
320:46 different backup solutions are implemented by organizations
320:49 depending on their RPO and RTO requirements
320:55 when a tape-based backup medium is used as a backup solution
320:57 it is shipped and stored at an off-site location
321:03 when a disaster strikes the primary site
321:05 these tapes are brought to the secondary site to restore services
321:11 when organizations have strict RPO and RTO requirements
321:13 they use remote replication technology to replicate data at the secondary site
321:21 in the event of a disaster at the primary site
321:24 the services are restored at the secondary site in a relatively short time
321:33 now let's talk about operational backup
321:36 backups are not only needed in the event of a disaster
321:38 but also when the production data gets corrupted or lost
321:41 the backups used in such cases are the operational backups
321:46 which are backups of data at certain points in time
321:48 for example if a user deleted a file unknowingly
321:52 it can be restored using the operational backup
321:56 operational backups are created using incremental or differential techniques
322:02 we will discuss these techniques in an upcoming video
322:06 now let's look at archive
322:09 backups are done for the purpose of archiving and for legal compliance
322:15 content addressable storage is used as a primary solution for archival
322:21 however small and medium-sized organizations are still using traditional backups
322:25 such as optical discs and tapes
322:30 we will discuss more about content addressable storage in an upcoming video
322:35 now we will look at the considerations for a backup strategy
322:40 the primary factors that are taken into account when deciding on the backup strategy
322:44 are the RPO and the RTO
322:48 which specify the acceptable data loss and the time taken to recover the service
322:54 another factor that is taken into consideration
322:58 is the retention period for the backed up data
323:00 for example operational backups are retained for a relatively shorter period of time than archives
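The retention idea above can be sketched as a simple policy lookup; the retention periods below are invented for illustration only and are not prescribed by the course.

```python
from datetime import date, timedelta

# Illustrative retention periods (the actual values are policy decisions,
# not figures given in the course): operational backups are kept for a
# relatively short period, archives far longer (e.g. for compliance).
RETENTION = {
    "operational": timedelta(days=30),
    "archive": timedelta(days=365 * 7),
}

def is_expired(backup_type: str, created: date, today: date) -> bool:
    """A backup can be rotated out once its retention period has elapsed."""
    return today - created > RETENTION[backup_type]

# A year-old operational backup has expired; a year-old archive has not.
print(is_expired("operational", date(2023, 1, 1), date(2024, 1, 1)))  # True
print(is_expired("archive", date(2023, 1, 1), date(2024, 1, 1)))      # False
```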
323:08 a backup strategy should also take into account the time to perform the backup operations
323:12 because it should not affect the production
323:20 now we will talk about backup performance
323:22 the performance of the backup is affected by the media used for making the backup
323:26 for example if it takes 20 hours to back up with a physical tape library
323:31 the same data will take less than one hour to back up using a virtual tape library
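The 20-hour versus under-one-hour difference is, at heart, throughput arithmetic; a minimal sketch, where the data size and sustained throughput figures are assumptions for illustration only:

```python
def backup_hours(data_gb: float, throughput_mb_per_s: float) -> float:
    """Hours needed to stream data_gb of data at a sustained throughput."""
    seconds = (data_gb * 1024) / throughput_mb_per_s
    return seconds / 3600

# Assumed numbers: a physical tape library bottlenecked around 20 MB/s
# versus a disk-backed virtual tape library sustaining around 500 MB/s.
data_gb = 1400
print(round(backup_hours(data_gb, 20), 1))   # roughly 20 hours
print(round(backup_hours(data_gb, 500), 1))  # well under one hour
```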
323:42 that brings us to the end of this lesson
323:44 let's summarize what you have learned in this lesson
323:45 in this lesson you learned the basics of backup and recovery
323:50 we started by looking at what a backup is
323:53 and then we talked about the purposes of backup
323:55 these are Disaster Recovery operational backup and archive
324:02 while talking about Disaster Recovery we looked at tape-based backup and remote replication
324:10 next we looked at considerations for a backup strategy
324:12 lastly we talked about backup performance
324:16 in the next lesson you will learn the basics of backup and restore methods
324:21 thank you for watching
324:47 hello and welcome to unit 2 backup and restore methods
324:49 in this lesson you will learn about backup and restore methods
324:54 we're going to start by looking at the types of backup methods
324:59 specifically we will look at full backup incremental backup
325:03 differential backup and synthetic full backup
325:09 next we will talk about verifying backups
325:11 and then we will talk about checksums
325:14 we will also look at what application verification is
325:19 and then we will talk about data retention schemes
325:22 while talking about data retention schemes
325:27 we'll look at one such scheme called grandfather father son or GFS
325:32 now let's look at the types of backup methods
325:36 backups are primarily done to secure data
325:40 there are different types of backup methods
325:42 based on the level of detail that a backup encompasses to suit a particular business need
325:49 the major types of backup methods are full backup incremental backup
325:56 differential backup and synthetic full backup
326:04 a full backup is a backup of all the data on the production volume
326:07 a full backup is created by copying all the data to a storage device
326:09 such as a tape library where it will be stored for future use
326:15 for example if we are creating a full backup of a disk
326:19 then it would be a complete backup of all the files and folders on that disk
326:31 the advantage of a full backup is that it minimizes the downtime in the event of an outage
326:33 because the data can be restored quickly
326:42 the disadvantage of a full backup is that it consumes a lot of time
326:44 because it has to back up all the data
326:47 irrespective of the changes done to the production volume
326:49 even if there is a minor change or no change at all
326:52 a full backup will do a complete backup of all the data
326:57 consuming a significant storage capacity on the storage device
327:07 as the production data grows the time to create a full backup of that data also increases
327:15 businesses have a backup window or a time slot dedicated for taking backups
327:19 if the full backup is done over a network to a tape library
327:25 the backup window may not be sufficient to back up all the data
327:29 in addition to that a full backup carried out over the network consumes the network bandwidth
327:34 because it has to transport a lot of data from the production volume to the storage device
327:46 now let's look at incremental backup
327:49 an incremental backup is a backup of the data that has changed since the previous backup
327:53 the previous backup can either be an incremental backup or a full backup
327:59 we will explain this with the help of an example
328:02 let's say on Sunday we performed a full backup of our data that is on the production volume
328:09 on Monday let's say a few changes were done to the data on the production volume
328:15 in the diagram we will highlight these changes in blue
328:18 an incremental backup on Monday will copy only the changes
328:23 that were done to the production volume after Sunday's full backup
328:28 on Tuesday let's say a few more changes were done on the production volume
328:33 in the diagram we'll highlight these in pink
328:37 an incremental backup on Tuesday will copy only the changes
328:42 that were done on the production volume after Monday's incremental backup
328:50 on Wednesday let's say a few more additional changes were done on the production volume
328:53 in the diagram we will highlight these in orange
328:57 an incremental backup on Wednesday will copy only the changes
329:02 that were done on the production volume after Tuesday's incremental backup
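The selection rule behind an incremental backup (copy only what changed since the previous backup, whether full or incremental) can be sketched with file modification times; real backup software tracks changes more robustly, so this is only an illustration with made-up file names and timestamps:

```python
def incremental_candidates(files: dict, last_backup_time: float) -> list:
    """Return the files modified after the previous backup (full or incremental).

    `files` maps a file name to its last-modified timestamp.
    """
    return sorted(name for name, mtime in files.items() if mtime > last_backup_time)

# Hypothetical production volume: file name -> last-modified timestamp.
volume = {"a.txt": 100.0, "b.txt": 205.0, "c.txt": 310.0}

# After Sunday's full backup at t=200, Monday's incremental copies only b and c.
print(incremental_candidates(volume, 200.0))  # ['b.txt', 'c.txt']
# After Monday's incremental at t=300, Tuesday's incremental copies even less.
print(incremental_candidates(volume, 300.0))  # ['c.txt']
```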
329:11 the advantage of incremental backup is that it takes less time to back up
329:13 and consumes less storage capacity than a full backup
329:15 because it only copies the changes and not all the data
329:21 in addition to that an incremental backup carried out over the network will not be network intensive
329:29 because it has less data to transport from the production volume to the storage device
329:38 the disadvantage of incremental backup is that restoring data from the backup can be a time-consuming process
329:44 if the data must be fully restored
329:46 the restoration process must begin with the full backup
329:51 and then involves all the intermediate incremental backups that were taken since the full backup
330:01 we will explain this with the help of our previous example
330:04 let's say on Wednesday we lost our production data because of a disc failure
330:09 now if we want to restore the complete production data
330:14 then the last available backup that is Tuesday's backup alone will not help
330:17 it's because Tuesday's backup contains only the copy of the data that has been changed since Monday
330:26 having Monday's backup alone will not help
330:28 because it contains only the copy of data that was changed since Sunday
330:34 so if we have to restore the complete production data as it was on Tuesday
330:40 we need Sunday's full backup and all the incremental backups from Monday and Tuesday as shown in the diagram
330:50 it should be noted that the restore process of an incremental backup is more complicated than that of a full backup
331:00 now let's take a look at the differential backup
331:01 differential backup is the backup that copies all the changes since the last full backup
331:08 this type of backup is also known as cumulative backup
331:13 we'll explain this with the help of an example
331:17 let's say on Sunday we performed a full backup of our data that is on the production volume
331:23 on Monday let's say a few changes were done to this data
331:28 in the diagram we will highlight these changes in blue
331:33 a differential backup on Monday will copy the changes done to the production volume after Sunday's full backup
331:40 till now it looks similar to an incremental backup but you will notice the difference soon
331:47 on Tuesday let's say a few more changes were done on the production volume
331:53 in the diagram we will highlight these changes in pink
331:56 a differential backup on Tuesday will copy all the changes
332:01 that were made on the production volume after Sunday's full backup
332:05 on the contrary an incremental backup will copy only the changes
332:10 that were made on the production volume since the previous incremental backup
332:19 this is the main difference between the differential backup and the incremental backup
332:23 on Wednesday let's say a few more additional changes were done on the production volume
332:27 in the diagram we will highlight these in orange
332:32 a differential backup on Wednesday will copy all the changes
332:37 that were done on the production volume after Sunday's full backup
332:42 the advantage of differential backup over full backup
332:44 is that it takes less time to back up and takes less storage capacity than a full backup
332:49 as it does not copy all the data
332:55 it is easier and takes less time to restore than an incremental backup
333:02 in the event of data loss the full restoration requires the last full backup and the last differential backup
333:10 to fully restore the production data to the state it was on the day of the last differential backup
333:17 we will explain this with the help of our previous example
333:21 let's say on Wednesday we lost our production data because of a disc failure
333:27 now if we want to restore the complete production data
333:29 we need only Sunday's full backup and the last available differential backup that is Tuesday's backup
333:38 this is because Tuesday's backup contains the copy of all the data that changed since Sunday's full backup
333:46 there is no need for the intermediate differential backup of Monday
333:49 because the changed data of Monday is already available in Tuesday's differential backup
333:58 so if we have to restore the complete production data as it was on Tuesday
334:03 we need Sunday's full backup and Tuesday's differential backup
334:09 the restore process of a differential backup is easy compared to an incremental backup's restore process
334:17 now we will look at synthetic full backup
334:21 a synthetic full backup is a full backup created using the last full backup
334:26 and all the subsequent incremental backups
334:35 it's also called progressive backup
334:37 it is used in production environments that cannot extend their backup window to accommodate full backups
334:43 the primary purpose of synthetic full backup is to create a full backup offline
334:48 without affecting the production IO workloads
334:52 we will explain this with the help of a previous example that involved incremental backup
334:59 let's say that on last Sunday we performed a full backup of our data that is on the production volume
335:07 and from Monday to Saturday we will be taking incremental backups
335:11 let's say we want to take a full backup on the coming Sunday
335:16 now instead of taking a full backup again
335:19 what the synthetic backup does is it aggregates the incremental backups of all days from Monday to Saturday
335:28 and merges it with the existing full backup as shown in the diagram
335:33 the resulting backup will be the full backup that we wanted on the coming Sunday
335:43 the reason why synthetic full backup is possible is because the incremental backups so aggregated
335:48 reflect all the changes that were made since the last full backup
335:53 the advantage of synthetic full backup is that it frees the network bandwidth from the backup process
336:01 the downside of synthetic full backup is that it is complicated
336:06 because it involves merging the incremental backups with the last full backup
336:12 it should be noted that backups are no good if data cannot be recovered from them
336:18 it is likely that the data stored on the backup media can become corrupted
336:23 because of demagnetization or defects
336:31 so whenever data is backed up we must verify the backup
336:33 to ensure that data in the backup can be read and restored when needed
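One common way to make a backup verifiable is to record a checksum at backup time and recompute it before restore; a minimal sketch using Python's standard hashlib (the choice of SHA-256 here is an assumption for illustration, not something the course specifies):

```python
import hashlib

def checksum(data: bytes) -> str:
    """Digest recorded alongside the backup at backup time."""
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, recorded: str) -> bool:
    """Recompute the digest at verify/restore time and compare with the record."""
    return checksum(data) == recorded

original = b"production data"
recorded = checksum(original)

print(verify(original, recorded))            # True: backup data is intact
print(verify(b"corrupted data", recorded))   # False: corruption detected
```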
336:42 however verifying a backup doesn't ensure that the structure of the data in the backup is correct
336:46 and for this reason backups are created with backup checksums
336:51 checksums are used to verify the integrity of the data
336:56 a successful verification of a backup with checksums ensures that the data on the backup is reliable
337:04 some software applications or their agents perform a consistency check on their data
337:11 this type of verification is called application verification
337:16 now let's talk about data retention schemes
337:20 it is necessary to back up data regularly so that it can be recovered in the event of data loss
337:28 however if we want to retain this backed up data forever
337:31 it will take a lot of physical space to store the backup media
337:35 such as tapes or hard disk drives that were used to store the backup data
337:46 so the solution is to use backup media in a rotation scheme
337:49 to minimize the number of tapes or hard disk drives used
337:51 but without compromising the data recovery capability
338:01 now we will talk about one such backup media rotation scheme
338:03 called grandfather father son or GFS
338:09 grandfather father son is a rotation scheme that provides a regular recurring schedule for backing up data
338:16 it minimizes the number of tapes or discs used but with the ability to recover data
338:23 in the grandfather father son rotation scheme
338:26 backup media are categorized into three types son father and grandfather
338:34 son backups refer to the backup media used for daily backups on a rotational basis
338:42 father backups refer to the backup media used for weekly backups on a rotational basis
338:49 a full daily backup in a week is promoted from son to father
338:55 grandfather backups refer to the backup media used for monthly backups on a rotational basis
339:02 the most recent full father backup in a month is promoted from father to grandfather
339:11 and that brings us to the end of this lesson
339:12 let's summarize what you have learned in this lesson
339:14 in this lesson you learned about backup and restore methods
339:20 we started by looking at the types of backup methods
339:22 specifically we looked at full backup incremental backup
339:28 differential backup and synthetic full backup
339:33 next we talked about verifying backups and then we talked about checksums
339:39 we also looked at what application verification is
339:42 and then we talked about data retention schemes
339:46 while talking about data retention schemes
339:48 we looked at one such scheme called grandfather father son or GFS
339:55 in the next lesson you will learn the various methods of backup implementation
339:59 thank you for watching
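As a recap of this lesson, the difference between the incremental and differential restore chains can be sketched by modeling each backup as a dictionary of file versions merged in order; this is a simplification for illustration, with invented file names and versions:

```python
def restore(full: dict, later_backups: list) -> dict:
    """Apply a chain of backups, in order, on top of the last full backup."""
    state = dict(full)
    for backup in later_backups:
        state.update(backup)
    return state

full = {"a": 1, "b": 1}                  # Sunday's full backup
inc_mon, inc_tue = {"a": 2}, {"b": 3}    # incrementals: changes since the previous backup
diff_tue = {"a": 2, "b": 3}              # differential: all changes since the full backup

# An incremental restore needs the full backup plus EVERY incremental in order;
# a differential restore needs only the full backup plus the LAST differential.
print(restore(full, [inc_mon, inc_tue]))  # {'a': 2, 'b': 3}
print(restore(full, [diff_tue]))          # {'a': 2, 'b': 3}
```

Both chains reach the same state; the differential chain is simply shorter, which is why its restore process is described as easier.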
340:26 hello and welcome to unit 3 backup implementation
340:27 implementation methods in this lesson you will learn
340:30 methods in this lesson you will learn the various methods of backup
340:32 the various methods of backup implementation we're going to start by
340:34 implementation we're going to start by looking at the components of a backup
340:36 looking at the components of a backup environment
340:38 environment these are backup server backup client
340:42 these are backup server backup client media server and backup Target next we
340:45 media server and backup Target next we will look at how these backup components
340:47 will look at how these backup components work
340:48 work together lastly we will take a detailed
340:51 together lastly we will take a detailed look at the types of backup
340:53 look at the types of backup implementation methods these are direct
340:56 implementation methods these are direct attached Backup landbased backup sand
341:00 attached Backup landbased backup sand based backup and serverless
341:03 based backup and serverless backup now let's look at the components
341:06 backup now let's look at the components of a backup environment
341:08 of a backup environment a backup environment typically consists
341:10 a backup environment typically consists of the following a backup server backup
341:14 of the following a backup server backup clients media servers backup
341:17 clients media servers backup targets we will explain these terms with
341:20 targets we will explain these terms with the help of a
341:22 diagram a backup server is the central
341:24 control center for all backup
341:28 activities it is responsible for
341:30 coordinating the backup operations of
341:32 all the backup clients in the backup
341:35 environment it maintains a backup
341:38 catalog which is a special purpose
341:40 database that contains all the
341:42 information about the backup
341:45 environment it also manages the backup
341:48 schedules of backup jobs such as when to
341:51 back up a
341:53 client a backup server can be associated
341:56 with many backup clients and media
341:59 servers a backup client is any computer
342:02 that contains data to be backed
342:05 up it could be an application server
342:08 database server file server and so
342:12 on a software agent is needed on the
342:14 backup client to assist with the backup
342:18 process it accesses the storage device
342:21 through the media
342:23 server a media server is any computer in
342:26 the backup environment to which backup
342:28 devices are connected it controls those
342:37 devices it processes all the backup jobs
342:40 and writes data to the backup
342:43 devices a media server can be associated
342:46 with only one backup
342:49 server a backup target is a device that
342:52 stores the backup data for example a
342:55 tape library is a backup
343:03 target it's also known as a backup
343:03 device these components work together in
343:06 a client-server model now let's see how
343:07 they work
343:10 together the backup server gets the
343:12 backup-related information from its
343:13 backup
343:16 catalog it then initiates the backup
343:19 process as per the schedule by sending a
343:21 request to the backup client's software
343:25 agent asking it to start the backup
343:28 job in the meantime the backup server
343:30 instructs the media server to keep the
343:33 backup device ready for storing backup
343:36 data the backup client software agent
343:38 responds to the backup server's
343:40 request with its
343:43 metadata the software agent then sends
343:45 the data to be backed up over the
343:48 network to the media server the media
343:50 server writes the data to the backup
343:53 target such as a tape
343:56 library the media server then sends the
343:58 metadata information to the backup
344:02 server with the status of the backup
344:04 operation now we will look at the
344:07 different implementations of backups
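The client-server flow just described can be sketched in a few lines of Python. This is a minimal illustrative model, not any real backup product's API; all class and method names are made up for the example.

```python
# Minimal sketch of the client-server backup flow described above.
# All names are illustrative, not a real backup product's API.

class MediaServer:
    """Any computer to which the backup devices are connected."""
    def __init__(self):
        self.backup_target = []              # stands in for a tape library

    def write(self, blocks):
        self.backup_target.extend(blocks)    # media server writes to the backup target
        return {"status": "success", "blocks": len(blocks)}

class BackupClient:
    """Any computer that contains data to be backed up."""
    def __init__(self, name, blocks):
        self.name, self.blocks = name, blocks

    def run_backup(self, media_server):
        # the client's software agent sends the data over the network
        return media_server.write(self.blocks)

class BackupServer:
    """Central control center; coordinates clients and keeps the catalog."""
    def __init__(self):
        self.catalog = {}                    # special-purpose metadata database

    def start_job(self, client, media_server):
        status = client.run_backup(media_server)   # ask the agent to start the job
        self.catalog[client.name] = status         # record the returned metadata
        return status

server, media = BackupServer(), MediaServer()
client = BackupClient("file-server-01", ["block1", "block2"])
print(server.start_job(client, media))  # {'status': 'success', 'blocks': 2}
```

Note how only metadata and status land in the backup server's catalog, while the data itself flows from the client to the media server, mirroring the division of labor in the lesson.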
344:07 there are four types of backup
344:09 implementation
344:10 methods these are direct attached backup
344:14 LAN-based backup SAN-based backup and
344:18 serverless
344:20 backup in direct attached backup the
344:22 backup device is directly connected to
344:24 the backup client which in turn is
344:27 connected to the backup server through
344:29 the LAN as shown in the
344:31 diagram there is no separate media
344:33 server since the backup client acts as a
344:36 media server
344:38 in this arrangement the LAN is used
344:40 for sending only metadata from the
344:42 backup client to the backup
344:45 server the amount of metadata traffic
344:48 transported over the LAN is less than
344:51 the application
344:53 traffic the advantages of direct
344:55 attached backup are that the backup devices
344:58 can operate at high
345:00 speed backup and restore operations are
345:03 optimized for performance because the
345:06 backup devices are both local and
345:08 dedicated to the backup
345:10 client it is suitable for small-scale
345:14 environments the disadvantage of direct
345:16 attached backup is that it impacts host
345:19 performance because the backup workload
345:21 consumes host resources such as
345:24 processor memory and
345:32 I/O also it is not scalable and is not suitable for large
345:35 environments now let's look at LAN-based
345:38 backup in LAN-based backup the backup
345:41 server the backup client and the media
345:43 server are connected to the LAN the
345:46 backup device is directly connected to
345:48 the media server that is part of the
345:51 LAN in this type of implementation the
345:54 backup data is sent from the backup
345:56 client to the media server over the LAN
345:59 and the media server writes the data to
346:01 the backup
346:03 device the advantage of LAN-based backup
346:06 is that it reduces cost because it increases
346:09 storage utilization of backup
346:12 devices it improves manageability
346:15 because it has a centralized
346:18 backup the disadvantage of LAN-based
346:20 backup is that it impacts host performance
346:23 because the backup workload consumes the
346:25 host resources such as processor memory
346:29 and
346:30 I/O it affects the network performance
346:33 and availability especially when the
346:35 application traffic and the backup
346:37 traffic share the same network
346:41 bandwidth the impact on the network
346:43 performance can be reduced by isolating
346:45 the application traffic from the backup
346:48 traffic by having a separate network
346:50 installed for
346:52 backups now let's look at SAN-based
346:55 backup SAN-based backup is also known
346:58 as LAN-free backup in a SAN-based backup
347:02 the backup client the media server and
347:05 the backup device are connected to the
347:07 SAN in addition to that the backup
347:09 client and the media server in turn are
347:12 connected to the backup server through
347:14 the LAN as shown in the
347:16 diagram in this arrangement the backup
347:19 client sends the data being backed up
347:22 over the SAN to the media server which
347:25 in turn writes the data to the backup
347:27 device as shown in the
347:30 diagram the advantage of SAN-based
347:33 backup is that the network performance of
347:35 the LAN is not affected
347:37 this is because the SAN is used for sending
347:40 backup
347:41 traffic and the LAN is used only for
347:43 sending metadata from the backup client
347:46 and the media server to the backup
347:49 server the disadvantage of SAN-based
347:52 backup is that it impacts host
347:54 performance because the backup workload
347:56 consumes the host resources such as
347:59 processor memory and
348:07 I/O now let's look at serverless backup serverless backup is a SAN-based backup
348:10 that uses SAN resources to copy data
348:13 from the source storage device to the
348:15 backup storage
348:17 device it is called serverless backup
348:20 because it doesn't use host resources to
348:22 perform the
348:24 backup serverless backup uses the
348:27 extended copy function of the fiber
348:29 channel SAN to copy data from one
348:31 storage device to another storage device
348:34 in the
348:35 SAN let's explain this with the help of
348:38 a
348:39 diagram the backup server initiates the
348:41 backup process by sending a request to
348:44 the backup client over the LAN to start
348:47 the backup as you can see the backup
348:49 client storage array and tape library
348:53 are connected to the storage area
348:55 network the data of the backup client is
348:58 stored in the storage
349:00 array using the extended copy function
349:04 the data is copied from the storage
349:05 array to the tape library over the
349:13 SAN the advantage of serverless backup is that it doesn't use host resources
349:15 because the data is copied between the
349:17 storage
349:23 devices the LAN is not used to transport backup traffic there is an increase in
349:26 performance because the backup traffic
349:28 is transported over the fiber channel
349:31 SAN the disadvantage of using
349:33 serverless backup is that applications
349:35 using the SAN may be affected indirectly if
349:38 the SAN experiences high backup
349:42 traffic
349:44 and that brings us to the end of
349:44 this lesson let's summarize what you
349:47 have learned in this lesson in this
349:49 lesson you learned about the various
349:51 methods of backup
349:53 implementation we started by looking at
349:55 the components of a backup
349:57 environment these are backup server
350:00 backup client media server and backup
350:04 target next we looked at how these
350:07 backup components work together lastly
350:10 we took a detailed look at the types of
350:12 backup implementation
350:14 methods these are direct attached backup
350:17 LAN-based backup SAN-based backup and
350:21 serverless
350:22 backup in the next lesson you will learn
350:25 about backup targets thank you for
350:27 watching
350:52 hello and welcome to unit 4 backup
350:55 targets in this lesson you will learn
350:57 about backup targets we're going to
351:01 start by looking at what a backup target
351:03 is next we will look at the history of
351:05 tape and then we will look at what a
351:08 tape is
351:10 we will also look at what a tape drive
351:12 is and then we will look at what a tape
351:14 library
351:17 is we will look at the disk-to-tape or
351:19 D2T backup
351:22 solution next we will look at what shoe
351:27 shining is and then we will look at
351:29 multiplexing we will also look at
351:31 multistreaming and then we will talk
351:35 about linear tape open or
351:37 LTO next we will talk about the
351:39 performance of a media server in a
351:41 typical backup
351:43 environment and then we will look at the
351:50 network data management protocol or
351:53 NDMP we will also look at what a virtual
351:56 tape library or VTL
351:59 is next we will compare the virtual tape
352:01 library versus physical tape library and
352:04 then we will talk about the disk-to-disk
352:06 or D2D backup
352:10 solution lastly we will talk about
352:11 the disk-to-disk-to-tape or D2D2T
352:14 backup
352:14 solution now let's look at what a backup
352:17 target is a backup target is a device
352:19 that stores the backup
352:23 data it is also known as a backup
352:26 device an example of a backup target is
352:31 a tape drive it uses tapes as the backup
352:34 media during the latter part of the 20th
352:36 century tape was used for backup and
352:38 disaster recovery
352:41 purposes tape is still used today as a
352:43 backup medium as it has stability and
352:45 low unit
352:48 cost tape refers to the magnetic tape
352:51 that is kept inside a casing called a
352:53 cartridge typically it is used for
352:54 offline data
352:57 storage a tape drive is a data storage
353:00 device used to either write data to a
353:07 magnetic tape or to read data from
353:07 it a tape drive provides sequential
353:09 access to data by moving the tape
353:15 forward or
353:15 backward it's not suitable for random
353:17 access because the tape drive has to
353:19 literally wind the tape around the reels
353:21 to read any particular
353:24 data now we will look at the tape
353:27 library a tape library is a storage
353:30 device that contains the following one
353:32 or more tape drives a bunch of slots for
353:35 holding the tape cartridges a barcode
353:37 reader for identifying the tape
353:40 cartridges and an automated mechanism
353:42 called a robot for loading and unloading
353:51 the tape cartridges to and from the tape
353:51 drives now let's talk about disk-to-tape
353:55 disk-to-tape or D2T in short is a
353:57 backup solution in which data is backed
354:00 up from a hard disk drive of a backup
354:02 client such as an application server to
354:04 a
354:07 tape tape and tape drive are not going
354:09 to disappear in the near future because
354:12 they offer the following
354:15 advantages cost-effective the cost of
354:17 storing data in tapes is less expensive
354:20 than other storage
354:23 media for example tape is less expensive
354:26 than hard disk drives on a per-byte
354:29 basis the cost of adding tapes to a tape
354:31 library is less than adding disks to a
354:33 storage
354:36 array power consumed by a tape library
354:39 is significantly less than hard disk
354:42 drives since tapes are ideally used for
354:44 large archives the data stored in tapes
354:47 is not often accessed which saves cost
354:51 in power and cooling over hard disk
354:54 drives scalability tape libraries are
354:56 scalable because they allow tape drives
354:59 to be added and in some cases even
355:01 cartridge slots can be
355:04 added with the addition of tape drives
355:06 the capacity and performance of the tape
355:08 library are also
355:11 increased consolidation a tape library
355:14 allows consolidation of individual tape
355:17 backup solutions and with a centralized
355:20 administration the entire backup process
355:21 can be
355:24 managed compression tape drives
355:26 automatically compress the data in their
355:34 hardware whereas hard drives
355:34 don't encryption tape drives have
355:36 inbuilt encryption processes and each
355:38 tape drive has a key management system
355:41 attached to
355:44 it WORM support tapes have WORM
355:47 capability this means that data can only
355:50 be written once and cannot be
355:53 modified the WORM functionality of tapes is
355:55 used for legal compliance when it comes
355:57 to archiving
356:00 data reliability unlike hard disk
356:03 drives the heads in tape drives are
356:06 separated from the tape media
356:08 even if the head fails the tape will
356:10 still be in working
356:13 condition tapes are durable because
356:15 the hard casing covering the tape
356:17 protects it from physical
356:21 damage performance the read/write performance
356:25 of tape drives starts from 120 megabytes
356:27 per second whereas enterprise disk
356:29 drives offer an average read/write
356:37 performance of 100 megabytes per
356:37 second multiple disk drives running in
356:39 parallel are needed to match the
356:41 performance of a single
356:44 tape drive removability tapes are designed to
356:47 be ejected from the tape library they
356:49 can be moved to other locations
356:51 effortlessly because they weigh less and
356:53 are
356:57 durable a 3.5-inch hard disk drive weighs
357:00 about 1.5 lb whereas a tape weighs
357:02 around 0.5
357:05 lb since tapes have high density the
357:08 shipping cost per gigabyte of a tape is
357:10 less than enterprise disk
357:10 drives now let's look at what shoe
357:13 shining is in a tape-based backup
357:16 environment during the backup process if
357:19 the data transfer rate is not fast
357:21 enough to cope with the transfer speed
357:23 of the tape drive then the tape drive's
357:25 buffer is
357:27 emptied when this happens the tape drive
357:30 will stop and will rewind to the last
357:32 write position to begin writing when the
357:34 buffer fills
357:36 this event is termed shoe
357:40 shining shoe shining impacts performance
357:43 as it slows down the backup
357:45 process in addition it also causes wear
357:48 and tear to the
357:50 tape most backup applications often use
357:53 multiplexing to avoid shoe
357:56 multiplexing to avoid shoe shining in multiplexing data is sent
357:59 shining in multiplexing data is sent from multiple backup clients to a single
358:01 from multiple backup clients to a single tape drive since these clients cannot
358:04 tape drive since these clients cannot send data fast enough to keep the tape
358:06 send data fast enough to keep the tape Drive busy multiplexing allows them to
358:08 Drive busy multiplexing allows them to send their backup data
358:11 send their backup data simultaneously while multiplexing allows
358:13 simultaneously while multiplexing allows the tape drive to operate at full speed
358:16 the tape drive to operate at full speed it requires backups to be coordinated
358:18 it requires backups to be coordinated across several backup clients to stream
358:20 across several backup clients to stream this data the disadvantage of this
358:23 this data the disadvantage of this approach is that it increases the time
358:25 approach is that it increases the time taken to restore the data because the
358:27 taken to restore the data because the backup application now has to get the
358:30 backup application now has to get the data through several backup sessions to
358:32 data through several backup sessions to restore data to the particular
358:34 restore data to the particular application server
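The interleaving and the restore penalty just described can be illustrated with a short sketch. The client names and block labels are invented for the example; real multiplexing operates on tape blocks, not Python lists.

```python
# Minimal sketch of multiplexing: blocks from several slow clients are
# interleaved onto one tape so the drive keeps streaming.
from itertools import zip_longest

clients = {
    "app-server":  ["a1", "a2", "a3"],
    "db-server":   ["d1", "d2", "d3"],
    "file-server": ["f1", "f2", "f3"],
}

# interleave one block per client per round, tagging each block with its owner
tape = [(owner, block)
        for round_ in zip_longest(*clients.values())
        for owner, block in zip(clients, round_)
        if block is not None]

def restore(owner):
    # a restore has to scan past the other clients' interleaved blocks,
    # which is why multiplexed backups are slower to restore
    return [block for own, block in tape if own == owner]

print(restore("db-server"))  # ['d1', 'd2', 'd3']
```

Only one block in three on the tape belongs to any given client, so a restore reads roughly three times the data it actually needs, which is the tradeoff the lesson points out.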
358:37 now let's talk about
358:39 multistreaming in multistreaming data
358:42 from a single backup client is
358:43 simultaneously sent in multiple streams
358:46 to multiple tape
358:48 drives multistreaming is required when
358:51 the client's data transfer rate is
358:53 faster than the transfer speed of a
358:55 single tape
358:56 drive it is critical that not more than
358:59 a single stream of data should be
359:01 allowed to access a single tape drive at
359:03 the same time because it would cause
359:05 drive
359:10 failure while there are many tape
359:12 technologies the most popular tape
359:15 technology is linear tape open or
359:18 LTO it is a magnetic tape-based data
359:20 storage technology that was developed as
359:22 an open standard alternative to
359:24 proprietary tape
359:27 technologies the open nature of LTO
359:29 provides compatibility among the
359:32 products offered by different
359:34 vendors there are different generations
359:38 of LTO tape drives starting from LTO-1 to
359:41 the latest generation
359:44 LTO-6 LTO drives feature high capacity
359:50 and fast data transfer
359:50 rates the maximum data capacity and data
359:53 transfer speed of each generation of LTO
359:56 drive is tabulated on the
359:59 slide one of the highlights of LTO
360:01 drives is that they provide backward
360:03 compatibility to help companies protect
360:06 their investments in technology
360:09 each generation of LTO drive is backward
360:11 read/write compatible with the previous
360:14 generation of the LTO media and also
360:17 read compatible with LTO media two
360:22 generations
360:22 old let's explain this with the help of
360:26 an example LTO-6 drives are read/write
360:30 compatible with LTO-5 media and also read
360:30 compatible with LTO-4
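The backward compatibility rule stated above is simple enough to capture as a small function. This is a sketch of the rule as the lesson states it; the function name is illustrative.

```python
# Sketch of the LTO compatibility rule described above: a drive can read and
# write media of its own and the previous generation, and can read media
# two generations old.
def lto_access(drive_gen: int, media_gen: int) -> str:
    if drive_gen - 1 <= media_gen <= drive_gen:
        return "read/write"
    if media_gen == drive_gen - 2:
        return "read-only"
    return "incompatible"   # older than two generations, or newer than the drive

print(lto_access(6, 5))  # read/write  (LTO-6 drive, LTO-5 media)
print(lto_access(6, 4))  # read-only   (LTO-6 drive, LTO-4 media)
print(lto_access(6, 3))  # incompatible
```

This matches the example in the lesson: an LTO-6 drive can read and write LTO-5 media and can only read LTO-4 media.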
360:32 media now let's look at the network data
360:34 management protocol or
360:37 NDMP in a typical backup environment
360:40 backup data is sent from an application
360:42 server to the media server over the
360:45 LAN the media server buffers the data in
360:48 its disk cache until it can stream
360:51 sufficient data to the tape library
360:53 without slowing down the data transfer
360:56 rate while the data is buffered the
360:58 media server loads the tape for
361:00 transferring the data to
361:02 it the downside of this approach is that
361:05 the media server becomes a bottleneck
361:07 where data has to be cached before it
361:10 can be transferred to the tape
361:12 library in order to overcome this
361:15 bottleneck imposed by the media server
361:17 the network data management protocol or
361:20 NDMP was developed to allow application
361:23 servers to directly back up data to the
361:25 tape
361:27 library NDMP is a protocol to manage
361:30 network
361:31 backups in NDMP parlance the backup
361:35 and restore operations are referred to
361:37 as data management
361:39 operations and the backup server that
361:42 coordinates these operations is called a
361:44 data management application server or
361:47 DMA
361:54 server in the NDMP configuration the DMA
361:57 server manages the tape library and
362:00 maintains the backup catalog and the
362:02 data gets backed up directly from the
362:02 application server to the tape
362:04 Now let's talk about the virtual tape library, or VTL. A virtual tape library is a disk-based backup system that emulates a physical tape library. A VTL is made up of the following: computer hardware run by a Linux-based operating system, a software program that emulates the components of a physical tape library, and a RAID-based disk array to prevent loss of backup data in the event of a drive failure.
362:36 In a backup environment, a VTL presents itself as a physical tape library, and hence it can be easily integrated with the existing backup software, backup processes, and policies.
362:52 Now let's compare a virtual tape library with a physical tape library. In a VTL, a new virtual tape drive can be added on demand through software configuration, at no additional cost. In a physical tape library, however, a new tape drive must be purchased and manually installed.
363:11 VTLs are also more scalable than physical tape libraries, because more hard disk drives can be added to the disk array enclosure to increase the storage capacity.
363:23 In a physical tape library, backup failures occur when there is a problem with the tape drives and media, whereas in a VTL backup failures are very rare because it uses RAID-based storage.
363:38 Most VTLs are capable of delivering a throughput of 150 megabytes per second, which improves performance and allows backups to be completed within the backup window. However, modern physical tape drives that can back up data, with compression, at a throughput rate greater than 50 megabytes per second are faster than VTLs, especially when backing up large amounts of multistreamed data.
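As a rough sanity check, throughput figures like these can be turned into a backup-window calculation. The 150 MB/s figure comes from the lesson; the 3 TB data set and 8-hour window below are illustrative assumptions, not figures from the course:

```python
# Hypothetical sizing check: can a backup finish inside its window?
def backup_hours(data_tb: float, throughput_mb_s: float) -> float:
    """Hours needed to stream data_tb terabytes at throughput_mb_s MB/s."""
    total_mb = data_tb * 1_000_000          # 1 TB = 1,000,000 MB (decimal units)
    return total_mb / throughput_mb_s / 3600

hours = backup_hours(3.0, 150.0)            # assumed 3 TB data set
print(f"{hours:.1f} hours")                 # about 5.6 hours
print("fits 8-hour window:", hours <= 8.0)  # True
```

At a sustained 150 MB/s, a 3 TB backup takes roughly five and a half hours, comfortably inside an assumed eight-hour window; halving the throughput would double the time and start to threaten the window.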
364:15 Restoring backups from a VTL takes less time than from a physical tape library, especially when recovering particular files. This is because the VTL is a disk-based system, and disks are good at random access, while tape drives are good at sequential access. However, for restoring large amounts of data, the physical tape library can be faster when it involves parallel read access by multiple drives.
364:41 We saw earlier that multiplexing data streams to a single tape drive was done to avoid shoe shining. However, restoring data from a multiplexed backup takes a long time, because data from several backup clients is mixed up and spread over the tape. So instead of multiplexing data, a VTL assigns a separate virtual drive to each backup client. The backed-up data can then be copied from the disk to a physical tape without multiplexing, and as a result, recovering data from such a physical tape can be faster than restoring it from a multiplexed backup. Hence, VTLs are usually deployed as a front end to a traditional physical tape library.
365:29 Backup to disk is also called disk-to-disk backup, or D2D backup, because data is copied from one hard disk drive to another hard disk drive. Backup to disk has become more reliable and less expensive than in the past, making it a compelling alternative to tape technology. This is evident from the fact that the virtual tape library is itself a disk-based system.
365:56 Let's discuss the advantages of D2D. Speed of recovery: D2D offers quick recovery of lost data, since hard disk drives are good at random access. Cost-effectiveness: hard disk drives are less expensive than they were in the past. User-friendliness: disk-based backup doesn't require loading and unloading like tape drives. Reliability: disk-based backup has become more reliable over time, especially when it is implemented in a RAID configuration. Flexibility: backup disks can be implemented in storage devices attached directly to an individual server, as network-attached storage on a LAN, or as storage connected to a SAN.
366:53 Now let's look at disk-to-disk-to-tape, or D2D2T, backup solutions. In a D2D2T backup solution, data is first backed up on disk, where it is retained for a certain period of time for quick restores of lost data, and then it is moved to tape.
367:11 That brings us to the end of this lesson. Let's summarize what you have learned. In this lesson you learned about backup targets. We started by looking at what a backup target is. Next we looked at the history of tape, and then at what a tape is. We also looked at what a tape drive is, and then at what a tape library is. We looked at the disk-to-tape, or D2T, backup solution. Next we looked at what shoe shining is, and then at multiplexing. We also looked at multistreaming, and then we talked about Linear Tape-Open, or LTO. Next we talked about the performance of a media server in a typical backup environment, and then we looked at the Network Data Management Protocol, or NDMP. We also looked at what a virtual tape library, or VTL, is. Next we compared the virtual tape library to a physical tape library, and then we talked about the disk-to-disk, or D2D, backup solution. Lastly, we talked about the disk-to-disk-to-tape, or D2D2T, backup solution.
368:28 In the next lesson, you will learn about content-addressable storage and archive. Thank you for watching.
369:00 Hello and welcome to Unit 5, Content-Addressable Storage and Archive. In this lesson you will learn about content-addressable storage and archive. We're going to start by looking at the information life cycle, and then we will look at what fixed data is. Next we'll look at content-addressable storage, or CAS. While talking about CAS, we will define content address and fixed content asset. We'll also look at what an archive is, and then at the types of archives: online archive, nearline archive, and offline archive. Next we will look at the legal compliance associated with the maintenance of archives, and then we will talk about traditional archives, including their disadvantages. Next we will look at CAS as an archive solution, and then we will talk about the benefits of content-addressable storage. We'll also look at what vaulting is, and lastly at what e-vaulting is.
370:08 Now let's look at the information life cycle. In the information life cycle, once data is created, it is accessed and edited multiple times. But as the data becomes old, it is no longer edited, and so it becomes a fixed entity. Such data that does not change is called fixed data. Examples of fixed content are static web pages, emails, electronic documents, videos, and so on.
370:38 When fixed data becomes voluminous, it is difficult to store and access. Thus content-addressable storage was developed to store fixed data. Content-addressable storage, or CAS, is an object-based data storage technology developed for storing and retrieving fixed data. It stores user data and its metadata as separate objects. Each stored object is assigned a unique global identifier called a content address, or CA. The CA uniquely identifies the content, but not its location. The content address is generated from the binary representation of the object.
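The idea of deriving the address from the object's bytes can be sketched in a few lines of Python. SHA-256 stands in for whatever hash a real CAS product uses (the actual algorithm is vendor-specific), and the in-memory dictionary is a stand-in for the object store:

```python
import hashlib

# Minimal CAS sketch: the content address (CA) is derived from the binary
# representation of the object. SHA-256 here is an assumption; real CAS
# products choose their own hashing algorithm.
class CasStore:
    def __init__(self):
        self._objects = {}                       # CA -> object bytes

    def put(self, data: bytes) -> str:
        ca = hashlib.sha256(data).hexdigest()    # address depends only on content
        self._objects[ca] = data                 # identical content -> same CA
        return ca

    def get(self, ca: str) -> bytes:
        return self._objects[ca]                 # retrieval is by content, not location

store = CasStore()
ca1 = store.put(b"fixed content object")
ca2 = store.put(b"fixed content object")         # writing the same bytes again
print(ca1 == ca2)                                # True: one address per content
print(len(store._objects))                       # 1: only a single instance stored
```

Because the address is a pure function of the content, writing the same object twice yields the same address and consumes no extra space, which is the property the CAS benefits below build on.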
371:23 Fixed data stored for future business purposes is called a fixed content asset. Fixed content assets are used by businesses for various purposes, such as generating revenue, improving business operations, and taking advantage of their historic value. Since fixed content assets will be accessed frequently, they should be readily available on demand.
371:51 Now let's talk about archives. An archive is a storage area where fixed content is stored. There are three types of archives: online archive, nearline archive, and offline archive.
372:07 An online archive is for businesses that need frequent and immediate access to the fixed data in the archive. In this type of archive, the storage device is directly connected to the host, allowing for immediate access to the data.
372:23 A nearline archive is for businesses that need regular access to the fixed data in the archive. In this type of archive, the storage device is connected to the host but has to be mounted before the data can be accessed. As the name "nearline" suggests, the archive is near to being online once the storage device is mounted.
372:47 An offline archive is for businesses that don't need regular access to the fixed data in the archive. In this type of archive, the storage device is neither directly connected nor can it be loaded for real-time access; manual intervention is needed before data can be accessed.
373:08 Businesses, as part of their legal compliance, are required to ensure that data in the archive cannot be modified or deleted. This is the reason why archives are usually stored on a write once, read many, or WORM, device. An example of a WORM device is a CD-ROM. WORM devices ensure that the data is not modified.
373:35 Archives are traditionally stored on optical discs and tapes. The downside of archiving on optical discs and tape drives is that these devices are not capable of recognizing the data being stored, so they cannot avoid storing the same data many times. In addition, optical discs and tape devices can succumb to wear and tear, so there is a risk of data being lost, and there is also overhead involved if the data has to be converted into new formats to allow access to the content.
374:12 Content-addressable storage emerged as an alternative that overcomes the shortcomings of optical discs and tape drives. CAS improves data accessibility and also provides protection to the archived data.
374:30 Now let's look at the benefits of content-addressable storage.
374:32 Authenticity: CAS provides content authenticity. All the data stored using CAS has a unique content address, generated from the binary representation of the object. Whenever an object is accessed, its content address is regenerated using a hashing algorithm and compared with the object's original content address. If the validation fails, the object is restored from the mirrored copy.
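The validation step just described can be sketched as follows; the primary/mirror dictionaries and the repair logic are simplified assumptions about how a CAS product keeps and uses a redundant copy:

```python
import hashlib

# Sketch of CAS read-time validation: recompute the hash, compare it with
# the original content address, and fall back to the mirrored copy on a
# mismatch. The in-memory primary/mirror stores are simplified stand-ins.
def read_with_validation(ca, primary, mirror):
    data = primary[ca]
    if hashlib.sha256(data).hexdigest() != ca:   # validation failed
        data = mirror[ca]                        # restore from the mirrored copy
        primary[ca] = data                       # repair the primary copy
    return data

payload = b"archived object"
ca = hashlib.sha256(payload).hexdigest()
primary = {ca: b"silently corrupted"}            # simulate bit rot on the primary copy
mirror = {ca: payload}
print(read_with_validation(ca, primary, mirror) == payload)  # True
print(primary[ca] == payload)                    # True: primary was repaired
```

The same hash comparison that proves authenticity also detects any modification, which is why the integrity benefit below comes for free from the same mechanism.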
375:12 Integrity: CAS ensures that content is not modified. The hashing algorithm used to check content authenticity also ensures the integrity of the content.
375:21 Single-instance storage: CAS ensures that only a single instance of data is stored. This is done using a unique signature that's generated from the binary representation of the data.
375:37 Retention enforcement: CAS protects and keeps the data based on the retention policy.
375:45 Now let's talk about vaulting and e-vaulting. Vaulting is the process of securing the data by copying it to tape and placing it in a secure off-site location called a vault. E-vaulting is the process of transferring the data, through a computer network, to a secure off-site location called an e-vault.
376:11 And that brings us to the end of this lesson. Let's summarize what you have learned. In this lesson you learned about content-addressable storage and archive. We started by looking at the information life cycle, and then we looked at what fixed data is. Next we looked at what content-addressable storage, or CAS, is. While talking about CAS, we defined content address and fixed content asset. We also looked at what an archive is, and then at the types of archives: online archive, nearline archive, and offline archive. Next we looked at the legal compliance associated with the maintenance of archives, and then we talked about traditional archives and their disadvantages. Next we looked at CAS as an archive solution, and then we talked about the benefits of content-addressable storage. We also looked at what vaulting is, and lastly at e-vaulting.
377:21 In the next lesson, you will learn about capacity optimization methods. Thank you.
377:52 Hello and welcome to Unit 1, Capacity Optimization Methods, Part One. In this lesson you will learn about a popular capacity optimization method called data deduplication. We're going to start by looking at the need for capacity optimization, and then we will look at data deduplication itself. When we cover data deduplication, we will look at what data deduplication is, and then at the methods of data deduplication: file-level deduplication and block-level deduplication. We will also look at the deduplication ratio, and then at the characteristics of data that have an impact on deduplication. After that, we will talk about the effectiveness of data deduplication, and then we will look at the types of data deduplication: post-process deduplication and inline deduplication. Next we will cover appliance-based deduplication, and lastly we will look at software-based deduplication.
379:05 Now let's look at the need for capacity optimization methods. Storage has become a major cost component in data centers. Though the cost of storage devices is decreasing, the tremendous growth of data has increased the cost of managing storage systems. As a result, organizations want to utilize storage more efficiently without increasing the cost. Capacity optimization methods are methods that reduce the consumption of space required to store data. The popular capacity optimization methods are data deduplication, compression, and thin provisioning.
379:46 Data deduplication refers to the elimination of duplicate data. In data deduplication, duplicate data is identified and deleted. As a result of deduplication, only a single instance of the data, that is, only one copy, is retained in storage.
380:10 There are two major ways in which data deduplication is accomplished: file-level deduplication and block-level deduplication.
380:18 File-level deduplication is also called single-instancing deduplication. In file-level deduplication, duplicate files are identified and deleted. Since an entire file is compared with other files to eliminate duplicates, the files must be exactly the same to be considered duplicates. File-level deduplication is simple and fast, but it doesn't look into duplicate content that exists in different files. For example, let's say there are two exact copies of a Word file. If we make even a negligible change to one of the Word files, such as adding a space, then that file becomes a different one. As a result, deduplication will not delete that duplicate file, and it will still be occupying storage space.
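The file-level scheme described above can be sketched with a hash of each whole file; note how the one-character change defeats it, exactly as in the Word-file example. File contents are given as in-memory bytes for simplicity:

```python
import hashlib

# Sketch of file-level (single-instance) deduplication: hash each whole
# file and store a copy only for digests not seen before.
def dedupe_files(files):
    stored = {}                                  # digest -> single stored copy
    index = {}                                   # filename -> digest of its content
    for name, data in files.items():
        digest = hashlib.sha256(data).hexdigest()
        stored.setdefault(digest, data)          # keep only the first instance
        index[name] = digest
    return index, stored

files = {
    "report.doc":      b"quarterly results",
    "report_copy.doc": b"quarterly results",     # exact duplicate: eliminated
    "report_v2.doc":   b"quarterly results ",    # one extra space: NOT eliminated
}
index, stored = dedupe_files(files)
print(len(stored))  # 2: one copy for the identical pair, one for the near-copy
```

Three files collapse to two stored instances: the byte-identical pair shares one copy, while the file differing by a single space hashes differently and is stored in full.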
381:12 Block-level deduplication is also called subfile-level deduplication. In block-level deduplication, a file is broken into smaller fixed- or variable-size blocks, and duplicate blocks are found for deletion. It is more efficient than file-level deduplication because it works at the block level. Block-level deduplication makes it possible to deduplicate files that are similar but a little different: unlike file-level deduplication, it does not require absolutely identical copies of the same file.
381:54 There are two types of block-level deduplication: fixed-length deduplication and variable-length deduplication. In fixed-length deduplication, a file is broken into fixed-size blocks, and duplicate blocks are found for deletion. Fixed-length deduplication consumes less processing power, but is less effective than variable-length deduplication. This is because any change to an identical file will result in the creation of blocks that have changed; therefore, two files with a small amount of difference will only have a few identical blocks.
382:33 few identical blocks let's EXP explain this with the
382:35 blocks let's EXP explain this with the help of an
382:36 help of an example in our diagram the Top Line
382:39 example in our diagram the Top Line shows the original fixed size blocks of
382:41 shows the original fixed size blocks of a file before any change is done to the
382:44 a file before any change is done to the file the bottom line shows the fixed
382:47 file the bottom line shows the fixed size blocks after a single slight change
382:49 size blocks after a single slight change was done to the
382:51 was done to the file though the data is almost identical
382:54 file though the data is almost identical in both of these lines the blocks have
382:56 in both of these lines the blocks have changed and hence only a few duplicate
382:59 changed and hence only a few duplicate blocks are
383:00 blocks are identified even after d duplication we
383:03 identified even after d duplication we will be storing nine blocks of the same
383:07 will be storing nine blocks of the same file in variable length duplication a
383:10 file in variable length duplication a file is broken into variable sized
383:12 file is broken into variable sized blocks and duplicate blocks are found
383:15 blocks and duplicate blocks are found for
383:16 for deletion variable length D duplication
383:19 deletion variable length D duplication is considered very effective because any
383:21 is considered very effective because any change in an identical file will be
383:23 change in an identical file will be restricted to its block alone and it
383:26 restricted to its block alone and it will not affect other blocks of the
383:28 will not affect other blocks of the file therefore two files with a small
383:31 file therefore two files with a small amount of difference are going to have
383:33 amount of difference are going to have many identical block blocks let's
383:36 many identical block blocks let's explain this with the help of an
383:38 explain this with the help of an example in our diagram the Top Line
383:40 example in our diagram the Top Line shows the original blocks of a file
383:42 shows the original blocks of a file before any change is done to the file
383:45 before any change is done to the file when we added the changes to the Block C
383:48 when we added the changes to the Block C of the file it alone changes and other
383:51 of the file it alone changes and other blocks are not affected the bottom line
383:53 blocks are not affected the bottom line shows the variable sized blocks after a
383:56 shows the variable sized blocks after a single slight change that was done to
383:58 single slight change that was done to the file only the Block C has now
384:01 the file only the Block C has now changed to block F the rest of the
384:03 changed to block F the rest of the blocks and a b d and e are identical to
384:08 blocks and a b d and e are identical to the same blocks in the top
384:11 the same blocks in the top line after d duplication we are only
384:14 line after d duplication we are only storing six blocks of the
384:17 storing six blocks of the file the storage space savings by D
384:19 file the storage space savings by D duplication are depicted by a ratio
384:22 duplication are depicted by a ratio called the D duplication
384:24 called the D duplication ratio the D duplication ratio refers to
384:28 ratio the D duplication ratio refers to the number of bytes input into the data
384:30 the number of bytes input into the data D duplication process divided by the
384:33 D duplication process divided by the number of bytes output from the
384:35 number of bytes output from the process for example if 100 bytes are
384:39 process for example if 100 bytes are input into the data D duplication
384:41 input into the data D duplication process and the output from the process
384:44 process and the output from the process is 10 bytes then our data D duplication
384:47 is 10 bytes then our data D duplication ratio is
384:48 ratio is 10:1 it means that we have saved the
384:51 10:1 it means that we have saved the storage space by 90 bytes that is by
384:59 90% the quantity of storage space saved in a data duplication process depends on
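The ratio just described is a one-line calculation; a small helper (names are my own) makes the 10:1 example concrete:

```python
def dedup_ratio(bytes_in: int, bytes_out: int):
    """Return the deduplication ratio (as a string like '10:1')
    and the percentage of storage space saved."""
    ratio = bytes_in / bytes_out
    saved_pct = (1 - bytes_out / bytes_in) * 100
    return f"{ratio:g}:1", saved_pct

ratio, saved = dedup_ratio(100, 10)
print(ratio, saved)   # 10:1 90.0
```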
385:02 the quantity of storage space saved
385:05 in a data deduplication process depends on
385:08 the characteristics of the data for
385:10 example full backups deduplicate well
385:12 because they contain most of the data
385:16 from the previous backups and only a
385:18 small amount of data would have actually
385:20 changed in order to increase the
385:24 effectiveness of data deduplication it's
385:27 better to avoid compression during data
385:29 backups this is because compressing data
385:32 not only reduces the number of bytes in
385:36 a file but it also randomizes the
385:39 leftover bytes with randomization a file
385:41 cannot be effectively
385:44 deduplicated data deduplication is
385:46 categorized into two types based on the
385:49 place where it occurs these are
385:52 postprocess deduplication and inline
385:56 deduplication in postprocess deduplication
385:57 data is first written to a storage
386:01 device and it is deduplicated at a later
386:04 time let's explain this with the help of
386:07 an example in our diagram we have the
386:10 existing data on our storage
386:15 device let's say the new data G H and I
386:17 arrive then it is immediately written to
386:19 the storage
386:21 device at a later time the data
386:24 deduplication process is carried out on
386:26 all the stored
386:28 data it looks for duplicate data that
386:31 can be deduplicated and in this case it
386:39 has found two copies of G and it is
386:41 deduplicated next it has found two copies
386:44 of H and it is deduplicated this process
386:47 continues until there is no more
386:50 duplicate data the advantage of
386:53 postprocess deduplication is that it
386:55 allows for faster writes since there is
386:58 no deduplication done while storing the
387:01 data the disadvantage of postprocess
387:03 deduplication is that storing duplicate
387:06 data may become an issue if the storage
387:08 device is near full
387:11 capacity in inline deduplication data is
387:14 deduplicated in real time as it enters
387:15 the storage
387:18 device if the deduplication process
387:20 finds a block that already exists on the
387:22 storage device it doesn't store that
387:25 block but rather references it to the
387:28 existing block let's explain this with
387:29 the help of an
387:32 example in our diagram we have the
387:35 existing data on our storage device
387:40 let's say the new data G H and I arrive
387:42 in this case the deduplication process
387:45 is performed as the data arrives before
387:46 it is written to the
387:50 storage so when data G arrives the
387:52 deduplication process checks it against the
387:53 existing
387:56 data and since it already exists the
387:58 data G is not stored but it is
388:01 referenced to the existing data
388:05 G when H arrives the deduplication
388:07 process is performed and since it
388:10 already exists the data H is not stored
388:12 but it is referenced to the existing
388:14 data H when data I arrives the
388:18 deduplication process finds that data I
388:21 doesn't exist so it stores I to the
388:28 storage device the advantage of inline
388:30 deduplication is that it consumes less
388:34 storage because data is not duplicated
388:36 the disadvantage of this approach is
388:38 that data intake can be
388:41 slower and as a result it can affect throughput of the storage device
388:43 throughput of the storage device now let's look at appliance-based
388:46 device now let's look at appliance-based D
388:48 D duplication a duplication Appliance is a
388:51 duplication a duplication Appliance is a computer system with a specific purpose
388:53 computer system with a specific purpose to carry out the duplication process
388:56 to carry out the duplication process using its hardware and
388:57 using its hardware and firmware it is also referred to as
389:00 firmware it is also referred to as Hardware D
389:02 Hardware D duplication D duplication appliances can
389:05 duplication D duplication appliances can perform either postprocess or inline D
389:08 perform either postprocess or inline D duplication
389:09 duplication processes it also offloads the D
389:12 processes it also offloads the D duplication load from the existing
389:14 duplication load from the existing systems the D duplication Appliance
389:17 systems the D duplication Appliance accepts data streams from multiple
389:19 accepts data streams from multiple sources to perform the D duplication
389:22 sources to perform the D duplication process and the resulting D duplicated
389:24 process and the resulting D duplicated data can coexist on its disk storage it
389:28 data can coexist on its disk storage it is a high performance and high cost
389:31 is a high performance and high cost solution now let's look at
389:33 solution now let's look at software-based DD
389:35 software-based DD duplication software-based D duplication
389:38 duplication software-based D duplication doesn't have dedicated Hardware to carry
389:40 doesn't have dedicated Hardware to carry out the D
389:41 out the D duplication since the software does the
389:44 duplication since the software does the duplication process it needs to be
389:46 duplication process it needs to be installed on a host the installed
389:49 installed on a host the installed software uses the processing power of
389:51 software uses the processing power of the host to carry out the D duplication
389:54 the host to carry out the D duplication process it is a low performance and
389:56 process it is a low performance and lowcost solution and that brings us to
389:59 lowcost solution and that brings us to the end of this lesson let's summarize
390:02 the end of this lesson let's summarize what you have learned in this lesson in
390:04 what you have learned in this lesson in this lesson you learned about a popular
390:06 this lesson you learned about a popular capacity optimization method called Data
390:08 capacity optimization method called Data D
390:09 D duplication we started by looking at the
390:11 duplication we started by looking at the need for capacity optimization and then
390:14 need for capacity optimization and then we looked at the popular capacity
390:16 we looked at the popular capacity optimization method called Data
390:18 optimization method called Data duplication when we covered data D
390:21 duplication when we covered data D duplication we looked at what D
390:23 duplication we looked at what D duplication is and then we looked at the
390:26 duplication is and then we looked at the methods of data D duplication these are
390:29 methods of data D duplication these are file level and block level
390:37 we also looked at D duplication ratio and then we looked at the
390:38 and then we looked at the characteristics of data which have an
390:40 characteristics of data which have an impact on data D
390:42 impact on data D duplication after that we talked about
390:44 duplication after that we talked about the effectiveness of data
390:46 the effectiveness of data duplication and then we looked at the
390:48 duplication and then we looked at the types of data
390:51 types of data duplication under types we looked at
390:53 duplication under types we looked at post-process D duplication and inline D
391:01 duplication the last thing we covered was appliance-based duplication
391:04 was appliance-based duplication and software-based D
391:07 and software-based D duplication in the next lesson you will
391:09 duplication in the next lesson you will learn about compression and thin
391:11 learn about compression and thin provisioning thank you for watching
391:39 hello and welcome to unit 2 capacity
391:41 optimization methods part
391:44 two in this lesson you will learn about
391:48 the two popular capacity optimization
391:51 methods these are compression and thin
391:53 provisioning when we cover compression
391:56 we will look at what compression is and
391:58 then we will look at the advantages of
392:00 compression we will also look at the
392:03 types of compression in detail and these
392:08 are postprocess compression and realtime
392:10 compression the last thing we will talk
392:12 about regarding compression is about
392:15 applying deduplication before
392:17 compression next we will cover thin
392:19 provisioning under this topic we will
392:22 look at what thick provisioning is and
392:23 then we will look at what thin
392:25 provisioning is we will also look at
392:27 what overprovisioning is and lastly we
392:30 will talk about zero page
392:34 reclaim now let's look at what
392:36 compression is compression is the
392:38 ability to convert data into a smaller
392:41 size by eliminating similar blocks in
392:43 the original
392:46 data compression encodes the data using
392:49 fewer bits than the original
392:51 data let's explain this with the help of
392:55 an example in our example let's say the
392:58 original data is in binary format as
393:01 shown in the slide then if we encode our
393:03 original data with fewer bits our
393:07 compressed data may look like 1 1
393:11 0 5 * 1 0 1 4 * 0 and 1 as shown in the
393:17 slide what we have done is eliminated
393:19 similar bits of ones and encoded them as
393:24 5 * 1 likewise we have also eliminated
393:27 similar bits of zeros and encoded them as
393:32 4 * 0 this is the underlying concept of
393:36 compression the advantage of compression
393:38 is that it allows data to consume less
393:41 storage space
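The run-collapsing idea in this example is essentially run-length encoding; here is a minimal sketch (my own toy function, not code from the course, and it also collapses runs of two):

```python
from itertools import groupby

def rle(bits: str) -> str:
    """Collapse each run of identical symbols into count*symbol,
    keeping single symbols as-is (run-length encoding)."""
    out = []
    for symbol, run in groupby(bits):
        n = len(list(run))
        out.append(f"{n}*{symbol}" if n > 1 else symbol)
    return " ".join(out)

print(rle("110111110100001"))   # 2*1 0 5*1 0 1 4*0 1
```

Real compressors use far more sophisticated schemes (dictionary and entropy coding), but the space saving comes from the same idea of representing repeated content more compactly.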
393:45 compression is categorized into
393:47 two types based on the place where it
393:50 occurs these are postprocess compression
393:53 and realtime
393:56 compression in postprocess compression
393:58 data is first written to a storage
394:00 device such as a storage array in an
394:03 uncompressed manner and later the data
394:10 is compressed this approach is considered
394:12 inefficient as it consumes significant
394:15 processing power because of the intensive
394:18 disk operations in realtime compression
394:21 data is compressed in real time as it is
394:22 stored in the storage
394:25 device this approach is considered
394:27 efficient since it consumes less
394:31 processing power because of reduced disk
394:33 operations now let's talk about applying
394:35 deduplication before
394:37 compression a compressed file is not
394:40 suitable for deduplication because
394:42 compression will remove identical blocks
394:43 of
394:45 data so for this reason compression
394:48 should be applied after deduplication
394:50 and not the other way
394:52 around before we talk about thin
394:54 provisioning let's talk about
394:56 traditional storage allocation
394:58 methods in the traditional storage
395:01 allocation method storage capacity is
395:04 allocated upfront to individual servers
395:06 this traditional model is often referred
395:09 to as thick or fat
395:13 provisioning now let's talk about thin
395:15 provisioning thin provisioning is a
395:17 capacity optimization method that
395:20 optimizes the consumption of existing
395:23 storage thin provisioning is the
395:26 on-demand allocation of the storage space
395:29 based on the actual need it does not
395:31 allocate the storage space up front as
395:34 in traditional storage deployment
395:36 we will explain thick and thin
395:38 provisioning with the help of an
395:40 example using the traditional thick
395:43 provisioning model let's say we create a
395:46 volume of capacity 1,000 GB and at the
395:49 creation time 1,000 GB of physical
395:51 storage space on the storage is
395:53 immediately reserved for the exclusive
395:56 use of that entire volume irrespective
395:59 of how much space will really be used if
396:03 only 250 GB of the 1,000 GB is actually
396:05 used for writing data then the unused
396:08 allocated capacity is 750
396:12 GB unfortunately we have only 25% of
396:15 storage capacity utilization and the
396:17 remaining unused capacity cannot be used
396:19 for creating new volumes since it is
396:21 tied to the thick
396:26 volume as a result storage capacity gets
396:28 wasted now using the thin provisioning
396:31 model let's say we create a volume of
396:34 capacity 1,000 GB at the creation time
396:37 there's no reservation of 1,000 GB of
396:40 physical storage space on the storage
396:42 but the application sees it as physical
396:45 storage capacity of 1,000 GB the
396:47 physical storage space is allocated on
396:50 demand only when an application writes
396:51 data to the
396:56 volume if 250 GB of the 1,000 GB is
396:59 actually used for writing data then only
397:02 250 GB of physical storage space is
397:05 allocated as a result storage capacity
397:07 doesn't get wasted the unallocated
397:09 physical storage capacity on a storage
397:13 array can be used for creating new
397:15 volumes as a result thin provisioning
397:17 uses all of the physical storage
397:20 capacity to create logical
397:23 volumes with thin provisioning storage
397:25 capacity utilization can be geared
397:27 towards achieving 100% with very little administrative cost
397:30 little administrative cost now let's talk about
397:32 cost now let's talk about over-provisioning
397:34 over-provisioning thin provisioning provides a mechanism
397:36 thin provisioning provides a mechanism that allows applications to see more
397:38 that allows applications to see more storage capacity than what is physically
397:40 storage capacity than what is physically available on the storage array this is
397:43 available on the storage array this is called over
397:44 called over provisioning we will explain this with
397:46 provisioning we will explain this with the help of an
397:47 the help of an example in our example let's say the
397:50 example in our example let's say the array has a usable physical storage
397:52 array has a usable physical storage capacity of 50
397:54 capacity of 50 terabytes and that we had provisioned
397:56 terabytes and that we had provisioned all of the 50 terabytes for creating
397:58 all of the 50 terabytes for creating thin
397:59 thin volumes over a period of time let's say
398:02 volumes over a period of time let's say only 20 terabytes has been
398:04 only 20 terabytes has been used even though we provisioned 100% of
398:07 used even though we provisioned 100% of physical storage capacity we still have
398:09 physical storage capacity we still have 30 terabyt of unallocated physical
398:12 30 terabyt of unallocated physical capacity on the
398:14 capacity on the array so if the business needs more
398:17 array so if the business needs more storage capacity then instead of
398:19 storage capacity then instead of purchasing and adding new physical
398:21 purchasing and adding new physical storage capacity we can over-provision
398:24 storage capacity we can over-provision the existing physical storage capacity
398:26 the existing physical storage capacity by creating new thin volumes and having
398:29 by creating new thin volumes and having them use 30 terab of unallocated
398:31 them use 30 terab of unallocated physical capacity that was is already
398:38 provisioned over-provisioning is done based on the past average storage
398:41 based on the past average storage utilization so we can safely allocate 30
398:44 utilization so we can safely allocate 30 terabyt of thin volumes making the
398:46 terabyt of thin volumes making the storage array over
398:48 storage array over provisioned however it is important to
398:51 provisioned however it is important to closely monitor the physical storage
398:52 closely monitor the physical storage capacity when it is over-provisioned
398:55 capacity when it is over-provisioned because when the physical storage
398:57 because when the physical storage capacity is exhausted the servers will
398:59 capacity is exhausted the servers will receive right
399:01 receive right errors over a period of time when files
399:04 errors over a period of time when files are deleted in thin volumes storage
399:06 are deleted in thin volumes storage arrays will identify those blocks and
399:09 arrays will identify those blocks and release them back into the free pool of
399:11 release them back into the free pool of physical storage
399:12 physical storage capacity this technology is called zero
399:15 capacity this technology is called zero page
399:17 page reclaim and that brings us to the end of
399:19 reclaim and that brings us to the end of this lesson let's summarize what we have
399:21 this lesson let's summarize what we have learned in this lesson in this lesson
399:24 learned in this lesson in this lesson you learned about the two popular
399:26 you learned about the two popular capacity optimization methods these are
399:28 capacity optimization methods these are compression and thin
399:30 compression and thin provisioning when we covered compression
399:33 provisioning when we covered compression we looked at What compression was and
399:35 we looked at What compression was and then we looked at the advantages of
399:37 then we looked at the advantages of compression we also looked at the types
399:39 compression we also looked at the types of compression in detail these are
399:42 of compression in detail these are post-process compression and real time
399:44 post-process compression and real time compression the last thing about
399:46 compression the last thing about compression we talked about was applying
399:48 compression we talked about was applying duplication before
399:51 duplication before compression next we covered thin
399:53 compression next we covered thin provisioning under this topic we looked
399:55 provisioning under this topic we looked at what thick provisioning was and then
399:58 at what thick provisioning was and then we looked at what thin provisioning was
400:00 we looked at what thin provisioning was we also looked at what over provisioning
400:02 we also looked at what over provisioning was and and lastly we talked about zero
400:05 was and and lastly we talked about zero page
400:06 page reclaim in the next lesson you will
400:08 reclaim in the next lesson you will learn about Lun provisioning techniques
400:11 learn about Lun provisioning techniques thank you for watching
400:36 hello and welcome to unit 3 Lun provisioning
400:37 provisioning techniques in this lesson you will learn
400:39 techniques in this lesson you will learn about the Lun provisioning
400:41 about the Lun provisioning techniques we're going to start by
400:43 techniques we're going to start by looking at what Lun provisioning is and
400:46 looking at what Lun provisioning is and then we will look at a logical unit we
400:48 then we will look at a logical unit we will also compare Lun with
400:51 will also compare Lun with volume next we will look at Lun masking
400:54 volume next we will look at Lun masking and in specific we will look at Lun
400:56 and in specific we will look at Lun masking in servers and storage
400:59 masking in servers and storage arrays now let's look at what Lun
401:01 arrays now let's look at what Lun provisioning is lond provisioning is the
401:04 provisioning is lond provisioning is the process of allocating the physical
401:06 process of allocating the physical storage capacity of a storage array to
401:09 storage capacity of a storage array to the server
401:10 When we spoke about the SAN storage array, we mentioned how the storage is presented to the server computers. The physical storage capacity of a storage array has to be shared among the host servers, so it is partitioned into one or more logical disks assigned to the servers. These logical disks appear to the servers as local disks. The logical disk, or the logical unit as it is usually called, is identified by a unique number called the logical unit number, or LUN.
401:48 It should be noted that the logical unit itself is commonly referred to as a LUN, and the logical unit number as a LUN ID. So when you hear someone say LUN, they may actually be referring to a logical unit instead of the logical unit number, and when they mention a LUN ID, they are referring to the logical unit number.
402:07 referring to the logical unit number a logical unit can constitute a
402:10 number a logical unit can constitute a complete physical disc or a portion of
402:12 complete physical disc or a portion of the physical disc or a portion of the
402:14 the physical disc or a portion of the entire physical storage capacity of a
402:16 entire physical storage capacity of a storage array or even an entire physical
402:19 storage array or even an entire physical storage capacity
402:26 itself so a logical unit can potentially span across a number of physical discs
402:28 span across a number of physical discs on a storage array and yet it will
402:31 on a storage array and yet it will appear as just one disc to the server
402:34 appear as just one disc to the server it should be noted that the storage
402:35 it should be noted that the storage capacity as seen by the server will be
402:38 capacity as seen by the server will be the same amount of physical storage
402:39 the same amount of physical storage capacity assigned to the
402:46 The terms LUN and volume should not be confused with each other. While a LUN is a unique number assigned to a logical unit, a volume is a broad term that denotes a contiguous area on a storage device and includes LUNs and partitions. Logical units play a vital role in the management of storage in the SAN storage array.
403:08 sand storage array they not only provide a logical
403:11 array they not only provide a logical abstraction between the host computers
403:13 abstraction between the host computers and the hard disk drives of the storage
403:15 and the hard disk drives of the storage array but logical units improve storage
403:19 array but logical units improve storage utilization for example if a server
403:21 utilization for example if a server needs only 500 GB of storage space and
403:25 needs only 500 GB of storage space and if the available physical disc is 2
403:27 if the available physical disc is 2 terabytes then without the logical units
403:30 terabytes then without the logical units we would have to allocate the entire 2
403:32 we would have to allocate the entire 2 terabytes to that
403:38 server on the other hand we can create a logical unit with 500 GB of storage
403:40 logical unit with 500 GB of storage capacity and allocate it to the
403:44 capacity and allocate it to the server the remaining 1.5 terabytes of 2
403:47 server the remaining 1.5 terabytes of 2 terabytes can be allocated to different
403:50 terabytes can be allocated to different servers in addition to that logical unit
403:54 servers in addition to that logical unit numbers function as logical identifiers
403:56 numbers function as logical identifiers that differentiate different logical
403:59 that differentiate different logical Diss and are used to assign access and
404:02 Diss and are used to assign access and control print
404:03 control print VES now let's talk about Lun
404:06 VES now let's talk about Lun masking Lun masking provides the servers
404:09 masking Lun masking provides the servers with restricted access to the logical
404:11 with restricted access to the logical units of the storage
404:13 units of the storage array it determines which logical units
404:16 array it determines which logical units can be accessed by which
404:19 can be accessed by which servers now let's talk about how Lun
404:21 servers now let's talk about how Lun masking is implemented in servers the
404:24 masking is implemented in servers the implementation of Lun masking in a
404:26 implementation of Lun masking in a server requires that the storage array
404:29 server requires that the storage array controller supports multi-one capability
404:36 Software is installed in the server that provides the LUN masking capability. This software allows the server to be configured so that it can access all the logical units that are assigned to it and ignore the rest of the logical units.
404:50 The LUN masking function is carried out by one of the following: the driver and the HBA of the server, or the operating system of the server itself. This kind of LUN masking is ideally suited for environments that have a few servers connected to many heterogeneous storage devices.
405:09 The drawback of implementing LUN masking in a server is that the server can actually see all the logical units, but it will ignore the ones that are not assigned. LUN masking at the server level is reliable, but it requires that all the servers have a common administration.
405:32 Now let's talk about how LUN masking is implemented in the storage array. LUN masking is implemented at the front-end controller of the storage array. Using LUN masking, we can bind a server to a specific LUN so that no other servers can access it. By implementing LUN masking, the servers can access only the LUNs they have been specifically assigned. It is important to note that servers are not aware of LUNs that were not assigned to them, and even if they are aware of such LUNs, they still cannot access them.
406:06 luns they still cannot access them we will explain Lun masking with
406:08 them we will explain Lun masking with the help of an example in our example we
406:11 the help of an example in our example we have logical units of a storage array
406:14 have logical units of a storage array that store data of the production
406:15 that store data of the production department and the accounts
406:18 department and the accounts Department in this case Lun masking is
406:20 Department in this case Lun masking is not implemented so both the Departments
406:23 not implemented so both the Departments can access each other's
406:25 can access each other's data however this should not be the case
406:28 data however this should not be the case if it violates the company's policy on
406:30 if it violates the company's policy on data integrity and security
406:34 data integrity and security so by implementing Lun masking we can
406:36 so by implementing Lun masking we can ensure that the Departments can access
406:38 ensure that the Departments can access only their respective
406:40 only their respective luns that brings us to the end of this
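As a rough sketch of the idea, LUN masking can be thought of as an access table on the front-end controller that maps each server to the LUNs it is allowed to see. The server names and LUN IDs below are made up for illustration; this is not any real array's interface.

```python
# Toy LUN masking table: server -> set of LUN IDs it has been assigned.
masking_table = {
    "production-server": {0, 1},   # production department's LUNs
    "accounts-server": {2, 3},     # accounts department's LUNs
}

def can_access(server, lun_id):
    """A masked LUN is simply invisible to servers it is not assigned to."""
    return lun_id in masking_table.get(server, set())

print(can_access("production-server", 0))  # -> True
print(can_access("accounts-server", 0))    # -> False
```

With the table in place, each department's server can reach only its own LUNs, matching the policy described in the example.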
406:42 That brings us to the end of this lesson. Let's summarize what you have learned. In this lesson you learned about LUN provisioning techniques. We started by looking at what LUN provisioning is, and then we looked at a logical unit. We also compared a LUN with a volume. Next we looked at LUN masking, and specifically LUN masking in servers and storage arrays. In the next lesson you will learn about storage virtualization. Thank you for watching.
407:37 Hello and welcome to Unit 4, Storage Virtualization. In this lesson you will learn about storage virtualization. We're going to start by looking at what storage virtualization is, and then we will look at the benefits of storage virtualization. Next we will look at a form of storage virtualization called host-based virtualization. Host-based virtualization requires software called a logical volume manager, or LVM, so we will see how LVM works, and then we will look at the features of the logical volume manager. After that we will look at storage controller based virtualization and its benefits. Next we will look at virtual provisioning, and then we will compare virtual provisioning with thin provisioning. Lastly we will look at the benefits of virtual provisioning.
408:30 Now let's look at what storage virtualization is. Storage virtualization is the abstraction of physical storage from the servers connected to it. It aggregates multiple physical storage devices, such as RAID arrays, disks, and tape drives, into logical storage pools that can be shared among applications more efficiently. The logical storage so shared functions like physical storage. Examples of storage virtualization are host-based logical volume management, disk virtualization, and tape library virtualization. Storage virtualization also enables application- and network-independent management of storage by hiding the underlying complexities of configuration and management of individual physical storage devices.
409:30 Now let's look at the benefits of storage virtualization. Storage virtualization increases the utilization of storage resources; this is done by aggregating the storage capacity of multiple physical storage devices into a pool of logical storage. Storage virtualization improves the performance of storage resources: the aggregation of multiple physical disks also results in the aggregation of IOPS, which improves performance. Storage virtualization allows us to provision the storage needs of an application without having to manually configure the specific storage hardware, and an application's storage capacity can be managed without bringing the application down.
410:17 Host-based virtualization is a form of storage virtualization that requires software called a logical volume manager to be running in the host. The logical volume manager is also referred to as a volume manager, and it is either available as part of the operating system or as a separate product. In the context of a logical volume manager, the physical storage is referred to as physical volumes. The logical volume manager aggregates the physical storage into a logical storage pool called a volume group; the host sees a volume group as if it were a physical hard disk. The logical volume manager then divides the volume group into one or more logical volumes, and the host sees a logical volume as if it were a partition on the physical hard disk. The created logical volumes are assigned to the host applications. The logical volume manager provides a virtualization layer that maps the physical drives to the pool of logical storage.
411:27 logical storage we will explain this with the
411:29 storage we will explain this with the help of an
411:30 help of an example in our diagram we have four 500
411:34 example in our diagram we have four 500 GB physical disc drives or physical
411:36 GB physical disc drives or physical volumes that can either be directly
411:39 volumes that can either be directly attached to the host or these could be
411:41 attached to the host or these could be from a sand attached storage
411:45 from a sand attached storage array the logical volume manager or lvm
411:49 array the logical volume manager or lvm virtualizes the four 500 GB physical
411:52 virtualizes the four 500 GB physical disc drives into a single aggregated
411:55 disc drives into a single aggregated volume group of 2
411:58 volume group of 2 terabytes the volume group is then
412:00 terabytes the volume group is then divided into two one terabyte logical
412:04 divided into two one terabyte logical volumes these logical volumes with a
412:07 volumes these logical volumes with a file system on it will be presented to
412:09 file system on it will be presented to the
412:14 application so the logical volume manager creates one or more logical
412:16 manager creates one or more logical volumes by virtualizing the physical
412:19 volumes by virtualizing the physical storage available to the host either
412:21 storage available to the host either locally or from a San attached storage
412:24 locally or from a San attached storage array let's look at the features of the
412:27 array let's look at the features of the logical volume
412:29 logical volume manager the volume groups and the
412:31 manager the volume groups and the logical volume can be managed even when
412:34 logical volume can be managed even when the logical volume manager is used by
412:36 the logical volume manager is used by the
412:37 the host it is possible to move a logical
412:40 host it is possible to move a logical volume during runtime to a different
412:42 volume during runtime to a different physical
412:44 physical location it's possible to increase the
412:47 location it's possible to increase the storage of an existing logical volume
412:49 storage of an existing logical volume during
412:50 during runtime it's also possible to group
412:53 runtime it's also possible to group physical discs belonging to different
412:54 physical discs belonging to different storage subsystems into one volume
412:58 storage subsystems into one volume group logical volume manager is not
413:01 group logical volume manager is not dependent on any any particular storage
413:03 dependent on any any particular storage system and treats everything as
413:07 system and treats everything as storage now let's talk about storage
413:09 Now let's talk about storage controller based virtualization. In this type of storage virtualization, the storage array that provides controller-based virtualization has several heterogeneous storage arrays connected to its controllers. As a result, the storage array receives external storage, which it manages just like its own storage drives. The storage controller presents the external storage to the host as if it belonged to this storage array. This type of virtualization is block-based, hence it works with block-based logical units.
413:50 block-based logical units we will explain the storage
413:52 units we will explain the storage controller based virtualization with the
413:54 controller based virtualization with the help of an
413:55 help of an example in the diagram we have two
413:58 example in the diagram we have two storage arrays whose storage needs to be
414:00 storage arrays whose storage needs to be virtualized
414:02 virtualized we have a few logical units configured
414:04 we have a few logical units configured on each of these storage
414:07 on each of these storage arrays in the center of our diagram we
414:10 arrays in the center of our diagram we have the storage array with controllers
414:12 have the storage array with controllers that are capable of virtualizing the
414:14 that are capable of virtualizing the other storage
414:16 other storage arrays the storage arrays from vendor a
414:19 arrays the storage arrays from vendor a and vendor B are connected to the
414:22 and vendor B are connected to the virtualization controllers of the
414:23 virtualization controllers of the storage array that is at the
414:25 storage array that is at the center this results in the storage of
414:28 center this results in the storage of storage arrays being presented to the
414:30 storage arrays being presented to the virtualization controllers which turn
414:33 virtualization controllers which turn virtualizes them into a pool of logical
414:36 virtualizes them into a pool of logical storage the virtualization controller
414:39 storage the virtualization controller then creates The Logical units from the
414:41 then creates The Logical units from the pool of logical storage and presents it
414:44 pool of logical storage and presents it to the host as if it belonged to this
414:46 to the host as if it belonged to this storage
414:48 storage array so that's controller-based storage
414:52 array so that's controller-based storage virtualization now let's look at the
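A minimal sketch of the pooling step described above, with made-up array names and LUN sizes. Real virtualization controllers work at the block level rather than on simple size records; this only illustrates the capacity accounting.

```python
# Toy model: a virtualization controller pools LUNs discovered on external
# arrays and re-exports new LUNs to hosts from the combined pool.
external_arrays = {
    "vendor-a-array": [300, 300],   # LUN sizes in GB on each external array
    "vendor-b-array": [400, 200],
}

# The controller sees all external capacity as one pool of logical storage.
pool_gb = sum(size for luns in external_arrays.values() for size in luns)
print(pool_gb)   # -> 1200

# New LUNs carved from the pool look to the host like the
# virtualizing array's own storage.
host_luns_gb = [500, 500]
assert sum(host_luns_gb) <= pool_gb
```

The host never sees vendor A's or vendor B's arrays directly, only the LUNs the central controller presents from the pool.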
414:54 Now let's look at the benefits of controller-based storage virtualization. It simplifies the management of heterogeneous storage arrays by consolidating management, replication, and availability tools. The enterprise storage array that provides controller-based storage virtualization can offer its features, such as replication, partition migration, and thin provisioning, to the virtualized storage arrays. The enterprise storage array with virtualization embedded in its controller has a large cache that can increase the performance of the external storage attached to it. Controller-based storage virtualization also helps in non-disruptive migration of data, without any application downtime.
415:46 Now let's talk about virtual provisioning. Virtual provisioning allows a logical unit to present more capacity to the host than its actual allocated physical storage capacity on the storage array. The physical storage capacity is allocated on demand, based on the actual need of the application. Virtual provisioning allows for oversubscription on the storage array by presenting applications with more storage capacity than is physically available. Virtual provisioning is more than thin provisioning, because it also includes tools for management and monitoring of logical units.
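The on-demand allocation described above can be sketched as follows. The class is a toy model for illustration, not any vendor's implementation; the sizes are made up.

```python
class ThinLUN:
    """Toy thin-provisioned LUN: advertises a large capacity to the host,
    but consumes physical capacity only as data is actually written."""

    def __init__(self, advertised_gb):
        self.advertised_gb = advertised_gb   # what the host sees
        self.consumed_gb = 0                 # physical capacity actually used

    def write(self, gb):
        if self.consumed_gb + gb > self.advertised_gb:
            raise ValueError("LUN is full from the host's point of view")
        self.consumed_gb += gb               # physical blocks allocated on demand

# Host sees a 1 TB LUN, but only 50 GB of physical storage is consumed so far.
lun = ThinLUN(1000)
lun.write(50)
print(lun.advertised_gb, lun.consumed_gb)   # -> 1000 50
```

Oversubscription is the array-level version of the same idea: the advertised sizes of all thin LUNs together may exceed the physical capacity actually installed.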
416:32 The benefits of virtual provisioning are that it increases storage utilization, improves performance, and simplifies storage management when increasing capacity.
416:44 And that brings us to the end of this lesson. Let's summarize what you have learned. In this lesson you learned about storage virtualization. We started by looking at what storage virtualization is, and then we looked at the benefits of storage virtualization. Next we looked at a form of storage virtualization called host-based virtualization. Host-based virtualization requires software called a logical volume manager, or LVM, and we saw how LVM works; then we looked at the features of the logical volume manager. After that we looked at storage controller based virtualization and its benefits. Next we looked at virtual provisioning, and then we compared virtual provisioning with thin provisioning. Lastly we looked at the benefits of virtual provisioning. In the next lesson you will learn about the monitoring and alerting capabilities needed to manage the storage infrastructure. Thank you for watching.
418:12 Hello and welcome to Unit 5, Monitoring and Alerting. In this lesson you will learn about the monitoring and alerting capabilities needed to manage the storage infrastructure. We're going to start by looking at the need for monitoring and alerting capabilities, and then we will talk about monitoring. After that we will look at trending and forecasting, and then we will look at capacity planning. Next we will look at what alerting is, and then we will look at the types of alerts: these are the critical alert, the warning alert, and the information alert. We will also talk about the alerting methods, and these are SNMP traps, email, SMS, and the call home option. After that we will talk about the audit log, and then the role of the Network Time Protocol in audit logging. We will also talk about the advantages and importance of audit logs. Next we will talk about syslog, and lastly we will talk about the benefits of syslog.
419:28 Now let's look at the need for monitoring and alerting capabilities. We need monitoring and alerting capabilities to address common concerns such as the following: how can we identify whether a performance issue is because of storage? How can we identify storage bottlenecks before they slow down the applications? And how can we better understand the storage needs of the organization?
419:58 Monitoring data growth is essential for ensuring that applications don't run out of disk capacity. Without the ability to monitor the storage capacity, the storage administrators would not know when to provision new disks. This problem is addressed by the use of trend reports, provided by the storage management tools, that forecast disk utilization.
420:26 Trending is the analysis of data to discover recognizable patterns that indicate a particular type of behavior. Forecasting is the prediction, based on those patterns, of how a particular behavior will impact the business. Depending on the historic and current storage usage trends, storage administrators can forecast the future storage needs of a business. Let's explain this with the help of an example.
420:54 On the slide you can see a monthly trend report, which depicts the monthly storage use and the total available capacity in gigabytes. The blue line shows the monthly storage consumption, and the purple line shows the total available capacity. The blue line shows a trend, and to be specific it is trending upwards; this means that the storage consumption is increasing on a month-to-month basis. Now, if we want to know how much storage space should be allocated in the month of December, we can add a trend line based on the previous months' data. As you can see in the graph, the trend line allows us to forecast that a storage capacity of 80 GB is likely to be used in the month of December. This report allows the storage administrator to track the storage usage against the total storage capacity.
421:56 against the total storage capacity the next step after trending
421:58 capacity the next step after trending and forecasting is capacity planning
422:02 and forecasting is capacity planning in capacity planning we allocate storage
422:05 in capacity planning we allocate storage capacity to satisfy the current and
422:07 capacity to satisfy the current and future needs of the
422:10 future needs of the business now we will talk about
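The trending line described above can be sketched as a simple least-squares fit over past monthly usage. This is an illustrative sketch only; the monthly figures below are made up, not the actual numbers from the slide.

```python
# Least-squares linear trend over monthly storage usage (illustrative figures).
def linear_trend(samples):
    """Fit y = a + b*x by ordinary least squares and return (a, b)."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples)) \
        / sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

# Hypothetical usage in GB for January..June (month index 0..5)
usage_gb = [36, 40, 44, 48, 52, 56]
a, b = linear_trend(usage_gb)
december = a + b * 11          # month index 11 = December
print(f"forecast for December: {december:.0f} GB")  # → forecast for December: 80 GB
```

A real storage management tool does the same extrapolation from the collected capacity history, just with more data points and usually a confidence margin on top.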
422:10 Now we will talk about alerting. While monitoring the storage infrastructure, we want certain conditions to be brought to our notice. Alerting is a mechanism that notifies the designated users upon the occurrence or non-occurrence of an event. The notification so triggered is called an alert. For example, when storage allocation or utilization reaches a predefined threshold, the designated users are notified about it.
422:42 Based on the severity of the situation, alerts can be categorized into three types: critical alert, warning alert, and information alert. A critical alert is one that needs to be addressed immediately because it may negatively affect the business. For example, a storage array that is running out of capacity requires immediate action. A warning alert is one that needs to be addressed, but not immediately. For example, when a disk drive is running low on disk space, it can be addressed at a later time, but it should be addressed before it becomes critical. An information alert is one that doesn't require any action. For example, if a backup is successful, it triggers an information alert.
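The three severity levels can be illustrated with a small threshold check; the 80% and 95% cut-offs here are hypothetical values chosen for the sketch, not figures prescribed by the course.

```python
# Classify a capacity condition into the three alert severities.
# The 80%/95% thresholds are hypothetical, for illustration only.
def classify_alert(used_gb, total_gb, warn=0.80, critical=0.95):
    utilization = used_gb / total_gb
    if utilization >= critical:
        return "critical"     # address immediately: array nearly full
    if utilization >= warn:
        return "warning"      # address soon, before it becomes critical
    return "information"      # no action required

print(classify_alert(96, 100))  # → critical
print(classify_alert(85, 100))  # → warning
print(classify_alert(40, 100))  # → information
```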
423:32 Now let's talk about the alerting methods. Alerting is commonly done through the following: SNMP traps, email, SMS, and the call home option.
423:44 SNMP stands for Simple Network Management Protocol. It is an application layer protocol meant for exchanging management data between the devices on a network. It is specifically used to monitor and manage devices on the network, such as routers, switches, servers, storage arrays, and so on.
424:08 An SNMP implementation consists of the following: a managed device, an SNMP agent, and an SNMP manager. A managed device is the device on the network that requires some kind of monitoring and management. For example, a managed device could be a storage array. An SNMP agent is a program that runs on the managed device. It collects information from the device and sends it to the SNMP manager. An SNMP manager is typically a computer that runs a network management system. It communicates with the SNMP agent. The management information base, or MIB, is a database maintained by an SNMP agent. It contains the information about the managed device, which is shared by both the SNMP agent and the SNMP manager.
425:07 SNMP is typically enabled in a storage system. Whenever a specific event occurs in the storage system, the SNMP agent running on it will notify the SNMP manager by sending a message. Since this message is said to trap an event, it is called an SNMP trap. When the SNMP manager receives the event, it takes action depending on the event.
425:36 event it takes action depending on the event in email and SMS alerting methods
425:39 event in email and SMS alerting methods when an event for which we had
425:41 when an event for which we had configured the alert occurs the device
425:43 configured the alert occurs the device will send out an email or SMS to the
425:46 will send out an email or SMS to the designated
425:48 designated users the call home alerting method
425:51 users the call home alerting method refers to the ability of some high-end
425:53 refers to the ability of some high-end storage arrays to send encrypted email
425:56 storage arrays to send encrypted email alerts to the vendor support center
425:58 alerts to the vendor support center using SSL to notify of the Fall
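An email alert like the one above can be assembled with Python's standard email module. The addresses below are placeholders, and actually delivering the message would additionally require `smtplib` and a reachable SMTP server.

```python
# Build an email alert with the standard library; addresses are placeholders.
from email.message import EmailMessage

def make_alert_email(severity, detail):
    msg = EmailMessage()
    msg["From"] = "storage-array@example.com"
    msg["To"] = "storage-admin@example.com"
    msg["Subject"] = f"[{severity.upper()}] storage alert"
    msg.set_content(detail)
    return msg

alert = make_alert_email("critical", "Array array01 is at 97% capacity.")
print(alert["Subject"])  # → [CRITICAL] storage alert
```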
426:03 Now we will talk about the audit log. Audit logs provide a comprehensive history of a user's activity, such as the time of access and the changes made by the user on the applications and devices. Since the activities are tracked against date and time, it is necessary to use a reliable and trusted time source, such as the Network Time Protocol, or NTP, service. The NTP service can be configured on each system to regularly synchronize with the date and time of a common NTP server, so that all the systems run with the same date and time.
426:41 Let's look at the advantages of using audit logs. Audit logs help administrators to find out about any suspicious activity on the network or on the system. Audit logs provide the information necessary to validate the enforcement of security policies. Audit logs provide the information necessary for root cause analysis after a security incident.
427:11 Some audit logs, such as those related to security events, must be protected from unauthorized access, since they can be tampered with or deleted. In addition to that, audit logs can also be protected from incidents such as device failures by implementing cross-platform logging solutions such as syslog. Syslog is a protocol for exchanging log messages. It can be used by the devices on the network to move audit logs to a central logging server called a syslog server. Syslog allows the consolidation of audit logs from multiple devices into a single place.
427:53 And that brings us to the end of this lesson. Let's summarize what you have learned. In this lesson you learned about the monitoring and alerting capabilities needed to manage the storage infrastructure. We started by looking at the need for monitoring and alerting capabilities, and then we talked about monitoring. After that we looked at trending and forecasting, and then we looked at capacity planning. Next we looked at what alerting is, and then we looked at the types of alerts: critical alert, warning alert, and information alert. We also talked about the alerting methods: SNMP traps, email, SMS, and the call home option. After that we talked about the audit log, and then we talked about the role of the Network Time Protocol in audit logging. We also talked about the advantages and importance of audit logs. Next we talked about syslog, and lastly we talked about the benefits of syslog.
429:04 In the next lesson you will learn about storage performance. Thank you for watching.
429:37 Hello and welcome to Unit One, Storage Performance. In this lesson you will learn about storage performance. We're going to start by looking at what storage performance is, and then we will look at the basic terms used in storage performance: IO, IO request, sequential IO, random IO, throughput, and latency.
430:02 Next we will look at what a cache is, and then we will look at the types of cache: volatile cache and non-volatile cache. After that we will talk about the write operations to cache. The write operation to cache is done in two ways: write-back and write-through. We will also look at the read operation using cache, and specifically we will look at cache hit and cache miss situations.
430:34 Next we will look at storage tiering and the purpose of storage tiering. After that we will look at the types of storage tiering, and these are manual storage tiering and automatic storage tiering. We will also talk about an automatic storage tiering solution called hierarchical storage management, or HSM.
430:57 Next we will look at what a storage profile is, and then we will look at what a partition is. After that we will talk about clusters, and then we will talk about partition alignment. We'll also talk about file fragmentation and defragmentation. Next we will look at what baselining is, and lastly we will look at the advantages of baselining.
431:20 Now we will look at what storage performance is. Storage performance is a measure of how effectively a storage device operates. It deals with the I/O performance of a storage device.
431:34 Let's look at a few basic terms that are used when talking about storage performance. IO stands for input/output, which could be a write or a read operation. An IO request is the request by an application to read or write a certain quantity of data. Sequential IO: blocks of IO requests that are consecutively read or written. Random IO: blocks of IO requests that are randomly read or written.
432:11 We will see how latency and throughput impact storage performance. Throughput is the rate at which the data is delivered by a storage device. There are two ways in which throughput is measured: IO rate and data rate. IO rate is normally used for applications such as transaction processing, in which IO requests are small. It is measured in IO operations per second. Data rate is normally used for applications such as scientific applications, in which IO requests are large. It's measured in megabytes per second. Latency is the time taken to complete an IO request. It's also called response time. It's measured in milliseconds.
433:01 Now let's talk about storage cache. Cache is a semiconductor-based memory. It temporarily stores data to reduce the time taken to service the application's IO requests.
433:14 We know that a disk drive is the slowest component of a computer. Being mechanical in nature, the read/write operations on a disk will normally take a few milliseconds, so there will be a delay in servicing an IO. In other words, the response time, or latency, will be high. On the contrary, accessing data from a cache for read/write operations is faster than accessing it from disks. This is because a cache is semiconductor-based memory, and accessing data doesn't involve seek time and rotational latency as in disks.
433:53 When an application has a write request, the write data is placed in the cache and then written to the disk. The application receives an acknowledgement as soon as the write data is placed in the cache. From the application's perspective, the write operation is complete. The advantage of buffering the write data in the cache is that the host computer doesn't have to wait until the data is actually written to the disk. Similarly, to serve the read request of an application, the data that will be read again is placed in cache. The advantage of buffering the read data is that the host computer doesn't have to repeatedly access the data from the disk. Hence, cache increases the IO performance of storage devices by reducing the response time to access the data.
434:50 Cache can be divided into two types: volatile cache and non-volatile cache. In volatile cache, data is retained as long as there is power. In the event of a power outage, data in the volatile cache will be lost. This type of cache is only good for reads and non-critical writes. In non-volatile cache, data is retained even when there is no power. In the event of a power outage, data in the non-volatile cache will not be lost. It is referred to as battery-backed write cache. This type of cache is good for reads and writes.
435:31 Now let's talk about the write operation to cache in detail. The write operation to a cache is done in two ways: write-back and write-through.
435:41 In the write-back approach, whenever an incoming write from the host computer is placed in the cache, an acknowledgement is sent to the host computer without waiting for the data to be written to the hard disk. At a later time, after several writes, the data in the cache is de-staged to the disk. In this approach the response time is faster, because the write operation is not affected by the mechanical delays of a disk. The risk of losing data is high when there is a cache failure.
436:14 In the write-through approach, whenever an incoming write from the host computer is placed in the cache, it is immediately written to the hard disk, and then an acknowledgement is sent to the host. In this approach the response time is longer, because the write operation is affected by the mechanical delays of the disk. There is a low risk of losing data when there is a cache failure, because data is written to the disk as soon as it arrives.
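The difference between the two policies is where the acknowledgement happens relative to the disk write. A minimal sketch, with the cache and disk modeled as plain dicts:

```python
# Sketch of the two write policies; "cache" and "disk" are plain dicts.
# What differs is the point at which the host gets its acknowledgement.
class Cache:
    def __init__(self, write_back=True):
        self.write_back = write_back
        self.cache, self.disk = {}, {}

    def write(self, key, value):
        self.cache[key] = value
        if not self.write_back:
            self.disk[key] = value        # write-through: persist first...
        return "ack"                      # ...then acknowledge the host

    def destage(self):
        # Write-back: dirty cache contents are flushed to disk later
        self.disk.update(self.cache)

wb = Cache(write_back=True)
wb.write("blockA", b"data")
print(wb.disk)        # {} -- fast ack, but data is lost if the cache fails now
wb.destage()
print(wb.disk)        # {'blockA': b'data'}
```

A write-through instance (`Cache(write_back=False)`) would show the data on "disk" immediately after the `write` call, at the cost of waiting for the slower medium.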
436:44 Now let's talk about the read operation to cache in detail. Whenever a host computer requests access to data, the cache is first checked to see if the data is available. If the data is available in cache, then it is retrieved and sent to the host. This situation is called a cache hit. On the other hand, if the data is not available in cache, then it must be accessed from the disk. The retrieved data is placed in the cache and then sent to the host. This situation is called a cache miss.
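The hit/miss path described above can be sketched as a small read cache; the disk is again simulated with a dict.

```python
# Read path with a cache: a hit is served from memory, a miss goes to the
# (simulated) disk and populates the cache on the way back.
class ReadCache:
    def __init__(self, disk):
        self.disk, self.cache = disk, {}
        self.hits = self.misses = 0

    def read(self, key):
        if key in self.cache:
            self.hits += 1                # cache hit: no disk access
            return self.cache[key]
        self.misses += 1                  # cache miss: fetch from disk
        self.cache[key] = self.disk[key]  # keep a copy for future reads
        return self.cache[key]

rc = ReadCache(disk={"blockA": b"data"})
rc.read("blockA")   # first access: miss, fetched from disk
rc.read("blockA")   # second access: hit, served from cache
print(rc.hits, rc.misses)   # → 1 1
```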
437:19 Now let's talk about storage tiering. Storage tiering is the method of storing data on different types of storage media based on the criticality and usage frequency of the data. In tiered storage, the tiers represent the allocation of different types of storage media based on how fast the data needs to be accessed by the system. For example, in a three-tiered storage system, the top tier can represent the allocation of SSD drives to highly accessed data, the middle tier can represent the allocation of disk drives to the less-accessed data, and the bottom tier can represent the allocation of tape drives to the rarely accessed data.
438:06 Normally, the high-performing storage media will be on the tier that provides quick access to the highly accessed data. The remaining tiers will be categorized based on the performance requirement of the data, and the rarely accessed data will be stored on a tier with the slowest storage media. Storage tiering is primarily done to reduce the cost associated with storing the data, because high-performance storage media is very expensive compared to slower storage media. There are two types of storage tiering: manual storage tiering and automatic storage tiering.
438:47 In manual storage tiering, data is manually moved to different types of storage media based on its criticality and usage frequency. This process is labor-intensive, and the administrator has to keep track of where the data was moved so that it can be made available when needed. In automatic storage tiering, the administrator defines the policies and priorities that govern what data resides in which tier, and the system automatically moves the data to the relevant tiers without any human intervention.
439:23 Hierarchical storage management, or HSM, is an automatic storage tiering solution that was developed by IBM in the early 1980s. The HSM system will scan the data and find files that have not been accessed for some time; then it will automatically move them to a lower tier of storage media, such as tape. This migration of data is not visible to the user. In place of the moved files, small stubs, which have internal pointers to the actual files on the tape, are created. These stubs look like the actual file, and if a user tries to access them, then the actual files will be automatically retrieved from the tape. This recall of files may take several minutes, as it involves mounting the tape, positioning it, and copying the files back to the disk.
440:18 When we talk about storage performance, it's necessary to mention the storage profile. A storage profile is what defines the storage performance characteristics, such as RAID level, stripe size, segment size, and dedicated hot spare. The default configuration of a storage system typically uses the default storage profile, which provides balanced access to storage. However, depending on the I/O requirements of the applications using the storage, a storage profile with different performance characteristics can be selected, or a custom storage profile can be created.
440:58 Now let's talk about partitions and their impact on storage performance. A partition is an operating system concept of making a single physical hard disk drive function as multiple logical disk drives. For example, on a computer with the Windows operating system, a single physical hard disk drive can be partitioned into multiple logical disk drives, such as C, D, E, and so on. The point is that a single physical hard disk drive can have multiple partitions. Each such partition is a logical disk drive that can have its own file system. For example, we can have the C drive with a Windows file system that can boot into a Windows operating system. On the same physical hard disk drive, we can also have the E drive with the ext2 file system that has the ability to boot into a Linux operating system.
441:58 A cluster size is decided when a partition is formatted by the operating system. It can be expressed in sectors. For example, if the sector size of a physical hard disk drive is 512 bytes, then a 4-kilobyte cluster will have eight sectors.
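The sector arithmetic above is simple to check; the numbers below are the ones from the example.

```python
def clusters_to_sectors(cluster_bytes, sector_bytes):
    """How many sectors make up one cluster (cluster size must be a whole
    multiple of the sector size)."""
    assert cluster_bytes % sector_bytes == 0
    return cluster_bytes // sector_bytes

# A 4-kilobyte cluster on a drive with 512-byte sectors:
print(clusters_to_sectors(4096, 512))  # -> 8
```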
442:17 One of the factors that affects storage performance is the omission to do a partition alignment on the disks. A partition is said to be misaligned when its clusters are not aligned with the sectors of the physical hard disk drive. So when an application tries to access the data, it may not be in the location where it is supposed to be, because of a misalignment. As a result, additional read/write operations are involved to access the data, which degrades the storage performance.
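A quick way to picture misalignment: a partition is aligned when its starting byte offset is a whole number of clusters. The offsets below are illustrative; the 63-sector start was the classic default of old MBR partitioning tools.

```python
def is_aligned(start_offset_bytes, cluster_bytes):
    """A partition is aligned when its start offset is an exact
    multiple of the cluster size."""
    return start_offset_bytes % cluster_bytes == 0

# Legacy MBR partitions often began at sector 63 (63 * 512 = 32256 bytes),
# which is not a multiple of a 4 KB cluster -- so clusters straddle physical
# boundaries and extra read/write operations are needed.
print(is_aligned(63 * 512, 4096))    # False -> misaligned
print(is_aligned(2048 * 512, 4096))  # True  -> modern 1 MiB alignment
```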
442:53 Now let's talk about how file fragmentation affects storage performance by causing additional I/O operations. File fragmentation refers to a phenomenon in which the data of a file is stored in non-contiguous clusters in the file system. This happens when the system cannot allocate contiguous disk space to store an entire file. When a file is not contiguously stored on a disk, the data in the file cannot be easily located in consecutive clusters. This affects I/O performance, because to access data from different locations on the disk, additional seek time and rotational latency is involved. Defragmentation is done to reduce the fragmentation by reorganizing the data that makes up the file.
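As a rough illustration, if we represent the clusters a file occupies as a list of cluster numbers, its fragmentation is just the number of non-contiguous runs. This is a toy model, not how a real file system tracks extents.

```python
def count_fragments(clusters):
    """Count contiguous runs in the list of cluster numbers a file occupies.
    One run = one fragment; a fully defragmented file has exactly one."""
    if not clusters:
        return 0
    fragments = 1
    for prev, cur in zip(clusters, clusters[1:]):
        if cur != prev + 1:  # a gap means the next piece sits elsewhere on disk
            fragments += 1
    return fragments

print(count_fragments([10, 11, 12, 40, 41, 90]))  # -> 3 fragments
print(count_fragments([10, 11, 12, 13]))          # -> 1 (contiguous)
```

Each extra fragment means at least one additional seek plus rotational latency on a mechanical drive, which is exactly why the transcript calls fragmentation a source of extra I/O.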
443:43 the data that makes up the file next we will talk about
443:46 file next we will talk about baselining a baseline is the initial
443:48 baselining a baseline is the initial measurement of a metric such as iops
443:51 measurement of a metric such as iops that can be continuously monitored to
443:53 that can be continuously monitored to check if anything has
443:56 check if anything has changed baselining is the method of
443:58 changed baselining is the method of analyzing current storage per
444:00 analyzing current storage per performance against a
444:03 performance against a baseline baselining can be of great
444:05 baseline baselining can be of great value when it comes to identifying the
444:08 value when it comes to identifying the cause of
444:09 cause of problems the only way to know if storage
444:11 problems the only way to know if storage is performing as expected is to capture
444:14 is performing as expected is to capture the performance metrics when the storage
444:16 the performance metrics when the storage is functioning properly at a later time
444:19 is functioning properly at a later time during troubleshooting storage
444:21 during troubleshooting storage administrators can compare this data
444:23 administrators can compare this data with the data gathered when the storage
444:25 with the data gathered when the storage performance started
444:31 deteriorating baselines are also used to identify Trends such as storage usage
444:34 identify Trends such as storage usage over a period of
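As a minimal sketch, comparing current metrics against a baseline is just a percentage-change calculation; the metric names and numbers below are invented for illustration.

```python
def deviation_from_baseline(baseline, current):
    """Percentage change of each metric relative to its baseline value.
    Negative = metric dropped; positive = metric rose."""
    return {metric: round((current[metric] - baseline[metric])
                          / baseline[metric] * 100, 1)
            for metric in baseline}

# Illustrative numbers only -- real baselines come from your monitoring tool.
baseline = {"iops": 5000, "latency_ms": 4.0}
current  = {"iops": 3500, "latency_ms": 9.0}
print(deviation_from_baseline(baseline, current))
# -> {'iops': -30.0, 'latency_ms': 125.0}  (IOPS down, latency up: deteriorating)
```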
444:36 And that brings us to the end of this lesson. Let's summarize what you have learned in this lesson. In this lesson, you learned about storage performance. We started by looking at what storage performance was, and then we looked at the basic terms used in storage performance. These are I/O, I/O request, sequential I/O, random I/O, throughput, and latency. Next, we looked at what a cache is, and then we looked at the types of cache. These are volatile cache and non-volatile cache. After that, we talked about the write operations to cache. The write operations to cache are done in two ways. These are the write-back and write-through operations. We also looked at the read operation using cache, and in specific, we looked at cache hit and cache miss situations. Next, we looked at storage tiering, and then we looked at the purpose of storage tiering. After that, we looked at the types of storage tiering. These are manual storage tiering and automatic storage tiering. We also talked about an automatic storage tiering solution called hierarchical storage management, or HSM. Next, we looked at what a storage profile is, and then we looked at what a partition is. After that, we talked about clusters, and then we talked about partition alignment. We also talked about file fragmentation and defragmentation. Next, we looked at what baselining is, and lastly, we looked at the advantages of baselining. In the next lesson, you will learn how to identify and resolve common problems in FC SAN. Thank you for watching.
446:57 Hello and welcome to unit one, FC SAN troubleshooting. In this lesson, you will learn how to identify and resolve common problems in FC SAN. We're going to start by looking at what troubleshooting is, and then we'll see how to troubleshoot problems. Next, we will talk about troubleshooting in the FC SAN environment. When we cover troubleshooting in FC SAN, we will look at troubleshooting associated with the fiber optic cables. We will also look at the distances supported by fiber optic cables, and in specific, we will look at the categories of multimode fibers that support different speeds and distances. After that, we will talk about troubleshooting fiber channel switches, and then we will look at troubleshooting the host bus adapters. While talking about troubleshooting host bus adapters, we will look at troubleshooting the problems associated with a failed HBA, an HBA dropping links between a server and an FC switch, and a corrupted HBA driver. Next, we will talk about troubleshooting the storage arrays, and then we will talk about troubleshooting the servers. We'll also look at troubleshooting the problems associated with zoning, and then we'll look at what fcping is. Lastly, we will look at a few troubleshooting scenarios.
448:12 troubleshooting scenarios now let's look at what
448:14 scenarios now let's look at what troubleshooting is troubleshooting is
448:17 troubleshooting is troubleshooting is all about analyzing and solving
448:20 all about analyzing and solving problems the best thing to do is not to
448:22 problems the best thing to do is not to let the problem happen in the first
448:24 let the problem happen in the first place as the old saying goes prevention
448:27 place as the old saying goes prevention is better than the Cure so it's
448:30 is better than the Cure so it's recommended to do things right the first
448:32 recommended to do things right the first time we do them for example a new server
448:35 time we do them for example a new server may not connect to the storage because
448:37 may not connect to the storage because of an incorrect Zone
448:39 of an incorrect Zone configuration getting the Zone
448:41 configuration getting the Zone configuration right the first time saves
448:43 configuration right the first time saves the time and frustration involved in
448:45 the time and frustration involved in troubleshooting the issue an important
448:48 troubleshooting the issue an important thing about troubleshooting is that we
448:49 thing about troubleshooting is that we should know how things work correctly in
448:51 should know how things work correctly in an environment before we try to figure
448:53 an environment before we try to figure out what went wrong a logical
448:56 out what went wrong a logical step-by-step approach in troubleshooting
448:58 step-by-step approach in troubleshooting helps eliminate the possible causes in
449:01 helps eliminate the possible causes in order to identify the exact cause of the
449:03 order to identify the exact cause of the issue when the exact cause of the
449:05 issue when the exact cause of the problem is known the problem becomes
449:07 problem is known the problem becomes easy to fix in a fiber channel sand all
449:10 easy to fix in a fiber channel sand all the components involved should work
449:12 the components involved should work together properly if a server has to
449:14 together properly if a server has to access the storage in the storage
449:17 access the storage in the storage array so if a server is not able to
449:20 array so if a server is not able to access the storage then there is a
449:22 access the storage then there is a chance the problem could lie with one or
449:23 chance the problem could lie with one or more components of the FC San let's look
449:27 Let's look at troubleshooting of FC SAN components. We will start with the fiber optic cable. A faulty fiber optic cable cannot transport data. If the LEDs next to the port where the fiber optic cable is plugged in don't light up, then the following could be the problem: the cable is not plugged in properly at both ends, or the cable is broken. So if the LED doesn't light up even after the fiber optic cable is plugged into the port, then the problem could be with the cable. But what if the port itself is not working? So it is recommended to check the cable by plugging it into a port that is known to be working. If the LED next to the port still doesn't glow, then the problem is with the fiber optic cable. When troubleshooting, never look into the ends of a fiber optic cable, as the light coming out of it could damage your eyes.
450:18 When connecting two devices, such as a server's HBA and a switch, using a fiber optic cable, the transmit, or TX, port at one end should always be connected to the receive, or RX, port at the opposite end. For example, the connection from the server's HBA to a switch port should be from the transmit port to the receive port, and from the receive port to the transmit port, as shown in the diagram. If the cable is connected the other way around, such as transmit port to transmit port, then there will be no connectivity, and the switch port will show no connection.
450:54 We know that multimode fibers with 50-micron and 62.5-micron cores can support data transmission up to a distance of 500 m and 175 m, respectively. In addition to that, multimode fibers are further categorized by an optical multimode, or OM, designator. These are labeled from OM1 to OM4. The maximum distances supported by each of these OM designators at different common fiber channel speeds are tabulated as shown in the slide. When troubleshooting a connectivity issue between fiber channel devices, it is necessary to check if the fiber optic cable supports the speed and distance of the link between the devices.
451:38 Now let's look at troubleshooting fiber channel switches. A fiber channel switch can be configured to send SNMP alerts for failures, such as port failures, power supply failures, and the failure of the switch itself. Since it is not possible to remember all the port and zone configurations of a switch, it is important to back up the correct configuration of the FC switch. This will come in handy when we are replacing a failed switch with a new one.
452:05 a failed switch with a new one now let's look at troubleshooting
452:08 one now let's look at troubleshooting host bus
452:09 host bus adapters an hba's LED lights do not
452:12 adapters an hba's LED lights do not light up either when it has failed or if
452:15 light up either when it has failed or if it is not seated properly in the
452:17 it is not seated properly in the motherboard in such a situation the
452:20 motherboard in such a situation the applications on the server will display
452:22 applications on the server will display error messages because it will not be
452:24 error messages because it will not be able to connect to the storage using the
452:26 able to connect to the storage using the hba's device
452:28 hba's device driver if the HBA failed replace it with
452:31 driver if the HBA failed replace it with a new or working
452:32 a new or working HBA after replacing the failed HBA if
452:36 HBA after replacing the failed HBA if the server is not able to communicate
452:38 the server is not able to communicate with the storage then it could be that
452:39 with the storage then it could be that the Zone configuration is still using
452:41 the Zone configuration is still using the failed hba's
452:44 the failed hba's wwnn in such a situation the wwnn of the
452:48 wwnn in such a situation the wwnn of the new HBA should be replaced in the zone
452:51 new HBA should be replaced in the zone configuration in order for the server to
452:53 configuration in order for the server to see the
452:55 see the storage instead of an outright failure
452:58 storage instead of an outright failure the HBA could be dro droing the link
453:00 the HBA could be dro droing the link between the server and the FC
453:02 between the server and the FC switch before we conclude that the HBA
453:05 switch before we conclude that the HBA is defective we can test it by plugging
453:07 is defective we can test it by plugging it into a test
453:09 it into a test server if the HBA Works without any
453:12 server if the HBA Works without any issues on the test server then the
453:14 issues on the test server then the problem could be related to the
453:15 problem could be related to the following a corrupted HBA driver or the
453:19 following a corrupted HBA driver or the HBA configuration file on the server
453:22 HBA configuration file on the server changed the issue of a corrupted driver
453:25 changed the issue of a corrupted driver can be fixed by uninstalling the
453:26 can be fixed by uninstalling the existing driver and reinstalling the
453:29 existing driver and reinstalling the recommended driver driver from the HBA
453:31 recommended driver driver from the HBA vendor if the driver you are installing
453:33 vendor if the driver you are installing is a new version then it is highly
453:35 is a new version then it is highly recommended to test it on a test server
453:37 recommended to test it on a test server before installing it on a production
453:39 before installing it on a production server to avoid any
453:42 server to avoid any issues now let's look at troubleshooting
453:44 issues now let's look at troubleshooting storage arrays storage arrays have
453:47 storage arrays storage arrays have redundant components so even if a
453:49 redundant components so even if a component fails its redundant component
453:51 component fails its redundant component will resume operation and the diagnostic
453:54 will resume operation and the diagnostic software of the San array will send
453:57 software of the San array will send alerts to the designated user about the
453:59 alerts to the designated user about the failure
454:00 failure or use the call home option to get
454:02 or use the call home option to get support from the storage array vendor
454:05 support from the storage array vendor now let's look at troubleshooting the
454:06 now let's look at troubleshooting the server occasionally a server may not
454:09 server occasionally a server may not show the storage from the storage array
454:11 show the storage from the storage array in such cases rebooting the server
454:13 in such cases rebooting the server without affecting the production may
454:15 without affecting the production may help resolve the issue it's a good
454:18 help resolve the issue it's a good practice to run the patches or updates
454:20 practice to run the patches or updates recommended by the storage vendors the
454:22 recommended by the storage vendors the patches or updates usually contain fixes
454:24 patches or updates usually contain fixes for known issues or bugs now let's look
454:27 for known issues or bugs now let's look at troubleshooting zoning zoning
454:30 at troubleshooting zoning zoning establishes a connection between the
454:31 establishes a connection between the server and the storage the zoning
454:34 server and the storage the zoning configuration contains the wwnn of the
454:37 configuration contains the wwnn of the hbas port and the wnn of the storage
454:40 hbas port and the wnn of the storage arrays Port if the zoning configuration
454:43 arrays Port if the zoning configuration is altered or if a different zoning
454:45 is altered or if a different zoning configuration is implemented then it
454:47 configuration is implemented then it will affect the connection between the
454:48 will affect the connection between the server and the
454:51 server and the storage if a server is not able to see
454:53 storage if a server is not able to see the storage when there is no problem
454:55 the storage when there is no problem with any of the sand components then
454:57 with any of the sand components then it's likely to be a problem with the
454:59 it's likely to be a problem with the zoning config
455:01 Now let's look at what fcping is. fcping is a command line utility that is used to check the connectivity to a remote device on a fiber channel SAN. In order to check if the remote device is accessible, we give the fcping command the worldwide port name or the worldwide node name of the remote device. This is done on the command line of the host or switch by typing the fcping command followed by the worldwide name of the remote device, as shown in the slide. When we use fcping, the command performs a zoning check between the source and the destination. Irrespective of the remote device's zoning configuration, the fcping command sends an ELS frame to the destination port. If fcping is successful, then the device is accessible from the source. But if the remote device doesn't respond, or rejects the ELS request, then it is possible that the remote device does not support the ELS Echo request. In such a situation, it's not safe to assume that the remote device is not connected, and you should continue with further troubleshooting.
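The decision logic in the last two paragraphs can be summarized in a small sketch. The outcome names below are invented for illustration; they are not the literal output of any real fcping implementation.

```python
def interpret_fcping(outcome):
    """Map an fcping outcome to the next troubleshooting step described above."""
    if outcome == "accepted":
        # The destination answered the ELS Echo: it is reachable from the source.
        return "device is accessible from the source"
    if outcome in ("rejected", "no response"):
        # The target may simply not support the ELS Echo request, so do not
        # conclude it is disconnected -- keep troubleshooting other components.
        return "inconclusive: device may not support ELS Echo, keep troubleshooting"
    raise ValueError("unknown fcping outcome: " + outcome)

print(interpret_fcping("accepted"))
print(interpret_fcping("no response"))
```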
456:11 Now let's discuss a few troubleshooting scenarios. Scenario one: in this scenario, we have a server with two HBAs going to one switch, which in turn is connected to a storage array. The zone configuration is based on the WWNNs of the HBAs. You notice that after a cleaning person left the server room, a server lost its connectivity on one of its dual redundant paths to the SAN. Though the cables are secured tightly from the two HBAs on the server to the switch, you notice that the fault light next to the port of one of the HBAs is on, and the corresponding port on the switch is also not lit. What could be the problem? Resolution of scenario one: the HBA is definitely not unseated. If it were unseated, then we would not see the fault light next to the HBA port. In spite of the cable being properly plugged in to both the HBA port and the switch port, if the LED lights are not green on these ports, then the problem could be with the cable. In order to confirm this, we can plug both ends of the cable into ports that we know work. If the LEDs next to the ports don't light up, then the problem is with the cable: it must be damaged.
457:38 Scenario 2: in this scenario, a server has been in use on a storage area network for a year. All of a sudden, it cannot see the remote storage. The server HBA doesn't have its LED lit up, and the switch port where the server connects also doesn't show any connectivity. Moving the cable from the server to a different working port on the same switch didn't help, because there was also no sign of connectivity. Assuming the cable is working fine, what could be the problem?
458:07 Resolution of scenario 2: the problem is with the server HBA. It may be unseated from the PCI expansion slot of the server, or it could have failed. A failed HBA doesn't have its LED lit, and the application on the server would throw error messages because it wouldn't be able to connect to the storage using the failed HBA's device driver.
458:36 Scenario 3: in this scenario, we have a new server with a single HBA connected to an FC switch, which in turn is connected to a storage array. The user is experiencing connectivity issues between the server and the FC switch. The link between the server and the FC switch is running at 4 Gbits per second on a 100 m cable that has optical multimode designator OM1. Assuming that all the other SAN components are working fine, what could be the problem?
459:07 Resolution of scenario 3: when we troubleshoot a connectivity issue between a new server and an FC switch, if the cable is a multimode fiber optic cable, then the first thing we need to check is the optical multimode designator. In our case the cable has optical multimode designator OM1, which only supports a distance of 70 m at 4 Gbits per second. Hence the cable used should be OM2, not OM1.
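The resolution above turns on the distance each multimode grade supports at a given speed. A minimal lookup sketch; the OM1 figure comes from the scenario itself, while the OM2 and OM3 figures are commonly cited approximations for 4 Gbit/s Fibre Channel, so treat them as illustrative rather than vendor guarantees:

```python
# Approximate maximum link lengths (in meters) for 4 Gbit/s Fibre Channel
# over multimode fiber. OM1 = 70 m is stated in the scenario; the OM2 and
# OM3 values are commonly cited approximations, not vendor guarantees.
MAX_LENGTH_4GFC_M = {"OM1": 70, "OM2": 150, "OM3": 380}

def cable_supports_link(designator: str, link_length_m: float) -> bool:
    """Return True if the given multimode grade supports the link length."""
    return link_length_m <= MAX_LENGTH_4GFC_M[designator]
```

For the 100 m link in the scenario, OM1 fails the check while OM2 passes, matching the resolution.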
459:43 Scenario 4: in this scenario, all the servers in the storage area network lost their connection with the remote storage when a firewall was installed in the data center. What could be the problem?
459:53 Resolution: the firewall has blocked the communication between the servers and the remote storage. In order to resolve this issue, TCP port 3260 must be permitted on the firewall.
460:07 Scenario 5: in this scenario, a user is not able to see the newly added LUN on the server. As a storage administrator, what should you do to help the user see the newly created storage on the server?
460:22 Resolution: when we run a disk rescan on the server, the user should be able to see the newly added LUN on it.
460:30 And that brings us to the end of this lesson. Let's summarize what you have learned. In this lesson you learned how to identify and resolve common problems in FC SAN. We started by looking at what troubleshooting is, and then we saw how to troubleshoot problems. Next we talked about troubleshooting in an FC SAN environment. When we covered troubleshooting in FC SAN, we looked at troubleshooting associated with the fiber optic cables. We also looked at the distances supported by fiber optic cables, and specifically at the categories of multimode fibers that support different speeds and distances.
461:14 After that we talked about troubleshooting Fibre Channel switches, and then we looked at troubleshooting the host bus adapters. While talking about troubleshooting host bus adapters, we looked at the problems associated with a failed HBA, an HBA dropping links between a server and an FC switch, and a corrupted HBA driver. Next we talked about troubleshooting the storage arrays, and then the servers. We also looked at the problems associated with zoning, and then we looked at what FCP is. Lastly, we looked at a few troubleshooting scenarios.
461:51 In the next lesson, you will learn about the command line tools that are used to troubleshoot TCP/IP networks. Thank you for watching.
462:23 Hello and welcome to unit two, LAN troubleshooting. In this lesson you will learn about LAN troubleshooting. We're going to start by looking at the command line tools that are used to troubleshoot TCP/IP networks. We will look at the ping command, which tests for connectivity, and then we will look at the traceroute command, which traces the route a packet takes to its destination. You should know about ICMP, the Internet Control Message Protocol; it is used by both the ping and traceroute commands. After that, we will talk about nslookup, which stands for name server lookup; it converts between an IP address and a fully qualified domain name such as yahoo.com. We will also talk about ipconfig, or IP configuration. Next, we will look at troubleshooting common networking problems, including no connectivity, intermittent connectivity, and slow connectivity.
463:21 First we will talk about the ping command. The ping command is used to test end-to-end connectivity. It sends a packet of information (an ICMP echo request) through a connection and waits to receive packets back. It can also be used to test the maximum transmission unit, or MTU; the MTU is the maximum size of a data packet that can be sent over a network. Using ping, we can test the time it takes for data to travel from source to destination. Ping can also be used on the localhost; the IP address of the localhost is 127.0.0.1.
464:10 We can test this by opening the command prompt in the Windows operating system and typing in ping and then the IP address. We are at the command prompt, and we will type, for example, ping 127.0.0.1, which is the IP address of the localhost. We will press Enter, and the trip time, as you can see, is less than 1 millisecond, and there is no loss of data.
464:38 If we ping google.com, you will notice that it will first figure out the IP address. You will also notice that it presents the time it takes to get to google.com and come back. We also see the ping statistics: for example, four packets were sent and four of them received, with zero loss. On average, it takes 27 milliseconds from here to Google.
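When scripting connectivity checks, the summary that ping prints (packets sent and received, loss, and the average round-trip time) can be pulled out programmatically. A minimal sketch, assuming the English-language Windows ping output format; the sample address and times are made up:

```python
import re

def parse_ping_summary(output: str) -> dict:
    """Extract sent/received/lost counts and average RTT from Windows ping output."""
    counts = re.search(r"Sent = (\d+), Received = (\d+), Lost = (\d+)", output)
    avg = re.search(r"Average = (\d+)ms", output)
    return {
        "sent": int(counts.group(1)),
        "received": int(counts.group(2)),
        "lost": int(counts.group(3)),
        "avg_ms": int(avg.group(1)),
    }

# Made-up sample in the Windows output format described above:
sample = """Ping statistics for 172.217.0.46:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 25ms, Maximum = 30ms, Average = 27ms"""
```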
465:11 We will talk about some of the switches that are used with the ping command. ping -t pings the host until the command is stopped manually. So let's say we want to know if a server on our local area network is up and running after we rebooted it; the ping command with the -t switch can be used to find out if the server is up and running. ping -n pings a host a specific number of times; we can specify the number, for example, when we want to ping a host 10 times or 20 times. There is also ping -l, which pings a host by specifying the number of bytes we want to send, as opposed to the default of 32; for example, we might want to send 64 bytes or something for a bigger test.
466:01 We will demonstrate the ping feature with the localhost, so we will type ping 127.0.0.1 followed by -t, and then we will press Enter. What it does is continuously ping the same IP address again and again, so if we are waiting for our server to come back online, this will tell us whether it is online or not. We can exit the running ping command by pressing Ctrl+C.
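One caveat when scripting these switches: the count switch is -n on Windows but -c on Linux and macOS. A small sketch of a helper that builds the right argument list; the function name is ours, purely illustrative:

```python
def build_ping_cmd(host: str, count: int, windows: bool) -> list:
    """Build an argument list for pinging a host a fixed number of times.

    Windows ping counts pings with -n; Linux and macOS use -c.
    """
    count_flag = "-n" if windows else "-c"
    return ["ping", count_flag, str(count), host]

# e.g. subprocess.run(build_ping_cmd("127.0.0.1", 4, windows=True))
```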
466:38 The next one we will talk about is traceroute. Traceroute tests where connectivity may have been lost. Traceroute, like ping, also uses the ICMP protocol. It shows the time taken for a packet to travel between different devices from the source to the destination. A portion of the path traversed by a packet between the source and the destination is called a hop. For example, let's say a packet has to travel across four computers to reach the destination computer; each one of these portions represents a hop. Traceroute is also used to find the location of a router that is down.
467:24 Now we will go to the command prompt, and let's say we want to do a traceroute to google.com. It's going to show all the different hops, and it's going to tell us how long it takes to get from one location to the next, starting here in California and then going out. It traversed different locations, and it took 14 hops before it finally reached Google at this location.
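If we wanted to count those hops in a script, the numbered lines of the tracert output are easy to match. A minimal sketch, assuming the Windows tracert line format; the sample hops and addresses are made up:

```python
import re

def count_hops(tracert_output: str) -> int:
    """Count hop lines, which tracert prints with a leading hop number."""
    return len(re.findall(r"^\s*\d+\s", tracert_output, flags=re.MULTILINE))

# Made-up sample in the Windows tracert format:
sample = """Tracing route to google.com [172.217.0.46]
over a maximum of 30 hops:

  1     1 ms     1 ms     1 ms  192.168.1.1
  2    12 ms    11 ms    12 ms  10.0.0.1
  3    27 ms    26 ms    27 ms  172.217.0.46

Trace complete."""
```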
468:14 The next thing we will talk about is the name server lookup, or nslookup. It is a networking tool that is used to query the Domain Name System to get information on a domain name or IP address. For example, we can do an nslookup to find out the IP address of google.com. Let's go to the command prompt and type nslookup yahoo.com. As you can see, it gives us all the IP addresses of yahoo.com. It also tells us whether the answer is authoritative or non-authoritative: an authoritative answer comes directly from a DNS server that is responsible for the domain, while a non-authoritative answer comes from a caching DNS server, such as a local resolver.
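The same forward lookup that nslookup performs can be done from code through the system resolver. A minimal Python sketch; note that the result for a public name like yahoo.com depends on which DNS server answers:

```python
import socket

def resolve(name: str) -> str:
    """Resolve a host name to an IPv4 address via the system resolver."""
    return socket.gethostbyname(name)

# e.g. resolve("yahoo.com") returns one of its IP addresses;
# resolve("localhost") typically returns 127.0.0.1
```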
469:13 Now we will talk about ipconfig. ipconfig stands for Internet Protocol configuration, or IP configuration. It shows the current TCP/IP network configuration values; for example, we can get information such as our IPv6 address, IPv4 address, subnet mask, default gateway, and so on. We use this as the first tool for troubleshooting network connectivity, because it will tell us if the computer is receiving an IP address.
469:49 Let's go to the Windows command prompt and type ipconfig, and we get all this information. We now see the IP configuration for every adapter that is on the computer; for example, we have a local area connection and a wireless connection. You'll notice that we are only getting the IP address, the subnet mask, and the default gateway, and we are not getting any other information. There are several options that can be used with ipconfig; let's use ipconfig /all. Now we can see a lot more information, including the DNS server address, which was not included originally, and the MAC address, which we call the physical address of the adapter. We also have the gateway address, which is provided. It also shows you whether DHCP is enabled.
470:53 Now we will look at a few of the other options that can be used with ipconfig. The first is ipconfig /release, which will release the current IP address. Let's say we have a bad IP address; somehow we got an IP address that another device already had. We could use ipconfig /release to release that IP address so it can go to another device. We can also use ipconfig /renew, which will renew the current IP address. While release lets go of an IP address, renew lets it go and then gets another IP address; in that sense, renew is a little more helpful. Finally, let's see ipconfig /flushdns, which erases the local DNS resolver cache.
472:36 It tells us that we have successfully flushed the DNS resolver cache, the cache being the information it stores on the computer. So now, every time we go out to a website, it's going to begin to repopulate the data from the internet.
472:52 reprop the data from the internet ip config can be used for
472:55 internet ip config can be used for troubleshooting because it can tell us
472:57 troubleshooting because it can tell us if a computer is receiving an actual IP
472:59 if a computer is receiving an actual IP address from a DHCP server or if it is
473:03 address from a DHCP server or if it is receiving an apipa or a self- assigned
473:11 address we can also find out whether the IP address is private or public by
473:13 IP address is private or public by looking at the IP
473:15 looking at the IP address you'll notice that 1
473:23 192.168.1.2 is a private IP address this means that we are behind a
473:26 means that we are behind a router If you're receiving a public IP
473:28 router If you're receiving a public IP address then we know that we are
473:30 address then we know that we are directly connected to the
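The private-versus-public check, and the APIPA self-assigned range (169.254.x.x), can both be verified with Python's standard ipaddress module:

```python
import ipaddress

addr = ipaddress.ip_address("192.168.1.2")
apipa = ipaddress.ip_address("169.254.10.20")

print(addr.is_private)      # True: 192.168.x.x is a private range, so we are behind a router
print(apipa.is_link_local)  # True: APIPA self-assigned addresses fall in the link-local range
```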
473:32 It can also tell if something might be wrong with our setup. If the users are not able to connect to the internet, then the DNS server address is probably not configured correctly, and releasing and renewing the IP address and flushing the DNS will help to resolve that problem.
473:51 will help to resolve that problem now let's talk about common
473:54 problem now let's talk about common networking
473:55 networking problems we will first talk about no
473:57 problems we will first talk about no connectivity
473:59 connectivity the first thing we need to do is to
474:01 the first thing we need to do is to verify the physical cables are present
474:03 verify the physical cables are present and properly plugged in we should also
474:06 and properly plugged in we should also make sure that the network adapter is
474:07 make sure that the network adapter is enabled and has valid addressing for the
474:15 subnet for example in the Windows operating system we can go to device
474:25 manager and we can make sure the network adapter is running properly as you can
474:27 adapter is running properly as you can see here here there's no problem with
474:29 see here here there's no problem with any of
474:31 any of them the next thing we can do is Ping
474:34 them the next thing we can do is Ping the server to make sure tcpip is
474:36 the server to make sure tcpip is installed
474:38 installed properly it could be that the hardware
474:40 properly it could be that the hardware is working properly but the protocols
474:42 is working properly but the protocols are not installed properly which is not
474:44 are not installed properly which is not allowing us to
474:47 allowing us to connect it's necessary to verify if
474:49 connect it's necessary to verify if anything has been changed recently as it
474:52 anything has been changed recently as it could have caused the
474:54 could have caused the problems for example sometimes firewall
474:57 problems for example sometimes firewall software up updates can cause
474:59 software up updates can cause connectivity
475:01 connectivity issues and rebooting the computer can
475:04 issues and rebooting the computer can sometimes fix the
475:06 sometimes fix the issue sometimes we might have disabled a
475:09 issue sometimes we might have disabled a port from a switch so it is necessary to
475:12 port from a switch so it is necessary to make sure that the disabled ports are
475:14 make sure that the disabled ports are enabled for
475:16 enabled for connectivity basically it's important to
475:19 connectivity basically it's important to check all the physical connections and
475:21 check all the physical connections and then The Logical
475:23 then The Logical connections so to summarize no
475:26 connections so to summarize no connectivity problems can result from
475:28 connectivity problems can result from Bad cables and
475:31 Bad cables and connectors bad Port disabled port or
475:34 connectors bad Port disabled port or misconfigured switch
475:36 misconfigured switch Port bad Nick or misconfigured
475:40 Port bad Nick or misconfigured Nick or corrupted or misconfigured
475:43 Nick or corrupted or misconfigured software
475:44 Next we will talk about intermittent connections, or connection drops. Intermittent connections can be a result of situations that prevent setting up a connection in the first place, so it's good to consider the situations that were discussed under the no connectivity heading. Intermittent or dropped connections can happen because of a loss of physical or logical connectivity. Ping and traceroute can be used to find the location of intermittent problems. Intermittent or dropped connections typically result from bad cables and connections, a bad NIC, a bad port, and electrical noise.
476:30 Now let's talk about networks being slow. Poor network performance can be the result of situations that prevent setting up a connection in the first place, so it's good to consider the situations discussed under the no connectivity heading. The common causes of slow performance include the following: overloaded servers, traffic congestion on a network link, severe frame loss, and inappropriate switch or router configurations.
477:02 And that brings us to the end of this lesson. Let's summarize what you have learned. In this lesson you learned about LAN troubleshooting. We started by looking at the command line tools that are used to troubleshoot TCP/IP networks. We looked at the ping command, which tests for connectivity, and then we looked at the traceroute command, which traces the route a packet takes to its destination. After that, we talked about nslookup, which stands for name server lookup; it converts between an IP address and a fully qualified domain name such as yahoo.com. We also talked about ipconfig, or IP configuration. Next, we looked at troubleshooting common networking problems, including no connectivity, intermittent connectivity, and slow connectivity. Thank you for watching.