
Storage Management Concepts

Dell OpenManage™ Array Manager 3.4

  The Array Manager Storage Model

  Arrays

  Disks

  Volumes

  Redundant Fibre Channel RAID Controller Concepts

  Organizing Data Storage for Availability and Performance

This chapter describes the storage model that the Array Manager graphical user interface (GUI) uses to display storage subsystem components. It also covers concepts associated with redundant Fibre Channel RAID controllers and explains what RAID is and how the different RAID levels work.


The Array Manager Storage Model

The Array Manager storage model represents the different components in a storage subsystem as either physical or logical (virtual) storage objects. These objects are displayed in a hierarchical order. A computer object representing a local or remote computer running Array Manager is at the top of the hierarchy for each storage subsystem. The Array Manager GUI can display multiple storage subsystems. You can expand the tree view (display lower levels of the subsystem hierarchy) by clicking on the storage subsystem objects. You can also customize the default tree view. For information on customizing the tree view, see Configuring the Console.

By default, the tree view displays three main objects below each computer:

The Array Manager Window with Storage Objects

In the sample screen above, the computer object at the top of the hierarchy, 82ALX, represents the local computer running Array Manager. It has three storage objects beneath it: Arrays, Disks, and Volumes. The plus sign in front of each object indicates that the object can be expanded to display additional lower-level objects.

All the storage objects have context menus associated with them. To access a storage object's context menu, right-click the object.

You can change the storage objects that you view in Array Manager by using the Add Categories function. See Change the Array Manager Category Display in The Array Manager Console chapter for more details.


Arrays

Arrays represent the physical and logical (or virtual) components connected to a controller. By expanding the Arrays object, you can see various storage components. These components include subsystems (a family of controllers), individual controllers, and array groups. Each of these objects further expands to display additional components. For example, the Array Groups object expands to display the array and virtual disks attached to the controller. The controller object expands to show the channels or enclosures attached to the controller.

Arrays Storage Objects in the Tree View

In the sample screen above, the fully expanded Arrays object hierarchy shows both the Physical Array and the Logical Array. The Physical Array object has a PERC 2/SC controller that contains a single channel, Channel 0, with five array disks. The array disk numbering corresponds with the number of the channel and the SCSI ID. In the screen above, the array disks 0:0, 0:1, 0:2, 0:3, and 0:4 are connected to Channel 0 and occupy SCSI IDs 0 to 4.

The Logical Array object contains array groups. An array group has array disks that are controlled by a particular array controller. An array group is named by the number of the array controller it is associated with. For example, array disks attached to Controller 0 belong to Array Group 0. With SCSI RAID controllers, you can create multiple virtual disks from disks in an array group. However, with a redundant Fibre Channel RAID controller, which supports a large number of physical disks, you will need to define one or more disk groups from the array group before creating virtual disks. You then create virtual disks from a disk group rather than from the larger array group. Disk groups are not shown in the screen above.

A virtual disk is an abstract entity made up of array disks and/or array disk segments presented to an operating system as a single contiguous block of storage space. When you create a virtual disk, you are asked to specify a hardware RAID level.

An array disk is a disk controlled by the array controller. This disk can be placed in an array group, and if you are using redundant Fibre Channel RAID controllers, in a disk group.

As shown in the sample screen above, virtual disks created in a particular array group are listed under that Array Group object. These virtual disks may also appear as Microsoft Windows disks under the Disks object. Virtual disks are placed under the Disks object because once a virtual disk is created, it is viewed by the operating system as a regular hard disk.

Note When you have a PowerVault™ 660F storage system that is part of a Storage Area Network (SAN), you must first use the Dell OpenManage Storage Consolidation software to assign the virtual disk to the server before it appears under the Disks object. For Windows NT servers, this may also require a reboot.

Disks

Disks represent the disks recognized by a Windows operating system. These can include regular hard disks and virtual disks created through Array Manager. This view also includes removable media, such as CD-ROM drives, and removable disks, such as Zip disks. The screen below shows an example of a Disks section of the tree view.

Disks Storage Objects in the Tree View

Disks are further classified as basic or dynamic.

Basic Disk

A basic disk adheres to the partition-oriented scheme of a Windows operating system. Basic disks can also contain RAID volumes that were created in NT Disk Administrator, including spanned volumes (volume sets), mirrored volumes (mirror sets), striped volumes (stripe sets), and RAID-5 volumes (stripe sets with parity). In addition, virtual disks, CD-ROMs, and removable-media disks are considered basic disks.

Dynamic Disk

Dynamic disks are created by upgrading basic disks using Array Manager. A dynamic disk is a physical disk that can contain dynamic volumes created by Array Manager.

For more information about disks, see the Disk Management chapter.

There are particular considerations regarding dynamic disks and volumes on NetWare, Windows Server 2003, and Linux. See Dynamic Disk and Volume Support on NetWare, Windows Server 2003, and Linux for more information.


Volumes

A volume is a logical entity that is made up of portions of one or more physical disks. A volume can be formatted with a file system and can be accessed by a drive letter. The maximum size of a volume depends on the quantity of free disk space and the type of volume selected. The screen that follows shows an example of a Volumes section of the tree view.

Volume Storage Objects in the Tree View

Volumes can be:

Simple Volumes

In Array Manager, a simple volume is a volume that occupies contiguous space on a single disk. You can extend a simple volume within the same disk or onto additional disks. If you extend a simple volume across multiple disks or across noncontiguous areas on the same disk, it becomes a spanned volume. You can create simple volumes only on dynamic disks. Simple volumes are not fault tolerant but can be mirrored.

When a basic disk with a partition is upgraded, the partition becomes a simple volume. An extended partition on a basic disk also becomes a simple volume when the disk is upgraded to dynamic.

Primary and Extended Partitions

In Array Manager, primary and extended partitions and logical drives exist only on basic disks. Basic disks use the traditional disk partitioning mechanism used by MS-DOS or a Windows operating system. A basic disk can have up to four primary partitions, or up to three primary partitions plus one extended partition. The extended partition can be subdivided into as many as 32 logical drives.
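
As a purely illustrative aid (not part of Array Manager), the short Python sketch below expresses the basic-disk partition rules just described; the function name and layout representation are hypothetical.

  # Illustrative check of the basic-disk partition rules described above.
  # A layout is a count of primary partitions, a flag for an extended
  # partition, and the number of logical drives inside it.
  def basic_disk_layout_is_valid(primary, has_extended=False, logical_drives=0):
      if has_extended:
          # At most three primaries can coexist with one extended partition,
          # and the extended partition holds at most 32 logical drives.
          return primary <= 3 and 0 <= logical_drives <= 32
      # Without an extended partition, up to four primary partitions fit.
      return primary <= 4 and logical_drives == 0

  print(basic_disk_layout_is_valid(4))                                         # True
  print(basic_disk_layout_is_valid(3, has_extended=True, logical_drives=32))   # True
  print(basic_disk_layout_is_valid(4, has_extended=True, logical_drives=1))    # False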

Logical Drives

A logical drive is a simple volume that resides on an extended partition of a basic disk. You can use all or part of the free space in an extended partition when creating logical drives.

Basic Volumes

Basic volumes refer to all the volumes that are on basic disks. Basic volumes can be primary or extended partitions, simple logical drives that reside on extended partitions, or RAID volumes that were originally created in Windows NT Disk Administrator.

Dynamic Volumes

A dynamic volume is a logical volume that can be created from one or more dynamic disks using Array Manager software. Dynamic volume types include simple, spanned, striped, mirrored, and RAID-5. By using spanned dynamic volumes, you can dynamically increase the size of your dynamic volumes as the need arises. For more information, see Working with Dynamic Volumes in the chapter on Volume Management.

There are particular considerations regarding dynamic disks and volumes on NetWare, Windows Server 2003, and Linux. See Dynamic Disk and Volume Support on NetWare, Windows Server 2003, and Linux for more information.

For detailed instructions about creating and/or managing Array Manager storage model objects, see the specific sections as follows:


Redundant Fibre Channel RAID Controller Concepts

This section describes concepts associated with redundant Fibre Channel RAID controllers. This section's topics are:

Redundant Fibre Channel RAID Controller Configurations

Redundant Fibre Channel RAID controller configurations interconnect two identical controllers that share a common set of array disks. This configuration allows a surviving controller to take over resources of a failed controller. This failover process is transparent to the applications running on the host.

Redundant Fibre Channel RAID controller support provides the system with the mechanisms for the following:

Note A controller-controller nexus refers to the state in which both redundant controllers are in communication. In this state, each controller can copy write-back data to its partner controller and can determine whether the other controller is operating.

The two Fibre Channel RAID controllers in the disk enclosure are both connected to a server, either in a direct attach configuration or in a SAN configuration. The two controllers communicate with each other through a ping/acknowledgment sequence to verify that both are functioning properly. Failure to acknowledge a ping triggers failover.

The redundant Fibre Channel RAID controller configuration also supports dual or multiple server communication in a SAN, which offers the advantage of sustaining data access in the event of a host failure. If configured in a cluster or high-availability environment, the configuration can also sustain data access in the event of the failure of a server or a host bus adapter (HBA). This configuration requires alternate path software on the server.

Redundant Fibre Channel RAID Controller Configuration Requirements

Both controllers in a redundant Fibre Channel RAID controller system must be identical. The following requirements must be satisfied in order to provide optimal operation.

Failover and Failback

In redundant Fibre Channel RAID controller configurations, maintaining continuous access to data requires that a failed controller be replaced in a manner that is transparent to the host. Drive channel ports on the controllers are configured as inactive ports until they are needed to respond to requests directed to a failed controller.

In the event of a controller failure in a redundant Fibre Channel RAID controller system, the failed controller's operations are assumed by the surviving controller. The failed controller can then be removed and replaced while the system is online. When the surviving controller detects the presence of the new controller, the new controller resumes processing array operations. During failover and failback, write cache coherency is maintained with the disk drives.

When a controller is participating in a controller-controller nexus (C-C nexus) and detects a communication error with its partner controller, it initiates the failover process. The following steps outline the failover process executed by the surviving controller:

  1. On detection of the controller failure, the surviving controller holds the failed controller in disable partner mode.

  2. The surviving controller activates the failover port.

  3. Cached data is flushed to the disk drives.

  4. Conservative Cache is enabled (if the Conservative Cache Mode controller option has been enabled).

  5. The surviving controller begins handling I/O requests for the failed controller.

When a failed controller is replaced, the system either automatically detects the replacement (if the Auto Restore controller option has been enabled) or is informed of the replacement by the Enable Partner command. The following steps outline the failback process executed by the surviving controller; a simplified sketch of the combined failover and failback sequence appears after these steps:

  1. A replacement controller is detected.

  2. The surviving controller enables the replacement controller.

  3. Once the replacement controller completes initialization and is ready to resume I/O requests, the surviving controller quiesces both ports by responding with BUSY status to new I/O requests.

  4. The surviving controller disables the failover port.

  5. The surviving controller clears the BUSY condition.

  6. The replacement controller enables its primary ports.

  7. Both controllers disable Conservative Cache mode (if enabled) for write-back system drives and resume normal redundant Fibre Channel RAID controller operation.
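
The sketch below condenses the failover and failback sequences into a few lines of Python for illustration. It is a simplification only; the class and function names are hypothetical and do not correspond to the controller firmware or to any Array Manager interface, and cache flushing is represented by a comment rather than real I/O.

  # Hypothetical, simplified model of the failover/failback sequence.
  class Controller:
      def __init__(self, name):
          self.name = name
          self.alive = True                  # acknowledges partner pings
          self.failover_port_active = False
          self.handling_partner_io = False

      def ping(self):
          return self.alive                  # no acknowledgment -> failover

  def failover(survivor, failed):
      failed.alive = False                   # 1. hold partner in disable partner mode
      survivor.failover_port_active = True   # 2. activate the failover port
      # 3. flush cached data to the disk drives (not modeled here)
      # 4. enable Conservative Cache mode, if that controller option is set
      survivor.handling_partner_io = True    # 5. service I/O for the failed controller

  def failback(survivor, replacement):
      # The survivor detects (or is told about) the replacement, quiesces its
      # ports with BUSY, disables the failover port, clears BUSY, and the
      # replacement re-enables its primary ports.
      replacement.alive = True
      survivor.failover_port_active = False
      survivor.handling_partner_io = False

  c0, c1 = Controller("C0"), Controller("C1")
  c1.alive = False                           # C1 stops acknowledging pings
  if not c1.ping():
      failover(survivor=c0, failed=c1)
  failback(survivor=c0, replacement=Controller("C1 replacement"))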

Fibre Channel Concepts

The following sections discuss basic Fibre Channel concepts. For more information on Fibre Channel technology, refer to the following websites:

www.fibrechannel.com

www.T11.org

www.ncits.org

www.scsita.org

Fibre Channel

A data transfer interface technology that provides high-speed I/O and networking functionality over a single connectivity technology. The Fibre Channel Standard supports several topologies, including Fibre Channel Point-to-Point, Fibre Channel Fabric (generic switching topology), and Fibre Channel Arbitrated Loop (FC_AL).

Fibre Channel Point-to-Point

The simplest Fibre Channel connection. It provides a direct connection between the transmit output of one system and the receive input of a second system. A second connection runs in the opposite direction between the remaining connectors to complete the signal path. The physical connection between the two systems is called a link. No switches, loops, or fabric elements are needed. This is also known as a direct attach configuration.

Fibre Channel Fabric

A topology that uses one or more switches to route frames between systems in a Fibre Channel network. The routing of frames is transparent to the systems or devices. This is also known as a Storage Area Network (SAN) configuration.

Fibre Channel Arbitrated Loop

A Fibre Channel Arbitrated Loop is a method for connecting multiple devices in a single contiguous loop. All systems in the loop share the same bandwidth. A Fibre Channel loop provides advantages such as fault tolerance and hot swapping.


Organizing Data Storage for Availability and Performance

A key aspect of efficient data storage is the ability to create virtual disks or volumes that span more than one physical disk. The operating system views a spanned or concatenated disk (which may comprise one or more array disks) as a single, contiguous chunk of disk space. This view enables the operating system to perform read and write operations more efficiently.

Redundant Array of Independent Disks (RAID) technology enables you to maintain data redundancy and choose different methods for organizing data on multiple disks. Maintaining redundant data enables you to reconstruct data in the event of a disk failure. Redundant data can take the form of mirrors (duplicate copies of the data) or parity information (from which lost data can be reconstructed using an algorithm).

RAID provides different methods for organizing the disk storage. These methods are called RAID levels and are referred to by number, such as RAID 0 or RAID 5. In addition to mirrors and parity information, a RAID level can also use striping, which is writing data to equal-sized chunks of disk space across multiple disks. Not all RAID levels provide redundancy. A RAID level can, however, increase or decrease the system's I/O (read and write) performance. Additionally, maintaining redundant data requires the use of additional array disks. As more disks become involved, the likelihood of a disk failure increases. Because of the differences in I/O performance and redundancy, one RAID level may be more appropriate than another based on the applications in the operating environment and the nature of the data being stored.

When choosing concatenation or a RAID level, the following performance and cost considerations apply:

For more information, see Concatenation (Spanned Volume) and RAID.

Concatenation (Spanned Volume)

In Array Manager, concatenation refers to storing data on either one array disk or on disk space that spans multiple array disks. When spanning more than one disk, concatenation enables the operating system to view multiple array disks as a single disk.

Data stored on a single disk can be considered a simple volume. This disk could also be defined as a virtual disk that comprises only a single array disk. Data that spans more than one array disk can be considered a spanned volume. Multiple concatenated disks can also be defined as a virtual disk that comprises more than one array disk.

In Array Manager, a dynamic volume that spans separate areas of the same disk is also considered concatenated.

When an array disk in a concatenated or spanned volume fails, the entire volume becomes unavailable. Because the data is not redundant, it cannot be restored by rebuilding from a mirrored disk or parity information. Restoring from a backup is the only option.

Because concatenated volumes do not use disk space to maintain redundant data, they are more cost-efficient than volumes that use mirrors or parity information. A concatenated volume may be a good choice for data that is temporary, easily reproduced, or that does not justify the cost of data redundancy. In addition, a concatenated volume can easily be expanded by adding an additional array disk.

Concatenating Disks
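
To illustrate how concatenation presents several array disks as one contiguous block, the following Python sketch maps a logical offset on a spanned volume to a member disk and an offset within that disk. The disk sizes and function name are hypothetical; this is a conceptual sketch, not Array Manager code.

  # Map a logical offset on a concatenated (spanned) volume to the member
  # disk that holds it. Sizes are illustrative values in MB.
  disk_sizes_mb = [100, 250, 80]           # three member array disks
  volume_size_mb = sum(disk_sizes_mb)      # the OS sees one 430-MB disk

  def locate(offset_mb):
      remaining = offset_mb
      for disk, size in enumerate(disk_sizes_mb):
          if remaining < size:
              return disk, remaining       # (member disk, offset within it)
          remaining -= size
      raise ValueError("offset is beyond the end of the volume")

  print(volume_size_mb)                    # 430
  print(locate(120))                       # (1, 20): 20 MB into the second disk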

Related Information:

RAID

RAID (Redundant Array of Independent Disks) is a collection of specifications that describe a system for storing data on multiple array disks to ensure availability and performance. The RAID Advisory Board (www.raid-advisory.com) defines the various RAID levels. Each RAID level provides a different method for organizing the disk storage. These methods are referred to by number, such as RAID 0 or RAID 5.

RAID functions can be implemented with either hardware or software. A system using hardware RAID has a RAID controller that implements the RAID levels and processes data reads and writes to the array disks. When using software RAID, the operating system must implement the RAID levels. For this reason, using software RAID by itself can slow system performance. You can, however, use software RAID on top of hardware RAID volumes to provide greater performance and variety in the configuration of RAID volumes. For example, you can mirror a pair of hardware RAID-5 volumes across two RAID controllers to provide RAID controller redundancy.

Note Careful consideration should be given to the decision between hardware RAID and software RAID. Software RAID generally has lower performance, and when it is used with SAN Fibre Channel storage, the volumes cannot be moved from server to server.

Although the RAID Advisory Board (RAB) defines the RAID levels, commercial implementation of RAID levels by different vendors may vary from the actual RAID specifications.

In addition to the RAID level, a RAID storage subsystem may employ other features such as a hot spare or consistency check to ensure the continuous availability and accuracy of data in the event of a disk failure. A RAID storage subsystem may include the following features:

Different RAID levels provide varying degrees of improved reliability and performance. For more information, see Choosing RAID Levels.

Choosing RAID Levels

RAID defines different methods (RAID levels) of organizing data storage on multiple array disks.

The following sections provide specific information on how each RAID level stores data on the array disks as well as performance characteristics.

RAID Level 0 (Striping)

RAID 0 uses data striping, which is writing data in equal-sized segments across the array disks. RAID 0 does not provide data redundancy.

Striping Disks
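
The following Python sketch shows the basic striping arithmetic: consecutive equal-sized chunks rotate across the member disks, and nothing is duplicated. It is a generic RAID 0 illustration, not the layout used by any particular controller.

  # Generic RAID 0 mapping: logical chunk number -> (member disk, stripe row).
  def raid0_locate(chunk, disks):
      return chunk % disks, chunk // disks

  disks = 4
  for chunk in range(8):
      print(chunk, "->", raid0_locate(chunk, disks))
  # Chunks 0-3 fall on disks 0-3 in stripe row 0, chunks 4-7 in row 1, and
  # so on. Because no chunk is duplicated, there is no redundancy.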

RAID 0 Characteristics:
Note The PERC 2/DC and PERC 2/SC controllers are limited to eight drives when using RAID 0, RAID 3, or RAID 5.
Related Information:

RAID Level 1 (Mirroring)

RAID 1 is the simplest form of maintaining redundant data. In RAID 1, data is mirrored or duplicated on one or more drives. If one drive fails, then the data can be rebuilt using the mirror.

Mirroring Disks
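
A minimal Python sketch of the mirroring idea follows: every write is duplicated on both members, and either member can satisfy reads or rebuild its partner. It is illustrative only and does not represent controller firmware.

  # Minimal RAID 1 illustration: two members hold identical data.
  primary, mirror = {}, {}

  def write(block, data):
      primary[block] = data                           # every write goes to both members
      mirror[block] = data

  def read(block):
      return primary.get(block, mirror.get(block))    # either copy will do

  write(7, b"payload")
  primary.clear()                                     # simulate losing one member
  rebuilt = dict(mirror)                              # surviving copy rebuilds the failed one
  print(read(7), rebuilt[7])                          # data remains available throughout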

RAID 1 Characteristics:
Related Information:

RAID Level 3 (Striping with a dedicated parity drive)

RAID 3 provides data redundancy by using data striping in combination with parity information. Data is striped across the array disks, with one disk dedicated to parity information. If a drive fails, its data can be reconstructed from the remaining data and the parity information.

Striping Disks with Dedicated Parity Drive
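
Parity-based redundancy relies on the property that the XOR of the surviving segments and the parity recovers a missing segment. The Python sketch below shows this with a single dedicated parity value, as in RAID 3; the byte values are arbitrary and the sketch is conceptual only.

  from functools import reduce
  from operator import xor

  # One stripe: data segments on the data disks plus a parity segment on
  # the dedicated parity disk.
  data = [0x0F, 0xA5, 0x3C]
  parity = reduce(xor, data)

  # If one data disk fails, XOR of the survivors and the parity restores it.
  failed_index = 1
  survivors = [d for i, d in enumerate(data) if i != failed_index]
  recovered = reduce(xor, survivors + [parity])
  print(hex(recovered))                     # 0xa5, the lost segment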

RAID 3 Characteristics:
Note With RAID-3, parity is usually stored on a single dedicated array disk. The PowerVault 660F storage system, however, implements RAID-3 in the same manner as RAID-5, with parity distributed across multiple disks.
Related Information:

RAID Level 5 (Striping with distributed parity)

Similar to RAID 3, RAID 5 provides data redundancy by using data striping in combination with parity information. Rather than dedicating a drive to parity, however, the parity information is striped across all disks in the array.

Striping Disks with Distributed Parity Drive
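
To illustrate how the parity is distributed rather than dedicated, the Python sketch below rotates the parity position from one stripe row to the next. The rotation scheme shown is one common convention, not necessarily the layout used by the PERC or PowerVault firmware.

  # Generic RAID 5 layout sketch: the parity chunk rotates across the
  # member disks from one stripe row to the next.
  disks = 4

  def parity_disk(stripe_row):
      return (disks - 1 - stripe_row) % disks   # simple rotating convention

  for row in range(4):
      layout = ["P" if d == parity_disk(row) else "D" for d in range(disks)]
      print("stripe", row, layout)
  # Every disk takes a turn holding parity, so no single disk becomes the
  # parity bottleneck that a dedicated parity drive can be under RAID 3.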

RAID 5 Characteristics:
Note The PERC 2/DC and PERC 2/SC controllers are limited to eight drives when using RAID 0, RAID 3, or RAID 5.
Related Information:

RAID Level 50 (Concatenated distributed parity)

RAID 50 is a concatenation of RAID 5 across more than one span of three or more drives. For example, a RAID 5 array implemented with three drives that then continues on with three more array drives would be a RAID 50 array.

It is possible to implement RAID 50 even when the hardware does not directly support it. In this case, you can implement more than one RAID 5 virtual disk, convert the RAID 5 virtual disks to dynamic disks, and then create a dynamic volume that spans all of the RAID 5 virtual disks.

RAID 50
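
A quick capacity calculation illustrates the cost of RAID 50: each RAID 5 span gives up one disk's worth of space to parity, and the spans are then concatenated. The disk size and span sizes in the Python sketch below are illustrative only.

  # Illustrative RAID 50 capacity arithmetic.
  disk_size_gb = 36
  spans = [3, 3]                 # two three-drive RAID 5 spans, as in the example

  usable = sum((n - 1) * disk_size_gb for n in spans)   # one disk per span goes to parity
  raw = sum(spans) * disk_size_gb
  print(usable, "GB usable of", raw, "GB raw")          # 144 GB usable of 216 GB raw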

RAID 50 Characteristics:
Related Information:

RAID Level 10 (Striping over mirror sets)

The RAID Advisory Board considers RAID Level 10 to be an implementation of RAID level 1. RAID 10 combines mirrored drives (RAID 1) with data striping (RAID 0). With RAID 10, data is mirrored onto pairs of drives, and the data is then striped across the mirrored pairs. RAID 10 can be considered a stripe of mirrors.

Striping Over Mirror Sets
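
The following Python sketch illustrates the stripe-of-mirrors idea: chunks are striped across the mirror pairs, and each chunk is written to both disks of its pair. It is a generic illustration, not the layout used by any particular controller.

  # Generic RAID 10 sketch: chunks stripe across mirrored pairs, and each
  # chunk is written to both members of its pair.
  pairs = 3                                 # six physical disks in all

  def raid10_members(chunk):
      pair = chunk % pairs                  # striping across the pairs
      return (2 * pair, 2 * pair + 1)       # both disks of that mirror pair

  for chunk in range(4):
      print(chunk, "-> disks", raid10_members(chunk))
  # Losing one disk in any pair leaves its mirror partner to serve the data.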

Note This RAID level is used only with PERC 2, PERC 2/Si, PERC 3/Si, and PERC 3/Di controllers.
RAID 10 Characteristics:
Note On the PERC 2/SC, 2/DC, 3/SC, 3/DCL, 3/DC, 3/QC, 4/SC, 4/DC, 4/Di, and CERC ATA100/4ch controllers, RAID-10 is implemented as RAID-1 Concatenated. See the next topic RAID Level 1-Concatenated (Concatenated mirror).
Related Information:

RAID Level 1-Concatenated (Concatenated mirror)

RAID-10 on PERC 2/SC, 2/DC, 3/SC, 3/DCL, 3/DC, 3/QC, 4/SC, 4/DC, 4/Di, and CERC ATA100/4ch controllers is implemented as RAID-1 Concatenated.

RAID-1 Concatenated is a RAID-1 array that spans more than a single pair of array disks. This combines the advantages of concatenation with the redundancy of RAID-1. No striping is involved in this RAID type.

Also, RAID-1 Concatenated can be implemented on hardware that supports only RAID-1 by creating multiple RAID-1 virtual disks, upgrading the virtual disks to dynamic disks, and then using spanning to concatenate all of the RAID-1 virtual disks into one large dynamic volume.

RAID 1-Concatenated
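
In contrast with RAID 10, no striping is involved: the mirrored pairs are simply filled one after another. The short Python sketch below is purely illustrative, and the pair capacities are arbitrary.

  # RAID-1 Concatenated sketch: mirrored pairs are filled sequentially
  # (concatenation) rather than striped.
  pair_capacity_chunks = [100, 100, 150]    # capacity of each mirrored pair

  def locate_pair(chunk):
      remaining = chunk
      for pair, capacity in enumerate(pair_capacity_chunks):
          if remaining < capacity:
              return pair                   # both disks of this pair hold it
          remaining -= capacity
      raise ValueError("chunk is beyond the end of the volume")

  print(locate_pair(50))     # 0: still filling the first mirrored pair
  print(locate_pair(220))    # 2: the third pair, after pairs 0 and 1 are full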

Related Information:

RAID Level 0+1 (Striping over mirror sets)

The RAID Advisory Board considers RAID Level 0+1 to be an implementation of RAID level 1. RAID 0+1 combines data striping (RAID 0) with mirrored drives (RAID 1). With RAID 0+1, data is striped across multiple drives and their mirrors. RAID 0+1 can be considered a stripe of mirrors.

Striping Over Mirror Sets

Note In Array Manager, RAID 0+1 is only used with the PowerVault 660F storage system. This RAID level also allows you to use an odd number of disks.
RAID 0+1 Characteristics:
Related Information:

Comparing RAID Level Performance

The following table compares the performance characteristics associated with the more common RAID levels. This table provides general guidelines for choosing a RAID level. Keep in mind the needs of your particular environment when choosing a RAID level.

Note The following table does not show all RAID levels supported by Array Manager. For information on all RAID levels supported by Array Manager, see Choosing RAID Levels.

RAID Level Performance Comparison

RAID Level | Data Availability | Read Performance | Write Performance | Rebuild Performance | Minimum Disks Required | Suggested Uses
RAID 0 | None | Very Good | Very Good | N/A | N | Noncritical data
RAID 1 | Excellent | Very Good | Good | Good | 2N x X | Small databases, database logs, critical information
RAID 3 | Good | Sequential reads: very good; transactional reads: poor | Sequential writes: good; transactional writes: poor | Fair | N + 1 (N = at least two disks) | Single-user, data-intensive environments such as video image processing
RAID 5 | Good | Sequential reads: good; transactional reads: very good | Fair, unless using write-back cache | Poor | N + 1 (N = at least two disks) | Databases and other read-intensive transactional uses
RAID 10 | Excellent | Very Good | Fair | Good | 2N x X | Data-intensive environments (large records)
RAID 30 | Excellent | Very Good | Fair | Fair | N + 2 (N = at least 4) | Medium-sized transactional or data-intensive uses
RAID 50 | Excellent | Very Good | Fair | Fair | N + 2 (N = at least 4) | Medium-sized transactional or data-intensive uses

N = Disk space required for the data
X = Number of RAID sets

