SAS RAID Cards ARC-1680 Series (PCIe to SAS RAID Controllers) User's Manual Version: 1.
Copyright and Trademarks The product information in this manual is subject to change without prior notice and does not represent a commitment on the part of the vendor, who assumes no liability or responsibility for any errors that may appear in this manual. All brands and trademarks are the properties of their respective owners. This manual contains materials protected under International Copyright Conventions. All rights reserved.
Contents
1. Introduction ... 10
1.1 Overview ... 10
1.2 Features ... 12
2. Hardware Installation ... 16
2.1 Before You Begin Installing ... 16
2.2 Board Layout ...
• Volume Name ... 62
• Raid Level ... 63
• Capacity ... 63
• Stripe Size ... 64
• SCSI Channel ... 65
• SCSI ID ...
3.6.7.4 Time To Spin Down Idle Hdd (Minutes) ... 89
3.7.7 Ethernet Configuration ... 89
3.7.7.1 DHCP Function ... 90
3.7.7.2 Local IP address ... 91
3.7.7.3 HTTP Port Number ... 91
3.7.7.4 Telnet Port Number ... 92
3.7.7.5 SMTP Port Number ...
(Out-of-Band) ... 118
6.2 SAS RAID controller McRAID Storage Manager ... 119
6.3 Main Menu ... 120
6.4 Quick Function ... 120
6.5 Raid Set Functions ... 121
6.5.1 Create Raid Set ... 121
6.5. ...
• HDD Read Ahead Cache ... 137
• Volume Data Read Ahead ... 137
• HDD Queue Depth ... 137
• Empty HDD Slot LED ... 137
• SES2 Support ... 137
• SAS Mux Setting (ARC-1680 Only) ... 137
• Auto Activate Incomplete Raid ...
Appendix C ... 155
SNMP Operation & Installation ... 155
Appendix D ... 162
Event Notification Configurations ... 162
A. Device Event ... 162
B. Volume Event ... 163
C. RAID Set Event ...
INTRODUCTION 1. Introduction This section presents a brief overview of the SAS RAID controller, ARC-1680 series (PCIe to SAS RAID controllers). 1.1 Overview SAS builds on parallel SCSI by providing higher performance, improving data availability, and simplifying system design. The SAS interface supports both SAS disk drives for data-intensive applications and Serial ATA (SATA) drives for low-cost bulk storage of reference data.
A DIMM socket carries a default 512MB of ECC DDR2-533 SDRAM, upgradeable to 4GB, with an optional battery backup module. Test results show superior overall performance compared to other SAS RAID controllers. Intel's powerful new I/O processor integrates eight SAS ports on chip and delivers high performance for servers and workstations.
Easy RAID Management The controllers contain an embedded McBIOS RAID manager that can be accessed via a hot key at the motherboard BIOS boot-up screen. This pre-boot McBIOS RAID manager can be used to simplify the setup and management of the RAID controller. The controller firmware also contains a browser-based McRAID storage manager which can be accessed through the Ethernet port or the ArcHttp proxy server in Windows, Linux, FreeBSD and other environments.
• Online capacity expansion and RAID level migration simultaneously
• Online volume set growth
• Instant availability and background initialization
• Automatic drive insertion/removal detection and rebuilding
• Greater than 2TB per volume set (64-bit LBA support)
• Support for spinning down drives when not in use, to extend service life (MAID)
• Support for the NTP protocol to synchronize the RAID controller clock over the on-board Ethernet port

Monitors/Notification
• System status indication through global HDD activ
• Novell Netware 6.5
• Solaris 10 x86/x86_64
• SCO UnixWare 7.1.4
• Mac OS 10.x (EFI BIOS support)
(For the latest supported OS listing, visit http://www.areca.com.tw)

SAS RAID card. Model name: ARC-1680ix-12. I/O Processor: ... Full Height: 98.4(H) x 237.
SAS RAID card. Model names: ARC-1680ix-8, ARC-1680IXL-12, ARC-1680IXL-16
• I/O Processor: Intel IOP348 1200MHz
• Form Factor: 69(H) x 210(L) mm (ix-8); 69(H) x 240(L) mm (IXL-12/16)
• Host Bus Type: PCIe x8 lanes
• Drive Connector: 2xSFF-8087 + 1xSFF-8088 (ix-8); 3xSFF-8087 + 1xSFF-8088 (IXL-12); 4xSFF-8087 + 1xSFF-8088 (IXL-16)
• Drive Support: Up to 128 SAS/SATA HDDs
• RAID Level: 0, 1, 10(1E), 3, 5, 6, 30, 50, 60, Single Disk, JBOD
• On-Board Cache: 512MB on-board DDR2-533 SDRAM
• Management Port: In-Band: PCIe; Out-of-Band: BIOS, LCD, LAN Port
• Enclosure Ready: I
HARDWARE INSTALLATION 2. Hardware Installation This section describes the procedures for installing the SAS RAID controllers. 2.1 Before You Begin Installing Thank you for purchasing the SAS RAID controller as your RAID data storage subsystem. This user manual gives simple step-by-step instructions for installing and configuring the SAS RAID controller. To ensure personal safety and to protect your equipment and data, read the following information and the package list carefully before you begin installing.
Figure 2-1, ARC-1680ix-12/16/24 SAS RAID Controller

Connector: Type/Description
1. (J3) Battery Backup Module Connector: 12-pin box header
2. (J4) RS232 Port for CLI to configure the expander functions on the RAID controller (*1): RJ11 connector
3. (CN1) SAS 25-28 Ports (External): Min SAS 4x
4. (J10) Ethernet Port: RJ45
5. (J7) Manufacture Purpose Port: 10-pin header
6. (J9) Individual Fault LED Header: 24-pin header
7. : 24-pin header
Figure 2-2, ARC-1680ix-8 Internal/External SAS RAID Controller

Connector: Type/Description
1. Battery Backup Module Connector
2. (SCN2) SAS 9-12 Ports (External)
3. (J4) Ethernet Port
4. (J5) Individual Activity (HDD) LED Header: 8-pin header
5. (J8) Individual Fault LED Header: 8-pin header
6. (J6) Global Fault/Activity LED: 4-pin header
7. (J3) I2C/LCD Connector
8. (J1) Manufacture Purpose Port
9. (SCN1) SAS 1-4 Ports (Internal): Min SAS 4i
10.
Figure 2-3, ARC-1680LP SAS RAID Controller

Connector: Type/Description
1. (J2) Battery Backup Module Connector: 12-pin box header
2. (J1) Manufacture Purpose Port: 10-pin header
3. (J6) Global Fault/Activity LED: 4-pin header
4. (J3) I2C/LCD Connector: 8-pin header
5. (J5) Individual Fault/Activity LED Header: 8-pin header
6. (SCN1) SAS 1-4 Ports (Internal): Min SAS 4i
7. (SCN2) SAS 5-8 Ports (External): Min SAS 4x
8.
Figure 2-4, ARC-1680i SAS RAID Controller

Connector: Type/Description
1. (J4) Ethernet Port: RJ45
2. (JP2) Individual Fault LED Header: 4-pin header
3. (J5) Individual Activity (HDD) LED Header: 4-pin header
4. (J6) Global Fault/Activity LED: 4-pin header
5. (J2) Battery Backup Module Connector: 12-pin box header
6. (J1) Manufacture Purpose Port: 10-pin header
7. (J3) I2C/LCD Connector
8. (SCN1) SAS 1-4 Ports (Internal): Min SAS 4i
9.
Front Side / Back Side
Figure 2-5, ARC-1680IXL-12/16 SAS RAID Controller

Connector: Type/Description (Front Side)
1. (SCN5) SAS 5-8 Ports (External): Min SAS 4x
2. (J4) Ethernet port: RJ45
3. (J1) Manufacture Purpose Port: 10-pin header
4. (JP1) RS232 Port for CLI to configure the expander functions: 10-pin header
5. (J2) Battery Backup Module Connector: 12-pin box header
6. (J6) Global Fault/Activity LED: 4-pin header
7. (J8) Individual Fault LED Header: 8-pin header
8.
Figure 2-6, ARC-1680x SAS RAID Controller

Connector: Type/Description
1. (J1) Manufacture Purpose Port: 10-pin header
2. (J4) Signal for Ethernet Daughterboard: 10-pin header
3. (J2) Battery Backup Module Connector: 12-pin box header
4. (J3) I2C/LCD Connector: 8-pin header
5. (J7) Ethernet Port: RJ45
6. (SCN1) SAS 1-4 Ports (External): Min SAS 4x
7.
Tools Required
An ESD grounding strap or mat is required. Also required are standard hand tools to open your system's case.

System Requirement
The SAS RAID controller can be installed in a universal PCIe slot. The ARC-1680 series SAS RAID controller requires a motherboard that:
• Complies with PCIe x8. The card can work at PCIe x1, x4, and x8 signal rates in an x8 or x16 slot.
• Always wear a grounding strap or work on an ESD-protective mat. • Before opening the system cover, turn off power switches and unplug the power cords. Do not reconnect the power cords until you have replaced the covers. Electrostatic Discharge Static electricity can cause serious damage to the electronic components on this SAS RAID controller.
Step 4. Install the PCIe SAS RAID Cards To install the SAS RAID controller, remove the mounting screw and existing bracket from the rear panel behind the selected PCIe slot. Align the gold-fingered edge on the card with the selected PCIe slot. Press down gently but firmly to ensure that the card is properly seated in the slot, as shown in Figure 2-7. Then, screw the bracket into the computer chassis. ARC-1680 series cards require a PCIe x8 slot.
In the backplane solution, SAS/SATA drives are connected directly to the SAS system backplane or through an expander board. The number of SAS/SATA drives is limited to the number of slots available on the backplane. Some backplanes support daisy-chain expansion to further backplanes. The SAS RAID controller can support a daisy chain of up to 8 enclosures. The maximum number of drives is 128 devices through 8 enclosures.
Step 6. Install SAS Cable This section describes how to connect the SAS cables to the controller.
Step 7. Install the LED Cable (option) The preferred I/O connector for server backplanes is the Min SAS 4i internal connector. This connector has eight signal pins to support four SAS/SATA drives and six pins for the SGPIO (Serial General Purpose Input/Output) side-band signals. The SGPIO bus is used for efficient LED management and for sensing drive locate status. See SFF-8485 for the specification of the SGPIO bus. For backplanes without SGPIO support, please refer to Section 2.
Step 9. Re-check Fault LED Cable Connections (optional) Be sure that the proper failed-drive channel information is displayed by the fault LEDs. An improper connection will prompt the user to hot-swap the wrong drive. This can result in removing a wrong disk (one that is functioning properly) from the controller, which can lead to failure and loss of system data. Step 10.
Step 13. Configure Volume Set The controller configures RAID functionality through the McBIOS RAID manager. Please refer to Chapter 3, McBIOS RAID Manager, for details. The RAID controller can also be configured through the McRAID storage manager with the ArcHttp proxy server installed, through the LCD module (refer to the LCD manual), or through the on-board LAN port. For these options, please refer to Chapter 6, Web Browser-Based Configuration. Step 14.
(2). Ghost (clone) the Mac OS X 10.4.x or 10.5 system disk on the Mac Pro to the Areca external PCIe SAS RAID adapter volume set, using a utility such as Carbon Copy Cloner. Carbon Copy Cloner is an archival type of backup software: you can take your whole Mac OS X system and make a carbon copy, or clone, to an Areca volume set as with any other hard drive. You can also directly install Mac OS X 10.5 Leopard to the Areca Intel IOP-based volume set without using a ghost utility. (3).
Figure 2-11, Internal Min SAS 4i to 4x SATA Cable

2.4.2 Internal Min SAS 4i to 4xSFF-8482 Cable
These controllers can be installed in a server RAID enclosure without a backplane. This kind of cable attaches directly to the SAS disk drives. The following diagram shows the Min SAS 4i to 4xSFF-8482 cable.
2.4.3 Internal Min SAS 4i to Internal Min SAS 4i Cable
The SAS RAID controllers have 1-6 Min SAS 4i internal connectors, each of which can support up to four SAS/SATA signals. These adapters can be installed in a server RAID enclosure with a Min SAS 4i internal-connector backplane. This Min SAS 4i cable has eight signal pins to support four SAS/SATA drives and six pins for the SGPIO (Serial General Purpose Input/Output) side-band signals.
2.5 LED Cables
Most older SATA backplanes have no SGPIO support. The SAS controller therefore provides two kinds of alternative LED cable headers to supply fault/activity status to those backplanes. The Global Indicator Connector is used by the server's global indicator LED. The following electronic schematic shows the logic of the SAS RAID controller's fault/activity header. The signal for each pin is the cathode (-) side.
Fault LED behavior:
• Normal status: when the fault LED is solidly illuminated, no disk is present. When the fault LED is off, a disk is present and its status is normal.
• Problem indication: when the fault LED blinks slowly (2 times/sec), that disk drive has failed and should be hot-swapped immediately. When the activity LED is illuminated and the fault LED blinks fast (10 times/sec), there is rebuild activity on that disk drive.
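The LED semantics above can be captured in a small lookup table. The sketch below is illustrative only; the state labels ("solid", "blink_2hz", and so on) are hypothetical names chosen for this example, not values read from the controller.

```python
# Map (fault LED, activity LED) patterns to drive status, per the
# behavior described above. State labels are hypothetical, not firmware values.
LED_STATES = {
    ("solid", "any"): "No disk present in this slot",
    ("off", "any"): "Disk present, status normal",
    ("blink_2hz", "any"): "Drive failed - hot-swap immediately",
    ("blink_10hz", "on"): "Rebuild activity on this drive",
}

def describe(fault_led, activity_led="any"):
    """Return the drive status implied by an LED pattern."""
    return (LED_STATES.get((fault_led, activity_led))
            or LED_STATES.get((fault_led, "any"))
            or "Unknown pattern")

print(describe("blink_2hz"))        # Drive failed - hot-swap immediately
print(describe("blink_10hz", "on"))  # Rebuild activity on this drive
```

A table like this is also a convenient reference to post near the server rack when diagnosing a blinking slot.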
Figure 2-17, ARC-1680LP individual LED for each channel drive and global indicator connector for computer case.
Figure 2-18, ARC-1680i individual LED for each channel drive and global indicator connector for computer case.
Figure 2-19, ARC-1680IXL-12/16 individual LED for each channel drive and global indicator connector for computer case.
B: I2C Connector You can also connect the I2C interface to a proprietary SAS/SATA backplane enclosure. This can reduce the number of activity LED and/or fault LED cables. The I2C interface can also cascade to another SAS/SATA backplane enclosure for the additional channel status display. Figure 2-20, Activity/Fault LED I2C connector connected between SAS RAID Controller & 4 SATA HDD backplane.
C: SGPIO Bus The preferred I/O connector for server backplanes is the Min SAS 4i (SFF-8087) internal serial-attachment connector. This connector has eight signal pins to support four SATA drives and six pins for the SGPIO (Serial General Purpose Input/Output) sideband signals, which are used to replace individual LED cables. The SGPIO bus is used for efficient LED management and for sensing drive locate status. See SFF-8485 for the specification of the SGPIO bus.
The following table defines the signals of the sideband connector, which works with the Areca sideband cable on its SFF-8087 to 4x SATA cable. The sideband header is located on the backplane. For SGPIO to work properly, please connect the Areca 8-pin sideband cable to the sideband header as shown above. See the table for pin definitions.

2.6 Hot-plug Drive Replacement
The RAID controller supports hot-swap drive replacement without powering down the system.
Note: The capacity of the replacement drive must be at least as large as the capacity of the other drives in the RAID set. Drives of insufficient capacity will be failed immediately by the RAID adapter without starting the automatic data rebuild.

2.7 Summary of the Installation
The flow chart below describes the installation procedures for SAS RAID controllers.
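The capacity rule in the note above can be expressed as a quick check. This is a sketch of the rule as stated, not controller code; the function name is ours.

```python
def replacement_ok(replacement_gb, member_capacities_gb):
    """A replacement drive is accepted only if it is at least as large
    as the smallest existing member of the RAID set; otherwise the
    adapter fails it immediately and no automatic rebuild starts."""
    return replacement_gb >= min(member_capacities_gb)

# A 400GB drive cannot replace a member of a set built from 500.1GB drives:
print(replacement_ok(400.0, [500.1, 500.1, 500.1]))  # False
print(replacement_ok(500.1, [500.1, 500.1, 500.1]))  # True
```

In practice, buying replacement drives of the same model (or slightly larger) avoids the small capacity differences between vendors that can trip this check.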
McRAID Storage Manager Before launching the firmware-embedded web server (McRAID storage manager) through the PCIe bus, you must first install the ArcHttp proxy server on your server system.
BIOS CONFIGURATION 3. McBIOS RAID Manager The system mainboard BIOS automatically configures the following SAS RAID controller parameters at power-up: • I/O Port Address • Interrupt Channel (IRQ) • Adapter ROM Base Address Use McBIOS RAID manager to further configure the SAS RAID controller to suit your server hardware and operating system. 3.1 Starting the McBIOS RAID Manager This section explains how to use the McBIOS RAID manager to configure your RAID system.
Areca Technology Corporation RAID Setup

Select An Adapter To Configure: (001/0/0) I/O=28000000h, IRQ=9
ArrowKey or AZ: Move Cursor, Enter: Select, **** Press F10 (Tab) to Reboot ****

Use the Up and Down arrow keys to select the controller you want to configure. While the desired controller is highlighted, press the Enter key to enter the main menu of the McBIOS RAID manager.
• Add physical drives
• Define volume sets
• Modify volume sets
• Modify RAID level/stripe size
• Define pass-through disk drives
• Modify system functions
• Designate drives as hot spares

3.3 Configuring Raid Sets and Volume Sets
You can configure RAID sets and volume sets with the McBIOS RAID manager either automatically, using "Quick Volume/Raid Setup", or manually, using "Raid Set/Volume Set Function". Each configuration method requires a different level of user input.
3.5 Using Quick Volume /Raid Setup Configuration “Quick Volume / Raid Setup configuration” collects all available drives and includes them in a RAID set. The RAID set you created is associated with exactly one volume set. You will only be able to modify the default RAID level, stripe size and capacity of the new volume set. Designating drives as hot spares is also possible in the “Raid Level” selection option.
3 After highlighting the desired RAID level and pressing the Enter key, the capacity for the current volume set is displayed. Use the UP and DOWN arrow keys to set the capacity of the volume set and press the Enter key to confirm. The available stripe sizes for the current volume set are then displayed. 4 Use the UP and DOWN arrow keys to select the stripe size for the current volume set and press the Enter key to confirm.
Step Action 1 To setup the hot spare (option), choose “Raid Set Function” from the main menu. Select the “Create Hot Spare” and press the Enter key to define the hot spare. 2 Choose “RAID Set Function” from the main menu. Select “Create Raid Set” and press the Enter key. 3 The “Select a Drive For Raid Set” window is displayed showing the SAS/ SATA drives connected to the SAS RAID controller. 4 Press the UP and DOWN arrow keys to select specific physical drives.
10 Choose "Foreground (Fast Completion)" and press the Enter key for fast initialization, or select "Background (Instant Available)" or "No Init (To Rescue Volume)". With background initialization, the initialization proceeds as a background task and the volume set is fully accessible for system reads and writes. The operating system can instantly access the newly created arrays without requiring a reboot and without waiting for the initialization to complete.
Option: Description
• Quick Volume/Raid Setup: Create a default configuration based on the number of physical disks installed
• Raid Set Function: Create a customized RAID set
• Volume Set Function: Create a customized volume set
• Physical Drives: View individual disk information
• Raid System Function: Setup the RAID system configuration
• Hdd Power Management: Manage HDD power based on usage patterns
• Ethernet Configuration: Ethernet LAN setting
• View System Events: Record all system events i
4. If you need to add an additional volume set, use the main menu "Create Volume Set" function. The total number of physical drives in a specific RAID set determines the RAID levels that can be implemented within the RAID set. Select "Quick Volume/Raid Setup" from the main menu; all possible RAID levels will be displayed on the screen.
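The relationship between drive count, selectable RAID levels, and resulting usable capacity can be sketched as below. The minimum-drive rules follow common RAID definitions and this manual's supported-level list; they are an approximation for illustration, not a dump of the firmware's exact logic.

```python
def available_raid_levels(n):
    """Approximate RAID levels selectable for n identical drives
    in a single RAID set (common minimum-drive rules)."""
    levels = []
    if n >= 1:
        levels += ["Single Disk", "JBOD", "0"]
    if n >= 2:
        levels += ["1"]
    if n >= 3:
        levels += ["10(1E)", "3", "5"]
    if n >= 4:
        levels += ["6"]
    return levels

def usable_capacity(n, drive_gb, level):
    """Approximate usable capacity for common levels (ignores metadata)."""
    factor = {"0": n, "1": n / 2, "10(1E)": n / 2,
              "3": n - 1, "5": n - 1, "6": n - 2}[level]
    return drive_gb * factor

print(available_raid_levels(3))        # RAID 5 becomes available at 3 drives
print(usable_capacity(5, 500.0, "5"))  # 2000.0 (one drive's worth of parity)
```

For example, five 500GB drives in RAID 5 yield roughly 2000GB usable, while the same drives in RAID 6 yield roughly 1500GB but survive two drive failures.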
This option works on operating systems which support 16-byte CDBs, such as:
• Windows 2003 with SP1
• Linux kernel 2.6.x or later

• Use 4K Block
This changes the sector size from the default 512 bytes to 4K bytes, raising the maximum volume capacity to 16TB. This option works under the Windows platform only, and the volume cannot be converted to a "Dynamic Disk", because the 4K sector size is not a standard format. For more details, please download the PDF file from ftp://ftp.areca.com.
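Both limits above follow from 32-bit LBA addressing: maximum capacity is 2^32 sectors times the sector size, so 512-byte sectors give the classic 2TB ceiling and 4K sectors give 16TB, while 64-bit LBA (16-byte CDBs) removes the ceiling entirely. A quick check:

```python
def max_volume_bytes(sector_size, lba_bits=32):
    """Maximum addressable volume: 2**lba_bits sectors of sector_size bytes."""
    return (2 ** lba_bits) * sector_size

TB = 1024 ** 4
print(max_volume_bytes(512) // TB)         # 2  -> the classic 2TB limit
print(max_volume_bytes(4096) // TB)        # 16 -> with "Use 4K Block"
print(max_volume_bytes(512, 64) > 2 * TB)  # True -> 64-bit LBA lifts the limit
```

This is why the two workarounds are alternatives: one widens the address field (64-bit LBA), the other widens each addressed unit (4K sectors).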
[McBIOS screen: Main Menu, Quick Volume/Raid Setup, showing Available Capacity 2400.0GB and Selected Capacity 2400.0GB]
Select "Foreground (Faster Completion)" or "Background (Instant Available)" for initialization, or "No Init (To Rescue Volume)" to recover a missing RAID set configuration.

[McBIOS screen: Quick Volume/Raid Setup, Initialization Mode selection, Available/Selected Capacity 2400.0GB]
3.7.2.1 Create Raid Set
The following are the RAID set features of the SAS RAID controller:
1. Up to 32 disk drives can be included in a single RAID set.
2. Up to 128 RAID sets can be created per controller, but RAID levels 30, 50 and 60 can only support eight sub-volumes (RAID sets).

To define a RAID set, follow the procedure below:
1. Select "Raid Set Function" from the main menu.
2. Select "Create Raid Set" from the "Raid Set Function" dialog box.
3.
Note: To create a RAID 30/50/60 volume, you must first create multiple RAID sets (up to 8 RAID sets) with the same number of disk members in each RAID set. The maximum number of disk drives per volume set is 32 for RAID 0/1/10/3/5/6 and 128 for RAID 30/50/60.
3.7.2.3 Expand Raid Set

[McBIOS screen: Raid Set Function, Expand Raid Set, selecting drives for RAID set expansion (e.g. E#1 Slot#2, 500.1GB, HDS725050KLA360), with an "Are you Sure?" confirmation]
Note: 4. RAID set expansion is a critical process; we strongly recommend backing up your data before expanding. An unexpected accident may cause serious data corruption.
The following screen is used to activate a RAID set after one of its disk drives was removed in the power-off state. When one of the disk drives is removed in the power-off state, the RAID set state changes to "Incomplete State". If the user wants to continue to work once the SAS RAID controller is powered on, the user can use the "Activate Incomplete Raid Set" option to activate the RAID set. After the user selects this function, the RAID state changes to "Degraded Mode" and the set starts to work.
[McBIOS screen: Raid Set Function, Delete Hot Spare, selecting the hot spare device to be deleted (E#1 Slot#3, 500.1GB)]
set. If multiple volume sets reside on a specified RAID set, all volume sets will reside on all physical disks in the RAID set. Thus each volume set on the RAID set will have its data spread evenly across all the disks in the RAID set rather than one volume set using some of the available disks and another volume set using other disks.
3.7.3.
5. After completing the modification of the volume set, press the Esc key to confirm it. An "Initialization Mode" screen appears.
• Select "Foreground (Faster Completion)" for faster initialization of the selected volume set.
• Select "Background (Instant Available)" for normal initialization of the selected volume set.
• Select "No Init (To Rescue Volume)" for no initialization of the selected volume.
• Raid Level Set the RAID level for the volume set. Highlight RAID Level and press the Enter key. The available RAID levels for the current volume set are displayed. Select a RAID level and press the Enter key to confirm.
If the volume capacity will exceed 2TB, the controller will show the "Greater Two TB Volume Support" sub-menu.
[McBIOS screen: Volume Set Function, Create Volume Set, Volume Creation menu: Volume Name ARC-1680-VOL#00, Raid Level 5, Capacity 2400.0GB]
[McBIOS screen: Volume Creation menu, continued: Volume Name ARC-1680-VOL#00, Raid Level 5, Capacity 2400.0GB]
[McBIOS screen: Volume Creation menu, further options: Volume Name ARC-1680-VOL#00, Raid Level 5, Capacity 2400.0GB]
The new volume set attribute options allow users to select the Volume Name, Capacity, RAID Level, Stripe Size, SCSI ID/LUN, Cache Mode, and Tagged Command Queuing. Detailed descriptions of those parameters can be found in section 3.7.3.1. Users can modify the default values on this screen; the modification procedures are in section 3.7.3.4.
[McBIOS screen: Volume Set Function, Delete Volume Set, selecting volume ARC-1680-VOL#00 (Raid Set #00), with Yes/No confirmation]
3.7.3.4.1 Volume Growth
Use the "Expand Raid Set" function to add disks to a RAID set. The additional capacity can be used to enlarge the last volume set's size or to create another volume set. The "Modify Volume Set" function supports the "Volume Modification" function. To expand the last volume set's capacity, move the cursor bar to the "Capacity" item and enter the capacity size. When finished, press the ESC key and select the Yes option to complete the action.
[McBIOS screen: Volume Set Information display: Volume Set Name ARC-1680-VOL#00, Raid Set Name Raid Set #00, Capacity 1200.0GB, ...]
3.7.3.6 Stop Volume Set Check
Use this option to stop all "Check Volume Set" operations.

3.7.3.7 Display Volume Set Info.
To display volume set information, move the cursor bar to the desired volume set number and then press the Enter key. The "Volume Set Information" screen will be shown. You can only view the information of the volume set on this screen; you cannot modify it.
Choose this option from the main menu to select a physical disk and perform the operations listed above. Move the cursor bar to an item, then press the Enter key to select the desired function.

3.7.4.1 View Drive Information
When you choose this option, the physical disks connected to the SAS RAID controller are listed. Move the cursor to the desired drive and press the Enter key to view drive information.
[McBIOS screen: Physical Drive Function, View Drive Information, selecting drive E#1 Slot#2 (500.1GB, HDS725050KLA360)]
To delete a pass-through drive from the pass-through drive pool, move the cursor bar to the "Delete Pass-Through Drive" item, then press the Enter key. The "Delete Pass-Through" confirmation screen will appear; select Yes to delete it.

3.7.4.5 Identify Selected Drive
To prevent removing the wrong drive, the selected disk's fault LED indicator will light up to physically locate the selected disk when "Identify Selected Device" is selected.
[McBIOS screen: Physical Drive Function, Identify Enclosure, selecting the enclosure]
3.7.5.
3.7.5.3 Change Password
The manufacturer default password is set to 0000. The password option allows the user to set or clear the password protection feature. Once the password has been set, the user can monitor and configure the controller only by providing the correct password. This feature is used to protect the internal RAID system from unauthorized access. The controller will check the password only when entering the main menu from the initial screen.
[McBIOS screen: Raid System Function menu with JBOD/RAID Function selected (options: RAID, JBOD)]
tracking mechanisms for outstanding and completed portions of the workload. The SAS RAID controller allows the user to set SATA NCQ support to "Enabled" or "Disabled".
3.7.5.8 Volume Data Read Ahead

The volume data read ahead parameter selects the controller firmware algorithm that processes read-ahead data blocks from the disk. The Read Ahead parameter is “Normal” by default; to modify the value, set it from the command line using the Read Ahead option. The default “Normal” option satisfies the performance requirements for a typical volume. The “Disabled” value implies no read ahead.
[Screen: McBIOS RAID manager — “Raid System Function” menu with the “HDD Queue Depth” options: 1, 2, 4, 8, 16, …]
3.7.5.11 Controller Fan Detection

Included in the product box is a field-replaceable passive heatsink, to be used only if there is enough airflow to cool it adequately. The firmware provides the “Controller Fan Detection” function so that the buzzer warning can be suppressed when no fan is fitted. When using the passive heatsink, disable the “Controller Fan Detection” function through this McBIOS RAID manager setting.
"Internal" in the setup manual then restart the system to set the active channel CH5-8 on the internal port.
3.7.5.14 Disk Write Cache Mode

The user can set the “Disk Write Cache Mode” to Auto, Enabled, or Disabled. “Enabled” increases speed; “Disabled” increases reliability.
No Truncation: It does not truncate the capacity.
3.7.6.1 Stagger Power On

In a PC system with only one or two drives, the power supply can deliver enough current to spin up both drives simultaneously. But in systems with more than two drives, the startup current from spinning up all the drives at once can overload the power supply, causing damage to the power supply, disk drives, and other system components. This damage can be avoided by allowing the host to stagger the spin-up of the drives.
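A rough sketch of why staggering matters. The drive count and currents below are assumed illustration figures, not Areca specifications:

```shell
# Assumed figures: 8 drives, 2.5 A spin-up surge each vs 0.8 A steady state.
# Integer math in tenths of an amp (dA).
DRIVES=8; SPINUP_DA=25; IDLE_DA=8
# All drives spinning up at once: the supply sees the full surge.
echo "all at once: $(( DRIVES * SPINUP_DA )) dA"                   # 200 dA = 20.0 A
# Staggered: at the peak, one drive surges while the rest idle.
echo "staggered peak: $(( (DRIVES - 1) * IDLE_DA + SPINUP_DA )) dA"  # 81 dA = 8.1 A
```

Under these assumptions, staggering cuts the peak demand by more than half, which is exactly the overload the paragraph above warns about.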
3.7.6.2 Time to Hdd Low Power Idle (Minutes)

This option delivers lower power consumption by automatically unloading the recording heads after the configured idle time.
3.6.7.4 Time To Spin Down Idle Hdd (Minutes)

This function can automatically spin down a drive if it hasn't been accessed for a certain amount of time. This value is used by the drive to determine how long to wait (with no disk activity) before turning off the spindle motor to save power.
3.7.7.1 DHCP Function

DHCP (Dynamic Host Configuration Protocol) allows network administrators to centrally manage and automate the assignment of IP (Internet Protocol) addresses on a computer network. When using the TCP/IP protocol (Internet protocol), a computer must have a unique IP address in order to communicate with other computer systems. Without DHCP, the IP address must be entered manually at each computer system.
3.7.7.2 Local IP Address

If you intend to set up your client computers manually (no DHCP), make sure that the assigned IP address is in the same range as the default router address and that it is unique to your private network. However, it is highly recommended to use DHCP if that option is available on your network. An IP address allocation scheme will reduce the time it takes to set up client computers and eliminate the possibility of administrative errors and duplicate addresses.
[Screen: McBIOS RAID manager — “Ethernet Configuration” menu: DHCP Function : Enable, Local IP Address : 192.168.001.…]
3.7.8 View System Events

To view the SAS RAID controller’s system event information, move the cursor bar to the main menu and select the “View System Events” link, then press the Enter key. The SAS RAID controller’s events screen appears. Choose this option to view the system event information: Timer, Device, Event type, Elapsed Time, and Errors. The RAID system does not have a built-in real-time clock; the time information is relative to the time the SAS RAID controller was powered on.
3.7.10 Hardware Monitor

To view the RAID controller’s hardware monitor information, move the cursor bar to the main menu and click the “Hardware Monitor” link. The “Controller H/W Monitor” screen appears. The “Controller H/W Monitor” provides the CPU temperature, controller temperature, voltage and fan speed (I/O Processor fan) of the SAS RAID controller.
DRIVER INSTALLATION

4. Driver Installation

This chapter describes how to install the SAS RAID controller driver for your operating system. The installation procedures use the following terminology:

Installing the operating system on the SAS/SATA volume: if you have a new drive configuration without an operating system and want to install the operating system on a disk drive managed by the SAS RAID controller, the driver installation is a part of the operating system installation.
system installations. Determine the correct kernel version and identify which diskette images contain drivers for that kernel. If the driver file ends in .img, create the appropriate driver diskette using the “dd” utility. The following steps are required to create the driver diskettes: 1. The computer system BIOS must be set to boot up from the CD-ROM. 2. Insert the SATA controller driver CD disc into the CD-ROM drive. 3. The system will boot up from the CD-ROM drive.
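The “dd” step above typically looks like the sketch below. The image filename is an assumption (check the readme on the driver CD for the actual path); the example writes to a temporary file so it can be run anywhere, with the real floppy device noted in a comment:

```shell
# A 1.44 MB floppy image is 1440 KiB = 1474560 bytes; create a dummy one here.
dd if=/dev/zero of=/tmp/driver.img bs=1440k count=1 2>/dev/null
# On a real system the target would be the floppy device, e.g. of=/dev/fd0:
dd if=/tmp/driver.img of=/tmp/floppy.bin bs=1440k count=1 2>/dev/null
stat -c %s /tmp/floppy.bin    # -> 1474560
```

The single-block copy (`bs=1440k count=1`) transfers the whole image in one pass, which is the usual pattern for raw diskette images.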
4.2 Driver Installation for Windows

The SAS RAID controller can be used with Microsoft Windows 2000/XP/2003/Vista. The SAS RAID controllers support SCSI Miniport and StorPort drivers for Windows Server 2003/Vista.

4.2.1 New Storage Device Drivers in Windows 2003/Vista

The Storport driver is new to Windows Server 2003/XP-64/Vista. Storport implements a new architecture designed for better performance with RAID systems and in Storage Area Network (SAN) environments.
volume set is created and configured, continue with the next step to install the operating system. 3. Insert the Windows setup CD and reboot the system to begin the Windows installation. Note: the computer system BIOS must support booting from CD-ROM. 4. Press F6 as soon as the Windows screen shows “Setup is inspecting your computer’s hardware configuration”. A message stating “Press F6 to specify third-party RAID controller” will display during this time.
9. After the installation is completed, reboot the system to load the new driver/operating system. 10. See Chapter 5 in this manual to customize your RAID volume sets using McRAID Storage Manager.

4.2.2.2 Making Volume Sets Available to Windows System

When you reboot the system, log in as a system administrator. Continue with the following steps to make any additional volume sets or pass-through disks accessible to Windows.

1. Follow the instructions in Chapter 2, the Hardware Installation chapter, to install the controller and connect the disk drives or enclosure. 2. Start the system and then press Tab+F6 to enter the controller McBIOS RAID manager. Use the configuration utility to create the RAID set and volume set. For details, see Chapter 3, McBIOS RAID Manager. Once a volume set is created and configured, continue with installation of the driver. 3.

the diskette from the drive and click Yes to restart the computer to load the new drivers. 12. See Chapter 5 in this manual for information on customizing your RAID volumes using McRAID Storage Manager.

4.2.3.1 Making Volume Sets Available to Windows System

When you reboot the system, log in as a system administrator. The following steps show how to make any new disk arrays or independent disks accessible to Windows 2000/XP/2003/Vista.
1. Ensure that you have closed all applications and are logged in with administrative rights. 2. Open the “Control Panel”, start the “Add/Remove Programs” applet, and uninstall any software for the SAS RAID controller. 3. Go to “Control Panel” and select “System”. Select the “Hardware” tab and then click the “Device Manager” button. In Device Manager, expand the “SCSI and RAID Controllers” section. Right-click on the Areca SAS RAID controller and select “Uninstall”. 4.
ed version driver for RedHat, SuSE and other versions of Linux. Please refer to the “readme.txt” file on the included Areca CD or website to make the driver diskette and install the driver on the system.

4.4 Driver Installation for FreeBSD

This chapter describes how to install the SAS RAID controller driver for FreeBSD. Before installing the SAS RAID driver on FreeBSD, complete the following actions: 1.
4.6.1 Installation Procedures

You must have administrative-level permissions to install the Areca Mac driver & software. You can install the driver & software on your Power Mac G5 or Mac Pro as follows: 1. Insert the Areca Mac Driver & Software CD that came with your Areca SAS RAID controller. 2. Double-click the following file that resides at \packages\MacOS to add the installer to the Finder: a) install_mraid_mac.zip (for Power Mac G5), b) install_mraid_macpro.zip (for Mac Pro). 3.
You can also upgrade only the driver, archttp64 or arc-cli as individual items that reside at \packages\MacOS. Arc-cli performs many tasks at the command line. You can download the arc-cli manual from the Areca website or from the software CD \DOCS directory.

4.6.2 Making Volume Sets Available to Mac OS X

When you create a volume through the McRAID storage manager, Mac OS X recognizes that a new disk is available and displays a message asking what you want to do next.
ARCHTTP PROXY SERVER INSTALLATION

5. ArcHttp Proxy Server Installation

Overview

After hardware installation, the SAS/SATA disk drives connected to the SAS RAID controller must be configured and the volume set units initialized before they are ready to use. The user interface for these tasks can be accessed through the built-in configuration that resides in the controller’s firmware.
5.1 For Windows

You must have administrative level permissions to install SAS RAID software. This procedure assumes that the SAS RAID hardware and Windows are installed and operational in your system. Screen captures in this section are taken from a Windows XP installation. If you are running another version of Windows, your installation screen may look different, but the ArcHttp proxy server installation is essentially the same. 1.
Click on the “Start” button in the Windows task bar, click “Programs”, select “McRAID”, and run “ArcHttp proxy server”. The ArcHttp dialog box appears. 1. If you select “Controller#01(PCI)” and then click the “Start” button, the web browser appears. 2. If you select “Cfg Assistant” and then click the “Start” button, the “ArcHttp Configuration” page appears. (Please refer to section 5.6 ArcHttp Configuration.) 5.
usr/local/sbin). Or (1). Download it from www.areca.com.tw or from the email attachment. 2. You must have administrative-level permissions to install the SATA RAID controller ArcHttp proxy server software. This procedure assumes that the SATA RAID hardware and driver are installed and operational in your system. The following is the installation procedure of the SATA RAID controller ArcHttp proxy server software for Linux. (1).
See the next chapter, detailing the McRAID Storage Manager, to customize your RAID volume set. (3). If you need the “Cfg Assistant”, please refer to section 5.6 ArcHttp Configuration. For Mozilla users: because the management interface requires Java support, you may need to upgrade Java to version 1.6 or later. 5.
driver, archttp64 and arc-cli from the software CD <CD>\packages\MacOS directory at the same time.

5.6 ArcHttp Configuration

The ArcHttp proxy server automatically assigns one additional port for setting up its configuration. If you want to change the "archttpsrv.conf" settings of the ArcHttp proxy server configuration (for example General Configuration, Mail Configuration, and SNMP Configuration), start a web browser at http://localhost:[assigned port] to open the Cfg Assistant.
the RAID controller email sending function, click on the “Mail Configuration” link. The "SMTP Server Configurations" menu will show as follows. When you open the mail configuration page, you will see the following settings: • SMTP Server Configuration: SMTP Server IP Address: enter the SMTP server IP address, which is not the McRAID manager IP (e.g., 192.168.0.2). • Mail Address Configurations: Sender Name: enter the sender name that will be shown in the outgoing mail.
• Error Notification: send only urgent events
• Serious Error Notification: send urgent and serious events
• Warning Error Notification: send urgent, serious and warning events
• Event Information Notification: send all events
• Notification For No Event: notify the user if no event occurs within 24 hours.
• SNMP Trap Notification Configurations

Before the client side manager application accepts the SAS RAID controller traps, it is necessary to integrate the MIB into the management application’s database of events and status indicator codes. This process is known as compiling the MIB into the application. This process is highly vendor-specific and should be well-covered in the User’s Guide of your SNMP application.
WEB BROWSER-BASED CONFIGURATION 6. Web Browser-based Configuration Before using the firmware-based browser McRAID storage manager, do the initial setup and installation of this product. If you need to boot up the operating system from a RAID volume set, you must first create a RAID volume by using McBIOS RAID manager. Please refer to section 3.3 Using Quick Volume /Raid Setup Configuration for information on creating this initial volume set.
more and a supported browser. A locally managed system requires all of the following components: • A supported Web browser, which should already be installed on the system. • Install ArcHttp proxy server on the SAS RAID system. (Refer to Chapter 5, Archttp Proxy Server Installation) • Remote and managed systems must have a TCP/IP connection.
• Start-up McRAID Storage Manager from Linux/FreeBSD/Solaris/Mac Local Administration

To configure the internal SAS RAID controller, you need to know its IP address. You can find the IP address assigned by the ArcHttp proxy server installation: Binding IP: [X.X.X.X] and the controller listen port. (1). Launch your McRAID storage manager by entering http://[Computer IP Address]:[Port Number] in the web browser. (2).
to the 10/100 RJ45 LAN port. To configure the RAID controller on a remote machine, you need to know its IP address. The IP address is shown by default in the McBIOS RAID manager “Ethernet Configuration” or “System Information” option. Launch your firmware-embedded TCP/IP & web browser-based McRAID storage manager by entering http://[IP Address] in the web browser. Note: you can find the controller Ethernet port IP address in the McBIOS RAID manager “System Information” option. 6.
6.3 Main Menu

The main menu shows all available functions, accessible by clicking on the appropriate link.

• Quick Function: create a default configuration, which is based on the number of physical disks installed; it can modify the volume set Capacity, Raid Level, and Stripe Size.
• Raid Set Functions: create a customized RAID set.
• Volume Set Functions: create customized volume sets and modify the existing volume set parameters.
Note: In “Quick Create”, your volume set is automatically configured based on the number of disks in your system. Use the “Raid Set Functions” and “Volume Set Functions” if you prefer to customize your volume set or create a RAID 30/50/60 volume set.

6.5 Raid Set Functions

Use the “Raid Set Function” and “Volume Set Function” if you prefer to customize your volume set.
Note: To create a RAID 30/50/60 volume, you need to create multiple RAID sets first (up to 8 RAID sets) with the same number of disk members in each RAID set. The maximum number of disk drives per RAID set is 32 for RAID 0/1/10(1E)/3/5/6 and 128 for RAID 30/50/60.

6.5.2 Delete Raid Set

To delete a RAID set, click on the “Delete Raid Set” link. A “Select The RAID Set To Delete” screen is displayed showing all existing RAID sets in the current controller.
Press Yes to start the expansion of the RAID set. The new additional capacity can be utilized by one or more volume sets. The volume sets associated with this RAID set appear so that you have a chance to modify the RAID level or stripe size. Follow the instructions presented in “Modify Volume Set” to modify the volume sets; operating system specific utilities may be required to expand operating system partitions. Note: 1.
To activate an incomplete RAID set, click on the “Activate Raid Set” link. A “Select The RAID SET To Activate” screen is displayed showing all RAID sets existing on the current controller. Click the RAID set number to activate in the select column. Click on the “Submit” button on the screen to activate the RAID set that had a disk removed (or failed) in the power-off state. The SAS RAID controller will continue to work in degraded mode. 6.5.
6.5.7 Rescue Raid Set

If the system is powered off during a RAID set update/creation period, the configuration could possibly disappear due to this abnormal condition. The “RESCUE” function can recover the missing RAID set information. The RAID controller uses the time as the RAID set signature; the RAID set may have a different time after it is recovered. The “SIGNAT” function can regenerate the signature for the RAID set.
It is organized in a RAID level with one or more physical disks. RAID level refers to the level of data performance and protection of a volume set. A volume set capacity can consume all or a portion of the disk capacity available in a RAID set. Multiple volume sets can exist on a group of disks in a RAID set. Additional volume sets created in a specified RAID set will reside on all the physical disks in the RAID set.
• Volume Name: the default volume name will always appear as “ARC-1680VOL”. You can rename the volume set, provided it does not exceed the 15-character limit.
• Volume Raid Level: set the RAID level for the volume set. Highlight the desired RAID level and press the Enter key. The available RAID levels for the current volume set are displayed. Select a RAID level and press the Enter key to confirm.
• Capacity: the maximum volume size is the default initial setting.
formance, especially if your computer does mostly sequential reads. However, if you are sure that your computer does random reads more often, select a smaller stripe size. Note: RAID level 3 can’t modify the stripe size.
• Cache Mode: the SAS RAID controller supports “Write Through” and “Write Back” cache.
• Tagged Command Queuing: the “Enabled” option is useful for enhancing overall system performance under multi-tasking operating systems.
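To make the stripe-size trade-off concrete, the sketch below maps a byte offset to a stripe unit and member disk. The 4-disk RAID 0 layout and 64 KB stripe are assumed example values, not a statement about any particular array:

```shell
# Assumed example: 4-disk RAID 0, 64 KB stripe size.
STRIPE_KB=64; DISKS=4
OFFSET=$(( 1024 * 1024 ))       # a read at byte offset 1 MiB
SU=$(( STRIPE_KB * 1024 ))      # stripe unit size in bytes
CHUNK=$(( OFFSET / SU ))        # which stripe unit the offset falls in
echo "stripe unit $CHUNK on member disk $(( CHUNK % DISKS ))"
# -> stripe unit 16 on member disk 0
```

A large sequential read walks through consecutive stripe units and so keeps all members busy; a small random read usually touches a single member, which is why a smaller stripe size can help random workloads.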
The create new volume set attributes allow the user to select the Volume Name, RAID Level, Capacity, Greater Two TB Volume Support, Initialization Mode, Stripe Size, Cache Mode, Tagged Command Queuing, and SCSI Channel/SCSI ID/SCSI Lun. Please refer to the section above for a detailed description of each item. Note: RAID levels 30, 50 and 60 can support up to eight RAID sets (four pairs), but cannot support expansion or migration. 6.6.
6.6.4 Modify Volume Set

To modify a volume set from a RAID set: (1). Click on the “Modify Volume Set” link. (2). Click the volume set check box from the list that you wish to modify. Click the “Submit” button. The following screen appears. Use this option to modify the volume set configuration. To modify volume set attributes, move the cursor bar to the volume set attribute menu and click it. The “Enter The Volume Attribute” screen appears.
• You can expand volume capacity, but you can’t reduce the volume capacity size.
• After volume expansion, the volume capacity can't be decreased.

For greater-than-2TB expansion:
• If your system is installed on the volume, don't expand the volume capacity beyond 2TB; current operating systems can’t boot from a device of greater than 2TB capacity.
• Expansion over 2TB uses LBA64 mode. Please make sure your OS supports LBA64 before expanding. 6.6.4.
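The 2TB ceiling mentioned above follows directly from 32-bit block addressing with 512-byte sectors, which a quick arithmetic check confirms:

```shell
# 2^32 addressable sectors x 512 bytes per sector = the classic 2 TiB limit.
echo $(( 2**32 * 512 ))    # -> 2199023255552 bytes (2 TiB)
```

Addressing anything beyond that requires a wider LBA field, which is what the LBA64 mode provides.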
6.6.5 Check Volume Set

To check a volume set from a RAID set: (1). Click on the “Check Volume Set” link. (2). Click on the volume set from the list that you wish to check. Tick on “Confirm The Operation” and click on the “Submit” button. Use this option to verify the correctness of the redundant data in a volume set.
6.7 Physical Drive

Choose this option to select a physical disk from the main menu and then perform the operations listed below.

6.7.1 Create Pass-Through Disk

To create a pass-through disk, move the mouse cursor to the main menu and click on the “Create Pass-Through” link. The relative setting function screen appears. A pass-through disk is not controlled by the SAS RAID controller firmware, so it cannot be a part of a volume set.
bute” screen appears; modify the drive attribute values as desired. After you complete the selection, mark the check box for “Confirm The Operation” and click on the “Submit” button to complete the selection action.

6.7.3 Delete Pass-Through Disk

To delete a pass-through drive from the pass-through drive pool, move the mouse cursor to the main menu and click the “Delete Pass Through” link.
6.7.5 Identify Drive

To prevent removing the wrong drive, the fault LED indicator of the selected disk will light when “Identify Selected Device” is selected, so the disk can be located physically.

6.8 System Controls

6.8.1 System Config

To set the RAID system functions, move the cursor to the main menu and click the “System Controls” link. The “Raid System Function” menu will show all items; select the desired function.
• System Beeper Setting: the “System Beeper Setting” function is used to enable or disable the SAS RAID controller alarm tone generator.
• Background Task Priority: the “Background Task Priority” is a relative indication of how much time the controller devotes to a rebuild operation. The SAS RAID controller allows the user to choose the rebuild priority (UltraLow, Low, Normal and High) to balance volume set access and rebuild tasks appropriately.
• HDD Read Ahead Cache: allow read ahead (default: Enabled). When enabled, the drive’s read-ahead cache algorithm is used, providing maximum performance under most circumstances.
• Volume Data Read Ahead: the volume data read ahead parameter selects the controller firmware algorithm that processes read-ahead data blocks from the disk. The Read Ahead parameter is “Normal” by default. To modify the value, set it from the command line using the Read Ahead option.
PHY will automatically enter sleep mode. In this condition, the firmware will set no linkage on those channels. Since some HDDs have this behavior, the controller firmware will configure the active channels CH5-8 on the external port. This function was added so that the customer can set it manually if the controller’s automatic configuration detects the wrong direction for the CH5-8 internal channels.
Multiples Of 1G: if you have 123 GB drives from different vendors, chances are that the capacity varies slightly. For example, one drive might be 123.5 GB and the other 123.4 GB. Multiples Of 1G truncates the fractional part. This makes the capacity of both drives the same, so that one could replace the other. No Truncation: it does not truncate the capacity.

6.8.2 HDD Power Management

Areca has automated the ability to manage HDD power based on usage patterns.
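The Multiples Of 1G rule is plain integer truncation. A sketch using the two drives from the example above (capacities in MB; treating 1 GB as 1000 MB is an assumption about the firmware's unit, made only for illustration):

```shell
# Two nominal "123 GB" drives from different vendors, in MB.
DRIVE_A=123500      # 123.5 GB
DRIVE_B=123400      # 123.4 GB
G=1000              # assumed: 1 GB counted as 1000 MB
# Integer division drops the fractional gigabyte, so both truncate alike.
echo $(( DRIVE_A / G * G ))   # -> 123000
echo $(( DRIVE_B / G * G ))   # -> 123000
```

After truncation both drives present the same usable capacity, which is exactly what makes one a valid replacement for the other.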
designed to meet short-term startup power demand as well as steady-state conditions. The Areca RAID controller includes an option for the customer to select the sequential stagger power-up value for disk drives. The value can be selected from 0.4 s to 6 s per step, powering up one drive at a time.

6.8.2.2 Time to Hdd Low Power Idle (Minutes)

This option delivers lower power consumption by automatically unloading the recording heads after the configured idle time. 6.8.2.
IP address must be entered manually at each computer system. DHCP lets a network administrator supervise and distribute IP addresses from a central point. The purpose of DHCP is to provide the automatic (dynamic) allocation of IP client configurations for a specific time period (called a lease period) and to eliminate the work necessary to administer a large IP network.
6.8.5 SNMP Configuration

To configure the RAID controller SNMP function, click on the “System Controls” link. The “System Controls” menu will show the available items. Select the “SNMP Configuration” item. This function can only be set via the web-based configuration. The firmware SNMP agent manager monitors all system events, and the SNMP function becomes functional with no agent software required. • SNMP Trap Configurations: enter the SNMP Trap IP Address.
• SNMP Trap Notification Configurations: please refer to Appendix E, Event Notification Configurations.

6.8.6 NTP Configuration

The Network Time Protocol (NTP) is used to synchronize the time of a computer client or server to another server or reference time source, such as a radio or satellite receiver or modem.
Note: The NTP feature works through the onboard Ethernet port, so you must make sure that the onboard Ethernet port is connected.

6.8.7 View Events/Mute Beeper

To view the SAS RAID controller’s event information, click on the “View Event/Mute Beeper” link. The SAS RAID controller “System events Information” screen appears. The mute beeper function is automatically enabled by clicking on “View Events/Mute Beeper”.
6.8.9 Clear Events Buffer

Use this feature to clear the entire events buffer information.

6.8.10 Modify Password

To set or change the SAS RAID controller password, select “System Controls” from the menu and click on the “Modify Password” link. The “Modify System Password” screen appears. The manufacturer's default password is set to 0000. The password option allows the user to set or clear the SAS RAID controller’s password protection feature.
6.8.11 Update Firmware

Please refer to Appendix A, Upgrading Flash ROM Update Process.

6.9 Information

6.9.1 Raid Set Hierarchy

Use this feature to view the SAS RAID controller's current RAID set, current volume set and physical disk information. The volume state and capacity are also shown in this screen.
6.9.2 System Information

To view the SAS RAID controller’s system information, move the mouse cursor to the main menu and click on the “System Information” link. The SAS RAID controller “RAID Subsystem Information” screen appears. Use this feature to view the SAS RAID controller’s system information.
APPENDIX

Appendix A

Upgrading Flash ROM Update Process

Since the PCIe SAS RAID controller features flash ROM firmware, it is not necessary to change the hardware flash chip in order to upgrade the RAID firmware. The user can simply re-program the old firmware through the in-band PCIe bus or the out-of-band Ethernet port using the McRAID storage manager, or with the nflash DOS utility. New releases of the firmware are available in the form of a DOS file on the shipped CD or the Areca website.
Upgrading Firmware Through McRAID Storage Manager

Get the new firmware version for your SAS RAID controller; for example, download the .bin file from your OEM’s web site onto the C: drive. 1. To upgrade the SAS RAID controller firmware, move the mouse cursor to the “Upgrade Firmware” link. The “Upgrade The Raid System Firmware or Boot Rom” screen appears. 2. Click “Browse”. Look in the location to which the firmware upgrade software was downloaded. Select the file name and click “Open”. 3.
Controller with onboard LAN port, you can directly plug an Ethernet cable to the controller LAN port, then enter the McBIOS RAID manager to configure the network setting. After the network setting is configured and saved, you can find the current IP address in the McBIOS RAID manager "System Information" page. From a remote PC, you can directly open a web browser and enter the IP address. Then enter the user name and password to log in and start your management.
A:\>nflash
Raid Controller Flash Utility V1.11 2007-11-8
Command Usage:
  NFLASH FileName
  NFLASH FileName /cn --> n=0,1,2,3 write binary to controller#n
  FileName May Be ARC1680FIRM.BIN or ARC1680*
  For ARC1680* Will Expand To ARC1680BOOT/FIRM/BIOS.BIN

A:\>nflash arc168~1.bin
Raid Controller Flash Utility V1.11 2007-11-8
MODEL : ARC-1680
MEM FE620000 FE7FF000
File ARC168~1.
Appendix B

Battery Backup Module (ARC-6120-BATxxx)

The SAS RAID controller operates using cache memory. The Battery Backup Module is an add-on module that provides power to the SAS RAID controller cache memory in the event of a power failure. The Battery Backup Module monitors the write back cache on the SAS RAID controller, and provides power to the cache memory if it contains data not yet written to the hard drives when power failure occurs.
4. A low-profile bracket is also provided.

Battery Backup Capacity

Battery backup capacity is defined as the maximum duration of a power failure for which data in the cache memory can be maintained by the battery. The BBM's backup capacity varies with the memory chips installed on the SAS RAID controller.

Capacity: 512MB; Memory Type: DDR2 Low Power (14.6mA); Battery Backup Duration: 72Hr - 76Hr

Operation 1. Battery conditioning is automatic.
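A back-of-the-envelope check of the table above: the cache draw multiplied by the backup duration gives the charge the battery must hold. The calculation below uses integer math in tenths of a milliamp and is only a rough consistency check, not an Areca rating:

```shell
DRAW_TENTH_MA=146   # 14.6 mA cache draw, from the table above
HOURS=72            # lower bound of the quoted backup duration
# charge (mAh) = current (mA) x time (h)
echo $(( DRAW_TENTH_MA * HOURS / 10 ))   # -> 1051 (about 1051 mAh needed)
```

So a battery pack of roughly 1.1 Ah or more is consistent with the quoted 72-hour figure at that draw.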
Note: Do not remove the BBM while the system is running. 2. Disconnect the BBM cable from J2 on the SAS RAID controller. 3. Disconnect the battery pack cable from JP2 on the BBM. 4. Install a new battery pack and connect the new battery pack to JP2. 5. Connect the BBM to J2 on the SAS controller. 6. Disable the write-back function from the McBIOS RAID manager or McRAID storage manager.

Battery Functionality Test Procedure: 1. Write an amount of data to a controller volume, about 5GB or bigger. 2.
Appendix C

SNMP Operation & Installation

Overview

The McRAID storage manager includes a firmware-embedded Simple Network Management Protocol (SNMP) agent and SNMP Extension Agent for the SAS RAID controller. An SNMP-based management application (also known as an SNMP manager) can monitor the disk array. An example of an SNMP management application is Hewlett-Packard’s OpenView.
MIB Compilation and Definition File Creation
Before the manager application accesses the RAID controller, it is necessary to integrate the MIB into the management application's database of events and status indicator codes. This process is known as compiling the MIB into the application. This process is highly vendor-specific and should be well covered in the User's Guide of your SNMP application. Ensure the compilation process successfully integrates the contents of the ARECARAID.
Starting the SNMP Function Setting
• Community Name
The community name acts as a password to screen access to the SNMP agent of a particular network device. Type in the community name of the SNMP agent. Before access is granted to a requesting station, that station must incorporate a valid community name into its request; otherwise, the SNMP agent will deny access to the system. Most network devices use "public" as the default community name. This value is case-sensitive.
2. Run the setup.exe file that resides at \packages\windows\http\setup.exe on the CD. (If the SNMP service is not installed, please install the SNMP service first.)
3. Click on the "setup.exe" file; the welcome screen then appears.
4. Click the "Next" button, and the "Ready to Install the Program" screen will appear. Follow the on-screen prompts to complete the Areca SNMP extension agent installation.
5. A progress bar appears that measures the progress of the Areca SNMP extension agent setup. When this screen completes, you have finished the Areca SNMP extension agent setup.
6. After a successful installation, the "Setup Complete" dialog box of the installation program is displayed. Click the "Finish" button to complete the installation.

Starting SNMP Trap Notification Configurations
To start the "SNMP Trap Notification Configurations", there are two methods.
SNMP Community Configurations
Please refer to the community name in this appendix.

SNMP Trap Notification Configurations
The "Community Name" should be the same as the firmware-embedded SNMP community. The "SNMP Trap Notification Configurations" include level 1: Serious, level 2: Error, level 3: Warning and level 4: Information.
ware and Linux are installed and operational in your system. For the SNMP extension agent installation procedure for Linux, please refer to \packages\Linux\SNMP\Readme or download it from areca.com.tw.

SNMP Extension Agent Installation for FreeBSD
You must have administrative-level permission to install the SAS RAID software. This procedure assumes that the SAS RAID hardware and FreeBSD are installed and operational in your system.
Appendix D
Event Notification Configurations
The controller classifies disk array events into four levels depending on their severity. These include level 1: Urgent, level 2: Serious, level 3: Warning and level 4: Information.
Event | Level | Meaning
PassThrough Disk Created | Inform | Pass Through Disk created
PassThrough Disk Modified | Inform | Pass Through Disk modified
PassThrough Disk Deleted | Inform | Pass Through Disk deleted

B.
C. RAID Set Event
Event | Level | Meaning | Action
Create RaidSet | Warning | New RAID set created |
Delete RaidSet | Warning | Raidset deleted |
Expand RaidSet | Warning | Raidset expanded |
Rebuild RaidSet | Warning | Raidset rebuilding |
RaidSet Degraded | Urgent | Raidset degraded | Replace HDD

D.
Event | Level | Meaning
Telnet Log In | Serious | A Telnet login detected
VT100 Log In | Serious | A VT100 login detected
API Log In | Serious | An API login detected
Lost Rebuilding/Migration LBA | Urgent | Some rebuilding/migration RAID set member disks were missing before power on. Reinsert the missing member disk; the controller will continue the incomplete rebuilding/migration.
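The four-level classification above can be sketched as a simple notification filter. This is an illustrative example only; the names and numeric ordering (lower number = more severe) follow the text of this appendix, not any Areca API.

```python
# Hypothetical sketch of the controller's four event severity levels.
EVENT_LEVELS = {1: "Urgent", 2: "Serious", 3: "Warning", 4: "Information"}

def should_notify(event_level: int, threshold: int) -> bool:
    """Notify when the event is at least as severe as the threshold
    (lower number = more severe)."""
    return event_level <= threshold

# With the threshold set to "Warning", informational events are filtered:
print(should_notify(1, 3))  # True  (Urgent passes)
print(should_notify(4, 3))  # False (Information is filtered out)
```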
Appendix E
RAID Concept

RAID Set
A RAID set is a group of disks connected to a SAS RAID controller. A RAID set contains one or more volume sets. The RAID set itself does not define the RAID level (0, 1, 10, 3, 5, 6, 30, 50, 60, etc.); the RAID level is defined within each volume set. Therefore, volume sets are contained within RAID sets and the RAID level is defined within the volume set.
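The RAID set/volume set relationship described above can be modeled in a few lines. This is a minimal illustrative sketch; the class and field names are ours, not the controller's.

```python
# Hypothetical model: one RAID set groups physical disks, and each
# volume set inside it carries its own RAID level.
from dataclasses import dataclass, field

@dataclass
class VolumeSet:
    name: str
    raid_level: int        # 0, 1, 10, 3, 5, 6, 30, 50, 60, ...
    capacity_gb: int

@dataclass
class RaidSet:
    disks: list
    volumes: list = field(default_factory=list)

rs = RaidSet(disks=["slot0", "slot1", "slot2", "slot3"])
rs.volumes.append(VolumeSet("vol0", raid_level=1, capacity_gb=100))
rs.volumes.append(VolumeSet("vol1", raid_level=5, capacity_gb=300))
# Two volume sets with different RAID levels coexist in one RAID set.
```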
In the illustration, volume 1 can be assigned RAID level 5 operation while volume 0 might be assigned RAID level 1E operation. Alternatively, the free space can be used to create volume 2, which could then be set to use RAID level 5.

Ease of Use Features
• Foreground Availability/Background Initialization
RAID 0 and RAID 1 volume sets can be used immediately after creation because they do not create parity data.
on the existing volume sets (residing on the newly expanded RAID set) is redistributed evenly across all the disks. A contiguous block of unused capacity is made available on the RAID set. The unused capacity can be used to create additional volume sets. A disk, to be added to a RAID set, must be in normal mode (not failed), free (not spare, in a RAID set, or passed through to host) and must have at least the same capacity as the smallest disk capacity already in the RAID set.
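The eligibility rule stated above (normal mode, free, and no smaller than the smallest existing member) can be sketched as a predicate. The dict keys and values here are hypothetical, chosen only to mirror the wording of the rule.

```python
# Sketch of the disk-eligibility check for RAID set expansion.
def can_add_disk(disk, members):
    smallest = min(m["capacity_gb"] for m in members)
    return (disk["state"] == "normal"          # not failed
            and disk["role"] == "free"         # not spare/member/pass-through
            and disk["capacity_gb"] >= smallest)

members = [{"state": "normal", "role": "member", "capacity_gb": 500},
           {"state": "normal", "role": "member", "capacity_gb": 750}]
print(can_add_disk({"state": "normal", "role": "free", "capacity_gb": 500}, members))  # True
print(can_add_disk({"state": "normal", "role": "free", "capacity_gb": 250}, members))  # False: too small
```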
• Online RAID Level and Stripe Size Migration
For users who wish to later upgrade their RAID capabilities, Areca's online RAID level/stripe size migration allows a simplified upgrade to any supported RAID level without having to reinstall the operating system. The SAS RAID controllers can migrate both the RAID level and stripe size of an existing volume set, while the server is online and the volume set is in use.
tion is completed, the volume set transitions to degraded mode. If a global hot spare is present, then it further transitions to the rebuilding state.

• Online Volume Expansion
Performing a volume expansion on the controller is the process of growing only the size of the latest volume. A more flexible option is for the array to concatenate an additional drive into the RAID set and then expand the volumes on the fly.
automatically take its place, and the data previously located on the failed drive is reconstructed on the Global Hot Spare. For this feature to work properly, the global hot spare must have at least the same capacity as the drive it replaces. Global Hot Spares only work with RAID level 1, 10(1E), 3, 5, 6, 30, 50, or 60 volume sets. You can configure up to three global hot spares with the SAS RAID controller. The "Create Hot Spare" option gives you the ability to define a global hot spare disk drive.
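The two conditions above (a redundant RAID level, and a spare at least as large as the failed drive) can be sketched as a selection routine. Function and field names are ours; this is not the controller's actual algorithm.

```python
# Hedged sketch of global hot spare selection.
REDUNDANT_LEVELS = {1, 10, 3, 5, 6, 30, 50, 60}   # levels listed above

def pick_hot_spare(failed_capacity_gb, volume_raid_level, spares):
    if volume_raid_level not in REDUNDANT_LEVELS:
        return None                  # e.g. RAID 0 cannot be rebuilt
    for spare in spares:
        if spare["capacity_gb"] >= failed_capacity_gb:
            return spare             # first spare large enough
    return None

spares = [{"name": "spare0", "capacity_gb": 250},
          {"name": "spare1", "capacity_gb": 500}]
print(pick_hot_spare(400, 5, spares))   # spare1: big enough for the failed drive
print(pick_hot_spare(400, 0, spares))   # None: RAID 0 has no redundancy
```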
condition, the Auto Declare Hot-Spare status will disappear if the RAID subsystem has since been powered off and on. The Hot-Swap function can be used to rebuild disk drives in arrays with data redundancy such as RAID level 1, 10(1E), 3, 5, 6, 30, 50 and 60.

• Auto Rebuilding
If a hot spare is available, the rebuild starts automatically when a drive fails. The SAS RAID controllers automatically and transparently rebuild failed drives in the background at user-definable rebuild rates.
The SAS RAID controller allows the user to choose the task priority (Ultra Low (5%), Low (20%), Medium (50%), High (80%)) to balance volume set access and background tasks appropriately. For high array performance, specify an Ultra Low value. As with volume initialization, after a volume rebuilds, it does not require a system reboot.
defective. If it is found to have a defect, data will be automatically relocated, and the defective location is mapped out to prevent future write attempts. In the event of an unrecoverable read error, the error will be reported to the host and the location will be flagged as being potentially defective. A subsequent write to that location will initiate a sector test and relocation should that location prove to have a defect.
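The "map out and relocate" idea above can be sketched as a tiny relocation table. This is illustrative only and does not reflect the controller's actual on-disk format or firmware logic.

```python
# Illustrative sketch: accesses to an LBA flagged as defective are
# redirected to a reserved spare sector, and the mapping is remembered.
class RelocationMap:
    def __init__(self, spare_sectors):
        self.spare = list(spare_sectors)   # pool of reserved sectors
        self.remap = {}                    # defective LBA -> spare sector

    def resolve(self, lba, flagged_defective):
        """Return the physical sector actually used for this LBA."""
        if lba in self.remap:              # already relocated
            return self.remap[lba]
        if lba in flagged_defective:       # relocate on first access
            self.remap[lba] = self.spare.pop(0)
            return self.remap[lba]
        return lba                         # healthy sector, no remap

m = RelocationMap(spare_sectors=[90000, 90001])
print(m.resolve(42, {42}))   # 90000: first access relocates LBA 42
print(m.resolve(42, {42}))   # 90000: the mapping is stable
print(m.resolve(7, {42}))    # 7: a healthy LBA passes through unchanged
```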
The batteries in the BBM are recharged continuously through a trickle-charging process whenever the system power is on. The batteries protect data in a failed server for up to three or four days, depending on the size of the memory module. Under normal operating conditions, the batteries last for three years before replacement is necessary.

• Recovery ROM
The SAS RAID controller firmware is stored on the flash ROM and is executed by the I/O processor.
Appendix F
Understanding RAID
RAID is an acronym for Redundant Array of Independent Disks. It is an array of multiple independent hard disk drives that provides high performance and fault tolerance. The SAS RAID controller implements several levels of the Berkeley RAID technology. An appropriate RAID level is selected when the volume sets are defined or created. This decision should be based on the desired disk capacity, data availability (fault tolerance or redundancy), and disk performance.
RAID 1
RAID 1 is also known as "disk mirroring"; data written on one disk drive is simultaneously written to another disk drive. Read performance will be enhanced if the array controller can, in parallel, access both members of a mirrored pair. During writes, there will be a minor performance penalty when compared to writing to a single disk. If one drive fails, all data (and software applications) are preserved on the other drive.
RAID 10(1E)
RAID 10(1E) is a combination of RAID 0 and RAID 1, combining striping with disk mirroring. RAID level 10 combines the fast performance of level 0 with the data redundancy of level 1. In this configuration, data is distributed across several disk drives, similar to level 0, which are then duplicated to another set of drives for data protection. RAID 10 has traditionally been implemented using an even number of disks; some hybrids can use an odd number of disks as well.
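The stripe-then-mirror placement described above can be sketched as follows. Names are ours, and real controllers of course operate at the block level in firmware; this only illustrates the layout idea.

```python
# Sketch of RAID 10 data placement: blocks are striped round-robin across
# mirrored pairs, so every block lives on two disks.
def raid10_layout(blocks, mirror_pairs):
    layout = []
    for i, block in enumerate(blocks):
        disk_a, disk_b = mirror_pairs[i % len(mirror_pairs)]
        layout.append((block, disk_a, disk_b))   # written to both disks
    return layout

pairs = [("disk0", "disk1"), ("disk2", "disk3")]
for entry in raid10_layout(["B0", "B1", "B2", "B3"], pairs):
    print(entry)
# B0 and B2 land on the disk0/disk1 pair; B1 and B3 on disk2/disk3.
```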
RAID 5
RAID 5 is sometimes called striping with parity at the byte level. In RAID 5, the parity information is written across all of the drives in the array rather than being concentrated on a dedicated parity disk. If one drive in the system fails, the parity information can be used to reconstruct the data from that drive. All drives in the array system can be used for seek operations at the same time, greatly increasing the performance of the RAID system.
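The reconstruction property described above rests on XOR parity: XOR of all data blocks in a stripe gives the parity block, and XOR of the survivors (including parity) recovers any single missing block. A minimal demonstration:

```python
# XOR parity, the mechanism behind RAID 5 single-drive recovery.
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

d0, d1, d2 = b"\x01\x02", b"\x10\x20", b"\x0f\x0f"
parity = xor_blocks([d0, d1, d2])        # parity block for the stripe
rebuilt = xor_blocks([d0, d2, parity])   # the "drive" holding d1 has failed
print(rebuilt == d1)  # True: the lost block is fully recovered
```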
RAID 6
RAID 6 provides the highest reliability. It is similar to RAID 5, but it performs two different parity computations or the same computation on overlapping subsets of the data. RAID 6 can offer fault tolerance greater than RAID 1 or RAID 5 but only consumes the capacity of 2 disk drives for distributed parity data. RAID 6 is an extension of RAID 5 but uses a second, independent distributed parity scheme.
Important: RAID levels 30, 50 and 60 can support up to eight sub-volumes (RAID sets). If the volume is RAID level 30, 50, or 60, you cannot change the volume to another RAID level. If the volume is RAID level 0, 1, 10(1E), 3, 5, or 6, you cannot change the volume to RAID level 30, 50, or 60.

JBOD (Just a Bunch Of Disks)
A group of hard disks in a RAID box is not set up as any type of RAID configuration. All drives are available to the operating system as individual disks.
Summary of RAID Levels
The RAID subsystem supports RAID levels 0, 1, 10(1E), 3, 5, 6, 30, 50 and 60. The following table provides a summary of RAID levels.

Features and Performance
RAID Level | Description | Min. Disks | Data Availability
0 | Also known as striping. Data distributed across multiple drives in the array. There is no data protection. | 1 | No data protection
1 | Also known as mirroring. All data replicated on N separated disks. N is almost always 2.
6 | As RAID level 5, but with additional independently computed redundant information | 4 | Two-disk failure
30 | RAID 30 is a combination of multiple RAID 3 volume sets with RAID 0 (striping) | 6 | Up to one disk failure in each sub-volume
50 | RAID 50 is a combination of multiple RAID 5 volume sets with RAID 0 (striping) | 6 | Up to one disk failure in each sub-volume
60 | RAID 60 is a combination of multiple RAID 6 volume sets with RAID 0 (striping) | 8 | Up to two disk failures in each sub-volume
HISTORY

Version History
Revision | Page | Description
1.4 | p.17 | Added note for expander CLI manual
1.4 | p.91, p.141 | Added note for HTTP Port Number
1.4 | p.