Acronis Storage 2.
Copyright Statement

Acronis International GmbH, 2002-2016. All rights reserved. “Acronis” and “Acronis Secure Zone” are registered trademarks of Acronis International GmbH. “Acronis Compute with Confidence”, “Acronis Startup Recovery Manager”, “Acronis Active Restore”, “Acronis Instant Restore” and the Acronis logo are trademarks of Acronis International GmbH. Linux is a registered trademark of Linus Torvalds. VMware and VMware Ready are trademarks and/or registered trademarks of VMware, Inc.
CHAPTER 1. Introduction

This chapter provides basic information about this guide and Acronis Storage.

1.1 About This Guide

This guide complements the documentation on managing Acronis Storage via the web-based management panel. Managing Acronis Storage via the management panel is the recommended approach. If you have the panel installed, consider the command-line tools described here secondary and use them with caution.
1.2 About Acronis Storage

Acronis Storage is a solution that lets you quickly and easily transform low-cost commodity storage hardware and network equipment into protected, enterprise-level storage similar to a SAN (Storage Area Network) or NAS (Network Attached Storage). Acronis Storage is optimized for storing large amounts of data and provides replication, high availability, and self-healing features for your data.
CHAPTER 2. Accessing Acronis Storage Clusters via iSCSI

Acronis Storage allows you to export cluster disk space to operating systems and third-party virtualization solutions outside the Acronis Storage cluster itself. Using the dedicated vstorage-iscsi tools, you can export Acronis Storage disk space as LUN block devices over iSCSI in a SAN-like manner. In Acronis Storage, you can create and run multiple iSCSI targets per Acronis Storage cluster node. In turn, each iSCSI target can have multiple LUNs (virtual disks).
In this example, three Acronis Hardware Nodes work in an Acronis Storage cluster. Two Nodes host one iSCSI target each, while the third hosts two iSCSI targets. Each Node has a static or dynamic IP address assigned from the Storage BackNet (created along with the Acronis Storage cluster) and the FrontNet. Each iSCSI target has a static IP address assigned from the FrontNet.
3. Create a directory in the Acronis Storage cluster where you will store iSCSI targets and their configuration. For example, /vstorage/stor1/iscsi.

4. Set the ISCSI_ROOT variable in /etc/vstorage/iscsi/config to the directory from the previous step. For example:

ISCSI_ROOT=/vstorage/stor1/iscsi

You are now ready to create and run iSCSI targets in your Acronis Storage cluster.

2.2 Creating and Running Acronis Storage iSCSI Targets
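Targets are created with the vstorage-iscsi create subcommand (named here by analogy with the list, stop, and delete subcommands shown in this chapter). The invocation below is a minimal sketch, assuming -n takes the target name and -a the FrontNet IP address to listen on; verify the exact options with vstorage-iscsi --help:

# vstorage-iscsi create -n test1 -a 192.168.10.100
IQN: iqn.2014-04.com.vstorage:test1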
To check that the target is up, run the vstorage-iscsi list command with the target's name as the option. For example:

# vstorage-iscsi list -t iqn.2014-04.com.vstorage:test1
Target iqn.2014-04.com.vstorage:test1:
Portals:    192.168.10.100
Status:     running
Registered: yes
Host:       fefacc38a2f140ca
LUN: 1, Size: 102400M, Used: 1M, Online: Yes

For information about the command output, see Listing Acronis Storage iSCSI Targets on page 6.
2.3 Listing Acronis Storage iSCSI Targets

# vstorage-iscsi list -t iqn.2014-04.com.vstorage:test1
Target iqn.2014-04.com.vstorage:test1:
Portals:    192.168.10.100
Status:     running
Registered: yes
Host:       fefacc38a2f140ca
LUN: 1, Size: 102400M, Used: 1M, Online: Yes

The command outputs above show the following data:

Item      Description
Target    Unique alphanumeric name of the iSCSI target.
Portals   Target's IP address(es).
Status    Target's current state.
2.4 Transferring Acronis Storage iSCSI Targets Between Acronis Storage Nodes

You can transfer stopped iSCSI targets between Acronis Storage Nodes. After the transfer, you will be able to start and manage the iSCSI target on the destination Node. On the source Node, you will only be able to delete the transferred target with the --force option (for more details, see Deleting Acronis Storage iSCSI Targets on page 9).
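A plausible sketch of the transfer, assuming register and unregister subcommands that toggle the Registered state shown by vstorage-iscsi list (these subcommand names are an assumption; verify against vstorage-iscsi --help):

On the source Node:
# vstorage-iscsi unregister -t iqn.2014-04.com.vstorage:test1

On the destination Node:
# vstorage-iscsi register -t iqn.2014-04.com.vstorage:test1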
# vstorage-iscsi stop -t iqn.2014-04.com.vstorage:test1 -f

Breaking the iSCSI connection in such a way may result in I/O errors on the iSCSI initiator's side.

2.6 Deleting Acronis Storage iSCSI Targets

You can delete Acronis Storage iSCSI targets with the vstorage-iscsi delete command. When you delete an Acronis Storage iSCSI target, you also delete all the LUNs within it.
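For example, to delete a stopped target (a minimal sketch; the -t option follows the convention of the other vstorage-iscsi subcommands in this chapter):

# vstorage-iscsi delete -t iqn.2014-04.com.vstorage:test1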
In this example, three Acronis Hardware Nodes work in an Acronis Storage cluster. Two Nodes host one iSCSI target each, while the third hosts two iSCSI targets. Each Hardware Node is assigned a static or dynamic IP address from FrontNet 1 and another from FrontNet 2. In turn, each iSCSI target is assigned a static IP address from FrontNet 1 and a static IP address from FrontNet 2.
2.8 Managing CHAP Accounts for Acronis Storage iSCSI Targets

Acronis Storage allows you to restrict access to iSCSI targets by means of CHAP authentication. To make use of CHAP authentication, you need to:

1. Create a CHAP account.
2. Create an iSCSI target bound to this CHAP account.

These actions are described in detail in the following subsections.
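For reference, creating a CHAP account might look like the following sketch. It assumes an account-create subcommand that takes the account name with -u, mirroring the -u option used with account-list below; the password is requested interactively:

# vstorage-iscsi account-create -u user1
Enter password:
Verify password: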
Verify password:

The new password will become active after target reboot.

2.8.4 Listing CHAP Accounts and Acronis Storage iSCSI Targets Assigned to Them

To list existing CHAP accounts, use the vstorage-iscsi account-list command. For example:

# vstorage-iscsi account-list
user1

To list Acronis Storage iSCSI targets assigned to a specific CHAP account, use the vstorage-iscsi account-list command with the -u option.
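For example, to list targets assigned to the CHAP account user1 (the output line is illustrative):

# vstorage-iscsi account-list -u user1
iqn.2014-04.com.vstorage:test1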
2.9.2 Listing LUN Snapshots

To list snapshots for the specified LUN, use the vstorage-iscsi snapshot-list command. For example, for LUN 1 on target iqn.2014-04.com.vstorage:test1:

# vstorage-iscsi snapshot-list -t iqn.2014-04.com.vstorage:stor4 -l 1
CREATED             C UUID                          PARENT_UUID
2014-04-11 13:16:51   a1f54314-bc06-40c6-a587-{...} 00000000-0000-0000-{...}
2014-04-11 13:16:57 * 9c98b442-7482-4fd0-9c45-{...} a1f54314-bc06-40c6-{...}
2.9.5 Deleting LUN Snapshots

To delete the specified LUN snapshot, use the vstorage-iscsi snapshot-delete command. For example:

# vstorage-iscsi snapshot-delete -u a1f54314-bc06-40c6-a587-965feb9d85bb

If the snapshot has no children, it will be deleted. If the snapshot has a single child, it will be merged into that child.

Note:
1. You can only delete offline snapshots.
2. Deleting a snapshot that has multiple children is currently not supported.
CHAPTER 3. Accessing Acronis Storage Clusters via S3 Protocol

Acronis Storage can export data via an Amazon S3-compatible API, enabling service providers to:

• run S3-based services in their own Acronis Storage infrastructures,
• sell S3-based storage-as-a-service to customers along with Acronis Storage.

The support for S3 expands the functionality of Acronis Storage and requires a working Acronis Storage cluster.
are to return the new state after the write has been completed. Readers can observe the old state for an undefined period of time until the write is propagated to all the replicas (copies). This is very important for storage availability, as geographically distant data centers may not be able to perform a data update synchronously (e.g.
• Object server (OS) stores actual object data (contents) received from the S3 gateway. It stores its own data in regular Acronis Storage with built-in high availability.

• Name server (NS) stores object metadata received from the S3 gateway. Metadata includes the object name, size, ACL (access control list), location, owner, and so on. The name server also stores its own data in regular Acronis Storage with built-in high availability.
storage services run on hosts, no virtual environments (or respective licenses) are required for object storage.

3.1.2 Object Storage Overview

In terms of S3 object storage, a file is an object. Object servers store each object loaded via the S3 API as a pair of entities:

• Object names and associated object metadata, stored on an NS.
• Object data (contents), stored on an OS.
3.1.3.1 Gateway

The gateway performs the following functions:

• Receives S3 requests from the web server (via nginx and FastCGI).
• Parses S3 packets and validates S3 requests (checks the fields of a request and the XML documents in its body).
• Authenticates S3 users.
• Validates access permissions to buckets and objects using ACLs.
• Collects statistics on the number of various requests as well as the amount of data received and transmitted.
The data is stored in a B-tree that maps each object's name to that object's metadata structure. S3 object metadata consists of three parts: information on the object, user-defined headers (optional), and the object's ACL. Files are stored in the corresponding directory on the underlying shared storage (i.e., Acronis Storage). Each name server is responsible for a subset of the S3 cluster's object namespace.
of small objects, as even a 1-byte object will occupy a 4KB block. In addition, about 4KB of metadata per object will be stored on the NS. However, this approach achieves maximum performance, eliminates free space fragmentation, and offers guaranteed object insert performance. Moreover, the larger the object, the less noticeable the overhead. Finally, when an object is deleted, its pool block is marked free and can be used to store new objects.
name—an object server to store the object's data. Name and object server lists are stored in a vstorage cluster directory intended for object storage data and available to anyone with cluster access. This directory includes subdirectories that correspond to services hosted on name and object servers. The names of the subdirectories match hexadecimal representations of the services' IDs.
1. Requesting user authentication data. It is stored on a name server in a specific format (see Service Buckets). To receive the data (identifier, e-mail, access keys), a request with a lookup operation code is sent to the appropriate name server.

2. Authenticating the user.

3. Requesting the bucket's and object's metadata. To receive it, another request with a lookup operation code is sent to the name server that stores the names of objects and buckets.
To fulfill the request, the object server transmits fixed-size messages with the object's data to the gateway. The last message contains an end-of-data flag.

3.1.5.4 Delete Operation

To delete an object (and its name) from the storage, the gateway determines a name server's identifier based on the directory part of the name and sends a request with a delete operation code to that server.
3.2 Deploying Object Storage

To set up object storage services, do the following:

1. Plan the S3 network. Like an Acronis Storage cluster, an object storage cluster needs two networks:

• An internal network in which the NS, OS, and GW services will interact. These services will generate traffic similar in amount to the total (incoming and outgoing) S3 user traffic. If this is not going to be much, it is reasonable to use the same internal network for both object storage and Acronis Storage.
cluster in the same way as virtual environments, iSCSI, and other services do. The OS and NS services therefore depend on vstorage-mount (the client) and can only work when the cluster is mounted. Unlike them, the gateway is a stateless service that has no data. It is thus independent of vstorage-mount and can, in theory, run even on nodes where the Acronis Storage cluster is not mounted.
# systemctl start ostor-cfgd.service
# systemctl enable ostor-cfgd.service

6. Initialize the new object storage on the first node. The ostor_dir directory will be created in the root of your cluster.

# ostor-ctl init-storage -n <IP_address> -s <cluster_name>

You will need to provide the IP address and the object storage password specified on step 3.

7. Add to DNS the public IP addresses of nodes that will run GW services.
command on every such node:

# ostor-ctl add-host -H <IP_address> -r /<config_directory>

This command will automatically detect and use the node's hostname and have the object storage agent service listen on an internal IP address. You will need to provide the object storage password set on step 3.

9. Create S3 gateway instances on chosen nodes with Internet access and external IP addresses.
OS 1000000000000003 ACTIVE 452 4f0038db65274507 10.30.29.124:52831
NS 0800000000000002 ACTIVE 647 fcbf5602197245da 10.30.27.69:56463
NS 0800000000000001 ACTIVE 452 4f0038db65274507 10.30.29.124:53044
NS 0800000000000000 ACTIVE 647 fcbf5602197245da 10.30.27.69:37876

12. Install one nginx web server for each S3 endpoint you need. On the nodes where you install nginx, replace the contents of its configuration file /etc/nginx/conf.d/nginx.conf.
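The exact file contents are environment-specific. Below is a minimal sketch of what such a server block could look like, assuming the local GW instance accepts FastCGI connections on 127.0.0.1:9000; the address, port, and server name are example values, not defaults taken from the product:

server {
    listen 80;
    # External DNS name of this S3 endpoint
    server_name s3.example.com;
    location / {
        # Forward S3 requests to the local gateway over FastCGI
        fastcgi_pass 127.0.0.1:9000;
        include fastcgi_params;
    }
}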
# ostor-s3-admin create-user -e user@email.com
Created user: email=user@email.com, user id=81d406fa613ad6c1
Key pair[0]: access key id=81d406fa613ad6c1S8HL, secret access key=ya8iq3yrEYEhpErCkSmui6ifBghDDLdN2vso3sJn

The access key ID and secret access key pair, along with the S3 endpoint, are required to connect to object storage from a client application.
# ostor-s3-admin create-user -e user@email.com
UserEmail:user@email.com
UserId:a49e12a226bd760f
KeyPair[0]:S3AccessKeyId:a49e12a226bd760fGHQ7
KeyPair[0]:S3SecretAccessKey:HSDu2DA00JNGjnRcAhLKfhrvlymzOVdLPsCK2dcq
Flags:none

The S3 user ID is a 16-digit hexadecimal string. The generated access key pair is used to sign requests to the S3 object storage according to the Amazon S3 Signature Version 2 authentication scheme.
3.3.3 Querying S3 User Information

To display information about the specified user, use the ostor-s3-admin query-user-info command. You need to specify either the user email (-e) or the S3 ID (-i). For example:

# ostor-s3-admin query-user-info -e user@email.com
Query user: user id=d866d9d114cc3d20, user email=user@email.com
secret access key=83tTsNAuuRyoBBqhxMFqHAC60dhKHtTCCkQe54zu

Note: It is recommended to periodically revoke old access key pairs and generate new ones.

3.3.7 Revoking S3 User Access Key Pairs

You can revoke the specified access key pair of the specified user with the ostor-s3-admin revoke-access-key command. You need to specify the access key in the key pair you want to delete, as well as the user email or S3 ID.
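A possible invocation, assuming the key is passed with a dedicated option alongside the -e email option used by the other ostor-s3-admin commands above (the option letter for the key is an assumption, not confirmed by this guide):

# ostor-s3-admin revoke-access-key -e user@email.com -k 81d406fa613ad6c1S8HL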
3.4 Managing Object Storage Buckets

3.4.1 Listing Bucket Contents

You can list bucket contents with a web browser. To do this, visit the URL that consists of the external DNS name for the S3 endpoint that you specified when creating the S3 cluster and the bucket name. For example, mys3storage.example.com/mybucket.

Note: You can also copy the link to bucket contents by right-clicking it in CyberDuck, and then selecting Copy URL.

3.4.1.1 Managing Buckets from Command Line
3.4.4 Changing Object Storage Bucket Owners

You can pass ownership of a bucket to the specified user with the ostor-s3-admin change-bucket-owner command. For example, to make the user with ID bf0b3b15eb7c9019 the owner of bucket1:

# ostor-s3-admin -c change-bucket-owner -b bucket1 -i bf0b3b15eb7c9019
Changed owner of the bucket bucket1. New owner bf0b3b15eb7c9019
3.5.2 Improving Performance of PUT Operations

Object storage supports uploading objects as large as 5 GB with a single PUT request. Upload performance can be improved, however, by splitting large objects into pieces and uploading them concurrently with the multipart upload API. This approach divides the load between multiple OS services. It is recommended to use multipart uploads for objects larger than 5 MB.
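As an illustration, a standard S3 client pointed at the Acronis Storage S3 endpoint can use multipart upload without extra coding. The sketch below assumes the AWS CLI is installed and configured with the access key pair generated earlier; the endpoint URL, bucket, and file names are example values. The CLI switches to multipart upload automatically for files above its multipart threshold:

# aws --endpoint-url http://mys3storage.example.com s3 cp ./large-object.bin s3://mybucket/large-object.bin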
3.6 Appendices

3.6.1 Appendix A: Supported Amazon S3 REST Operations

• PUT Bucket versioning

Object operations:

• DELETE Object
• DELETE Multiple Objects
• GET Object
• GET Object ACL
• HEAD Object
• POST Object
• PUT Object
• PUT Object - Copy
• PUT Object acl
• Initiate Multipart Upload
• Upload Part
• Complete Multipart Upload
• Abort Multipart Upload
• List Parts

Note: For a complete list of Amazon S3 REST operations, see the Amazon S3 REST API documentation.

3.6.2 Appendix B: Supported Amazon Request Headers
• x-amz-grant-read-acp
• x-amz-grant-read
• x-amz-grant-write
• x-amz-grant-write-acp
• x-amz-meta-**
• x-amz-version-id
• x-amz-copy-source
• x-amz-metadata-directive
• x-amz-copy-source-version-id

3.6.3 Appendix C: Supported Authentication Schemes

The following authentication schemes are supported by the Acronis Storage implementation of the Amazon S3 protocol:

• Signature Version 2.
• Signature Version 4.
CHAPTER 4. Monitoring Acronis Storage Clusters

Monitoring an Acronis Storage cluster is very important because it allows you to check the status and health of all computers in the cluster and react as necessary. This chapter explains how to monitor your Acronis Storage cluster.

4.1 Monitoring General Cluster Parameters

By monitoring general parameters, you can get detailed information about all components of an Acronis Storage cluster, its overall status, and its health.
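To monitor these parameters, use the vstorage top command. For example, for a cluster named stor1 (the cluster name used in the examples throughout this chapter):

# vstorage -c stor1 top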
The command above shows detailed information about the stor1 cluster. The general parameters (highlighted in red) are explained in the table below.

Parameter   Description

Cluster     Overall status of the cluster:
            • healthy. All chunk servers in the cluster are active.
            • unknown. There is not enough information about the cluster state (e.g., because the master MDS server was elected a while ago).
            • degraded. Some of the chunk servers in the cluster are inactive.
Space       Amount of disk space in the cluster:
            • free. Free physical disk space in the cluster.
            • allocatable. Amount of logical disk space available to clients. Allocatable disk space is calculated on the basis of the current replication parameters and free disk space on chunk servers. It may also be limited by license.
4.2 Monitoring Metadata Servers

MDS servers are a critical component of any Acronis Storage cluster, and monitoring their health and state is an essential task. To monitor MDS servers, use the vstorage -c <cluster_name> top command, which shows detailed information about the cluster (stor1 in the examples in this chapter). The monitoring parameters for MDS servers are explained in the table below.
Parameter   Description
UPTIME      Time elapsed since the last MDS server start.
HOST        MDS server hostname or IP address.

4.3 Monitoring Chunk Servers

By monitoring chunk servers, you can keep track of the disk space available in an Acronis Storage cluster. To monitor chunk servers, use the vstorage -c <cluster_name> top command, which shows detailed information about the cluster (stor1 in these examples). The monitoring parameters for chunk servers are explained in the table below.
Parameter   Description

STATUS      Chunk server status:
            • active. The chunk server is up and running.
            • inactive. The chunk server is temporarily unavailable. A chunk server is marked as inactive during its first 5 minutes of inactivity.
            • offline. The chunk server has been inactive for more than 5 minutes. After the chunk server goes offline, the cluster starts replicating data to restore the chunks that were stored on the affected chunk server.
            • dropped.
# vstorage -c stor1 top
connected to MDS#1
Cluster 'stor1': healthy
Space: [OK] allocatable 180GB of 200GB, free 1.6TB of 1.7TB
...

In this command output:

• 1.7TB is the total disk space in the stor1 cluster. The total disk space is calculated on the basis of used and free disk space on all partitions in the cluster.
4.3.1.1 Understanding Allocatable Disk Space

When monitoring disk space information in the cluster, you also need to pay attention to the space reported by the vstorage top utility as allocatable. Allocatable space is the amount of disk space that is free and can be used for storing user data. Once this space runs out, no data can be written to the cluster.
data chunk. So once the disk space on the smallest chunk server (200 GB) runs out, no more chunks can be created in the cluster until a new chunk server is added or the replication factor is decreased.
Note: The FS field shows the size of all user data in the cluster without consideration for replicas.

4.3.2 Exploring Chunk States

The table below lists all possible states a chunk can have.

Status        Description
healthy       Percentage of chunks that have enough active replicas. The normal state of chunks.
replicating   Percentage of chunks which are being replicated. Write operations on such chunks are frozen until replication ends.
degraded      Percentage of chunks with the number of active replicas lower than normal but equal to or higher than the set minimum. Such chunks can be read from and written to. However, in the latter case a degraded chunk becomes urgent.
urgent        Percentage of chunks which are degraded and have non-identical replicas. Replicas of a degraded chunk may become non-identical if some of them are not accessible during a write operation.
4.4 Monitoring Clients

To monitor clients, use the vstorage -c <cluster_name> top command, which shows detailed information about the cluster (stor1 in these examples). The monitoring parameters for clients (highlighted in red) are explained in the table below:

Parameter   Description
CLID        Client identifier (ID).
LEASES      Average number of files opened for reading/writing by the client and not yet closed, for the last 20 seconds.
READ        Average rate, in bytes per second, at which the client reads data, for the last 20 seconds.
4.5 Monitoring Physical Disks

The S.M.A.R.T. status of physical disks is monitored by the smartctl tool installed along with Acronis Storage. The tool is run every 10 minutes as a cron job, also added during Acronis Storage installation. The smartctl tool polls all physical disks attached to Hardware Nodes in the cluster, including caching and journaling SSDs, and reports the results to the MDS server.

Note: For the tool to work, enable the S.M.A.R.T. functionality in the Node's BIOS.
The disks table shows the following parameters:

Parameter   Description
DISK        Disk name assigned by the operating system.
SMART       Disk's S.M.A.R.T. status:
            • OK: The disk is healthy.
            • Warn: The disk is in pre-failure condition. Pre-failure condition means that at least one of these S.M.A.R.T. counters is nonzero:
              • Reallocated Sector Count
              • Reallocated Event Count
              • Current Pending Sector Count
              • Offline Uncorrectable
TEMP        Disk temperature in degrees Celsius.
CAPACITY    Disk capacity.
The command above shows the latest events in the stor1 cluster. The information on events (highlighted in red) is given in the table with the following columns:

Column    Description
TIME      Time when the event happened.
SYS       Component of the cluster where the event happened (e.g., MDS for an MDS server or JRN for a local journal).
SEV       Event severity.
MESSAGE   Event description.
Event: MDS#<N> (<IP>:<port>) lags behind for more than 1000 rounds
Severity: JRN err
Description: Generated by the MDS master server when it detects that MDS#<N> is stale. This message may indicate that some MDS server is very slow and lags behind.

Event: MDS#<N> (<IP>:<port>) didn't accept commits for M sec
Severity: JRN err
Description: Generated by the MDS master server if MDS#<N> did not accept commits for M seconds. MDS#<N> gets marked as stale.
Event: The cluster is degraded with N active, M inactive, K offline CS
Severity: MDS warn
Description: Generated when the cluster status changes to degraded or when a new MDS master server is elected. This message indicates that some chunk servers in the cluster are:
• inactive (do not send any registration messages), or
• offline (are inactive for a period longer than mds.wd.offline_tout = 5min (by default)).
Event: CS#<N> has not registered during the last T sec and is marked as inactive/offline
Severity: MDS warn
Description: Generated when the chunk server CS#<N> has been unavailable for a while. In this case, the chunk server first gets marked as inactive. After 5 minutes, the state is changed to offline, which starts the automatic replication of data to restore the replicas that were stored on the offline chunk server.

4.7 Monitoring the Status of Replication Parameters
CS nodes:    2 of 2 (2 avail, 0 inactive, 0 offline)
License:     PCSS.02444715.0000 is ACTIVE, 6399TB capacity
Replication: 3 norm, 2 limit
Chunks: [OK] 431 (100%) healthy, 0 (0%) blocked, 0 (0%) degraded, 0 (0%) offline,
        0 (0%) overcommitted, 0 (0%) urgent, 0 (0%) replicating, 0 (0%) deleting,
        0 (0%) void
...
CHAPTER 5. Managing Cluster Security

This chapter describes some situations that may affect the security of your cluster.

5.1 Security Considerations

This section describes the security limitations you should keep in mind when deploying an Acronis Storage cluster.

Traffic sniffing

Acronis Storage does not protect you from traffic sniffing. Anyone who has access to your network can capture and analyze the data being sent and received through your network.
5.2 Securing Server Communication in Clusters

An Acronis Storage cluster can contain three types of servers:

• MDS servers
• chunk servers
• clients

During cluster operation, the servers communicate with each other. To secure their communication, you should keep all servers on an isolated private network—the BackNet. The figure below shows an example cluster configuration where all servers are set up on the BackNet.
# vstorage -c Cluster-Name make-mds -I -a MDS-IP-Address -r Journal-Directory -p

The specified address will then be used for MDS interconnection and intercommunication with the other servers in the cluster.

2. You set up a chunk server:

# vstorage -c Cluster-Name make-cs -r CS-Directory

Once it is created, the chunk server connects to the MDS server and binds to the IP address it uses to establish the connection.
2. You add new MDS servers, chunk servers, or clients to the cluster and use the vstorage auth-node command to authenticate them. During authentication, you use the password you set when creating the first MDS server.

3. Acronis Storage compares the provided password with the one stored on the first MDS server, and if the passwords match, successfully authenticates the server. For each physical server, authentication is a one-time process.
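For example, authenticating a new node in a cluster named stor1 might look like this sketch (the cluster name is an example value; the command prompts for the cluster password):

# vstorage -c stor1 auth-node
Please enter password for cluster: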
CHAPTER 6. Maximizing Cluster Performance

This chapter describes recommendations for maximizing the performance of your Acronis Storage clusters.

Note: Also consider updating the hardware nodes in the cluster.

6.1 Carrying Out Performance Benchmarking

When testing the performance of an Acronis Storage cluster and comparing it with non-Acronis Storage setups:

• Compare configurations with similar redundancy levels.
6.2 Checking Data Flushing

Before creating the cluster, it is recommended to check that all storage devices (hard disk drives, solid-state drives, RAIDs, etc.) you plan to include in your cluster can successfully flush data to disk when the server power goes off unexpectedly. Doing so will help you detect possible problems with devices that may lose data stored in their cache in the event of a power failure.
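The check uses the vstorage-hwflush-check tool in a server/client pair. A client invocation consistent with the option descriptions below (the hostname and directory are the example values used there) would be:

# vstorage-hwflush-check -s vstorage1.example.com -d /vstorage/stor1-ssd/test -t 50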
• -s vstorage1.example.com is the hostname of the computer where the vstorage-hwflush-check server is running.

• -d /vstorage/stor1-ssd/test defines the directory to use for testing data flushing. During its execution, the client creates a file in this directory and writes data blocks to it.

• -t 50 sets the number of threads for the client to write data to disk. Each thread has its own file and counter. You can increase the number of threads (max.
To be on the safe side, repeat the procedure several times. Once you have checked your first storage device, continue with all remaining devices you plan to use in the cluster.

6.3 Using 1 GbE and 10 GbE Networks

1 Gbit/s Ethernet networks can deliver 110-120 MB/s, which is close to the performance of a single drive on sequential I/O. Since several drives on a single server can deliver higher throughput than a single 1 Gbit/s Ethernet link, networking may become a bottleneck.
byte jumbo frames). Such settings require switch configuration and often lead to human errors. 10 Gbit/s network adapters, on the other hand, need to be configured to use jumbo frames to achieve full performance.

3. For maximum efficiency, use the balance-xor bonding mode with the layer3+4 hash policy. If you want to use the 802.3ad bonding mode, also configure your switch to use the layer3+4 hash policy.

6.4 Setting Up Network Bonding
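The bonding interface is configured via a RHEL-style ifcfg file. Below is a minimal sketch of /etc/sysconfig/network-scripts/ifcfg-bond0, assuming the balance-xor mode with the layer3+4 hash policy recommended above; all values are examples to adjust for your network. The notes that follow refer to the IPADDR and PREFIX lines of such a file.

DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
# balance-xor with layer3+4 hashing, as recommended in section 6.3
BONDING_OPTS="mode=balance-xor xmit_hash_policy=layer3+4"
BOOTPROTO=none
ONBOOT=yes
# Example address; set the values appropriate for your BackNet
IPADDR=10.0.0.1
PREFIX=24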
Note:

1. Make sure to enter the correct values in the IPADDR and PREFIX lines.

2. The balance-xor mode is recommended because it offers both fault tolerance and better performance. For more details, see the documents listed below.

3. Make sure the configuration file of each Ethernet interface you want to bond (e.g.
6.5 Improving High-Capacity HDD Performance

issues an unaligned write request, the HDD has to align the beginning and end of that request to 4KB boundaries. To do this, the HDD reads the request's head and tail ranges to determine an even number of sectors to modify. For example, on a request to write a 4KB block at a 2KB offset, the HDD will read the 0-2KB and 6-8KB ranges in order to modify the entire 0-8KB data range. The typical reasons for poor performance with 4KB-sector HDDs are: