Acronis Cyber Infrastructure 3.
Copyright Statement

Copyright © Acronis International GmbH, 2002-2019. All rights reserved. "Acronis" and "Acronis Secure Zone" are registered trademarks of Acronis International GmbH. "Acronis Compute with Confidence", "Acronis Startup Recovery Manager", "Acronis Instant Restore", and the Acronis logo are trademarks of Acronis International GmbH. Linux is a registered trademark of Linus Torvalds. VMware and VMware Ready are trademarks and/or registered trademarks of VMware, Inc.
Contents

1. Deployment Overview
2. Planning Infrastructure
3. Installing Using GUI
4. Installing Using PXE
5. Additional Installation Modes
6. Troubleshooting Installation
CHAPTER 1
Deployment Overview

To deploy Acronis Cyber Infrastructure for evaluation purposes or in production, you will need to do the following:
1. Plan the infrastructure.
2. Install and configure Acronis Cyber Infrastructure on the required servers.
3. Create the storage cluster.
4. Create a compute cluster and/or set up data export services.
CHAPTER 2
Planning Infrastructure

To plan the infrastructure, you will need to decide on the hardware configuration of each server, plan the networks, choose a redundancy method and mode, and decide which data will be kept on which storage tier. The information in this chapter is meant to help you complete all of these tasks.

2.1 Storage Architecture Overview

The fundamental component of Acronis Cyber Infrastructure is a storage cluster: a group of physical servers interconnected by a network.
2.1.1 Storage Role

Storage nodes run chunk services, store all data in the form of fixed-size chunks, and provide access to these chunks. All data chunks are replicated, and the replicas are kept on different storage nodes to achieve high availability of data. If one of the storage nodes fails, the remaining healthy storage nodes continue providing the data chunks that were stored on the failed node.
2.2 Compute Architecture Overview

The following diagram shows the major compute components of Acronis Cyber Infrastructure: the admin panel, the identity service, the networking service (which provides VM network connectivity), the compute service (QEMU/KVM), the image service (which stores VM images), and the storage service (which provides storage volumes to VMs), all running on top of the storage cluster.

• The storage service provides virtual disks to virtual machines. This service relies on the base storage cluster for data redundancy.
2.3 Planning Node Hardware Configurations

Acronis Cyber Infrastructure works on top of commodity hardware, so you can create a cluster from regular servers, disks, and network cards. Still, to achieve optimal performance, a number of requirements must be met and a number of recommendations should be followed.

Note: If you are unsure of what hardware to choose, consult your sales representative. You can also use the online hardware calculator.
2.3.3.1 Storage Cluster Composition Recommendations

Designing an efficient storage cluster means finding a compromise between performance and cost that suits your purposes. When planning, keep in mind that a cluster with many nodes and few disks per node offers higher performance, while a cluster with the minimum number of nodes (3) and many disks per node is cheaper. See the following table for more details.
2.3.3.2 General Hardware Recommendations

• At least five nodes are required for a production environment. This ensures that the cluster can survive the failure of two nodes without data loss.
• One of the strongest features of Acronis Cyber Infrastructure is scalability. The bigger the cluster, the better Acronis Cyber Infrastructure performs.
• SSD memory cells can withstand a limited number of rewrites. An SSD drive should be viewed as a consumable that you will need to replace after a certain time. Consumer-grade SSD drives can withstand a very low number of rewrites (so low, in fact, that these numbers are not shown in their technical specifications). SSD drives intended for storage clusters must offer at least 1 DWPD endurance (10 DWPD is recommended).
• Running metadata services on SSDs improves cluster performance. To also minimize CAPEX, the same SSDs can be used for write caching.
• If capacity is the main goal and you need to store infrequently accessed data, choose SATA disks over SAS ones. If performance is the main goal, choose NVMe or SAS disks over SATA ones.
• The more disks per node, the lower the CAPEX.
• Use one 1 Gbit/s link for every two HDDs on the node (rounded up). For one or two HDDs on a node, two bonded network interfaces are still recommended for high network availability. The reason for this recommendation is that 1 Gbit/s Ethernet networks can deliver 110-120 MB/s of throughput, which is close to the sequential I/O performance of a single disk.
• The maintenance mode is not supported. Use SSH to shut down or reboot a node.
• One node can be a part of only one cluster.
• Only one S3 cluster can be created on top of a storage cluster.
• Only predefined redundancy modes are available in the admin panel.
• Thin provisioning is always enabled for all data and cannot be configured otherwise.
evaluate services such as iSCSI, ABGW, etc. However, such a configuration will have two key limitations:
1. Just one MDS will be a single point of failure. If it fails, the entire cluster will stop working.
2. Just one CS will be able to store just one chunk replica. If it fails, the data will be lost.

Important: If you deploy Acronis Cyber Infrastructure on a single node, you must ensure that its storage is persistent and redundant to avoid data loss.
Table 2.3.6.1 (continued from the previous page): with five or more nodes in total, the cluster has 5 MDSs in total and 5+ CSs in total, and all nodes run the required access points.

A production-ready cluster can be created from just five nodes with recommended hardware.
Table 2.3.6.2.1 (continued from the previous page): on both nodes 1-5 (base) and nodes 6+ (extension), disks 3 through N are HDDs with the CS role.

2.3.6.3 HDD + SSD

This configuration is good for creating performance-oriented clusters.
Table 2.3.6.4.1: SSD only configuration

Nodes 1-5 (base):
Disk #   Disk type   Disk roles
1        SSD         System, MDS
2        SSD         CS
3        SSD         CS
…        …           …
N        SSD         CS

Nodes 6+ (extension):
Disk #   Disk type   Disk roles
1        SSD         System
2        SSD         CS
3        SSD         CS
…        …           …
N        SSD         CS

2.3.6.5 HDD + SSD (No Cache), 2 Tiers

In this configuration example, tier 1 is for HDDs without cache and tier 2 is for SSDs. Tier 1 can store cold data (e.g., backups), and tier 2 can store hot data.
Table 2.3.6.6.1: HDD + SSD 3-tier configuration

Nodes 1-5 (base):
Disk #   Disk type   Disk roles      Tier
1        HDD/SSD     System          -
2        SSD         MDS, T2 cache   -
3        HDD         CS              1
4        HDD         CS              2
5        SSD         CS              3
…        …           …               …
N        HDD/SSD     CS              1/2/3

Nodes 6+ (extension):
Disk #   Disk type   Disk roles      Tier
1        HDD/SSD     System          -
2        SSD         T2 cache        -
3        HDD         CS              1
4        HDD         CS              2
5        SSD         CS              3
…        …           …               …
N        HDD/SSD     CS              1/2/3
• The server keeps track of counters incoming from the client and always knows the next counter number. If the server receives a counter smaller than the one it has (e.g., because the power has failed and the storage device has not flushed the cached data to disk), the server reports an error.

To check that a storage device can successfully flush data to disk when power fails, follow the procedure below:
id:<counter on disk> -> <counter on server>

which means one of the following:

• If the counter on the disk is lower than the counter on the server, the storage device has failed to flush the data to the disk. Avoid using this storage device in production, especially for CS or journals, as you risk losing data.
• Each node must have Internet access so updates can be installed.
• The MTU value is set to 1500 by default. See Step 2: Configuring the Network (page 36) for information on setting an optimal MTU value.
• Network time synchronization (NTP) is required for correct statistics. It is enabled by default using the chronyd service. If you want to use ntpdate or ntpd, stop and disable chronyd first.
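For example, on a deployed node you can verify that time synchronization is working, or switch from chronyd to ntpd, with standard commands such as the following (a sketch; it assumes a CentOS 7-based node with the chrony and ntp packages available):

# systemctl status chronyd     # check that the default time service is running
# chronyc tracking             # show the current synchronization status and offset
# systemctl stop chronyd && systemctl disable chronyd    # only if switching to ntpd
# yum install ntp
# systemctl enable --now ntpd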
• The management node must have a network interface for internal network traffic and a network interface for public network traffic (e.g., to the datacenter or a public network) so the admin panel can be accessed via a web browser. Port 8888 must be open on the management node (it is open by default). This allows access to the admin panel from the public network as well as access to the cluster node from the internal network.
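If you need to double-check the port from the shell, the commands below are one way to do it; they assume the node uses firewalld, which is typical for a CentOS 7-based installation:

# firewall-cmd --list-ports                      # verify that 8888/tcp is listed
# firewall-cmd --permanent --add-port=8888/tcp   # open the port if it is missing
# firewall-cmd --reload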
The next figure shows a sample network configuration for a node with an S3 storage access point. S3 access points use ports 443 (HTTPS) and 80 (HTTP) to listen for incoming connections from the public network. In the scenario pictured above, the internal network is used for both the storage and S3 cluster traffic.
access point. Backup Gateway access points use port 44445 for incoming connections from both internal and public networks and ports 443 and 8443 for outgoing connections to the public network.

• A node that runs compute services must have a network interface for the internal network traffic and a network interface for the public network traffic.
2.6 Understanding Data Redundancy

Acronis Cyber Infrastructure protects every piece of data by making it redundant. This means that copies of each piece of data are stored across different storage nodes to ensure that the data is available even if some of the storage nodes are inaccessible. Acronis Cyber Infrastructure automatically maintains the required number of copies within the cluster and ensures that all the copies are up to date.
Note: The 1+0 and 1+2 encoding modes are meant for small clusters that have too few nodes for other erasure coding modes but will grow in the future. As the redundancy type cannot be changed once chosen (from replication to erasure coding or vice versa), these modes allow you to choose erasure coding even if your cluster is smaller than recommended. Once the cluster has grown, more beneficial redundancy modes can be chosen.
• Replication in Acronis Cyber Infrastructure is much faster than that of a typical online RAID 1/5/10 rebuild. The reason is that Acronis Cyber Infrastructure replicates chunks in parallel, to multiple storage nodes.
• The more storage nodes there are in a cluster, the faster the cluster will recover from a disk or node failure. High replication performance minimizes the periods of reduced redundancy for the cluster.
The incoming data stream is split into fragments of M chunks each, N parity chunks are added, and the resulting M+N chunks are distributed across different storage nodes.

2.6.3 No Redundancy

Warning: Danger of data loss! Without redundancy, singular chunks are stored on storage nodes, one per node. If the node fails, the data may be lost. Having no redundancy is not recommended in any scenario, unless you only want to evaluate Acronis Cyber Infrastructure on a single server.
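To summarize the space cost of the redundancy schemes discussed above, the raw-to-usable ratio follows directly from the chunk layout (a quick check, not an exact sizing guide, since it ignores system and rebuild reserves):

\[ \text{raw space} = \text{usable space} \times \frac{M + N}{M} \]

For example, 3+2 encoding writes 5 chunks for every 3 chunks of user data (about 1.67x raw space), while 3 replicas write 3 chunks for every chunk of user data (3x raw space); both survive the simultaneous loss of any two storage nodes.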
The following policies are available:

• Host as a failure domain (default). If a single host running multiple CS services fails (e.g., due to a power outage or network disconnect), all CS services on it become unavailable at once. To protect against data loss under this policy, Acronis Cyber Infrastructure never places more than one data replica per host. This policy is highly recommended for clusters of three or more nodes.
• Disk, the smallest possible failure domain.
supported.

2.9 Understanding Cluster Rebuilding

The storage cluster is self-healing. If a node or disk fails, the cluster automatically tries to restore the lost data, i.e., rebuild itself. The rebuild process involves the following steps. Every CS sends a heartbeat message to an MDS every 5 seconds. If a heartbeat is not sent, the CS is considered inactive and the MDS informs all cluster components that they must stop requesting operations on its data.
The second prerequisite can be illustrated with the following example. In a cluster of ten 10 TB nodes, at least 1 TB on each node should be kept free, so that if a node fails, its 9 TB of data can be rebuilt on the remaining nine nodes. If, however, a cluster has ten 10 TB nodes and one 20 TB node, each smaller node should have at least 2 TB free in case the largest node fails (while the largest node should have 1 TB free).
CHAPTER 3
Installing Using GUI

After planning out the infrastructure, proceed to install the product on each server included in the plan.

Important: Time needs to be synchronized via NTP on all nodes in the same cluster. Make sure that the nodes can access the NTP server.

3.1 Obtaining Distribution Image

To obtain the distribution ISO image, visit the product page and submit a request for the trial version.
3.2.1 Preparing for Installation from USB Storage Drives

To install Acronis Cyber Infrastructure from a USB storage drive, you will need a 4 GB or higher-capacity USB drive and the Acronis Cyber Infrastructure distribution ISO image. Make a bootable USB drive by transferring the distribution image to it with dd.

Important: Be careful to specify the correct drive to transfer the image to.

For example, on Linux:
# dd if=storage-image.
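A typical full form of the dd invocation is sketched below. The image file name and the target device are placeholders, so check the device name with lsblk first, because dd will overwrite whatever disk you point it at:

# lsblk                                        # identify the USB drive, e.g. /dev/sdX
# dd if=storage-image.iso of=/dev/sdX bs=4M    # write the ISO to the raw device
# sync                                         # flush buffers before removing the drive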
If you choose Install Acronis Cyber Infrastructure, you will be asked to complete these steps:
1. Read and accept the user agreement.
2. Set up the network.
3. Choose a time zone. The date and time will be configured via NTP.
4. Choose what storage cluster node you are installing: the first one or a secondary one. You can also choose to skip this step so you can add the node to the storage cluster manually later.
5. Choose the destination disk to install Acronis Cyber Infrastructure on.
Important: The MTU value must be the same across the entire network. You will need to configure the same MTU value on:
• Each router and switch on the network (consult your network equipment manuals)
• Each node's network card, as well as each bond or VLAN

It is also recommended to create two bonded connections as described in Planning Network (page 22) and to create three VLAN interfaces on one of the bonds.
2. Link Monitoring to MII (recommended)
3. Monitoring frequency, Link up delay, and Link down delay to 300

Note: It is also recommended to manually set xmit_hash_policy to layer3+4 after the installation (see the verification sketch after this procedure).

3. In the Bonded connections section on the Bond tab, click Add.
4. In the Editing bond slave… window, select a network interface to bond from the Device drop-down list.
5. Configure MTU if required and click Save.
6. Repeat steps 3 to 5 for each network interface you need to add to the bonded connection.
7. Configure IPv4 settings if required and click Save.

The connection will appear in the list on the Network and hostname screen.
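After the installation, you can verify the resulting bond settings and MTU from the shell and, if needed, adjust the transmit hash policy. The sketch below assumes the bond connection is managed by NetworkManager and is named bond0; adapt the names to your environment:

# cat /proc/net/bonding/bond0     # shows the bonding mode, MII status, and hash policy
# ip link show bond0              # shows the current MTU of the bond
# ip link set dev bond0 mtu 9000  # change the MTU temporarily (not persistent)
# nmcli connection modify bond0 +bond.options "xmit_hash_policy=layer3+4"
# nmcli connection up bond0       # re-activate the connection to apply the change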
3. Configure IPv4 settings if required and click Save.

The VLAN adapter will appear in the list on the Network and hostname screen.

3.6 Step 3: Choosing the Time Zone

On this step, select your time zone. The date and time will be set via NTP. You will need an Internet connection for the synchronization to complete.
infrastructure during installation.

• Choose Yes, create a new cluster if you are just starting to set up Acronis Cyber Infrastructure and want to create a new storage cluster. This primary node, also called the management node, will host cluster management services and the admin panel. It will also serve as a storage node. Only one primary node is required.
3. Create and confirm a password for the superadmin account of the admin panel.
4. Click Next.

3.7.2 Deploying Secondary Nodes

If you chose to deploy a secondary node, you will need to provide the IP address of the management node and the token that can only be obtained from the cluster admin panel. A single token can be used to deploy multiple secondary nodes in parallel.

To obtain the token and management node address:
1. Log in to the admin panel on port 8888.
Note: You can generate a new token if needed. Generating a new token invalidates the old one.

Back on the installation screen, enter the management node address and the token, and click Next. The node may appear on the INFRASTRUCTURE > Nodes screen in the UNASSIGNED list as soon as the token is validated. However, you will be able to join it to the storage cluster only after the installation is complete.
It is recommended to create the RAID1 volume from disks of the same size, because the volume size equals the size of the smallest disk. Click Next.

Important: All information on all disks recognized by the installer will be destroyed.

3.9 Step 6: Setting the Root Password

On the last step, enter and confirm the password for the root account and click Start installation. Once the installation is complete, the node will reboot automatically.
described in the Administrator’s Guide.
CHAPTER 4
Installing Using PXE

This chapter explains how to install Acronis Cyber Infrastructure over the network using a preboot execution environment (PXE) server. You will need to do the following:
1. Get the distribution image as described in Obtaining Distribution Image (page 34).
2. Set up the TFTP, DHCP, and HTTP (or FTP) servers.
3. Boot the node where you will install Acronis Cyber Infrastructure from the network and launch the Acronis Cyber Infrastructure installer.
• HTTP server. This is a machine serving the Acronis Cyber Infrastructure installation files over the network. You can also share the Acronis Cyber Infrastructure distribution over the network via FTP (e.g., with vsftpd) or NFS.

The easiest way is to set up all of these on the same physical machine:
# yum install tftp-server syslinux httpd dhcp

You can also use servers that already exist in your infrastructure. For example, skip httpd and dhcp if you already have the HTTP and DHCP servers.
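For the DHCP part, the server must point PXE clients at the TFTP server and the boot loader file. A minimal ISC dhcpd.conf fragment might look like the sketch below; the subnet and all addresses are placeholders for your own network:

subnet 10.0.0.0 netmask 255.255.255.0 {
    range 10.0.0.100 10.0.0.200;      # addresses handed out to PXE clients
    option routers 10.0.0.1;
    next-server 10.0.0.20;            # IP address of the TFTP server
    filename "pxelinux.0";            # boot loader served over TFTP
}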
# mkdir /tftpboot/pxelinux.cfg
# touch /tftpboot/pxelinux.cfg/default

4. Add the following lines to default:

default menu.c32
prompt 0
timeout 100
ontimeout INSTALL
menu title Boot Menu
label INSTALL
  menu label Install
  kernel vmlinuz
  append initrd=initrd.img ip=dhcp

For detailed information on parameters you can specify in this file, see the documentation for syslinux.

5. Restart the xinetd service:
# /etc/init.d/xinetd restart
2. Copy the contents of your Acronis Cyber Infrastructure installation DVD to some directory on the HTTP server (e.g., /var/www/html/distrib).
3. On the PXE server, specify the path to the Acronis Cyber Infrastructure installation files in the append line of the /tftpboot/pxelinux.cfg/default file. For EFI-based systems, the file you need to edit has the name of /tftpboot/pxelinux.cfg/efidefault or /tftpboot/pxelinux.cfg/.
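As an illustration of step 3, the append line can reference the HTTP share with the standard anaconda inst.repo boot option; the server address and directory below are placeholders matching the example above:

append initrd=initrd.img ip=dhcp inst.repo=http://10.0.0.20/distrib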
4.3 Creating Kickstart File

If you plan to perform an unattended installation of Acronis Cyber Infrastructure, you can use a kickstart file. It will automatically supply to the Acronis Cyber Infrastructure installer the options you would normally choose by hand. Acronis Cyber Infrastructure uses the same kickstart file syntax as Red Hat Enterprise Linux.
network
Configures network devices and creates bonds and VLANs.

raid
Creates a software RAID volume.

part
Creates a partition on the server.
Note: The size of the /boot partition must be at least 1 GB.

rootpw --iscrypted
Sets the root password for the server. The value is your password's hash obtained with the algorithm specified in the --passalgo parameter. For example, to create a SHA-512 hash of your password, run python -c 'import crypt; print(crypt.
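One way to complete that one-liner is shown below. It assumes a glibc-based system where the crypt module accepts a $6$ (SHA-512) salt prefix; the password and salt are placeholders:

# python -c 'import crypt; print(crypt.crypt("yourpassword", "$6$yoursalt$"))'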
4.3.2.1 Installing Packages

In the body of the %packages script, specify the package group hci to be installed on the server:

%packages
@^hci
%end

4.3.2.2 Installing Admin Panel and Storage

Only one admin panel is required; install it on the first node only. To deploy all other nodes, you will need to obtain a token from a working admin panel. For more information, see Deploying Secondary Nodes (page 42).
# /usr/libexec/vstorage-ui-agent/bin/register-storage-node.sh -m <management_node_IP>

To install the components without running scripts afterwards, at the expense of exposing the password and token, specify the interfaces for the public (external) and private (internal) networks and the password for the superadmin account of the admin panel in the kickstart file. For example:

%addon com_vstorage --management --internal-iface=<internal_iface_name> \
--external-iface=<public_iface_name>
# Use the SHA-512 encryption for user passwords and enable shadow passwords.
auth --enableshadow --passalgo=sha512
# Use the US English keyboard.
keyboard --vckeymap=us --xlayouts='us'
# Use English as the installer language and the default system language.
lang en_US.UTF-8
# Specify the encrypted root password for the node.
rootpw --iscrypted xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
# Disable SELinux.
#%addon com_vstorage --storage --token=xxxxxxxxx --mgmt-node-addr=10.37.130.1
#%end

4.3.3.1 Creating the System Partition on Software RAID1

To create a system partition on a software RAID1 volume, you will need to do the following instead of using autopart:
1. Partition the disks.
2. Create a RAID1 volume.
3. Create swap and root LVM volumes.

It is recommended to create the RAID1 volume from disks of the same size, because the volume size equals the size of the smallest disk.
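A sketch of the corresponding kickstart directives is shown below. The disk names, sizes, and volume group name are placeholders, and the /boot and EFI partitions are omitted for brevity; adjust everything to your actual hardware:

# 1. Partition the first two system disks and reserve space for RAID1
part raid.01 --size=102400 --ondisk=sda
part raid.02 --size=102400 --ondisk=sdb
# 2. Assemble a RAID1 volume and use it as an LVM physical volume
raid pv.01 --device=root --level=RAID1 raid.01 raid.02
volgroup vg_sys pv.01
# 3. Create the swap and root logical volumes on the RAID1-backed volume group
logvol swap --vgname=vg_sys --name=swap --size=8192
logvol / --vgname=vg_sys --name=root --fstype=ext4 --size=1 --grow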
4.4 Using Kickstart File

To install Acronis Cyber Infrastructure using a kickstart file, you first need to make the kickstart file accessible over the network. To do this:
1. Copy the kickstart file to the same directory on the HTTP server where the Acronis Cyber Infrastructure installation files are stored (e.g., to /var/www/html/astor).
2. Add the following string to the /tftpboot/pxelinux.cfg/default file on the PXE server: inst.
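In practice, the kickstart location is passed with the standard anaconda inst.ks boot option. A hypothetical append line combining the repository and kickstart locations (the server address and paths are placeholders) might look like this:

append initrd=initrd.img ip=dhcp inst.repo=http://10.0.0.20/astor inst.ks=http://10.0.0.20/astor/ks.cfg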
CHAPTER 5
Additional Installation Modes

This chapter describes additional installation modes that may be of help depending on your needs.

5.1 Installing via VNC

To install Acronis Cyber Infrastructure via VNC, boot to the welcome screen and do the following:
1. Select the main installation option and press E to start editing it.
2. Add text at the end of the line starting with linux /images/pxeboot/vmlinuz. For example: linux /images/pxeboot/vmlinuz inst.stage2=hd:LABEL= quiet ip=dhcp logo.
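The options that actually enable remote access are the standard anaconda parameters inst.vnc and, optionally, inst.vncpassword. A hypothetical edited line (the volume label and password are placeholders) could end like this:

linux /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=<ISO_volume_label> quiet ip=dhcp inst.vnc inst.vncpassword=<password>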
CHAPTER 6
Troubleshooting Installation

This chapter describes ways to troubleshoot the installation of Acronis Cyber Infrastructure.

6.1 Installing in Basic Graphics Mode

If the installer cannot load the correct driver for your graphics card, you can try to install Acronis Cyber Infrastructure in the basic graphics mode. To select this mode, on the welcome screen, choose Troubleshooting, then Install in basic graphics mode. In this mode, however, you may experience issues with the user interface.
environment.

4. In the rescue environment, you can choose one of the following options:
• Continue (press 1): mount the Acronis Cyber Infrastructure installation in read and write mode under /mnt/sysimage.
• Read-only mount (press 2): mount the Acronis Cyber Infrastructure installation in read-only mode under /mnt/sysimage.
• Skip to shell (press 3): load the shell if your file system cannot be mounted, for example, when it is corrupted.