Acronis Cyber Infrastructure 5
Table of contents

Supported storage types
Accessing S3 buckets
    Managing buckets via the Acronis Cyber Infrastructure user panel
        Logging in to the user panel
        Adding, deleting, and listing S3 buckets
        Creating, deleting, and listing folders
        Uploading and downloading files
        Obtaining and validating file certificates
    Accessing S3 storage with CyberDuck
    Managing S3 bucket versions
    Mounting S3 storage with Mountain Duck
    Creating S3 buckets on Mounted S3 Storage
    S3 bucket and key naming policies
Supported storage types

Your service provider can configure Acronis Cyber Infrastructure to keep your data in three storage types:
- S3 object storage for storing an unlimited number of objects (files).
- iSCSI block storage for virtualization, databases, and other needs.
- NFS shares for storing an unlimited number of files via a distributed filesystem.

The following sections describe the ways to access data in Acronis Cyber Infrastructure in detail.
Accessing S3 buckets

To access S3 buckets, get the following information (credentials) from your system administrator:
- User panel IP address
- DNS name of the S3 endpoint
- Access key ID
- Secret access key

Acronis Cyber Infrastructure allows you to access your S3 data in several ways:
- Via the Acronis Cyber Infrastructure user panel
- Via a third-party S3 application, such as Cyberduck or Mountain Duck
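The same credentials also work with command-line S3 clients. Below is a minimal sketch using the AWS CLI (a tool not covered by this guide, shown for illustration only), with a hypothetical endpoint name and placeholder keys:

# Store the credentials received from your administrator in the default AWS CLI profile
aws configure set aws_access_key_id <access key ID>
aws configure set aws_secret_access_key <secret access key>
# A region must be set for the CLI to run; for S3-compatible storage the value is usually arbitrary
aws configure set region us-east-1
# List your buckets against the S3 endpoint (s3.example.com is a hypothetical name)
aws s3 ls --endpoint-url https://s3.example.com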
Logging in to the user panel

2. On the login screen, enter your credentials, and then click Log in.

Once you log in to the web interface, you will see the Buckets screen with the list of your buckets. From here, you can manage buckets, as well as the folders and files stored inside them.
To log out, click the user icon in the upper right corner of any screen, and then click Log out.
Adding, deleting, and listing S3 buckets

- To add a new bucket, click Add bucket, specify a name, and click Add. Use bucket names that comply with DNS naming conventions. For more information on bucket naming, refer to "S3 bucket and key naming policies" (p. 11).
- To delete a bucket, select it, and then click Delete.
- To list the bucket contents, click the bucket name in the list.

Listing S3 bucket contents in a browser

You can list bucket contents with a web browser.
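The URL format is not shown in this excerpt; with S3-compatible storage, a bucket is typically addressed by prefixing the bucket name to the endpoint DNS name (virtual-hosted style). A hedged example with hypothetical names, assuming the bucket allows listing:

# "mybucket" and "s3.example.com" are hypothetical; if listing is permitted,
# the response is an XML ListBucketResult document
curl https://mybucket.s3.example.com/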
Creating, deleting, and listing folders

- To delete a folder, select it, and then click Delete.
- To list the folder contents, click the folder name.

Uploading and downloading files

On the bucket or folder contents screen:
- To upload files to S3, click Upload, and then choose files to upload.
- To download files, select them, and then click Download.
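Uploads and downloads can also be scripted with any S3 client instead of the user panel. A sketch with the AWS CLI, reusing the hypothetical endpoint and bucket names from the earlier examples:

# Upload a local file to a folder (prefix) in the bucket
aws s3 cp report.pdf s3://mybucket/docs/report.pdf --endpoint-url https://s3.example.com
# Download the same object back into the current directory
aws s3 cp s3://mybucket/docs/report.pdf . --endpoint-url https://s3.example.com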
Obtaining and validating file certificates

- To get a notarization certificate for a file, select it, and then click Get Certificate.
- To check the validity of a file's certificate, click Verify.

Accessing S3 storage with CyberDuck

To access Acronis Cyber Infrastructure with CyberDuck, do the following:
1. In CyberDuck, click Open Connection.
2. Specify your credentials:
   - The DNS name of the S3 endpoint.
   - The Access Key ID and, as the Password, the secret access key of an object storage user.
Managing S3 bucket versions

With versioning, you can easily recover from both unintended user actions and application failures. For more information about bucket versioning, refer to the Amazon documentation.
Bucket versioning is turned off by default. In CyberDuck, you can enable it in the bucket properties.

Mounting S3 storage with Mountain Duck

Mountain Duck enables you to mount and access Acronis Cyber Infrastructure S3 storage as a regular disk drive. Do the following:
3. In the properties window, select Amazon S3 profile from the first drop-down list and specify the following parameters:
   - Disk drive name in the Nickname field
   - Endpoint DNS name in the Server field
   - Access key ID in the Username field
   Click Connect.
4. In the login window, specify Secret Access Key and click Login.

Mountain Duck will mount the S3 storage as a disk drive. On the disk, you can manage buckets and store files in them.
Creating S3 buckets on Mounted S3 Storage

Windows and macOS, the operating systems supported by Mountain Duck, treat buckets as folders when the S3 storage is mounted as a disk drive. In both operating systems, the default name of a newly created folder contains spaces. This violates the bucket naming conventions (refer to "S3 bucket and key naming policies" (p. 11)), so you cannot create a new bucket directly on the mounted S3 storage.
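One way around this limitation (an assumption on our part, not a procedure from this guide) is to create the bucket through the user panel or an S3 API client, and then use the mounted drive only for folders and files. The same client can also enable the versioning described in "Managing S3 bucket versions". A sketch with the AWS CLI and the hypothetical names used earlier:

# Create a bucket with a DNS-compliant name (no spaces)
aws s3api create-bucket --bucket mybucket --endpoint-url https://s3.example.com
# Optionally enable versioning on the bucket
aws s3api put-bucket-versioning --bucket mybucket \
    --versioning-configuration Status=Enabled --endpoint-url https://s3.example.com
# Verify the versioning state
aws s3api get-bucket-versioning --bucket mybucket --endpoint-url https://s3.example.com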
Accessing iSCSI targets

This section describes ways to attach iSCSI targets to operating systems and third-party virtualization solutions that support the explicit ALUA mode.

Accessing iSCSI targets from VMware ESXi

Before using Acronis Cyber Infrastructure volumes with VMware ESXi, you need to configure it to work properly with ALUA Active/Passive storage arrays. It is recommended to switch to the VMW_PSP_RR path selection policy (PSP) to avoid any issues. For example, on VMware ESXi 6.
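The exact command is cut off in this excerpt. As an illustration only, a SATP claim rule that applies the round-robin PSP to these volumes might look like the following; the vendor and model strings are taken from the multipath configuration shown later in this guide, and the syntax should be verified against your ESXi version:

esxcli storage nmp satp rule add --satp VMW_SATP_ALUA --psp VMW_PSP_RR \
    --vendor VSTORAGE --model VSTOR-DISK --claim-option tpgs_on \
    --description "Acronis Cyber Infrastructure ALUA round-robin rule"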
4. Select the disk and click New datastore. In the wizard that appears, enter a name for the datastore and select the partitioning options. Click Finish to partition the disk.

Warning! Partitioning the disk will erase all data from it.

The ready-to-use disk will appear in the list of datastores. You can now view its contents with the datastore browser and provision it to VMs.

Note
If your ESXi host loses connectivity to VMFS3 or VMFS5 datastores, follow the instructions in KB article #2113956.
    device {
        vendor "VSTORAGE"
        product "VSTOR-DISK"
        features "2 pg_init_retries 50"
        hardware_handler "1 alua"
        path_grouping_policy group_by_node_name
        path_selector "round-robin 0"
        no_path_retry queue
        user_friendly_names no
        flush_on_last_del yes
        failback followover
        path_checker tur
        detect_prio no
        prio alua
    }
}
...
3. Load the kernel module and launch the multipathing service.
# modprobe dm-multipath
# systemctl start multipathd; systemctl enable multipathd
4. If necessary, enable CHAP parameters node.session.
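Step 4 is cut off above. For reference, CHAP for the open-iscsi initiator is configured through the node.session.auth.* parameters in /etc/iscsi/iscsid.conf, after which the targets are discovered and logged in to with iscsiadm. A sketch with hypothetical addresses and credentials (the original steps are not shown in this excerpt):

# In /etc/iscsi/iscsid.conf (only if your administrator enabled CHAP; hypothetical values):
#   node.session.auth.authmethod = CHAP
#   node.session.auth.username = iscsiuser
#   node.session.auth.password = iscsipassword

# Discover targets on the portal address provided by your administrator (hypothetical IP)
iscsiadm -m discovery -t sendtargets -p 192.168.10.10
# Log in to the discovered targets
iscsiadm -m node -l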
| `- 6:0:0:1 sdf 8:80  active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 8:0:0:1 sdj 8:144 active ghost running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 7:0:0:1 sdh 8:112 active ghost running
# fdisk -l | grep 360000000000000000000b50326ea44e3
Disk /dev/mapper/360000000000000000000b50326ea44e3: 10.7 GB, \
10737418240 bytes, 20971520 sectors

You can also find out the multipath device ID by adding 360000000000000000000 to the last six bytes of the volume ID.
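For example, the volume whose ID ends in the six bytes b50326ea44e3 (as in the output above) appears as the following multipath device:

# The fixed prefix 360000000000000000000 plus the last six bytes of the volume ID
ls -l /dev/mapper/360000000000000000000b50326ea44e3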
Your server will automatically reboot to finalize the installation.
2. In the Windows PowerShell console, configure MPIO as follows:
   a. Enable support for iSCSI disks:
      > Enable-MSDSMAutomaticClaim -BusType iSCSI
   b. Set the failover policy to Fail Over Only. The policy uses a single active path for sending all I/O, and all other paths are standby. If the active path fails, one of the standby paths is used. When the path recovers, it becomes active again.
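      The cmdlet for this sub-step is not included in this excerpt; with the built-in MPIO module, the Fail Over Only policy is typically set as follows (a reference sketch, verify against your Windows Server version):
      > Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy FOO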
   d. In the Discover Target Portal window, enter the target IP address and click OK. Repeat this step for each target from the target group.
   e. On the Targets tab, click Refresh to discover the added targets.
   f. Click Connect for each target to connect it to the initiator. In the Connect To Target window, select the Enable multi-path checkbox and click OK.
   g. On the Targets tab, click Devices..., select the connected LUN, and click MPIO...
   h. Make sure the connected LUN has several paths. (These discovery and connection steps can also be scripted; see the PowerShell sketch at the end of this section.)

You can now initialize the newly added disk for use in Microsoft Hyper-V. Do the following:
1. Open Disk Management, right-click the added disk, and choose Properties from the drop-down menu.
2. Check the settings on the MPIO tab. The first connected target becomes Active/Optimized and the preferred path.
3. Partition and format the disk as usual.
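The portal discovery, target connection, and disk initialization steps above can also be scripted. A hedged PowerShell sketch, assuming a hypothetical portal address and that the new LUN is the only uninitialized (RAW) disk on the host; adjust the selection to your environment:

# Discover the iSCSI target portal (hypothetical address); repeat for each portal in the target group
New-IscsiTargetPortal -TargetPortalAddress 192.168.10.10
# Connect all discovered targets with multipathing enabled and make the sessions persistent
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true
# Bring the new disk online, initialize it, and create an NTFS volume on it
$disk = Get-Disk | Where-Object PartitionStyle -eq 'RAW'
$disk | Where-Object IsOffline | Set-Disk -IsOffline $false
$disk | Initialize-Disk -PartitionStyle GPT
$disk | New-Partition -UseMaximumSize -AssignDriveLetter | Format-Volume -FileSystem NTFS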
Accessing NFS shares

This section describes ways to mount Acronis Cyber Infrastructure NFS shares on Linux and macOS.

Note
Acronis Cyber Infrastructure currently does not support the Windows built-in NFS client.

Mounting NFS exports on Linux

You can mount an NFS export created in Acronis Cyber Infrastructure like any other directory exported via NFS. You will need the share IP address (or hostname) and the volume identifier.
In the console, run the following commands:
# mkdir /mnt/nfs
# mount -t nfs -o vers=4.0 <share IP or hostname>:/<volume identifier> /mnt/nfs
Mounting NFS exports on macOS

1. Set the NFS version to 4.0. To do this, add the nfs.client.mount.options = vers=4.0 line to the /etc/nfs.conf file.
2. In the Finder > Go > Connect to server window, specify nfs://192.168.0.51://, where:
   - 192.168.0.51 is the share IP address. You can also use the share hostname.
   - // is the root export path. For user exports, specify their full path, for example: //export1.
3. Click Connect.
The Finder will mount the export to /Volumes//.
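As an alternative to the Finder, the export can also be mounted from Terminal. A sketch assuming the same address and NFS version as above; the resvport mount option is often required by NFS servers for macOS clients, so drop it if yours does not need it:

# Create a mount point and mount the root export over NFS 4.0
sudo mkdir -p /Volumes/acronis-nfs
sudo mount -t nfs -o vers=4.0,resvport 192.168.0.51:/ /Volumes/acronis-nfs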