Acronis Cyber Infrastructure 3.
Copyright Statement
Copyright © Acronis International GmbH, 2002-2019. All rights reserved. "Acronis" and "Acronis Secure Zone" are registered trademarks of Acronis International GmbH. "Acronis Compute with Confidence", "Acronis Startup Recovery Manager", "Acronis Instant Restore", and the Acronis logo are trademarks of Acronis International GmbH. Linux is a registered trademark of Linus Torvalds. VMware and VMware Ready are trademarks and/or registered trademarks of VMware, Inc.
Contents

1. Introduction
1.1 Providing Credentials
1.2 Managing Tasks
2. Managing Storage Cluster
3. Managing Compute Cluster
4. Managing General Settings
5. Monitoring Storage Cluster
6. Accessing Storage Clusters via iSCSI
7.6 Creating SSH-Enabled Templates
7.7 Securing OpenStack API Traffic with SSL
7.8 Enabling Metering for Compute Resources
CHAPTER 1
Introduction
This guide describes the syntax and parameters of the vinfra command-line tool that can be used to manage Acronis Cyber Infrastructure from the console and automate management tasks.
Note: While the following chapters provide information on specific operations that you can perform with vinfra, you can also run vinfra help to get a list of all supported commands and their descriptions. For help on a specific command, run either vinfra help <command> or vinfra <command> --help.
1.1 Providing Credentials
The vinfra CLI tool requires the following information:
• IP address or hostname of the management node (set to backend-api.svc.vstoragedomain by default).
• Username (admin by default).
• Password (created during installation of Acronis Cyber Infrastructure).
This information can be supplied via the --vinfra-portal, --vinfra-username, and --vinfra-password command-line parameters with each command.
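For example, a one-off invocation can pass all three parameters explicitly (a sketch; node list serves only as a sample subcommand, and <password> must be replaced with the actual password):

# vinfra --vinfra-portal backend-api.svc.vstoragedomain \
    --vinfra-username admin --vinfra-password <password> node list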
Chapter 1. Introduction +--------------------------------------+---------+-----------------------------------------+ # vinfra task show 8fc27e7a-ba73-471d-9134-e351e1137cf4 +---------+----------------------------------------+ | Field | Value | +---------+----------------------------------------+ | args | - stor1 | | | - 7ffa9540-5a20-41d1-b203-e3f349d62565 | | | - null | | | - null | | kwargs | {} | | name | backend.tasks.cluster.
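Because long-running operations return only a task ID, scripts typically poll the task until it finishes. A minimal polling sketch, assuming the terminal state values are success and failure and parsing the table output shown above (the grep-based parsing is illustrative, not an official interface):

# TASK=8fc27e7a-ba73-471d-9134-e351e1137cf4
# until vinfra task show "$TASK" | grep -Eq '\| state .*(success|failure)'; do sleep 5; done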
CHAPTER 2
Managing Storage Cluster
2.1 Managing Tokens
2.1.1 vinfra node token show
Display the backend token:
usage: vinfra node token show
Example:
# vinfra node token show
+-------+---------------+
| Field | Value |
+-------+---------------+
| host | 10.37.130.101 |
| token | dc56d4d2 |
| ttl | 86398 |
+-------+---------------+
This command shows the details of the current token.
2.1.2 vinfra node token create
Create a new backend token:
usage: vinfra node token create [--ttl <ttl>]
--ttl <ttl> Token time to live, in seconds
Example:
# vinfra node token create --ttl 86400
+-------+---------------+
| Field | Value |
+-------+---------------+
| host | 10.37.130.101 |
| token | dc56d4d2 |
| ttl | 86398 |
+-------+---------------+
This command creates a new token with the time to live (TTL) of 86400 seconds.
2.2 Managing Traffic Types
2.2.1 vinfra cluster traffic-type create
Create a traffic type:
usage: vinfra cluster traffic-type create --port <port> <traffic-type>
<traffic-type> Traffic type name
Example:
# vinfra cluster traffic-type create "MyTrafficType" --port 6900
+-----------+-----------------+
| Field | Value |
+-----------+-----------------+
| exclusive | False |
| name | MyTrafficType |
| port | 6900 |
| type | custom |
+-----------+-----------------+
This command creates a custom traffic type MyTrafficType on port 6900.
2.2.3 vinfra cluster traffic-type show
Show details of a traffic type:
usage: vinfra cluster traffic-type show <traffic-type>
<traffic-type> Traffic type name
Example:
# vinfra cluster traffic-type show Storage
+-----------+------------+
| Field | Value |
+-----------+------------+
| exclusive | True |
| name | Storage |
| port | |
| type | predefined |
+-----------+------------+
This command shows the details of the traffic type Storage.
| name | MyOtherTrafficType |
| port | 6901 |
| type | custom |
+-----------+--------------------+
This command renames the traffic type MyTrafficType to MyOtherTrafficType and changes its port to 6901.
Chapter 2. Managing Storage Cluster +-------+--------------------------------------+ This command creates a custom network MyNet and assigns the traffic type SSH to it. 2.2.
Chapter 2. Managing Storage Cluster | id | 03d5eeb3-1833-4626-885d-dd066635f5de | | name | MyNet | | roles | - SSH | | type | Custom | +-------+--------------------------------------+ This command shows the details of the custom network MyNet. 2.2.
Chapter 2. Managing Storage Cluster # vinfra task show b29f6f66-37d7-47de-b02e-9f4087ad932b +---------+-------------------------------------------------------------+ | Field | Value | +---------+-------------------------------------------------------------+ | args | - 03d5eeb3-1833-4626-885d-dd066635f5de | | | kwargs | name: MyOtherNet | | roles: | | | - ssh | | | - iscsi | | | - nfs | | name | backend.presentation.network.roles.tasks.
Chapter 2. Managing Storage Cluster Task outcome: # vinfra task show c774f55d-c45b-42cd-ac9e-16fc196e9283 +---------+-----------------------------------------------------------------+ | Field | Value | +---------+-----------------------------------------------------------------+ | details | | | name | backend.presentation.network.roles.tasks.
usage: vinfra node join [--disk <disk>:<role>[:<key=value,...>]] <node>
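For example, a join matching the task output below could look as follows (a sketch: the disk device names are hypothetical, while the roles mirror the kwargs in the output):

# vinfra node join f59dabdb-bd1c-4944-8af2-26b8fe9ff8d4 \
    --disk sda:mds-system --disk sdb:cs --disk sdc:cs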
Chapter 2. Managing Storage Cluster | args | - f59dabdb-bd1c-4944-8af2-26b8fe9ff8d4 | | | - 1 | | kwargs | disks: | | | - id: 85F32403-94A9-465A-9E6C-C1A2B41294FC | | | role: mds-system | | | service_params: {} | | | - id: FE0B5876-E054-489B-B0FD-72429BEFD46A | | | role: cs | | | service_params: {} | | | - id: D3BEF4BB-AA3B-4DB6-9376-BC7CDA636700 | | | role: cs | | | service_params: {} | | name | backend.tasks.node.
Chapter 2. Managing Storage Cluster 2.3.3 vinfra node show Show storage node details: usage: vinfra node show Node ID or hostname Example: # vinfra node show 4f96acf5-3bc8-4094-bcb6-4d1953be7b55 +---------------+--------------------------------------+ | Field | Value | +---------------+--------------------------------------+ | cpu_cores | 2 | | host | stor-1.example.com.vstoragedomain. | | id | 4f96acf5-3bc8-4094-bcb6-4d1953be7b55 | | ipaddr | stor-1.example.com.vstoragedomain.
Chapter 2. Managing Storage Cluster Example: # vinfra node release f59dabdb-bd1c-4944-8af2-26b8fe9ff8d4 +---------+--------------------------------------+ | Field | Value | +---------+--------------------------------------+ | task_id | c2a653a2-8991-4b3a-8bdf-5c0872aa75b3 | +---------+--------------------------------------+ This command creates a task to release the node with the ID f59dabdb-bd1c-4944-8af2-26b8fe9ff8d4 from the storage cluster with migration of data to maintain the set redundancy mode.
# vinfra task show 0eac3b74-e8f5-4974-9efe-a9070187d83c
+---------+----------------------------------------+
| Field | Value |
+---------+----------------------------------------+
| args | - fd1e46de-6e17-4571-bf6b-1ac34ec1c225 |
| kwargs | {} |
| name | backend.tasks.node.DeleteNodeTask |
| state | success |
| task_id | 0eac3b74-e8f5-4974-9efe-a9070187d83c |
+---------+----------------------------------------+
2.4 Managing Node Network Interfaces
Chapter 2. Managing Storage Cluster --node Node ID or hostname (default: node001.vstoragedomain) Network interface name Example: # vinfra node iface show eth0 --node 4f96acf5-3bc8-4094-bcb6-4d1953be7b55 +-----------------------+--------------------------------------+ | Field | Value | +-----------------------+--------------------------------------+ | contained_in | | | dhcp4 | 10.94.29.
4f96acf5-3bc8-4094-bcb6-4d1953be7b55.
2.4.3 vinfra node iface up
Bring a network interface up:
usage: vinfra node iface up [--node <node>] <iface>
--node <node> Node ID or hostname (default: node001.vstoragedomain)
<iface> Network interface name
Chapter 2. Managing Storage Cluster | speeds | current: null | | | max: null | | state | up | | tx_bytes | 1116 | | tx_dropped | 0 | | tx_errors | 0 | | tx_overruns | 0 | | tx_packets | 8 | | type | iface | +-----------------------+--------------------------------------+ This command brings up the network interface eth2 located on the node with the ID 4f96acf5-3bc8-4094-bcb6-4d1953be7b55. 2.4.
Chapter 2. Managing Storage Cluster Network interface name Example: # vinfra node iface set eth2 --network Private \ --node 4f96acf5-3bc8-4094-bcb6-4d1953be7b55 +---------+--------------------------------------+ | Field | Value | +---------+--------------------------------------+ | task_id | 8a378098-6760-4fe9-ac20-1f18a8ed9d2e | +---------+--------------------------------------+ This command creates a task to assign the network interface eth2 located on the node with the ID 4f96acf5-3bc8-4094-bcb6
Chapter 2. Managing Storage Cluster | | rx_packets: 225 | | | speeds: | | | current: null | | | max: null | | | state: up | | | tx_bytes: 13087 | | | tx_dropped: 0 | | | tx_errors: 0 | | | tx_overruns: 0 | | | tx_packets: 145 | | | type: iface | | state | success | | task_id | 8a378098-6760-4fe9-ac20-1f18a8ed9d2e | +---------+---------------------------------------------------------------+ 2.4.
--no-dhcp4 Disable DHCPv4
--dhcp6 Enable DHCPv6
--no-dhcp6 Disable DHCPv6
--auto-routes-v4 Enable automatic IPv4 routes
--ignore-auto-routes-v4 Ignore automatic IPv4 routes
--auto-routes-v6 Enable automatic IPv6 routes
--ignore-auto-routes-v6 Ignore automatic IPv6 routes
--network Network ID or name
--bonding-opts Additional bonding options
--bond-type Bond type (balance-rr, active-backup, balance-xor, broadcast, 802.3ad)
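The task described below corresponds to an invocation along these lines (a sketch: the option that lists the member interfaces is an assumption, as that part of the usage text is lost):

# vinfra node iface create-bond --bond-type balance-xor --ifaces eth2,eth3 \
    --node fd1e46de-6e17-4571-bf6b-1ac34ec1c225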
Chapter 2. Managing Storage Cluster This command creates a task to bond network interfaces eth2 and eth3 into bond0 of the type balance-xor on the node with the ID fd1e46de-6e17-4571-bf6b-1ac34ec1c225.
Chapter 2. Managing Storage Cluster | state | success | | task_id | becf96ad-9e39-4bec-b82c-4e1219a196de | +---------+----------------------------------------------------------------------+ 2.4.
Chapter 2. Managing Storage Cluster --ignore-auto-routes-v4 Ignore automatic IPv4 routes --auto-routes-v6 Enable automatic IPv6 routes --ignore-auto-routes-v6 Ignore automatic IPv6 routes --network Network ID or name --node Node ID or hostname (default: node001.vstoragedomain) --iface Interface name --tag VLAN tag number Example: # vinfra node iface create-vlan --iface eth2 --tag 100 --dhcp4 \ --node fd1e46de-6e17-4571-bf6b-1ac34ec1c225 +---------+----------------------------
Chapter 2. Managing Storage Cluster | | dhcp6_enabled: false | | | duplex: null | | | gw4: null | | | gw6: null | | | ignore_auto_routes_v4: true | | | ignore_auto_routes_v6: true | | | ipv4: [] | | | ipv6: | | | - fe80::21c:42ff:fe81:27d0/64 | | | mac_addr: 00:1c:42:81:27:d0 | | | mtu: 1500 | | | multicast: true | | | name: eth2.
# vinfra node iface delete --node fd1e46de-6e17-4571-bf6b-1ac34ec1c225 eth2.100
+---------+--------------------------------------+
| Field | Value |
+---------+--------------------------------------+
| task_id | 16503616-6c1c-48f9-999a-9d87b617d9ee |
+---------+--------------------------------------+
This command creates a task to delete the VLAN interface eth2.100 from the node with the ID fd1e46de-6e17-4571-bf6b-1ac34ec1c225.
2.5 Managing Node Disks
2.5.1 vinfra node disk list
| 49D792CA-<...> | 94d58604-<...> | sdc | 2.1GiB | 1007.8GiB | cs |
+----------------+----------------+--------+--------+-----------+------------+
This command lists disks on the node with the ID 94d58604-6f30-4339-8578-adb7903b7277. (The output is abridged to fit on the page.)
2.5.2 vinfra node disk show
Show details of a disk:
usage: vinfra node disk show [--node <node>] <disk>
--node <node> Node ID or hostname (default: node001.vstoragedomain)
<disk> Disk ID or device name
| | used: 2246164480 |
| tasks | |
| temperature | 0.0 |
| transport | |
| type | hdd |
+--------------------+--------------------------------------+
This command shows the details of the disk with the ID EAC7DF5D-9E60-4444-85F7-5CA5738399CC attached to the node with the ID 94d58604-6f30-4339-8578-adb7903b7277.
2.5.3 vinfra node disk assign
Add multiple disks to the storage cluster:
usage: vinfra node disk assign --disk <disk>:<role>[:<key=value,...>]
Chapter 2. Managing Storage Cluster Example: # vinfra node disk release sdc --node f59dabdb-bd1c-4944-8af2-26b8fe9ff8d4 +---------+--------------------------------------+ | Field | Value | +---------+--------------------------------------+ | task_id | 587a936d-3953-481c-a2cd-b1223b890bec | +---------+--------------------------------------+ This command creates a task to release the role cs from the disk sdc on the node with the ID f59dabdb-bd1c-4944-8af2-26b8fe9ff8d4.
# vinfra node disk blink on sda --node f59dabdb-bd1c-4944-8af2-26b8fe9ff8d4
This command starts blinking the disk sda on the node with the ID f59dabdb-bd1c-4944-8af2-26b8fe9ff8d4.
2.5.6 vinfra node disk blink off
Stop blinking the specified disk bay:
usage: vinfra node disk blink off [--node <node>] <disk>
--node <node> Node ID or hostname (default: node001.vstoragedomain)
<disk> Disk ID or device name
Chapter 2. Managing Storage Cluster Target name Example: # vinfra node iscsi target add iqn.2014-06.com.vstorage:target1 \ --portal 172.16.24.244:3260 --node f1931be7-0a01-4977-bfef-51a392adcd94 +---------+--------------------------------------+ | Field | Value | +---------+--------------------------------------+ | task_id | c42bfbe5-7292-41c2-91cb-446795535ab9 | +---------+--------------------------------------+ This command creates a task to connect a remote iSCSI target iqn.2014-06.com.
Chapter 2. Managing Storage Cluster Target name Example: # vinfra node iscsi target delete iqn.2014-06.com.vstorage:target1 \ --node f1931be7-0a01-4977-bfef-51a392adcd94 +---------+--------------------------------------+ | Field | Value | +---------+--------------------------------------+ | task_id | c8dc74ee-86d6-4b89-8b6f-153ff1e78cb7 | +---------+--------------------------------------+ This command creates a task to disconnect a remote iSCSI target iqn.2014-06.com.
Chapter 2. Managing Storage Cluster • tier: disk tier (0, 1, 2 or 3) • journal-tier: journal (cache) disk tier (0, 1, 2 or 3) • journal-type: journal (cache) disk type (no_cache, inner_cache or external_cache) • journal-disk: journal (cache) disk ID or device name • journal-size: journal (cache) disk size, in bytes • bind-address: bind IP address for the metadata service E.g., sda:cs:tier=0,journal-type=inner_cache. This option can be used multiple times.
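Putting these options together, a cluster-creation call might look like this (a sketch: the disk layout is illustrative and assumes the command takes the cluster name plus repeated --disk options, as described above):

# vinfra cluster create stor1 \
    --disk sda:mds-system --disk sdb:cs:tier=0,journal-type=inner_cache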
| name | backend.tasks.cluster.CreateNewCluster |
| result | cluster_id: 1 |
| state | success |
| task_id | d9ca8e1d-8ac8-4459-898b-2d803efd7bc6 |
+---------+----------------------------------------+
2.6.2 vinfra cluster delete
Delete the storage cluster:
usage: vinfra cluster delete
Example:
# vinfra cluster delete
Operation waiting (timeout=600s) [Elapsed Time: 0:01:09] ... | Operation successful
This command releases all nodes from the storage cluster.
Chapter 2. Managing Storage Cluster # vinfra cluster show +-------+--------------------------------------------+ | Field | Value | +-------+--------------------------------------------+ | id | 1 | | | name | stor1 | nodes | - host: stor-4.example.com.vstoragedomain | | | id: 4b83a87d-9adf-472c-91f0-782c47b2d5f1 | | | is_installing: false | | | is_releasing: false | | | - host: stor-3.example.com.
CHAPTER 3 Managing Compute Cluster 3.1 Creating and Deleting the Compute Cluster 3.1.1 vinfra service compute create Create a compute cluster: usage: vinfra service compute create [--public-network ] [--subnet cidr=CIDR[,key=value,...]] [--cpu-model ] [--force] [--enable-metering] --nodes --public-network A physical network to connect the public virtual network to. It must include the ‘VM public’ traffic type. --subnet cidr=CIDR[,key=value,...
Chapter 3. Managing Compute Cluster • dns-server: DNS server IP address, specify multiple times to set multiple DNS servers. Example: --subnet cidr=192.168.5.0/24,dhcp=enable. --cpu-model CPU model for virtual machines. --force Skip checks for minimal hardware requirements. --enable-metering Enable metering services. --nodes A comma-separated list of node IDs or hostnames. Example: # vinfra service compute create --virtual-ip 10.94.50.244 \ --nodes 7ffa9540-5a20-41d1-b203-e3f349d62565
Chapter 3. Managing Compute Cluster | | - end_address: 10.94.129.79 | | | start_address: 10.94.129.64 | | | cidr: 10.94.0.0/16 | | | dns_servers: | | | - 10.30.0.27 | | | - 10.30.0.28 | | | enable_dhcp: true | | | gateway: 10.94.0.1 | | | nodes: | | | - 7ffa9540-5a20-41d1-b203-e3f349d62565 | | | - 02ff64ae-5800-4090-b958-18b1fe8f5060 | | | - 6e8afc28-7f71-4848-bdbe-7c5de64c5013 | | | - 37c70bfb-c289-4794-8be4-b7a40c2b6d95 | | | - 827a1f4e-56e5-404f-9113-88748c18f0c2 | | name | backend.presentation.compute.
3.2 Showing Compute Cluster Details and Overview
3.2.1 vinfra service compute show
Chapter 3. Managing Compute Cluster | | title: Red Hat Enterprise Linux 7 | | | - id: rhel8 | | | os_type: linux | | | title: Red Hat Enterprise Linux 8 | | | - id: ubuntu18.04 | | | os_type: linux | | | title: Ubuntu 18.04 | | | - id: ubuntu16.04 | | | os_type: linux | | | title: Ubuntu 16.
Chapter 3. Managing Compute Cluster 3.2.2 vinfra service compute stat Display compute cluster statistics: usage: vinfra service compute stat Example: # vinfra service compute stat +----------+-------------------------------+ | Field | Value | +----------+-------------------------------+ | compute | block_capacity: 0 | | | block_usage: 0 | | | cpu_usage: 0.0 | | | mem_total: 0 | | | mem_usage: 0 | | | vcpus: 0 | | datetime | 2018-09-11T15:50:18.
--enable-metering Enable metering services.
Example:
# vinfra service compute set --cpu-model Haswell
This command sets the default CPU model for VMs to Haswell.
3.4 Managing Compute Nodes
Chapter 3. Managing Compute Cluster # vinfra task show 4c58e63c-31b6-406a-8070-9197445ec794 +----------+---------------------------------------------------------------+ | Field | Value | +----------+---------------------------------------------------------------+ | args | [] | | | kwargs | nodes: | | - 827a1f4e-56e5-404f-9113-88748c18f0c2 | | name | backend.presentation.compute.tasks.
Chapter 3. Managing Compute Cluster # vinfra service compute node show 7ffa9540-5a20-41d1-b203-e3f349d62565 +---------------------+----------------------------------------------+ | Field | Value | +---------------------+----------------------------------------------+ | host_ip | 10.37.130.101 | | | hypervisor_hostname | stor-1.example.com.
Chapter 3. Managing Compute Cluster 3.4.5 vinfra service compute node unfence Unfence a compute node: usage: vinfra service compute node unfence Node ID or hostname Example: # vinfra service compute node unfence e6255aed-d6e7-41b2-ba90-86164c1cd9a6 Operation successful This command unfences the node with the ID e6255aed-d6e7-41b2-ba90-86164c1cd9a6. 3.4.
Chapter 3. Managing Compute Cluster # vinfra task show 3b39738c-80a6-40a6-a50d-c3c8118ed212 +---------+------------------------------------------------------------------+ | Field | Value | +---------+------------------------------------------------------------------+ | args | [] | | | kwargs | nodes: | | - 827a1f4e-56e5-404f-9113-88748c18f0c2 | | name | backend.presentation.compute.tasks.
Chapter 3. Managing Compute Cluster --ip-version Network IP version --physical-network A physical network to link to a public network --cidr Subnet range in CIDR notation Network name Example: # vinfra service compute network create myprivnet --type vxlan \ --cidr 192.128.128.0/24 --gateway 192.128.128.
Chapter 3. Managing Compute Cluster +----------------+-----------+-------+------------------+------------------------+ | 1bf2c9da-<...> | private | vxlan | 192.168.128.0/24 | - end: 192.168.128.254 | | | | | | start: 192.168.128.2 | | 3848fb5d-<...> | myprivnet | vxlan | 192.128.128.0/24 | - end: 192.128.128.254 | | | | | | start: 192.128.128.2 | | 417606ac-<...> | public | flat | 10.94.0.0/16 | - end: 10.94.129.79 | | | | | | start: 10.94.129.
usage: vinfra service compute network set [--dhcp | --no-dhcp]
       [--dns-nameserver <dns-server>] [--allocation-pool <allocation-pool>]
       [--gateway <gateway> | --no-gateway] [--name <name>] <network>
--dhcp Enable DHCP
--no-dhcp Disable DHCP
--dns-nameserver <dns-server> DNS server IP address. This option can be used multiple times.
--allocation-pool <allocation-pool> Allocation pool to create inside the network in the format: ip_addr_start-ip_addr_end.
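The output fragment below results from disabling DHCP on a network; reconstructed from the usage above, the invocation would be:

# vinfra service compute network set myprivnet --no-dhcp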
Chapter 3. Managing Compute Cluster | | enable_dhcp: false | | | gateway_ip: 192.128.128.1 | | | ip_version: 4 | | type | vxlan | +------------------+--------------------------------------+ This command disables DHCP for the private network myprivnet. 3.5.
Chapter 3. Managing Compute Cluster --disable-snat Disable source NAT on the external gateway --fixed-ip Desired IP on the external gateway --internal-interface | Specify an internal interface. This option can be used multiple times. • network: name of a private virtual network. • ip-addr: an unused IP address from the selected private network to assign to the interface; specify if the default gateway of the selected private network is in use.
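A router combining these options could be created along these lines (a sketch: the exact option names for the create command are assumptions based on the options listed above):

# vinfra service compute router create myrouter \
    --internal-interface network=myprivnet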
Chapter 3. Managing Compute Cluster # vinfra service compute router list -c id -c external_gateway_info -c name -c status +---------------------+---------------------------------+----------+--------+ | id | external_gateway_info | name | status | +---------------------+---------------------------------+----------+--------+ | b9d8b000-5d06-<...> | enable_snat: true | myrouter | ACTIVE | | ip_addresses: | | | | | | - 10.94.129.76 | | | | | network_id: 720e45bc-4225-<...
Chapter 3. Managing Compute Cluster | name | myrouter | | project_id | 894696133031439f8aaa7e4868dcbd4d | | routes | [] | | status | ACTIVE | +-----------------------+--------------------------------------------------+ This command disables SNAT on the external gateway of the virtual router myrouter. 3.6.
Chapter 3. Managing Compute Cluster router Virtual router name or ID Example: # vinfra service compute router iface list myrouter +-------------------------------------------------+-------------+-----------------+--------+ | network_id | is_external | ip_addresses | status | +-------------------------------------------------+-------------+-----------------+--------+ | 720e45bc-4225-49de-9346-26513d8d1262 (public) | True | - 10.94.129.
3.6.8 vinfra service compute router delete
Delete a virtual router:
usage: vinfra service compute router delete <router>
<router> Virtual router ID or name
Example:
# vinfra service compute router delete myrouter
Operation successful
This command deletes the virtual router myrouter.
3.7 Managing Floating IP Addresses
3.7.1 vinfra service compute floatingip create
Chapter 3. Managing Compute Cluster Example: # vinfra service compute floatingip create 720e45bc-4225-49de-9346-26513d8d1262 \ --port-id 418c8c9e-aaa5-42f2-8da7-24bfead6f28b --fixed-ip-address 192.168.128.5 +---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | attached_to | a172cb6a-1c7b-4157-9e86-035f3077646f | | description | | | fixed_ip_address | 192.168.128.5 | | floating_ip_address | 10.94.129.
Chapter 3. Managing Compute Cluster ID of the floating IP address Example: # vinfra service compute floatingip set a709f884-c43f-4a9a-a243-a340d7682ef8 \ --description "Floating IP for myvm" +---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | attached_to | a172cb6a-1c7b-4157-9e86-035f3077646f | | description | Floating IP for myvm | | fixed_ip_address | 192.168.128.5 | | floating_ip_address | 10.94.
3.8 Managing Images
3.8.1 vinfra service compute image create
Create a compute image:
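A hypothetical invocation matching the task output below (the --file option is an assumption that mirrors the image save command later in this section):

# vinfra service compute image create cirros --file cirros.qcow2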
Chapter 3. Managing Compute Cluster | task_id | 03874663-d03f-4891-a10b-64837e7faf43 | +---------+--------------------------------------+ This command creates a task to create a Cirros image from the local file and upload it to Acronis Cyber Infrastructure.
# vinfra service compute image save 4741274f-5cca-4205-8f66-a2e89fb346cc --file cirros.qcow2
Operation successful
This command downloads the default Cirros image to the local disk as cirros.qcow2.
3.9 Managing Flavors
3.9.1 vinfra service compute flavor create
Example:
# vinfra service compute flavor create myflavor --vcpus 1 --ram 3072
+-------+--------------------------------------+
| Field | Value |
+-------+--------------------------------------+
| id | 561a48ea-0c1c-4152-8b7d-e4b4af276c2d |
| name | myflavor |
| ram | 3072 |
| swap | 0 |
| vcpus | 1 |
+-------+--------------------------------------+
This command creates a flavor myflavor with 1 vCPU and 3 GB RAM.
3.9.3 vinfra service compute flavor show
# vinfra service compute flavor show myflavor
+-------+--------------------------------------+
| Field | Value |
+-------+--------------------------------------+
| id | 561a48ea-0c1c-4152-8b7d-e4b4af276c2d |
| name | myflavor |
| ram | 3072 |
| swap | 0 |
| vcpus | 1 |
+-------+--------------------------------------+
This command shows the details of the flavor myflavor.
Chapter 3. Managing Compute Cluster Example: # vinfra cluster storage-policy list +----------------+--------------+------+--------------+----------------+ | id | name | tier | redundancy | failure_domain | +----------------+--------------+------+--------------+----------------+ | 2199e71e-<...> | mystorpolicy | 3 | encoding=3+2 | host | | 4274d6fd-<...
Chapter 3. Managing Compute Cluster 3.10.5 vinfra cluster storage-policy delete The default policy cannot be deleted. Remove an existing storage policy: usage: vinfra cluster storage-policy delete Storage policy ID or name Example: # vinfra cluster storage-policy delete mystorpolicy Operation successful This command deletes the storage policy mystorpolicy. 3.11 Managing Volumes 3.11.
Chapter 3. Managing Compute Cluster | os-vol-mig-status-attr:name_id | | | project_id | 72a5db3a033c403a86756021e601ef34 | | replication_status | | | size | 8 | | snapshot_id | | | source_volid | | | status | available | | storage_policy_name | default | | updated_at | 2018-09-12T12:30:33.
Chapter 3. Managing Compute Cluster # vinfra service compute volume set myvolume --storage-policy mystorpolicy +--------------------------------+-----------------------------------------------------+ | Field | Value | +--------------------------------+-----------------------------------------------------+ | attachments | [] | | nova | availability_zone | | bootable | False | | consistencygroup_id | | | created_at | 2018-09-12T12:30:12.
Chapter 3. Managing Compute Cluster 3.11.6 vinfra service compute volume delete Delete a compute volume: usage: vinfra service compute volume delete Volume ID or name Example: # vinfra service compute volume delete myvolume2 Operation successful This command deletes the volume myvolume2. 3.12 Managing Volume Snapshots 3.12.
Chapter 3. Managing Compute Cluster | id | 3fdfe5d6-8bd2-4bf5-8599-a9cef50e5b71 | | metadata | {} | | name | mysnapshot | | project_id | fd0ae61496d04ef6bb637bc3167b7eaf | | size | 8 | | status | creating | | volume_id | 92dc3bd7-713d-42bf-83cd-4de40c24fed9 | +-------------+--------------------------------------+ This command initiates creation of a snapshot mysnapshot of the volume myvolume. 3.12.
Chapter 3. Managing Compute Cluster | description | | | id | 3fdfe5d6-8bd2-4bf5-8599-a9cef50e5b71 | | metadata | {} | | name | mysnapshot | | project_id | fd0ae61496d04ef6bb637bc3167b7eaf | | size | 8 | | status | available | | volume_id | 92dc3bd7-713d-42bf-83cd-4de40c24fed9 | +-------------+--------------------------------------+ This command shows the details for the volume snapshot mysnapshot. 3.12.
Chapter 3. Managing Compute Cluster 3.12.5 vinfra service compute volume snapshot upload-to-image Create a compute image from a compute volume snapshot: usage: vinfra service compute volume snapshot upload-to-image [--name ] --name Image name Volume snapshot ID or name Example: # vinfra service compute volume snapshot upload-to-image --name myvm-image \ mynewsnapshot +------------------+--------------------------------------+ | Field | Value | +-------------
Chapter 3. Managing Compute Cluster usage: vinfra service compute volume snapshot revert Volume snapshot ID or name Example: # vinfra service compute volume snapshot revert mynewsnapshot +-------------+--------------------------------------+ | Field | Value | +-------------+--------------------------------------+ | created_at | 2019-04-30T13:12:54.
+-------------+--------------------------------------+
This command resets the state of the volume snapshot mynewsnapshot.
3.12.8 vinfra service compute volume snapshot delete
Delete a volume snapshot:
usage: vinfra service compute volume snapshot delete <volume-snapshot>
<volume-snapshot> Volume snapshot ID or name
Example:
# vinfra service compute volume snapshot delete mynewsnapshot
Operation successful
This command deletes the volume snapshot mynewsnapshot.
3.13 Managing Compute SSH Keys
Chapter 3. Managing Compute Cluster +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | created_at | 2019-04-25T13:41:14.241736+00:00 | | description | public key | | name | publickey | +-------------+----------------------------------+ This command creates a public SSH key publickey. 3.13.
Chapter 3. Managing Compute Cluster | public_key_fingerprint | 1a:fb:de:d8:1e:0a:84:30:fc:ff:e4:fd:89:e7:96:a9 | +------------------------+-------------------------------------------------+ This command shows the details of the SSH key publickey. 3.13.4 vinfra service compute key delete Delete a compute SSH key: usage: vinfra service compute key delete SSH key name Example: # vinfra service compute key delete publickey Operation successful This command deletes the SSH key publickey.
Chapter 3. Managing Compute Cluster --user-data User data file --key-name Key pair to inject --config-drive Use an ephemeral drive --count If count is specified and greater than 1, the name argument is treated as a naming pattern. --ha-enabled {true,false} Enable or disable HA for the compute server --network Create a compute server with a specified network. Specify this option multiple times to create multiple networks.
Chapter 3. Managing Compute Cluster • type: block device type (disk or cdrom) • rm: remove block device on compute server termination (yes or no) • storage-policy: block device storage policy --flavor Flavor ID or name A new name for the compute server Example: # vinfra service compute server create myvm \ --network id=private,fixed-ip=192.168.128.100 \ --volume source=image,id=cirros,size=1 --flavor tiny +--------------+--------------------------------------+ | Field | Value |
Chapter 3. Managing Compute Cluster 3.14.2 vinfra service compute server list List compute servers: usage: vinfra service compute server list Example: # vinfra service compute server list +--------------------------------------+------+--------+------------------------+ | id | name | status | host | +--------------------------------------+------+--------+------------------------+ | 8cd29296-8bee-4efb-828d-0e522d816c6e | myvm | ACTIVE | node001.
Chapter 3. Managing Compute Cluster | name | myvm | | networks | - id: 79b3da71-c6a2-49e8-97f8-9431a065bed7 | | | ipam_enabled: true | | | ips: | | | - 192.168.128.
Chapter 3. Managing Compute Cluster | orig_hostname | node001 | | power_state | RUNNING | | project_id | b4267de6fd0c442da99542cd20f5932c | | status | ACTIVE | | task_state | | | updated | 2019-05-29T11:24:21Z | | user_data | | | volumes | - delete_on_termination: false | | | id: edd3df0a-95f5-4892-9053-2793a3976f94 | +---------------+--------------------------------------------+ This command adds a description to the virtual machine myvm and disables HA for it. 3.14.
Chapter 3. Managing Compute Cluster | spoofing<...> | False | +---------------+--------------------------------------+ This command attaches the private network myprivnet to the virtual machine myvm. 3.14.
Chapter 3. Managing Compute Cluster This command detaches the network interface with the ID 471e37fd-13ae-4b8f-b70c-90ac02cc4386 from the VM with the ID 6c80b07f-da46-4a8a-89a4-eecb8faceb27. 3.14.
Chapter 3. Managing Compute Cluster | e4cb5363-1fb2-41f5-b24b-18f98a388cba | /dev/vdb | | b325cc6e-8de1-4b6c-9807-5a497e3da7e3 | /dev/vda | +--------------------------------------+----------+ This command lists the volumes attached to the virtual machine myvm. 3.14.
Chapter 3. Managing Compute Cluster Example: # vinfra service compute server volume detach e4cb5363-1fb2-41f5-b24b-18f98a388cba \ --server 871fef54-519b-4111-b18d-d2039e2410a8 Operation successful This command detaches the volume with the ID e4cb5363-1fb2-41f5-b24b-18f98a388cba from the VM with the ID 871fef54-519b-4111-b18d-d2039e2410a8. 3.14.
--node e6255aed-d6e7-41b2-ba90-86164c1cd9a6
Operation successful
This command starts migration of the VM with the ID 6c80b07f-da46-4a8a-89a4-eecb8faceb27 to the compute node with the ID e6255aed-d6e7-41b2-ba90-86164c1cd9a6.
3.14.17 vinfra service compute server pause
Pause a compute server:
usage: vinfra service compute server pause <server>
<server> Compute server ID or name
Example:
# vinfra service compute server pause myvm
This command pauses the running virtual machine myvm.
Chapter 3. Managing Compute Cluster # vinfra service compute server suspend myvm Operation successful This command suspends the running virtual machine myvm. 3.14.20 vinfra service compute server resume Resume a compute server: usage: vinfra service compute server resume Compute server ID or name Example: # vinfra service compute server resume myvm Operation successful This command resumes the suspended virtual machine myvm. 3.14.
3.14.22 vinfra service compute server reset-state
Reset compute server state:
usage: vinfra service compute server reset-state [--state-error] <server>
--state-error Reset server to 'ERROR' state
<server> Compute server ID or name
Example:
# vinfra service compute server reset-state myvm
Operation successful
This command resets the transitional state of the virtual machine myvm to the previous one.
3.14.24 vinfra service compute server shelve
Shelve a compute server:
usage: vinfra service compute server shelve <server>
<server> Compute server ID or name
Example:
# vinfra service compute server shelve myvm
This command unbinds the virtual machine myvm from the node it is hosted on and releases its reserved resources such as CPU and RAM.
Chapter 3. Managing Compute Cluster # vinfra service compute server evacuate myvm Operation successful This command evacuates the stopped VM myvm from its node to another, healthy compute node. 3.14.27 vinfra service compute server delete Delete a compute server: usage: vinfra service compute server delete Compute server ID or name Example: # vinfra service compute server delete myvm Operation successful This command deletes the virtual machine myvm.
CHAPTER 4
Managing General Settings
4.1 Managing Licenses
4.1.1 vinfra cluster license load
Load a license from a key:
usage: vinfra cluster license load --key <license-key> --type <type>
--key <license-key> License key to register. Specify this option multiple times to register multiple keys.
--type <type> License type (prolong or upgrade)
Example:
# vinfra cluster license load --key A38600ML-3P6W746P-RZSK58BV-Y9ZH05Q5-2X7J48J6-KVRXRYPY-\
Z2FK7ZQ6-Y7FGZNYF --type upgrade
+------------+----
4.2 Managing Domains
4.2.1 vinfra domain create
Create a domain:
usage: vinfra domain create <domain-name>
Example:
# vinfra domain create mydomain
+----------------+----------------------------------+
| Field | Value |
+----------------+----------------------------------+
| description | |
| enabled | True |
| id | ed408d00561c4a398f933c29e87cadab |
| name | mydomain |
| projects_count | 0 |
+----------------+----------------------------------+
This command creates and enables the domain mydomain.
Chapter 4. Managing General Settings | description | | | enabled | True | | id | 24986479ee3246048d3ef2a065ea99f5 | | name | mydomain | | projects_count | 0 | +----------------+----------------------------------+ This command shows the details of the domain mydomain. 4.2.
4.2.5 vinfra domain delete
Delete a domain:
usage: vinfra domain delete <domain>
<domain> Domain ID or name
Example:
# vinfra domain delete mydomain
Operation successful
This command deletes the domain mydomain.
4.3 Managing Domain Users
4.3.1 vinfra domain user create
Create a domain user:
Chapter 4. Managing General Settings --domain-permissions A comma-separated list of domain permissions: • domain_admin: can manage virtual objects in all projects within the assigned domain as well as project and user assignment in the self-service panel. • image_upload: can upload images.
# vinfra domain user create --domain mydomain --name myuser \
    --domain-permissions domain_admin
Password:
+--------------------+----------------------------------+
| Field | Value |
+--------------------+----------------------------------+
| assigned_projects | [] |
| description | |
| domain_permissions | - domain_admin |
| email | |
| enabled | True |
| id | a9c67c6acf1f4df1818fdeeee0b4bd5e |
| name | myuser |
| role | domain_admin |
| system_permissions | [] |
+--------------------+----------------------------------+
Chapter 4. Managing General Settings 4.3.
Chapter 4. Managing General Settings --email User email --description User description --assign Assign a user to a project with one or more permission sets. Specify this option multiple times to assign the user to multiple projects.
Chapter 4. Managing General Settings --enable Enable user --disable Disable user --name User name --domain Domain name or ID User ID or name Example: # vinfra domain user set myuser --domain mydomain \ --assign myproject project_admin +--------------------+----------------------------------+ | Field | Value | +--------------------+----------------------------------+ | assigned_projects | [] | | description | | | domain_permissions | - domain_admin | | email | | | enabled | True | | i
Chapter 4. Managing General Settings User ID or name Example: # vinfra domain user delete myuser --domain mydomain Operation successful This command deletes the user myuser from the domain mydomain. 4.4 Managing Domain Projects 4.4.
Chapter 4. Managing General Settings | id | d1c4d6198fb940e6b971cf306571ebbd | | name | myproject | +-------------+----------------------------------+ This command creates and enables the project myproject within the domain mydomain and adds a description to it. 4.4.
Chapter 4. Managing General Settings +---------------+----------------------------------+ | description | A custom project | | domain_id | 9f7e68938fe946a2a862e360bbe40d98 | | enabled | True | | id | d1c4d6198fb940e6b971cf306571ebbd | | members_count | 0 | | name | myproject | +---------------+----------------------------------+ This command shows the details of the project myproject from the domain mydomain. 4.4.
Chapter 4. Managing General Settings | name | myproject | +-------------+----------------------------------+ This command disables the project myproject from the domain mydomain. 4.4.
# vinfra domain project user remove myproject --domain mydomain --user myuser
Operation successful
This command removes the user myuser from the project myproject within the domain mydomain.
Chapter 4. Managing General Settings +---------+--------------------------------------+ This command creates a task to add a public SSH key from the file mykey.pub to the list of trusted keys.
Chapter 4. Managing General Settings | id | key | label | +--------------------------------------+---------------------------------------+------------------+ | 8ccf7f1b-6a53-4d74-99ce-c410d51a9921 | ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACA | user@example.
Chapter 4. Managing General Settings # vinfra task show 053802b2-b4c3-454d-89e2-6d6d312dd2ed +---------+-------------------------------------------------------+ | Field | Value | +---------+-------------------------------------------------------+ | args | - admin | | - 1 | | | | - 8ccf7f1b-6a53-4d74-99ce-c410d51a9921 | | kwargs | {} | | name | backend.presentation.nodes.ssh.tasks.
# vinfra cluster settings dns set --nameservers 8.8.8.8
+------------------+---------------+
| Field | Value |
+------------------+---------------+
| dhcp_nameservers | - 10.10.0.10 |
| | - 10.10.0.11 |
| | - 10.37.130.2 |
| nameservers | - 8.8.8.8 |
+------------------+---------------+
This command sets the external DNS server to 8.8.8.8.
4.7 Configuring Management Node High Availability
4.7.1 vinfra cluster ha create
Create a management node HA configuration:
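The task output below could result from a call along these lines (a sketch: the option names mirror those of vinfra cluster ha update in the next section, and the node IDs are taken from the output):

# vinfra cluster ha create --virtual-ip <network>:<ip> \
    --nodes 94d58604-6f30-4339-8578-adb7903b7277,f59dabdb-bd1c-4944-8af2-26b8fe9ff8d4,7d7d37b8-4c06-4f1a-b3a6-4b54257d70ce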
Chapter 4. Managing General Settings | Field | Value | +---------+--------------------------------------+ | task_id | 80a00e55-335d-4d41-bac4-5fee4791d423 | +---------+--------------------------------------+ This command creates a task to create a management node HA cluster from nodes with the IDs 94d58604-6f30-4339-8578-adb7903b7277, f59dabdb-bd1c-4944-8af2-26b8fe9ff8d4, and 7d7d37b8-4c06-4f1a-b3a6-4b54257d70ce.
+---------+-------------------------------------------------------+
4.7.2 vinfra cluster ha update
Update the HA configuration:
usage: vinfra cluster ha update [--virtual-ip <virtual-ip>] [--nodes <nodes>] [--force]
--virtual-ip <virtual-ip> HA configuration mapping in the format:
• network: network to include in the HA configuration (must include at least one of these traffic types: Internal management, Admin panel, Self-service panel, or Compute API).
Chapter 4. Managing General Settings | name | backend.presentation.ha.tasks.UpdateHaConfigTask | | result | compute_task_id: 84994caf-3a02-43ea-b904-48632f0379c7 | | | ha_cluster_location: | | | - https://10.94.129.79:8888 | | | nodes: | | | - id: f59dabdb-bd1c-4944-8af2-26b8fe9ff8d4 | | | ipaddr: 10.37.130.134 | | | is_primary: true | | | - id: 4b83a87d-9adf-472c-91f0-782c47b2d5f1 | | | ipaddr: 10.37.130.127 | | | is_primary: false | | | - id: 94d58604-6f30-4339-8578-adb7903b7277 | | | ipaddr: 10.37.130.
Chapter 4. Managing General Settings | | - ip: 10.94.129.79 | | | roles_set: 5f0adc1d-c10f-46c1-b7b8-dd1aacab613b | +-----------------------+---------------------------------------------------+ This command shows the management node HA cluster configuration. 4.7.
Chapter 4. Managing General Settings # vinfra cluster settings encryption show +-------+-------+ | Field | Value | +-------+-------+ | tier0 | False | | tier1 | False | | tier2 | False | | tier3 | False | +-------+-------+ This command shows encryption status of each storage tier. 4.8.
Chapter 4.
Chapter 4. Managing General Settings | datetime | 2018-08-30T18:02:14.855302+00:00 | | details | host: stor-1.example.com.vstoragedomain. | | enabled | True | | group | node | | host | stor-1.example.com.vstoragedomain. | | id | 1 | | message | Network interface "eth1" on node "stor-1.example.com.vstoragedomain.
Chapter 4. Managing General Settings | type | Network warning | +--------------+------------------------------------------------------------------------------+ This command deletes the alert with the ID 1 from the log. 4.10 Managing Audit Log 4.10.
Chapter 4. Managing General Settings +----+----------+------------------------+---------------------------+---------------------+ This command lists the audit log entries. 4.10.
Chapter 4. Managing General Settings --description Problem description --send Generate the problem report archive and send it to the technical support team Example: # vinfra cluster problem-report --email test@example.
CHAPTER 5
Monitoring Storage Cluster
Monitoring the storage cluster is very important because it allows you to check the status and health of all computers in the cluster and react as necessary. The main command for monitoring is vstorage -c <cluster_name> top. It invokes a text user interface that you can control with keys (press h for help).
5.1 Monitoring General Storage Cluster Parameters
For example:
# vstorage -c stor1 top
Chapter 5. Monitoring Storage Cluster The command above shows detailed information about the stor1 cluster. The general parameters (highlighted in red) are as follows. Cluster Overall status of the cluster: Healthy All chunk servers in the cluster are active. Unknown There is not enough information about the cluster state (e.g., because the master MDS server was elected a while ago). Degraded Some of the chunk servers in the cluster are inactive.
Chapter 5. Monitoring Storage Cluster Space Amount of disk space in the cluster: Free Free physical disk space in the cluster. Allocatable Amount of logical disk space available to clients. Allocatable disk space is calculated on the basis of the current replication parameters and free disk space on chunk servers. It may also be limited by license. Note: For more information on monitoring and understanding disk space usage in clusters, see Understanding Disk Space Usage (page 140).
Chapter 5. Monitoring Storage Cluster IO Disk IO activity in the cluster: • Speed of read and write I/O operations, in bytes per second. • Number of read and write I/O operations per second. 5.2 Monitoring Metadata Servers MDS servers are a critical component of any storage cluster, and monitoring the health and state of MDS servers is a crucial task. To monitor MDS servers, use the vstorage -c top command. For example: The command above shows detailed information about the stor1 cluster.
Chapter 5. Monitoring Storage Cluster %CTIME Total time the MDS server spent writing to the local journal. COMMITS Local journal commit rate. %CPU MDS server activity time. MEM Amount of physical memory the MDS server uses. UPTIME Time elapsed since the last MDS server start. HOST MDS server hostname or IP address. 5.3 Monitoring Chunk Servers By monitoring chunk servers, you can keep track of the disk space available in the storage cluster.
Chapter 5. Monitoring Storage Cluster The command above shows detailed information about the stor1 cluster. The monitoring parameters for chunk servers (highlighted in red) are as follows: CSID Chunk server identifier (ID). STATUS Chunk server status: active The chunk server is up and running. inactive The chunk server is temporarily unavailable. A chunk server is marked as inactive during its first 5 minutes of inactivity. offline The chunk server is inactive for more than 5 minutes.
Chapter 5. Monitoring Storage Cluster SPACE Total amount of disk space on the chunk server. AVAIL Available disk space on the chunk server. REPLICAS Number of replicas stored on the chunk server. UNIQUE Number of chunks that do not have replicas. IOWAIT Percentage of time spent waiting for I/O operations being served. IOLAT Average/maximum time, in milliseconds, the client needed to complete a single IO operation during the last 20 seconds. QDEPTH Average chunk server I/O queue depth.
Chapter 5. Monitoring Storage Cluster For example: # vstorage -c stor1 top connected to MDS#1 Cluster 'stor1': healthy Space: [OK] allocatable 180GB of 200GB, free 1.6TB of 1.7TB ... In this command output: • 1.7TB is the total disk space in the stor1 cluster. The total disk space is calculated on the basis of used and free disk space on all partitions in the cluster.
5.3.1.1 Understanding Allocatable Disk Space
When monitoring disk space information in the cluster, you also need to pay attention to the space reported by the vstorage top utility as allocatable. Allocatable space is the amount of disk space that is free and can be used for storing user data. Once this space runs out, no data can be written to the cluster. Calculation of allocatable disk space can be illustrated by the following example:
• The cluster has 3 chunk servers.
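A worked sketch of the arithmetic, assuming (hypothetically) that the three chunk servers have 200 GB, 500 GB, and 1 TB of free space (1.7 TB in total, matching the top output above):
• With replicas=3, every chunk needs a replica on each of the three servers, so allocatable space is capped by the smallest server: min(200, 500, 1000) = 200 GB.
• With replicas=2, each chunk needs two distinct servers, so allocatable space is min(total/2, total - largest) = min(850, 700) = 700 GB, which matches the figure reported once the replication factor is lowered below.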
Chapter 5. Monitoring Storage Cluster new chunk server is added or the replication factor is decreased. If the replication factor changes to 2, the vstorage top command will report the available disk space as 700 GB: # vstorage set-attr -R /mnt/vstorage replicas=2:1 # vstorage -c stor1 top connected to MDS#1 Cluster 'stor1': healthy Space: [OK] allocatable 680GB of 700GB, free 1.6TB of 1.7TB ...
Chapter 5. Monitoring Storage Cluster 5.3.2 Exploring Chunk States The following is a list of all possible chunk states. Healthy Number and percentage of chunks that have enough active replicas. The normal state of chunks. Offline Number and percentage of chunks all replicas of which are offline. Such chunks are completely inaccessible for the cluster and cannot be replicated, read from or written to. All requests to an offline chunk are frozen until a CS that stores that chunk’s replica goes online.
Chapter 5. Monitoring Storage Cluster Urgent Number and percentage of chunks which are degraded and have non-identical replicas. Replicas of a degraded chunk may become non-identical if some of them are not accessible during a write operation. As a result, some replicas happen to have the new data while some still have the old data. The latter are dropped by the cluster as fast as possible.
Chapter 5. Monitoring Storage Cluster The command above shows detailed information about the stor1 cluster. The monitoring parameters for clients (highlighted in red) are as follows. CLID Client identifier (ID). LEASES Average number of files opened for reading/writing by the client and not yet closed, for the last 20 seconds. READ Average rate, in bytes per second, at which the client reads data, for the last 20 seconds.
Chapter 5. Monitoring Storage Cluster IOLAT Average/maximum time, in milliseconds, the client needed to complete a single IO operation, for the last 20 seconds. HOST Client hostname or IP address. 5.5 Monitoring Physical Disks The S.M.A.R.T. status of physical disks is monitored by the smartctl tool installed along with Acronis Cyber Infrastructure. For it to work, S.M.A.R.T. functionality must be enabled in the node’s BIOS. The tool is run every 10 minutes as a cron job also added during installation.
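To check a disk manually, you can invoke smartctl directly (a standard smartmontools command; the device name is illustrative):

# smartctl -H /dev/sda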
Chapter 5. Monitoring Storage Cluster The disks table shows the following parameters: DISK Disk name assigned by operating system. SMART Disk’s S.M.A.R.T. status: OK The disk is healthy. Warn The disk is in pre-failure condition. Pre-failure condition means that at least one of these S.M.A.R.T. counters is nonzero: • Reallocated Sector Count • Reallocated Event Count • Current Pending Sector Count • Offline Uncorrectable TEMP Disk temperature in Celsius. CAPACITY Disk capacity. SERIAL Disk serial number.
To disable S.M.A.R.T. disk monitoring, delete the corresponding cron job.
5.6 Monitoring Event Logs
You can use the vstorage -c <cluster_name> top utility to monitor significant events happening in the storage cluster. For example:
# vstorage -c stor1 top
The command above shows the latest events in the stor1 cluster. The information on events (highlighted in red) is given in a table with the following columns:
TIME Time of event.
SYS Component of the cluster where the event happened (e.g., MDS).
Table 5.6.1: Basic events

Event: MDS#N (<addr>:<port>) lags behind for more than 1000 rounds
Severity: JRN err
Description: Generated by the MDS master server when it detects that MDS#N is stale. This message may indicate that some MDS server is very slow and lags behind.

Event: MDS#N (<addr>:<port>) didn't accept commits for M sec
Severity: JRN err
Description: Generated by the MDS master server if MDS#N did not accept commits for M seconds. MDS#N gets marked as stale. This message may indicate that some MDS server is very slow and lags behind.
Event: The cluster is degraded with N active, M inactive, K offline CS
Severity: MDS warn
Description: Generated when the cluster status changes to degraded or when a new MDS master server is elected. This message indicates that some chunk servers in the cluster are:
• inactive, i.e., do not send any registration messages, or
• offline, i.e., have been inactive for longer than mds.wd.offline_tout, which is 5 min by default.
Event: CS# has reported hard error on path
Severity: MDS warn
Description: Generated when the chunk server CS# detects disk data corruption. You are recommended to check the hardware for errors and replace corrupted disks as soon as possible.

Event: CS# has not registered during the last T sec and is marked as inactive
Severity: MDS warn
Description: Generated when the chunk server CS# has been unavailable for a while.
MDS nodes: 3 of 3, epoch uptime: 20d 0h
CS nodes:  3 of 3 (3 avail, 0 inactive, 0 offline)
License:   ACTIVE (expiration: 01/10/2021, capacity: 10TB, used: 20.3GB)
Replication: 3 norm, 2 limit
Chunks: [Warning] 187 (57%) healthy, 0 (0%) standby, 0 (0%) degraded, 135 (41%) urgent,
        0 (0%) blocked, 0 (0%) pending, 0 (0%) offline, 1 (0%) replicating,
        0 (0%) overcommitted, 0 (0%) deleting, 0 (0%) void
IO: read 0B/s ( 0ops/s), write 106KB/s ( 7ops/s)
...
CHAPTER 6

Accessing Storage Clusters via iSCSI

Acronis Cyber Infrastructure allows you to export cluster disk space to external operating systems and third-party virtualization solutions in the form of LUN block devices over iSCSI in a SAN-like manner.

Note: Acronis Cyber Infrastructure is certified by VMware for iSCSI scenarios as stated in the VMware Compatibility Guide.

In Acronis Cyber Infrastructure, you can create groups of redundant targets running on different storage nodes.
[Figure: an initiator (VM) accesses two iSCSI disks, LUN 0 and LUN 1, through a target group. Target 1 runs on Node 1 and Target 2 on Node 2; each target exposes two portals (IP:port pairs) on two separate network switches via eth0 and eth1. For each LUN, one path is Active/Optimized and the other is Standby, and each LUN is backed by a volume (Volume 1, Volume 2) on the cluster storage.]

The figure shows two volumes located on redundant storage provided by Acronis Cyber Infrastructure.
target group. See Managing Traffic Types and Networks (page 5).
2. Create a target group on chosen nodes, providing details for target WWNs and portals. Targets will be created automatically and added to the group. Target portals will be created on specified network interfaces and ports. See Creating Target Groups (page 157).
3. Create volumes and attach them to the target group. See Managing Volumes (page 76).
4.
{
    "ClusterName": "cluster1",
    "VolumesRoot": "/vols/iscsi/vols"
}

where ClusterName is the name of your storage cluster and VolumesRoot is the path to the directory for iSCSI volumes. You can also set these optional parameters:
• "PcsLogLevel": the log level, ranging from 1 (log errors only) to 7 (log everything, including debug messages).
• "LogPath": the path to log files; the default is "/var/log/vstorage" (the log will be saved to vstorage-target.log).
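Put together, a configuration file that also sets the optional parameters might look as follows (the PcsLogLevel value of 4 is an arbitrary illustration):

{
    "ClusterName": "cluster1",
    "VolumesRoot": "/vols/iscsi/vols",
    "PcsLogLevel": 4,
    "LogPath": "/var/log/vstorage"
}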
[
    {
        "NodeId": "01baeabee73e4a0d",
        "WWN": "iqn.2013-10.com.vstorage:test1",
        "Portals": [
            { "Addr": "192.168.10.11", "Port": 3025 }
        ]
    },
    {
        "NodeId": "0d90158e9d2444e1",
        "WWN": "iqn.2013-10.com.vstorage:test2",
        "Portals": [
            { "Addr": "192.168.10.12", "Port": 3025 }
        ]
    },
    {
        "NodeId": "a9eca47661a64031",
        "WWN": "iqn.2013-10.com.vstorage:test3",
        "Portals": [
            { "Addr": "192.168.10.13", "Port": 3025 }
        ]
    }
]
tg-create command. For example, to create an iSCSI target group, run:

# vstorage-target tg-create -name tg1 -targets tg1.json -type ISCSI
{
    "Id": "3d8364f5-b830-4211-85af-3a19d30ebac4"
}

When you run the command, targets are created on the nodes specified in the configuration file and joined to the target group, and target portals are created on the specified network interfaces and ports.

6.3.3 Listing Target Groups
        "Name": "tg2",
        "Type": "ISCSI",
        "Running": false,
        "ACL": false,
        "ChapAuth": false,
        "CHAP": {},
        "Mode": 0
    }
]

To print complete information about all target groups, use vstorage-target tg-list -all.

6.3.4 Printing Details of Target Groups

To print the details of a specific group, use the vstorage-target tg-status command.
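For example, using the group ID returned by tg-create earlier (the -tg option name is an assumption modeled on the other commands in this chapter):

# vstorage-target tg-status -tg 3d8364f5-b830-4211-85af-3a19d30ebac4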
6.4 Managing iSCSI Volumes

This section describes how to create and manage volumes to be exported via iSCSI.

6.4.1 Creating iSCSI Volumes

To create a volume, use the vstorage-target vol-create command. For example:

# vstorage-target vol-create -name vol1 -size 1G \
-vstorage-attr "replicas=3:2 failure-domain=host tier=0"
{
    "Id": "3277153b-5296-49c5-9b66-4c200ddb343d"
}

This command creates a 1 GB volume named vol1 on storage tier 0 with 3:2 replication and host failure domain.
6.4.4 Viewing and Setting iSCSI Volume Parameters

To view and set volume parameters, e.g. redundancy mode, failure domain, or tier, use the commands vstorage-target vol-attr get and vstorage-target vol-attr set, respectively.
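A sketch of both commands, reusing the volume ID from the vol-create example above; the -id and -vstorage-attr options are assumptions modeled on the other vol-* commands in this section:

# vstorage-target vol-attr get -id 3277153b-5296-49c5-9b66-4c200ddb343d
# vstorage-target vol-attr set -id 3277153b-5296-49c5-9b66-4c200ddb343d \
-vstorage-attr "tier=1"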
6.4.8 Deleting iSCSI Volumes

To delete a volume, use the vstorage-target vol-delete command. You cannot delete volumes attached to target groups. For example:

# vstorage-target vol-delete -id d5cc3c13-cfb4-4890-a20d-fb80e2a56278

This command deletes the volume with the ID d5cc3c13-cfb4-4890-a20d-fb80e2a56278.

6.5 Managing Nodes

This section describes how to manage nodes in relation to target groups.

6.5.
6.5.2 Setting Node Status

To enable or disable a node in a specific target group or all target groups at once, use the vstorage-target node-set command. Enabling a node starts its targets, while disabling a node stops its targets and moves the PREFERRED bit to another node.
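An invocation might look like the following sketch; the -node, -tg, and -disable options are assumptions, so check the command's built-in help for the actual syntax:

# vstorage-target node-set -node 01baeabee73e4a0d \
-tg 3d8364f5-b830-4211-85af-3a19d30ebac4 -disable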
6.6 Managing Targets and Portals

This section describes how to create and manage targets. The optimal way is to create a single target per node if you use the iSCSI protocol, and one target per FC port if you use the FC protocol.

6.6.1 Creating Targets

Typically, targets are created automatically when you create target groups or add nodes to them.
# vstorage-target target-portal del -wwn iqn.2013-10.com.vstorage:test2 -addr 10.94.104.90 \
-tg 3d8364f5-b830-4211-85af-3a19d30ebac4

This command deletes the portal created earlier.

6.6.3 Deleting Targets

To delete a target from a target group (as well as the node it is on), use the vstorage-target target-delete command. For example:

# vstorage-target target-delete -tg 3d8364f5-b830-4211-85af-3a19d30ebac4 \
-wwn iqn.2013-10.com.
6.7.2 Changing CHAP Account Details

To change the password or description of a CHAP account, use the vstorage-target account-set command. For example:

# vstorage-target account-set description -user user1 -desc "A new description"
# vstorage-target account-set password -user user1
Enter Password:

6.7.3 Assigning CHAP Accounts to Target Groups

To assign a CHAP account to a target group, use the vstorage-target tg-chap command.
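A sketch of such an assignment, reusing identifiers from earlier examples; the -tg and -user options are assumptions modeled on the other commands in this chapter:

# vstorage-target tg-chap -tg ee764519-80e3-406e-b637-8d63712badf1 -user user1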
6.8.1 Creating LUN Views

To create a LUN view for an initiator, use the commands vstorage-target tg-initiator add or vstorage-target view-add. The former command adds an initiator to the target group's ACL and creates a view for it. The latter command is used to add views for initiators that are already on the ACL. For example:

# vstorage-target tg-initiator add -alias initiator2 -luns 0,1 \
-tg ee764519-80e3-406e-b637-8d63712badf1 -wwn iqn.1994-05.com.
6.8.4 Deleting LUN Views

To delete a LUN view for an initiator, use the vstorage-target view-del command.

# vstorage-target view-del -lun 1 -tg ee764519-80e3-406e-b637-8d63712badf1 \
-wwn iqn.1994-05.com.redhat:1535946874d

This command deletes the view for LUN 1 for the initiator with the IQN iqn.1994-05.com.redhat:1535946874d.
CHAPTER 7

Advanced Tasks

This chapter describes miscellaneous configuration and management tasks that you may need to perform.

7.1 Updating Kernel with ReadyKernel

ReadyKernel is a kpatch-based service shipped with Acronis Cyber Infrastructure and available out of the box on physical servers with active licenses. ReadyKernel offers a more convenient, rebootless alternative to updating the kernel the usual way, eliminating the need to wait for scheduled server downtime to apply critical security updates.
7.1.1 Installing ReadyKernel Patches Automatically

ReadyKernel is enabled by default and checks for new patches daily at 12:00 server time by means of a cron.d script. If a patch is available, ReadyKernel will download, install, and load it for the current kernel.
# readykernel load-replace

• If no older patches are loaded, load the latest patch by running:

# readykernel load

To unload the patch from the current kernel, run:

# readykernel unload

7.1.2.
7.1.2.5 Disabling Loading of ReadyKernel Patches on Boot

If for some reason you do not want ReadyKernel patches to be applied at boot time, run the following command:

# readykernel autoload disable

To re-enable automatic loading of ReadyKernel patches on boot, run:

# readykernel autoload enable

7.1.2.6 Managing ReadyKernel Logs

ReadyKernel logs event information in /var/log/messages and /var/log/kpatch.log.
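To inspect recent ReadyKernel activity, you can search these files with standard tools; a sketch (the search pattern is illustrative):

# grep -i readykernel /var/log/messages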
Next, you need to attach the created image to a VM and run the guest tools installer. The steps differ for new and already existing VMs and are described in the following subsections.

7.2.1.1 Installing Guest Tools in New VMs

When you create a new VM, you can attach the guest tools image to it and install the guest tools after installing the operating system. To do this, perform the following steps on a compute node:
1. Create a new VM with the guest tools image.
7.2.1.2 Installing Guest Tools in Existing VMs

The steps you need to perform to install the guest tools in existing VMs depend on the guest OS type. They are described in the following subsections.

7.2.1.2.1 Installing Guest Tools in Existing Linux VMs

To install the guest tools in an existing Linux virtual machine, do the following:
1. Create a volume from the uploaded guest tools image.
# vinfra service compute server stop win10

2. Convert its system volume to a template image. You will need the volume ID that you can obtain with vinfra service compute volume list. For example, to use the win10 VM boot volume, run:

# vinfra service compute volume list | grep win10
| 7116d747-a1e1-4200-bd4a-25cc51ef006c | win10/windows_10_pro_x64.iso/Boot volume   | <...> |
| ef2f1979-7811-4df6-9955-07e2fc942858 | win10/windows_10_pro_x64.iso/CD/DVD volume | <...
7.2.2 Uninstalling Guest Tools

The steps you need to perform to remove the guest tools depend on the guest OS and are described in the following sections.

7.2.2.1 Uninstalling Guest Tools from Linux VMs

To uninstall the guest tools from a Linux guest, log in to the virtual machine and do as follows:
1. Remove the packages:
   1. On RPM-based systems (CentOS and others):

# yum remove dkms-vzvirtio_balloon prl_nettool qemu-guest-agent-vz vz-guest-udev

   2.
2. Uninstall QEMU guest agent and guest tools from the list of installed applications.
3. Stop and delete Guest Tools Monitor:

> sc stop VzGuestToolsMonitor
> sc delete VzGuestToolsMonitor

4. Unregister Guest Tools Monitor from Event Log:

> reg delete HKLM\SYSTEM\CurrentControlSet\services\eventlog\Application\VzGuestToolsMonitor

5. Delete the autorun registry key for RebootNotifier:

> reg delete HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run /v VzRebootNotifier

6.
loop1           7:1    0   5G  1 loop
├─live-rw     253:0    0   5G  0 dm   /
└─live-base   253:1    0   5G  1 dm
loop2           7:2    0  32G  0 loop
└─live-rw     253:0    0   5G  0 dm   /
sda             8:0    0  64G  0 disk
sdc             8:32   0   1G  1 disk
sr0            11:0    1   2G  0 rom  /run/initramfs/live

To copy a file to a Linux VM, use the virsh x-exec and cat commands. For example:

# virsh x-exec 1d45a54b-0e20-4d5e-8f11-12c8b4f300db \
--shell 'cat > test.file' < /home/test.file

To get a file from a Linux VM, use the virsh x-exec and cat commands as well.
# virsh x-exec bbf4a6ec-865f-4e2c-ac21-8639d1bfb85c \
--shell 'type c:\\test\\test.file' > test.file

7.4 Setting Virtual Machines CPU Model

Virtual machines are created with the host CPU model by default. If nodes in the compute cluster have different CPUs, live migration of VMs between compute nodes may not work, or applications inside VMs that depend on particular CPUs may not function properly.
1. Install the diskimage-builder package:

# yum install diskimage-builder

2. For the RHEL 7 guest OS, download the cloud image from the Red Hat Customer Portal (login required) and execute:

# export DIB_LOCAL_IMAGE=<path_to_cloud_image>

3. Execute the following command to build a disk image with installed cloud-init for the desired Linux guest. For example:

# disk-image-create vm centos7 -t qcow2 -o centos7

where
• centos7 is the name of a guest OS.
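For step 5, the user data is a cloud-config file; a minimal sketch that defines the myuser account referenced below (the exact keys shown are an assumption, not the original sample):

#cloud-config
ssh_pwauth: true
users:
  - name: myuser
    plain_text_passwd: password
    lock_passwd: false
    sudo: ALL=(ALL) ALL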
where myuser is the name of a custom user and password is a password for the account.
6. Launch the deployment of a VM from the disk image using the configuration file as user data:

# vinfra service compute server create centos7-vm --flavor medium --network public \
--user-data user-data --volume source=image,id=centos7-image,size=10

where
• centos7-vm is the name of a new VM,
• user-data is the configuration file created in step 5,
• centos7-image is the image added to the compute cluster.
# vinfra service compute image create windows10-image --os-distro win10 --file <path_to_ISO>

where
• windows10-image is the name of a new image.
• win10 is the OS distribution. To list available distributions, run vinfra service compute show.

2. Create a VM from the ISO image. For example:

# vinfra service compute server create windows10-vm --flavor medium --network public \
--volume source=blank,size=64,boot-index=0,type=disk \
--volume source=image,id=windows10-image,size=5,bo
> & 'C:/Program Files/OpenSSH-Win64/install-sshd.ps1'

3. Start the OpenSSH SSH Server service in Control Panel > System and Security > Administrative Tools > Services and set its startup type to Automatic.
4. Open TCP port 22 for the OpenSSH service in the Windows Firewall:

> New-NetFirewallRule -Protocol TCP -LocalPort 22 -Direction Inbound `
-Action Allow -DisplayName OpenSSH

5. Open the C:\ProgramData\ssh\sshd_config file:

> notepad 'C:\ProgramData\ssh\sshd_config'

Comment
> cd C:\Users\<username>
> mkdir .ssh
> notepad .ssh\authorized_keys

The created file will have the .txt extension. To remove it, run:

> move .\.ssh\authorized_keys.txt .\.ssh\authorized_keys

7. Modify the permissions for the created file to disable inheritance as follows:

> icacls .ssh\authorized_keys /inheritance:r

8. Download Cloudbase-Init (for example, from the official site) and launch the installation:
1.
1. Click Finish.

After the VM shuts down, you can either:
• delete it to make its boot volume available for creating new VMs from it, or
• convert the VM boot volume to a template (see the section "Creating Images from Volumes" in the Administrator's Guide).

7.7 Securing OpenStack API Traffic with SSL

By means of the Compute API traffic type, Acronis Cyber Infrastructure exposes a public endpoint that listens to OpenStack API requests.
1. Upload the certificate and then the private key in the admin panel, on the SETTINGS > Management node > SSL ACCESS screen.
2. Place the CA certificate file into the operating system's trusted bundle:

# cp ca.pem /etc/pki/ca-trust/source/anchors/
# update-ca-trust extract

Alternatively, you can append the --os-cacert ca.pem option to each OpenStack client call.
3.
export OS_IDENTITY_API_VERSION=3

Now you can run OpenStack commands without the --insecure option.

7.8 Enabling Metering for Compute Resources

You can collect usage data of compute resources using Gnocchi, a time series database that processes and stores measurement data of compute resources and provides access to it via a REST API or the command-line tool. Gnocchi aggregates and stores measures for compute resource metrics according to their archive policy.
Cumulative metrics are polled every 5 minutes and increase over time, while gauge metrics are updated on events and show fluctuating values. You can enable metering services in your compute cluster by doing one of the following:
• If you have no compute cluster yet, deploy it and enable metering by adding the --enable-metering option to the vinfra service compute create command. For example:

# vinfra service compute create --nodes <node1>[,<node2>,...
The output shows that at 2 and 3 p.m. on August 13, 2019, VMs included in the two projects were allocated 2 and 4 vCPUs, respectively. To see the full list of gnocchi commands, refer to the official documentation.

7.9 Configuring Memory Policy for Storage Services

You can configure memory limits and guarantees for storage services at runtime using the vinfra memory-policy vstorage-services commands. You can do this for the entire cluster or a specific node.
parameters are configured manually, the memory management is performed automatically by the vcmmd daemon as follows:
• Each CS (e.g., storage disk) requires 512 MiB of RAM for page cache.
• The page cache minimum is 1 GiB.
• If the total memory is less than 48 GiB, the page cache maximum is calculated as two-thirds of it.
• If the total memory is greater than 48 GiB, the page cache maximum is 32 GiB.
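As a worked example under these rules (the node sizes are hypothetical): a node with 36 GiB of RAM gets a page cache maximum of two-thirds of 36 GiB, i.e. 24 GiB, whereas a node with 96 GiB of RAM is capped at 32 GiB; on either node, four CSes would require 4 × 512 MiB = 2 GiB of RAM for page cache, and the page cache minimum remains 1 GiB.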
+-----------+------------------------+
| Field     | Value                  |
+-----------+------------------------+
| cache     | maximum: 3298534883328 |
|           | minimum: 1099511627776 |
|           | ratio: 0.
--guarantee
Reset only the guarantee.
--swap
Reset only the swap size.
--cache
Reset only cache values.

Example:

# vinfra memory-policy vstorage-services per-cluster reset --cache
+-----------+---------------+
| Field     | Value         |
+-----------+---------------+
| cache     |               |
| guarantee | 8796093022208 |
| swap      | 1099511627776 |
+-----------+---------------+

This command resets the manually configured page cache limits to default for all nodes in the storage cluster.

7.9.
--node
Node ID or hostname.

Example:

# vinfra memory-policy vstorage-services per-node change --guarantee 8796093022208 \
--swap 1099511627776 --cache-ratio 0.5 --cache-minimum 1099511627776 \
--cache-maximum 3298534883328 --node 7ffa9540-5a20-41d1-b203-e3f349d62565
+-----------+------------------------+
| Field     | Value                  |
+-----------+------------------------+
| cache     | maximum: 3298534883328 |
|           | minimum: 1099511627776 |
|           | ratio: 0.5             |
| swap      | 1099511627776          |
+-----------+------------------------+

This command lists the storage services memory parameters set for the node with the ID 7ffa9540-5a20-41d1-b203-e3f349d62565.

7.9.6 vinfra memory-policy vstorage-services per-node reset

Reset per-node memory parameters to defaults:

usage: vinfra memory-policy vstorage-services per-node reset [--guarantee] [--swap] [--cache] --node <node>

--guarantee
Reset only the guarantee.
--swap
Reset only the swap size.