Dell PowerEdge Configuration Guide for the M I/O Aggregator 9.14.1.5 May 2019 Rev.
Notes, cautions, and warnings
NOTE: A NOTE indicates important information that helps you make better use of your product.
CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.
© 2018 - 2019 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries.
Contents (excerpt)
Chapter 1: About this Guide
    Audience
    Conventions
Ethernet Enhancements in Data Center Bridging
    Priority-Based Flow Control
    Enhanced Transmission Selection
FC-MAP Value
    Bridge-to-FCF Links
    Impact on other Software Features
Displaying Port Channel Information
    Interface Range
    Bulk Configuration Examples
    Monitor and Maintain Interfaces
Enabling the Verification of Member Links Utilization in a LAG Bundle
Monitoring the Member Links of a LAG Bundle
Verifying LACP Operation and LAG Configuration
Multiple Uplink LAGs
Displaying Tracked Objects
Chapter 16: Port Monitoring
    Configuring Port Monitoring
    Important Points to Remember
MIB Support to Display Reason for Last System Reboot
    Viewing the Reason for Last System Reboot Using SNMP
MIB Support to Display Egress Queue Statistics
Monitoring BGP sessions via SNMP
Troubleshooting a Switch Stack
    Failure Scenarios
Upgrading a Switch Stack
Overview
Setting up VLT
Virtual Link Trunking (VLT) in PMUX Mode
Displaying NPIV Proxy Gateway Information
    show interfaces status Command Example
    show fcoe-map Command Examples
    show qos dcb-map Command Examples
1 About this Guide This guide describes the supported protocols and software features, and provides configuration instructions and examples, for the Dell Networking M I/O Aggregator running Dell Networking OS version 9.7(0.0). The M I/O Aggregator is installed in a Dell PowerEdge M1000e Enclosure. For information about how to install and perform the initial switch configuration, refer to the Getting Started Guides on the Dell Support website at http://www.dell.
* (Exception). This symbol is a note associated with additional text on the page that is marked with an asterisk.
2 Before You Start To install the Aggregator in a Dell PowerEdge M1000e Enclosure, use the instructions in the Dell PowerEdge M I/O Aggregator Getting Started Guide that is shipped with the product. The I/O Aggregator (also known as the Aggregator) installs with zero-touch configuration. After you power it on, the Aggregator boots up with default settings and auto-configures with software features enabled.
Dell(conf)#stack-unit 0 iom-mode programmable-mux Select this mode to configure PMUX mode CLI commands. For more information on the PMUX mode, see PMUX Mode of the IO Aggregator. Stacking mode stack-unit unit iom-mode stack CONFIGURATION mode Dell(conf)#stack-unit 0 iom-mode stack Select this mode to configure Stacking mode CLI commands. For more information on the Stacking mode, see Stacking.
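Putting a mode change together, the sequence below is a sketch of selecting PMUX mode and making it take effect; it assumes that an IOM mode change takes effect only after the configuration is saved and the unit is reloaded, and the exact prompts on your system may differ:

```
Dell#configure
Dell(conf)#stack-unit 0 iom-mode programmable-mux
Dell(conf)#exit
Dell#copy running-config startup-config
Dell#reload
```

After the reload, the unit comes up in the selected IOM mode and exposes that mode's CLI commands.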
● Internet group management protocol (IGMP) snooping. ● Jumbo frames: Ports are set to a maximum MTU of 12,000 bytes by default. ● Link tracking: Uplink-state group 1 is automatically configured. In uplink state-group 1, server-facing ports auto-configure as downstream interfaces; the uplink port-channel (LAG 128) auto-configures as an upstream interface. Server-facing links are auto-configured to be brought up only if the uplink port-channel is up.
Link Tracking By default, all server-facing ports are tracked by the operational status of the uplink LAG. If the uplink LAG goes down, the Aggregator loses its connectivity and is no longer operational; all server-facing ports are brought down after the specified defer-timer interval, which is 10 seconds by default. If you have configured a VLAN, you can reduce the defer time by changing the defer-timer value, or remove the delay entirely by using the no defer-timer command.
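For example, adjusting the defer interval might look like the following sketch; it assumes the defer-timer command is entered under the auto-configured uplink-state-group 1, and the value shown (2 seconds) is illustrative:

```
Dell#configure
Dell(conf)#uplink-state-group 1
Dell(conf-uplink-state-group-1)#defer-timer 2
Dell(conf-uplink-state-group-1)#no defer-timer
```

The defer-timer command shortens the delay before server-facing ports are brought down; the no defer-timer form removes the delay.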
Deploying FN I/O Module This section provides design and configuration guidance for deploying the Dell PowerEdge FN I/O Module (FN IOM). By default, the FN IOM is in Standalone Mode.
3. Verify the connection. By default, the network ports on the PowerEdge FC-Series servers installed in the FX2 chassis remain down until the uplink port channel on the FN IOM is operational. This behavior is due to Uplink Failure Detection: when upstream connectivity fails, the FN IOM disables the downstream links.
DHCP Client-ID :f8b1566efc59 MTU 12000 bytes, IP MTU 11982 bytes LineSpeed auto Flowcontrol rx off tx off ARP type: ARPA, ARP Timeout 04:00:00 Last clearing of "show interface" counters 01:26:42 Queueing strategy: fifo Input Statistics: 941 packets, 98777 bytes 83 64-byte pkts, 591 over 64-byte pkts, 267 over 127-byte pkts 0 over 255-byte pkts, 0 over 511-byte pkts, 0 over 1023-byte pkts 694 Multicasts, 247 Broadcasts, 0 Unicasts 0 runts, 0 giants, 0 throttles 0 CRC, 0 overrun, 0 discarded Output Statistics
To bring up the downstream (server) ports on the FN IOM, port channel 128 must be up. Port channel 128 comes up when it is connected to a correctly configured port channel on an upstream switch. To enable port channel 128, connect any combination of the FN IOM’s external Ethernet ports (ports TenGigabitethernet 0/9-12) to the upstream switch. The port channel may have a minimum of one and a maximum of four links.
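Once the uplinks are cabled, the state of the uplink LAG can be checked from EXEC Privilege mode; the following sketch assumes the standard show interfaces port-channel syntax, and the exact output columns may vary by release:

```
Dell#show interfaces port-channel 128 brief
```

The output should report port channel 128 as up with the connected TenGigabitEthernet member ports listed; once it is up, the server-facing ports are brought up as well.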
3 Configuration Fundamentals The Dell Networking Operating System (OS) command line interface (CLI) is a text-based interface you can use to configure interfaces and protocols. The CLI is structured in modes for security and management purposes. Different sets of commands are available in each mode, and you can limit user access to modes using privilege levels. In Dell Networking OS, after you enable a command, it is entered into the running configuration file.
CLI Modes Different sets of commands are available in each mode. A command found in one mode cannot be executed from another mode (except for EXEC mode commands preceded by the do command; refer to the do Command section). The Dell Networking OS CLI is divided into three major mode levels: ● EXEC mode is the default mode and has a privilege level of 1, which is the most restricted level.
Table 1. Dell Command Modes (continued) CLI Command Mode Prompt Access Command CONFIGURATION Dell(conf)# ● From EXEC privilege mode, enter the configure command. ● From every mode except EXEC and EXEC Privilege, enter the exit command.
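Navigating between the modes in the table above can be sketched as follows; the interface used is illustrative, and Dell CLI comments are introduced with the ! character:

```
Dell>enable                               ! EXEC -> EXEC Privilege
Dell#configure                            ! EXEC Privilege -> CONFIGURATION
Dell(conf)#interface tengigabitethernet 0/1
Dell(conf-if-te-0/1)#exit                 ! back to CONFIGURATION
Dell(conf)#end                            ! back to EXEC Privilege
Dell#
```

The exit command moves up one mode level at a time, while end returns directly to EXEC Privilege mode from any configuration submode.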
Example of Viewing Disabled Commands Dell(conf)# interface managementethernet 0/0 Dell(conf-if-ma-0/0)# ip address 192.168.5.6/16 Dell(conf-if-ma-0/0)# Dell(conf-if-ma-0/0)# Dell(conf-if-ma-0/0)#show config ! interface ManagementEthernet 0/0 ip address 192.168.5.
● Key combinations are available to move quickly across the command line. The following table describes these short-cut key combinations.
Short-Cut Key Combination: Action
CNTL-A: Moves the cursor to the beginning of the command line.
CNTL-B: Moves the cursor back one character.
CNTL-D: Deletes the character at the cursor.
CNTL-E: Moves the cursor to the end of the line.
CNTL-F: Moves the cursor forward one character.
CNTL-I: Completes a keyword.
The grep command displays only the lines containing specified text. The following example shows this command used in combination with the show linecard all command.
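A sketch of filtering show output with grep follows; the search pattern is illustrative, and only lines containing the pattern are displayed:

```
Dell#show linecard all | grep online
```

The same pipe can be appended to any show command to reduce long output to the lines of interest.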
If either of these messages appears, Dell Networking recommends coordinating with the users listed in the message so that you do not unintentionally overwrite each other’s configuration changes. Configuring a Unique Host Name on the System While you can manually configure a host name for the system, you can also configure the system to have a unique host name. The unique host name is a combination of the platform type and the serial number of the system. The unique host name appears in the command prompt.
4 Configuration Cloning Configuration Cloning enables you to clone the configuration from one Aggregator to one or more Aggregators. You identify a source Aggregator, where the running configuration is check-pointed and extracted, and download it to the target Aggregator for further use. The target Aggregator checks the compatibility of the cloning file based on the version, mode, and optional modules.
● Cloning detailed status displays a string that gives detailed description of cloning status. When multiple error or warning messages are present, the status is separated by the ; delimiter. ● Cloning status codes are useful when there are multiple warning or failure messages. Each warning or failure message is given a code number; this status can list the message codes that can be decoded when the cloning status string could not accommodate all the errors and warnings.
in reboot. A counter is maintained to inform the user about the number of reboots required to bring the target Aggregator up and running with the cloning file. The counter is incremented for the first instance that requires a reboot, and thereafter only when a conflicting or dependent instance is encountered. The counter is not incremented for cases that are mutually exclusive. The following list identifies all commands that require a reboot to take effect. Table 3.
5 Data Center Bridging (DCB) On an I/O Aggregator, data center bridging (DCB) features are auto-configured in standalone mode. You can display information on DCB operation by using show commands. NOTE: DCB features are not supported on an Aggregator in stacking mode.
● Storage traffic based on Fibre Channel media uses the SCSI protocol for data transfer. This traffic typically consists of large data packets with a payload of 2K bytes that cannot recover from frame loss. To successfully transport storage traffic, data center Ethernet must provide no-drop service with lossless links. ● Servers use InterProcess Communication (IPC) traffic within high-performance computing clusters to share information. Server traffic is extremely sensitive to latency requirements.
○ DCBx communicates with the remote peer by link layer discovery protocol (LLDP) type, length, value (TLV) to determine current policies, such as PFC support and enhanced transmission selection (ETS) bandwidth allocation. ○ If the negotiation succeeds and the port is in DCBx Willing mode to receive a peer configuration, PFC parameters from the peer are used to configure PFC priorities on the port. If you enable the link-level flow control mechanism on the interface, DCBx negotiation with a peer is not performed.
Table 4. ETS Traffic Groupings (continued) Traffic Groupings Description Group ID A 4-bit identifier assigned to each priority group. The range is from 0 to 7. Group bandwidth Percentage of available bandwidth allocated to a priority group. Group transmission selection algorithm (TSA) Type of queue scheduling a priority group uses. In the Dell Networking OS, ETS is implemented as follows: ● ETS supports groups of 802.
Data Center Bridging in a Traffic Flow The following figure shows how DCB handles a traffic flow on an interface. Figure 3. DCB PFC and ETS Traffic Handling Enabling Data Center Bridging DCB is automatically configured when you configure FCoE or iSCSI optimization. Data center bridging supports converged enhanced Ethernet (CEE) in a data center network. DCB is disabled by default. It must be enabled to support CEE.
Configuring DCB Maps and its Attributes This topic contains the following sections that describe how to configure a DCB map, apply the configured DCB map to a port, configure PFC without a DCB map, and configure lossless queues. DCB Map: Configuration Procedure A DCB map consists of PFC and ETS parameters. By default, PFC is not enabled on any 802.1p priority and ETS allocates equal bandwidth to each priority. To configure user-defined PFC and ETS settings, you must create a DCB map. 1.
To apply a DCB map to an Ethernet port, follow these steps: 1. Enter interface configuration mode on an Ethernet port. CONFIGURATION mode interface {tengigabitEthernet slot/port | fortygigabitEthernet slot/port} 2.
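Putting the procedure together, the following sketch creates a DCB map and applies it to a port; the map name, bandwidth percentages, and interface are illustrative, and the priority-pgid values map dot1p priorities 0-7 to priority groups in order:

```
Dell(conf)#dcb-map SAN_DCB_MAP
Dell(conf-dcbmap-SAN_DCB_MAP)#priority-group 0 bandwidth 60 pfc off
Dell(conf-dcbmap-SAN_DCB_MAP)#priority-group 1 bandwidth 40 pfc on
Dell(conf-dcbmap-SAN_DCB_MAP)#priority-pgid 0 0 0 1 0 0 0 0
Dell(conf-dcbmap-SAN_DCB_MAP)#exit
Dell(conf)#interface tengigabitethernet 0/1
Dell(conf-if-te-0/1)#dcb-map SAN_DCB_MAP
```

In this sketch, dot1p priority 3 (typically FCoE traffic) is placed in lossless priority group 1 with 40% of the bandwidth, and all other priorities share group 0.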
2. Open a DCB map and enter DCB map configuration mode. CONFIGURATION mode dcb-map name 3. Disable PFC. DCB MAP mode no pfc mode on 4. Return to interface configuration mode. DCB MAP mode exit 5. Apply the DCB map, created to disable the PFC operation, on the interface. INTERFACE mode dcb-map {name | default} 6. Configure the port queues that still function as no-drop queues for lossless traffic. Range: 0-3.
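The steps above can be sketched as follows; the map name and interface are illustrative:

```
Dell(conf)#dcb-map NO_PFC_MAP
Dell(conf-dcbmap-NO_PFC_MAP)#no pfc mode on
Dell(conf-dcbmap-NO_PFC_MAP)#exit
Dell(conf)#interface tengigabitethernet 0/2
Dell(conf-if-te-0/2)#dcb-map NO_PFC_MAP
```

Applying the map turns PFC off on the port; any queues that must remain lossless are then configured separately as no-drop queues.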
NOTE: Normally, interfaces do not flap when DCB is automatically enabled. DCB processes VLAN-tagged packets and dot1p priority values. Untagged packets are treated with a dot1p priority of 0. For DCB to operate effectively, ingress traffic is classified according to its dot1p priority so that it maps to different data queues. The dot1p-queue assignments used on an Aggregator are shown in Table 6-1 in dcb enable auto-detect on-next-reload Command Example QoS dot1p Traffic Classification and Queue Assignment.
To reconfigure the Aggregator so that all interfaces come up with DCB disabled and link-level flow control enabled, use the no dcb enable on-next-reload command. PFC buffer memory is automatically freed. Enabling Auto-DCB-Enable Mode on Next Reload To configure the Aggregator so that all interfaces come up in auto-DCB-enable mode with DCB disabled and flow control enabled, use the dcb enable auto-detect on-next-reload command. 1.
pfc mode on By default, PFC mode is on. 5. (Optional) Enter a text description of the input policy. DCB INPUT POLICY mode description text The maximum is 32 characters. 6. Exit DCB input policy configuration mode. DCB INPUT POLICY mode exit 7. Enter interface configuration mode. CONFIGURATION mode interface type slot/port 8. Apply the input policy with the PFC configuration to an ingress interface. INTERFACE mode dcb-policy input policy-name 9.
How Priority-Based Flow Control is Implemented Priority-based flow control provides a flow control mechanism based on the 802.1p priorities in converged Ethernet traffic received on an interface and is enabled by default. As an enhancement to the existing Ethernet pause mechanism, PFC stops traffic transmission for specified priorities (CoS values) without impacting other priority classes. Different traffic types are assigned to different priority classes.
● Dell Networking OS supports hierarchical scheduling on an interface. Dell Networking OS control traffic is redirected to control queues as higher priority traffic with strict priority scheduling. After control queues drain out, the remaining data traffic is scheduled to queues according to the bandwidth and scheduler configuration in the dcb-map. The available bandwidth calculated by the ETS algorithm is equal to the link bandwidth after scheduling non-ETS higher-priority traffic.
● If priority group 3 has free bandwidth, it is distributed as follows: 20% of the free bandwidth to priority group 1 and 30% of the free bandwidth to priority group 2. ● If priority group 1 or 2 has free bandwidth, (20 + 30)% of the free bandwidth is distributed to priority group 3. Priority groups 1 and 2 retain whatever free bandwidth remains up to the (20+ 30)%.
● If the received peer configuration is not compatible with the currently configured port configuration, the link with the DCBx peer port is disabled and a syslog message for an incompatible configuration is generated. The network administrator must then reconfigure the peer device so that it advertises a compatible DCB configuration. The configuration received from a DCBx peer or from an internally propagated configuration is not stored in the switch’s running configuration.
Asymmetric DCB parameters are exchanged between a DCBx-enabled port and a peer port without requiring that a peer port and the local port use the same configured values for the configurations to be compatible. For example, ETS uses an asymmetric exchange of parameters between DCBx peers. Symmetric DCB parameters are exchanged between a DCBx-enabled port and a peer port but requires that each configured parameter value be the same for the configurations in order to be compatible.
DCBx operations on a port are performed according to the auto-configured DCBx version, including fast and slow transmit timers and message formats. If a DCBx frame with a different version is received, a syslog message is generated and the peer version is recorded in the peer status table. If the frame cannot be processed, it is discarded and the discard counter is incremented.
DCBx Prerequisites and Restrictions The following prerequisites and restrictions apply when you configure DCBx operation on a port: ● DCBx requires LLDP in both send (TX) and receive (RX) modes to be enabled on a port interface. If multiple DCBx peer ports are detected on a local DCBx interface, LLDP is shut down. ● The CIN version of DCBx supports only PFC, ETS, and FCOE; it does not support iSCSI, backward congestion management (BCN), logical link down (LLD), and network interface virtualization (NIV).
Verifying the DCB Configuration To display DCB configurations, use the following show commands. Table 5. Displaying DCB Configurations Command Output show qos dot1p-queue mapping Displays the current 802.1p priority-queue mapping. show qos dcb-map map-name Displays the DCB parameters configured in a specified DCB map. show dcb [stack-unit unit-number] Displays the data center bridging status, number of PFCenabled ports, and number of PFC-enabled queues.
Example of the show interface pfc statistics Command Dell#show interfaces tengigabitethernet 0/3 pfc statistics Interface TenGigabitEthernet 0/3 Priority Rx XOFF Frames Rx Total Frames Tx Total Frames ------------------------------------------------------0 0 0 0 1 0 0 0 2 0 0 0 3 0 0 0 4 0 0 0 5 0 0 0 6 0 0 0 7 0 0 0 Example of the show interfaces pfc summary Command Dell# show interfaces tengigabitethernet 0/4 pfc summary Interface TenGigabitEthernet 0/4 Admin mode is on Admin is enabled Remote is enabled,
Field - Description
Admin mode is on; Admin is enabled - PFC Admin mode is on or off, with a list of the configured PFC priorities. When PFC admin mode is on, PFC advertisements are enabled to be sent and received from peers; received PFC configuration takes effect. The admin operational status for a DCBx exchange of PFC configuration is enabled or disabled.
Field - Description
PFC TLV Statistics: Error pkts - Number of PFC error packets received.
PFC TLV Statistics: Pause Tx pkts - Number of PFC pause frames transmitted.
PFC TLV Statistics: Pause Rx pkts - Number of PFC pause frames received.
Input Appln Priority TLV pkts - Number of Application Priority TLVs received.
Output Appln Priority TLV pkts - Number of Application Priority TLVs transmitted.
Error Appln Priority TLV pkts - Number of Application Priority error packets received.
2 13% 3 13% 4 12% 5 12% 6 12% 7 12% Oper status is init Conf TLV Tx Status is disabled Traffic Class TLV Tx Status is disabled ETS ETS ETS ETS ETS ETS Example of the show interface ets detail Command Dell# show interfaces tengigabitethernet Interface TenGigabitEthernet 0/4 Max Supported TC Groups is 4 Number of Traffic Classes is 8 Admin mode is on Admin Parameters : -----------------Admin is enabled TC-grp Priority# Bandwidth 0 0,1,2,3,4,5,6,7 100% 1 0% 2 0% 3 0% 4 0% 5 0% 6 0% 7 0% 0/4 ets detail TSA
Field Description When on, the scheduling and bandwidth allocation configured in an ETS output policy or received in a DCBx TLV from a peer can take effect on an interface. Admin Parameters ETS configuration on local port, including priority groups, assigned dot1p priorities, and bandwidth allocation. Remote Parameters ETS configuration on remote peer port, including Admin mode (enabled if a valid TLV was received or disabled), priority groups, assigned dot1p priorities, and bandwidth allocation.
Admin Parameters: -------------------Admin is enabled TC-grp Priority# Bandwidth TSA -----------------------------------------------0 0,1,2,3,4,5,6,7 100% ETS 1 2 3 4 5 6 7 8 Stack unit 1 stack port all Max Supported TC Groups is 4 Number of Traffic Classes is 1 Admin mode is on Admin Parameters: -------------------Admin is enabled TC-grp Priority# Bandwidth TSA -----------------------------------------------0 0,1,2,3,4,5,6,7 100% ETS 1 2 3 4 5 6 7 8 Example of the show interface DCBx detail Command Dell# s
2 Input PG TLV Pkts, 3 Output PG TLV Pkts, 0 Error PG TLV Pkts 2 Input Appln Priority TLV pkts, 0 Output Appln Priority TLV pkts, 0 Error Appln Priority TLV Pkts Total DCBX Frames transmitted 27 Total DCBX Frames received 6 Total DCBX Frame errors 0 Total DCBX Frames unrecognized 0 The following table describes the show interface DCBx detail command fields. Field Description Interface Interface type with chassis slot and port number.
Field - Description
Peer DCBx Status: Acknowledgment Number - Acknowledgement number transmitted in Control TLVs received from the peer device.
Total DCBx Frames transmitted - Number of DCBx frames sent from the local port.
Total DCBx Frames received - Number of DCBx frames received from the remote peer port.
Total DCBx Frame errors - Number of DCBx frames with errors received.
Total DCBx Frames unrecognized - Number of unrecognizable DCBx frames received.
Field Description Appln Priority TLV Pkts QoS dot1p Traffic Classification and Queue Assignment DCB supports PFC, ETS, and DCBx to handle converged Ethernet traffic that is assigned to an egress queue according to the following QoS methods: Honor dot1p dot1p priorities in ingress traffic are used at the port or global switch level. Layer 2 class maps dot1p priorities are used to classify traffic in a class map and apply a service policy to an ingress port to map traffic to egress queues.
Reason - Description
LLDP Rx/Tx is disabled - LLDP is disabled (Admin Mode set to rx or tx only) globally or on the interface.
Waiting for Peer - Waiting for a peer, or the detected peer connection has aged out.
Multiple Peer Detected - Multiple peer connections detected on the interface.
Version Conflict - The DCBx version on the peer is different from the local or globally configured DCBx version.
S6000-109-Dell(conf)#dcb enable 2. Configure the shared PFC buffer size and the total buffer size. A maximum of 4 lossless queues are supported. CONFIGURATION mode S6000-109-Dell(conf)#dcb pfc-shared-buffer-size 4000 S6000-109-Dell(conf)#dcb pfc-total-buffer-size 5000 3. Configure the number of PFC queues. CONFIGURATION mode Dell(conf)#dcb enable pfc-queues 4 The number of ports supported depends on the available buffer and the number of lossless queues configured.
6 Dynamic Host Configuration Protocol (DHCP) The Aggregator is auto-configured to operate as a dynamic host configuration protocol (DHCP) client. The DHCP server, DHCP relay agent, and secure DHCP features are not supported. The DHCP is an application layer protocol that dynamically assigns IP addresses and other configuration parameters to network end-stations (hosts) based on configuration policies determined by network administrators.
4. After receiving a DHCPREQUEST, the server binds the client's unique identifier (the hardware address plus IP address) to the accepted configuration parameters and stores the data in a database called a binding table. The server then broadcasts a DHCPACK message, which signals to the client that it may begin using the assigned parameters. Additional messages are used in case the DHCP negotiation deviates from the process previously described and shown in the illustration below.
Debugging DHCP Client Operation To enable debug messages for DHCP client operation, enter the following debug commands: ● Enable the display of log messages for all DHCP packets sent and received on DHCP client interfaces. EXEC Privilege [no] debug ip dhcp client packets [interface type slot/port] ● Enable the display of log messages for the following events on DHCP client interfaces: IP address acquisition, IP address release, Renewal of IP address and lease time, and Release of an IP address.
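For instance, enabling both debug commands for the out-of-band management interface might look like the following sketch; the interface is illustrative:

```
Dell#debug ip dhcp client packets interface managementethernet 0/0
Dell#debug ip dhcp client events interface managementethernet 0/0
```

Log messages such as those shown in the example output below are then generated as the client acquires, renews, or releases its lease; the no form of each command disables the debugging.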
1w2d23h: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_PKT: DHCP RELEASE sent in Interface Ma 0/0 1w2d23h: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_EVT: Interface Ma 0/0 :Transitioned to state STOPPED 1w2d23h: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_EVT: Interface Ma 0/0 :DHCP IP RELEASED CMD sent to FTOS in state STOPPED Dell# renew dhcp int Ma 0/0 Dell#1w2d23h: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_EVT: Interface Ma 0/0 :DHCP RENEW CMD Received in state S
You can override the DHCP-assigned address on the OOB management interface by manually configuring an IP address using the CLI or CMC interface. If no user-configured IP address exists for the OOB interface and the OOB IP address is not in the startup configuration, the Aggregator automatically obtains one using DHCP. You can also manually configure an IP address for the VLAN 1 default management interface using the CLI.
DHCP Client on a VLAN The following conditions apply on a VLAN that operates as a DHCP client: ● The default VLAN 1 with all ports auto-configured as members is the only L3 interface on the Aggregator. ● When the default management VLAN has a DHCP-assigned address and you reconfigure the default VLAN ID number, the Aggregator: ○ Sends a DHCP release to the DHCP server to release the IP address. ○ Sends a DHCP request to obtain a new IP address.
Option - Number and Description
● 8: DHCPINFORM
Parameter Request List - Option 55: Clients use this option to tell the server which parameters they require. It is a series of octets where each octet is a DHCP option code.
Renewal Time - Option 58: Specifies the amount of time after the IP address is granted that the client attempts to renew its lease with the original server.
release dhcp interface type slot/port ● Acquire a new IP address with renewed lease time from a DHCP server. EXEC Privilege mode renew dhcp interface type slot/port Viewing DHCP Statistics and Lease Information To display DHCP client information, enter the following show commands: ● Display statistics about DHCP client interfaces. EXEC Privilege show ip dhcp client statistics interface type slot/port ● Clear DHCP client statistics on a specified or on all interfaces.
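A sketch of releasing and renewing a lease on a port and checking client statistics follows; the interface is illustrative, and the clear command name assumes the standard clear counterpart of the show command above:

```
Dell#release dhcp interface tengigabitethernet 0/1
Dell#renew dhcp interface tengigabitethernet 0/1
Dell#show ip dhcp client statistics interface tengigabitethernet 0/1
Dell#clear ip dhcp client statistics interface tengigabitethernet 0/1
```

The release command returns the address to the server, while renew requests a new address with a renewed lease time.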
7 FIP Snooping This chapter describes FIP snooping concepts and configuration procedures.
FIP provides the functionality for discovering and logging in to an FCF. After discovery and login, FIP allows FCoE traffic to be sent and received between FCoE end-devices (ENodes) and the FCF. FIP uses its own EtherType and frame format. The following illustration of FIP discovery depicts the communication that occurs between an ENode server and an FCoE switch (FCF).
You must enable FIP snooping on an Aggregator and configure the FIP snooping parameters. When you enable FIP snooping, all ports on the switch by default become ENode ports. Dynamic ACL generation on an Aggregator operating as a FIP snooping bridge functions as follows: ● Global ACLs are applied on server-facing ENode ports. ● Port-based ACLs are applied on ports directly connected to an FCF and on server-facing ENode ports. ● Port-based ACLs take precedence over global ACLs.
● Performs FIP snooping (allowing and parsing FIP frames) globally on all VLANs or on a per-VLAN basis. ● Sets the FCoE MAC address prefix (FC-MAP) value used by an FCF to assign a MAC address to an FCoE end-device (server ENode or storage device) after a server successfully logs in. ● Sets the FCF mode to provide additional port security on ports that are directly connected to an FCF. ● Checks FIP snooping-enabled VLANs to ensure that they are operationally active.
FIP Snooping Prerequisites On an Aggregator, FIP snooping requires the following conditions: ● A FIP snooping bridge requires DCBX and PFC to be enabled on the switch for lossless Ethernet connections (refer to Data Center Bridging (DCB)). Dell recommends that you also enable ETS, but it is not required. DCBX and PFC mode are auto-configured on Aggregator ports, and FIP snooping is operational on the port.
By default, a port is configured for bridge-to-ENode links. 5. Configure the port for bridge-to-FCF links. INTERFACE or CONFIGURATION mode fip-snooping port-mode fcf NOTE: All these configurations are available only in PMUX mode. NOTE: To disable the FIP snooping feature or FIP snooping on VLANs, use the no version of a command; for example, no feature fip-snooping or no fip-snooping enable.
show fip-snooping sessions Command Example
Dell#show fip-snooping sessions
Enode MAC          Enode Intf  FCoE MAC           FC-ID     FCF MAC
aa:bb:cc:00:00:00  Te 0/42     0e:fc:00:01:00:01  01:00:01  aa:bb:cd:00:00:00
aa:bb:cc:00:00:00  Te 0/42     0e:fc:00:01:00:02  01:00:02  aa:bb:cd:00:00:00
aa:bb:cc:00:00:00  Te 0/42     0e:fc:00:01:00:03  01:00:03  aa:bb:cd:00:00:00
aa:bb:cc:00:00:00  Te 0/42     0e:fc:00:01:00:04  01:00:04  aa:bb:cd:00:00:00
aa:bb:cc:00:00:00  Te 0/42     0e:fc:00:01:00:05  01:00:05  aa:bb:cd:00:00:00
Port WWPN
31:00:0e:fc:00:00:00:00 4
show fip-snooping fcf Command Example
Dell# show fip-snooping fcf
FCF MAC            FCF Interface  VLAN  FC-MAP    FKA_ADV_PERIOD  No. of Enodes
-----------------  -------------  ----  --------  --------------  -------------
54:7f:ee:37:34:40  Po 22          100   0e:fc:00  4000            2
show fip-snooping fcf Command Description
Field          Description
FCF MAC        MAC address of the FCF.
FCF Interface  Slot/port number of the interface to which the FCF is connected.
VLAN           VLAN ID number used by the session.
FC-MAP         FC-Map value advertised by the FCF.
Number of FLOGO Rejects :0
Number of CVL :0
Number of FCF Discovery Timeouts :0
Number of VN Port Session Timeouts :0
Number of Session failures due to Hardware Config :0
show fip-snooping statistics (port channel) Command Example
Dell# show fip-snooping statistics interface port-channel 22
Number of Vlan Requests :0
Number of Vlan Notifications :2
Number of Multicast Discovery Solicits :0
Number of Unicast Discovery Solicits :0
Number of FLOGI :0
Number of FDISC :0
Number of FLOGO :0
Number of Enode Ke
Field                    Description
Discovery Advertisements
Number of FLOGI Accepts  Number of FIP FLOGI accept frames received on the interface.
Number of FLOGI Rejects  Number of FIP FLOGI reject frames received on the interface.
Number of FDISC Accepts  Number of FIP FDISC accept frames received on the interface.
Number of FDISC Rejects  Number of FIP FDISC reject frames received on the interface.
Number of FLOGO Accepts  Number of FIP FLOGO accept frames received on the interface.
FIP Snooping Example The illustration below shows an Aggregator used as a FIP snooping bridge for FCoE traffic between an ENode (server blade) and an FCF (ToR switch). The ToR switch operates as an FCF and FCoE gateway. Figure 9. FIP Snooping on an Aggregator In the above figure, DCBX and PFC are enabled on the Aggregator (FIP snooping bridge) and on the FCF ToR switch. On the FIP snooping bridge, DCBX is configured as follows: ● A server-facing port is configured for DCBX in an auto-downstream role.
Debugging FIP Snooping To enable debug messages for FIP snooping events, enter the debug fip-snooping command. 1. Enable FIP snooping debugging for all or a specified event type, where: ● all enables all debugging options. ● acl enables debugging only for ACL-specific events. ● error enables debugging only for error conditions. ● ifm enables debugging only for IFM events. ● info enables debugging only for information events. ● ipc enables debugging only for IPC events.
8 Internet Group Management Protocol (IGMP) On an Aggregator, IGMP snooping is auto-configured. You can display IGMP information by using the show ip igmp command. Multicast is based on identifying many hosts by a single destination IP address. Hosts represented by the same IP address are a multicast group. The internet group management protocol (IGMP) is a Layer 3 multicast protocol that hosts use to join or leave a multicast group.
Figure 10. IGMP Version 2 Packet Format Joining a Multicast Group There are two ways that a host may join a multicast group: it may respond to a general query from its querier, or it may send an unsolicited report to its querier. ● Responding to an IGMP Query. ○ One router on a subnet is elected as the querier. The querier periodically multicasts (to all-multicast-systems address 224.0.0.1) a general query to all hosts on the subnet.
Figure 11. IGMP version 3 Membership Query Packet Format Figure 12. IGMP version 3 Membership Report Packet Format Joining and Filtering Groups and Sources The illustration below shows how multicast routers maintain group and source information from unsolicited reports. ● The first unsolicited report from the host indicates that it wants to receive traffic for group 224.1.1.1. ● The host’s second report indicates that it is only interested in traffic from group 224.1.1.1, source 10.11.1.1.
Leaving and Staying in Groups The illustration below shows how multicast routers track and refresh state changes in response to group-and-source-specific and general queries. ● Host 1 sends a message indicating it is leaving group 224.1.1.1 and that the included filters for 10.11.1.1 and 10.11.1.2 are no longer necessary.
Displaying IGMP Information Use the show commands in the following table to display IGMP information. If you specify a group address or interface: ● Enter a group address in dotted decimal format; for example, 225.0.0.0. ● Enter an interface in one of the following formats: tengigabitethernet slot/port, port-channel port-channel-number, or vlan vlan-number.
Last reporter Last reporter mode Last report received Group source list Source address 1.1.1.2 Member Ports: Po 1 Dell# 1.1.1.
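As a brief, hedged illustration of the address and interface formats described above (the group address, VLAN, and port values are placeholders):

Dell#show ip igmp groups 225.0.0.0
Dell#show ip igmp groups vlan 10
Dell#show ip igmp interface tengigabitethernet 0/4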
9 Interfaces This chapter describes 100/1000/10000 Mbps Ethernet, 10 Gigabit Ethernet, and 40 Gigabit Ethernet interface types, both physical and logical, and how to configure them with the Dell Networking Operating Software (OS).
Interface Auto-Configuration An Aggregator auto-configures interfaces as follows: ● All interfaces operate as layer 2 interfaces at 10GbE in standalone mode. FlexIO module interfaces support only uplink connections. You can only use the 40GbE ports on the base module for stacking. ○ By default, the two fixed 40GbE ports on the base module operate in 4x10GbE mode with breakout cables and support up to eight 10GbE uplinks. You can configure the base-module ports as 40GbE links for stacking.
The following example shows the configuration and status information for one interface.
Disabling and Re-enabling a Physical Interface By default, all port interfaces on an Aggregator are operationally enabled (no shutdown) to send and receive Layer 2 traffic. You can reconfigure a physical interface to shut it down by entering the shutdown command. To re-enable the interface, enter the no shutdown command. 1.
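The shutdown sequence described above can be sketched as follows (the interface number is illustrative):

Dell#configure
Dell(conf)#interface tengigabitethernet 0/1
Dell(conf-if-te-0/1)#shutdown
Dell(conf-if-te-0/1)#no shutdown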
Management Interfaces An Aggregator auto-configures with a DHCP-based IP address for in-band management on VLAN 1 and remote out-of-band (OOB) management. The Aggregator management interface has both a public IP and private IP address on the internal Fabric D interface. The public IP address is exposed to the outside world for WebGUI configurations/WSMAN and other proprietary traffic.
Configuring a Static Route for a Management Interface When an IP address used by a protocol and a static management route exist for the same prefix, the protocol route takes precedence over the static management route. To configure a static route for the management port, use the following command in CONFIGURATION mode: 1. Assign a static route to point to the management interface or forwarding router.
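As a hedged sketch of the command described above (the prefixes and next-hop address are illustrative placeholders):

Dell#configure
Dell(conf)#management route 192.168.100.0/24 managementethernet
Dell(conf)#management route 0.0.0.0/0 10.1.1.254

The managementethernet keyword points the route at the management interface itself; an IP address points it at a forwarding router.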
Port-Based VLANs A port-based VLAN is a broadcast domain defined by different ports or interfaces. In Dell Networking OS, a port-based VLAN can contain interfaces from different stack units within the chassis. Dell Networking OS supports 4094 port-based VLANs. Port-based VLANs offer increased security for traffic, conserve bandwidth, and allow switch segmentation. Interfaces in different VLANs do not communicate with each other, adding some security to the traffic on those interfaces.
vlan-range specifies a range of tagged VLANs. Separate VLAN IDs with a comma; specify a VLAN range with a dash; for example, vlan tagged 3,5-7. To reconfigure an interface as a member of only specified untagged VLANs, enter the vlan untagged command in INTERFACE mode: 1. Add the interface as an untagged member of one or more VLANs. INTERFACE mode vlan untagged {vlan-id | vlan-range} vlan-id specifies an untagged VLAN number. Range: 2-4094 vlan-range specifies a range of untagged VLANs.
Codes: * - Default VLAN, G - GVRP VLANs, R - Remote Port Mirroring VLANs, P - Primary, C - Community, I - Isolated Q: U - Untagged, T - Tagged x - Dot1x untagged, X - Dot1x tagged G - GVRP tagged, M - Vlan-stack, H - VSN tagged i - Internal untagged, I - Internal tagged, v - VLT untagged, V - VLT tagged, C - CMC tagged NUM 2 Status Active Description Dell(conf-if-te-1/7) Q U T T Ports Po1(Te 0/7,18) Po128(Te 0/50-51) Te 1/7 Except for hybrid ports, only a tagged interface can be a member of multiple V
2. Configure the tagged VLANs 10 through 15 and untagged VLAN 20 on this port. Dell(conf-if-te-0/1)#vlan tagged 10-15 Dell(conf-if-te-0/1)#vlan untagged 20 Dell(conf-if-te-0/1)# 3. Show the running configurations on this port. Dell(conf-if-te-0/1)#show config ! interface TenGigabitEthernet 0/1 portmode hybrid switchport vlan tagged 10-15 vlan untagged 20 no shutdown Dell(conf-if-te-0/1)#end Dell# 4. Initialize the port-channel with configurations such as admin up, portmode, and switchport.
15 Active 20 Active T Te 0/1 T Po128(Te 0/4-5) T Te 0/1 U Po128(Te 0/4-5) U Te 0/1 Dell# You can remove the inactive VLANs that have no member ports using the following command: Dell#configure Dell(conf)#no interface vlan vlan-id vlan-id — Inactive VLAN with no member ports You can remove the tagged VLANs using the no vlan tagged vlan-range command. You can remove the untagged VLANs using the no vlan untagged command in the physical port/port-channel.
As soon as a port channel is auto-configured, the Dell Networking OS treats it like a physical interface. For example, IEEE 802.1Q tagging is maintained while the physical interface is in the port channel. Member ports of a LAG are added and programmed into hardware in a predictable order based on the port ID, instead of in the order in which the ports come up. With this implementation, load balancing yields predictable results across switch resets and chassis reloads.
1 Dell# L2 down 00:00:00 Te 0/16 (Down) To display detailed information on a port channel, enter the show interfaces port-channel command in EXEC Privilege mode. The below example shows the port channel’s mode (L2 for Layer 2, L3 for Layer 3, and L2L3 for a Layer 2 port channel assigned to a routed VLAN), the status, and the number of interfaces belonging to the port channel.
Output 34.00 Mbits/sec, 12314 packets/sec, 0.36% of line-rate Time since last interface status change: 00:13:57 Interface Range An interface range is a set of interfaces to which other commands may be applied; it may be created if there is at least one valid interface within the range. Bulk configuration excludes any nonexistent interfaces in the range from being configured. A default VLAN may be configured only if the interface range being configured consists of only VLAN ports.
Exclude a Smaller Port Range If the interface range has multiple port ranges, the smaller port range is excluded from the prompt. Interface Range Prompt Excluding a Smaller Port Range Dell(conf)#interface range tengigabitethernet 2/0 - 23 , tengigab 2/1 - 10 Dell(conf-if-range-te-2/0-23)# Overlap Port Ranges If overlapping port ranges are specified, the port range is extended to the smallest start port number and largest end port number.
Interface: Te 0/1, Disabled, Link is Down, Linespeed is 1000 Mbit Traffic statistics: Input bytes: Output bytes: Input packets: Output packets: 64B packets: Over 64B packets: Over 127B packets: Over 255B packets: Over 511B packets: Over 1023B packets: Error statistics: Input underruns: Input giants: Input throttles: Input CRC: Input IP checksum: Input overrun: Output underruns: Output throttles: m l T q - Current 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 Rate Bps Bps pps pps pps pps pps pps pps pps Delta
INTERFACE mode wavelength 1529.0 The wavelength range is from 1528.3 nm to 1568.77 nm. ● Verify configuration changes. INTERFACE mode show config Flow Control Using Ethernet Pause Frames An Aggregator auto-configures to operate in auto-DCB-enable mode (Refer to Data Center Bridging: Auto-DCB-Enable Mode).
○ tx on: enter the keywords tx on to send control frames from this port to the connected device when a higher rate of traffic is received. ○ tx off: enter the keywords tx off so that flow control frames are not sent from this port to the connected device when a higher rate of traffic is received. ○ negotiate: enable pause-negotiation with the egress port of the peer device. If the negotiate command is not used, pause-negotiation is disabled. NOTE: The default is rx off.
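The rx/tx keywords above are combined in a single flowcontrol command; the following is an illustrative sketch (the interface number is a placeholder):

Dell(conf)#interface tengigabitethernet 0/1
Dell(conf-if-te-0/1)#flowcontrol rx on tx off
Dell(conf-if-te-0/1)#show config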
Setting the Speed and Duplex Mode of Ethernet Interfaces To discover whether the remote and local interface requires manual speed synchronization, and to manually synchronize them if necessary, use the following command sequence. 1. Determine the local interface status. Refer to the following example. EXEC Privilege mode show interfaces [interface] status 2. Determine the remote interface status. EXEC mode or EXEC Privilege mode [Use the command on the remote system that is equivalent to the first command.
In the previous example, several ports display “Auto” in the Speed field, including port 0/1. In the following example, the speed of port 0/1 is set to 100Mb and then its auto-negotiation is disabled.
INTERFACE mode no negotiation auto 8. Verify configuration changes. INTERFACE mode show config NOTE: The show interfaces status command displays link status, but not administrative status. For link and administrative status, use the show ip interface [interface | brief] [configuration] command.
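The sequence above can be sketched as follows (port 0/1 and the 100 Mb speed are taken from the preceding example):

Dell#show interfaces tengigabitethernet 0/1 status
Dell#configure
Dell(conf)#interface tengigabitethernet 0/1
Dell(conf-if-te-0/1)#speed 100
Dell(conf-if-te-0/1)#no negotiation auto
Dell(conf-if-te-0/1)#show config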
Table 11.
The following example illustrates the show commands that have the available configured keyword.
Clearing an Interface: Dell#clear counters tengig 0/0 Clear counters on TenGigabitEthernet 0/0 [confirm] Dell# Enabling the Management Address TLV on All Interfaces of an Aggregator The management address TLV, which is an optional TLV of type 8, denotes the network address of the management interface, and is supported by the Dell Networking OS. It is advertised on all the interfaces on an I/O Aggregator in the Link Layer Discovery Protocol (LLDP) data units.
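A hedged sketch of enabling the management address TLV advertisement from PROTOCOL LLDP mode (the syntax follows the LLDP configuration examples elsewhere in this guide):

Dell#configure
Dell(conf)#protocol lldp
Dell(conf-lldp)#advertise management-tlv management-address
Dell(conf-lldp)#no disable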
10 iSCSI Optimization An Aggregator enables internet small computer system interface (iSCSI) optimization with default iSCSI parameter settings (Default iSCSI Optimization Values) and is auto-provisioned to support: iSCSI Optimization: Operation To display information on iSCSI configuration and sessions, use show commands. iSCSI optimization enables quality-of-service (QoS) treatment for iSCSI traffic.
NOTE: After a switch is reloaded, powercycled, or upgraded, any information exchanged during the initial handshake is not available. If the switch establishes communication after reloading, it detects that a session was in progress but could not obtain complete information for it. Any incomplete information is not available in the show commands.
● ● ● ● ● ISID (Initiator defined session identifier) Initiator’s IQN (iSCSI qualified name) Target’s IQN Initiator’s TCP Port Target’s TCP Port If no iSCSI traffic is detected for a session during a user-configurable aging period, the session data clears.
Table 12. Displaying iSCSI Optimization Information (continued) show iscsi Displays the currently configured iSCSI settings. show iscsi sessions Displays information on active iSCSI sessions on the switch that have been established since the last reload. show iscsi sessions detailed [session isid] Displays detailed information on active iSCSI sessions on the switch. To display detailed information on a specified iSCSI session, enter the session’s iSCSI ID.
11 Isolated Networks for Aggregators An isolated network is an environment in which servers can communicate only with the uplink interfaces and not with each other, even though they are part of the same VLAN. If the servers in the same chassis need to communicate with each other, they require non-isolated network connectivity between them, or the traffic must be routed in the ToR switch. Isolated networks can be enabled on a per-VLAN basis.
12 Link Aggregation Unlike IOA Automated modes (Standalone and VLT modes), the IOA Programmable MUX (PMUX) can support multiple uplink LAGs. You can provision multiple uplink LAGs. The I/O Aggregator auto-configures with link aggregation groups (LAGs) as follows: ● All uplink ports are automatically configured in a single port channel (LAG 128).
NOTE: In Standalone, VLT, and Stacking modes, you can configure a maximum of 16 members in port-channel 128. In PMUX mode, you can have multiple port-channels with up to 16 members per channel. Uplink LAG When the Aggregator power is on, all uplink ports are configured in a single LAG (LAG 128). Server-Facing LAGs Server-facing ports are configured as individual ports by default.
LACP Example The illustration below shows how LACP operates in an Aggregator stack by auto-configuring the uplink LAG 128 for the connection to a top-of-rack (ToR) switch and a server-facing LAG for the connection to an installed server that you configured for LACP-based NIC teaming. Figure 17.
Configuration Tasks for Port Channel Interfaces To configure a port channel (LAG), use commands similar to those for physical interfaces. By default, no port channels are configured in the startup configuration. In VLT mode, port channel configurations are allowed in the startup configuration.
The interface variable is the physical interface type and slot/port information. 2. Double check that the interface was added to the port channel. INTERFACE PORT-CHANNEL mode show config To view the port channel’s status and channel members in a tabular format, use the show interfaces port-channel brief command in EXEC Privilege mode, as shown in the following example.
fip-snooping port-mode fcf no shutdown link-bundle-monitor enable Dell(conf-if-po-128)# Reassigning an Interface to a New Port Channel An interface can be a member of only one port channel. If the interface is a member of a port channel, remove it from the first port channel and then add it to the second port channel. Each time you add or remove a channel member from a port channel, Dell Networking OS recalculates the hash algorithm for the port channel.
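The reassignment described above can be sketched with static channel-member commands (the port-channel numbers and member port are illustrative, and this style of manual LAG configuration applies in PMUX mode):

Dell(conf)#interface port-channel 4
Dell(conf-if-po-4)#no channel-member tengigabitethernet 0/8
Dell(conf-if-po-4)#exit
Dell(conf)#interface port-channel 5
Dell(conf-if-po-5)#channel-member tengigabitethernet 0/8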
Configuring VLAN Tags for Member Interfaces To configure and verify VLAN tags for individual members of a port channel, perform the following: 1. Configure VLAN membership on individual ports INTERFACE mode Dell(conf-if-te-0/2)#vlan tagged 2,3-4 2. Use the switchport command in INTERFACE mode to enable Layer 2 data transmissions through an individual interface INTERFACE mode Dell(conf-if-te-0/2)#switchport This switchport configuration is allowed only in PMUX mode.
In VLT mode, the global auto LAG is automatically synced to the peer VLT through ICL message. 2. Enable the auto LAG on a specific server port. Interface Configuration mode auto-lag enable Dell(conf-if-te-0/1)# auto-lag enable To disable the auto LAG, use the no auto-lag enable command. When disabled, the server port is removed from the LAG and if the server port is the last member of the LAG, the LAG itself gets removed. Any LACPDUs received on the server port are discarded.
Sample Configuration Dell# config terminal Dell(conf)# no io-aggregator auto-lag enable Dell(conf)# end Dell# show io-aggregator auto-lag status Auto LAG creation on server port(s) is disabled Dell# Dell# config terminal Dell(config)# interface tengigabitethernet 0/1 Dell(config-if-te-0/1)# no auto-lag enable Dell(config-if-te-0/1)# show config ! interface TenGigabitEthernet 0/1 mtu 12000 portmode hybrid switchport no auto-lag enable ! protocol lldp advertise management-tlv management-address system-name dc
0 packets, 0 bytes 0 64-byte pkts, 0 over 64-byte pkts, 0 over 127-byte pkts 0 over 255-byte pkts, 0 over 511-byte pkts, 0 over 1023-byte pkts 0 Multicasts, 0 Broadcasts 0 runts, 0 giants, 0 throttles 0 CRC, 0 overrun, 0 discarded Output Statistics: 0 packets, 0 bytes, 0 underruns 0 64-byte pkts, 0 over 64-byte pkts, 0 over 127-byte pkts 0 over 255-byte pkts, 0 over 511-byte pkts, 0 over 1023-byte pkts 0 Multicasts, 0 Broadcasts, 0 Unicasts 0 throttles, 0 discarded, 0 collisions, 0 wreddrops Rate info (inte
The following log message is displayed when LACP link-fallback is removed: Feb 26 15:53:32: %STKUNIT0-M:CP %SMUX-5-SMUX_LACP_PDU_RECEIVED_FROM_PEER: LACP PDU received from PEER and connectivity to PEER will be restored to Uplink Port-channel 128.
Table 13.
Input 00.00 Mbits/sec, 0 packets/sec, 0.00% of line-rate Output 00.00 Mbits/sec, 4 packets/sec, 0.00% of line-rate Time since last interface status change: 00:11:42 show lacp 128 Command Example Dell# show lacp 128 Port-channel 128 admin up, oper up, mode lacp Actor System ID: Priority 32768, Address 0001.e8e1.e1c3 Partner System ID: Priority 32768, Address 0001.e88b.
Port State: Bundle Actor Admin: State ADEHJLMP Key 128 Priority 32768 Oper: State ADEHJLMP Key 128 Priority 32768 Partner is not present Port Te 0/50 is disabled, LACP is disabled and mode is lacp Port State: Bundle Actor Admin: State ADEHJLMP Key 128 Priority 32768 Oper: State ADEHJLMP Key 128 Priority 32768 Partner is not present Port Te 0/51 is disabled, LACP is disabled and mode is lacp Port State: Bundle Actor Admin: State ADEHJLMP Key 128 Priority 32768 Oper: State ADEHJLMP Key 128 Priority 32768 Part
135 packets, 19315 bytes, 0 underruns 0 64-byte pkts, 79 over 64-byte pkts, 32 over 127-byte pkts 24 over 255-byte pkts, 0 over 511-byte pkts, 0 over 1023-byte pkts 93 Multicasts, 42 Broadcasts, 0 Unicasts 0 throttles, 0 discarded, 0 collisions, 0 wreddrops Rate info (interval 299 seconds): Input 00.00 Mbits/sec, 0 packets/sec, 0.00% of line-rate Output 00.00 Mbits/sec, 0 packets/sec, 0.
Dell(conf-if-te-0/43-lacp)#end Dell# 3. Show the LAG configurations and operational status. Dell#show interface port-channel brief Codes: L - LACP Port-channel O - OpenFlow Controller Port-channel LAG Mode Status Uptime Ports L 10 L3 up 00:01:00 Te 0/41 L 11 Dell# L3 up 00:00:01 (Up) Te 0/43 (Up) Te 0/42 (Up) 4. Configure the port mode, VLAN, and so forth on the port-channel.
Multiple Uplink LAGs with 40G Member Ports By default in IOA, native 40G QSFP+ optional-module ports operate in quad (4x10G) mode. To convert quad mode to native 40G mode, refer to the sample configuration. Note that converting between quad mode and native mode, in either direction, requires a system reload for the configuration changes to take effect. The following sample commands configure multiple dynamic uplink LAGs with 40G member ports based on LACP. 1.
Dell(conf-if-po-21)#portmode hybrid Dell(conf-if-po-21)#switchport Dell(conf-if-po-21)#no shut Dell(conf-if-po-21)#end Dell# 5. Show the port channel status.
13 Layer 2 The Aggregator supports CLI commands to manage the MAC address table: ● Clearing the MAC Address Entries ● Displaying the MAC Address Table The Aggregator auto-configures with support for Network Interface Controller (NIC) Teaming. NOTE: On an Aggregator, all ports are configured by default as members of all (4094) VLANs, including the default VLAN. All VLANs operate in Layer 2 mode.
Displaying the MAC Address Table To display the MAC address table, use the following command. ● Display the contents of the MAC address table. EXEC Privilege mode NOTE: This command is available only in PMUX mode. show mac-address-table [address | aging-time [vlan vlan-id]| count | dynamic | interface | static | vlan] ○ ○ ○ ○ ○ ○ ○ address: displays the specified entry. aging-time: displays the configured aging-time.
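For example (a hedged sketch; the VLAN ID is illustrative):

Dell#show mac-address-table count vlan 10
Dell#show mac-address-table dynamic
Dell#clear mac-address-table dynamic all

The clear command removes learned (dynamic) entries, as referenced in Clearing the MAC Address Entries.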
Figure 18. Redundant NOCs with NIC Teaming MAC Address Station Move When you use NIC teaming, consider that the server MAC address is originally learned on Port 0/1 of the switch (see figure below). If the NIC fails, the same MAC address is learned on Port 0/5 of the switch. The MAC address is disassociated with one port and re-associated with another in the ARP table; in other words, the ARP entry is “moved”. The Aggregator is auto-configured to support MAC Address station moves. Figure 19.
MAC Move Optimization Station-move detection takes 5000ms because this is the interval at which the detection algorithm runs.
14 Link Layer Discovery Protocol (LLDP) Link layer discovery protocol (LLDP) advertises connectivity and management from the local station to the adjacent stations on an IEEE 802 LAN. LLDP facilitates multi-vendor interoperability by using standard management tools to discover and make available a physical topology for network management. The Dell Networking operating software implementation of LLDP is based on IEEE standard 802.1AB.
Figure 20. Type, Length, Value (TLV) Segment TLVs are encapsulated in a frame called an LLDP data unit (LLDPDU), which is transmitted from one LLDP-enabled device to its LLDP-enabled neighbors. LLDP is a one-way protocol. LLDP-enabled devices (LLDP agents) can transmit and/or receive advertisements, but they cannot solicit and do not respond to advertisements. There are five types of TLVs (as shown in the following table). All types are mandatory in the construction of an LLDPDU except Optional TLVs.
Related Configuration Tasks ● ● ● ● ● Viewing the LLDP Configuration Viewing Information Advertised by Adjacent LLDP Agents Configuring LLDPDU Intervals Configuring a Time to Live Debugging LLDP Important Points to Remember ● LLDP is enabled by default. ● Dell Networking systems support up to eight neighbors per interface. ● Dell Networking systems support a maximum of 8000 total neighbors per system.
Enabling LLDP LLDP is enabled by default. Enable and disable LLDP globally or per interface. If you enable LLDP globally, all UP interfaces send periodic LLDPDUs. To enable LLDP, use the following command. 1. Enter Protocol LLDP mode. CONFIGURATION or INTERFACE mode protocol lldp 2. Enable LLDP. PROTOCOL LLDP mode no disable Disabling and Undoing LLDP To disable or undo LLDP, use the following command. ● Disable LLDP globally or for an interface.
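The enable and disable steps above can be combined as follows; LLDP can also be disabled per interface by entering PROTOCOL LLDP mode under the interface (the port number is illustrative):

Dell(conf)#protocol lldp
Dell(conf-lldp)#no disable
Dell(conf-lldp)#exit
Dell(conf)#interface tengigabitethernet 0/3
Dell(conf-if-te-0/3)#protocol lldp
Dell(conf-if-te-0/3-lldp)#disable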
In the following example, LLDP is enabled globally. R1 and R2 are transmitting periodic LLDPDUs that contain management, 802.1, and 802.3 TLVs. Figure 22. Configuring LLDP Optional TLVs The Dell Networking Operating System (OS) supports the following optional TLVs: Management TLVs, IEEE 802.1 and 802.3 organizationally specific TLVs, and TIA-1057 organizationally specific TLVs. Management TLVs A management TLV is an optional TLV sub-type.
Table 15. Optional TLV Types (continued) Type TLV Description 4 Port description A user-defined alphanumeric string that describes the port. The Dell Networking OS does not currently support this TLV. 5 System name A user-defined alphanumeric string that identifies the system. 6 System description A user-defined alphanumeric string that identifies the system.
Table 15. Optional TLV Types (continued) Type TLV Description 127 Maximum Frame Size Indicates the maximum frame size capability of the MAC and PHY. LLDP-MED Capabilities TLV The LLDP-MED capabilities TLV communicates the types of TLVs that the endpoint device and the network connectivity device support. LLDP-MED network connectivity devices must transmit the Network Policies TLV.
● Layer 2 priority ● DSCP value An integer represents the application type (the Type integer shown in the following table), which indicates a device function for which a unique network policy is defined. An individual LLDP-MED network policy TLV is generated for each application type that you specify with the CLI (Advertising TLVs).
● Power Priority — there are three possible priorities: Low, High, and Critical. On Dell Networking systems, the default power priority is High, which corresponds to a value of 2 based on the TIA-1057 specification. You can configure a different power priority through the CLI. Dell Networking also honors the power priority value the powered device sends; however, the CLI configuration takes precedence. ● Power Value — Dell Networking advertises the maximum amount of power that can be supplied on the port.
The system increments the TLV discard counter and does not store unrecognized LLDP TLV information in the following scenarios: ● If multiple TLVs with the same information are received ● If DCBX is down on the receiving interface The organizationally specific TLV list is limited to 256 entries per neighbor. If there are more than 256 TLV entries, the oldest entry (of that neighbor) in the list is replaced.
Viewing Information Advertised by Adjacent LLDP Agents To view brief information about adjacent devices or to view all the information that neighbors are advertising, use the following commands. ● Display brief information about adjacent devices. show lldp neighbors ● Display all of the information that neighbors are advertising.
Remote Port ID: 00:00:c9:ad:f6:12 Local Port ID: TenGigabitEthernet 0/3 Configuring LLDPDU Intervals LLDPDUs are transmitted periodically; the default interval is 30 seconds. To configure LLDPDU intervals, use the following command. ● Configure a non-default transmit interval.
no disable R1(conf-lldp)#no multiplier R1(conf-lldp)#show config ! protocol lldp advertise dot1-tlv port-protocol-vlan-id port-vlan-id advertise dot3-tlv max-frame-size advertise management-tlv system-capabilities system-description no disable R1(conf-lldp)# Clearing LLDP Counters You can clear LLDP statistics that are maintained on an Aggregator for LLDP counters for frames transmitted to and received from neighboring devices on all or a specified physical interface.
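A hedged sketch of the interval and counter commands discussed in this section (the 10-second interval and the interface are illustrative):

Dell(conf)#protocol lldp
Dell(conf-lldp)#hello 10
Dell(conf-lldp)#end
Dell#clear lldp counters interface tengigabitethernet 0/3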
Figure 27. The debug lldp detail Command — LLDPDU Packet Dissection Relevant Management Objects Dell Networking OS supports all IEEE 802.1AB MIB objects. The following tables list the objects associated with: ● Received and transmitted TLVs ● LLDP configuration on the local agent ● IEEE 802.1AB Organizationally Specific TLVs ● Received and transmitted LLDP-MED TLVs Table 19.
Table 19. LLDP Configuration MIB Objects (continued) MIB Object Category LLDP Statistics LLDP Variable LLDP MIB Object Description mibMgmtAddrInstanceTxEnable lldpManAddrPortsTxEnable The management addresses defined for the system and the ports through which they are enabled for transmission. statsAgeoutsTotal lldpStatsRxPortAgeoutsTotal Total number of times that a neighbor’s information is deleted on the local system due to an rxInfoTTL timer expiration.
Table 20.
Table 22.
Table 22.
15 Object Tracking IPv4 or IPv6 object tracking is available on Dell Networking OS. Object tracking allows the Dell Networking OS client processes, such as virtual router redundancy protocol (VRRP), to monitor tracked objects (for example, interface or link status) and take appropriate action when the state of an object changes. NOTE: In Dell Networking OS release version 9.7(0.0), object tracking is supported only on VRRP.
Figure 28. Object Tracking Example When you configure a tracked object, such as an IPv4 or IPv6 route or an interface, you specify an object number to identify the object. Optionally, you can also specify: ● UP and DOWN thresholds used to report changes in a route metric. ● A time delay before changes in a tracked object’s state are reported to a client. Track Layer 2 Interfaces You can create an object to track the line-protocol state of a Layer 2 interface.
A tracked route matches a route in the routing table only if the exact address and prefix length match an entry in the routing table. For example, when configured as a tracked route, 10.0.0.0/24 does not match the routing table entry 10.0.0.0/8. If no route-table entry has the exact address and prefix length, the tracked route is considered to be DOWN.
To remove object tracking on a Layer 2 interface, use the no track object-id command. To configure object tracking on the status of a Layer 2 interface, use the following commands.
1. Configure object tracking on the line-protocol state of a Layer 2 interface.
CONFIGURATION mode
track object-id interface interface line-protocol
Valid object IDs are from 1 to 65535.
2. (Optional) Configure the time delay used before communicating a change in the status of a tracked interface.
OBJECT TRACKING mode
delay {[up seconds] [down seconds]}
Valid delay times are from 0 to 180 seconds. The default is 0.
3. (Optional) Identify the tracked object with a text description.
OBJECT TRACKING mode
description text
The text string can be up to 80 characters.
4. (Optional) Display the tracking configuration and the tracked object’s status.
If the ARP cache entry for the next-hop address of a route tracked for its reachability ages out, an attempt is made to regenerate the ARP cache entry to see if the next-hop address appears before considering the route DOWN. ● By comparing the threshold for a route’s metric with current entries in the route table. The UP/DOWN state of the tracked route is determined by the threshold for the current value of the route metric in the routing table.
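The metric-threshold comparison can be modeled roughly as follows. The exact comparison semantics (inclusive vs. exclusive, behavior between thresholds) are assumptions for illustration.

```python
def route_state_from_metric(metric, up_threshold, down_threshold):
    """Illustrative model: UP while the route metric is at or below the
    UP threshold, DOWN once it reaches the DOWN threshold or the route
    disappears from the table; in between, keep the previous state."""
    if metric is None:            # route absent from the routing table
        return "DOWN"
    if metric >= down_threshold:
        return "DOWN"
    if metric <= up_threshold:
        return "UP"
    return "UNCHANGED"            # between thresholds: no state change

print(route_state_from_metric(90, up_threshold=100, down_threshold=200))   # UP
print(route_state_from_metric(250, up_threshold=100, down_threshold=200))  # DOWN
```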
IPv6 route 2050::/64 reachability
Reachability is Up (STATIC)
5 changes, last change 00:02:16
First-hop interface is TenGigabitEthernet 1/2
Tracked by:
VRRP TenGigabitEthernet 2/30 IPv6 VRID 1

Track 4
Interface TenGigabitEthernet 1/4 ip routing
IP routing is Up
3 changes, last change 00:03:30
Tracked by:

Example of the show track brief Command
Router# show track brief
ResId  Resource               Parameter  State  LastChange
1      IP route reachability  10.16.0.
16 Port Monitoring The Aggregator supports user-configured port monitoring. See Configuring Port Monitoring for the configuration commands to use. Port monitoring copies all incoming or outgoing packets on one port and forwards (mirrors) them to another port. The source port is the monitored port (MD) and the destination port is the monitoring port (MG).
Figure 29. Port Monitoring Example Important Points to Remember ● Port monitoring is supported on physical ports only; virtual local area network (VLAN) and port-channel interfaces do not support port monitoring. ● The monitored (the source, [MD]) and monitoring ports (the destination, [MG]) must be on the same switch. ● The monitored (source) interface must be a server-facing interface in the format slot/port, where the valid slot numbers are 0 or 1 and server-facing port numbers are from 1 to 32.
The Aggregator supports multiple source-destination statements in a monitor session, but there may be only one destination port in a monitoring session (% Error: Only one MG port is allowed in a session.). The number of source ports the Dell Networking OS allows within a port-pipe is equal to the number of physical ports in the port-pipe (n).
100  TenGig 0/25  TenGig 0/38  tx  interface  Port-based
110  TenGig 0/26  TenGig 0/39  tx  interface  Port-based
300  TenGig 0/17  TenGig 0/1   tx  interface  Port-based
Dell(conf-mon-sess-300)#
A source port may only be monitored by one destination port (% Error: Exceeding max MG ports for this MD port pipe.), but a destination port may monitor more than one source port.
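These session rules — one MG port per session, and each MD port mirrored by at most one MG — can be sketched as a small validator. This is a hypothetical helper for illustration, not part of the OS; the error strings mirror the CLI messages quoted above.

```python
def validate_session(sources, destination, existing_md_to_mg):
    """Check mirroring rules: a session has exactly one MG (destination)
    port, and an MD (source) port may be mirrored by only one MG port.
    existing_md_to_mg maps already-monitored MD ports to their MG port."""
    errors = []
    if isinstance(destination, (list, tuple)):
        errors.append("% Error: Only one MG port is allowed in a session.")
        return errors
    for md in sources:
        mg = existing_md_to_mg.get(md)
        if mg is not None and mg != destination:
            errors.append("% Error: Exceeding max MG ports for this MD port pipe.")
    return errors

# One MG port may monitor several MD ports...
print(validate_session(["TenGig 0/25", "TenGig 0/26"], "TenGig 0/38", {}))  # []
# ...but an MD port already mirrored to another MG port is rejected.
print(validate_session(["TenGig 0/25"], "TenGig 0/39",
                       {"TenGig 0/25": "TenGig 0/38"}))
```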
17 Security The Aggregator provides many security features. This chapter describes several ways to provide access security to the Dell Networking system. For details about all the commands described in this chapter, see the Security chapter in the Dell PowerEdge Command Line Reference Guide for the M I/O Aggregator. Supported Modes Standalone, PMUX, VLT, Stacking NOTE: You can also perform some of the configurations using the Web GUI - Dell Blade IO Manager.
only using the CMC GUI. You can use the no version of this command to reactivate the Telnet or SSH session capability for the device. Use the show restrict-access command to view whether the access to a device using Telnet or SSH is disabled or not. AAA Authentication Dell EMC Networking OS supports a distributed client/server system implemented through authentication, authorization, and accounting (AAA) to help secure networks against unauthorized access.
● line: use the password you defined using the password command in LINE mode.
● local: use the username/password database defined in the local configuration.
● none: no authentication.
● radius: use the RADIUS servers configured with the radius-server host command.
● tacacs+: use the TACACS+ servers configured with the tacacs-server host command.
2. Enter LINE mode.
CONFIGURATION mode
line {aux 0 | console 0 | vty number [... end-number]}
3.
To use local authentication for enable secret on the console, while using remote authentication on VTY lines, issue the following commands.
AAA Authorization The Dell Networking OS enables AAA new-model by default. You can set authorization to be either local or remote. Different combinations of authentication and authorization yield different results. By default, the system sets both to local. Privilege Levels Overview Limiting access to the system is one method of protecting the system and your network. However, at times, you might need to allow others access to the router and you can limit that access to a subset of commands.
○ name: Enter a text string up to 63 characters long.
○ access-class access-list-name: Restrict access by access-class.
○ nopassword: Allow the user to log in without a password.
○ encryption-type: Enter 0 for plain text or 7 for encrypted text.
○ password: Enter a string. Specify the password for the user.
○ privilege level: The range is from 0 to 15.
○ secret: Specify the secret for the user.
To view the usernames, use the show users command in EXEC Privilege mode.
CONFIGURATION mode
enable password [level level] [encryption-type] password
Configure the optional and required parameters:
● level level: specify a level from 0 to 15. Level 15 includes all levels.
● encryption-type: enter 0 for plain text or 7 for encrypted text.
● password: enter a text string up to 32 characters long.
To change only the password for the enable command, configure only the password parameter.
3. Configure level and commands for a mode or reset a command’s level.
Dell#?
configure     Configuring from terminal
disable       Turn off privileged commands
enable        Turn on privileged commands
exit          Exit from the EXEC
no            Negate a command
show          Show running system information
terminal      Set terminal line parameters
traceroute    Trace route to destination
Dell#confi
Dell(conf)#?
end           Exit from Configuration mode
Specifying LINE Mode Password and Privilege
You can specify password authentication for all users on different terminal lines.
● Access-Reject — the RADIUS server does not authenticate the user. If an error occurs in the transmission or reception of RADIUS packets, you can view the error by enabling the debug radius command. Transactions between the RADIUS server and the client are encrypted (the users’ passwords are not sent in plain text). RADIUS uses UDP as the transport protocol between the RADIUS server host and the client. For more information about RADIUS, refer to RFC 2865, Remote Authentication Dial-in User Service.
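The password hiding that RFC 2865 specifies — the User-Password attribute is XORed with an MD5 keystream derived from the shared secret and the packet's Request Authenticator — can be sketched in Python. This is a protocol illustration under RFC 2865, not Dell OS code; the secret and password values are hypothetical.

```python
import hashlib
import os

def hide_password(password: bytes, secret: bytes, authenticator: bytes) -> bytes:
    """RFC 2865 User-Password hiding: pad to a 16-byte multiple, then
    XOR each 16-byte block with MD5(secret + previous ciphertext block),
    seeding the chain with the Request Authenticator."""
    padded = password + b"\x00" * (-len(password) % 16)
    out, prev = b"", authenticator
    for i in range(0, len(padded), 16):
        keystream = hashlib.md5(secret + prev).digest()
        block = bytes(p ^ k for p, k in zip(padded[i:i + 16], keystream))
        out += block
        prev = block
    return out

def reveal_password(hidden: bytes, secret: bytes, authenticator: bytes) -> bytes:
    """Inverse operation (the XOR chain is symmetric); strips the null padding."""
    out, prev = b"", authenticator
    for i in range(0, len(hidden), 16):
        keystream = hashlib.md5(secret + prev).digest()
        out += bytes(c ^ k for c, k in zip(hidden[i:i + 16], keystream))
        prev = hidden[i:i + 16]
    return out.rstrip(b"\x00")

auth = os.urandom(16)  # the 16-byte Request Authenticator from the packet
hidden = hide_password(b"mypassword", b"shared-secret", auth)
print(reveal_password(hidden, b"shared-secret", auth))  # b'mypassword'
```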
Privilege Levels Through the RADIUS server, you can configure a privilege level for the user to enter into when they connect to a session. This value is configured on the client system. ● Set a privilege level. privilege level Configuration Task List for RADIUS To authenticate users using RADIUS, you must specify at least one RADIUS server so that the system can communicate with and configure RADIUS as one of your authentication methods. The following list includes the configuration tasks for RADIUS.
login authentication {method-list-name | default} This procedure is mandatory if you are not using default lists. ● To use the method list. CONFIGURATION mode authorization exec methodlist Specifying a RADIUS Server Host When configuring a RADIUS server host, you can set different communication parameters, such as the UDP port, the key password, the number of retries, and the timeout. To specify a RADIUS server host and configure its communication parameters, use the following command.
radius-server retransmit retries ○ retries: the range is from 0 to 100. Default is 3 retries. ● Configure the time interval the system waits for a RADIUS server host response. CONFIGURATION mode radius-server timeout seconds ○ seconds: the range is from 0 to 1000. Default is 5 seconds. To view the configuration of RADIUS communication parameters, use the show running-config command in EXEC Privilege mode. Monitoring RADIUS To view information on RADIUS transactions, use the following command.
CONFIGURATION mode
line {aux 0 | console 0 | vty number [end-number]}
4. Assign the method-list to the terminal line.
LINE mode
login authentication {method-list-name | default}
To view the configuration, use the show config command in LINE mode or the show running-config tacacs+ command in EXEC Privilege mode. If authentication fails using the primary method, Dell EMC Networking OS employs the second method (or third method, if necessary) automatically.
Example of Specifying a TACACS+ Server Host DellEMC(conf)# DellEMC(conf)#aaa authentication login tacacsmethod tacacs+ DellEMC(conf)#aaa authentication exec tacacsauthorization tacacs+ DellEMC(conf)#tacacs-server host 25.1.1.2 key Force DellEMC(conf)# DellEMC(conf)#line vty 0 9 DellEMC(config-line-vty)#login authentication tacacsmethod DellEMC(config-line-vty)#end Specifying a TACACS+ Server Host To specify a TACACS+ server host and configure its communication parameters, use the following command.
hostname is the IP address or host name of the remote device. Enter an IPv4 or IPv6 address in dotted decimal format (A.B.C.D). ● SSH V2 is enabled by default on all the modes. ● Display SSH connection information. EXEC Privilege mode show ip ssh The following example uses the ip ssh server version 2 command to enable SSH version 2 and the show ip ssh command to confirm the setting. DellEMC(conf)#ip ssh server version 2 DellEMC(conf)#do show ip ssh SSH server : enabled. SSH server version : v2.
● ip ssh password-authentication enable: enable password authentication for the SSH server.
● ip ssh pub-key-file: specify the file the host-based authentication uses.
● ip ssh rhostsfile: specify the rhost file the host-based authorization uses.
● ip ssh rsa-authentication enable: enable RSA authentication for the SSHv2 server.
● ip ssh rsa-authentication: add keys for the RSA authentication.
● show crypto: display the public part of the SSH host-keys.
CONFIGURATION Mode
ip ssh rsa-authentication enable
5. Install user’s public key for RSA authentication in SSH.
EXEC Privilege Mode
ip ssh rsa-authentication username username my-authorized-keys flash://public_key
If you provide the username, the Dell EMC Networking OS installs the public key for that specific user. If no user is associated with the current logged-in session, the system displays the following error message.
admin@Unix_client# ls id_rsa id_rsa.pub shosts admin@Unix_client# cat shosts 10.16.127.201, ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEA8K7jLZRVfjgHJzUOmXxuIbZx/AyW hVgJDQh39k8v3e8eQvLnHBIsqIL8jVy1QHhUeb7GaDlJVEDAMz30myqQbJgXBBRTWgBpLWwL/ doyUXFufjiL9YmoVTkbKcFmxJEMkE3JyHanEi7hg34LChjk9hL1by8cYZP2kYS2lnSyQWk= The following example shows creating rhosts. admin@Unix_client# ls id_rsa id_rsa.pub rhosts shosts admin@Unix_client# cat rhosts 10.16.127.
VTY Line and Access-Class Configuration
Various methods are available to restrict VTY access in Dell EMC Networking OS. These depend on which authentication scheme you use — line, local, or remote.
Table 23. VTY Access
Authentication Method  VTY access-class support?  Username access-class support?  Remote authorization support?
Line     YES  NO   NO
Local    NO   YES  NO
TACACS+  YES  NO   YES (with version 5.2.1.0 and later)
RADIUS   YES  NO   YES (with version 6.1.1.
VTY Line Remote Authentication and Authorization
Dell EMC Networking OS retrieves the access class from the VTY line and applies it to ALL users. The system does not need to know the identity of the incoming user and can immediately apply the access class. If the authentication method is RADIUS, TACACS+, or line, and you have configured an access class for the VTY line, the system immediately applies it.
Important Points to Remember
● The OS image verification feature is disabled by default on the Dell EMC Networking OS.
● The OS image verification feature is supported for images stored in the local system only.
● The OS image verification feature is not supported when the fastboot or the warmboot features are enabled on the system.
● If OS image verification fails after a reload, the system does not load the startup configuration.
Important Points to Remember
● The startup configuration verification feature is disabled by default on the Dell EMC Networking OS.
● The feature is supported for startup configuration files stored in the local system only.
● The feature is not supported when the fastboot or the warmboot features are enabled on the system.
● If the startup configuration verification fails after a reload, the system does not load your startup configuration.
Enter an encryption type for the root password.
○ 0 directs the system to store the password as clear text.
○ 7 directs the system to store the password with a dynamic salt.
○ 9 directs the system to encrypt the clear text password and store the encrypted password in an inaccessible location.
18 Simple Network Management Protocol (SNMP) Network management stations use SNMP to retrieve or alter management data from network elements. A datum of management information is called a managed object; the value of a managed object can be static or variable. Network elements store managed objects in a database called a management information base (MIB).
Implementation Information The Dell Networking OS supports SNMP version 1 as defined by RFC 1155, 1157, and 1212, and SNMP version 2c as defined by RFC 1901. Configuring the Simple Network Management Protocol NOTE: The configurations in this chapter use a UNIX environment with net-snmp version 5.4. This is only one of many RFC-compliant SNMP utilities you can use to manage the Aggregator using SNMP. Also, these configurations use SNMP version 2c.
snmp-server community mycommunity ro Dell# Setting Up User-Based Security (SNMPv3) When setting up SNMPv3, you can set users up with one of the following three types of configuration for SNMP read/write operations. Users are typically associated to an SNMP group with permissions provided, such as OID view. ● noauth — no password or privacy. Select this option to set up a user with no password or privacy privileges. This setting is the basic configuration.
Dell(conf)#snmp-server host 1.1.1.1 traps {oid tree} version 3 noauth ? WORD SNMPv3 user name Subscribing to Managed Object Value Updates using SNMP By default, the Dell Networking system displays some unsolicited SNMP messages (traps) upon certain events and conditions. You can also configure the system to send the traps to a management station. Traps cannot be saved on the system.
NOTE: The envmon option enables all environment traps including those traps that are enabled with the envmon supply, envmon temperature, and envmon fan options.
envmon
CARD_SHUTDOWN: %sLine card %d down - %s
CARD_DOWN: %sLine card %d down - %s
LINECARDUP: %sLine card %d is up
CARD_MISMATCH: Mismatch: line card %d is type %s - type %s required.
%ECFM-5-ECFM_XCON_ALARM: Cross connect fault detected by MEP 1 in Domain customer1 at Level 7 VLAN 1000 %ECFM-5-ECFM_ERROR_ALARM: Error CCM Defect detected by MEP 1 in Domain customer1 at Level 7 VLAN 1000 %ECFM-5-ECFM_MAC_STATUS_ALARM: MAC Status Defect detected by MEP 1 in Domain provider at Level 4 VLAN 3000 %ECFM-5-ECFM_REMOTE_ALARM: Remote CCM Defect detected by MEP 3 in Domain customer1 at Level 7 VLAN 1000 %ECFM-5-ECFM_RDI_ALARM: RDI Defect detected by MEP 3 in Domain customer1 at Level 7 VLAN 1000 e
> snmpgetnext -v 2c -c mycommunity 10.11.131.161 sysContact.0 SNMPv2-MIB::sysName.0 = STRING: Example of Reading the Value of Many Managed Objects at Once > snmpwalk -v 2c -c mycommunity 10.16.130.148 .1.3.6.1.2.1.1 SNMPv2-MIB::sysDescr.0 = STRING: Dell Networking OS Operating System Version: 1.0 Application Software Version: E8-3-17-46 Series: I/O-Aggregator Copyright (c) 1999-2012 by Dell Inc. All Rights Reserved. Build Time: Sat Jul 28 03:20:24 PDT 2012 SNMPv2-MIB::sysObjectID.
Codes: * - Default VLAN, G - GVRP VLANs
Q: U - Untagged, T - Tagged
   x - Dot1x untagged, X - Dot1x tagged
   G - GVRP tagged, M - Vlan-stack

NUM  Status    Description  Q Ports
10   Inactive               U Tengig 0/2

[Unix system output]
> snmpget -v2c -c mycommunity 10.11.131.185 .1.3.6.1.2.1.17.7.1.4.3.1.2.1107787786
SNMPv2-SMI::mib-2.17.7.1.4.3.1.2.
Example of Fetching Dynamic MAC Addresses on the Default VLAN -----------------------------MAC Addresses on Dell Networking System------------------------------Dell#show mac-address-table VlanId Mac Address Type Interface State 1 00:01:e8:06:95:ac Dynamic Tengig 0/7 Active ----------------Query from Management Station--------------------->snmpwalk -v 2c -c techpubs 10.11.131.162 .1.3.6.1.2.1.17.4.3.1 SNMPv2-SMI::mib-2.17.4.3.1.1.0.1.232.6.149.
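In the walk above, the dot1dTpFdbTable row index is the MAC address itself, encoded as six decimal sub-identifiers at the end of the OID. A short sketch decoding that suffix (illustrative helper; the example uses the MAC shown in the show mac-address-table output):

```python
def mac_from_oid(oid: str) -> str:
    """Decode a dot1dTpFdbTable instance OID: the last six decimal
    sub-identifiers are the octets of the MAC address."""
    octets = [int(part) for part in oid.strip(".").split(".")[-6:]]
    return ":".join(f"{o:02x}" for o in octets)

# Instance OID of the FDB entry for MAC 00:01:e8:06:95:ac.
oid = "SNMPv2-SMI::mib-2.17.4.3.1.1.0.1.232.6.149.172"
print(mac_from_oid(oid.split("::")[-1]))  # 00:01:e8:06:95:ac
```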
For interface indexing, slot and port numbering begins with binary one. If the Dell Networking system begins slot and port numbering from 0, binary 1 represents slot and port 0. In the S4810, the first interface is 0/0, but in the Aggregator the first interface is 0/1. Hence, in the Aggregator the ifIndex for 0/0 is unused and the ifIndex creation logic is not changed. Because zero is reserved for logical interfaces, physical interface indexing starts from 1. For the first interface, the port number is set to 1.
Table 26. MIB Objects to display egress queue statistics (continued) MIB Object OID Description dellNetFpEgrQTxBytesRate 1.3.6.1.4.1.6027.3.27.1.20.1.7 Rate of Bytes transmitted per Unicast/ Multicast Egress queue. dellNetFpEgrQDroppedPacketsRate 1.3.6.1.4.1.6027.3.27.1.20.1.8 Rate of Packets dropped per Unicast/ Multicast Egress queue. dellNetFpEgrQDroppedBytesRate 1.3.6.1.4.1.6027.3.27.1.20.1.9 Rate of Bytes dropped per Unicast/ Multicast Egress queue.
● snmp-server user admin admingroup 3 auth md5 helloworld ● snmp mib community-map VRF1 context cx1 ● snmp mib community-map VRF2 context cx2 ● snmp-server view readview .1 included ● snmp-server view writeview .1 included 2. Configure snmp context under the VRF instances. ● sho run bgp ● router bgp 100 ● address-family ipv4 vrf vrf1 ● snmp context context1 ● neighbor 20.1.1.1 remote-as 200 ● neighbor 20.1.1.
Example of SNMP Trap for Monitored Port-Channels [senthilnathan@lithium ~]$ snmpwalk -v 2c -c public 10.11.1.1 .1.3.6.1.4.1.6027.3.2.1.1 SNMPv2-SMI::enterprises.6027.3.2.1.1.1.1.1.1 = INTEGER: 1 SNMPv2-SMI::enterprises.6027.3.2.1.1.1.1.1.2 = INTEGER: 2 SNMPv2-SMI::enterprises.6027.3.2.1.1.1.1.2.1 = Hex-STRING: 00 01 E8 13 A5 C7 SNMPv2-SMI::enterprises.6027.3.2.1.1.1.1.2.2 = Hex-STRING: 00 01 E8 13 A5 C8 SNMPv2-SMI::enterprises.6027.3.2.1.1.1.1.3.1 = INTEGER: 1107755009 SNMPv2-SMI::enterprises.6027.3.2.1.1.
Entity MIBS The Entity MIB provides a mechanism for presenting hierarchies of physical entities using SNMP tables. The Entity MIB contains the following groups, which describe the physical elements and logical elements of a managed system. The following tables are implemented for the Aggregator. ● Physical Entity: A physical entity or physical component represents an identifiable physical resource within a managed system. Zero or more logical entities may utilize a physical resource at any given time.
SNMPv2-SMI::mib-2.47.1.1.1.1.2.56 = STRING: "Optional module 1" SNMPv2-SMI::mib-2.47.1.1.1.1.2.57 = STRING: "4-port 10GE 10BASE-T (XL) " SNMPv2-SMI::mib-2.47.1.1.1.1.2.58 = STRING: "Unit: 0 Port 49 10G Level" SNMPv2-SMI::mib-2.47.1.1.1.1.2.59 = STRING: "Unit: 0 Port 50 10G Level" SNMPv2-SMI::mib-2.47.1.1.1.1.2.60 = STRING: "Unit: 0 Port 51 10G Level" SNMPv2-SMI::mib-2.47.1.1.1.1.2.61 = STRING: "Unit: 0 Port 52 10G Level" SNMPv2-SMI::mib-2.47.1.1.1.1.2.
Enhancements 1. The dot1qVlanCurrentEgressPorts MIB attribute has been enhanced to support logical LAG interfaces. 2. Current status OID in standard VLAN MIB is accessible over SNMP. 3. The bitmap supports 42 bytes for physical ports and 16 bytes for the LAG interfaces (up to a maximum of 128 LAG interfaces). 4. A 59 byte buffer bitmap is supported and in that bitmap: ● First 42 bytes represent the physical ports. ● Next 16 bytes represent logical ports 1-128. ● An additional 1 byte is reserved for future.
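The 59-byte buffer layout above can be sketched as follows. The MSB-first bit order within each byte follows the usual BRIDGE-MIB PortList convention and is an assumption here, as is the helper itself.

```python
def set_port_bit(bitmap: bytearray, index: int, logical: bool = False) -> None:
    """Mark a port in the 59-byte egress-port bitmap: bytes 0-41 carry
    physical ports, bytes 42-57 carry logical (LAG) ports 1-128, and
    byte 58 is reserved. Bits are numbered MSB-first within a byte."""
    offset = 42 if logical else 0
    byte_pos = offset + (index - 1) // 8
    bit_pos = 7 - ((index - 1) % 8)   # MSB-first, as in the PortList convention
    bitmap[byte_pos] |= 1 << bit_pos

bitmap = bytearray(59)                    # 42 + 16 + 1 reserved
set_port_bit(bitmap, 1)                   # first physical port -> byte 0, bit 7
set_port_bit(bitmap, 128, logical=True)   # LAG 128 -> byte 57, bit 0
print(bitmap[0], bitmap[57])  # 128 1
```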
MIB Support to Display the Available Memory Size on Flash Dell Networking provides more MIB objects to display the available memory size on flash memory. The following table lists the MIB object that contains the available memory size on flash memory. Table 27. MIB Objects for Displaying the Available Memory Size on Flash via SNMP MIB Object OID Description chStackUnitFlashUsageUtil 1.3.6.1.4.1.6027.3.19.1.2.8.1.6 Contains flash memory usage in percentage.
Viewing the Software Core Files Generated by the System
● To view the software core files generated by the system, use the following command.
snmpwalk -v2c -c public 192.168.60.120 .1.3.6.1.4.1.6027.3.10.1.2.10
enterprises.6027.3.10.1.2.10.1.1.1.1 = 1
enterprises.6027.3.10.1.2.10.1.1.1.2 = 2
enterprises.6027.3.10.1.2.10.1.1.1.3 = 3
enterprises.6027.3.10.1.2.10.1.1.2.1 = 1
enterprises.6027.3.10.1.2.10.1.2.1.1 = "/CORE_DUMP_DIR/flashmntr.core.gz"
enterprises.6027.3.10.1.2.10.1.2.1.
snmpwalk -v 2c -c public -On 10.16.150.97 1.3.6.1.4.1.6027.3.26.1.4.8 .1.3.6.1.4.1.6027.3.26.1.4.8.1.2.1 .1.3.6.1.4.1.6027.3.26.1.4.8.1.2.2 .1.3.6.1.4.1.6027.3.26.1.4.8.1.2.3 .1.3.6.1.4.1.6027.3.26.1.4.8.1.2.4 .1.3.6.1.4.1.6027.3.26.1.4.8.1.3.1 .1.3.6.1.4.1.6027.3.26.1.4.8.1.3.2 .1.3.6.1.4.1.6027.3.26.1.4.8.1.3.3 .1.3.6.1.4.1.6027.3.26.1.4.8.1.3.4 .1.3.6.1.4.1.6027.3.26.1.4.8.1.4.1 .1.3.6.1.4.1.6027.3.26.1.4.8.1.4.2 .1.3.6.1.4.1.6027.3.26.1.4.8.1.4.3 .1.3.6.1.4.1.6027.3.26.1.4.8.1.4.4 .1.3.6.1.4.1.6027.3.
Table 30. MIB Objects to display egress queue statistics (continued) MIB Object OID Description dellNetFpEgrQDroppedPacketsRate 1.3.6.1.4.1.6027.3.27.1.20.1.8 Rate of Packets dropped per Unicast/ Multicast Egress queue. dellNetFpEgrQDroppedBytesRate 1.3.6.1.4.1.6027.3.27.1.20.1.9 Rate of Bytes dropped per Unicast/ Multicast Egress queue. MIB Support to Display the ECMP Group Count Dell Networking OS provides MIB objects to display the ECMP group count information.
INTEGER: 1258296320 SNMPv2SMI::enterprises.6027.3.9.1.5.1.8.1.1.4.80.80.80.0.24.1.4.30.1.1.1.1.4.30.1.1.1 = INTEGER: 1275078656 SNMPv2-SMI::enterprises.6027.3.9.1.5.1.8.1.1.4.90.90.90.0.24.0.0.0.0 = INTEGER: 2097157 SNMPv2SMI::enterprises.6027.3.9.1.5.1.8.1.1.4.90.90.90.1.32.1.4.127.0.0.1.1.4.127.0.0.1 = INTEGER: 0 SNMPv2SMI::enterprises.6027.3.9.1.5.1.8.1.1.4.90.90.90.2.32.1.4.90.90.90.2.1.4.90.90.90.2 = INTEGER: 2097157 SNMPv2SMI::enterprises.6027.3.9.1.5.1.8.1.1.4.100.100.100.0.24.1.4.10.1.1.1.1.4.10.1.
SNMPv2-SMI::enterprises.6027.3.9.1.5.1.10.1.1.4.20.1.1.1.32.1.4.20.1.1.1.1.4.20.1.1.1 = STRING: "Po 10" SNMPv2SMI::enterprises.6027.3.9.1.5.1.10.1.1.4.20.1.1.2.32.1.4.127.0.0.1.1.4.127.0.0.1 = STRING: "CP" SNMPv2-SMI::enterprises.6027.3.9.1.5.1.10.1.1.4.30.1.1.0.24.0.0.0.0 = STRING: "CP" SNMPv2-SMI::enterprises.6027.3.9.1.5.1.10.1.1.4.30.1.1.1.32.1.4.30.1.1.1.1.4.30.1.1.1 = STRING: "Po 20" SNMPv2SMI::enterprises.6027.3.9.1.5.1.10.1.1.4.30.1.1.2.32.1.4.127.0.0.1.1.4.127.0.0.
Gauge32: 0 SNMPv2SMI::enterprises.6027.3.9.1.5.1.11.1.1.4.80.80.80.0.24.1.4.30.1.1.1.1.4.30.1.1.1 = Gauge32: 0 SNMPv2-SMI::enterprises.6027.3.9.1.5.1.11.1.1.4.90.90.90.0.24.0.0.0.0 = Gauge32: 0 SNMPv2SMI::enterprises.6027.3.9.1.5.1.11.1.1.4.90.90.90.1.32.1.4.127.0.0.1.1.4.127.0.0.1 = Gauge32: 0 SNMPv2SMI::enterprises.6027.3.9.1.5.1.11.1.1.4.90.90.90.2.32.1.4.90.90.90.2.1.4.90.90.90.2 = Gauge32: 0 SNMPv2SMI::enterprises.6027.3.9.1.5.1.11.1.1.4.100.100.100.0.24.1.4.10.1.1.1.1.4.10.1.1.
MIB Support for LAG Dell EMC Networking provides a method to retrieve the configured LACP information (Actor and Partner). The term Actor (local interface) designates the parameters and flags pertaining to the sending node, while the term Partner (remote interface) designates the sending node’s view of its peer’s parameters and flags. LACP provides a standardized means for exchanging information, with partner systems, to form a link aggregation group (LAG).
Table 33. MIB Objects for LAG (continued) MIB Object OID Description either delivering the frame to its MAC Client or discarding the frame. dot3adAggPortListTable 1.2.840.10006.300.43.1.1.2 Contains a list of all the ports associated with each Aggregator. Each LACP channel in a device occupies an entry in the table. dot3adAggPortListEntry 1.2.840.10006.300.43.1.1.2.1 Contains a list of ports associated with a given Aggregator and indexed by the ifIndex of the Aggregator. dot3adAggPortListPorts 1.
Table 34. MIB Objects for Displaying Reserved Unrecognized LLDP TLVs (continued) MIB Object OID Description lldpRemUnknownTLVInfo 1.0.8802.1.1.2.1.4.3.1.2 Contains value extracted from the value field of the TLV. Viewing the Details of Reserved Unrecognized LLDP TLVs ● To view the information of reserved unrecognized LLDP TLVs using SNMP, use the following commands. snmpwalk -v2c -c mycommunity 10.16.150.83 1.0.8802.1.1.2.1.4 iso.0.8802.1.1.2.1.4.1.1.6.0.2113029.2 = INTEGER: 5 iso.0.8802.1.1.2.1.4.1.
Table 35. MIB Objects for Displaying Organizational Specific Unrecognized LLDP TLVs (continued) MIB Object OID Description this neighbor to identify a particular unrecognized organizationally defined information instance. lldpRemOrgDefInfo 1.0.8802.1.1.2.1.4.4.1.4 Contains the string value used to identify the organizationally defined information of the remote system.
Table 36. SNMP OIDs for Transceiver Monitoring Field (OID) Description SNMPv2-SMI::enterprises.6027.3.11.1.3.1.1.1 Device Name SNMPv2-SMI::enterprises.6027.3.11.1.3.1.1.2 Port SNMPv2-SMI::enterprises.6027.3.11.1.3.1.1.3 Optics Present SNMPv2-SMI::enterprises.6027.3.11.1.3.1.1.4 Optics Type SNMPv2-SMI::enterprises.6027.3.11.1.3.1.1.5 Vendor Name SNMPv2-SMI::enterprises.6027.3.11.1.3.1.1.6 Part Number SNMPv2-SMI::enterprises.6027.3.11.1.3.1.1.7 Serial Number SNMPv2-SMI::enterprises.6027.3.11.
19 Stacking An Aggregator auto-configures to operate in standalone mode. To use an Aggregator in a stack, you must manually configure it using the CLI to operate in stacking mode. In automated Stack mode, the base module 40GbE ports (33 and 37) operate as stack links; this assignment is fixed. In Programmable MUX (PMUX) mode, you can select either the base or optional modules (ports 33 — 56). An Aggregator supports a maximum of six stacking units.
Figure 30. A Two-Aggregator Stack Stack Management Roles The stack elects the management units for the stack management. ● Stack master — primary management unit ● Standby — secondary management unit The master holds the control plane and the other units maintain a local copy of the forwarding databases.
Stack Master Election The stack elects a master and standby unit at bootup time based on MAC address. The unit with the higher MAC value becomes master. To view which switch is the stack master, enter the show system command. The following example shows sample output from an established stack. A ● ● ● change in the stack master occurs when: You power down the stack master or bring the master switch offline. A failover of the master switch occurs. You disconnect the master switch from the stack.
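The election rule — the unit with the numerically highest MAC address wins — can be modeled in a few lines. This is illustrative only; the unit records below are hypothetical.

```python
def elect_master(units):
    """Sketch of the bootup election: the unit whose MAC address has
    the highest numeric value becomes the stack master."""
    return max(units, key=lambda u: int(u["mac"].replace(":", ""), 16))

stack = [
    {"unit": 0, "mac": "00:01:e8:13:a5:c7"},
    {"unit": 1, "mac": "00:01:e8:ff:00:02"},
]
print(elect_master(stack)["unit"])  # 1: 0xff in the fifth octet outranks 0x13
```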
● The stacking LAG dynamically aggregates; it can lose link members or gain new links. ● Shortest path selection inside the stack: if multiple paths exist between two units in the stack, the shortest path is used.
Stacking Port Numbers By default, each Aggregator in Standalone mode is numbered stack-unit 0. Stack-unit numbers are assigned to member switches when the stack comes up. The following example shows the numbers of the 40GbE stacking ports on an Aggregator. Figure 31. Stack Groups on an Aggregator Stacking in PMUX Mode PMUX stacking allows the stacking of two or more IOA units. This allows grouping of multiple units for high availability. IOA supports a maximum of six stacking units.
Dell(conf)#stack-unit 0 stack-group 1 Dell(conf)#00:37:57: %STKUNIT0-M:CP %IFMGR-6-STACK_PORTS_ADDED: Ports Fo 0/37 have been configured as stacking ports. Please save and reset stack-unit 0 for config to take effect Dell(conf)#end Dell#00:38:16: %STKUNIT0-M:CP %SYS-5-CONFIG_I: Configured from console 2. Reload the stack units. Dell#reload Proceed with reload [confirm yes/no]: yes 3. Show the units stacking status.
3. A unit is selected as Standby by the administrator, and a failover action is manually initiated or occurs due to a Master unit failure. No record of previous stack mastership is kept when a stack loses power. As it reboots, the election process will once again determine the Master and Standby switches. As long as the priority has not changed on any members, the stack will retain the same Master and Standby.
Cabling Redundancy Connect the units in a stack with two or more stacking cables to avoid a stacking port or cable failure. Removing one of the stacked cables between two stacked units does not trigger a reset. Cabling Procedure The following cabling procedure uses the stacking topology shown earlier in this chapter. To connect the cabling: 1. Connect a 40GbE base port on the first Aggregator to a 40GbE base port on another Aggregator in the same chassis. 2.
Adding a Stack Unit You can add a new unit to an existing stack both when the unit has no stacking ports (stack groups) configured and when the unit already has stacking ports configured. If the units to be added to the stack have been previously used, they are assigned the smallest available unit ID in the stack. To add a standalone Aggregator to a stack, follow these steps: 1. Power on the switch. 2.
2. Log on to the CLI and enter Global Configuration mode. Login: username Password: ***** Dell> enable Dell# configure 3. Configure the Aggregator to operate in standalone mode. stack-unit 0 iom-mode standalone CONFIGURATION 4. Log on to the CLI and reboot each switch, one after another, in as short a time as possible. reload EXEC PRIVILEGE When the reload completes, the base-module ports comes up in 4x10GbE (quad) mode.
Table 37. Speeds in Different Aggregator Modes (continued) Module Type Standalone 10G mode Standalone 40G Mode Stacking 10G Mode Stacking 40G mode VLT 10G Mode VLT 40G Mode FC module 10G 10G 10G 10G 10G 10G To configure the uplink speed of the member interfaces in a LAG bundle to be 40 GbE Ethernet per second for the Aggregator that operates in standalone, stacking, or VLT mode, perform the following steps: Specify the uplink speed as 40 GbE.
Using Show Commands To display information on the stack configuration, use the show commands on the master switch. ● Displays stacking roles (master, standby, and member units) and the stack MAC address. show system [brief] ● Displays the FlexIO modules currently installed in expansion slots 0 and 1 on a switch and the expected module logically provisioned for the slot. show inventory optional-module ● Displays the stack groups allocated on a stacked switch. The range is from 0 to 5.
Status : not present Required Type : Example of the show inventory optional-module Command Dell# show inventory optional-module Unit Slot Expected Inserted Next Boot Power -----------------------------------------0 0 SFP+ SFP+ AUTO Good 0 1 QSFP+ QSFP+ AUTO Good * - Mismatch Example of the show system stack-unit stack-group configured Command Dell# show system stack-unit 1 stack-group configured Configured stack groups in stack-unit 1 --------------------------------------0 1 4 5 Example of the show system
2. Displays the master standby unit status, failover configuration, and result of the last master-standby synchronization; allows you to verify the readiness for a stack failover. show redundancy 3. Displays input and output flow statistics on a stacked port. show hardware stack-unit unit-number stack-port port-number 4. Clears statistics on the specified stack unit. The valid stack-unit numbers are from 0 to 5. clear hardware stack-unit unit-number counters 5.
Example of the show hardware stack-unit port-stack Command Dell# show hardware stack-unit 1 stack-port 53 Input Statistics: 7934 packets, 1049269 bytes 0 64-byte pkts, 7793 over 64-byte pkts, 100 over 127-byte pkts 0 over 255-byte pkts, 7 over 511-byte pkts, 34 over 1023-byte pkts 70 Multicasts, 0 Broadcasts 0 runts, 0 giants, 0 throttles 0 CRC, 0 overrun, 0 discarded Output Statistics: 438 packets, 270449 bytes, 0 underruns 0 64-byte pkts, 57 over 64-byte pkts, 181 over 127-byte pkts 54 over 255-byte pkts,
3. A member switch is elected as the new standby. Data traffic on the new standby is uninterrupted. The control plane prepares for operation in Warm Standby mode. Stack-Link Flapping Error Problem/Resolution: Stacked Aggregators monitor their own stack ports and disable any stack port that flaps five times within 10 seconds.
3       Member not present
4       Member not present
5       Member not present

Stack Unit in Card-Problem State Due to Configuration Mismatch
● Problem: A stack unit enters a Card-Problem state because there is a configuration mismatch between the logical provisioning stored for the stack-unit number on the master switch and the newly added unit with the same number.
● Resolution: From the master switch, reload the stack by entering the reload command in EXEC Privilege mode.
Synchronizing data to peer Stack-unit !!!! Dell# reload Proceed with reload [confirm yes/no]: yes Upgrading a Single Stack Unit Upgrading a single stacked switch is necessary when the unit was disabled due to an incorrect Dell Networking OS version. This procedure upgrades the image in the boot partition of the member unit from the corresponding partition in the master unit. To upgrade an individual stack unit with a new Dell Networking OS version, follow these steps: 1.
20 Storm Control The storm control feature allows you to control unknown-unicast, multicast, and broadcast traffic on Layer 2 and Layer 3 physical interfaces. Dell Networking OS Behavior: The Dell Networking OS supports broadcast control (the storm-control broadcast command) for Layer 2 and Layer 3 traffic. The minimum number of packets per second (PPS) that storm control can limit is two.
CONFIGURATION mode storm-control unknown-unicast [packets_per_second in] Configuring Storm Control from INTERFACE Mode To configure storm control, use the following command. You can only configure storm control for ingress traffic in INTERFACE mode. If you configure storm control from both INTERFACE and CONFIGURATION mode, the INTERFACE mode configurations override the CONFIGURATION mode configurations. ● Configure storm control.
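The two scopes can be sketched as follows; the PPS values and the interface chosen are illustrative assumptions, and the command forms follow the syntax shown above:

```
! CONFIGURATION mode: switch-wide ingress storm control (values illustrative)
Dell(conf)# storm-control broadcast 1000 in
Dell(conf)# storm-control unknown-unicast 1000 in

! INTERFACE mode: a per-port setting that overrides the global configuration
Dell(conf)# interface tengigabitethernet 0/1
Dell(conf-if-te-0/1)# storm-control broadcast 500 in
```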
21 Broadcast Storm Control On the Aggregator, the broadcast storm control feature is enabled by default on all ports, and disabled on a port when an iSCSI storage device is detected. Broadcast storm control is re-enabled as soon as the connection with an iSCSI device ends. Broadcast traffic on Layer 2 interfaces is limited or suppressed during a broadcast storm. You can view the status of a broadcast-storm control operation by using the show io-aggregator broadcast storm-control status command.
22 SupportAssist SupportAssist sends troubleshooting data securely to Dell. SupportAssist in this Dell EMC Networking OS release does not support automated email notification at the time of hardware fault alert, automatic case creation, automatic part dispatch, or reports. SupportAssist requires Dell EMC Networking OS 9.9(0.0) and SmartScripts 9.7 or later to be installed on the Dell EMC Networking device. For more information on SmartScripts, see the Dell EMC Networking Open Automation Guide. Figure 32.
Enable the SupportAssist service. CONFIGURATION mode support-assist activate DellEMC(conf)#support-assist activate This command guides you through steps to configure SupportAssist. Configuring SupportAssist Manually To manually configure SupportAssist service, use the following commands. 1. Accept the end-user license agreement (EULA). CONFIGURATION mode eula-consent {support-assist} {accept | reject} NOTE: Once accepted, you do not have to accept the EULA again.
support-assist DellEMC(conf)#support-assist DellEMC(conf-supportassist)# 3. (Optional) Configure the contact information for the company. SUPPORTASSIST mode contact-company name {company-name}[company-next-name] ... [company-next-name] DellEMC(conf)#support-assist DellEMC(conf-supportassist)#contact-company name test DellEMC(conf-supportassist-cmpy-test)# 4. (Optional) Configure the contact name for an individual.
[no] activity {full-transfer|core-transfer|event-transfer} DellEMC(conf-supportassist)#activity full-transfer DellEMC(conf-supportassist-act-full-transfer)# DellEMC(conf-supportassist)#activity core-transfer DellEMC(conf-supportassist-act-core-transfer)# DellEMC(conf-supportassist)#activity event-transfer DellEMC(conf-supportassist-act-event-transfer)# 2. Copy an action-manifest file for an activity to the system.
SUPPORTASSIST ACTIVITY mode [no] enable DellEMC(conf-supportassist-act-full-transfer)#enable DellEMC(conf-supportassist-act-full-transfer)# DellEMC(conf-supportassist-act-core-transfer)#enable DellEMC(conf-supportassist-act-core-transfer)# DellEMC(conf-supportassist-act-event-transfer)#enable DellEMC(conf-supportassist-act-event-transfer)# Configuring SupportAssist Company SupportAssist Company mode allows you to configure name, address and territory information of the company.
[no] contact-person [first first-name] last last-name DellEMC(conf-supportassist)#contact-person first john last doe DellEMC(conf-supportassist-pers-john_doe)# 2. Configure the email addresses to reach the contact person. SUPPORTASSIST PERSON mode [no] email-address primary email-address [alternate email-address] DellEMC(conf-supportassist-pers-john_doe)#email-address primary jdoe@mycompany.com DellEMC(conf-supportassist-pers-john_doe)# 3. Configure phone numbers of the contact person.
[no] enable DellEMC(conf-supportassist-serv-default)#enable DellEMC(conf-supportassist-serv-default)# 4. Configure the URL to reach the SupportAssist remote server. SUPPORTASSIST SERVER mode [no] url uniform-resource-locator DellEMC(conf-supportassist-serv-default)#url https://192.168.1.1/index.htm DellEMC(conf-supportassist-serv-default)# Viewing SupportAssist Configuration To view the SupportAssist configurations, use the following commands: 1.
! server Dell enable url http://1.1.1.1:1337 DellEMC# 3. Display the EULA for the feature. EXEC Privilege mode show eula-consent {support-assist | other feature} DellEMC#show eula-consent support-assist SupportAssist EULA has been: Accepted Additional information about the SupportAssist EULA is as follows: By installing SupportAssist, you allow Dell to save your contact information (e.g.
23 System Time and Date The Aggregator auto-configures the hardware and software clocks with the current time and date. If necessary, you can manually set and maintain the system time and date using the CLI commands described in this chapter.
● Set the clock to the appropriate timezone. CONFIGURATION mode clock timezone timezone-name offset ○ timezone-name: Enter the name of the timezone. Do not use spaces. ○ offset: Enter one of the following: ■ a number from 1 to 23 as the number of hours in addition to UTC for the timezone. ■ a minus sign (-) then a number from 1 to 23 as the number of hours.
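For example, to set a US Pacific offset (the zone name and offset here are illustrative):

```
Dell(conf)# clock timezone PST -8
```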
clock summer-time time-zone recurring start-week start-day start-month start-time endweek end-day end-month end-time [offset] ○ time-zone: Enter the three-letter name for the time zone. This name displays in the show clock output. ○ start-week: (OPTIONAL) Enter one of the following as the week that daylight saving begins and then enter values for start-day through end-time: ■ week-number: Enter a number from 1 to 4 as the number of the week in the month to start daylight saving time.
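A sketch of a recurring US-style daylight saving rule, assuming the second Sunday in March through the first Sunday in November (all values are illustrative):

```
Dell(conf)# clock summer-time PDT recurring 2 Sun Mar 02:00 1 Sun Nov 02:00
```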
Configuring NTP control key password The Network Time Protocol daemon (NTPD) design uses NTPQ to configure NTPD. The NTP control key supports encrypted and unencrypted password options. The ntp control-key-passwd command authenticates NTPQ packets. The default control-key-passwd authenticates the NTPQ packets until the user changes the control key using the ntp control-key-passwd command. To configure the NTP control key password, use the following command. Configure NTP control key password.
24 Uplink Failure Detection (UFD)
Supported Modes: Standalone, PMUX, VLT, Stacking
Topics:
• Feature Description
• How Uplink Failure Detection Works
• UFD and NIC Teaming
• Important Points to Remember
• Uplink Failure Detection (SMUX mode)
• Configuring Uplink Failure Detection (PMUX mode)
• Clearing a UFD-Disabled Interface (in PMUX mode)
• Displaying Uplink Failure Detection
• Sample Configuration: Uplink Failure Detection
Feature Description
UFD provides detection of the loss of upstream connectivity
Figure 33. Uplink Failure Detection How Uplink Failure Detection Works UFD creates an association between upstream and downstream interfaces. The association of uplink and downlink interfaces is called an uplink-state group. An interface in an uplink-state group can be a physical interface or a port-channel (LAG) aggregation of physical interfaces. An enabled uplink-state group tracks the state of all assigned upstream interfaces.
Figure 34. Uplink Failure Detection Example If only one of the upstream interfaces in an uplink-state group goes down, a specified number of downstream ports associated with the upstream interface are put into a Link-Down state. You can configure this number; it is calculated from the ratio of the upstream port bandwidth to the downstream port bandwidth in the same uplink-state group.
For example, as shown previously, the switch/router with UFD detects the uplink failure and automatically disables the associated downstream link port to the server. To continue to transmit traffic upstream, the server with NIC teaming detects the disabled link and automatically switches over to the backup link. Important Points to Remember When you configure UFD, the following conditions apply. ● You can configure up to 16 uplink-state groups.
To disable the uplink group tracking, use the no enable command. 3. Change the default timer. UPLINK-STATE-GROUP mode defer-timer seconds Dell(conf)#uplink-state-group 1 Dell(conf-uplink-state-group-1)#defer-timer 20 Dell(conf-uplink-state-group-1)#show config ! uplink-state-group 1 downstream TenGigabitEthernet 0/1-32 upstream Port-channel 128 defer-timer 20 Configuring Uplink Failure Detection (PMUX mode) To configure UFD, use the following commands. 1.
5. Specify the time (in seconds) to wait for the upstream port channel (LAG 128) to come back up before server ports are brought down. UPLINK-STATE-GROUP mode defer-timer seconds NOTE: This command is available in Standalone and VLT modes only. The range is from 1 to 120. 6. (Optional) Enter a text description of the uplink-state group. UPLINK-STATE-GROUP mode description text The maximum length is 80 alphanumeric characters. 7.
00:10:13: %STKUNIT0-M:CP %IFMGR-5-OSTATE_DN: Downstream interface set to UFD error-disabled: Te 0/4
00:10:13: %STKUNIT0-M:CP %IFMGR-5-OSTATE_DN: Downstream interface set to UFD error-disabled: Te 0/5
00:10:13: %STKUNIT0-M:CP %IFMGR-5-OSTATE_DN: Downstream interface set to UFD error-disabled: Te 0/6
00:10:13: %STKUNIT0-M:CP %IFMGR-5-OSTATE_DN: Changed interface state to down: Te 0/4
00:10:13: %STKUNIT0-M:CP %IFMGR-5-OSTATE_DN: Changed interface state to down: Te 0/5
00:10:13: %STKUNIT0-M:CP %IFMGR-5-OSTATE_DN: Changed interface state to down: Te 0/6
Dell(conf-if-range-te-0/1-3)#do clear uf
Dell#show uplink-state-group detail
(Up): Interface up (Dwn): Interface down (Dis): Interface disabled

Uplink State Group : 1   Status: Enabled, Up
Upstream Interfaces :
Downstream Interfaces :

Uplink State Group : 3   Status: Enabled, Up
Upstream Interfaces : Tengig 0/46(Up) Tengig 0/47(Up)
Downstream Interfaces : Te 13/0(Up) Te 13/1(Up) Te 13/3(Up) Te 13/5(Up) Te 13/6(Up)

Uplink State Group : 5   Status: Enabled, Down
Upstream Interfaces : Tengig 0/0(Dwn) Tengig 0/3(Dwn) Tengig 0/5(Dwn)
Downstream Interfaces :
upstream TenGigabitEthernet 0/1
Dell#
Dell(conf-uplink-state-group-16)# show configuration
!
uplink-state-group 16
 no enable
 description test
 downstream disable links all
 downstream TengigabitEthernet 0/40
 upstream TengigabitEthernet 0/41
 upstream Port-channel 8
Sample Configuration: Uplink Failure Detection The following example shows a sample configuration of UFD on a switch/router in which UFD is configured as follows. ● Configure uplink-state group 3.
Dell#show uplink-state-group detail
(Up): Interface up (Dwn): Interface down (Dis): Interface disabled

Uplink State Group : 3   Status: Enabled, Up
Upstream Interfaces : Te 0/3(Dwn) Te 0/4(Up)
Downstream Interfaces : Te 0/1(Dis) Te 0/2(Dis) Te 0/5(Up) Te 0/9(Up) Te 0/11(Up) Te 0/12(Up)
25 PMUX Mode of the IO Aggregator
This chapter provides an overview of the PMUX mode.
Topics:
• I/O Aggregator (IOA) Programmable MUX (PMUX) Mode
• Configuring and Changing to PMUX Mode
• Configuring the Commands without a Separate User Account
• Virtual Link Trunking (VLT)
I/O Aggregator (IOA) Programmable MUX (PMUX) Mode
IOA PMUX is a mode that provides flexibility of operation with added configurability.
------------------------------------------------------0 programmable-mux programmable-mux Dell# The IOA is now ready for PMUX operations. Configuring the Commands without a Separate User Account Starting with Dell Networking OS version 9.3(0.0), you can configure the PMUX mode CLI commands without having to configure a new, separate user profile. The user profile you defined to access and log in to the switch is sufficient to configure the PMUX mode commands.
● Provides fast convergence if either the link or a device fails.
● Provides optimized forwarding with the virtual router redundancy protocol (VRRP).
● Provides link-level resiliency.
● Assures high availability.
CAUTION: Dell Networking does not recommend enabling Stacking and VLT simultaneously. If you enable both features at the same time, unexpected behavior occurs.
O - OpenFlow Controller Port-channel LAG L 127 Mode L2 Status up Uptime 00:18:22 128 L2 up 00:00:00 Ports Fo 0/33 Fo 0/37 Fo 0/41 (Up)<<<<<<<
Peer-Routing                   : Disabled
Peer-Routing-Timeout timer     : 0 seconds
Multicast peer-routing timeout : 150 seconds
Dell#
5. Configure the secondary VLT.
NOTE: Repeat steps 1 through 4 on the secondary VLT peer, ensuring that you use a different backup destination and unit ID.
13      Active    T Po128(Te 0/41-42)
                  T Te 0/1
14      Active    T Po128(Te 0/41-42)
                  T Te 0/1
15      Active    T Po128(Te 0/41-42)
                  T Te 0/1
20      Active    U Po128(Te 0/41-42)
                  U Te 0/1
Dell

You can remove the inactive VLANs that have no member ports using the following command: Dell#configure Dell(conf)#no interface vlan ->vlan-id - Inactive VLAN with no member ports You can remove the tagged VLANs using the no vlan tagged command.
Configure Virtual Link Trunking VLT requires that you enable the feature and then configure the same VLT domain, backup link, and VLT interconnect on both peer switches. Important Points to Remember ● VLT port channel interfaces must be switch ports. ● Dell Networking strongly recommends that the VLTi (VLT interconnect) be a static LAG and that you disable LACP on the VLTi. ● If the lacp-ungroup feature is not supported on the ToR, reboot the VLT peers one at a time.
● ● ● ● ● ● ○ In a VLT domain, the following software features are supported on VLTi: link layer discovery protocol (LLDP), flow control, port monitoring, jumbo frames, and data center bridging (DCB). ○ When you enable the VLTi link, the link between the VLT peer switches is established if the following configured information is true on both peer switches: ■ the VLT system MAC address matches. ■ the VLT unit-id is not identical.
remote peer then enables data forwarding across the interconnect trunk for packets that would otherwise have been forwarded over the failed port channel. This mechanism ensures reachability and provides loop management. If the VLT interconnect fails, the VLT software on the primary switch checks the status of the remote peer using the backup link. If the remote peer is up, the secondary switch disables all VLT ports on its device to prevent loops.
VLT Port Delayed Restoration When a VLT node boots up, if the VLT ports have been previously saved in the start-up configuration, they are not immediately enabled. To ensure that MAC and ARP entries from the VLT peer node are downloaded to the newly enabled VLT node, the system allows time for the VLT ports on the new node to be enabled and begin receiving traffic. The delay-restore feature waits for all saved configurations to be applied, then starts a configurable timer.
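As a sketch, assuming the timer is set with a delay-restore command in VLT DOMAIN mode (the domain ID and the 90-second value are illustrative):

```
Dell(conf)# vlt domain 999
Dell(conf-vlt-domain)# delay-restore 90
```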
EXEC mode show vlt role ● Display the current configuration of all VLT domains or a specified group on the switch. EXEC mode show running-config vlt ● Display statistics on VLT operation. EXEC mode show vlt statistics ● Display the current status of a port or port-channel interface used in the VLT domain. EXEC mode show interfaces interface ○ interface: specify one of the following interface types: ■ Fast Ethernet: enter fastethernet slot/port. ■ 1-Gigabit Ethernet: enter gigabitethernet slot/port.
Dell# Example of the show vlt detail Command Dell_VLTpeer1# show vlt detail Local LAG Id -----------100 127 Peer LAG Id ----------100 2 Local Status Peer Status Active VLANs ------------ ----------- ------------UP UP 10, 20, 30 UP UP 20, 30 Dell_VLTpeer2# show vlt detail Local LAG Id -----------2 100 Peer LAG Id ----------127 100 Local Status -----------UP UP Peer Status ----------UP UP Example of the show vlt role Command Dell_VLTpeer1# show vlt role VLT Role ---------VLT Role: System MAC address: S
HeartBeat Messages Received: 978 ICL Hello's Sent: 89 ICL Hello's Received: 89 Additional VLT Sample Configurations To configure VLT, configure a backup link and interconnect trunk, create a VLT domain, configure a backup link and interconnect trunk, and connect the peer switches in a VLT domain to an attached access device (switch or server). Review the following examples of VLT configurations.
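A minimal sketch of one peer, following the elements listed above (the domain ID, backup destination address, and port-channel number are illustrative; repeat on the second peer with a different backup destination and unit ID):

```
! VLT peer 1: domain, backup link, and VLT interconnect
Dell(conf)# vlt domain 999
Dell(conf-vlt-domain)# back-up destination 10.11.206.35
Dell(conf-vlt-domain)# peer-link port-channel 127
Dell(conf-vlt-domain)# unit-id 0
```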
Table 38. Troubleshooting VLT (continued) Description Behavior at Peer Up Behavior During Run Time Action to Take A syslog error message and an A syslog error message and an SNMP trap are generated. SNMP trap are generated. Dell Networking OS Version mismatch A syslog error message is generated. A syslog error message is generated. Follow the correct upgrade procedure for the unit with the mismatched Dell Networking OS version.
26 FC Flex IO Modules
This part provides a generic, broad-level description of the operations, capabilities, and configuration commands of the Fibre Channel (FC) Flex IO module.
Topics:
• FC Flex IO Modules
• Understanding and Working of the FC Flex IO Modules
• Fibre Channel over Ethernet for FC Flex IO Modules
FC Flex IO Modules
This part provides a generic, broad-level description of the operations, capabilities, and configuration commands of the Fibre Channel (FC) Flex IO module.
In a typical Fibre Channel storage network topology, separate network interface cards (NICs) and host bus adapters (HBAs) on each server (two each for redundancy purposes) are connected to LAN and SAN networks respectively. These deployments typically include a ToR SAN switch in addition to a ToR LAN switch. By employing converged network adapters (CNAs) that the FC Flex IO module supports, CNAs are used to transmit FCoE traffic from the server instead of separate NIC and HBA devices.
Guidelines for Working with FC Flex IO Modules The following guidelines apply to the FC Flex IO module: ● All the ports of FC Flex IO modules operate in FC mode, and do not support Ethernet mode. ● FC Flex IO modules are not supported in the chassis management controller (CMC) GUI. ● The only supported FCoE functionality is NPIV proxy gateway. Configure the other FCoE services, such as name server, zone server, and login server on an external FC switch.
● priority-group 0 bandwidth 30 pfc off ● priority-group 1 bandwidth 30 pfc off ● priority-group 2 bandwidth 40 pfc on ● priority-pgid 0 0 0 2 1 0 0 0 ● On I/O Aggregators, uplink failure detection (UFD) is disabled if FC Flex IO module is present to allow server ports to communicate with the FC fabric even when the Ethernet upstream ports are not operationally up.
Operation of the FIP Application The NPIV proxy gateway terminates the FIP sessions and responses to FIP messages. The FIP packets are intercepted by the FC Flex IO module and sent to the Dell Networking OS for further analysis. The FIP application responds to the FIP VLAN discovery request from the host based on the configured FCoE VLANs. For every ENode and VN_Port that is logged in, the FIP application responds to keepalive messages for the virtual channel.
Figure 36. Installing and Configuring Flowchart for FC Flex IO Modules To see if a switch is running the latest Dell Networking OS version, use the show version command. To download a Dell Networking OS version, go to http://support.dell.com. Installation Site Preparation Before installing the switch or switches, make sure that the chosen installation location meets the following site requirements: ● Clearance — There is adequate front and rear clearance for operator access.
Unpacking the Switch Package Contents When unpacking each switch, make sure that the following items are included: ● One Dell Networking I/O Aggregator module ● One USB type A-to-DB-9 female cable ● Getting Started Guide ● Safety and Regulatory Information ● Warranty and Support Information ● Software License Agreement Unpacking Steps 1. Before unpacking the switch, inspect the container and immediately report any evidence of damage. 2.
By default, the NPIV functionality is disabled on the Cisco MDS switch; enable this capability before you connect the FC port of the I/O Aggregator to these upstream switches. Data Center Bridging, Fibre Channel over Ethernet, and NPIV Proxy Gateway features are supported on the FC Flex IO modules. For detailed information about these applications and their working, see the corresponding chapters for these applications in this manual.
FCoE works with the Ethernet enhancements provided in Data Center Bridging (DCB) to support lossless (no-drop) SAN and LAN traffic. In addition, DCB provides flexible bandwidth sharing for different traffic types, such as LAN and SAN, according to 802.1p priority classes of service. DCBx should be enabled on the system before the FIP snooping feature is enabled. All of the commands that are supported for FCoE on the I/O Aggregator apply to the FC Flex IO modules.
27 FC FLEXIO FPORT
FC FlexIO FPort is now supported on the Dell Networking OS.
Topics:
• FC FLEXIO FPORT
• Configuring Switch Mode to FCF Port Mode
• Name Server
• FCoE Maps
• Creating an FCoE Map
• Zoning
• Creating Zone and Adding Members
• Creating Zone Alias and Adding Members
• Creating Zonesets
• Activating a Zoneset
• Displaying the Fabric Parameters
FC FLEXIO FPORT
The switch is a blade switch which is plugged into the Dell M1000 Blade server chassis.
fcoe-map {tengigabitEthernet slot/port | fortygigabitEthernet slot/port} The FCoE map contains FCoE and FC parameter settings (refer to FCoE Maps). Manually apply the fcoe-map to any Ethernet ports used for FCoE. Name Server Each participant in the FC environment has a unique ID, which is called the World Wide Name (WWN). This WWN is a 64-bit address. A Fibre Channel fabric uses another addressing scheme to address the ports in the switched fabric.
port on the FCoE VLAN. The FIP advertisement also contains a keepalive message to maintain connectivity between a SAN fabric and downstream servers. After removing and reapplying the fabric map or after modifying the FCoE map, the Fibre Channel (FC) devices do not log in again. To mitigate this issue, you must first run the shutdown command and then the no shutdown command on each member interface after you alter the FCoE map. Creating an FCoE Map An FCoE map consists of the following elements.
FCoE MAP mode fka-adv-period seconds The range is from 8 to 90 seconds. The default is 8 seconds. Zoning The zoning configurations are supported for Fabric FCF Port mode operation on the MXL. In FCF Port mode, the fcoe-map fabric map-name has the default Zone mode set to deny. This setting denies all the fabric connections unless included in an active zoneset. To change this setting, use the default-zone-allow command. Changing this setting to all allows all the fabric connections without zoning.
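Relaxing the default zone policy described above can be sketched as follows; the fabric-map name and the exact submode prompt are assumptions for illustration:

```
! Allow all fabric connections without zoning (default is deny)
Dell(conf)# fcoe-map fabric f1
Dell(conf-fmap-f1)# default-zone-allow all
```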
Dell(conf-fc-zone-z1)#member al1 Dell(conf-fc-zone-z1)#exit Creating Zonesets A zoneset is a grouping or configuration of zones. To create a zoneset and zones into the zoneset, use the following steps. 1. Create a zoneset. CONFIGURATION mode fc zoneset zoneset_name 2. Add zones into a zoneset.
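The two zoneset steps can be sketched as follows (the zoneset and zone names are illustrative, and the member keyword inside zoneset mode is an assumption modeled on the zone examples above):

```
Dell(conf)# fc zoneset zs1
Dell(conf-fc-zoneset-zs1)# member z1
Dell(conf-fc-zoneset-zs1)# exit
```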
Command Description show fc ns switch brief Display all the devices in name server database of the switch - brief version. show fc zoneset Displays the zoneset. show fc zoneset active Displays the active zoneset. show fc zone Displays the configured zone. show fc alias Displays the configured alias. show fc switch Displays the FC Switch mode and world wide name.
Node Name          : 20:00:d4:ae:52:44:37:b2
Class of Service   : 8
Symbolic Port Name : Broadcom Port0 pWWN 20:01:d4:ae:52:44:37:b2
Symbolic Node Name : Broadcom BCM57810 FCoE 7.6.3.0 7.6.59.
Port Type          :
Switch WWN : 10:00:aa:00:00:00:00:ac
Dell(conf)#
28 NPIV Proxy Gateway The N-port identifier virtualization (NPIV) Proxy Gateway (NPG) feature provides FCoE-FC bridging capability on the Aggregator, allowing server CNAs to communicate with SAN fabrics over the Aggregator.
Converged Network Adapter (CNA) ports on servers connect to the FX2 chassis Ten-Gigabit Ethernet ports and log in to an upstream FC core switch through the N port. Server fabric login (FLOGI) requests are converted into fabric discovery (FDISC) requests before being forwarded to the FC core switch. Servers use CNA ports to connect over FCoE to an Ethernet port in ENode mode on the NPIV proxy gateway.
Table 39. Aggregator with the NPIV Proxy Gateway: Terms and Definitions (continued) Term Description N port Port mode of an Aggregator with the FC port that connects to an F port on an FC switch in a SAN fabric. On an Aggregator with the NPIV proxy gateway, an N port also functions as a proxy for multiple server CNA-port connections. ENode port Port mode of a server-facing Aggregator with the Ethernet port that provides access to FCF functionality on a fabric.
● The FC-MAP value used to generate a fabric-provided MAC address. ● The association between the FCoE VLAN ID and FC fabric ID where the desired storage arrays are installed. Each Fibre Channel fabric serves as an isolated SAN topology within the same physical network. ● The priority used by a server to select an upstream FCoE forwarder (FCF priority). ● FIP keepalive (FKA) advertisement timeout. NOTE: In each FCoE map, the fabric ID, FC-MAP value, and FCoE VLAN must be unique.
Default FCoE map Dell(conf)#do show fcoe-map Fabric Name Fabric Id Vlan Id Vlan priority FC-MAP FKA-ADV-Period Fcf Priority Config-State Oper-State Members Fc 0/9 Te 0/4 SAN_FABRIC 1002 1002 3 0efc00 8 128 ACTIVE UP DCB_MAP_PFC_OFF Dell(conf)#do show qos dcb-map DCB_MAP_PFC_OFF ----------------------State :In-Progress PfcMode:OFF -------------------Dell(conf)# Enabling Fibre Channel Capability on the Switch Enable the Fibre Channel capability on an Aggregator that you want to configure as an NPG for the
2 are mapped to priority group 0; dot1p priority 3 is mapped to priority group 1; dot1p priority 4 is mapped to priority group 2; dot1p priorities 5, 6, and 7 are mapped to priority group 4. All priorities that map to the same egress queue must be in the same priority group.
Creating an FCoE Map An FCoE map consists of: ● An association between the dedicated VLAN, used to carry FCoE traffic, and the SAN fabric where the storage arrays are installed. Use a separate FCoE VLAN for each fabric to which the FCoE traffic is forwarded. Any non-FCoE traffic sent on a dedicated FCoE VLAN is dropped. ● The FC-MAP value, used to generate the fabric-provided MAC address (FPMA). The FPMA is used by servers to transmit FCoE traffic to the fabric.
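Putting these elements together, a sketch that mirrors the default map shown earlier in this chapter (the map name is illustrative, and the command forms follow the parameters listed above):

```
Dell(conf)# fcoe-map SAN_FABRIC_A
Dell(conf-fmap-SAN_FABRIC_A)# fabric-id 1002 vlan 1002
Dell(conf-fmap-SAN_FABRIC_A)# fc-map 0efc00
Dell(conf-fmap-SAN_FABRIC_A)# fabric-priority 128
Dell(conf-fmap-SAN_FABRIC_A)# fka-adv-period 8
```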
When you enable a server-facing Ethernet port, the servers respond to the FIP advertisements by performing FLOGIs on upstream virtualized FCF ports. The NPG forwards the FLOGIs as FDISC messages to a SAN switch. 1. Configure a server-facing Ethernet port or port channel with an FCoE map. CONFIGURATION mode interface {tengigabitEthernet slot/port | port-channel num} 2. Apply the FCoE/FC configuration in an FCoE map on the Ethernet port. Repeat this step to apply an FCoE map to more than one port.
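A compact sketch of the two steps above (the port number and map name are illustrative; the final no shutdown is an assumed enabling step):

```
Dell(conf)# interface tengigabitethernet 0/4
Dell(conf-if-te-0/4)# fcoe-map SAN_FABRIC_A
Dell(conf-if-te-0/4)# no shutdown
```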
Sample Configuration 1. Configure a DCB map with PFC and ETS settings: Dell(config)# dcb-map SAN_DCB_MAP Dell(config-dcbx-name)# priority-group 0 bandwidth 60 pfc off Dell(config-dcbx-name)# priority-group 1 bandwidth 20 pfc on Dell(config-dcbx-name)# priority-group 2 bandwidth 20 pfc on Dell(config-dcbx-name)# priority-group 4 strict-priority pfc off Dell(conf-dcbx-name)# priority-pgid 0 0 0 1 2 4 4 4 2.
Table 40. Displaying NPIV Proxy Gateway Information (continued) Command Description Enter the name of an FCoE map to display the FC and FCoE parameters configured in the map to be applied on the Aggregator with the FC ports. show qos dcb-map map-name Displays configuration parameters in a specified DCB map. show npiv devices [brief] Displays information on FCoE and FC devices currently logged in to the NPG.
show fcoe-map Command Examples Dell# show fcoe-map brief Fabric-Name Fabric-Id Oper-State fid_1003 1003 fid_1004 1004 Vlan-Id FC-MAP FCF-Priority Config-State 1003 1004 0efc03 0efc04 128 128 ACTIVE ACTIVE UP DOWN Dell# show fcoe-map fid_1003 Fabric Name Fabric Id Vlan Id Vlan priority FC-MAP FKA-ADV-Period Fcf Priority Config-State Oper-State Members Fc 0/9 Te 0/11 Te 0/12 fid_1003 1003 1003 3 0efc03 8 128 ACTIVE UP Table 42.
Priorities:0 1 2 4 5 6 7 PG:1 TSA:ETS Priorities:3 BW:50 PFC:ON Table 43. show qos dcb-map Field Descriptions Field Description State Complete: All mandatory DCB parameters are correctly configured. In progress: The DCB map configuration is not complete. Some mandatory parameters are not configured. PFC Mode PFC configuration in the DCB map: On (enabled) or Off. PG Priority group configured in the DCB map.
Table 44. show npiv devices brief Field Descriptions (continued) Field Description Fabric-Map Name of the FCoE map containing the FCoE/FC configuration parameters for the server CNA-fabric connection. Login Method Method used by the server CNA to log in to the fabric; for example: FLOGI - ENode logged in using a fabric login (FLOGI). FDISC - ENode logged in using a fabric discovery (FDISC).
Table 45. show npiv devices Field Descriptions (continued) Field Description FCoE VLAN ID of the dedicated VLAN used to transmit FCoE traffic from a server CNA to a fabric and configured on both the server-facing Aggregator with the server CNA port. Fabric Map Name of the FCoE map containing the FCoE/FC configuration parameters for the server CNA-fabric connection. Enode WWPN Worldwide port name of the server CNA port. Enode WWNN Worldwide node name of the server CNA.
29 Upgrade Procedures To find the upgrade procedures, go to the Dell Networking OS Release Notes for your system type to see all the requirements needed to upgrade to the desired Dell Networking OS version. To upgrade your system type, follow the procedures in the Dell Networking OS Release Notes. Get Help with Upgrades Direct any questions or concerns about the Dell Networking OS upgrade procedures to the Dell Technical Support Center. You can reach Technical Support: ● On the web: http://support.dell.
30 Debugging and Diagnostics

This chapter contains the following sections:
Te 0/53 (Up)  Te 0/54 (Up)  Te 0/56 (Up)

Dell#show uplink-state-group 1 detail
(Up): Interface up   (Dwn): Interface down   (Dis): Interface disabled

Uplink State Group    : 1    Status: Enabled, Up
Defer Timer           : 10 sec
Upstream Interfaces   : Po 128(Up)
Downstream Interfaces : Te 0/1(Up) Te 0/2(Up) Te 0/3(Dwn) Te 0/4(Dwn) Te 0/5(Up)
                        Te 0/6(Dwn) Te 0/7(Dwn) Te 0/8(Up) Te 0/9(Up) Te 0/10(Up)
                        Te 0/11(Dwn) Te 0/12(Dwn) Te 0/13(Up) Te 0/14(Dwn) Te 0/15(Up)
                        Te 0/16(Up) Te 0/17(Dwn) Te 0/20(Dwn) Te 0/25(Dwn) Te 0/30(Dwn)
                        Te 0
1. Display the current port mode for Aggregator L2 interfaces (show interfaces switchport interface command).

Dell#show interfaces switchport tengigabitethernet 0/1
Codes: U - Untagged, T - Tagged
       x - Dot1x untagged, X - Dot1x tagged
       G - GVRP tagged, M - Trunk, H - VSN tagged
       i - Internal untagged, I - Internal tagged, v - VLT untagged, V - VLT tagged

Name: TenGigabitEthernet 0/1
802.1QTagged: Hybrid
SMUX port mode: Auto VLANs enabled
Vlan membership:
Q Vlans
U 1
T 2-4094
Native VlanId: 1
2.
System Type: I/O-Aggregator
Control Processor: MIPS RMI XLP with 2147483648 bytes of memory.
256M bytes of boot flash memory.
1 34-port GE/TE (XL)
56 Ten GigabitEthernet/IEEE 802.
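When checking port modes across many Aggregators, output such as the switchport example above can be parsed off-box. The sketch below is a minimal illustration (not a Dell-provided tool) and assumes the exact layout shown in the example; adjust for other OS versions.

```python
import re

def parse_vlan_membership(show_output):
    """Extract U(ntagged)/T(agged) VLAN entries from the
    'Vlan membership' section of 'show interfaces switchport'
    output (layout assumed from the example above)."""
    vlans = {"U": [], "T": []}
    in_membership = False
    for line in show_output.splitlines():
        line = line.strip()
        if line.startswith("Vlan membership"):
            in_membership = True
            continue
        m = re.match(r"^([UT])\s+([\d,\-]+)$", line)
        if in_membership and m:
            vlans[m.group(1)].append(m.group(2))
        elif line.startswith("Native VlanId"):
            break   # membership section is over
    return vlans

sample = """Name: TenGigabitEthernet 0/1
802.1QTagged: Hybrid
Vlan membership:
Q Vlans
U 1
T 2-4094
Native VlanId: 1"""
print(parse_vlan_membership(sample))   # {'U': ['1'], 'T': ['2-4094']}
```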
● Level 2 — The full set of diagnostic tests. Level 2 diagnostics are used primarily for on-board MAC-level, physical-level, external loopback tests, and more extensive component diagnostics. Various components on the board are put into loopback mode and test packets are transmitted through those components. These diagnostics also perform snake tests using virtual local area network (VLAN) configurations.

NOTE: Diagnostics are not allowed in Stacking mode, including on member stack units.
Dell#

Trace Logs

In addition to the syslog buffer, the Dell Networking OS buffers trace messages, which various software tasks continuously write to report hardware and software events and status information. Each trace message provides the date, time, and name of the Dell Networking OS process. All messages are stored in a ring buffer. You can save the messages to a file either manually or automatically after failover.
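The ring-buffer behavior described here — once the buffer fills, each new trace message silently displaces the oldest — can be sketched in a few lines. This is an illustration only, not the OS implementation, and the buffer size is hypothetical.

```python
from collections import deque

# Illustrative ring buffer: once full, appending a new trace message
# discards the oldest one, as described above.
TRACE_BUFFER_SIZE = 4   # hypothetical size; the real buffer is larger
trace_log = deque(maxlen=TRACE_BUFFER_SIZE)

for i in range(6):
    trace_log.append(f"task-{i}: status ok")

# Only the newest TRACE_BUFFER_SIZE messages survive.
print(list(trace_log))
# ['task-2: status ok', 'task-3: status ok',
#  'task-4: status ok', 'task-5: status ok']
```

This is why saving the trace buffer to a file promptly after an event (or relying on the automatic save after failover) matters: older messages are overwritten as new ones arrive.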
show hardware stack-unit {0-5} buffer unit {0-1} port {1-64 | all} buffer-info
● View the forwarding plane statistics containing the packet buffer statistics per COS per port.
EXEC Privilege mode
show hardware stack-unit {0-5} buffer unit {0-1} port {1-64} queue {0-14 | all} buffer-info
● View input and output statistics on the party bus, which carries inter-process communication traffic between CPUs.
SFP 49 Transceiver Code
SFP 49 Encoding
SFP 49 BR Nominal
SFP 49 Length(9um) Km
SFP 49 Length(9um) 100m
SFP 49 Length(50um) 10m
SFP 49 Length(62.
CHMGR-2-TEMP_SHUTDOWN_WARN: WARNING! temperature is [value]C; approaching shutdown threshold of [value]C

To view the programmed alarm threshold levels, including the shutdown value, use the show alarms threshold command.

NOTE: When the ingress air temperature exceeds 61°C, the Status LED turns amber and a major alarm is triggered.
Troubleshoot an Under-Voltage Condition

To troubleshoot an under-voltage condition, check that the correct number of power supplies are installed and their Status light emitting diodes (LEDs) are lit.

The following table lists information for SNMP traps and OIDs, which provide information about environmental monitoring hardware and hardware components.

Table 48. SNMP Traps and OIDs

OID String   OID Name                Description
             chSysPortXfpRecvPower   OID displays the receiving power of the connected optics.
status. Dedicated buffers introduce a trade-off. They provide each interface with a guaranteed minimum buffer to prevent an overused and congested interface from starving all other interfaces. However, this minimum guarantee means that the buffer manager does not reallocate the buffer to an adjacent congested interface, which means that in some cases, memory is under-used. ● Dynamic buffer — this pool is shared memory that is allocated as needed, up to a configured limit.
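The trade-off between dedicated and dynamic buffering can be illustrated with a toy allocator. This is a sketch under assumed numbers, not the actual buffer manager: each port keeps its dedicated minimum untouched, and bursts borrow from the shared dynamic pool until either the pool or the port's configured dynamic limit is exhausted.

```python
# Toy model of dedicated vs. dynamic buffering (hypothetical numbers).
class SharedPool:
    def __init__(self, size):
        self.free = size   # cells available to all ports

class Port:
    def __init__(self, dedicated, dynamic_limit):
        self.dedicated = dedicated          # guaranteed, never shared
        self.dynamic_limit = dynamic_limit  # max it may borrow
        self.borrowed = 0

    def request(self, cells, pool):
        """Grant up to `cells` beyond the dedicated minimum from the
        shared pool, bounded by this port's dynamic limit."""
        grant = min(cells, self.dynamic_limit - self.borrowed, pool.free)
        self.borrowed += grant
        pool.free -= grant
        return grant

pool = SharedPool(size=100)
congested = Port(dedicated=3, dynamic_limit=80)
neighbor = Port(dedicated=3, dynamic_limit=80)

print(congested.request(90, pool))  # 80: capped by its dynamic limit
print(neighbor.request(90, pool))   # 20: only what remains in the pool
# Both ports still hold their dedicated minimum regardless.
```

The model shows both properties from the text: the congested port cannot take the neighbor's dedicated cells, but a heavy borrower can still leave the shared pool nearly empty for everyone else.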
CONFIGURATION mode
buffer-profile fp fsqueue
● Define a buffer profile for the CSF queues.
CONFIGURATION mode
buffer-profile csf csqueue
● Change the dedicated buffers on a physical 1G interface.
BUFFER PROFILE mode
buffer dedicated
● Change the maximum number of dynamic buffers an interface can request.
BUFFER PROFILE mode
buffer dynamic
● Change the number of packet-pointers per queue.
BUFFER PROFILE mode
buffer packet-pointers
● Apply the buffer profile to a CSF to FP link.
Example of Viewing the Buffer Profile Allocations

Dell#show running-config interface tengigabitethernet 2/0
!
interface TenGigabitEthernet 2/0
no ip address
mtu 9252
switchport
no shutdown
buffer-policy myfsbufferprofile

Example of Viewing the Buffer Profile (Interface)

Dell#show buffer-profile detail int gi 0/10
Interface Gi 0/10
Buffer-profile fsqueue-fp
Dynamic buffer 1256.00 (Kilobytes)
Queue#   Dedicated Buffer   Buffer Packets
         (Kilobytes)
0        3.00               256
1        3.00               256
2        3.00               256
3        3.00               256
4        3.00               256
5        3.
If the default buffer profile (4Q) is active, the system displays an error message instructing you to remove the default configuration using the no buffer-profile global command.

To apply a predefined buffer profile, use the following command.
● Apply one of the predefined buffer profiles for all port pipes in the system.
CONFIGURATION mode
buffer-profile global [1Q|4Q]

Sample Buffer Profile Configuration

The two general types of network environments are sustained data transfers and voice/data.
● clear hardware stack-unit 0-5 cpu data-plane statistics
● clear hardware stack-unit 0-5 cpu party-bus statistics
● clear hardware stack-unit 0-5 stack-port 33-56

Displaying Drop Counters

To display drop counters, use the following commands.
● Identify which stack unit, port pipe, and port is experiencing internal drops.
show hardware stack-unit 0-11 drops [unit 0 [port 0-63]]
● Display drop counters.
show hardware stack-unit drops unit port
● Identify which interface is experiencing internal drops.
HOL DROPS on COS6
HOL DROPS on COS7
HOL DROPS on COS8
HOL DROPS on COS9
HOL DROPS on COS10
HOL DROPS on COS11
HOL DROPS on COS12
HOL DROPS on COS13
HOL DROPS on COS14
HOL DROPS on COS15
HOL DROPS on COS16
HOL DROPS on COS17
TxPurge CellErr
Aged Drops
--- Egress MAC counters ---
Egress FCS Drops
--- Egress FORWARD PROCESSOR ---
IPv4 L3UC Aged & Drops
TTL Threshold Drops
INVALID VLAN CNTR Drops
L2MC Drops
PKT Drops of ANY Conditions
Hg MacUnderflow
TX Err PKT Counter
--- Error counters ---
Internal Mac Transmit Errors
txDatapathErr  :0
txPkt(COS0)    :0
txPkt(COS1)    :0
txPkt(COS2)    :0
txPkt(COS3)    :0
txPkt(COS4)    :0
txPkt(COS5)    :0
txPkt(COS6)    :0
txPkt(COS7)    :0
txPkt(UNIT0)   :0

The show hardware stack-unit cpu party-bus statistics command displays input and output statistics on the party bus, which carries inter-process communication traffic between CPUs.

Example of Viewing Party Bus Statistics

Dell#show hardware stack-unit 2 cpu party-bus statistics
Input Statistics:
27550 packets, 2559298 bytes
0 dropped, 0 errors
Output Statis
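Counter dumps in the "name :value" form shown above fold easily into a dictionary for off-box monitoring scripts. The sketch below is a minimal illustration (not a Dell tool) and assumes the layout of the example output; the sample values are made up.

```python
import re

def parse_counters(text):
    """Parse 'name :value' counter lines into a dict of ints."""
    counters = {}
    for line in text.splitlines():
        m = re.match(r"^\s*(\S.*?)\s*:\s*(\d+)\s*$", line)
        if m:
            counters[m.group(1)] = int(m.group(2))
    return counters

# Hypothetical sample in the format shown above.
sample = """txDatapathErr :0
txPkt(COS0)   :0
txPkt(UNIT0)  :12"""
print(parse_counters(sample))
# {'txDatapathErr': 0, 'txPkt(COS0)': 0, 'txPkt(UNIT0)': 12}
```

A script polling these counters over time can then alert on any counter that increases between samples, which is usually more useful than the absolute values.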
CONFIGURATION mode
Dell(conf)#buffer-stats-snapshot
Dell(conf)#no disable

You must enable this utility to be able to configure the parameters for buffer statistics tracking. By default, buffer statistics tracking is disabled.
3.
Q#  TYPE  Q#  TOTAL BUFFERED CELLS
---------------------------------------
4. Use the show hardware buffer-stats-snapshot resource interface interface {priority-group {id | all} | queue {ucast {id | all} | mcast {id | all} | all}} command to view buffer statistics tracking resource information for a specific interface.
factory settings? Confirm [yes/no]:yes

-- Restore status --
Unit    Nvram
------------------------
0       Success

Power-cycling the unit(s).
....
31 Standards Compliance

This chapter describes standards compliance for Dell Networking products.

NOTE: Unless noted, when a standard cited here is listed as supported by the Dell Networking Operating System (OS), the system also supports predecessor standards. One way to search for predecessor standards is to use the http://tools.ietf.org/ website. Click "Browse and search IETF documents," enter an RFC number, and inspect the top of the resulting document for obsolescence citations to related RFCs.
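The manual lookup described in the NOTE — checking the top of an RFC for obsolescence citations — can be scripted against an already-downloaded RFC header. The sketch below parses the "Obsoletes:" line; the header text is a typical example of the published format, not fetched live.

```python
import re

def obsoleted_rfcs(rfc_header_text):
    """Return the RFC numbers listed on an 'Obsoletes:' header line."""
    m = re.search(r"^Obsoletes:\s*([\d,\s]+)$", rfc_header_text,
                  re.MULTILINE)
    if not m:
        return []   # this RFC does not obsolete anything
    return [int(n) for n in re.findall(r"\d+", m.group(1))]

# Typical header shape for an RFC as published at tools.ietf.org:
header = """Internet Engineering Task Force (IETF)
Request for Comments: 7230
Obsoletes: 2145, 2616
Category: Standards Track"""
print(obsoleted_rfcs(header))   # [2145, 2616]
```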
General Internet Protocols The following table lists the Dell Networking OS support per platform for general internet protocols. Table 49.
Table 50. General IPv4 Protocols (continued)

RFC#   Full Name
3021   Using 31-Bit Prefixes on IPv4 Point-to-Point Links
3046   DHCP Relay Agent Information Option
3069   VLAN Aggregation for Efficient IP Address Allocation
3128   Protection Against a Variant of the Tiny Fragment Attack

Network Management

The following table lists the Dell Networking OS support per platform for network management protocols.

Table 51.
Table 51.
Table 51. Network Management (continued)

RFC#                           Full Name
FORCE10-SYSTEM-COMPONENT-MIB   Force10 System Component MIB (enables the user to view CAM usage information)
FORCE10-TC-MIB                 Force10 Textual Convention
FORCE10-TRAP-ALARM-MIB         Force10 Trap Alarm MIB
FORCE10-FIPSNOOPING-MIB        Force10 FIP Snooping MIB (based on the T11-FCoE-MIB mentioned in FC-BB-5)
FORCE10-DCB-MIB                Force10 DCB MIB; IEEE 802.1Qaz Management Information Base extension module for IEEE 802.