Dell PowerEdge FN I/O Aggregator Configuration Guide 9.6(0.0)
Notes, Cautions, and Warnings NOTE: A NOTE indicates important information that helps you make better use of your computer. CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem. WARNING: A WARNING indicates a potential for property damage, personal injury, or death. Copyright © 2014 Dell Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws.
Contents 1 About this Guide..................................................................................................13 Audience.............................................................................................................................................. 13 Conventions.........................................................................................................................................13 Information Symbols............................................................
Priority-Based Flow Control............................................................................................................... 28 Configuring Priority-Based Flow Control.................................................................................... 29 Enhanced Transmission Selection...................................................................................................... 31 Configuring Enhanced Transmission Selection...............................................................
FIP Snooping on Ethernet Bridges......................................................................................................70 How FIP Snooping is Implemented.................................................................................................... 72 FIP Snooping on VLANs.................................................................................................................73 FC-MAP Value...........................................................................................
Displaying VLAN Membership...................................................................................................... 98 Adding an Interface to a Tagged VLAN........................................................................................ 98 Adding an Interface to an Untagged VLAN.................................................................................. 99 Port Channel Interfaces........................................................................................................
Creating a Port Channel.............................................................................................................. 121 Adding a Physical Interface to a Port Channel........................................................................... 121 Reassigning an Interface to a New Port Channel...................................................................... 123 Configuring the Minimum Oper Up Links in a Port Channel....................................................
14 Security............................................................................................................. 157 Understanding Banner Settings........................................................................................................ 157 Accessing the I/O Aggregator Using the CMC Console Only.........................................................157 AAA Accounting.......................................................................................................................
16 Stacking............................................................................................................188 Configuring a Switch Stack...............................................................................................................188 Stacking Prerequisites................................................................................................................. 188 Master Selection Criteria...................................................................................
20 PMUX Mode of the IO Aggregator.............................................................. 216 Link Aggregation............................................................................................................................... 216 Multiple Uplink LAGs with 10G Member Ports........................................................................... 216 Link Layer Discovery Protocol (LLDP)...............................................................................................
Applying a DCB Map on Server-facing Ethernet Ports ............................................................. 255 Creating an FCoE VLAN.............................................................................................................. 256 Creating an FCoE Map ............................................................................................................... 256 Applying an FCoE Map on Server-facing Ethernet Ports...........................................................
Important Points to Remember................................................................................................. 285 24 Standards Compliance.................................................................................. 287 IEEE Compliance...............................................................................................................................287 RFC and I-D Compliance.......................................................................................................
About this Guide 1 This guide describes the supported protocols and software features, and provides configuration instructions and examples, for the Dell Networking FN I/O Aggregator running Dell Networking OS version 9.6(0.0). The I/O Aggregator is installed in a Dell PowerEdge FX2 server chassis. For information about how to install and perform the initial switch configuration, refer to the Getting Started Guides on the Dell Support website at http://www.dell.
Information Symbols This book uses the following information symbols. NOTE: The Note icon signals important operational information. CAUTION: The Caution icon signals information about situations that could result in equipment damage or loss of data. WARNING: The Warning icon signals information about hardware handling that could result in injury. * (Exception). This symbol is a note associated with additional text on the page that is marked with an asterisk.
Before You Start 2 To install the Aggregator in a Dell PowerEdge FX2 server chassis, use the instructions in the Dell PowerEdge FN I/O Aggregator Getting Started Guide that is shipped with the product. The I/O Aggregator (also known as Aggregator) installs with zero-touch configuration. After you power it on, an Aggregator boots up with default settings and auto-configures with software features enabled.
Select this mode to configure PMUX mode CLI commands. For more information on the PMUX mode, refer to PMUX Mode of the IO Aggregator. Stacking mode stack-unit unit iom-mode stack CONFIGURATION mode Dell(conf)#stack-unit 0 iom-mode stack Select this mode to configure Stacking mode CLI commands. For more information on the Stacking mode, refer to Stacking.
• Data center bridging capability exchange protocol (DCBx): Server-facing ports auto-configure in auto-downstream port roles; uplink ports auto-configure in auto-upstream port roles. • Fibre Channel over Ethernet (FCoE) connectivity and FCoE initiation protocol (FIP) snooping: The uplink port channel (LAG 128) is enabled to operate in Fibre channel forwarder (FCF) port mode.
iSCSI Operation Support for iSCSI traffic is turned on by default when the Aggregator powers up. No configuration is required. When the Aggregator powers up, it monitors known TCP ports for iSCSI storage devices on all interfaces. When a session is detected, an entry is created and monitored as long as the session is active. The Aggregator also detects iSCSI storage devices on all interfaces and auto-configures to optimize performance.
Uplink LAG The tagged VLAN membership of the uplink LAG is automatically configured based on the VLAN configuration of all server-facing ports (ports from 1 to 8). The untagged VLAN used for the uplink LAG is always the default VLAN. Server-Facing LAGs The tagged VLAN membership of a server-facing LAG is automatically configured based on the server-facing ports that are members of the LAG.
3 Configuration Fundamentals The Dell Networking Operating System (OS) command line interface (CLI) is a text-based interface you can use to configure interfaces and protocols. The CLI is structured in modes for security and management purposes. Different sets of commands are available in each mode, and you can limit user access to modes using privilege levels. In Dell Networking OS, after you enable a command, it is entered into the running configuration file.
• EXEC Privilege mode has commands to view configurations, clear counters, manage configuration files, run diagnostics, and enable or disable debug operations. The privilege level is 15, which is unrestricted. You can configure a password for this mode. • CONFIGURATION mode allows you to configure security features, time settings, set logging and SNMP functions, and set line cards on the system. Beneath CONFIGURATION mode are submodes that apply to interfaces, protocols, and features.
CLI Command Mode Prompt Access Command CONFIGURATION Dell(conf)# • • From EXEC privilege mode, enter the configure command. From every mode except EXEC and EXEC Privilege, enter the exit command. NOTE: Access all of the following modes from CONFIGURATION mode.
4    Member   not present
5    Member   not present
Dell# Undoing Commands When you enter a command, the command line is added to the running configuration file (running-config). To disable a command and remove it from the running-config, enter the no command, then the original command. For example, to delete an IP address configured on an interface, use the no ip address ip-address command. NOTE: Use the help or ? command as described in Obtaining Help.
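As a sketch of the sequence above, removing a configured IP address and checking the result might look like the following; the interface, address, and prompt strings are illustrative only.

```
Dell#configure
Dell(conf)#interface tengigabitethernet 0/1
Dell(conf-if-te-0/1)#no ip address 10.10.10.1/24
Dell(conf-if-te-0/1)#end
Dell#show running-config interface tengigabitethernet 0/1
```

After the no form is entered, the original command line no longer appears in the running-config output.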
• Entering [space]? after a keyword lists all of the keywords that can follow the specified keyword. Dell(conf)#clock ? summer-time Configure summer (daylight savings) time timezone Configure time zone Dell(conf)#clock Entering and Editing Commands Notes for entering commands. • The CLI is not case-sensitive. • You can enter partial CLI keywords. – Enter the minimum number of letters to uniquely identify a command.
Short-Cut Key Combination Action Esc F Moves the cursor forward one word. Esc D Deletes all characters from the cursor to the end of the word. Command History Dell Networking OS maintains a history of previously-entered commands for each mode. For example: • When you are in EXEC mode, the UP and DOWN arrow keys display the previously-entered EXEC mode commands. • When you are in CONFIGURATION mode, the UP and DOWN arrow keys recall the previously-entered CONFIGURATION mode commands.
Admin mode is On Admin is enabled Local is enabled Link Delay 65535 pause quantum Dell(conf)# The find keyword displays the output of the show command beginning from the first occurrence of specified text. The following example shows this command used in combination with the show linecard all command.
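In addition to find, show command output in Dell Networking OS can typically be filtered with the grep and except pipe options; the following sketch shows the general form (the commands and match strings are examples only, not taken from this guide).

```
Dell#show running-config | grep vlan
Dell#show interfaces | except down
Dell#show linecard all | find 0
```

grep displays only matching lines, except displays only non-matching lines, and find starts the display at the first match, as described above.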
Data Center Bridging (DCB) 4 On an I/O Aggregator, data center bridging (DCB) features are auto-configured in standalone mode. You can display information on DCB operation by using show commands. Ethernet Enhancements in Data Center Bridging DCB refers to a set of IEEE Ethernet enhancements that provide data centers with a single, robust, converged network to support multiple traffic types, including local area network (LAN), server, and storage traffic.
NOTE: In Dell Networking OS version 9.4.0.x, only the PFC, ETS, and DCBx features are supported in data center bridging. Priority-Based Flow Control In a data center network, priority-based flow control (PFC) manages large bursts of one traffic type in multiprotocol links so that it does not affect other traffic types and no frames are lost due to congestion. When PFC detects congestion on a queue for a specified priority, it sends a pause frame for the 802.1p priority traffic to the transmitting device.
• By default, PFC is enabled on an interface with no dot1p priorities configured.
• You can configure the PFC priorities if the switch negotiates with a remote peer using DCBX.
• During DCBX negotiation with a remote peer:
– DCBx communicates with the remote peer by link layer discovery protocol (LLDP) type, length, value (TLV) to determine current policies, such as PFC support and enhanced transmission selection (ETS) BW allocation.
Either strict-priority or bandwidth percentage can be set for ETS on the priority group. PFC can either be enabled or disabled for the priority group. 3. Map the priorities to priority groups. DCB-MAP mode priority-pgid pgid The pgid range is from 0 to 7. Configure the priority-to-priority-group mapping from priority 0 to priority 7, in order. 4. Exit the DCB MAP configuration mode. DCB-MAP mode exit 5. Enter interface configuration mode.
• You can enable link-level flow control on the interface. To delete the dcb-map, first disable link-level flow control. PFC is then automatically enabled on the interface because an interface is by default PFC-enabled. • PFC still allows you to configure lossless queues on a port to ensure no-drop handling of lossless traffic. NOTE: You cannot enable PFC and link-level flow control at the same time on an interface.
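Assembled end to end, the numbered steps above might look like the following sketch. The map name, interface, prompt strings, and the priority-group and priority-pgid values are illustrative (one possible layout with PFC enabled only for dot1p priority 3); applying the map to the interface follows the procedure in the "Applying a DCB Map" material.

```
Dell(conf)#dcb-map SAN_DCB_MAP
Dell(conf-dcbmap)#priority-group 0 bandwidth 60 pfc off
Dell(conf-dcbmap)#priority-group 1 bandwidth 40 pfc on
Dell(conf-dcbmap)#priority-pgid 0 0 0 1 0 0 0 0
Dell(conf-dcbmap)#exit
Dell(conf)#interface tengigabitethernet 0/1
Dell(conf-if-te-0/1)#dcb-map SAN_DCB_MAP
```

Here priorities 0-2 and 4-7 map to priority group 0 (60% bandwidth, PFC off) and priority 3 maps to priority group 1 (40% bandwidth, PFC on), matching the priority-pgid ordering from priority 0 to priority 7.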
Figure 2. Enhanced Transmission Selection The following table lists the traffic groupings ETS uses to select multiprotocol traffic for transmission. Table 2. ETS Traffic Groupings Traffic Groupings Description Priority group A group of 802.1p priorities used for bandwidth allocation and queue scheduling. All 802.1p priority traffic in a group must have the same traffic handling requirements for latency and frame loss. Group ID A 4-bit identifier assigned to each priority group.
– ETS shaping (credit-based shaping is not supported) • ETS uses the DCB MIB IEEE 802.1az draft 2.5. Configuring Enhanced Transmission Selection ETS provides a way to optimize bandwidth allocation to outbound 802.1p classes of converged Ethernet traffic. Different traffic types have different service needs. Using ETS, you can create groups within an 802.1p priority class to configure different treatment for traffic with different bandwidth, latency, and best-effort needs.
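The two scheduling choices described above, strict-priority versus a bandwidth percentage, can be sketched in one dcb-map as follows; the map name and all numbers are illustrative, not a recommended allocation.

```
Dell(conf)#dcb-map ETS_MAP
Dell(conf-dcbmap)#priority-group 0 bandwidth 70 pfc off
Dell(conf-dcbmap)#priority-group 1 bandwidth 30 pfc on
Dell(conf-dcbmap)#priority-group 2 strict-priority pfc off
Dell(conf-dcbmap)#priority-pgid 0 0 0 1 0 0 2 2
```

In this sketch, the bandwidth percentages of the non-strict groups total 100%, and priorities 6-7 receive strict-priority scheduling ahead of the weighted groups.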
Data Center Bridging in a Traffic Flow The following figure shows how DCB handles a traffic flow on an interface. Figure 3. DCB PFC and ETS Traffic Handling Data Center Bridging: Auto-DCB-Enable Mode On an Aggregator in standalone or VLT modes, the default mode of operation for data center bridging on Ethernet ports is auto-DCB-enable mode.
When DCB is Disabled (Default) By default, Aggregator interfaces operate with DCB disabled and link-level flow control enabled. When an interface comes up, it is automatically configured with: • Flow control enabled on input interfaces. • A DCB-MAP policy applied with PFC disabled. The following example shows a default interface configuration with DCB disabled and link-level flow control enabled.
Lossless traffic is not guaranteed when it is transmitted on a PFC-enabled port and received on a link-level flow control-enabled port, or transmitted on a link-level flow control-enabled port and received on a PFC-enabled port. Enabling DCB on Next Reload To configure the Aggregator so that all interfaces come up with DCB enabled and flow control disabled, use the dcb enable on-next-reload command. Internal PFC buffers are automatically configured.
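A sketch of the reload-time enablement described above (the reload confirmation prompt and output are omitted):

```
Dell#configure
Dell(conf)#dcb enable on-next-reload
Dell(conf)#end
Dell#reload
```

After the reload, interfaces come up with DCB enabled and link-level flow control disabled, per the behavior described above.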
NOTE: Dell Networking does not recommend mapping all ingress traffic to a single queue when using PFC and ETS. However, Dell Networking does recommend using Ingress traffic classification using the service-class dynamic dot1p command (honor dot1p) on all DCB-enabled interfaces. If you use L2 class maps to map dot1p priority traffic to egress queues, take into account the default dot1p-queue assignments in the following table and the maximum number of two lossless queues supported on a port.
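Following the recommendation in the note above, honoring ingress dot1p on a DCB-enabled interface is a single interface-level setting; the interface name below is hypothetical.

```
Dell(conf)#interface tengigabitethernet 0/2
Dell(conf-if-te-0/2)#service-class dynamic dot1p
```

With this setting, ingress traffic is classified to egress queues by its dot1p value according to the default dot1p-queue assignments in the table that follows.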
number (2) of lossless queues supported globally on the switch. In this case, all PFC configurations received from PFC-enabled peers are removed and re-synchronized with the peer devices. • Dell Networking OS does not support MACsec Bypass Capability (MBC). How Enhanced Transmission Selection is Implemented Enhanced transmission selection (ETS) provides a way to optimize bandwidth allocation to outbound 802.1p classes of converged Ethernet traffic. Different traffic types have different service needs.
– ETS is enabled by default with the default ETS configuration applied (all dot1p priorities in the same group with equal bandwidth allocation). ETS Operation with DCBx In DCBx negotiation with peer ETS devices, ETS configuration is handled as follows: • ETS TLVs are supported in DCBx versions CIN, CEE, and IEEE2.5. • ETS operational parameters are determined by the DCBX port-role configurations. • ETS configurations received from TLVs from a peer are validated.
DCBx Operation DCBx performs the following operations: • Discovers DCB configuration (such as PFC and ETS) in a peer device. • Detects DCB mis-configuration in a peer device; that is, when DCB features are not compatibly configured on a peer device and the local switch. Mis-configuration detection is feature-specific because some DCB features support asymmetric configuration.
When an auto-downstream port receives and overwrites its configuration with internally propagated information, one of the following actions is taken: • If the peer configuration received is compatible with the internally propagated port configuration, the link with the DCBx peer is enabled.
values for the configurations to be compatible. For example, ETS uses an asymmetric exchange of parameters between DCBx peers. Symmetric DCB parameters are exchanged between a DCBx-enabled port and a peer port, but each configured parameter value must be the same for the configurations to be compatible. For example, PFC uses a symmetric exchange of parameters between DCBx peers.
Auto-Detection of the DCBx Version The Aggregator operates in auto-detection mode so that a DCBX port automatically detects the DCBX version on a peer port. Legacy CIN and CEE versions are supported in addition to the standard IEEE version 2.5 DCBX. A DCBx port detects a peer version after receiving a valid frame for that version. The local DCBx port reconfigures to operate with the peer version and maintains the peer version on the link until one of the following conditions occurs: • The switch reboots.
Figure 4. DCBx Sample Topology DCBX Prerequisites and Restrictions The following prerequisites and restrictions apply when you configure DCBx operation on a port: • DCBX requires LLDP in both send (TX) and receive (RX) mode to be enabled on a port interface. If multiple DCBX peer ports are detected on a local DCBX interface, LLDP is shut down.
• The CIN version of DCBx supports only PFC, ETS, and FCOE; it does not support iSCSI, backward congestion management (BCN), logical link down (LLD), and network interface virtualization (NIV). DCBX Error Messages The following syslog messages appear when an error in DCBx operation occurs. LLDP_MULTIPLE_PEER_DETECTED: DCBx is operationally disabled after detecting more than one DCBx peer on the port interface. LLDP_PEER_AGE_OUT: DCBx is disabled as a result of LLDP timing out on a DCBx peer interface.
Verifying the DCB Configuration To display DCB configurations, use the following show commands. Table 3. Displaying DCB Configurations Command Output show dcb [stack-unit unit-number] Displays the data center bridging status, number of PFC-enabled ports, and number of PFC-enabled queues. On the master switch in a stack, you can specify a stack-unit number. The range is from 0 to 5.
6 7 0 0 0 0 0 0 Example of the show interfaces pfc summary Command Dell# show interfaces tengigabitethernet 0/4 pfc summary Interface TenGigabitEthernet 0/4 Admin mode is on Admin is enabled Remote is enabled, Priority list is 4 Remote Willing Status is enabled Local is enabled Oper status is Recommended PFC DCBx Oper status is Up State Machine Type is Feature TLV Tx Status is enabled PFC Link Delay 45556 pause quantams Application Priority TLV Parameters : -------------------------------------FCOE TLV
Fields Description is on, PFC advertisements are enabled to be sent and received from peers; received PFC configuration takes effect. The admin operational status for a DCBx exchange of PFC configuration is enabled or disabled. Remote is enabled; Priority list Remote Willing Status is enabled Operational status (enabled or disabled) of peer device for DCBx exchange of PFC configuration with a list of the configured PFC priorities.
Fields Description Application Priority TLV: Local ISCSI Priority Map Priority bitmap used by local DCBx port in ISCSI advertisements in application priority TLVs. Application Priority TLV: Remote FCOE Priority Map Priority bitmap received from the remote DCBX port in FCoE advertisements in application priority TLVs. Application Priority TLV: Remote ISCSI Priority Map Priority bitmap received from the remote DCBX port in iSCSI advertisements in application priority TLVs.
-------------------
Remote is disabled
Local Parameters :
------------------
Local is enabled
TC-grp   Priority#          Bandwidth   TSA
0        0,1,2,3,4,5,6,7    100%        ETS
1                           0%          ETS
2                           0%          ETS
3                           0%          ETS
4                           0%          ETS
5                           0%          ETS
6                           0%          ETS
7                           0%          ETS
Priority#   Bandwidth   TSA
0           13%         ETS
1           13%         ETS
2           13%         ETS
3           13%         ETS
4           12%         ETS
5           12%         ETS
6           12%         ETS
7           12%         ETS
Oper status is init
Conf TLV Tx Status is disabled
Traffic Class TLV Tx Status is disabled
Example of the show interface ets detail Command Dell# show interfaces tengigabitethernet Interface Te
4 5 6 7 0% 0% 0% 0% ETS ETS ETS ETS Oper status is init ETS DCBX Oper status is Down Reason: Port Shutdown State Machine Type is Asymmetric Conf TLV Tx Status is enabled Reco TLV Tx Status is enabled 0 Input Conf TLV Pkts, 0 Output Conf TLV Pkts, 0 Error Conf TLV Pkts 0 Input Reco TLV Pkts, 0 Output Reco TLV Pkts, 0 Error Reco TLV Pkts The following table describes the show interface ets detail command fields. Table 5.
Field Description • Internally propagated: ETS configuration parameters were received from configuration source. ETS DCBx Oper status Operational status of ETS configuration on local port: match or mismatch. Reason Reason displayed when the DCBx operational status for ETS on a port is down.
Admin Parameters: -------------------Admin is enabled TC-grp Priority# Bandwidth TSA -----------------------------------------------0 0,1,2,3,4,5,6,7 100% ETS 1 2 3 4 5 6 7 8 Stack unit 1 stack port all Max Supported TC Groups is 4 Number of Traffic Classes is 1 Admin mode is on Admin Parameters: -------------------Admin is enabled TC-grp Priority# Bandwidth TSA -----------------------------------------------0 0,1,2,3,4,5,6,7 100% ETS 1 2 3 4 5 6 7 8 Example of the show interface DCBx detail Command Dell# s
----------------DCBX Operational Version is 0 DCBX Max Version Supported is 0 Sequence Number: 2 Acknowledgment Number: 2 Protocol State: In-Sync Peer DCBX Status: ---------------DCBX Operational Version is 0 DCBX Max Version Supported is 255 Sequence Number: 2 Acknowledgment Number: 2 2 Input PFC TLV pkts, 3 Output PFC TLV pkts, 0 Error PFC pkts, 0 PFC Pause Tx pkts, 0 Pause Rx pkts 2 Input PG TLV Pkts, 3 Output PG TLV Pkts, 0 Error PG TLV Pkts 2 Input Appln Priority TLV pkts, 0 Output Appln Priority TLV p
Field Description Local DCBx TLVs Transmitted Transmission status (enabled or disabled) of advertised DCB TLVs (see TLV code at the top of the show command output). Local DCBx Status: DCBx Operational Version DCBx version advertised in Control TLVs. Local DCBx Status: DCBx Max Version Supported Highest DCBx version supported in Control TLVs. Local DCBx Status: Sequence Number Sequence number transmitted in Control TLVs.
Hierarchical Scheduling in ETS Output Policies ETS supports up to three levels of hierarchical scheduling. For example, you can apply ETS output policies with the following configurations: Priority group 1 Assigns traffic to one priority queue with 20% of the link bandwidth and strictpriority scheduling. Priority group 2 Assigns traffic to one priority queue with 30% of the link bandwidth.
Reason Description LLDP Rx/Tx is disabled LLDP is disabled (Admin Mode set to rx or tx only) globally or on the interface. Waiting for Peer Waiting for peer or detected peer connection has aged out. Multiple Peer Detected Multiple peer connections detected on the interface. Version Conflict DCBx version on peer version is different than the local or globally configured DCBx version.
Reason Description • Total bandwidth assigned to priorities in one or more priority groups is not equal to 100%. Or one of the following ETS failure errors occurred: • Incompatible priority group ID (PGID). • Incompatible bandwidth (BW) allocation. • Incompatible TSA. • Incompatible TC BW. • Incompatible TC TSA.
Dynamic Host Configuration Protocol (DHCP) 5 The Aggregator is auto-configured to operate as a DHCP client. The DHCP server, DHCP relay agent, and secure DHCP features are not supported. The dynamic host configuration protocol (DHCP) is an application layer protocol that dynamically assigns IP addresses and other configuration parameters to network end-stations (hosts) based on configuration policies determined by network administrators.
DHCPDECLINE A client sends this message to the server in response to a DHCPACK if the configuration parameters are unacceptable; for example, if the offered address is already in use. In this case, the client starts the configuration process over by sending a DHCPDISCOVER. DHCPINFORM A client uses this message to request configuration parameters when it has been assigned an IP address manually rather than with DHCP. The server responds by unicast.
Dell Networking OS Behavior: DHCP is implemented in Dell Networking OS based on RFC 2131 and 3046. Debugging DHCP Client Operation To enable debug messages for DHCP client operation, enter the following debug commands: • Enable the display of log messages for all DHCP packets sent and received on DHCP client interfaces.
1w2d23h: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_EVT: Interface Ma 0/0 :Transitioned to state START 1w2d23h: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_EVT: Interface Ma 0/0 :DHCP DISABLED CMD sent to FTOS in state START Dell# release dhcp int Ma 0/0 Dell#1w2d23h: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_EVT: Interface Ma 0/0 :DHCP RELEASE CMD Received in state BOUND 1w2d23h: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_PKT: DHCP RELEASE sent in Interface Ma 0
Dell# renew dhcp interface tengigabitethernet 0/1 Dell#May 27 15:55:28: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_EVT: Interface Ma 0/0 : DHCP RENEW CMD Received in state STOPPED May 27 15:55:31: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_EVT: Interface Ma 0/0 : Transitioned to state SELECTING May 27 15:55:31: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_PKT: DHCP DISCOVER sent in Interface Ma 0/0 May 27 15:55:31: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_PKT: Rec
address remains in the running configuration for the interface. To acquire a new IP address, enter either the renew dhcp command at the EXEC privilege level or the ip address dhcp command at the interface configuration level. If you enter renew dhcp command on an interface already configured with a dynamic IP address, the lease time of the dynamically acquired IP address is renewed. Important: To verify the currently configured dynamic IP address on an interface, enter the show ip dhcp lease command.
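A sketch of the two options described above, acquiring an address with DHCP at the interface level, renewing it from EXEC privilege mode, and then checking the lease; the interface name is an example only.

```
Dell(conf)#interface managementethernet 0/0
Dell(conf-if-ma-0/0)#ip address dhcp
Dell(conf-if-ma-0/0)#end
Dell#renew dhcp interface managementethernet 0/0
Dell#show ip dhcp lease
```

If the interface already holds a dynamically acquired address, the renew dhcp command renews the lease rather than requesting a new address, as noted above.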
DHCP Packet Format and Options DHCP uses the user datagram protocol (UDP) as its transport protocol. The server listens on port 67 and transmits to port 68; the client listens on port 68 and transmits to port 67. The configuration parameters are carried as options in the DHCP packet in Type, Length, Value (TLV) format; many options are specified in RFC 2132.
Option Number and Description • 5: DHCPACK • 6: DHCPNACK • 7: DHCPRELEASE • 8: DHCPINFORM Parameter Request List Option 55 Clients use this option to tell the server which parameters it requires. It is a series of octets where each octet is a DHCP option code. Renewal Time Option 58 Specifies the amount of time after the IP address is granted that the client attempts to renew its lease with the original server.
• Insert Option 82 into DHCP packets. CONFIGURATION mode int ma 0/0 ip add dhcp relay information-option remote-id For routers between the relay agent and the DHCP server, enter the trust-downstream option. Releasing and Renewing DHCP-based IP Addresses On an Aggregator configured as a DHCP client, you can release a dynamically-assigned IP address without removing the DHCP client operation on the interface. To manually acquire a new IP address from the DHCP server, use the following command.
DHCPREQUEST  0
DHCPDECLINE  0
DHCPRELEASE  0
DHCPREBIND   0
DHCPRENEW    0
DHCPINFORM   0
Dell#
Example of the show ip dhcp lease Command
Dell# show ip dhcp lease
Interface Lease-IP       Def-Router ServerId State Lease Obtnd At      Lease Expires At    Renew Time
========= ============== ========== ======== ===== =================== =================== ==========
Ma 0/0    0.0.0.0/0      0.0.0.0    0.0.0.0  INIT  -----NA----         -----NA----         ----NA----
Vl 1      10.1.1.254/24  0.0.0.0    10.1.1.        08-27-2011 04:33:39 08-26-2011 16:21:50
FIP Snooping 6 FIP snooping is auto-configured on an Aggregator in standalone mode. You can display information on FIP snooping operation and statistics by entering show commands. This chapter describes the FIP snooping concepts and configuration procedures.
FIP provides functionality for discovering and logging in to an FCF. After discovery and login, FIP allows FCoE traffic to be sent and received between FCoE end-devices (ENodes) and the FCF. FIP uses its own EtherType and frame format. The following illustration of FIP discovery depicts the communication that occurs between an ENode server and an FCoE switch (FCF).
transmitted between an FCoE end-device and an FCF. An Ethernet bridge that provides these functions is called a FIP snooping bridge (FSB). On a FIP snooping bridge, ACLs are created dynamically as FIP login frames are processed. The ACLs are installed on switch ports configured for the following port modes: • ENode mode for server-facing ports • FCF mode for a trusted port directly connected to an FCF You must enable FIP snooping on an Aggregator and configure the FIP snooping parameters.
Figure 8. FIP Snooping on an Aggregator The following sections describes how to configure the FIP snooping feature on a switch that functions as a FIP snooping bridge so that it can perform the following functions: • Performs FIP snooping (allowing and parsing FIP frames) globally on all VLANs or on a per-VLAN basis.
FIP Snooping on VLANs FIP snooping is enabled globally on an Aggregator on all VLANs: • FIP frames are allowed to pass through the switch on the enabled VLANs and are processed to generate FIP snooping ACLs. • FCoE traffic is allowed on VLANs only after a successful virtual-link initialization (fabric login FLOGI) between an ENode and an FCF. All other FCoE traffic is dropped. • At least one interface is auto-configured for FCF (FIP snooping bridge — FCF) mode on a FIP snooping-enabled VLAN.
– Each FIP snooping port is auto-configured to operate in Hybrid mode so that it accepts both tagged and untagged VLAN frames. – Tagged VLAN membership is auto-configured on each FIP snooping port that sends and receives FCoE traffic and has links with an FCF, ENode server or another FIP snooping bridge. – The default VLAN membership of the port should continue to operate with untagged frames. FIP snooping is not supported on a port that is configured for non-default untagged VLAN membership.
By default, a port is configured for bridge-to-ENode links.
5. Configure the port for bridge-to-FCF links.
INTERFACE or CONFIGURATION mode
fip-snooping port-mode fcf
NOTE: All these configurations are available only in PMUX mode.
NOTE: To disable the FIP snooping feature or FIP snooping on VLANs, use the no version of a command; for example, no feature fip-snooping or no fip-snooping enable.
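Taken together, the FIP snooping commands referenced in this section might be applied as in the following sketch. The VLAN ID (100) and the FCF-facing port (0/9) are illustrative assumptions, and PMUX mode is assumed:

```
Dell(conf)# feature fip-snooping
Dell(conf)# interface vlan 100
Dell(conf-if-vl-100)# fip-snooping enable
Dell(conf-if-vl-100)# exit
Dell(conf)# interface tengigabitethernet 0/9
Dell(conf-if-te-0/9)# fip-snooping port-mode fcf
```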
interface port-channel portchannel-number]
show fip-snooping system: Displays the status of FIP snooping on the switch (enabled or disabled), including the number of FCoE VLANs, FCFs, ENodes, and currently active sessions.
show fip-snooping vlan: Displays the FCoE VLANs on which FIP snooping is enabled.
show fip-snooping enode Command Example
Dell# show fip-snooping enode
Enode MAC          Enode Interface  VLAN  FC-ID     FCF MAC
-----------------  ---------------  ----  --------  -----------------
d4:ae:52:1b:e3:cd  Te 0/1           100   62:00:11  54:7f:ee:37:34:40

show fip-snooping enode Command Description
Field: Description
ENode MAC: MAC address of the ENode.
ENode Interface: Slot/port number of the interface connected to the ENode.
FCF MAC: MAC address of the FCF.
VLAN: VLAN ID number used by the session.
FC-ID: Fibre Channel session ID assigned by the FCF.
Number of VN Port Keep Alive                       :0
Number of Multicast Discovery Advertisement        :4451
Number of Unicast Discovery Advertisement          :2
Number of FLOGI Accepts                            :2
Number of FLOGI Rejects                            :0
Number of FDISC Accepts                            :16
Number of FDISC Rejects                            :0
Number of FLOGO Accepts                            :0
Number of FLOGO Rejects                            :0
Number of CVL                                      :0
Number of FCF Discovery Timeouts                   :0
Number of VN Port Session Timeouts                 :0
Number of Session failures due to Hardware Config  :0
show fip-snooping statistics Command Description
Field: Description
Numbe
Number of FDISC Rejects Number of FIP FDISC reject frames received on the interface. Number of FLOGO Accepts Number of FIP FLOGO accept frames received on the interface. Number of FLOGO Rejects Number of FIP FLOGO reject frames received on the interface. Number of CVLs Number of FIP clear virtual link frames received on the interface. Number of FCF Discovery Timeouts Number of FCF discovery timeouts that occurred on the interface.
FIP Snooping Example The following figure shows an Aggregator used as a FIP snooping bridge for FCoE traffic between an ENode (server blade) and an FCF (ToR switch). The ToR switch operates as an FCF and FCoE gateway. Figure 9. FIP Snooping on an Aggregator In the above figure, DCBX and PFC are enabled on the Aggregator (FIP snooping bridge) and on the FCF ToR switch. On the FIP snooping bridge, DCBX is configured as follows: • A server-facing port is configured for DCBX in an auto-downstream role.
Debugging FIP Snooping To enable debug messages for FIP snooping events, enter the debug fip-snooping command.
Task: Enable FIP snooping debugging for all events or for a specified event type.
Command: debug fip-snooping [all | acl | error | ifm | info | ipc | rx]
Command Mode: EXEC PRIVILEGE
where:
all enables all debugging options.
acl enables debugging only for ACL-specific events.
error enables debugging only for error conditions.
ifm enables debugging only for IFM events.
IGMP Overview 7 IGMP has three versions. Version 3 obsoletes and is backwards-compatible with version 2; version 2 obsoletes version 1. Internet Group Management Protocol (IGMP) On an Aggregator, IGMP snooping is auto-configured. You can display information on IGMP by using the show ip igmp command. Multicast is based on identifying many hosts by a single destination IP address. Hosts represented by the same IP address are a multicast group.
Figure 10. IGMP Version 2 Packet Format Joining a Multicast Group There are two ways that a host may join a multicast group: it may respond to a general query from its querier, or it may send an unsolicited report to its querier. • Responding to an IGMP Query. – One router on a subnet is elected as the querier. The querier periodically multicasts a general query to all hosts on the subnet (to the all-systems multicast address, 224.0.0.1).
• To enable filtering, routers must keep track of more state information, that is, the list of sources that must be filtered. An additional query type, the group-and-source-specific query, keeps track of state changes, while the group-specific and general queries still refresh existing state. • Reporting is more efficient and robust.
• The host’s third message indicates that it is only interested in traffic from sources 10.11.1.1 and 10.11.1.2. Because this request again prevents all other sources from reaching the subnet, the router sends another group-and-source query so that it can satisfy all other hosts. There are no other interested hosts, so the request is recorded. Figure 13.
Figure 14. IGMP Membership Queries: Leaving and Staying in Groups IGMP Snooping IGMP snooping is auto-configured on an Aggregator. Multicast packets are addressed with multicast MAC addresses, which represent a group of devices rather than one unique device. Switches forward multicast frames out of all ports in a VLAN by default, even if there are only a small number of interested hosts, resulting in a waste of bandwidth.
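The mapping from a multicast group IP address to a multicast MAC address follows a fixed rule (RFC 1112): the low-order 23 bits of the IPv4 address are copied into the IANA prefix 01:00:5e. The following Python sketch illustrates the computation; the helper name is ours for illustration, not a Dell Networking OS facility:

```python
import ipaddress

def multicast_mac(group_ip: str) -> str:
    """Map an IPv4 multicast group address to its Ethernet multicast MAC.

    Per RFC 1112, the low-order 23 bits of the group address are
    placed into the fixed prefix 01:00:5e:00:00:00.
    """
    ip = int(ipaddress.IPv4Address(group_ip))
    low23 = ip & 0x7FFFFF                      # keep the low 23 bits
    mac = bytes([0x01, 0x00, 0x5E]) + low23.to_bytes(3, "big")
    return ":".join(f"{b:02x}" for b in mac)

# The all-systems address used by IGMP general queries:
# multicast_mac("224.0.0.1") -> "01:00:5e:00:00:01"
```

Because 5 bits of the group address are discarded, 32 different group addresses share each multicast MAC, which is one reason switches snoop IGMP rather than rely on MAC filtering alone.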
Displaying IGMP Information
show ip igmp groups [group-address [detail] | detail | interface [group-address [detail]]]: Displays information on IGMP groups.
show ip igmp interface [interface]: Displays IGMP information on IGMP-enabled interfaces.
show ip igmp snooping mrouter [vlan vlan-number]: Displays information on IGMP-enabled multicast router (mrouter) interfaces.
clear ip igmp groups [group-address | interface]: Clears IGMP information for group addresses and IGMP-enabled interfaces.
Interface IGMP group join rate limit is not set
IGMP snooping is enabled on interface
IGMP Snooping query interval is 60 seconds
IGMP Snooping querier timeout is 125 seconds
IGMP Snooping last member query response interval is 1000 ms
IGMP snooping fast-leave is disabled on this interface
IGMP snooping querier is disabled on this interface
Vlan 3 is up, line protocol is down
Inbound IGMP access group is not set
Interface IGMP group join rate limit is not set
IGMP snooping is enabled on interface
IGMP Snoopi
Interfaces 8 This chapter describes TenGigabit Ethernet interface types, both physical and logical, and how to configure them with the Dell Networking Operating Software (OS).
• All interfaces are auto-configured as members of all (4094) VLANs and untagged VLAN 1. All VLANs are up and can send or receive layer 2 traffic. You can use the Command Line Interface (CLI) or CMC interface to configure only the required VLANs on a port interface. Interface Types The following interface types are supported on an Aggregator.
0 packets, 0 bytes 0 64-byte pkts, 0 over 64-byte pkts, 0 over 127-byte pkts 0 over 255-byte pkts, 0 over 511-byte pkts, 0 over 1023-byte pkts 0 Multicasts, 0 Broadcasts 0 runts, 0 giants, 0 throttles 0 CRC, 0 overrun, 0 discarded Output Statistics: 0 packets, 0 bytes, 0 underruns 0 64-byte pkts, 0 over 64-byte pkts, 0 over 127-byte pkts 0 over 255-byte pkts, 0 over 511-byte pkts, 0 over 1023-byte pkts 0 Multicasts, 0 Broadcasts, 0 Unicasts 0 throttles, 0 discarded, 0 collisions, 0 wreddrops Rate info (inte
Disabling and Re-enabling a Physical Interface
By default, all port interfaces on an Aggregator are operationally enabled (no shutdown) to send and receive Layer 2 traffic. You can reconfigure a physical interface to shut it down by entering the shutdown command. To re-enable the interface, enter the no shutdown command.
1. interface interface (CONFIGURATION mode): Enter the keyword interface followed by the type of interface and slot/port information.
2.
advertise management-tlv system-name dcbx port-role auto-downstream no shutdown Dell(conf-if-te-0/1)# To view the interfaces in Layer 2 mode, use the show interfaces switchport command in EXEC mode. Management Interfaces An Aggregator auto-configures with a DHCP-based IP address for in-band management on VLAN 1 and remote out-of-band (OOB) management. The IOM management interface has both a public IP and private IP address on the internal Fabric D interface.
Slot range: 0-0
To configure an IP address on a management interface, use either of the following commands in MANAGEMENT INTERFACE mode:
ip address ip-address mask (INTERFACE mode): Configure an IP address and mask on the interface. For ip-address mask, enter an address in dotted-decimal format (A.B.C.D); the mask must be in /prefix format (/x).
ip address dhcp (INTERFACE mode): Acquire an IP address from the DHCP server.
VLAN Membership A virtual LAN (VLAN) is a logical broadcast domain or logical grouping of interfaces in a LAN in which all data received is kept locally and broadcast to all members of the group. In Layer 2 mode, VLANs move traffic at wire speed and can span multiple devices. Dell Networking OS supports up to 4093 port-based VLANs and one default VLAN, as specified in IEEE 802.1Q. VLANs provide the following benefits: • Improved security because you can isolate groups of users into different VLANs.
interface level to filter traffic. When you enable tagging, a tag header is added to the frame after the destination and source MAC addresses. This information is preserved as the frame moves through the network. The figure below shows the structure of a frame with a tag header. The VLAN ID is inserted in the tag header. Figure 15.
Command Syntax Command Mode Purpose vlan untagged {vlan-id} INTERFACE Add the interface as an untagged member of one or more VLANs, where: vlan-id specifies an untagged VLAN number. Range: 2-4094 If you configure additional VLAN membership and save it to the startup configuration, the new VLAN configuration takes place immediately.
vlan tagged 2-4 ! port-channel-protocol LACP port-channel 1 mode active ! protocol lldp advertise management-tlv system-name dcbx port-role auto-downstream no shutdown Dell(conf-if-te-0/2)# Except for hybrid ports, only a tagged interface can be a member of multiple VLANs. You can assign hybrid ports to two VLANs if the port is untagged in one VLAN and tagged in all others.
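As a sketch of manual VLAN membership configuration in PMUX mode (the port and VLAN IDs are illustrative):

```
Dell(conf)# interface tengigabitethernet 0/2
Dell(conf-if-te-0/2)# vlan tagged 2-4
Dell(conf-if-te-0/2)# vlan untagged 5
```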
Port Channel Definitions and Standards Link aggregation is defined by IEEE 802.3ad as a method of grouping multiple physical interfaces into a single logical interface—a link aggregation group (LAG) or port channel. A LAG is “a group of links that appear to a MAC client as if they were a single link” according to IEEE 802.3ad. In Dell Networking OS, a LAG is referred to as a port channel interface. A port channel provides redundancy by aggregating physical interfaces into one logical interface.
10GbE Interface in Port Channels When TenGigabitEthernet interfaces are added to a port channel, the interfaces must share a common speed. When interfaces have a configured speed different from the port channel speed, the software disables those interfaces. The common speed is determined when the port channel is first enabled. At that time, the software checks the first interface listed in the port channel configuration.
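The common-speed rule described above can be sketched as follows. This is an illustrative model of the behavior, not Dell Networking OS code: the first listed interface's speed becomes the channel speed, and members configured with a different speed are disabled.

```python
def port_channel_common_speed(member_speeds: list) -> tuple:
    """Return the channel's common speed and, per member, whether it
    stays enabled (True) or is disabled for a speed mismatch (False)."""
    common = member_speeds[0]          # first listed interface sets the speed
    enabled = [speed == common for speed in member_speeds]
    return common, enabled
```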
To display detailed information on a port channel, enter the show interfaces port-channel command in EXEC Privilege mode. The example below shows the port channel's mode (L2 for Layer 2, L3 for Layer 3, and L2L3 for a Layer 2 port channel assigned to a routed VLAN), the status, and the number of interfaces belonging to the port channel. In this example, Port-channel 1 is a dynamically created port channel based on the NIC teaming configuration in connected servers, learned via LACP.
NOTE: When creating an interface range, interfaces appear in the order they were entered and are not sorted. To display all interfaces that have been validated under the interface range context, use the show range command in Interface Range mode. To display the running configuration only for interfaces that are part of an interface range, use the show configuration command in Interface Range mode.
Commas The example below shows how to use commas to add different interface ranges to the same bulk configuration, enabling all Ten-Gigabit Ethernet interfaces in the range 0/1 to 0/5.
Multiple-Range Bulk Configuration: Ten-Gigabit Ethernet and Ten-Gigabit Ethernet
Dell(conf-if)# interface range tengigabitethernet 0/1 - 2, tengigabitethernet 0/1 - 5
Dell(conf-if-range-te-0/1-5)# no shutdown
Dell(conf-if-range-te-0/1-5)#
Monitor and Maintain Interfaces You can display interface statistics with the monitor interface command.
Over 255B packets:   0
Over 511B packets:   0
Over 1023B packets:  0
Error statistics:
Input underruns:     0
Input giants:        0
Input throttles:     0
Input CRC:           0
Input IP checksum:   0
Input overrun:       0
Output underruns:    0
Output throttles:    0
m - Change mode                  c - Clear screen
l - Page up                      a - Page down
T - Increase refresh interval    t - Decrease refresh interval
q - Quit
Flow Control Using Ethernet Pause Frames An Aggregator auto-conf
The following error message appears if you try to configure half duplex while flow control is on: Can't configure half duplex when flowcontrol is on, config ignored. MTU Size The Aggregator auto-configures interfaces to use a maximum MTU size of 12,000 bytes. If a packet includes a Layer 2 header, the difference in bytes between the link MTU and IP MTU must be enough to include the Layer 2 header. For example, for tagged VLAN packets, if the IP MTU is 1400, the link MTU must be no less than 1422.
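The link-MTU arithmetic above is simply the IP MTU plus the Layer 2 overhead. The following small Python sketch makes the calculation explicit; the constants reflect standard Ethernet framing, and the function name is ours for illustration:

```python
# Standard Ethernet Layer 2 overhead, in bytes
ETH_HEADER = 14   # destination MAC (6) + source MAC (6) + EtherType (2)
VLAN_TAG = 4      # 802.1Q tag added to tagged frames
FCS = 4           # frame check sequence (CRC)

def min_link_mtu(ip_mtu: int, tagged: bool = True) -> int:
    """Smallest link MTU that can carry packets of the given IP MTU."""
    return ip_mtu + ETH_HEADER + FCS + (VLAN_TAG if tagged else 0)

# min_link_mtu(1400)               -> 1422 (tagged, as in the example above)
# min_link_mtu(1500, tagged=False) -> 1518
```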
For example, the VLAN contains tagged members with a link MTU of 1522 and an IP MTU of 1500 and untagged members with a link MTU of 1518 and an IP MTU of 1500. The VLAN’s Link MTU cannot be higher than 1518 bytes and its IP MTU cannot be higher than 1500 bytes. Auto-Negotiation on Ethernet Interfaces Setting Speed and Duplex Mode of Ethernet Interfaces By default, auto-negotiation of speed and duplex mode is enabled on 10GbE Ethernet interface on an Aggregator.
show interface status Command Example:
Dell# show interfaces status
Port      Description  Status  Speed       Duplex  Vlan
Te 0/1                 Up      10000 Mbit  Full    1-4094
Te 0/2                 Down    Auto        Auto    1-1001,1003-4094
Te 0/3                 Up      10000 Mbit  Full    1-1001,1003-4094
Te 0/4                 Down    Auto        Auto    1-1001,1003-4094
Te 0/5                 Up      10000 Mbit  Full    1-4094
Te 0/6                 Up      10000 Mbit  Full    1-4094
Te 0/7                 Up      10000 Mbit  Full    1-4094
Te 0/8    toB300       Down    Auto        Auto    1-1001,1003-4094
Fc 0/9                 Up      8000 Mbit   Full    --
Fc 0/10                Up      8000 Mbit   Full    --
Te 0/11                Down    Auto        Auto    --
Te 0/12                Down    Auto        Auto    --
speed auto (interface config mode): Supported / Supported / Supported / Supported
speed 1000 (interface config mode): Supported / Supported / Supported / Supported
speed 10000 (interface config mode): Supported / Supported / Not supported / Not supported
negotiation auto (interface config mode): Supported / Not supported (should some error message be thrown?) / Not supported / Not supported
NOTE: Error messages are not thrown where the table says "not supported".
interface, whether the interface supports IEEE 802.1Q tagging or not, and the VLANs to which the interface belongs.
show interfaces switchport Command Example:
Dell#show interfaces switchport
Codes: U - Untagged, T - Tagged
       x - Dot1x untagged, X - Dot1x tagged
       G - GVRP tagged, M - Trunk, H - VSN tagged
       i - Internal untagged, I - Internal tagged, v - VLT untagged, V - VLT tagged
Name: TenGigabitEthernet 0/1
802.1QTagged: False
Vlan membership:
Q Vlans
U 1
Name: TenGigabitEthernet 0/2
802.
Without an interface specified, the command clears all interface counters. • (OPTIONAL) Enter the following interface keywords and slot/port or number information: • For a Port Channel interface, enter the keyword portchannel followed by a number from 1 to 128. • For a 10-Gigabit Ethernet interface, enter the keyword TenGigabitEthernet followed by the slot/port numbers. • For a VLAN, enter the keyword vlan followed by a number from 1 to 4094.
9 iSCSI Optimization An Aggregator enables internet small computer system interface (iSCSI) optimization with default iSCSI parameter settings (Default iSCSI Optimization Values) and is auto-provisioned to support: iSCSI Optimization: Operation To display information on iSCSI configuration and sessions, use show commands. iSCSI optimization enables quality-of-service (QoS) treatment for iSCSI traffic.
• iSCSI DCBx TLVs are supported. The following figure shows iSCSI optimization between servers in a server enclosure and a storage array in which an Aggregator connects installed servers (iSCSI initiators) to a storage array (iSCSI targets) in a SAN network. iSCSI optimization running on the Aggregator is configured to use dot1p priority-queue assignments to ensure that iSCSI traffic in these sessions receives priority treatment when forwarded on Aggregator hardware. Figure 16.
You can configure the switch to monitor traffic for additional port numbers or a combination of port number and target IP address, and you can remove the well-known port numbers from monitoring.
occur like jumbo frames on all ports and no storm control and spanning tree port-fast on the port of detection. Configuring iSCSI Optimization To configure iSCSI optimization, use the following commands. 1. For a non-DCB environment: Enable session monitoring. CONFIGURATION mode cam-acl l2acl 4 ipv4acl 4 ipv6acl 0 ipv4qos 2 l2qos 1 l2pt 0 ipmacacl 0 vman-qos 0 ecfmacl 0 fcoeacl 0 iscsioptacl 2 NOTE: Content addressable memory (CAM) allocation is optional.
[no] iscsi target port tcp-port-1 [tcp-port-2...tcp-port-16] [ip-address address] • tcp-port-n is the TCP port number or a list of TCP port numbers on which the iSCSI target listens to requests. You can configure up to 16 target TCP ports on the switch in one command or multiple commands. The default is 860, 3260. Separate port numbers with a comma.
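For instance, adding and later removing a monitored target port might look like the following sketch (the port number 3261 and the address 10.1.1.10 are illustrative values, not defaults):

```
Dell(conf)# iscsi target port 3261 ip-address 10.1.1.10
Dell(conf)# no iscsi target port 3261 ip-address 10.1.1.10
```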
You can send iSCSI TLVs either globally or on a specified interface. The interface configuration takes priority over global configuration. The default is Enabled. 10. (Optional) Configures the advertised priority bitmap in iSCSI application TLVs. LLDP CONFIGURATION mode [no] iscsi priority-bits. The default is 4 (0x10 in the bitmap). 11. (Optional) Enter interface configuration mode to configure the auto-detection of Dell Compellent disk arrays. CONFIGURATION mode interface port-type slot/port 12.
-----------------------------------------------
TCP Port   Target IP Address
3260
860
show iscsi sessions Command Example
Dell# show iscsi sessions
Session 0:
----------------------------------------------------------------------------------------
Target: iqn.2001-05.com.equallogic:0-8a0906-0e70c2002-10a0018426a48c94-iom010
Initiator: iqn.1991-05.com.microsoft:win-x9l8v27yajg
ISID: 400001370000
Session 1:
----------------------------------------------------------------------------------------
Target: iqn.2001-05.
Link Aggregation 10 The I/O Aggregator auto-configures with link aggregation groups (LAGs) as follows: • All uplink ports are automatically configured in a single port channel (LAG 128). • Server-facing LAGs are automatically configured if you configure server for link aggregation control protocol (LACP)-based NIC teaming (Network Interface Controller (NIC) Teaming). No manual configuration is required to configure Aggregator ports in the uplink or a server-facing LAG.
number is assigned based on the first available number in the range from 1 to 127. For each unique remote system-id and port-key combination, a new LAG is formed and the port automatically becomes a member of the LAG. All ports with the same combination of system ID and port key automatically become members of the same LAG. Ports are automatically removed from the LAG if the NIC teaming configuration on a serverfacing port changes or if the port goes operationally down.
• Creating a Port Channel (mandatory) • Adding a Physical Interface to a Port Channel (mandatory) • Reassigning an Interface to a New Port Channel (optional) • Configuring the Minimum Oper Up Links in a Port Channel (optional) • Configuring VLAN Tags for Member Interfaces (optional) • Deleting or Disabling a Port Channel (optional) Creating a Port Channel You can create up to 128 port channels with four port members per group on the Aggregator.
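A PMUX-mode sketch of creating a port channel and adding a member, based on the commands described in this section (the channel number and port are illustrative):

```
Dell(conf)# interface port-channel 3
Dell(conf-if-po-3)# channel-member tengigabitethernet 0/5
Dell(conf-if-po-3)# no shutdown
```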
To add a physical interface to a port, use the following commands. 1. Add the interface to a port channel. INTERFACE PORT-CHANNEL mode channel-member interface This command is applicable only in PMUX mode. The interface variable is the physical interface type and slot/port information. 2. Double check that the interface was added to the port channel.
0 runts, 0 giants, 0 throttles 0 CRC, 0 overrun, 0 discarded Output Statistics: 0 packets, 0 bytes, 0 underruns 0 64-byte pkts, 0 over 64-byte pkts, 0 over 127-byte pkts 0 over 255-byte pkts, 0 over 511-byte pkts, 0 over 1023-byte pkts 0 Multicasts, 0 Broadcasts, 0 Unicasts 0 throttles, 0 discarded, 0 collisions, 0 wreddrops Rate info (interval 299 seconds): Input 00.00 Mbits/sec, 0 packets/sec, 0.00% of line-rate Output 00.00 Mbits/sec, 0 packets/sec, 0.
This command is applicable only in PMUX mode. 3. Add the interface to the second port channel. INTERFACE PORT-CHANNEL mode channel-member interface Example of Moving an Interface to a New Port Channel The following example shows moving the TenGigabitEthernet 0/8 interface from port channel 4 to port channel 3.
Dell(conf-if-te-0/2)#vlan tagged 2,3-4
2. Use the switchport command in INTERFACE mode to enable Layer 2 data transmission through an individual interface.
INTERFACE mode
Dell(conf-if-te-0/2)#switchport
This switchport configuration is allowed only in PMUX mode. In all other modes, it is automatically configured.
3. Verify the manually configured VLAN membership (show interfaces switchport interface command).
Dell(config)# io-aggregator auto-lag enable To disable the auto LAG on all the server ports, use the no io-aggregator auto-lag enable command. When disabled, all the server ports associated in a LAG are removed and the LAG itself gets removed. Any LACPDUs received on the server ports are discarded. In VLT mode, the global auto LAG is automatically synced to the peer VLT through ICL message. 2. Enable the auto LAG on a specific server port.
0 CRC, 0 overrun, 0 discarded Output Statistics: 0 packets, 0 bytes, 0 underruns 0 64-byte pkts, 0 over 64-byte pkts, 0 over 127-byte pkts 0 over 255-byte pkts, 0 over 511-byte pkts, 0 over 1023-byte pkts 0 Multicasts, 0 Broadcasts, 0 Unicasts 0 throttles, 0 discarded, 0 collisions, 0 wreddrops Rate info (interval 299 seconds): Input 00.00 Mbits/sec, 0 packets/sec, 0.00% of line-rate Output 00.00 Mbits/sec, 0 packets/sec, 0.
Dell(conf)#interface port-channel 128 Dell(conf-if-po-128)#minimum-links 4 Use the show interfaces port-channel command to view information regarding the configured LAG or port channel settings. The Minimum number of links to bring Port-channel up is field in the output of this command displays the configured number of active links for the LAG to be enabled.
Preserving LAG and Port Channel Settings in Nonvolatile Storage Use the write memory command on an I/O Aggregator, which operates in either standalone or PMUX modes, to save the LAG port channel configuration parameters. This behavior enables the port channels to be brought up because the configured interface attributes are available in the system database during the booting of the device.
Table 8.
LineSpeed 40000 Mbit Members in this channel: Te0/9 Te0/10 Te 0/11 Te0/12 ARP type: ARPA, ARP Timeout 04:00:00 Last clearing of "show interface" counters 00:11:50 Queueing strategy: fifo Input Statistics: 182 packets, 17408 bytes 92 64-byte pkts, 0 over 64-byte pkts, 90 over 127-byte pkts 0 over 255-byte pkts, 0 over 511-byte pkts, 0 over 1023-byte pkts 182 Multicasts, 0 Broadcasts 0 runts, 0 giants, 0 throttles 0 CRC, 0 overrun, 0 discarded Output Statistics: 2999 packets, 383916 bytes, 0 underruns 5 64-by
show interfaces port-channel 1 Command Example Dell# show interfaces port-channel 1 Port-channel 1 is up, line protocol is up Created by LACP protocol Hardware address is 00:01:e8:e1:e1:c1, Current address is 00:01:e8:e1:e1:c1 Interface index is 1107755009 Minimum number of links to bring Port-channel up is 1 Internet address is not set Mode of IP Address Assignment : NONE DHCP Client-ID :lag10001e8e1e1c1 MTU 12000 bytes, IP MTU 11982 bytes LineSpeed 10000 Mbit Members in this channel: Te 0/12(U) ARP type:
Layer 2 11 The Aggregator supports CLI commands to manage the MAC address table: • Clearing the MAC Address Entries • Displaying the MAC Address Table The Aggregator auto-configures with support for Network Interface Controller (NIC) Teaming. NOTE: On an Aggregator, all ports are configured by default as members of all (4094) VLANs, including the default VLAN. All VLANs operate in Layer 2 mode.
Displaying the MAC Address Table To display the MAC address table, use the following command. • Display the contents of the MAC address table. EXEC Privilege mode NOTE: This command is available only in PMUX mode. show mac-address-table [address | aging-time [vlan vlan-id]| count | dynamic | interface | static | vlan] – address: displays the specified entry. – aging-time: displays the configured aging-time.
Figure 17. Redundant NOCs with NIC Teaming MAC Address Station Move When you use NIC teaming, consider that the server MAC address is originally learned on Port 0/1 of the switch (see figure below). If the NIC fails, the same MAC address is learned on Port 0/5 of the switch. The MAC address is disassociated with one port and re-associated with another in the ARP table; in other words, the ARP entry is “moved”. The Aggregator is auto-configured to support MAC Address station moves.
Figure 18. MAC Address Station Move MAC Move Optimization Station-move detection takes 5000ms because this is the interval at which the detection algorithm runs.
Link Layer Discovery Protocol (LLDP) 12 Link layer discovery protocol (LLDP) advertises connectivity and management from the local station to the adjacent stations on an IEEE 802 LAN. LLDP facilitates multi-vendor interoperability by using standard management tools to discover and make available a physical topology for network management. The Dell Networking operating software implementation of LLDP is based on IEEE standard 802.1AB.
TLVs are encapsulated in a frame called an LLDP data unit (LLDPDU), which is transmitted from one LLDP-enabled device to its LLDP-enabled neighbors. LLDP is a one-way protocol. LLDP-enabled devices (LLDP agents) can transmit and/or receive advertisements, but they cannot solicit and do not respond to advertisements. There are five types of TLVs (as shown in the below table). All types are mandatory in the construction of an LLDPDU except Optional TLVs.
Management TLVs A management TLV is an optional TLV sub-type. This kind of TLV contains essential management information about the sender. Organizationally Specific TLVs A professional organization or a vendor can define organizationally specific TLVs. They have two mandatory fields (as shown in the following illustration) in addition to the basic TLV fields. • Organizationally Unique Identifier (OUI)—a unique number assigned by the IEEE to an organization or vendor.
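Per IEEE 802.1AB, every TLV starts with a 7-bit type and a 9-bit length, and organizationally specific TLVs (type 127) carry the 3-byte OUI and a 1-byte subtype at the start of the value. The following Python sketch illustrates the encoding; the function names are ours, for illustration only:

```python
def pack_tlv(tlv_type: int, value: bytes) -> bytes:
    """Encode a basic 802.1AB TLV: 7-bit type, 9-bit length, then value."""
    assert 0 <= tlv_type < 128 and len(value) < 512
    header = (tlv_type << 9) | len(value)
    return header.to_bytes(2, "big") + value

def pack_org_tlv(oui: bytes, subtype: int, info: bytes) -> bytes:
    """Encode an organizationally specific TLV (type 127):
    value = 3-byte OUI + 1-byte subtype + information string."""
    return pack_tlv(127, oui + bytes([subtype]) + info)

# Example: a Chassis ID TLV (type 1) whose subtype 4 means "MAC address"
# would be built as pack_tlv(1, bytes([4]) + mac_bytes)
```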
Type TLV Description 8 Management address Indicates the network address of the management interface. The Dell Networking OS does not currently support this TLV. 127 Port-VLAN ID On Dell Networking systems, indicates the untagged VLAN to which a port belongs. 127 Port and Protocol VLAN ID On Dell Networking systems, indicates the tagged VLAN to which a port belongs (and the untagged VLAN to which a port belongs if the port is in Hybrid mode).
Type TLV Description and the port identification of the LAG. The Dell Networking OS does not currently support this TLV. 127 Maximum Frame Size Indicates the maximum frame size capability of the MAC and PHY. LLDP-MED Capabilities TLV The LLDP-MED capabilities TLV communicates the types of TLVs that the endpoint device and the network connectivity device support. LLDP-MED network connectivity devices must transmit the Network Policies TLV.
Table 11. LLDP-MED Device Types Value Device Type 0 Type Not Defined 1 Endpoint Class 1 2 Endpoint Class 2 3 Endpoint Class 3 4 Network Connectivity 5–255 Reserved LLDP-MED Network Policies TLV A network policy in the context of LLDP-MED is a device’s VLAN configuration and associated Layer 2 and Layer 3 configurations.
Type Application Description 4 Guest Voice Signaling Specify this application type only if guest voice control packets use a separate network policy than voice data. 5 Softphone Voice Specify this application type for softphone voice clients running on typical data-centric devices, such as PCs or laptops. 6 Video Conferencing Specify this application type for dedicated video conferencing and other similar appliances supporting real-time interactive video.
Figure 24. Extended Power via MDI TLV LLDP Operation On an Aggregator, LLDP operates as follows: • LLDP is enabled by default. • LLDPDUs are transmitted and received by default. LLDPDUs are transmitted periodically. The default interval is 30 seconds. • LLDPDU information received from a neighbor expires after the default Time to Live (TTL) value: 120 seconds. • Dell Networking OS supports up to eight neighbors per interface.
protocol lldp R1(conf-if-te-0/3-lldp)# Viewing Information Advertised by Adjacent LLDP Agents To view brief information about adjacent devices or to view all the information that neighbors are advertising, use the following commands. • Display brief information about adjacent devices. • show lldp neighbors Display all of the information that neighbors are advertising.
Clearing LLDP Counters You can clear LLDP statistics that are maintained on an Aggregator for LLDP counters for frames transmitted to and received from neighboring devices on all or a specified physical interface. To clear LLDP counters, enter the clear lldp counters command. Command Syntax Command Mode Purpose clear lldp counters [interface] EXEC Privilege Clear counters for LLDP frames sent to and received from neighboring devices on all Aggregator interfaces or on a specified interface.
Figure 25. The debug lldp detail Command — LLDPDU Packet Dissection Relevant Management Objects Dell Networking OS supports all IEEE 802.1AB MIB objects. The following tables list the objects associated with: • received and transmitted TLVs • the LLDP configuration on the local agent • IEEE 802.1AB Organizationally Specific TLVs • received and transmitted LLDP-MED TLVs Table 13.
MIB Object Category Basic TLV Selection LLDP Variable LLDP MIB Object Description msgTxInterval lldpMessageTxInterval Transmit Interval value. rxInfoTTL lldpRxInfoTTL Time to live for received TLVs. txInfoTTL lldpTxInfoTTL Time to live for transmitted TLVs. mibBasicTLVsTxEnable lldpPortConfigTLVsTxEnable Indicates which management TLVs are enabled for system ports.
Table 14.
TLV Type TLV Name TLV Variable System LLDP MIB Object
interface numbering subtype: Local lldpLocManAddrIfSubtype; Remote lldpRemManAddrIfSubtype
interface number: Local lldpLocManAddrIfId; Remote lldpRemManAddrIfId
OID: Local lldpLocManAddrOID; Remote lldpRemManAddrOID
Table 16.
TLV Sub-Type TLV Name TLV Variable System LLDP-MED MIB Object
3 Location Identifier
Location Data Format: Local lldpXMedLocLocationSubtype; Remote lldpXMedRemLocationSubtype
Location ID Data: Local lldpXMedLocLocationInfo; Remote lldpXMedRemLocationInfo
4 Extended Power via MDI
Power Device Type: Local lldpXMedLocXPoEDeviceType; Remote lldpXMedRemXPoEDeviceType
Power Source: Local lldpXMedLocXPoEPSEPowerSource, lldpXMedLocXPoEPDPowerSource; Remote lldpXMedRemXPoEPSEPowerSource
lld
Port Monitoring 13 The Aggregator supports user-configured port monitoring. See Configuring Port Monitoring for the configuration commands to use. Port monitoring copies all incoming or outgoing packets on one port and forwards (mirrors) them to another port. The source port is the monitored port (MD) and the destination port is the monitoring port (MG). Configuring Port Monitoring To configure port monitoring, use the following commands. 1.
Example of Viewing Port Monitoring Configuration To display information on currently configured port-monitoring sessions, use the show monitor session command from EXEC Privilege mode.
• The destination interface must be an uplink port (ports 9 to 12). • In general, a monitoring port should have no ip address and no shutdown as the only configuration; the Dell Networking OS permits a limited set of commands for monitoring ports. You can display these commands using the ? command. • A monitoring port may not be a member of a VLAN. • There may only be one destination port in a monitoring session. • A source port (MD) can only be monitored by one destination port (MG).
• If the MD port is a Layer 2 port, the frames are tagged with the VLAN ID of the VLAN to which the MD belongs. • If the MD port is a Layer 3 port, the frames are tagged with VLAN ID 4095. • If the MD port is in a Layer 3 VLAN, the frames are tagged with the respective Layer 3 VLAN ID.
Security 14 Security features are supported on the I/O Aggregator. This chapter describes several ways to provide access security to the Dell Networking system. For details about all the commands described in this chapter, refer to the Security chapter in the Dell PowerEdge FN I/O Aggregator Command Line Reference Guide. Understanding Banner Settings This functionality is supported on the I/O Aggregator.
AAA Accounting

Authentication, authorization, and accounting (AAA) accounting is part of the AAA security model. For details about commands related to AAA security, refer to the Security chapter in the Dell Networking OS Command Reference Guide. AAA accounting enables tracking of the services that users are accessing and the amount of network resources consumed by those services.
Suppressing AAA Accounting for Null Username Sessions When you activate AAA accounting, the Dell Networking OS software issues accounting records for all users on the system, including users whose username string is NULL because of protocol translation. An example of this is a user who comes in on a line where the AAA authentication login method-list none command is applied.
Monitoring AAA Accounting Dell Networking OS does not support periodic interim accounting because the periodic command can cause heavy congestion when many users are logged in to the network. No specific show command exists for TACACS+ accounting. To obtain accounting records displaying information about users currently logged in, use the following command. • Step through all active sessions and print all the accounting records for the actively accounted functions.
• Enabling AAA Authentication For a complete list of all commands related to login authentication, refer to the Security chapter in the Dell Networking OS Command Reference Guide. Configure Login Authentication for Terminal Lines You can assign up to five authentication methods to a method list. Dell Networking OS evaluates the methods in the order in which you enter them in each list.
Enabling AAA Authentication To enable AAA authentication, use the following command. • Enable AAA authentication. CONFIGURATION mode aaa authentication enable {method-list-name | default} method1 [... method4] – default: uses the listed authentication methods that follow this argument as the default list of methods when a user logs in. – method-list-name: character string used to name the list of enable authentication methods activated when a user logs in. – method1 [...
Example of Enabling Local Authentication for the Console and Remote Authentication for VTY Lines Dell(config)# aaa authentication enable mymethodlist radius tacacs Dell(config)# line vty 0 9 Dell(config-line-vty)# enable authentication mymethodlist Server-Side Configuration • TACACS+ — When using TACACS+, Dell Networking OS sends an initial packet with service type SVC_ENABLE, and then sends a second packet with just the password. The TACACS server must have an entry for username $enable$.
Configuration Task List for RADIUS

To authenticate users using RADIUS, you must specify at least one RADIUS server so that the system can communicate with it, and configure RADIUS as one of your authentication methods. The following list includes the configuration tasks for RADIUS.
• Enable AAA login authentication for the specified RADIUS method list. LINE mode login authentication {method-list-name | default} • This procedure is mandatory if you are not using default lists. To use the method list: CONFIGURATION mode authorization exec methodlist Specifying a RADIUS Server Host When configuring a RADIUS server host, you can set different communication parameters, such as the UDP port, the key password, the number of retries, and the timeout.
Setting Global Communication Parameters for all RADIUS Server Hosts You can configure global communication parameters (auth-port, key, retransmit, and timeout parameters) and specific host communication parameters on the same system. However, if you configure both global and specific host parameters, the specific host parameters override the global parameters for that RADIUS server host. To set global communication parameters for all RADIUS server hosts, use the following commands.
TACACS+

Dell Networking OS supports a terminal access controller access control system (TACACS+) client, including support for login authentication.

Configuration Task List for TACACS+

The following list includes the configuration tasks for TACACS+ functions.
Example of a Failed Authentication To view the configuration, use the show config in LINE mode or the show running-config tacacs + command in EXEC Privilege mode. If authentication fails using the primary method, Dell Networking OS employs the second method (or third method, if necessary) automatically. For example, if the TACACS+ server is reachable, but the server key is invalid, Dell Networking OS proceeds to the next authentication method.
Example of Specifying a TACACS+ Server Host Dell(conf)# Dell(conf)#aaa authentication login tacacsmethod tacacs+ Dell(conf)#aaa authentication exec tacacsauthorization tacacs+ Dell(conf)#tacacs-server host 25.1.1.2 key Force Dell(conf)# Dell(conf)#line vty 0 9 Dell(config-line-vty)#login authentication tacacsmethod Dell(config-line-vty)#end Specifying a TACACS+ Server Host To specify a TACACS+ server host and configure its communication parameters, use the following command.
Enabling SCP and SSH Secure shell (SSH) is a protocol for secure remote login and other secure network services over an insecure network. Dell Networking OS is compatible with SSH versions 1.5 and 2, in both the client and server modes. SSH sessions are encrypted and use authentication. SSH is enabled by default. For details about the command syntax, refer to the Security chapter in the Dell Networking OS Command Line Interface Reference Guide.
Using SCP with SSH to Copy a Software Image To use secure copy (SCP) to copy a software image through an SSH connection from one switch to another, use the following commands. 1. On Chassis One, set the SSH port number (port 22 by default). CONFIGURATION mode ip ssh server port number 2. On Chassis One, enable SSH. CONFIGURATION mode ip ssh server enable 3. On Chassis Two, invoke SCP. CONFIGURATION mode copy scp: flash: 4.
• The SSH server and client are enhanced to support the VRF awareness functionality. Using this capability, an SSH client or server can use a VRF instance name to look up the correct routing table and establish a connection. Enabling SSH Authentication by Password Authenticate an SSH client by prompting for a password when attempting to connect to the Dell Networking system. This setup is the simplest method of authentication and uses SSH version 1.
Example of Generating RSA Keys admin@Unix_client#ssh-keygen -t rsa Generating public/private rsa key pair. Enter file in which to save the key (/home/admin/.ssh/id_rsa): /home/admin/.ssh/id_rsa already exists. Overwrite (y/n)? y Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /home/admin/.ssh/id_rsa. Your public key has been saved in /home/admin/.ssh/id_rsa.pub. Configuring Host-Based SSH Authentication Authenticate a particular host.
admin@Unix_client# ls id_rsa id_rsa.pub shosts admin@Unix_client# cat shosts 10.16.127.201, ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEA8K7jLZRVfjgHJzUOmXxuIbZx/AyW hVgJDQh39k8v3e8eQvLnHBIsqIL8jVy1QHhUeb7GaDlJVEDAMz30myqQbJgXBBRTWgBpLWwL/ doyUXFufjiL9YmoVTkbKcFmxJEMkE3JyHanEi7hg34LChjk9hL1by8cYZP2kYS2lnSyQWk= The following example shows creating rhosts. admin@Unix_client# ls id_rsa id_rsa.pub rhosts shosts admin@Unix_client# cat rhosts 10.16.127.
The Telnet server or client is VRF-aware. You can enable a Telnet server or client to listen to a specific VRF by using the vrf vrf-instance-name parameter in the telnet command. This capability enables a Telnet server or client to look up the correct routing table and establish a connection.
local database and applies it. (Dell Networking OS can then close the connection if a user is denied access.) NOTE: If a VTY user logs in with RADIUS authentication, the privilege level is applied from the RADIUS server only if you configure RADIUS authentication. The following example shows how to allow or deny a Telnet connection to a user. Users see a login prompt even if they cannot log in. No access class is configured for the VTY line. It defaults from the local database.
To apply a MAC ACL on a VTY line, use the same access-class command as IP ACLs. The following example shows how to deny incoming connections from subnet 10.0.0.0 without displaying a login prompt.
15 Simple Network Management Protocol (SNMP) Network management stations use SNMP to retrieve or alter management data from network elements. A datum of management information is called a managed object; the value of a managed object can be static or variable. Network elements store managed objects in a database called a management information base (MIB).
Setting up SNMP

Dell Networking OS supports SNMP version 1 and version 2, which are community-based security models. The primary difference between the two versions is that version 2 supports two additional protocol operations (the informs operation and the snmpgetbulk query) and one additional object (the counter64 object).

Creating a Community

For SNMPv1 and SNMPv2, create a community to enable community-based security in the Dell Networking OS.
• Read the value of the managed object directly below the specified object. • snmpgetnext -v version -c community agent-ip {identifier.instance | descriptor.instance} Read the value of many objects at once. snmpwalk -v version -c community agent-ip {identifier.instance | descriptor.instance} In the following example, the value “4” displays in the OID before the IP address for IPv4. For an IPv6 IP address, a value of “16” displays.
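The address-length prefix noted above (a "4" before an IPv4 address, a "16" before an IPv6 address) can be sketched with a small helper. The function name is illustrative only, not part of any Dell or Net-SNMP tool:

```python
# Sketch of how a management-address OID suffix is formed: the address
# length (4 for IPv4, 16 for IPv6) precedes the address octets.
import ipaddress

def mgmt_address_oid_suffix(address):
    """Return the dotted OID suffix for a management IP address."""
    packed = ipaddress.ip_address(address).packed  # 4 octets (IPv4) or 16 (IPv6)
    return ".".join([str(len(packed))] + [str(octet) for octet in packed])
```

For example, mgmt_address_oid_suffix("10.11.131.185") yields "4.10.11.131.185", matching the "4" shown before the IPv4 address in the walk output.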
To display the ports in a VLAN, send an snmpget request for the object dot1qStaticEgressPorts using the interface index as the instance number, as shown in the following example. Example of Viewing the Ports in a VLAN in SNMP snmpget -v2c -c mycommunity 10.11.131.185 .1.3.6.1.2.1.17.7.1.4.3.1.2.1107787786 SNMPv2-SMI::mib-2.17.7.1.4.3.1.2.
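The value returned for dot1qStaticEgressPorts is an octet-string bitmap with one bit per bridge port. A minimal decoder is sketched below; it assumes the Q-BRIDGE-MIB PortList convention (the most significant bit of the first octet represents port 1), and the function name is illustrative:

```python
def ports_from_egress_bitmap(hex_string):
    """Decode a dot1qStaticEgressPorts Hex-STRING into port numbers.

    Assumes the Q-BRIDGE-MIB PortList convention: the most significant
    bit of the first octet represents port 1.
    """
    data = bytes.fromhex(hex_string.replace(" ", ""))
    ports = []
    for i, octet in enumerate(data):
        for bit in range(8):
            if octet & (0x80 >> bit):           # test bits high-to-low
                ports.append(i * 8 + bit + 1)   # ports are numbered from 1
    return ports
```

For example, ports_from_egress_bitmap("C0 00") returns [1, 2].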
Fetching Dynamic MAC Entries using SNMP The Aggregator supports the RFC 1493 dot1d table for the default VLAN and the dot1q table for all other VLANs. NOTE: The table contains none of the other information provided by the show vlan command, such as port speed or whether the ports are tagged or untagged. NOTE: The 802.1q Q-BRIDGE MIB defines VLANs regarding 802.1d, as 802.1d itself does not define them.
>snmpwalk -v 2c -c techpubs 10.11.131.162 .1.3.6.1.2.1.17.4.3.1 SNMPv2-SMI::mib-2.17.4.3.1.1.0.1.232.6.149.172 = Hex-STRING: 00 01 E8 06 95 AC Example of Fetching Dynamic MAC Addresses on a Non-default VLAN In the following example, TenGigabitEthernet 0/7 is moved to VLAN 1000, a non-default VLAN. To fetch the MAC addresses learned on non-default VLANs, use the object dot1qTpFdbTable. The instance number is the VLAN number concatenated with the decimal conversion of the MAC address.
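That instance-number construction (VLAN ID followed by the decimal form of each MAC octet) can be sketched as follows; the helper name is illustrative:

```python
def fdb_instance(vlan_id, mac):
    """Build the dot1qTpFdbTable instance number: the VLAN ID followed
    by each MAC address octet converted to decimal."""
    octets = [int(part, 16) for part in mac.split(":")]
    return ".".join([str(vlan_id)] + [str(octet) for octet in octets])
```

For MAC 00:01:e8:06:95:ac on VLAN 1000 this gives "1000.0.1.232.6.149.172", matching the decimal MAC suffix visible in the walk output above.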
are not given. The interface is physical, so this must be represented by a 0 bit, and the unused bit is always 0. These two bits are not given because they are the most significant bits, and leading zeros are often omitted. For interface indexing, slot and port numbering begins with binary one. If the Dell Networking system begins slot and port numbering from 0, binary 1 represents slot and port 0.
dot3aCurAggVlanId   SNMPv2-SMI::enterprises.6027.3.2.1.1.4.1.1.1.0.0.0.0.0.1.1 = INTEGER: 1
dot3aCurAggMacAddr  SNMPv2-SMI::enterprises.6027.3.2.1.1.4.1.2.1.0.0.0.0.0.1.1 = Hex-STRING: 00 00 00 00 00 01
dot3aCurAggIndex    SNMPv2-SMI::enterprises.6027.3.2.1.1.4.1.3.1.0.0.0.0.0.1.1 = INTEGER: 1
dot3aCurAggStatus   SNMPv2-SMI::enterprises.6027.3.2.1.1.4.1.4.1.0.0.0.0.0.1.1 = INTEGER: 1  << Status: 1 – active, 2 – inactive
For L3 LAG, you do not have this support. SNMPv2-MIB::sysUpTime.
Unit Slot Expected Inserted Next Boot Status/Power(On/Off) -----------------------------------------------------------------------1 0 SFP+ SFP+ AUTO Good/On 1 1 QSFP+ QSFP+ AUTO Good/On * - Mismatch Dell# The status of the MIBS is as follows: $ snmpwalk -c public -v 2c 10.16.150.162 .1.3.6.1.2.1.47.1.1.1.1.2 SNMPv2-SMI::mib-2.47.1.1.1.1.2.1 = "" SNMPv2-SMI::mib-2.47.1.1.1.1.2.2 = STRING: "PowerEdge-FN-410S-IOA" SNMPv2-SMI::mib-2.47.1.1.1.1.2.3 = STRING: "Chassis 0 container" SNMPv2-SMI::mib-2.47.1.1.1.1.2.
Fetching the Switchport Configuration and the Logical Interface Configuration Important Points to Remember • The SNMP should be configured in the chassis and the chassis management interface should be up with the IP address. • If a port is configured in a VLAN, the respective bit for that port will be set to 1 in the specific VLAN. • In the aggregator, all the server ports and uplink LAG 128 will be in switchport. Hence, the respective bits are set to 1. The following output is for the default VLAN.
Stacking 16

An Aggregator auto-configures to operate in standalone mode. To use an Aggregator in a stack, you must manually configure it using the CLI to operate in stacking mode. Stacking is supported on the FN410S and FN410T Aggregators with ports 9 and 10 as the stack ports. The Aggregator supports both ring and daisy-chain topologies; only Aggregators of the same type can be stacked together. FN410S and FN410T Aggregators support two-unit in-chassis stacking and stacking of up to six units across multiple chassis.
2. The switch with the highest MAC address at boot time. 3. A unit is selected as Standby by the administrator, and a failover action is manually initiated or occurs due to a Master unit failure. No record of previous stack mastership is kept when a stack loses power. As it reboots, the election process once again determines the Master and Standby switches. As long as the priority has not changed on any members, the stack retains the same Master and Standby.
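The election order can be sketched as a sort. This is an illustration only, assuming a configured unit priority is compared first (higher wins) with the higher MAC address breaking ties; it is not the switch's actual implementation:

```python
def elect_master_and_standby(units):
    """Rank stack units for the Master and Standby roles.

    Assumption (illustrative): a higher configured priority wins,
    and the higher MAC address at boot time breaks ties.
    """
    ranked = sorted(
        units,
        key=lambda u: (u["priority"], int(u["mac"].replace(":", ""), 16)),
        reverse=True,
    )
    return ranked[0], ranked[1]  # (master, standby)
```

With equal priorities, the unit with the numerically higher MAC address becomes Master, as described above.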
Cabling the Switch Stack Dell PowerEdge FN I/O Aggregators are connected to operate as a single stack in a ring topology using the SFP+ or Base-T ports on the front end ports 9 and 10. To create a stack in either a ring or daisy-chain topology, you can use two units on the same chassis or up to six units across multiple chassis. Prerequisite: Before you attach the stacking cables, all Aggregators in the stack must be powered up with the default or reconfigured settings.
Repeat the above steps on each Aggregator in the stack by entering the stackunit 0 iom-mode stack command and saving the configuration. If the stacked switches all reboot at approximately the same time, the Aggregator with the highest MAC address is automatically elected as the master switch. The Aggregator with the next highest MAC address is elected as the standby master.
If an Aggregator is already configured to operate in stacking mode, simply attach SFP+ or direct attach cables to connect 10G ports on the base module of each stacked Aggregator. The new unit synchronizes its running and startup configurations with the stack.
Merging Two Operational Stacks The recommended procedure for merging two operational stacks is as follows: 1. Always power off all units in one stack before connecting to another stack. 2. Add the units as a group by unplugging one stacking cable in the operational stack and physically connecting all unpowered units. 3. Completely cable the stacking connections, making sure the redundant link is also in place.
Troubleshooting a Switch Stack To perform troubleshooting operations on a switch stack, use the following commands on the master switch. 1. Displays the status of stacked ports on stack units. show system stack-ports 2. Displays the master standby unit status, failover configuration, and result of the last master-standby synchronization; allows you to verify the readiness for a stack failover. show redundancy 3. Displays input and output flow statistics on a stacked port.
Master Switch Fails • • Problem: The master switch fails due to a hardware fault, software crash, or power loss. Resolution: A failover procedure begins: 1. Keep-alive messages from the Aggregator master switch time out after 60 seconds and the switch is removed from the stack. 2. The standby switch takes the master role. Data traffic on the new master switch is uninterrupted. Protocol traffic is managed by the control plane. 3. A member switch is elected as the new standby.
-- Stack Info -Unit UnitType Status ReqTyp CurTyp Version Ports -------------------------------------------------------------------------0 Management online PE-FN-410S-IOA PE-FN-410S-IOA 1-0(0-1864) 12 1 Standby card problem PE-FN-410S-IOA unknown 12 2 Member not present 3 Member not present 4 Member not present 5 Member not present Card Problem — Resolved Dell#show system brief Stack MAC : 00:1e:c9:f1:04:82 -- Stack Info -Unit UnitType Status ReqTyp CurTyp Version Ports ------------------------------------
write memory 5. Reload the stack unit to activate the new Dell Networking OS version. CONFIGURATION mode reload Example of Upgrading all Stacked Switches The following example shows how to upgrade all switches in a stack, including the master switch. Dell# upgrade system ftp: A: Address or name of remote host []: 10.11.200.241 Source file name []: //FTOS-XL-8.3.17.0.
CONFIGURATION mode boot system stack-unit unit-number primary system partition 3. Save the configuration. EXEC Privilege mode write memory 4. Reset the stack unit to activate the new Dell Networking OS version. EXEC Privilege mode power-cycle stack-unit unit-number Example of Upgrading a Single Stack Unit The following example shows how to upgrade an individual stack unit.
Broadcast Storm Control 17 In Standalone mode, the broadcast storm control feature is enabled by default on all ports, and disabled on a port when an iSCSI storage device is detected. Broadcast storm control is re-enabled as soon as the connection with an iSCSI device ends. Broadcast traffic on Layer 2 interfaces is limited or suppressed during a broadcast storm. You can view the status of a broadcast-storm control operation by using the show io-aggregator broadcast storm-control status command.
System Time and Date 18 The Aggregator auto-configures the hardware and software clocks with the current time and date. If necessary, you can manually set and maintain the system time and date using the CLI commands described in this chapter. • Setting the Time for the Software Clock • Setting the Time Zone • Setting Daylight Savings Time Setting the Time for the Software Clock You can change the order of the month and day parameters to enter the time and date as time day month year.
• Set the clock to the appropriate timezone. CONFIGURATION mode clock timezone timezone-name offset – timezone-name: Enter the name of the timezone. Do not use spaces. – offset: Enter one of the following: * a number from 1 to 23 as the number of hours in addition to UTC for the timezone. * a minus sign (-) then a number from 1 to 23 as the number of hours.
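The offset semantics (a positive number of hours ahead of UTC, or a minus sign for hours behind) map directly onto a fixed-offset timezone. A sketch using Python's standard library; the function name is illustrative:

```python
from datetime import datetime, timedelta, timezone

def local_time(utc_time, offset_hours):
    """Apply a 'clock timezone' offset to a UTC time.

    offset_hours follows the command's convention: positive for hours
    ahead of UTC, negative (the minus sign) for hours behind.
    """
    return utc_time.astimezone(timezone(timedelta(hours=offset_hours)))
```

For example, with an offset of -8 (Pacific), 12:00 UTC becomes 04:00 local time.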
Example of the clock summer-time Command Dell(conf)#clock summer-time pacific date Mar 14 2012 00:00 Nov 7 2012 00:00 Dell(conf)# Setting Recurring Daylight Saving Time Set a date (and time zone) on which to convert the switch to daylight saving time on a specific day every year. If you have already set daylight saving for a one-time setting, you can set that date and time as the recurring setting with the clock summer-time time-zone recurring command.
Example of the clock summer-time recurring Command Dell(conf)#clock summer-time pacific recurring Mar 14 2012 00:00 Nov 7 2012 00:00 Dell(conf)# NOTE: If you enter <CR> after entering the recurring command parameter, and you have already set a one-time daylight saving time/date, the system uses that time and date as the recurring setting.
19 Uplink Failure Detection (UFD) Feature Description UFD provides detection of the loss of upstream connectivity and, if used with network interface controller (NIC) teaming, automatic recovery from a failed link. A switch provides upstream connectivity for devices, such as servers. If a switch loses its upstream connectivity, downstream devices also lose their connectivity.
Figure 27. Uplink Failure Detection How Uplink Failure Detection Works UFD creates an association between upstream and downstream interfaces. The association of uplink and downlink interfaces is called an uplink-state group. An interface in an uplink-state group can be a physical interface or a port-channel (LAG) aggregation of physical interfaces. An enabled uplink-state group tracks the state of all assigned upstream interfaces.
Figure 28. Uplink Failure Detection Example If only one of the upstream interfaces in an uplink-state group goes down, a specified number of downstream ports associated with the upstream interface are put into a Link-Down state. This number is configurable and is calculated from the ratio of the upstream port bandwidth to the downstream port bandwidth in the same uplink-state group.
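The bandwidth-ratio rule can be illustrated with a small sketch. The function and its rounding are assumptions for illustration; the switch computes this internally:

```python
def downstream_ports_to_disable(upstream_gbps, downstream_gbps,
                                failed_upstream_links=1):
    """Estimate how many downstream ports go Link-Down when upstream
    links fail, from the upstream:downstream bandwidth ratio.

    Illustrative only; the exact rounding the switch uses may differ.
    """
    ratio = upstream_gbps / downstream_gbps
    return int(failed_upstream_links * ratio)
```

For example, with a 40G upstream link and 10G downstream ports, losing one upstream link would take down four downstream ports under this rule.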
Using UFD, you can configure the automatic recovery of downstream ports in an uplink-state group when the link status of an upstream port changes. The tracking of upstream link status does not have a major impact on central processing unit (CPU) usage. UFD and NIC Teaming To implement a rapid failover solution, you can use uplink failure detection on a switch with network adapter teaming on a server. For more information, refer to Network Interface Controller (NIC) Teaming.
– For an example of debug log message, refer to Clearing a UFD-Disabled Interface. Uplink Failure Detection (SMUX mode) In Standalone or VLT modes, by default, all the server-facing ports are tracked by the operational status of the uplink LAG. If the uplink LAG goes down, the aggregator loses its connectivity and is no longer operational. All the server-facing ports are brought down after the specified defer-timer interval, which is 10 seconds by default.
UPLINK-STATE-GROUP mode {upstream | downstream} interface For interface, enter one of the following interface types: • TenGigabit Ethernet: enter tengigabitethernet {slot/port |slot/port-range} • Port channel: enter port-channel {1-128 | port-channel-range} Where port-range and port-channel-range specify a range of ports separated by a dash (-) and/or individual ports/port channels in any order; for example: upstream tengigabitethernet 0/1-2,5,9,11-12 downstream port-channel 1-3,5 • A comma is required
By default, auto-recovery of UFD-disabled downstream ports is enabled. To disable auto-recovery, use the no downstream auto-recover command. 6. Specify the time (in seconds) to wait for the upstream port channel (LAG 128) to come back up before server ports are brought down. UPLINK-STATE-GROUP mode defer-timer seconds NOTE: This command is available in Standalone and VLT modes only. The range is from 1 to 120. 7. (Optional) Enter a text description of the uplink-state group.
Example of Syslog Messages Before and After Entering the clear ufd-disable uplink-state-group Command The following example shows the Syslog messages that display when you clear the UFD-Disabled state from all disabled downstream interfaces in an uplink-state group by using the clear ufd-disable uplink-state-group group-id command. All downstream interfaces return to an operationally up state.
Displaying Uplink Failure Detection To display information on the UFD feature, use any of the following commands. • Display status information on a specified uplink-state group or all groups. EXEC mode show uplink-state-group [group-id] [detail] – group-id: The values are 1 to 16. • – detail: displays additional status information on the upstream and downstream interfaces in each group. Display the current status of a port or port-channel interface assigned to an uplink-state group.
Te 0/6(Dwn) Te 0/7(Up) Te 0/8(Up) Dell# Example of Viewing Interface Status with UFD Information Dell#show interfaces tengigabitethernet 0/7 TenGigabitEthernet 0/7 is up, line protocol is down (error-disabled[UFD]) Hardware is Force10Eth, address is 00:01:e8:32:7a:47 Current address is 00:01:e8:32:7a:47 Interface index is 280544512 Internet address is not set MTU 1554 bytes, IP MTU 1500 bytes LineSpeed 1000 Mbit, Mode auto Flowcontrol rx off tx off ARP type: ARPA, ARP Timeout 04:00:00 Last clearing of "show
Sample Configuration: Uplink Failure Detection The following example shows a sample configuration of UFD on a switch/router in which you configure as follows. • • • • • • Configure uplink-state group 3. Add downstream links Gigabitethernet 0/1, 0/2, 0/5, 0/9, 0/11, and 0/12. Configure two downstream links to be disabled if an upstream link fails. Add upstream links Gigabitethernet 0/3 and 0/4. Add a text description for the group. Verify the configuration with various show commands.
Uplink State Group : 3 Status: Enabled, Up Upstream Interfaces : Te 0/3(Dwn) Te 0/4(Up) Downstream Interfaces : Te 0/1(Dis) Te 0/2(Dis) Te 0/5(Up) Te 0/9(Up) Te 0/11(Up) Te 0/12(Up) Uplink Failure Detection (UFD) 215
PMUX Mode of the IO Aggregator 20

This chapter describes the various configurations applicable in PMUX mode.

Link Aggregation

Unlike the IOA Automated modes (Standalone and VLT modes), the IOA Programmable MUX can support multiple uplink LAGs. You can provision multiple uplink LAGs. NOTE: To avoid loops, only disjoint VLANs are allowed between the uplink ports/uplink LAGs, and uplink-to-uplink switching is disabled.
Dell(conf-if-po-10)#portmode hybrid Dell(conf-if-po-10)#switchport Dell(conf-if-po-10)#vlan tagged 1000 Dell(conf-if-po-10)#link-bundle-monitor enable Dell#configure Dell(conf)#int port-channel 11 Dell(conf-if-po-11)#portmode hybrid Dell(conf-if-po-11)#switchport Dell(conf-if-po-11)#vlan tagged 1000 % Error: Same VLAN cannot be added to more than one uplink port/LAG.
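The disjoint-VLAN rule behind the error above can be modeled as a simple set check. This is a hypothetical helper for illustration, not a switch API:

```python
from itertools import combinations

def overlapping_uplink_vlans(uplink_vlans):
    """Return pairs of uplink LAGs whose VLAN sets overlap; the CLI
    rejects such configurations with 'Same VLAN cannot be added to
    more than one uplink port/LAG.'"""
    conflicts = []
    for (lag_a, vlans_a), (lag_b, vlans_b) in combinations(uplink_vlans.items(), 2):
        if set(vlans_a) & set(vlans_b):
            conflicts.append((lag_a, lag_b))
    return conflicts
```

In the session above, port-channels 10 and 11 both carry VLAN 1000, so the second vlan tagged command is rejected.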
An Aggregator auto-configures to support the link layer discovery protocol (LLDP) for the auto-discovery of network devices. You can use CLI commands to display acquired LLDP information, clear LLDP counters, and debug LACP operation. Configure LLDP Configuring LLDP is a two-step process. 1. Enable LLDP globally. 2. Advertise TLVs out of an interface.
Dell(conf-if-te-0/3-lldp)#? advertise Advertise TLVs disable Disable LLDP protocol on this interface end Exit from configuration mode exit Exit from LLDP configuration mode hello LLDP hello configuration mode LLDP mode configuration (default = rx and tx) multiplier LLDP multiplier configuration no Negate a command or set its defaults show Show LLDP configuration Dell(conf-if-te-0/3-lldp)# Enabling LLDP LLDP is enabled by default. Enable and disable LLDP globally or per interface.
advertise {dcbx-appln-tlv | dcbx-tlv | dot3-tlv | interface-port-desc | management-tlv | med } Include the keyword for each TLV you want to advertise. • For management TLVs: system-capabilities, system-description. • For 802.1 TLVs: port-protocol-vlan-id, port-vlan-id. • For 802.3 TLVs: max-frame-size.
Example of Viewing LLDP Global Configurations R1(conf)#protocol lldp R1(conf-lldp)#show config ! protocol lldp advertise dot3-tlv max-frame-size advertise management-tlv system-capabilities system-description hello 10 no disable R1(conf-lldp)# Example of Viewing LLDP Interface Configurations R1(conf-lldp)#exit R1(conf)#interface tengigabitethernet 0/3 R1(conf-if-te-0/3)#show config ! interface tengigabitEthernet 0/3 switchport no shutdown R1(conf-if-te-0/3)#protocol lldp R1(conf-if-te-0/3-lldp)#show config
The neighbors are given below: ----------------------------------------------------------------------Remote Chassis ID Subtype: Mac address (4) Remote Chassis ID: 00:01:e8:06:95:3e Remote Port Subtype: Interface name (5) Remote Port ID: TeGigabitEthernet 2/11 Local Port ID: TeGigabitEthernet 1/21 Locally assigned remote Neighbor Index: 4 Remote TTL: 120 Information valid for next 120 seconds Time since last information change of this neighbor: 01:50:16 Remote MTU: 1554 Remote System Desc: Dell Networks Real
Configuring a Time to Live The information received from a neighbor expires after a specific amount of time (measured in seconds) called a time to live (TTL). The TTL is the product of the LLDPDU transmit interval (hello) and an integer called a multiplier. The default multiplier is 4, which results in a default TTL of 120 seconds. • Adjust the TTL value. CONFIGURATION mode or INTERFACE mode. • multiplier Return to the default multiplier value. CONFIGURATION mode or INTERFACE mode.
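The TTL arithmetic described above is simple enough to state directly; the helper name is illustrative:

```python
def lldp_ttl(hello_interval, multiplier=4):
    """TTL advertised in LLDPDUs: transmit interval (hello) x multiplier.

    The default multiplier of 4 with a 30-second hello yields the
    default TTL of 120 seconds noted above.
    """
    return hello_interval * multiplier
```

So lowering the hello to 10 seconds, as in the earlier configuration example, shortens the advertised TTL to 40 seconds.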
Figure 30. The debug lldp detail Command — LLDPDU Packet Dissection Security Security features are supported on the I/O Aggregator. This chapter describes several ways to provide access security to the Dell Networking system. For details about all the commands described in this chapter, refer to the Security chapter in the Dell PowerEdge FN I/O Aggregator Command Line Reference Guide. RADIUS Remote authentication dial-in user service (RADIUS) is a distributed client/server protocol.
If an error occurs in the transmission or reception of RADIUS packets, you can view the error by enabling the debug radius command. Transactions between the RADIUS server and the client are encrypted (the users’ passwords are not sent in plain text). RADIUS uses UDP as the transport protocol between the RADIUS server host and the client. For more information about RADIUS, refer to RFC 2865, Remote Authentication Dial-in User Service.
• Enter a text string (up to 16 characters long) as the name of the method list you wish to use with the RADIUS authentication method. CONFIGURATION mode • aaa authentication login method-list-name radius Create a method list with RADIUS and TACACS+ as authorization methods. CONFIGURATION mode aaa authorization exec {method-list-name | default} radius tacacs+ Typical order of methods: RADIUS, TACACS+, Local, None.
– key [encryption-type] key: enter 0 for plain text or 7 for encrypted text, and a string for the key. The key can be up to 42 characters long. This key must match the key configured on the RADIUS server host. If you do not configure these optional parameters, the global default values for all RADIUS host are applied. To specify multiple RADIUS server hosts, configure the radius-server host command multiple times.
– seconds: the range is from 0 to 1000. Default is 5 seconds. To view the configuration of RADIUS communication parameters, use the show running-config command in EXEC Privilege mode. Monitoring RADIUS To view information on RADIUS transactions, use the following command. • View RADIUS transactions to troubleshoot problems. EXEC Privilege mode debug radius TACACS+ Dell Networking OS supports a terminal access controller access control system (TACACS+) client, including support for login authentication.
CONFIGURATION mode line {aux 0 | console 0 | vty number [end-number]} 4. Assign the method-list to the terminal line. LINE mode login authentication {method-list-name | default} Example of a Failed Authentication To view the configuration, use the show config in LINE mode or the show running-config tacacs + command in EXEC Privilege mode. If authentication fails using the primary method, Dell Networking OS employs the second method (or third method, if necessary) automatically.
TACACS+ Remote Authentication When configuring a TACACS+ server host, you can set different communication parameters, such as the key password. Example of Specifying a TACACS+ Server Host Dell(conf)#aaa authentication login tacacsmethod tacacs+ Dell(conf)#aaa authorization exec tacacsauthorization tacacs+ Dell(conf)#tacacs-server host 25.1.1.
Configuring Storm Control The following configurations are available only in PMUX mode. 1. To configure the amount of broadcast traffic allowed on an interface (in packets per second), use the storm-control broadcast [packets_per_second in] command from INTERFACE mode. 2. To configure the amount of multicast traffic allowed on an interface (in packets per second), use the storm-control multicast [packets_per_second in] command from INTERFACE mode. 3.
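A minimal sketch of the commands above; the interface and the 1000-pps thresholds are illustrative values, not recommendations:

```
Dell(conf)#interface tengigabitethernet 0/1
Dell(conf-if-te-0/1)#storm-control broadcast 1000 in
Dell(conf-if-te-0/1)#storm-control multicast 1000 in
```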
The timezone offset is the differentiator between UTC and your local timezone. For example, San Jose, CA is in the Pacific timezone, with a UTC offset of -8. To set the clock timezone, use the following command. • Set the clock to the appropriate timezone. CONFIGURATION mode clock timezone timezone-name offset – timezone-name: Enter the name of the timezone. Do not use spaces. – offset: Enter one of the following: * a number from 1 to 23 as the number of hours in addition to UTC for the timezone.
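Continuing the San Jose example, setting the Pacific timezone might look like the following; the name pacific is an arbitrary label you choose:

```
Dell(conf)#clock timezone pacific -8
```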
– offset: (OPTIONAL) enter the number of minutes to add during the summer-time period. The range is from 1 to 1440. The default is 60 minutes. Example of the clock summer-time Command Dell(conf)#clock summer-time pacific date Mar 14 2012 00:00 Nov 7 2012 00:00 Dell(conf)# Setting Recurring Daylight Saving Time Set a date (and time zone) on which to convert the switch to daylight saving time on a specific day every year.
– offset: (OPTIONAL) Enter the number of minutes to add during the summer-time period. The range is from 1 to 1440. The default is 60 minutes. Example of the clock summer-time recurring Command Dell(conf)#clock summer-time pacific recurring Mar 14 2012 00:00 Nov 7 2012 00:00 Dell(conf)# NOTE: If you press Enter after entering the recurring command parameter, and you have already set a one-time daylight saving time/date, the system uses that time and date as the recurring setting.
4. Initialize the port-channel with configurations such as admin up, portmode, and switchport. Dell#configure Dell(conf)#int port-channel 128 Dell(conf-if-po-128)#portmode hybrid Dell(conf-if-po-128)#switchport 5. Configure the tagged VLANs 10 through 15 and untagged VLAN 20 on this port-channel. Dell(conf-if-po-128)#vlan tagged 10-15 Dell(conf-if-po-128)# Dell(conf-if-po-128)#vlan untagged 20 6. Show the running configurations on this port-channel.
You can remove the tagged VLANs using the no vlan tagged vlan-range command. You can remove the untagged VLANs using the no vlan untagged command on the physical port or port channel. Virtual Link Trunking (VLT) VLT allows physical links between two chassis to appear as a single virtual link to the network core. VLT eliminates the requirement for Spanning Tree protocols by allowing link aggregation group (LAG) terminations on two separate distribution or core switches, and by supporting a loop-free topology.
• Provides a loop-free topology. • Uses all available uplink bandwidth. • Provides fast convergence if either the link or a device fails. • Optimized forwarding with virtual router redundancy protocol (VRRP). • Provides link-level resiliency. • Assures high availability. As shown in the following example, VLT presents a single logical Layer 2 domain from the perspective of attached devices that have a virtual link trunk terminating on separate chassis in the VLT domain.
Configuration Notes When you configure VLT, the following conditions apply. • VLT domain – A VLT domain supports two chassis members, which appear as a single logical device to network access devices connected to VLT ports through a port channel. – A VLT domain consists of the two core chassis, the interconnect trunk, backup link, and the LAG members connected to attached devices. – Each VLT domain has a unique MAC address that you create or VLT creates automatically.
– When you change the default VLAN ID on a VLT peer switch, the VLT interconnect may flap. – In a VLT domain, the following software features are supported on VLTi: link layer discovery protocol (LLDP), flow control, port monitoring, jumbo frames, and data center bridging (DCB). – When you enable the VLTi link, the link between the VLT peer switches is established if the following configured information is true on both peer switches: * the VLT system MAC address matches.
• Software features supported on VLT port-channels – For information about configuring IGMP Snooping in a VLT domain, refer to VLT and IGMP Snooping. – All system management protocols are supported on VLT ports, including SNMP, RMON, AAA, ACL, DNS, FTP, SSH, Syslog, NTP, RADIUS, SCP, TACACS+, Telnet, and LLDP. – Enable Layer 3 VLAN connectivity between VLT peers by configuring a VLAN network interface for the same VLAN on both switches. – Dell Networking does not recommend enabling peer-routing if the CAM is full.
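The Layer 3 VLAN connectivity noted above might be sketched as follows; VLAN 10 and the IP addresses are illustrative placeholders:

```
! On VLT peer 1
Dell(conf)#interface vlan 10
Dell(conf-if-vl-10)#ip address 10.10.10.1/24
Dell(conf-if-vl-10)#no shutdown

! On VLT peer 2
Dell(conf)#interface vlan 10
Dell(conf-if-vl-10)#ip address 10.10.10.2/24
Dell(conf-if-vl-10)#no shutdown
```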
associated with the VLT domain. If heartbeat messages are not received, the Secondary Peer assumes the role of the Primary Peer and continues to forward traffic. If the original Primary Peer is restored, the VLT peer reassigned as the Primary Peer retains this role and the other peer must be reassigned as a Secondary Peer. Peer role changes are reported as SNMP traps.
Additionally, ARP entries resulting from station movements from VLT to non-VLT ports or to different non-VLT ports are learned on the non-VLT port and synced with the peer node. The peer node is updated to use the new non-VLT port. NOTE: ARP entries learned on non-VLT, non-spanned VLANs are not synced with VLT peers. Verifying a VLT Configuration To monitor the operation or verify the configuration of a VLT domain, use any of the following show commands on the primary and secondary VLT switches.
* Port channel: enter port-channel {1-128}. Example of the show vlt backup-link Command Dell_VLTpeer1# show vlt backup-link VLT Backup Link ----------------Destination: Peer HeartBeat status: HeartBeat Timer Interval: HeartBeat Timeout: UDP Port: HeartBeat Messages Sent: HeartBeat Messages Received: 10.11.200.
Remote system version : 5(1)
Delay-Restore timer   : 90 seconds

Example of the show vlt detail Command

Dell_VLTpeer1# show vlt detail
Local LAG Id  Peer LAG Id  Local Status  Peer Status  Active VLANs
------------  -----------  ------------  -----------  ------------
100           100          UP            UP           10, 20, 30
127           2            UP            UP           20, 30

Dell_VLTpeer2# show vlt detail
Local LAG Id  Peer LAG Id  Local Status  Peer Status  Active VLANs
------------  -----------  ------------  -----------  ------------
2             127          UP            UP           20, 30
100           100          UP            UP           10, 20, 30
Example of the show vlt statistics Command

Dell_VLTpeer1# show vlt statistics
VLT Statistics
---------------
HeartBeat Messages Sent:     987
HeartBeat Messages Received: 986
ICL Hello's Sent:            148
ICL Hello's Received:        98

Dell_VLTpeer2# show vlt statistics
VLT Statistics
---------------
HeartBeat Messages Sent:     994
HeartBeat Messages Received: 978
ICL Hello's Sent:            89
ICL Hello's Received:        89

VLT Sample Configurations To configure VLT, configure a backup link and interconnect trunk, create a VLT domain, confi
Dell_VLTpeer1(conf-if-po-110)#vlt-peer-lag port-channel 110 Dell_VLTpeer1(conf-if-po-110)#end Verify that the port channels used in the VLT domain are assigned to the same VLAN.
NUM  Status  Description  Q Ports
10   Active               U Po110(Te 0/8)
                          T Po100(Te 0/3,4)

Verifying a Port-Channel Connection to a VLT Domain (From an Attached Access Switch) On an access device, verify the port-channel connection to a VLT domain.

Dell_TORswitch(conf)# show running-config interface port-channel 11
!
interface Port-channel 11
 switchport
 channel-member TenGigE 0/1,2
 no shutdown

Troubleshooting VLT To help troubleshoot different VLT issues that may occur, use the following information.
Description: System MAC mismatch
Behavior at Peer Up: A syslog error message and an SNMP trap are generated.
Behavior During Run Time: A syslog error message and an SNMP trap are generated.
Action to Take: Verify that the unit ID of VLT peers is not the same on both units and that the MAC address is the same on both units.

Description: Unit ID mismatch
Behavior at Peer Up: The VLT peer does not boot up. The VLTi is forced to a down state.
Behavior During Run Time: The VLT peer does not boot up. The VLTi is forced to a down state.
NPIV Proxy Gateway 21 The N-port identifier virtualization (NPIV) Proxy Gateway (NPG) feature provides FCoE-FC bridging capability on the FN 2210S Aggregator, allowing server CNAs to communicate with SAN fabrics over the FN 2210S Aggregator.
Converged Network Adapter (CNA) ports on servers connect to the FX2 chassis Ten-Gigabit Ethernet ports and log in to an upstream FC core switch through the N port. Server fabric login (FLOGI) requests are converted into fabric discovery (FDISC) requests before being forwarded to the FC core switch. Servers use CNA ports to connect over FCoE to an Ethernet port in ENode mode on the NPIV proxy gateway.
NPIV Proxy Gateway: Terms and Definitions The following table describes the terms used in an NPG configuration on the Aggregator. Table 20. Aggregator with the NPIV Proxy Gateway: Terms and Definitions Term Description FC port Fibre Channel port on the Aggregator that operates in autosensing, 2, 4, or 8-Gigabit mode. On an NPIV proxy gateway, an FC port can be used as a downlink for a server connection and an uplink for a fabric connection.
Term Description an upstream FCoE switch operating as an FCF. FIP keepalive messages maintain the connection between an FCoE initiator and an FCF. NPIV N-port identifier virtualization: The capability to map multiple FCoE links from downstream ports to a single upstream FC link. principal switch The switch in a fabric with the lowest domain number. The principal switch accesses the master name database and the zone/zone set database.
Configuring an NPIV Proxy Gateway Prerequisite: Before you configure an NPIV proxy gateway (NPG) on an Aggregator, ensure that the following features are enabled. • DCB is enabled by default on the Aggregator. • Autonegotiated DCBx is enabled for converged traffic by default with the Ethernet ports on all Aggregators. • FCoE transit with FIP snooping is automatically enabled when you configure Fibre Channel on the Aggregator.
Fabric Name      SAN_FABRIC
Fabric Id        1002
Vlan Id          1002
Vlan priority    3
FC-MAP           0efc00
FKA-ADV-Period   8
Fcf Priority     128
Config-State     ACTIVE
Oper-State       UP
Members
Fc 0/9
Te 0/4

Dell(conf)#do show qos dcb-map DCB_MAP_PFC_OFF
-----------------------
State   : In-Progress
PfcMode : OFF
--------------------
Dell(conf)#

Enabling Fibre Channel Capability on the Switch Enable the Fibre Channel capability on an Aggregator that you want to configure as an NPG for the Fibre Channel protocol.
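A minimal sketch of enabling the capability; feature fc is the command used on comparable Dell Networking NPG platforms and is stated here as an assumption, since this section does not show the command itself:

```
Dell(conf)#feature fc
```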
Step Task Command Command Mode priority-pgid dot1p0_group_num dot1p1_group_num dot1p2_group_num dot1p3_group_num dot1p4_group_num dot1p5_group_num dot1p6_group_num dot1p7_group_num DCB MAP Restriction: You can enable PFC on a maximum of two priority queues.
Step Task Command Command Mode dcb-map name INTERFACE You cannot apply a DCB map on a port channel. However, you can apply a DCB map on the ports that are members of the port channel. Apply the DCB map on an Ethernet port. The port is configured with the PFC and ETS settings in the DCB map, for example: 2 Dell# interface tengigabitEthernet 0/0 Dell(config-if-te-0/0)# dcb-map SAN_DCB1 Repeat this step to apply a DCB map to more than one port.
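Because a DCB map cannot be applied to a port channel directly, apply it on each member port; this sketch assumes member ports Te 0/1 and Te 0/2 and reuses the map name SAN_DCB1 from the example above:

```
Dell(conf)#interface tengigabitethernet 0/1
Dell(conf-if-te-0/1)#dcb-map SAN_DCB1
Dell(conf-if-te-0/1)#exit
Dell(conf)#interface tengigabitethernet 0/2
Dell(conf-if-te-0/2)#dcb-map SAN_DCB1
```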
Step Task Command 1 Create an FCoE map that contains parameters fcoe-map map-name used in the communication between servers and a SAN fabric. 2 Configure the association between the dedicated VLAN and the fabric where the desired storage arrays are installed. The fabric and VLAN ID numbers must be the same. Fabric and VLAN ID range: 2–4094.
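Steps 1 and 2 might look like the following; the map name SAN_FABRIC_A and the matching fabric/VLAN ID 1002 are illustrative, and the fabric-id syntax is an assumption based on the parameters described above:

```
Dell(conf)#fcoe-map SAN_FABRIC_A
Dell(config-fcoe-SAN_FABRIC_A)#fabric-id 1002 vlan 1002
```

The fabric ID and VLAN ID must match, as noted in step 2.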
Step Task Command Command Mode 1 Configure a server-facing Ethernet port or port channel with an FCoE map. interface {tengigabitEthernet slot/port | portchannel num} CONFIGURATION 2 Apply the FCoE/FC configuration in an FCoE fcoe-map map-name map on the Ethernet port.
Step Task Command Command Mode 3 Enable the port for FC transmission.
Dell(config-fcoe-name)# keepalive Dell(config-fcoe-name)# fcf-priority 128 Dell(config-fcoe-name)# fka-adv-period 8 5. Enable an upstream FC port: Dell(config)# interface fibrechannel 0/0 Dell(config-if-fc-0)# no shutdown 6. Enable a downstream Ethernet port: Dell(config)#interface tengigabitEthernet 0/0 Dell(conf-if-te-0)# no shutdown Displaying NPIV Proxy Gateway Information To display information on the NPG operation, use the show commands in the following table: Table 21.
Port     Description  Status  Speed       Duplex  Vlan
Te 0/1                Up      10000 Mbit  Full    1-4094
Te 0/2                Down    Auto        Auto    1-1001,1003-4094
Te 0/3                Up      10000 Mbit  Full    1-1001,1003-4094
Te 0/4                Down    Auto        Auto    1-1001,1003-4094
Te 0/5                Up      10000 Mbit  Full    1-4094
Te 0/6                Up      10000 Mbit  Full    1-4094
Te 0/7                Up      10000 Mbit  Full    1-4094
Te 0/8                Down    Auto        Auto    1-1001,1003-4094
Fc 0/9   toB300       Up      8000 Mbit   Full    --
Fc 0/10               Up      8000 Mbit   Full    --
Te 0/11               Down    Auto        Auto    --
Te 0/12               Down    Auto        Auto    --

Table 22.
Fcf Priority   128
Config-State   ACTIVE
Oper-State     UP
Members
Fc 0/9
Te 0/11
Te 0/12

Table 23. show fcoe-map Field Descriptions Field Description Fabric-Name Name of a SAN fabric. Fabric ID The ID number of the SAN fabric to which FC traffic is forwarded. VLAN ID The dedicated VLAN used to transport FCoE storage traffic between servers and a fabric over the NPG. The configured VLAN ID must be the same as the fabric ID. VLAN priority FCoE traffic uses VLAN priority 3.
Table 24. show qos dcb-map Field Descriptions Field Description State Complete: All mandatory DCB parameters are correctly configured. In progress: The DCB map configuration is not complete. Some mandatory parameters are not configured. PFC Mode PFC configuration in the DCB map: On (enabled) or Off. PG Priority group configured in the DCB map. TSA Transmission scheduling algorithm used in the DCB map: Enhanced Transmission Selection (ETS).
Field Description Fabric-Map Name of the FCoE map containing the FCoE/FC configuration parameters for the server CNA-fabric connection. Login Method Method used by the server CNA to log in to the fabric; for example: FLOGI - ENode logged in using a fabric login (FLOGI). FDISC - ENode logged in using a fabric discovery (FDISC). Status Operational status of the link between a server CNA port and a SAN fabric: Logged In - Server has logged in to the fabric and is able to transmit FCoE traffic.
Field Description FCF MAC Fibre Channel forwarder MAC: MAC address of Aggregator with the FCF interface. Fabric Intf Fabric-facing Aggregator with the Fibre Channel port (slot/port) on which FCoE traffic is transmitted to the specified fabric. FCoE VLAN ID of the dedicated VLAN used to transmit FCoE traffic from a server CNA to a fabric and configured on both the server-facing Aggregator with the server CNA port.
22 Upgrade Procedures To find the upgrade procedures, go to the Dell Networking OS Release Notes for your system type to see all the requirements needed to upgrade to the desired Dell Networking OS version. To upgrade your system type, follow the procedures in the Dell Networking OS Release Notes. Get Help with Upgrades Direct any questions or concerns about the Dell Networking OS upgrade procedures to the Dell Technical Support Center. You can reach Technical Support: • On the web: http://support.dell.
23 Debugging and Diagnostics

This chapter contains the following sections:
• Debugging Aggregator Operation
• Software Show Commands
• Offline Diagnostics
• Trace Logs
• Show Hardware Commands

Debugging Aggregator Operation This section describes common troubleshooting procedures to use for error conditions that may arise during Aggregator operation. All interfaces on the Aggregator are operationally down This section describes how you can troubleshoot the scenario in which all the interfaces are down.
0/5(Up) Te 0/6(Dwn) Te 0/7(Dwn) Te 0/8(Up)
2. Verify that the downstream port channel in the top-of-rack switch that connects to the Aggregator is configured correctly. Broadcast, unknown multicast, and DLF packets switched at a very low rate Symptom: Broadcast, unknown multicast, and DLF packets are switched at a very low rate. By default, broadcast storm control is enabled on an Aggregator and rate limits the transmission of broadcast, unknown multicast, and DLF packets to 1Gbps.
U   1
T   2-4094
Native VlanId: 2
1. Assign the port to a specified group of VLANs (vlan tagged command) and re-display the port mode status.
12 Ten GigabitEthernet/IEEE 802.3 interface(s) Dell# Offline Diagnostics The offline diagnostics test suite is useful for isolating faults and debugging hardware. The diagnostics tests are grouped into three levels: • Level 0 — Level 0 diagnostics check for the presence of various components and perform essential path verifications. In addition, Level 0 diagnostics verify the identification registers of the components on the board. • Level 1 — A smaller set of diagnostic tests.
Please make sure that stacking/fanout not configured for Diagnostics execution. Also reboot/online command is necessary for normal operation after the offline command is issued. Proceed with Offline [confirm yes/no]:yes Dell# 2. Confirm the offline status.
flash: 2143281152 bytes total (2069291008 bytes free) Using the Show Hardware Commands The show hardware command tree consists of commands used with the Aggregator switch. These commands display information from a hardware sub-component and from hardware-based feature tables. NOTE: Use the show hardware commands only under the guidance of the Dell Technical Assistance Center. • View internal interface status of the stack-unit CPU port which connects to the external management interface.
• This view helps identify the stack unit/port pipe/port that may experience internal drops. View the input and output statistics for a stack-port interface. EXEC Privilege mode • show hardware stack-unit {0-5} stack-port {33–56} View the counters in the field processors of the stack unit. EXEC Privilege mode • show hardware stack-unit {0-5} unit {0-0} counters View the details of the FP Devices and HiGig ports on the stack-unit.
SFP+ 9  BR Nominal        = 0x67
SFP+ 9  Length(SFM) Km    = 0x00
SFP+ 9  Length(OM3) 2m    = 0x00
SFP+ 9  Length(OM2) 1m    = 0x00
SFP+ 9  Length(OM1) 1m    = 0x00
SFP+ 9  Length(Copper) 1m = 0x01
SFP+ 9  Vendor Rev        = A
SFP+ 9  Laser Wavelength  = 256 nm
SFP+ 9  CheckCodeBase     = 0xf2
SFP+ 9  Serial Extended ID fields
SFP+ 9  Options           = 0x00 0x00
SFP+ 9  BR max            = 0
SFP+ 9  BR min            = 0
SFP+ 9  Vendor SN         = APF11040012888
SFP+ 9  Datecode          = 110207
SFP+ 9  CheckCodeExt      = 0xb3
SFP+ 9  DOM is not supported
Dell#

Recognize an Over-Temperature Condition
3. After the software has determined that the temperature levels are within normal limits, you can repower the card safely. To bring back the line card online, use the power-on command in EXEC mode. In addition, Dell Networking requires that you install blanks in all slots without a line card to control airflow for adequate system cooling. NOTE: Exercise care when removing a card; if it has exceeded the major or shutdown thresholds, the card could be hot to the touch.
OID String OID Name Description .1.3.6.1.4.1.6027.3.10.1.2.5.1.7 chSysPortXfpRecvTemp OID displays the temperature of the connected optics. NOTE: These OIDs are generated only if you enable the optic-info-update-interval command. Hardware MIB Buffer Statistics .1.3.6.1.4.1.6027.3.16.1.1.4 fpPacketBufferTable View the modular packet buffers details per stack unit and the mode of allocation. .1.3.6.1.4.1.6027.3.16.1.1.
manager does not reallocate the buffer to an adjacent congested interface, which means that in some cases, memory is under-used. • Dynamic buffer — this pool is shared memory that is allocated as needed, up to a configured limit. Using dynamic buffers provides the benefit of statistical buffer sharing. An interface requests dynamic buffers when its dedicated buffer pool is exhausted. The buffer manager grants the request based on three conditions: – The number of used and available dynamic buffers.
Deciding to Tune Buffers Dell Networking recommends exercising caution when configuring any non-default buffer settings, as tuning can significantly affect system performance. The default values work for most cases. As a guideline, consider tuning buffers if traffic is bursty (and coming from several interfaces). In this case: • • • Reduce the dedicated buffer on all queues/interfaces. Increase the dynamic buffer on all interfaces.
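As a sketch of the tuning direction described above — smaller dedicated buffers and a larger dynamic pool — using the buffer-profile syntax that appears later in this chapter; the profile name, queue values, and dynamic-pool size are illustrative assumptions, not recommendations:

```
Dell(conf)#buffer-profile fp bursty-traffic
Dell(conf-fp-profile)#buffer dedicated queue0 3 queue1 3 queue2 3 queue3 3 queue4 3 queue5 3 queue6 3 queue7 3
Dell(conf-fp-profile)#buffer dynamic 1256
```

Verify the result with show buffer-profile detail before applying such a profile in production.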
%S50N:0 %DIFFSERV-2-DSA_DEVICE_BUFFER_UNAVAILABLE: Unable to allocate dedicated buffers for stack-unit 0, port pipe 0, egress port 25 due to unavailability of cells. Dell Networking OS Behavior: When you remove a buffer-profile using the no buffer-profile [fp | csf] command from CONFIGURATION mode, the buffer-profile name still appears in the output of the show buffer-profile [detail | summary] command.
5       3.00              256
6       3.00              256
7       3.00              256

Example of Viewing the Buffer Profile (Linecard)

Dell#show buffer-profile detail fp-uplink stack-unit 0 port-set 0
Linecard 0 Port-set 0
Buffer-profile fsqueue-hig
Dynamic Buffer 1256.00 (Kilobytes)
Queue#  Dedicated Buffer  Buffer Packets
        (Kilobytes)
0       3.00              256
1       3.00              256
2       3.00              256
3       3.00              256
4       3.00              256
5       3.00              256
6       3.00              256
7       3.00              256
buffer-profile global [1Q|4Q] Sample Buffer Profile Configuration The two general types of network environments are sustained data transfers and voice/data. Dell Networking recommends a single-queue approach for data transfers.
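Following the single-queue recommendation for sustained data transfers, applying the predefined global profile might look like this:

```
Dell(conf)#buffer-profile global 1Q
```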
Displaying Drop Counters To display drop counters, use the following commands.
• Identify which stack unit, port pipe, and port is experiencing internal drops.
show hardware stack-unit 0–11 drops [unit 0 [port 0–63]]
• Display drop counters.
--- Egress FORWARD PROCESSOR Drops --IPv4 L3UC Aged & Drops : 0 TTL Threshold Drops : 0 INVALID VLAN CNTR Drops : 0 L2MC Drops : 0 PKT Drops of ANY Conditions : 0 Hg MacUnderflow : 0 TX Err PKT Counter : 0 Dataplane Statistics The show hardware stack-unit cpu data-plane statistics command provides insight into the packet types coming to the CPU. The command output in the following example has been augmented, providing detailed RX/ TX packet statistics on a per-queue basis.
The show hardware stack-unit cpu party-bus statistics command displays input and output statistics on the party bus, which carries inter-process communication traffic between CPUs.

Example of Viewing Party Bus Statistics

Dell#show hardware stack-unit 2 cpu party-bus statistics
Input Statistics:
27550 packets, 2559298 bytes
0 dropped, 0 errors
Output Statistics:
1649566 packets, 1935316203 bytes
0 errors

Displaying Drop Counters To display drop counters, use the following commands.
Rx VLAN Drops : 0 --- Ingress MAC counters--Ingress FCSDrops : 0 Ingress MTUExceeds : 0 --- MMU Drops --HOL DROPS TxPurge CellErr Aged Drops : 0 : 0 : 0 --- Egress MAC counters--Egress FCS Drops : 0 --- Egress FORWARD PROCESSOR Drops --IPv4 L3UC Aged & Drops : 0 TTL Threshold Drops : 0 INVALID VLAN CNTR Drops : 0 L2MC Drops : 0 PKT Drops of ANY Conditions : 0 Hg MacUnderflow : 0 TX Err PKT Counter : 0 Restoring the Factory Default Settings Restoring factory defaults deletes the existing NVRAM setting
Power-cycling the unit(s). ....
Standards Compliance 24 This chapter describes standards compliance for Dell Networking products. NOTE: Unless noted, when a standard cited here is listed as supported by the Dell Networking Operating System (OS), the system also supports predecessor standards. One way to search for predecessor standards is to use the http://tools.ietf.org/ website. Click “Browse and search IETF documents,” enter an RFC number, and inspect the top of the resulting document for obsolescence citations to related RFCs.
General Internet Protocols The following table lists the Dell Networking OS support per platform for general internet protocols. Table 30.
Network Management The following table lists the Dell Networking OS support per platform for network management protocol. Table 32.
RFC# Full Name
2579 Textual Conventions for SMIv2
2580 Conformance Statements for SMIv2
2618 RADIUS Authentication Client MIB, except the following four counters: radiusAuthClientInvalidServerAddresses, radiusAuthClientMalformedAccessResponses, radiusAuthClientUnknownTypes, radiusAuthClientPacketsDropped
3635 Definitions of Managed Objects for the Ethernet-like Interface Types
2674 Definitions of Managed Objects for Bridges with Traffic Classes, Multicast Filtering and Virtual LAN Extensions
2787
RFC# Full Name IEEE 802.1AB The LLDP Management Information Base extension module for IEEE 802.1 organizationally defined discovery information. (LLDP DOT1 MIB and LLDP DOT3 MIB) IEEE 802.1AB The LLDP Management Information Base extension module for IEEE 802.3 organizationally defined discovery information. (LLDP DOT1 MIB and LLDP DOT3 MIB) sFlow.org sFlow Version 5 sFlow.
https://www.force10networks.com/csportal20/MIBs/MIB_OIDs.aspx Some pages of iSupport require a login. To request an iSupport account, go to: https://www.force10networks.com/CSPortal20/Support/AccountRequest.aspx If you have forgotten or lost your account information, contact Dell TAC for assistance.