Dell PowerEdge FN I/O Aggregator Configuration Guide 9.7(0.
Notes, Cautions, and Warnings

NOTE: A NOTE indicates important information that helps you make better use of your computer.

CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.

WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

Copyright © 2015 Dell Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws.
Contents

1 About this Guide
    Audience
    Conventions
    Information Symbols

Ethernet Enhancements in Data Center Bridging
    Priority-Based Flow Control
        Configuring Priority-Based Flow Control
    Enhanced Transmission Selection

6 FIP Snooping
    Supported Modes
    Fibre Channel over Ethernet
    Ensuring Robustness in a Converged Ethernet Network

    Configuring a Static Route for a Management Interface
VLAN Membership
    Default VLAN
    Port-Based VLANs

Displaying iSCSI Optimization Information

10 Isolated Networks for Aggregators
    Configuring and Verifying Isolated Network Settings

11 Link Aggregation
    Supported Modes

Configure LLDP
    Related Configuration Tasks
    Important Points to Remember
    CONFIGURATION versus INTERFACE Configurations

TACACS+
    Configuration Task List for TACACS+
    TACACS+ Remote Authentication
Enabling SCP and SSH

    Configuring and Bringing Up a Stack
    Adding a Stack Unit
    Resetting a Unit on a Stack
    Removing an Aggregator from a Stack

Virtual Link Trunking (VLT)
    Overview
    Setting up VLT
    VLT Terminology

24 Debugging and Diagnostics
    Supported Modes
    Debugging Aggregator Operation
    All interfaces on the Aggregator are operationally down
About this Guide 1 This guide describes the supported protocols and software features, and provides configuration instructions and examples, for the Dell Networking FN I/O Aggregator running Dell Networking OS version 9.6(0.0). The I/O Aggregator is installed in a Dell PowerEdge FX2 server chassis. For information about how to install and perform the initial switch configuration, refer to the Getting Started Guides on the Dell Support website at http://www.dell.
Information Symbols This book uses the following information symbols. NOTE: The Note icon signals important operational information. CAUTION: The Caution icon signals information about situations that could result in equipment damage or loss of data. WARNING: The Warning icon signals information about hardware handling that could result in injury. * (Exception). This symbol is a note associated with additional text on the page that is marked with an asterisk.
Before You Start 2 To install the Aggregator in a Dell PowerEdge FX2 server chassis, use the instructions in the Dell PowerEdge FN I/O Aggregator Getting Started Guide that is shipped with the product. The I/O Aggregator (also known as Aggregator) installs with zero-touch configuration. After you power it on, an Aggregator boots up with default settings and auto-configures with software features enabled.
Select this mode to configure PMUX mode CLI commands. For more information on the PMUX mode, refer to PMUX Mode of the IO Aggregator. Stacking mode stack-unit unit iom-mode stack CONFIGURATION mode Dell(conf)#stack-unit 0 iom-mode stack Select this mode to configure Stacking mode CLI commands. For more information on the Stacking mode, refer to Stacking.
• Fibre Channel over Ethernet (FCoE) connectivity and FCoE Initialization Protocol (FIP) snooping: The uplink port channel (LAG 128) is enabled to operate in Fibre Channel forwarder (FCF) port mode.
• Link layer discovery protocol (LLDP): Enabled on all ports to advertise management TLV and system name with neighboring devices.
• Internet small computer system interface (iSCSI) optimization.
• Internet group management protocol (IGMP) snooping.
When an Aggregator powers up, it monitors known TCP ports for iSCSI storage devices on all interfaces. When a session is detected, an entry is created and monitored as long as the session is active. The Aggregator also detects iSCSI storage devices on all interfaces and auto-configures to optimize performance. Performance optimization operations, such as jumbo frame size support and disabling storm control on interfaces connected to an iSCSI EqualLogic (EQL) storage device, are applied automatically.
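The session-detection behavior above can be sketched as follows. This is an illustrative Python sketch, not Dell OS code; the monitored port set (iSCSI's well-known TCP ports 3260 and 860) and the session-table shape are assumptions.

```python
# Hypothetical sketch of iSCSI session tracking: traffic to a monitored TCP
# port creates a tracked entry that lives for as long as the session is active.
ISCSI_TCP_PORTS = {3260, 860}  # assumed monitor list (well-known iSCSI ports)

def observe_flow(sessions, src_ip, dst_ip, dst_port, active):
    """Create or remove a tracked session entry for iSCSI traffic."""
    key = (src_ip, dst_ip, dst_port)
    if dst_port not in ISCSI_TCP_PORTS:
        return sessions               # not iSCSI traffic; ignore
    if active:
        sessions[key] = "active"      # monitor while the session lives
    else:
        sessions.pop(key, None)       # session ended; drop the entry
    return sessions

table = {}
observe_flow(table, "10.1.1.5", "10.1.1.20", 3260, True)
```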
Server-Facing LAGs

The tagged VLAN membership of a server-facing LAG is automatically configured based on the server-facing ports that are members of the LAG. The untagged VLAN of a server-facing LAG is configured based on the untagged VLAN to which the lowest-numbered server-facing port in the LAG belongs.

NOTE: Dell Networking recommends configuring the same VLAN membership on all LAG member ports.

Where to Go From Here

You can customize the Aggregator for use in your data center network as necessary.
3 Configuration Fundamentals The Dell Networking Operating System (OS) command line interface (CLI) is a text-based interface you can use to configure interfaces and protocols. The CLI is structured in modes for security and management purposes. Different sets of commands are available in each mode, and you can limit user access to modes using privilege levels. In Dell Networking OS, after you enable a command, it is entered into the running configuration file.
• EXEC Privilege mode has commands to view configurations, clear counters, manage configuration files, run diagnostics, and enable or disable debug operations. The privilege level is 15, which is unrestricted. You can configure a password for this mode. • CONFIGURATION mode allows you to configure security features, time settings, set logging and SNMP functions, and set line cards on the system. Beneath CONFIGURATION mode are submodes that apply to interfaces, protocols, and features.
CLI Command Mode: CONFIGURATION
Prompt: Dell(conf)#
Access Command:
• From EXEC Privilege mode, enter the configure command.
• From every mode except EXEC and EXEC Privilege, enter the exit command.

NOTE: Access all of the following modes from CONFIGURATION mode.
Dell#

Undoing Commands

When you enter a command, the command line is added to the running configuration file (running-config). To disable a command and remove it from the running-config, enter the no command, then the original command. For example, to delete an IP address configured on an interface, use the no ip address ip-address command.

NOTE: Use the help or ? command as described in Obtaining Help.
• Enter [space]? after a keyword lists all of the keywords that can follow the specified keyword. Dell(conf)#clock ? summer-time Configure summer (daylight savings) time timezone Configure time zone Dell(conf)#clock Entering and Editing Commands Notes for entering commands. • The CLI is not case-sensitive. • You can enter partial CLI keywords. – Enter the minimum number of letters to uniquely identify a command.
Short-Cut Key Combination Action Esc D Deletes all characters from the cursor to the end of the word. Command History Dell Networking OS maintains a history of previously-entered commands for each mode. For example: • When you are in EXEC mode, the UP and DOWN arrow keys display the previously-entered EXEC mode commands. • When you are in CONFIGURATION mode, the UP or DOWN arrows keys recall the previously-entered CONFIGURATION mode commands.
Admin is enabled Local is enabled Link Delay 65535 pause quantum Dell(conf)# The find keyword displays the output of the show command beginning from the first occurrence of specified text. The following example shows this command used in combination with the show linecard all command.
Data Center Bridging (DCB) 4 On an I/O Aggregator, data center bridging (DCB) features are auto-configured in standalone mode. You can display information on DCB operation by using show commands. NOTE: DCB features are not supported on an Aggregator in stacking mode.
• 802.1Qbb - Priority-based Flow Control (PFC) • 802.1Qaz - Enhanced Transmission Selection (ETS) • 802.1Qau - Congestion Notification • Data Center Bridging Exchange (DCBx) protocol NOTE: In Dell Networking OS version 9.4.0.x, only the PFC, ETS, and DCBx features are supported in data center bridging.
• Buffer space is allocated and de-allocated only when you configure a PFC priority on the port. • PFC delay constraints place an upper limit on the transmit time of a queue after receiving a message to pause a specified priority. • By default, PFC is enabled on an interface with no dot1p priorities configured. You can configure the PFC priorities if the switch negotiates with a remote peer using DCBX.
The pg_num range is from 0 to 7. The bandwidth percentage range is from 1 to 100. Either strict priority or a bandwidth percentage can be set for ETS on the priority group. PFC can be either enabled or disabled for the priority group.

3. Map the priorities to a priority group.
DCB-MAP mode
priority-pgid
The pgid range is from 0 to 7. Configure the priority-to-priority-group mapping in order, from priority 0 to priority 7.

4. Exit the DCB MAP configuration mode.
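The constraints in the steps above (group and priority IDs 0 to 7, bandwidth 1 to 100 percent, all eight dot1p priorities mapped) can be checked with a small sketch. This is a hypothetical validator for illustration; the data shapes are assumptions, not Dell OS internals.

```python
# Validate a dcb-map-like configuration: every dot1p priority maps to a
# defined priority group, and non-strict groups' bandwidth totals 100%.
def validate_dcb_map(groups, priority_pgid):
    """groups: {pgid: ("strict", None) or ("bandwidth", percent)}
    priority_pgid: list of 8 pgids, one per dot1p priority 0-7."""
    if len(priority_pgid) != 8:
        raise ValueError("all eight dot1p priorities must be mapped")
    for pgid in priority_pgid:
        if pgid not in groups:
            raise ValueError(f"priority mapped to undefined group {pgid}")
    total = sum(pct for kind, pct in groups.values() if kind == "bandwidth")
    if total != 100:
        raise ValueError(f"bandwidth groups total {total}%, expected 100%")
    return True

# Example: group 0 carries priorities 0-3 with 60%, group 1 carries 4-7 with 40%.
assert validate_dcb_map(
    {0: ("bandwidth", 60), 1: ("bandwidth", 40)},
    [0, 0, 0, 0, 1, 1, 1, 1],
)
```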
To honor a PFC pause frame multiplied by the number of PFC-enabled ingress ports, the minimum link delay must be greater than the round-trip transmission time the peer requires.

If you apply a dcb-map with PFC disabled (no pfc mode on):
• You can enable link-level flow control on the interface. To delete the dcb-map, first disable link-level flow control. PFC is then automatically enabled on the interface, because an interface is PFC-enabled by default.
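Link delay is reported in pause quanta (for example, 65535 in the show output earlier). As a rough conversion aid only (not a Dell OS formula): IEEE 802.3 defines one pause quantum as 512 bit times, so the wall-clock delay depends on the link speed.

```python
# Convert a PFC link delay expressed in pause quanta to microseconds.
# One pause quantum = 512 bit times (IEEE 802.3 Annex 31B).
def quanta_to_microseconds(quanta, link_speed_bps):
    bit_time_s = 1.0 / link_speed_bps
    return quanta * 512 * bit_time_s * 1e6

# 65535 quanta on a 10 Gb/s link:
delay_us = quanta_to_microseconds(65535, 10_000_000_000)
```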
Figure 2. Enhanced Transmission Selection

The following table lists the traffic groupings ETS uses to select multiprotocol traffic for transmission.

Table 2. ETS Traffic Groupings

Priority group
    A group of 802.1p priorities used for bandwidth allocation and queue scheduling. All 802.1p priority traffic in a group must have the same traffic handling requirements for latency and frame loss.

Group ID
    A 4-bit identifier assigned to each priority group.
– ETS shaping – (Credit-based shaping is not supported) • ETS uses the DCB MIB IEEE 802.1azd2.5. Configuring Enhanced Transmission Selection ETS provides a way to optimize bandwidth allocation to outbound 802.1p classes of converged Ethernet traffic. Different traffic types have different service needs. Using ETS, you can create groups within an 802.1p priority class to configure different treatment for traffic with different bandwidth, latency, and best-effort needs.
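To make the bandwidth arithmetic concrete, the following sketch splits a link's capacity among priority groups in proportion to their configured percentages. This is an illustration of the allocation idea, not switch scheduler code; the group IDs and percentages are made up for the example.

```python
# Proportional split of link bandwidth among ETS priority groups.
def ets_shares(link_bps, group_percent):
    """group_percent: {pgid: percent}; percentages must total 100."""
    assert sum(group_percent.values()) == 100
    return {pg: link_bps * pct // 100 for pg, pct in group_percent.items()}

# A 10 Gb/s link divided among three hypothetical priority groups:
shares = ets_shares(10_000_000_000, {0: 60, 1: 30, 2: 10})
```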
Important Points to Remember • If you remove a dot1p priority-to-priority group mapping from a DCB map (the no priority pgid command), the PFC and ETS parameters revert to their default values on the interfaces on which the DCB map is applied. By default, PFC is not applied on specific 802.1p priorities; ETS assigns equal bandwidth to each 802.1p priority. • To change the ETS bandwidth allocation configured for a priority group in a DCB map, do not modify the existing DCB map configuration.
Data Center Bridging in a Traffic Flow The following figure shows how DCB handles a traffic flow on an interface. Figure 3. DCB PFC and ETS Traffic Handling Data Center Bridging: Auto-DCB-Enable Mode On an Aggregator in standalone or VLT modes, the default mode of operation for data center bridging on Ethernet ports is auto-DCB-enable mode.
When DCB is Disabled (Default)

By default, Aggregator interfaces operate with DCB disabled and link-level flow control enabled. When an interface comes up, it is automatically configured with:
• Flow control enabled on input interfaces.
• A DCB-MAP policy applied with PFC disabled.

The following example shows a default interface configuration with DCB disabled and link-level flow control enabled.
Lossless traffic is not guaranteed when it is transmitted on a PFC-enabled port and received on a link-level flow control-enabled port, or transmitted on a link-level flow control-enabled port and received on a PFC-enabled port.

Enabling DCB on Next Reload

To configure the Aggregator so that all interfaces come up with DCB enabled and flow control disabled, use the dcb enable on-next-reload command. Internal PFC buffers are automatically configured.
NOTE: Dell Networking does not recommend mapping all ingress traffic to a single queue when using PFC and ETS. However, Dell Networking does recommend classifying ingress traffic using the service-class dynamic dot1p command (honor dot1p) on all DCB-enabled interfaces. If you use L2 class maps to map dot1p priority traffic to egress queues, take into account the default dot1p-queue assignments in the following table and the maximum of two lossless queues supported on a port.
number (2) of lossless queues supported globally on the switch. In this case, all PFC configurations received from PFC-enabled peers are removed and re-synchronized with the peer devices. • Dell Networking OS does not support MACsec Bypass Capability (MBC). How Enhanced Transmission Selection is Implemented Enhanced transmission selection (ETS) provides a way to optimize bandwidth allocation to outbound 802.1p classes of converged Ethernet traffic. Different traffic types have different service needs.
– ETS is enabled by default with the default ETS configuration applied (all dot1p priorities in the same group with equal bandwidth allocation). ETS Operation with DCBx In DCBx negotiation with peer ETS devices, ETS configuration is handled as follows: • ETS TLVs are supported in DCBx versions CIN, CEE, and IEEE2.5. • ETS operational parameters are determined by the DCBX port-role configurations. • ETS configurations received from TLVs from a peer are validated.
DCBx Port Roles

The following DCBx port roles are auto-configured on an Aggregator to propagate DCB configurations learned from peer DCBx devices internally to other switch ports:

Auto-upstream
    The port advertises its own configuration to DCBx peers and receives its configuration from DCBx peers (ToR or FCF device). The port also propagates its configuration to other ports on the switch. The first auto-upstream port that is capable of receiving a peer configuration is elected as the configuration source.
Default DCBx port role: Uplink ports are auto-configured in an auto-upstream role. Server-facing ports are auto-configured in an auto-downstream role. NOTE: You can change the port roles only in the PMUX mode. Use the following command to change the port roles: dcbx port-role {auto-downstream | auto-upstream | config-source | manual} manual is the default port role.
keeps the peer link up and continues to exchange DCBx packets. If a compatible peer configuration is later received, DCBx is enabled on the port. • If there is no configuration source, a port may elect itself as the configuration source. A port may become the configuration source if the following conditions exist: – No other port is the configuration source. – The port role is auto-upstream. – The port is enabled with link up and DCBx enabled. – The port has performed a DCBx exchange with a DCBx peer.
syslog message is generated and the peer version is recorded in the peer status table. If the frame cannot be processed, it is discarded and the discard counter is incremented. DCBx Example The following figure shows how DCBx is used on an Aggregator installed in a Dell PowerEdge FX2 server chassis in which servers are also installed. The Aggregator ports are numbered 1 to 12. Ports 1 to 8 are internal server-facing interfaces. Ports 9 to 12 are uplink ports.
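The default role assignment in this topology reduces to a one-line rule. The helper below is hypothetical (not a Dell OS function); the port ranges come from the FX2 layout described above.

```python
# Ports 1-8 are internal server-facing interfaces (auto-downstream);
# ports 9-12 are uplinks (auto-upstream).
def default_dcbx_port_role(port):
    if not 1 <= port <= 12:
        raise ValueError("Aggregator ports are numbered 1 to 12")
    return "auto-downstream" if port <= 8 else "auto-upstream"
```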
Figure 4. DCBx Sample Topology DCBx Prerequisites and Restrictions The following prerequisites and restrictions apply when you configure DCBx operation on a port: • DCBx requires LLDP in both send (TX) and receive (RX) modes to be enabled on a port interface. If multiple DCBx peer ports are detected on a local DCBx interface, LLDP is shut down.
• The CIN version of DCBx supports only PFC, ETS, and FCoE; it does not support iSCSI, backward congestion notification (BCN), logical link down (LLD), and network interface virtualization (NIV).

DCBx Error Messages

The following syslog messages appear when an error in DCBx operation occurs.

LLDP_MULTIPLE_PEER_DETECTED: DCBx is operationally disabled after detecting more than one DCBx peer on the port interface.

LLDP_PEER_AGE_OUT: DCBx is disabled as a result of LLDP timing out on a DCBx peer interface.
Verifying the DCB Configuration To display DCB configurations, use the following show commands. Table 3. Displaying DCB Configurations Command Output show dcb [stack-unit unit-number] Displays the data center bridging status, number of PFC-enabled ports, and number of PFC-enabled queues. On the master switch in a stack, you can specify a stack-unit number. The range is from 0 to 5.
6 7 0 0 0 0 0 0 Example of the show interfaces pfc summary Command Dell# show interfaces tengigabitethernet 0/4 pfc summary Interface TenGigabitEthernet 0/4 Admin mode is on Admin is enabled Remote is enabled, Priority list is 4 Remote Willing Status is enabled Local is enabled Oper status is Recommended PFC DCBx Oper status is Up State Machine Type is Feature TLV Tx Status is enabled PFC Link Delay 45556 pause quantams Application Priority TLV Parameters : -------------------------------------FCOE TLV
Field
    Description

Admin mode is on
    When admin mode is on, PFC advertisements are enabled to be sent and received from peers; received PFC configuration takes effect. The admin operational status for a DCBx exchange of PFC configuration is enabled or disabled.

Remote is enabled; Priority list / Remote Willing Status is enabled
    Operational status (enabled or disabled) of the peer device for a DCBx exchange of PFC configuration, with a list of the configured PFC priorities.
Application Priority TLV: Local ISCSI Priority Map
    Priority bitmap used by the local DCBx port in iSCSI advertisements in application priority TLVs.

Application Priority TLV: Remote FCOE Priority Map
    Priority bitmap received from the remote DCBx port in FCoE advertisements in application priority TLVs.

Application Priority TLV: Remote ISCSI Priority Map
    Priority bitmap received from the remote DCBx port in iSCSI advertisements in application priority TLVs.
------------------Remote is disabled Local Parameters : -----------------Local is enabled TC-grp Priority# 0 0,1,2,3,4,5,6,7 1 2 3 4 5 6 7 Bandwidth 100% 0% 0% 0% 0% 0% 0% 0% Priority# Bandwidth 0 13% 1 13% 2 13% 3 13% 4 12% 5 12% 6 12% 7 12% Oper status is init Conf TLV Tx Status is disabled Traffic Class TLV Tx Status is disabled TSA ETS ETS ETS ETS ETS ETS ETS ETS TSA ETS ETS ETS ETS ETS ETS ETS ETS Example of the show interface ets detail Command Dell# show interfaces tengigabitethernet Interface Te
4 5 6 7 0% 0% 0% 0% ETS ETS ETS ETS Oper status is init ETS DCBX Oper status is Down Reason: Port Shutdown State Machine Type is Asymmetric Conf TLV Tx Status is enabled Reco TLV Tx Status is enabled 0 Input Conf TLV Pkts, 0 Output Conf TLV Pkts, 0 Error Conf TLV Pkts 0 Input Reco TLV Pkts, 0 Output Reco TLV Pkts, 0 Error Reco TLV Pkts The following table describes the show interface ets detail command fields. Table 5.
Field Description • Internally propagated: ETS configuration parameters were received from configuration source. ETS DCBx Oper status Operational status of ETS configuration on local port: match or mismatch. Reason Reason displayed when the DCBx operational status for ETS on a port is down.
Admin Parameters: -------------------Admin is enabled TC-grp Priority# Bandwidth TSA -----------------------------------------------0 0,1,2,3,4,5,6,7 100% ETS 1 2 3 4 5 6 7 8 Stack unit 1 stack port all Max Supported TC Groups is 4 Number of Traffic Classes is 1 Admin mode is on Admin Parameters: -------------------Admin is enabled TC-grp Priority# Bandwidth TSA -----------------------------------------------0 0,1,2,3,4,5,6,7 100% ETS 1 2 3 4 5 6 7 8 Example of the show interface DCBx detail Command Dell# s
----------------DCBX Operational Version is 0 DCBX Max Version Supported is 0 Sequence Number: 2 Acknowledgment Number: 2 Protocol State: In-Sync Peer DCBX Status: ---------------DCBX Operational Version is 0 DCBX Max Version Supported is 255 Sequence Number: 2 Acknowledgment Number: 2 2 Input PFC TLV pkts, 3 Output PFC TLV pkts, 0 Error PFC pkts, 0 PFC Pause Tx pkts, 0 Pause Rx pkts 2 Input PG TLV Pkts, 3 Output PG TLV Pkts, 0 Error PG TLV Pkts 2 Input Appln Priority TLV pkts, 0 Output Appln Priority TLV p
Field Description Local DCBx TLVs Transmitted Transmission status (enabled or disabled) of advertised DCB TLVs (see TLV code at the top of the show command output). Local DCBx Status: DCBx Operational Version DCBx version advertised in Control TLVs. Local DCBx Status: DCBx Max Version Supported Highest DCBx version supported in Control TLVs. Local DCBx Status: Sequence Number Sequence number transmitted in Control TLVs.
Hierarchical Scheduling in ETS Output Policies ETS supports up to three levels of hierarchical scheduling. For example, you can apply ETS output policies with the following configurations: Priority group 1 Assigns traffic to one priority queue with 20% of the link bandwidth and strictpriority scheduling. Priority group 2 Assigns traffic to one priority queue with 30% of the link bandwidth.
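The arithmetic for the example above can be sketched as follows. This is an illustration only; it assumes a third priority group takes the remaining 50% of the link, which is not stated in the text.

```python
# Divide link bandwidth per the hierarchical example: priority group 1 is
# strict-priority with a 20% cap and is served first; groups 2 and the
# assumed group 3 share the remainder in proportion to 30 and 50.
def allocate(link_bps):
    strict = link_bps * 20 // 100        # PG1: strict priority, 20% cap
    rest = link_bps - strict
    pg2 = rest * 30 // (30 + 50)         # proportional share of the rest
    pg3 = rest - pg2
    return strict, pg2, pg3

# On a 10 Gb/s link this yields 2, 3, and 5 Gb/s respectively.
pg1_bps, pg2_bps, pg3_bps = allocate(10_000_000_000)
```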
Reason Description LLDP Rx/Tx is disabled LLDP is disabled (Admin Mode set to rx or tx only) globally or on the interface. Waiting for Peer Waiting for peer or detected peer connection has aged out. Multiple Peer Detected Multiple peer connections detected on the interface. Version Conflict DCBx version on peer version is different than the local or globally configured DCBx version.
Reason Description • Total bandwidth assigned to priorities in one or more priority groups is not equal to 100%. Or one of the following ETS failure errors occurred: • Incompatible priority group ID (PGID). • Incompatible bandwidth (BW) allocation. • Incompatible TSA. • Incompatible TC BW. • Incompatible TC TSA.
Dynamic Host Configuration Protocol (DHCP) 5 The Aggregator is auto-configured to operate as a dynamic host configuration protocol (DHCP) client. The DHCP server, DHCP relay agent, and secure DHCP features are not supported. The DHCP is an application layer protocol that dynamically assigns IP addresses and other configuration parameters to network end-stations (hosts) based on configuration policies determined by network administrators.
binding table. The server then broadcasts a DHCPACK message, which signals to the client that it may begin using the assigned parameters. There are additional messages that are used in case the DHCP negotiation deviates from the process previously described and shown in the illustration below. DHCPDECLINE A client sends this message to the server in response to a DHCPACK if the configuration parameters are unacceptable; for example, if the offered address is already in use.
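The exchange described above, including the deviation messages, can be summarized as a small client-side state table. This is an illustrative sketch of RFC 2131 client behavior, not the Dell OS DHCP client; the event names are assumptions.

```python
# (current_state, event) -> (next_state, message the client sends, if any)
TRANSITIONS = {
    ("INIT", "start"): ("SELECTING", "DHCPDISCOVER"),
    ("SELECTING", "DHCPOFFER"): ("REQUESTING", "DHCPREQUEST"),
    ("REQUESTING", "DHCPACK"): ("BOUND", None),
    ("REQUESTING", "DHCPNAK"): ("INIT", None),      # offer rejected by server
    ("BOUND", "T1_EXPIRED"): ("RENEWING", "DHCPREQUEST"),
    ("RENEWING", "DHCPACK"): ("BOUND", None),
}

def step(state, event):
    """Return (next_state, message_to_send) for a client state/event pair."""
    return TRANSITIONS[(state, event)]
```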
Figure 5. Assigning Network Parameters using DHCP Dell Networking OS Behavior: DHCP is implemented in Dell Networking OS based on RFC 2131 and 3046. Debugging DHCP Client Operation To enable debug messages for DHCP client operation, enter the following debug commands: • Enable the display of log messages for all DHCP packets sent and received on DHCP client interfaces.
[no] debug ip dhcp client events [interface type slot/port] The following example shows the packet- and event-level debug messages displayed for the packet transmissions and state transitions on a DHCP client interface.
Ma 0/0 :Transitioned to state STOPPED 1w2d23h: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_EVT: Interface Ma 0/0 :DHCP IP RELEASED CMD sent to FTOS in state STOPPED Dell# renew dhcp int Ma 0/0 Dell#1w2d23h: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_EVT: Interface Ma 0/0 :DHCP RENEW CMD Received in state STOPPED 1w2d23h: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_EVT: Interface Ma 0/0 :Transitioned to state SELECTING 1w2d23h: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_
DHCP Client An Aggregator is auto-configured to operate as a DHCP client. The DHCP client functionality is enabled only on the default VLAN and the management interface. A DHCP client is a network device that requests an IP address and configuration parameters from a DHCP server.
Important: To verify the currently configured dynamic IP address on an interface, enter the show ip dhcp lease command. The show running-configuration command output only displays ip address dhcp; the currently assigned dynamic IP address is not displayed. DHCP Client on a Management Interface These conditions apply when you enable a management interface to operate as a DHCP client. • The management default route is added with the gateway as the router IP address received in the DHCP ACK packet.
provide, hosts specify the parameters that they require, and the server sends only those parameters. Some common options are shown in the following illustration.

Figure 6. DHCP Packet Format

The following table lists common DHCP options.

Subnet Mask (Option 1)
    Specifies the client's subnet mask.

Router (Option 3)
    Specifies the router IP addresses that may serve as the client's default gateway.
Parameter Request List (Option 55)
    Clients use this option to tell the server which parameters they require. It is a series of octets where each octet is a DHCP option code.

Renewal Time (Option 58)
    Specifies the amount of time after the IP address is granted that the client attempts to renew its lease with the original server.
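The options in the table above share the type-length-value layout defined in RFC 2132: a one-byte option code, a one-byte length, and the value bytes. A minimal encoder sketch:

```python
# Encode one DHCP option in RFC 2132 TLV form.
def encode_option(code, value):
    if len(value) > 255:
        raise ValueError("option value too long")
    return bytes([code, len(value)]) + value

# Option 1 (subnet mask 255.255.255.0) and option 55 requesting options 1 and 3.
subnet_mask = encode_option(1, bytes([255, 255, 255, 0]))
param_request = encode_option(55, bytes([1, 3]))
```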
CONFIGURATION mode int ma 0/0 ip add dhcp relay information-option remote-id For routers between the relay agent and the DHCP server, enter the trust-downstream option. Releasing and Renewing DHCP-based IP Addresses On an Aggregator configured as a DHCP client, you can release a dynamically-assigned IP address without removing the DHCP client operation on the interface. To manually acquire a new IP address from the DHCP server, use the following command.
DHCPDECLINE DHCPRELEASE DHCPREBIND DHCPRENEW DHCPINFORM Dell# 0 0 0 0 0 Example of the show ip dhcp lease Command Dell# show ip dhcp Interface Lease-IP Def-Router ServerId State Lease Obtnd At Lease Expires At ========= ======== ========= ======== ===== ============== ================ Ma 0/0 0.0.0.0/0 0.0.0.0 0.0.0.0 INIT -----NA--------NA---Vl 1 10.1.1.254/24 0.0.0.0 08-27-2011 04:33:39 Renew Time ========== ----NA---08-26-2011 16:21:50 70 10.1.1.
FIP Snooping 6 This chapter describes about the FIP snooping concepts and configuration procedures. Supported Modes Standalone, PMUX, VLT Fibre Channel over Ethernet Fibre Channel over Ethernet (FCoE) provides a converged Ethernet network that allows the combination of storage-area network (SAN) and LAN traffic on a Layer 2 link by encapsulating Fibre Channel data into Ethernet frames.
requirement for point-to-point connections by creating a unique virtual link for each connection between an FCoE end-device and an FCF via a transit switch. FIP provides a functionality for discovering and logging in to an FCF. After discovering and logging in, FIP allows FCoE traffic to be sent and received between FCoE end-devices (ENodes) and the FCF. FIP uses its own EtherType and frame format.
FIP Snooping on Ethernet Bridges In a converged Ethernet network, intermediate Ethernet bridges can snoop on FIP packets during the login process on an FCF. Then, using ACLs, a transit bridge can permit only authorized FCoE traffic to be transmitted between an FCoE end-device and an FCF. An Ethernet bridge that provides these functions is called a FIP snooping bridge (FSB). On a FIP snooping bridge, ACLs are created dynamically as FIP login frames are processed.
Figure 8. FIP Snooping on an Aggregator

The following sections describe how to configure the FIP snooping feature on a switch that functions as a FIP snooping bridge so that it can perform the following functions:
• Perform FIP snooping (allowing and parsing FIP frames) globally on all VLANs or on a per-VLAN basis.
• Set the FCoE MAC address prefix (FC-MAP) value used by an FCF to assign a MAC address to an FCoE end-device (server ENode or storage device) after a server successfully logs in.
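The FC-MAP prefix mentioned above is half of the fabric-provided MAC address (FPMA) scheme from the FC-BB-5 standard: the FCF concatenates its 3-byte FC-MAP with the ENode's 3-byte FC-ID to form the 6-byte FCoE MAC. A sketch; the FC-MAP value 0x0EFC00 and the FC-ID used here are illustrative assumptions.

```python
# Build a fabric-provided MAC address (FPMA): FC-MAP (upper 3 bytes)
# concatenated with the ENode's FC-ID (lower 3 bytes).
def fpma(fc_map, fc_id):
    assert fc_map < 2**24 and fc_id < 2**24
    mac = (fc_map << 24) | fc_id
    return ":".join(f"{(mac >> (8 * i)) & 0xFF:02x}" for i in range(5, -1, -1))

# Example: assumed FC-MAP 0x0EFC00, FC-ID 62:00:11 -> 0e:fc:00:62:00:11
addr = fpma(0x0EFC00, 0x620011)
```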
FIP Snooping on VLANs FIP snooping is enabled globally on an Aggregator on all VLANs: • FIP frames are allowed to pass through the switch on the enabled VLANs and are processed to generate FIP snooping ACLs. • FCoE traffic is allowed on VLANs only after a successful virtual-link initialization (fabric login FLOGI) between an ENode and an FCF. All other FCoE traffic is dropped. • At least one interface is auto-configured for FCF (FIP snooping bridge — FCF) mode on a FIP snooping-enabled VLAN.
– Each FIP snooping port is auto-configured to operate in Hybrid mode so that it accepts both tagged and untagged VLAN frames. – Tagged VLAN membership is auto-configured on each FIP snooping port that sends and receives FCoE traffic and has links with an FCF, ENode server or another FIP snooping bridge. – The default VLAN membership of the port should continue to operate with untagged frames. FIP snooping is not supported on a port that is configured for non-default untagged VLAN membership.
interface port-type slot/port By default, a port is configured for bridge-to-ENode links. 5. Configure the port for bridge-to-FCF links. INTERFACE or CONFIGURATION mode fip-snooping port-mode fcf NOTE: All these configurations are available only in PMUX mode. NOTE: To disable the FIP snooping feature or FIP snooping on VLANs, use the no version of a command; for example, no feature fip-snooping or no fip-snooping enable.
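As an illustration only, the steps above might be combined as follows in PMUX mode. The VLAN ID (100), FC-MAP value (0x0efc00), and port number (Te 0/9) are example values chosen for this sketch, not defaults:

Dell(conf)#feature fip-snooping
Dell(conf)#interface vlan 100
Dell(conf-if-vl-100)#fip-snooping enable
Dell(conf-if-vl-100)#fip-snooping fc-map 0x0efc00
Dell(conf-if-vl-100)#exit
Dell(conf)#interface tengigabitethernet 0/9
Dell(conf-if-te-0/9)#fip-snooping port-mode fcf

Use the no versions of these commands, as noted above, to back out any of these settings.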
clear fip-snooping statistics [interface vlan vlan-id | interface port-type port/slot | interface port-channel portchannel-number] Clears the statistics on the FIP packets snooped on all VLANs, a specified VLAN, or a specified port interface. show fip-snooping system Displays information on the status of FIP snooping on the switch (enabled or disabled), including the number of FCoE VLANs, FCFs, ENodes, and currently active sessions.
show fip-snooping enode Command Example
Dell# show fip-snooping enode
Enode MAC          Enode Interface  FCF MAC            VLAN  FC-ID
---------          ---------------  -------            ----  -----
d4:ae:52:1b:e3:cd  Te 0/1           54:7f:ee:37:34:40  100   62:00:11
show fip-snooping enode Command Description
Field            Description
ENode MAC        MAC address of the ENode.
ENode Interface  Slot/port number of the interface connected to the ENode.
FCF MAC          MAC address of the FCF.
VLAN             VLAN ID number used by the session.
FC-ID Fibre Channel session ID assigned by the FCF.
Number of VN Port Keep Alive                      :0
Number of Multicast Discovery Advertisement       :4451
Number of Unicast Discovery Advertisement         :2
Number of FLOGI Accepts                           :2
Number of FLOGI Rejects                           :0
Number of FDISC Accepts                           :16
Number of FDISC Rejects                           :0
Number of FLOGO Accepts                           :0
Number of FLOGO Rejects                           :0
Number of CVL                                     :0
Number of FCF Discovery Timeouts                  :0
Number of VN Port Session Timeouts                :0
Number of Session failures due to Hardware Config :0
show fip-snooping statistics Command Description
Field  Description
Number of FDISC Rejects           Number of FIP FDISC reject frames received on the interface.
Number of FLOGO Accepts           Number of FIP FLOGO accept frames received on the interface.
Number of FLOGO Rejects           Number of FIP FLOGO reject frames received on the interface.
Number of CVLs                    Number of FIP clear virtual link frames received on the interface.
Number of FCF Discovery Timeouts  Number of FCF discovery timeouts that occurred on the interface.
FIP Snooping Example The following figure shows an Aggregator used as a FIP snooping bridge for FCoE traffic between an ENode (server blade) and an FCF (ToR switch). The ToR switch operates as an FCF and FCoE gateway. Figure 9. FIP Snooping on an Aggregator In the above figure, DCBX and PFC are enabled on the Aggregator (FIP snooping bridge) and on the FCF ToR switch. On the FIP snooping bridge, DCBX is configured as follows: • A server-facing port is configured for DCBX in an auto-downstream role.
Debugging FIP Snooping To enable debug messages for FIP snooping events, enter the debug fip-snooping command.
Task: Enable FIP snooping debugging for all events or for a specified event type.
Command: debug fip-snooping [all | acl | error | ifm | info | ipc | rx]
Command Mode: EXEC PRIVILEGE
all enables all debugging options. acl enables debugging only for ACL-specific events. error enables debugging only for error conditions. ifm enables debugging only for IFM events.
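For example, a typical troubleshooting sequence is to enable debugging for one event type and then disable it with the no form; the rx keyword here is an illustrative choice:

Dell#debug fip-snooping rx
Dell#no debug fip-snooping rx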
Internet Group Management Protocol (IGMP) 7 On an Aggregator, IGMP snooping is auto-configured. You can display information on IGMP by using the show ip igmp command. Multicast is based on identifying many hosts by a single destination IP address. Hosts represented by the same IP address are a multicast group. The internet group management protocol (IGMP) is a Layer 3 multicast protocol that hosts use to join or leave a multicast group.
Figure 10. IGMP Version 2 Packet Format Joining a Multicast Group There are two ways that a host may join a multicast group: it may respond to a general query from its querier, or it may send an unsolicited report to its querier. • Responding to an IGMP Query. – One router on a subnet is elected as the querier. The querier periodically multicasts (to the all-multicast-systems address 224.0.0.1) a general query to all hosts on the subnet.
• To enable filtering, routers must keep track of more state information, that is, the list of sources that must be filtered. An additional query type, the group-and-source-specific query, keeps track of state changes, while the group-specific and general queries still refresh existing state. • Reporting is more efficient and robust.
• The host’s third message indicates that it is only interested in traffic from sources 10.11.1.1 and 10.11.1.2. Because this request again prevents all other sources from reaching the subnet, the router sends another group-and-source query so that it can satisfy all other hosts. There are no other interested hosts, so the request is recorded. Figure 13.
Figure 14. IGMP Membership Queries: Leaving and Staying in Groups IGMP Snooping IGMP snooping is auto-configured on an Aggregator. Multicast packets are addressed with multicast MAC addresses, which represent a group of devices rather than one unique device. Switches forward multicast frames out of all ports in a VLAN by default, even if there are only a small number of interested hosts, resulting in a waste of bandwidth.
Disabling Multicast Flooding If the switch receives a multicast packet that has an IP address of a group it has not learned (unregistered frame), the switch floods that packet out of all ports on the VLAN. To disable multicast flooding on all VLAN ports, enter the no ip igmp snooping flood command in global configuration mode. When multicast flooding is disabled, unregistered multicast data traffic is forwarded to only multicast router ports on all VLANs.
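A minimal sketch of the commands described above; the no form disables flooding of unregistered multicast frames, and the plain form restores the default behavior:

Dell(conf)#no ip igmp snooping flood
Dell(conf)#ip igmp snooping flood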
Group source list
Source address 1.1.1.2   Uptime 00:00:21   Expires 00:01:48
Member Ports: Po 1

Interface              Vlan 1600
Group                  226.0.0.1
Uptime                 00:00:04
Expires                Never
Router mode            INCLUDE
Last reporter          1.1.1.
Last reporter mode
Last report received
Group source list
Source address 1.1.1.2   Uptime 00:00:04   Expires 00:02:05
Member Ports: Po 1
Dell#
Interfaces 8 This chapter describes TenGigabit Ethernet interface types, both physical and logical, and how to configure them with the Dell Networking Operating Software (OS).
• All interfaces are auto-configured as members of all (4094) VLANs and untagged VLAN 1. All VLANs are up and can send or receive layer 2 traffic. You can use the Command Line Interface (CLI) or CMC interface to configure only the required VLANs on a port interface. Interface Types The following interface types are supported on an Aggregator.
0 packets, 0 bytes 0 64-byte pkts, 0 over 64-byte pkts, 0 over 127-byte pkts 0 over 255-byte pkts, 0 over 511-byte pkts, 0 over 1023-byte pkts 0 Multicasts, 0 Broadcasts 0 runts, 0 giants, 0 throttles 0 CRC, 0 overrun, 0 discarded Output Statistics: 0 packets, 0 bytes, 0 underruns 0 64-byte pkts, 0 over 64-byte pkts, 0 over 127-byte pkts 0 over 255-byte pkts, 0 over 511-byte pkts, 0 over 1023-byte pkts 0 Multicasts, 0 Broadcasts, 0 Unicasts 0 throttles, 0 discarded, 0 collisions, 0 wreddrops Rate info (inte
Disabling and Re-enabling a Physical Interface By default, all port interfaces on an Aggregator are operationally enabled (no shutdown) to send and receive Layer 2 traffic. You can reconfigure a physical interface to shut it down by entering the shutdown command. To re-enable the interface, enter the no shutdown command. Step Command Syntax Command Mode Purpose 1. interface interface CONFIGURATION Enter the keyword interface followed by the type of interface and slot/port information: 2.
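For example, to shut down and then re-enable a server-facing port (the port number 0/1 is illustrative):

Dell(conf)#interface tengigabitethernet 0/1
Dell(conf-if-te-0/1)#shutdown
Dell(conf-if-te-0/1)#no shutdown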
advertise management-tlv system-name dcbx port-role auto-downstream no shutdown Dell(conf-if-te-0/1)# To view the interfaces in Layer 2 mode, use the show interfaces switchport command in EXEC mode. Management Interfaces An Aggregator auto-configures with a DHCP-based IP address for in-band management on VLAN 1 and remote out-of-band (OOB) management. The Aggregator management interface has both a public IP and private IP address on the internal Fabric D interface.
Slot range: 0-0 To configure an IP address on a management interface, use either of the following commands in MANAGEMENT INTERFACE mode: Command Syntax Command Mode Purpose ip address ip-address mask INTERFACE Configure an IP address and mask on the interface. • ip address dhcp INTERFACE ip-address mask: enter an address in dotted-decimal format (A.B.C.D), the mask must be in /prefix format (/x) Acquire an IP address from the DHCP server.
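A sketch of assigning a static management address using the commands above; the address shown is an example, so substitute an address valid for your OOB management network:

Dell(conf)#interface managementethernet 0/0
Dell(conf-if-ma-0/0)#ip address 10.16.130.10/24
Dell(conf-if-ma-0/0)#no shutdown

Alternatively, enter ip address dhcp in the same mode to acquire the address from a DHCP server.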
VLAN Membership A virtual LAN (VLAN) is a logical broadcast domain or logical grouping of interfaces in a LAN in which all data received is kept locally and broadcast to all members of the group. In Layer 2 mode, VLANs move traffic at wire speed and can span multiple devices. Dell Networking OS supports up to 4093 port-based VLANs and one default VLAN, as specified in IEEE 802.1Q. VLANs provide the following benefits: • Improved security because you can isolate groups of users into different VLANs.
VLANs and Port Tagging To add an interface to a VLAN, it must be in Layer 2 mode. After you place an interface in Layer 2 mode, it is automatically placed in the default VLAN. Dell Networking OS supports IEEE 802.1Q tagging at the interface level to filter traffic. When you enable tagging, a tag header is added to the frame after the destination and source MAC addresses. This information is preserved as the frame moves through the network.
vlan-id specifies a tagged VLAN number. Range: 2-4094 To reconfigure an interface as a member of only specified untagged VLANs, enter the vlan untagged command in INTERFACE mode: Command Syntax Command Mode Purpose vlan untagged {vlan-id} INTERFACE Add the interface as an untagged member of one or more VLANs, where: vlan-id specifies an untagged VLAN number.
Adding an Interface to a Tagged VLAN The following example shows you how to add a tagged interface (Te 0/2) to the VLANs. Enter the vlan tagged command to add interface Te 0/2 to VLANs 2 - 4, as shown below. Enter the show config command to verify that interface Te 0/2 is a tagged member of the VLANs.
VLAN Configuration on Physical Ports and Port-Channels Unlike other Dell Networking OS platforms, IOA allows VLAN configurations on port and port-channel levels. This allows you to assign VLANs to a port/port-channel. NOTE: In PMUX mode, in order to avoid loops, only disjoint VLANs are allowed between the uplink ports/uplink LAGs and uplink-to-uplink switching is disabled. 1. Initialize the port with configurations such as admin up, portmode, and switchport.
Mirroring VLANs, P - Primary, C - Community, I - Isolated, O - Openflow
Q: U - Untagged, T - Tagged
   x - Dot1x untagged, X - Dot1x tagged
   o - OpenFlow untagged, O - OpenFlow tagged
   G - GVRP tagged, M - Vlan-stack, H - VSN tagged
   i - Internal untagged, I - Internal tagged, v - VLT untagged, V - VLT tagged

    NUM   Status   Description   Q Ports
*   1     Active                 U Te 0/3
    10    Active                 T Po128(Te 0/4-5)
                                 T Te 0/1
    11    Active                 T Po128(Te 0/4-5)
    12    Active                 T Po128(Te 0/4-5)
    13    Active                 T
    14    Active
    15    Active
    20    Active
Dell#
A port channel provides redundancy by aggregating physical interfaces into one logical interface. If one physical interface goes down in the port channel, another physical interface carries the traffic. Port Channel Benefits A port channel interface provides many benefits, including easy management, link redundancy, and sharing. Port channels are transparent to network configurations and can be modified and managed as one interface.
configuration becomes the common speed of the port channel. If the other interfaces configured in that port channel are configured with a different speed, Dell Networking OS disables them.
In this example, the Port-channel 1 is a dynamically created port channel based on the NIC teaming configuration in connected servers learned via LACP. Also, the Port-channel 128 is the default port channel to which all the uplink ports are assigned by default.
To display all interfaces that have been validated under the interface range context, use the show range command in Interface Range mode. To display the running configuration only for interfaces that are part of interface range, use the show configuration command in Interface Range mode.
Dell(conf-if-range-te-0/1-5)# no shutdown Dell(conf-if-range-te-0/1-5)# Monitor and Maintain Interfaces You can display interface statistics with the monitor interface command. This command displays an ongoing list of the interface status (up/down), number of packets, traffic statistics, etc. Command Syntax Command Mode Purpose monitor interface interface EXEC Privilege View interface statistics.
Output underruns:   0        0        0 pps
Output throttles:   0        0        0 pps

m - Change mode                  c - Clear screen
l - Page up                      a - Page down
T - Increase refresh interval    t - Decrease refresh interval
q - Quit

Maintenance Using TDR The time domain reflectometer (TDR) is supported on all Dell Networking switch/routers. TDR is a diagnostic tool that helps resolve link issues by detecting obvious open or short conditions within any of the four copper pairs.
• DCB is required for PFC, ETS, DCBX, and FCoE initialization protocol (FIP) snooping to operate. Link-level flow control uses Ethernet pause frames to signal the other end of the connection to pause data transmission for a certain amount of time as specified in the frame. Ethernet pause frames allow for a temporary stop in data transmission. A situation may arise where a sending device may transmit data faster than a destination device can accept it.
– tx off: enter the keywords tx off so that flow control frames are not sent from this port to the connected device when a higher rate of traffic is received. – negotiate: enable pause-negotiation with the egress port of the peer device. If the negotiate command is not used, pause-negotiation is disabled. NOTE: The default is rx off. MTU Size The Aggregator auto-configures interfaces to use a maximum MTU size of 12,000 bytes.
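For example, to accept pause frames on a port without sending them (the port number is illustrative):

Dell(conf)#interface tengigabitethernet 0/2
Dell(conf-if-te-0/2)#flowcontrol rx on tx off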
For example, the VLAN contains tagged members with a link MTU of 1522 and an IP MTU of 1500 and untagged members with a link MTU of 1518 and an IP MTU of 1500. The VLAN’s Link MTU cannot be higher than 1518 bytes and its IP MTU cannot be higher than 1500 bytes. Auto-Negotiation on Ethernet Interfaces Setting Speed and Duplex Mode of Ethernet Interfaces By default, auto-negotiation of speed and duplex mode is enabled on 10GbE Ethernet interface on an Aggregator.
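Assuming you need a non-default link MTU on a port, the configuration is a single command in INTERFACE mode; the value 9216 here is an example within the supported range, not a recommended setting:

Dell(conf)#interface tengigabitethernet 0/3
Dell(conf-if-te-0/3)#mtu 9216

Remember from the discussion above that a VLAN's link MTU and IP MTU cannot exceed those of its lowest-MTU member port.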
show interface status Command Example:
Dell# show interfaces status
Port     Description  Status  Speed       Duplex  Vlan
Te 0/1                Up      10000 Mbit  Full    1-4094
Te 0/2                Down    Auto        Auto    1-1001,1003-4094
Te 0/3                Up      10000 Mbit  Full    1-1001,1003-4094
Te 0/4                Down    Auto        Auto    1-1001,1003-4094
Te 0/5                Up      10000 Mbit  Full    1-4094
Te 0/6                Up      10000 Mbit  Full    1-4094
Te 0/7                Up      10000 Mbit  Full    1-4094
Te 0/8   toB300       Down    Auto        Auto    1-1001,1003-4094
Fc 0/9                Up      8000 Mbit   Full    --
Fc 0/10               Up      8000 Mbit   Full    --
Te 0/11               Down    Auto        Auto    --
Te 0/12               Down    Auto        Auto    --
speed auto (INTERFACE config mode): Supported / Supported / Supported / Supported
speed 1000 (INTERFACE config mode): Supported / Supported / Supported / Supported
speed 10000 (INTERFACE config mode): Supported / Supported / Not supported / Not supported
negotiation auto (INTERFACE config mode): Supported / Not supported / Not supported / Not supported
NOTE: Error messages are not thrown for settings listed as not supported.
interface, whether the interface supports IEEE 802.1Q tagging or not, and the VLANs to which the interface belongs.
show interfaces switchport Command Example:
Dell#show interfaces switchport
Codes: U - Untagged, T - Tagged
       x - Dot1x untagged, X - Dot1x tagged
       G - GVRP tagged, M - Trunk, H - VSN tagged
       i - Internal untagged, I - Internal tagged, v - VLT untagged, V - VLT tagged

Name: TenGigabitEthernet 0/1
802.1QTagged: False
Vlan membership:
Q   Vlans
U   1

Name: TenGigabitEthernet 0/2
802.
Without an interface specified, the command clears all interface counters. • (OPTIONAL) Enter the following interface keywords and slot/port or number information: • For a Port Channel interface, enter the keyword portchannel followed by a number from 1 to 128. • For a 10-Gigabit Ethernet interface, enter the keyword TenGigabitEthernet followed by the slot/port numbers. • For a VLAN, enter the keyword vlan followed by a number from 1 to 4094.
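For example, using the keywords listed above (interface and channel numbers are illustrative):

Dell#clear counters tengigabitethernet 0/1
Dell#clear counters port-channel 128
Dell#clear counters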
iSCSI Optimization 9 An Aggregator enables internet small computer system interface (iSCSI) optimization with default iSCSI parameter settings (Default iSCSI Optimization Values) and is auto-provisioned to support: iSCSI Optimization: Operation To display information on iSCSI configuration and sessions, use show commands. iSCSI optimization enables quality-of-service (QoS) treatment for iSCSI traffic.
• iSCSI QoS — A user-configured iSCSI class of service (CoS) profile is applied to all iSCSI traffic. Classifier rules are used to direct the iSCSI data traffic to queues that can be given preferential QoS treatment over other data passing through the switch. Preferential treatment helps to avoid session interruptions during times of congestion that would otherwise cause dropped iSCSI packets. • iSCSI DCBx TLVs are supported.
Monitoring iSCSI Traffic Flows The switch snoops iSCSI session-establishment and termination packets by installing classifier rules that trap iSCSI protocol packets to the CPU for examination. Devices that initiate iSCSI sessions usually use well-known TCP ports 3260 or 860 to contact targets. When you enable iSCSI optimization, by default the switch identifies IP packets to or from these ports as iSCSI traffic.
• iSCSI LLDP monitoring starts to automatically detect EqualLogic arrays. iSCSI optimization requires LLDP to be enabled. LLDP is enabled by default when an Aggregator autoconfigures. The following message displays when you enable iSCSI on a switch and describes the configuration changes that are automatically performed: %STKUNIT0-M:CP %IFMGR-5-IFM_ISCSI_ENABLE: iSCSI has been enabled causing flow control to be enabled on all interfaces.
• enable: enables the application of preferential QoS treatment to iSCSI traffic so that iSCSI packets are scheduled in the switch with a dot1p priority 4 regardless of the VLAN priority tag in the packet. The default is: iSCSI packets are handled with dot1p priority 4 without remark. • disable: disables the application of preferential QoS treatment to iSCSI frames.
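A sketch of toggling the enable and disable keywords described above via the iscsi cos command; verify the exact syntax in the command reference for your OS version before relying on it:

Dell(conf)#iscsi cos enable
Dell(conf)#iscsi cos disable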
Displaying iSCSI Optimization Information To display information on iSCSI optimization, use the show commands detailed in the below table: Table 7. Displaying iSCSI Optimization Information Command Output show iscsi Displays the currently configured iSCSI settings. show iscsi sessions Displays information on active iSCSI sessions on the switch that have been established since the last reload.
Up Time:00:00:01:28(DD:HH:MM:SS) Time for aging out:00:00:09:34(DD:HH:MM:SS) ISID:806978696102 Initiator Initiator Target Target Connection IP Address TCP Port IP Address TCPPort ID 10.10.0.44 33345 10.10.0.101 3260 0 Session 1 : ----------------------------------------------------------------------------Target:iqn.2010-11.com.ixia:ixload:iscsi-TG1 Initiator:iqn.2010-11.com.ixia.
Isolated Networks for Aggregators 10 An Isolated Network is an environment in which servers can only communicate with the uplink interfaces and not with each other, even though they are part of the same VLAN. If servers in the same chassis need to communicate with each other, they require non-isolated network connectivity between them, or the traffic must be routed through the ToR switch. Isolated Networks can be enabled on a per-VLAN basis.
Link Aggregation 11 Unlike IOA Automated modes (Standalone and VLT modes), the IOA Programmable MUX (PMUX) can support multiple uplink LAGs. You can provision multiple uplink LAGs. The I/O Aggregator autoconfigures with link aggregation groups (LAGs) as follows: • All uplink ports are automatically configured in a single port channel (LAG 128).
Uplink LAG When the Aggregator power is on, all uplink ports are configured in a single LAG (LAG 128). Server-Facing LAGs Server-facing ports are configured as individual ports by default. If you configure a server NIC in standalone, stacking, or VLT mode for LACP-based NIC teaming, server-facing ports are automatically configured as part of dynamic LAGs. The LAG range 1 to 127 is reserved for server-facing LAGs.
Link Aggregation Control Protocol (LACP) The commands for Dell Networking's implementation of the link aggregation control protocol (LACP) for creating dynamic link aggregation groups (LAGs) — known as port-channels in the Dell Networking OS — are provided in the following sections. NOTE: For static LAG commands, refer to the Interfaces chapter, based on the standards specified in the IEEE 802.
You can add any physical interface to a port channel if the interface configuration is minimal. You can configure only the following commands on an interface if it is a member of a port channel: • description • shutdown/no shutdown • mtu • ip mtu (if the interface is on a Jumbo-enabled VLAN) NOTE: A logical port channel interface cannot have flow control. Flow control can only be present on the physical interfaces if they are part of a port channel.
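For instance, on a port that is already a port channel member, only the commands listed above are accepted (the port number and description text are illustrative):

Dell(conf)#interface tengigabitethernet 0/4
Dell(conf-if-te-0/4)#description server-facing LAG member
Dell(conf-if-te-0/4)#no shutdown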
Example of the show interface port-channel Command Dell#show interface port-channel 1 Port-channel 1 is down, line protocol is down Hardware address is 00:1e:c9:de:04:9c, Current address is 00:1e:c9:de:04:9c Interface index is 1107492865 Minimum number of links to bring Port-channel up is 1 Internet address is not set Mode of IPv4 Address Assignment : NONE DHCP Client-ID :001ec9de049c MTU 1554 bytes, IP MTU 1500 bytes LineSpeed auto Members in this channel: ARP type: ARPA, ARP Timeout 04:00:00 Last clearing
Reassigning an Interface to a New Port Channel An interface can be a member of only one port channel. If the interface is a member of a port channel, remove it from the first port channel and then add it to the second port channel. Each time you add or remove a channel member from a port channel, Dell Networking OS recalculates the hash algorithm for the port channel. To reassign an interface to a new port channel, use the following commands. 1. Remove the interface from the first port channel.
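In PMUX mode, where static port channels can be edited, the two steps above might look like this sketch using the channel-member command (channel and port numbers are example values):

Dell(conf)#interface port-channel 1
Dell(conf-if-po-1)#no channel-member tengigabitethernet 0/5
Dell(conf-if-po-1)#exit
Dell(conf)#interface port-channel 2
Dell(conf-if-po-2)#channel-member tengigabitethernet 0/5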
Example of Configuring the Minimum Oper Up Links in a Port Channel Dell#config t Dell(conf)#int po 128 Dell(conf-if-po-128)#minimum-links 5 Dell(conf-if-po-128)# Configuring VLAN Tags for Member Interfaces To configure and verify VLAN tags for individual members of a port channel, perform the following: 1. Configure VLAN membership on individual ports INTERFACE mode Dell(conf-if-te-0/2)#vlan tagged 2,3-4 2.
shutdown When you disable a port channel, all interfaces within the port channel are operationally down also. Configuring Auto LAG You can enable or disable auto LAG on the server-facing interfaces. By default, auto LAG is enabled. This functionality is supported on the Aggregator in Standalone, Stacking, and VLT modes. To configure auto LAG, use the following commands: 1. Enable the auto LAG on all the server ports.
Server Port AdminState is Up Pluggable media not present Interface index is 15274753 Internet address is not set Mode of IPv4 Address Assignment : NONE DHCP Client-ID :f8b156071d8e MTU 12000 bytes, IP MTU 11982 bytes LineSpeed auto Auto-lag is disabled Flowcontrol rx on tx off ARP type: ARPA, ARP Timeout 04:00:00 Last clearing of "show interface" counters 00:12:53 Queueing strategy: fifo Input Statistics: 0 packets, 0 bytes 0 64-byte pkts, 0 over 64-byte pkts, 0 over 127-byte pkts 0 over 255-byte pkts, 0 ov
Configuring the Minimum Number of Links to be Up for Uplink LAGs to be Active You can activate the LAG bundle for uplink interfaces or ports (the uplink port-channel is LAG 128) on the I/O Aggregator only when a minimum number of member interfaces of the LAG bundle is up. For example, based on your network deployment, you may want the uplink LAG bundle to be activated only if a certain number of member interface links is also in the up state.
Output 00.00 Mbits/sec, 0 packets/sec, 0.00% of line-rate Time since last interface status change: 05:22:28 Optimizing Traffic Disruption Over LAG Interfaces On IOA Switches in VLT Mode When you use the write memory command while an Aggregator operates in VLT mode, the VLT LAG configurations are saved in nonvolatile storage (NVS). By restoring the settings saved in NVS, the VLT ports come up quicker on the primary VLT node and traffic disruption is reduced.
Monitoring the Member Links of a LAG Bundle You can examine and view the operating efficiency and the traffic-handling capacity of member interfaces of a LAG or port channel bundle. This method of analyzing and tracking the number of packets processed by the member interfaces helps you manage and distribute the packets that are handled by the LAG bundle.
Verifying LACP Operation and LAG Configuration To verify the operational status and configuration of a dynamically created LAG, and LACP operation on a LAG on an Aggregator, enter the show interfaces port-channel port-channel-number and show lacp port-channel-number commands.
I - Collection enabled, J - Collection disabled, K - Distribution enabled
L - Distribution disabled, M - Partner Defaulted, N - Partner Non-defaulted,
O - Receiver is in expired state, P - Receiver is not in expired state

Port Te 0/9 is enabled, LACP is enabled and mode is lacp
Port State: Bundle
  Actor   Admin: State ADEHJLMP Key 128 Priority 32768
           Oper: State ADEGIKNP Key 128 Priority 32768
  Partner Admin: State BDFHJLMP Key 0   Priority 0
           Oper: State ACEGIKNP Key 128 Priority 32768

Port Te 0/10 is enabled,
show lacp 1 Command Example Dell# show lacp 1 Port-channel 1 admin up, oper up, mode lacp Actor System ID: Priority 32768, Address 0001.e8e1.e1c3 Partner System ID: Priority 65535, Address 24b6.fd87.
4. Configure the port mode, VLAN, and so forth on the port-channel. Dell#configure Dell(conf)#int port-channel 10 Dell(conf-if-po-10)#portmode hybrid Dell(conf-if-po-10)#switchport Dell(conf-if-po-10)#vlan tagged 1000 Dell(conf-if-po-10)#link-bundle-monitor enable Dell#configure Dell(conf)#int port-channel 11 Dell(conf-if-po-11)#portmode hybrid Dell(conf-if-po-11)#switchport Dell(conf-if-po-11)#vlan tagged 1000 % Error: Same VLAN cannot be added to more than one uplink port/LAG.
Layer 2 12 The Aggregator supports CLI commands to manage the MAC address table: • Clearing the MAC Address Entries • Displaying the MAC Address Table The Aggregator auto-configures with support for Network Interface Controller (NIC) Teaming. NOTE: On an Aggregator, all ports are configured by default as members of all (4094) VLANs, including the default VLAN. All VLANs operate in Layer 2 mode.
clear mac-address-table dynamic {all | interface {tengigabitethernet <0–5> | SLOT/PORT} } • all: deletes all dynamic entries. • interface: deletes all entries for the specified interface. Displaying the MAC Address Table To display the MAC address table, use the following command. • Display the contents of the MAC address table. EXEC Privilege mode NOTE: This command is available only in PMUX mode.
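For example, in PMUX mode, to flush all dynamically learned entries and then inspect the table (as noted above, show mac-address-table is available only in PMUX mode):

Dell#clear mac-address-table dynamic all
Dell#show mac-address-table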
Figure 17. Redundant NOCs with NIC Teaming MAC Address Station Move When you use NIC teaming, consider that the server MAC address is originally learned on Port 0/1 of the switch (see figure below). If the NIC fails, the same MAC address is learned on Port 0/5 of the switch. The MAC address is disassociated with one port and re-associated with another in the ARP table; in other words, the ARP entry is “moved”. The Aggregator is auto-configured to support MAC Address station moves.
Figure 18. MAC Address Station Move MAC Move Optimization Station-move detection takes 5000ms because this is the interval at which the detection algorithm runs.
Link Layer Discovery Protocol (LLDP) 13 Link layer discovery protocol (LLDP) advertises connectivity and management from the local station to the adjacent stations on an IEEE 802 LAN. LLDP facilitates multi-vendor interoperability by using standard management tools to discover and make available a physical topology for network management. The Dell Networking operating software implementation of LLDP is based on IEEE standard 802.1AB.
Figure 19. Type, Length, Value (TLV) Segment TLVs are encapsulated in a frame called an LLDP data unit (LLDPDU), which is transmitted from one LLDP-enabled device to its LLDP-enabled neighbors. LLDP is a one-way protocol. LLDP-enabled devices (LLDP agents) can transmit and/or receive advertisements, but they cannot solicit and do not respond to advertisements. There are five types of TLVs (as shown in the below table). All types are mandatory in the construction of an LLDPDU except Optional TLVs.
Figure 20. LLDPDU Frame Configure LLDP Configuring LLDP is a two-step process. 1. Enable LLDP globally. 2. Advertise TLVs out of an interface. Related Configuration Tasks • Viewing the LLDP Configuration • Viewing Information Advertised by Adjacent LLDP Agents • Configuring LLDPDU Intervals • Configuring a Time to Live • Debugging LLDP Important Points to Remember • LLDP is enabled by default. • Dell Networking systems support up to eight neighbors per interface.
exit hello mode multiplier no show Exit from LLDP configuration mode LLDP hello configuration LLDP mode configuration (default = rx and tx) LLDP multiplier configuration Negate a command or set its defaults Show LLDP configuration Dell(conf-lldp)#exit Dell(conf)#interface tengigabitethernet 0/3 Dell(conf-if-te-0/3)#protocol lldp Dell(conf-if-te-0/3-lldp)#? advertise Advertise TLVs disable Disable LLDP protocol on this interface end Exit from configuration mode exit Exit from LLDP configuration mode hello
To advertise TLVs, use the following commands. 1. Enter LLDP mode. CONFIGURATION or INTERFACE mode protocol lldp 2. Advertise one or more TLVs. PROTOCOL LLDP mode advertise {dcbx-appln-tlv | dcbx-tlv | dot3-tlv | interface-port-desc | management-tlv | med } Include the keyword for each TLV you want to advertise. • For management TLVs: system-capabilities, system-description. • For 802.1 TLVs: port-protocol-vlan-id, port-vlan-id. • For 802.3 TLVs: max-frame-size.
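For example, following the two steps above, a switch might advertise the 802.3 maximum frame size and the management system description (the TLV choices here are illustrative):

Dell(conf)#protocol lldp
Dell(conf-lldp)#advertise dot3-tlv max-frame-size
Dell(conf-lldp)#advertise management-tlv system-description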
Optional TLVs The Dell Networking Operating System (OS) supports the following optional TLVs: Management TLVs, IEEE 802.1 and 802.3 organizationally specific TLVs, and TIA-1057 organizationally specific TLVs. Management TLVs A management TLV is an optional TLV sub-type. This kind of TLV contains essential management information about the sender. Organizationally Specific TLVs A professional organization or a vendor can define organizationally specific TLVs.
Type TLV Description Router, Telephone, DOCSIS cable device, end station only, or other. 8 Management address Indicates the network address of the management interface. The Dell Networking OS does not currently support this TLV. 127 Port-VLAN ID On Dell Networking systems, indicates the untagged VLAN to which a port belongs.
Type TLV Description 127 Link Aggregation Indicates whether the link is capable of being aggregated, whether it is currently in a LAG, and the port identification of the LAG. The Dell Networking OS does not currently support this TLV. 127 Maximum Frame Size Indicates the maximum frame size capability of the MAC and PHY. LLDP-MED Capabilities TLV The LLDP-MED capabilities TLV communicates the types of TLVs that the endpoint device and the network connectivity device support.
Table 11. LLDP-MED Device Types Value Device Type 0 Type Not Defined 1 Endpoint Class 1 2 Endpoint Class 2 3 Endpoint Class 3 4 Network Connectivity 5–255 Reserved LLDP-MED Network Policies TLV A network policy in the context of LLDP-MED is a device’s VLAN configuration and associated Layer 2 and Layer 3 configurations.
Type Application Description 4 Guest Voice Signaling Specify this application type only if guest voice control packets use a separate network policy than voice data. 5 Softphone Voice Specify this application type for softphone voice applications running on devices such as PCs or laptops. 6 Video Conferencing Specify this application type for dedicated video conferencing and other similar appliances supporting real-time interactive video.
Figure 25. Extended Power via MDI TLV LLDP Operation On an Aggregator, LLDP operates as follows: • LLDP is enabled by default. • LLDPDUs are transmitted and received by default. LLDPDUs are transmitted periodically. The default interval is 30 seconds. • LLDPDU information received from a neighbor expires after the default Time to Live (TTL) value: 120 seconds. • Dell Networking OS supports up to eight neighbors per interface.
protocol lldp
R1(conf-if-te-0/3-lldp)#
Viewing Information Advertised by Adjacent LLDP Agents To view brief information about adjacent devices or to view all the information that neighbors are advertising, use the following commands.
• Display brief information about adjacent devices.
show lldp neighbors
• Display all of the information that neighbors are advertising.
Configuring LLDPDU Intervals LLDPDUs are transmitted periodically; the default interval is 30 seconds. To configure LLDPDU intervals, use the following command. • Configure a non-default transmit interval.
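The following is an illustrative sketch that sets a 10-second transmit interval in PROTOCOL LLDP mode. The hello keyword appears in the LLDP mode help listed earlier; the supported interval range may vary by release, so verify it with the ? help on your system.

Dell(conf)#protocol lldp
Dell(conf-lldp)#hello 10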
no disable
R1(conf-lldp)#multiplier ?
<2-10>    Multiplier (default=4)
R1(conf-lldp)#multiplier 5
R1(conf-lldp)#show config
!
protocol lldp
 advertise dot1-tlv port-protocol-vlan-id port-vlan-id
 advertise dot3-tlv max-frame-size
 advertise management-tlv system-capabilities system-description
 multiplier 5
 no disable
R1(conf-lldp)#no multiplier
R1(conf-lldp)#show config
!
protocol lldp
 advertise dot1-tlv port-protocol-vlan-id port-vlan-id
 advertise dot3-tlv max-frame-size
 advertise management-tlv system-capabilities system-description
Figure 26. The debug lldp detail Command — LLDPDU Packet Dissection Relevant Management Objects Dell Networking OS supports all IEEE 802.1AB MIB objects. The following tables list the objects associated with: • received and transmitted TLVs • the LLDP configuration on the local agent • IEEE 802.1AB Organizationally Specific TLVs • received and transmitted LLDP-MED TLVs Table 13.
MIB Object Category Basic TLV Selection LLDP Variable LLDP MIB Object Description msgTxInterval lldpMessageTxInterval Transmit Interval value. rxInfoTTL lldpRxInfoTTL Time to live for received TLVs. txInfoTTL lldpTxInfoTTL Time to live for transmitted TLVs. mibBasicTLVsTxEnable lldpPortConfigTLVsTxEnable Indicates which management TLVs are enabled for system ports.
Table 14.
TLV Type TLV Name TLV Variable System LLDP MIB Object
interface numbering subtype Local lldpLocManAddrIfSubtype
Remote lldpRemManAddrIfSubtype
interface number Local lldpLocManAddrIfId
Remote lldpRemManAddrIfId
OID Local lldpLocManAddrOID
Remote lldpRemManAddrOID
Table 15. LLDP 802.
Table 16.
TLV Sub-Type TLV Name TLV Variable System LLDP-MED MIB Object
3 Location Identifier Location Data Format Local lldpXMedLocLocationSubtype
Remote lldpXMedRemLocationSubtype
Location ID Data Local lldpXMedLocLocationInfo
Remote lldpXMedRemLocationInfo
4 Extended Power via MDI Power Device Type Local lldpXMedLocXPoEDeviceType
Remote lldpXMedRemXPoEDeviceType
Power Source Local lldpXMedLocXPoEPSEPowerSource
Remote lldpXMedRemXPoEPSEPowerSource
Local lldpXMedLocXPoEPDPowerSource
lld
Port Monitoring 14 The Aggregator supports user-configured port monitoring. See Configuring Port Monitoring for the configuration commands to use. Port monitoring copies all incoming or outgoing packets on one port and forwards (mirrors) them to another port. The source port is the monitored port (MD) and the destination port is the monitoring port (MG). Supported Modes Standalone, PMUX, VLT, Stacking Configuring Port Monitoring To configure port monitoring, use the following commands. 1.
NOTE: By default, all uplink ports are assigned to port-channel (LAG) 128 and the destination port in a port monitoring session must be an uplink port. When you configure the destination port using the source command, the destination port is removed from LAG 128. To display the uplink ports currently assigned to LAG 128, enter the show lag 128 command.
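A hedged sketch of the session configuration described above, assuming server-facing port 0/1 as the monitored source (MD) and uplink port 0/9 as the destination (MG); the session number and port choices are placeholders to adapt to your hardware:

Dell(conf)#monitor session 0
Dell(conf-mon-sess-0)#source tengigabitethernet 0/1 destination tengigabitethernet 0/9 direction rx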
• The monitored (the source, [MD]) and monitoring ports (the destination, [MG]) must be on the same switch. • The monitored (source) interface must be a server-facing interface in the format slot/port, where the valid slot numbers are 0 and server-facing port numbers are from 1 to 8. • The destination interface must be an uplink port (ports 9 to 12).
Dell Networking OS Behavior: All monitored frames are tagged if the configured monitoring direction is transmit (TX), regardless of whether the monitored port (MD) is a Layer 2 or Layer 3 port. • If the MD port is a Layer 2 port, the frames are tagged with the VLAN ID of the VLAN to which the MD belongs. • If the MD port is a Layer 3 port, the frames are tagged with VLAN ID 4095. • If the MD port is in a Layer 3 VLAN, the frames are tagged with the respective Layer 3 VLAN ID.
Security 15 Security features are supported on the I/O Aggregator. This chapter describes several ways to provide access security to the Dell Networking system. For details about all the commands described in this chapter, refer to the Security chapter in the Dell PowerEdge FN I/O Aggregator Command Line Reference Guide. Supported Modes Standalone, PMUX, VLT, Stacking Understanding Banner Settings This functionality is supported on the Aggregator.
show restrict-access command to view whether Telnet or SSH access to the device is disabled. AAA Accounting Authentication, authorization, and accounting (AAA) accounting is part of the AAA security model. For details about commands related to AAA security, refer to the Security chapter in the Dell Networking OS Command Reference Guide. AAA accounting enables tracking of the services that users access and the amount of network resources those services consume.
– tacacs+: designate the security service. Currently, Dell Networking OS supports only TACACS+. Suppressing AAA Accounting for Null Username Sessions When you activate AAA accounting, the Dell Networking OS software issues accounting records for all users on the system, including users whose username string is NULL because of protocol translation. An example of this is a user who comes in on a line where the AAA authentication login method-list none command is applied.
Monitoring AAA Accounting Dell Networking OS does not support periodic interim accounting because the periodic command can cause heavy congestion when many users are logged in to the network. No specific show command exists for TACACS+ accounting. To obtain accounting records displaying information about users currently logged in, use the following command. • Step through all active sessions and print all the accounting records for the actively accounted functions.
For a complete list of all commands related to login authentication, refer to the Security chapter in the Dell Networking OS Command Reference Guide. Configure Login Authentication for Terminal Lines You can assign up to five authentication methods to a method list. Dell Networking OS evaluates the methods in the order in which you enter them in each list.
Enabling AAA Authentication To enable AAA authentication, use the following command. • Enable AAA authentication. CONFIGURATION mode aaa authentication enable {method-list-name | default} method1 [... method4] – default: uses the listed authentication methods that follow this argument as the default list of methods when a user logs in. – method-list-name: character string used to name the list of enable authentication methods activated when a user logs in. – method1 [...
Example of Enabling Local Authentication for the Console and Remote Authentication for VTY Lines Dell(config)# aaa authentication enable mymethodlist radius tacacs Dell(config)# line vty 0 9 Dell(config-line-vty)# enable authentication mymethodlist Server-Side Configuration • TACACS+ — When using TACACS+, Dell Networking OS sends an initial packet with service type SVC_ENABLE, and then sends a second packet with just the password. The TACACS server must have an entry for username $enable$.
log in to the router, enter the enable command for privilege level 15 (this privilege level is the default level for the command) and then enter CONFIGURATION mode. You can configure passwords to control access to the box and assign different privilege levels to users. The Dell Networking OS supports the use of passwords when you log in to the system and when you enter the enable command. If you move between privilege levels, you are prompted for a password if you move to a higher privilege level.
• Configure a password for a privilege level. CONFIGURATION mode enable password [level level] [encryption-type] password Configure the optional and required parameters: – level level: Specify a level from 0 to 15. Level 15 includes all levels. – encryption-type: Enter 0 for plain text or 7 for encrypted text. – password: Enter a string. To change only the password for the enable command, configure only the password parameter.
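An illustrative example of the command above, assuming the plain-text (type 0) password "myenablepwd" (a placeholder) for privilege level 15:

Dell(conf)#enable password level 15 0 myenablepwd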
• password: enter a text string up to 32 characters long. To change only the password for the enable command, configure only the password parameter. 3. Configure level and commands for a mode or reset a command’s level. CONFIGURATION mode privilege mode {level level command | reset command} Configure the following required and optional parameters:
• mode: enter a keyword for the mode (exec, configure, interface, line, route-map, or router)
• level level: the range is from 0 to 15.
Example of Privilege Level Login and Available Commands apollo% telnet 172.31.1.53 Trying 172.31.1.53... Connected to 172.31.1.53. Escape character is '^]'.
• Move to a lower privilege level. EXEC Privilege mode disable level-number – level-number: The level-number you wish to set. If you enter disable without a level-number, your security level is 1. RADIUS Remote authentication dial-in user service (RADIUS) is a distributed client/server protocol. This protocol transmits authentication, authorization, and configuration information between a central RADIUS server and a RADIUS client (the Dell Networking system).
• Specifying a RADIUS Server Host (mandatory) • Setting Global Communication Parameters for all RADIUS Server Hosts (optional) • Monitoring RADIUS (optional) For a complete listing of all Dell Networking OS commands related to RADIUS, refer to the Security chapter in the Dell Networking OS Command Reference Guide. NOTE: RADIUS authentication and authorization are done in a single step. Hence, authorization cannot be used independent of authentication.
authorization exec methodlist Specifying a RADIUS Server Host When configuring a RADIUS server host, you can set different communication parameters, such as the UDP port, the key password, the number of retries, and the timeout. To specify a RADIUS server host and configure its communication parameters, use the following command. • Enter the host name or IP address of the RADIUS server host.
– seconds: the range is from 0 to 2147483647. The default is 0 seconds.
• Configure a key for all RADIUS communications between the system and RADIUS server hosts. CONFIGURATION mode radius-server key [encryption-type] key
– encryption-type: enter 7 to encrypt the password. Enter 0 to keep the password as plain text.
– key: enter a string. The key can be up to 42 characters long. You cannot use spaces in the key.
• Configure the number of times Dell Networking OS retransmits RADIUS requests.
Choosing TACACS+ as the Authentication Method One of the login authentication methods available is TACACS+ and the user’s name and password are sent for authentication to the TACACS hosts specified. To use TACACS+ to authenticate users, specify at least one TACACS+ server for the system to communicate with and configure TACACS+ as one of your authentication methods. To select TACACS+ as the login authentication method, use the following commands. 1. Configure a TACACS+ server host.
aaa authorization commands 15 default tacacs+ none aaa accounting exec default start-stop tacacs+ aaa accounting commands 1 default start-stop tacacs+ aaa accounting commands 15 default start-stop tacacs+ Dell(conf)# Dell(conf)#do show run tacacs+ ! tacacs-server key 7 d05206c308f4d35b tacacs-server host 10.10.10.10 timeout 1 Dell(conf)#tacacs-server key angeline Dell(conf)#%RPM0-P:CP %SEC-5-LOGIN_SUCCESS: Login successful for user admin on vty0 (10.11.9.
– port port-number: the range is from 0 to 65535. Enter a TCP port number. The default is 49. – timeout seconds: the range is from 0 to 1000. Default is 10 seconds. – key key: enter a string for the key. The key can be up to 42 characters long. This key must match a key configured on the TACACS+ server host. This parameter must be the last parameter you configure. If you do not configure these optional parameters, the default global values are applied.
show ip ssh Specifying an SSH Version The following example uses the ip ssh server version 2 command to enable SSH version 2 and the show ip ssh command to confirm the setting. Dell(conf)#ip ssh server version 2 Dell(conf)#do show ip ssh SSH server : enabled. SSH server version : v2. SSH server vrf : default. SSH server ciphers : 3des-cbc,aes128-cbc,aes192-cbc,aes256-cbc,aes128-ctr,aes192-ctr,aes256-ctr. SSH server macs : hmac-md5,hmac-md5-96,hmac-sha1,hmac-sha1-96,hmac-sha2-256,hmac-sha2-256-96.
Example of Using Telnet for Remote Login Dell(conf)#ip telnet server enable Dell(conf)#no ip telnet server enable VTY Line and Access-Class Configuration Various methods are available to restrict VTY access in Dell Networking OS. These depend on which authentication scheme you use — line, local, or remote. Table 17.
The following example shows how to allow or deny a Telnet connection to a user. Users see a login prompt even if they cannot log in. No access class is configured for the VTY line. It defaults from the local database.
16 Simple Network Management Protocol (SNMP) Network management stations use SNMP to retrieve or alter management data from network elements. A datum of management information is called a managed object; the value of a managed object can be static or variable. Network elements store managed objects in a database called a management information base (MIB).
Setting up SNMP Dell Networking OS supports SNMP version 1 and version 2 which are community-based security models. The primary difference between the two versions is that version 2 supports two additional protocol operations (informs operation and snmpgetbulk query) and one additional object (counter64 object). Creating a Community For SNMPv1 and SNMPv2, create a community to enable the community-based security in the Dell Networking OS.
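For example, the following sketch creates a read-only community named "public" (the community name is a placeholder; use a non-default name in production):

Dell(conf)#snmp-server community public ro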
• Read the value of the managed object directly below the specified object.
snmpgetnext -v version -c community agent-ip {identifier.instance | descriptor.instance}
• Read the value of many objects at once.
snmpwalk -v version -c community agent-ip {identifier.instance | descriptor.instance}
In the following example, the value “4” displays in the OID before the IP address for IPv4. For an IPv6 IP address, a value of “16” displays.
To display the ports in a VLAN, send an snmpget request for the object dot1qStaticEgressPorts using the interface index as the instance number, as shown in the following example. Example of Viewing the Ports in a VLAN in SNMP snmpget -v2c -c mycommunity 10.11.131.185 .1.3.6.1.2.1.17.7.1.4.3.1.2.1107787786 SNMPv2-SMI::mib-2.17.7.1.4.3.1.2.
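The returned Hex-STRING is a port bit mask: each set bit marks a bridge port that is a member of the VLAN, with the most significant bit of the first octet representing port 1, per the Q-BRIDGE PortList convention. As a sketch, a small helper (the function name is ours, not part of any Dell tooling) can decode such a string:

```python
def ports_from_egress_hex(hex_string):
    """Decode a dot1qStaticEgressPorts Hex-STRING into 1-based bridge
    port numbers. The MSB of the first octet represents port 1."""
    octets = [int(byte, 16) for byte in hex_string.split()]
    ports = []
    for index, octet in enumerate(octets):
        for bit in range(8):               # bit 0 = MSB of this octet
            if octet & (0x80 >> bit):
                ports.append(index * 8 + bit + 1)
    return ports

# A mask whose first two bits are set maps to ports 1 and 2.
print(ports_from_egress_hex("C0 00"))      # → [1, 2]
```

The same decoding applies to any PortList-typed object, such as the switchport VLAN membership output shown later in this chapter.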
Fetching Dynamic MAC Entries using SNMP The Aggregator supports the RFC 1493 dot1d table for the default VLAN and the dot1q table for all other VLANs. NOTE: The table contains none of the other information provided by the show vlan command, such as port speed or whether the ports are tagged or untagged. NOTE: The 802.1q Q-BRIDGE MIB defines VLANs regarding 802.1d, as 802.1d itself does not define them.
>snmpwalk -v 2c -c techpubs 10.11.131.162 .1.3.6.1.2.1.17.4.3.1 SNMPv2-SMI::mib-2.17.4.3.1.1.0.1.232.6.149.172 = Hex-STRING: 00 01 E8 06 95 AC Example of Fetching Dynamic MAC Addresses on a Non-default VLANs In the following example, TenGigabitEthernet 0/7 is moved to VLAN 1000, a non-default VLAN. To fetch the MAC addresses learned on non-default VLANs, use the object dot1qTpFdbTable. The instance number is the VLAN number concatenated with the decimal conversion of the MAC address.
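The instance arithmetic described above can be sketched in a few lines of Python (the helper name is ours; it simply produces the OID instance suffix):

```python
def fdb_instance(vlan_id, mac):
    """Build the dot1qTpFdbTable instance suffix: the VLAN ID followed
    by the six MAC octets converted from hex to decimal."""
    octets = [str(int(part, 16)) for part in mac.split(":")]
    return ".".join([str(vlan_id)] + octets)

# VLAN 1000, MAC 00:01:e8:06:95:ac → "1000.0.1.232.6.149.172"
print(fdb_instance(1000, "00:01:e8:06:95:ac"))
```

Appending this suffix to the dot1qTpFdbTable column OID gives the exact object to pass to snmpget.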
are not given. The interface is physical, so this must be represented by a 0 bit, and the unused bit is always 0. These two bits are not given because they are the most significant bits, and leading zeros are often omitted. For interface indexing, slot and port numbering begins with binary one. Because the Dell Networking system begins slot and port numbering at 0, binary 1 represents slot and port 0.
dot3aCurAggVlanId SNMPv2-SMI::enterprises.6027.3.2.1.1.4.1.1.1.0.0.0.0.0.1.1 dot3aCurAggMacAddr SNMPv2-SMI::enterprises.6027.3.2.1.1.4.1.2.1.0.0.0.0.0.1.1 00 00 00 01 dot3aCurAggIndex SNMPv2-SMI::enterprises.6027.3.2.1.1.4.1.3.1.0.0.0.0.0.1.1 dot3aCurAggStatus SNMPv2-SMI::enterprises.6027.3.2.1.1.4.1.4.1.0.0.0.0.0.1.1 Status active, 2 – status inactive = INTEGER: 1 = Hex-STRING: 00 00 = INTEGER: 1 = INTEGER: 1 << For L3 LAG, you do not have this support. SNMPv2-MIB::sysUpTime.
Unit Slot Expected Inserted Next Boot Status/Power(On/Off) -----------------------------------------------------------------------1 0 SFP+ SFP+ AUTO Good/On 1 1 QSFP+ QSFP+ AUTO Good/On * - Mismatch Dell# The status of the MIBS is as follows: $ snmpwalk -c public -v 2c 10.16.150.162 .1.3.6.1.2.1.47.1.1.1.1.2 SNMPv2-SMI::mib-2.47.1.1.1.1.2.1 = "" SNMPv2-SMI::mib-2.47.1.1.1.1.2.2 = STRING: "PowerEdge-FN-410S-IOA" SNMPv2-SMI::mib-2.47.1.1.1.1.2.3 = STRING: "Chassis 0 container" SNMPv2-SMI::mib-2.47.1.1.1.1.2.
Fetching the Switchport Configuration and the Logical Interface Configuration Important Points to Remember • The SNMP should be configured in the chassis and the chassis management interface should be up with the IP address. • If a port is configured in a VLAN, the respective bit for that port will be set to 1 in the specific VLAN. • In the aggregator, all the server ports and uplink LAG 128 will be in switchport. Hence, the respective bits are set to 1. The following output is for the default VLAN.
MIB Support to Display the Available Memory Size on Flash Dell Networking provides more MIB objects to display the available memory size on flash memory. The following table lists the MIB object that contains the available memory size on flash memory. Table 19. MIB Objects for Displaying the Available Memory Size on Flash via SNMP MIB Object OID Description chStackUnitFlashUsageUtil 1.3.6.1.4.1.6027.3.19.1.2.8.1.6 Contains flash memory usage in percentage.
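An illustrative query of this object might look like the following; the community string, management IP address, and trailing stack-unit index are all placeholders to adapt to your deployment:

snmpget -v2c -c public 10.16.150.162 1.3.6.1.4.1.6027.3.19.1.2.8.1.6.1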
MIB Object OID Description chSysCoresStackUnitNumber 1.3.6.1.4.1.6027.3.19.1.2.9.1.4 Contains information that includes which stack unit or processor the core file originated from. chSysCoresProcess 1.3.6.1.4.1.6027.3.19.1.2.9.1.5 Contains information that includes the process names that generated each core file. Viewing the Software Core Files Generated by the System • To view the software core files generated by the system, use the following command. snmpwalk -v2c -c public 192.
Stacking 17 An Aggregator auto-configures to operate in standalone mode. To use an Aggregator in a stack, you must manually configure it using the CLI to operate in stacking mode. Stacking is supported on the FN410S and FN410T Aggregators with ports 9 and 10 as the stack ports. The Aggregator supports both ring and daisy-chain topologies; only Aggregators of the same type can be stacked together. FN410S and FN410T Aggregators support two-unit in-chassis stacking and up to six-unit stacking across chassis.
Master Selection Criteria A Master is elected or re-elected based on the following considerations, in order: 1. The switch with the highest priority at boot time. 2. The switch with the highest MAC address at boot time. 3. A unit is selected as Standby by the administrator, and a fail over action is manually initiated or occurs due to a Master unit failure. No record of previous stack mastership is kept when a stack loses power.
3. Continue to run the stack-unit 0 stack-group <0-3> command to add additional stack ports to the switch, using the stack-group mapping. Cabling the Switch Stack Dell PowerEdge FN I/O Aggregators are connected to operate as a single stack in a ring topology using the SFP+ or Base-T ports on the front end ports 9 and 10. To create a stack in either a ring or daisy-chain topology, you can use two units on the same chassis or up to six units across multiple chassis.
4. Reboot the Aggregator by entering the reload command in EXEC Privilege mode: Dell# reload Repeat the above steps on each Aggregator in the stack by entering the stack-unit 0 iom-mode stack command and saving the configuration. If the stacked switches all reboot at approximately the same time, the Aggregator with the highest MAC address is automatically elected as the master switch. The Aggregator with the next highest MAC address is elected as the standby master.
EXEC Privilege mode reload If an Aggregator is already configured to operate in stacking mode, simply attach SFP+ or direct attach cables to connect 10G ports on the base module of each stacked Aggregator. The new unit synchronizes its running and startup configurations with the stack.
The switch functions in standalone mode but retains the running and startup configuration that was last synchronized by the master switch while it operated as a stack unit. Merging Two Operational Stacks The recommended procedure for merging two operational stacks is as follows: 1. Always power off all units in one stack before connecting to another stack. 2. Add the units as a group by unplugging one stacking cable in the operational stack and physically connecting all unpowered units. 3.
Troubleshooting a Switch Stack To perform troubleshooting operations on a switch stack, use the following commands on the master switch. 1. Displays the status of stacked ports on stack units. show system stack-ports 2. Displays the master standby unit status, failover configuration, and result of the last master-standby synchronization; allows you to verify the readiness for a stack failover. show redundancy 3. Displays input and output flow statistics on a stacked port.
Master Switch Fails • Problem: The master switch fails due to a hardware fault, software crash, or power loss. • Resolution: A failover procedure begins: 1. Keep-alive messages from the Aggregator master switch time out after 60 seconds and the switch is removed from the stack. 2. The standby switch takes the master role. Data traffic on the new master switch is uninterrupted. Protocol traffic is managed by the control plane. 3. A member switch is elected as the new standby.
add the switch to the stack as described in Adding a Stack Unit. To verify that the problem has been resolved and the stacked switch is back online, use the show system brief command.
boot system stack-unit all primary system partition 4. Save the configuration. EXEC Privilege write memory 5. Reload the stack unit to activate the new Dell Networking OS version. CONFIGURATION mode reload Example of Upgrading all Stacked Switches The following example shows how to upgrade all switches in a stack, including the master switch. Dell# upgrade system ftp: A: Address or name of remote host []: 10.11.200.241 Source file name []: //FTOS-XL-8.3.17.0.
upgrade system stack-unit unit-number partition 2. Reboot the stack unit from the master switch to load the Dell Networking OS image from the same partition. CONFIGURATION mode boot system stack-unit unit-number primary system partition 3. Save the configuration. EXEC Privilege mode write memory 4. Reset the stack unit to activate the new Dell Networking OS version.
Broadcast Storm Control 18 On the Aggregator, the broadcast storm control feature is enabled by default on all ports, and disabled on a port when an iSCSI storage device is detected. Broadcast storm control is re-enabled as soon as the connection with an iSCSI device ends. Broadcast traffic on Layer 2 interfaces is limited or suppressed during a broadcast storm. You can view the status of a broadcast-storm control operation by using the show io-aggregator broadcast storm-control status command.
System Time and Date 19 The Aggregator auto-configures the hardware and software clocks with the current time and date. If necessary, you can manually set and maintain the system time and date using the CLI commands described in this chapter.
differentiator between the UTC and your local timezone. For example, San Jose, CA is the Pacific Timezone with a UTC offset of -8. To set the clock timezone, use the following command. • Set the clock to the appropriate timezone. CONFIGURATION mode clock timezone timezone-name offset – timezone-name: Enter the name of the timezone. Do not use spaces. – offset: Enter one of the following: * a number from 1 to 23 as the number of hours in addition to UTC for the timezone.
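For example, to set the Pacific timezone described above, with its UTC offset of -8:

Dell(conf)#clock timezone pacific -8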
– offset: (OPTIONAL) enter the number of minutes to add during the summer-time period. The range is from 1 to 1440. The default is 60 minutes. Example of the clock summer-time Command Dell(conf)#clock summer-time pacific date Mar 14 2012 00:00 Nov 7 2012 00:00 Dell(conf)# Setting Recurring Daylight Saving Time Set a date (and time zone) on which to convert the switch to daylight saving time on a specific day every year.
– offset: (OPTIONAL) Enter the number of minutes to add during the summer-time period. The range is from 1 to 1440. The default is 60 minutes. Example of the clock summer-time recurring Command Dell(conf)#clock summer-time pacific recurring Mar 14 2012 00:00 Nov 7 2012 00:00 Dell(conf)# NOTE: If you press Enter after typing the recurring command parameter, and you have already set a one-time daylight saving time/date, the system uses that time and date as the recurring setting.
20 Uplink Failure Detection (UFD) Supported Modes Standalone, PMUX, VLT, Stacking Feature Description UFD provides detection of the loss of upstream connectivity and, if used with network interface controller (NIC) teaming, automatic recovery from a failed link. A switch provides upstream connectivity for devices, such as servers. If a switch loses its upstream connectivity, downstream devices also lose their connectivity.
Figure 28. Uplink Failure Detection How Uplink Failure Detection Works UFD creates an association between upstream and downstream interfaces. The association of uplink and downlink interfaces is called an uplink-state group. An interface in an uplink-state group can be a physical interface or a port-channel (LAG) aggregation of physical interfaces. An enabled uplink-state group tracks the state of all assigned upstream interfaces.
Figure 29. Uplink Failure Detection Example If only one of the upstream interfaces in an uplink-state group goes down, a specified number of downstream ports associated with the upstream interface are put into a Link-Down state. You can configure this number; it is calculated from the ratio of the upstream port bandwidth to the downstream port bandwidth in the same uplink-state group.
Using UFD, you can configure the automatic recovery of downstream ports in an uplink-state group when the link status of an upstream port changes. The tracking of upstream link status does not have a major impact on central processing unit (CPU) usage. UFD and NIC Teaming To implement a rapid failover solution, you can use uplink failure detection on a switch with network adapter teaming on a server. For more information, refer to Network Interface Controller (NIC) Teaming.
– For an example of debug log message, refer to Clearing a UFD-Disabled Interface. Uplink Failure Detection (SMUX mode) In Standalone or VLT modes, by default, all the server-facing ports are tracked by the operational status of the uplink LAG. If the uplink LAG goes down, the aggregator loses its connectivity and is no longer operational. All the server-facing ports are brought down after the specified defer-timer interval, which is 10 seconds by default.
UPLINK-STATE-GROUP mode {upstream | downstream} interface For interface, enter one of the following interface types: • TenGigabit Ethernet: enter tengigabitethernet {slot/port |slot/port-range} • Port channel: enter port-channel {1-128 | port-channel-range} Where port-range and port-channel-range specify a range of ports separated by a dash (-) and/or individual ports/port channels in any order; for example: upstream tengigabitethernet 0/1-2,5,9,11-12 downstream port-channel 1-3,5 • A comma is required
The default is auto-recovery of UFD-disabled downstream ports is enabled. To disable auto-recovery, use the no downstream auto-recover command. 6. Specify the time (in seconds) to wait for the upstream port channel (LAG 128) to come back up before server ports are brought down. UPLINK-STATE-GROUP mode defer-timer seconds NOTE: This command is available in Standalone and VLT modes only. The range is from 1 to 120. 7. (Optional) Enter a text description of the uplink-state group.
Example of Syslog Messages Before and After Entering the clear ufd-disable uplink-stategroup Command The following example message shows the Syslog messages that display when you clear the UFDDisabled state from all disabled downstream interfaces in an uplink-state group by using the clear ufd-disable uplink-state-group group-id command. All downstream interfaces return to an operationally up state.
Displaying Uplink Failure Detection To display information on the UFD feature, use any of the following commands. • Display status information on a specified uplink-state group or all groups. EXEC mode show uplink-state-group [group-id] [detail] • – group-id: The values are 1 to 16. – detail: displays additional status information on the upstream and downstream interfaces in each group. Display the current status of a port or port-channel interface assigned to an uplink-state group.
Dell# Example of Viewing Interface Status with UFD Information Dell#show interfaces tengigabitethernet 0/7 TenGigabitEthernet 0/7 is up, line protocol is down (error-disabled[UFD]) Hardware is Force10Eth, address is 00:01:e8:32:7a:47 Current address is 00:01:e8:32:7a:47 Interface index is 280544512 Internet address is not set MTU 1554 bytes, IP MTU 1500 bytes LineSpeed 1000 Mbit, Mode auto Flowcontrol rx off tx off ARP type: ARPA, ARP Timeout 04:00:00 Last clearing of "show interface" counters 00:25:46 Queu
Sample Configuration: Uplink Failure Detection The following example shows a sample configuration of UFD on a switch/router in which you configure as follows. • • • • • • Configure uplink-state group 3. Add downstream links Gigabitethernet 0/1, 0/2, 0/5, 0/9, 0/11, and 0/12. Configure two downstream links to be disabled if an upstream link fails. Add upstream links Gigabitethernet 0/3 and 0/4. Add a text description for the group. Verify the configuration with various show commands.
Uplink State Group : 3 Status: Enabled, Up Upstream Interfaces : Te 0/3(Dwn) Te 0/4(Up) Downstream Interfaces : Te 0/1(Dis) Te 0/2(Dis) Te 0/5(Up) Te 0/9(Up) Te 0/11(Up) Te 0/12(Up) Uplink Failure Detection (UFD) 229
PMUX Mode of the IO Aggregator 21 This chapter provides an overview of the PMUX mode. I/O Aggregator (IOA) Programmable MUX (PMUX) Mode IOA PMUX is a mode that provides flexibility of operation with added configurability. This involves creating multiple LAGs, configuring VLANs on uplinks and the server side, configuring data center bridging (DCB) parameters, and so forth. By default, IOA starts up in IOA Standalone mode.
-------------------------------------------------------
0        programmable-mux         programmable-mux
Dell#
The IOA is now ready for PMUX operations.
Configuring the Commands without a Separate User Account
Starting with Dell Networking OS version 9.3(0.0), you can configure the PMUX mode CLI commands without having to configure a new, separate user profile. The user profile you defined to access and log in to the switch is sufficient to configure the PMUX mode commands.
VLT provides Layer 2 multipathing, creating redundancy through increased bandwidth, enabling multiple parallel paths between nodes, and load-balancing traffic where alternative paths exist. Virtual link trunking offers the following benefits:
• Allows a single device to use a LAG across two upstream devices.
• Eliminates STP-blocked ports.
• Provides a loop-free topology.
• Uses all available uplink bandwidth.
• Provides fast convergence if either the link or a device fails.
NOTE: Ensure the connectivity to the ToR from each Aggregator.
To enable VLT and verify the configuration, follow these steps.
1. Enable VLT on nodes 1 and 2.
   stack-unit unit iom-mode vlt
   CONFIGURATION mode
   Dell(conf)#stack-unit 0 iom-mode vlt
2. Verify the VLT configuration.
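Put together on both peers, the sequence might look like the following. The hostnames are illustrative; the iom-mode command is the one shown above.

```
Dell_VLTpeer1(conf)# stack-unit 0 iom-mode vlt
Dell_VLTpeer2(conf)# stack-unit 0 iom-mode vlt
Dell_VLTpeer1# show vlt brief
```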
Important Points to Remember • VLT port channel interfaces must be switch ports. • Dell Networking strongly recommends that the VLTi (VLT interconnect) be a static LAG and that you disable LACP on the VLTi. • If the lacp-ungroup feature is not supported on the ToR, reboot the VLT peers one at a time. After rebooting, verify that VLTi (ICL) is active before attempting DHCP connectivity. Configuration Notes When you configure VLT, the following conditions apply.
– MAC addresses for VLANs configured across VLT peer chassis are synchronized over the VLT interconnect on an egress port such as a VLT LAG. MAC addresses are the same on both VLT peer nodes. – ARP entries configured across the VLTi are the same on both VLT peer nodes. – If you shut down the port channel used in the VLT interconnect on a peer switch in a VLT domain in which you did not configure a backup link, the switch’s role displays in the show vlt brief command output as Primary instead of Standalone.
– VLT allows multiple active parallel paths from access switches to VLT chassis. – VLT supports port-channel links with LACP between access switches and VLT peer switches. Dell Networking recommends using static port channels on VLTi. – If VLTi connectivity with a peer is lost but the VLT backup connectivity indicates that the peer is still alive, the VLT ports on the Secondary peer are orphaned and are shut down.
If the VLTi link fails, the status of the remote VLT Primary Peer is checked using the backup link. If the remote VLT Primary Peer is available, the Secondary Peer disables all VLT ports to prevent loops. If all ports in the VLTi link fail or if the communication between VLTi links fails, VLT checks the backup link to determine the cause of the failure.
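The decision the Secondary peer makes when the VLTi fails can be modeled in a small sketch. This is an illustrative summary of the behavior described above, not Dell OS internals; all names are our own.

```python
def secondary_vlt_action(vlti_up: bool, primary_alive_via_backup: bool) -> str:
    """Model of the Secondary peer's handling of its VLT ports."""
    if vlti_up:
        return "forward"            # normal operation over the VLTi
    if primary_alive_via_backup:
        return "disable-vlt-ports"  # Primary still alive: shut VLT ports to prevent loops
    return "forward"                # Primary presumed down: Secondary keeps forwarding
```

The backup (heartbeat) link is what distinguishes a failed VLTi from a failed peer, which is why configuring it is strongly recommended.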
Non-VLT ARP Sync
In Dell Networking OS version 9.2(0.0), ARP entries (including ND entries) learned on other ports are synced with the VLT peer to support station-move scenarios. Prior to Dell Networking OS version 9.2(0.0), only ARP entries learned on VLT ports were synced between peers. Additionally, ARP entries resulting from station movements from VLT to non-VLT ports, or to different non-VLT ports, are learned on the non-VLT port and synced with the peer node.
* Port channel: enter port-channel {1-128}.
Example of the show vlt backup-link Command
Dell_VLTpeer1# show vlt backup-link
VLT Backup Link
----------------
Destination:                  10.11.200.
HeartBeat status:
HeartBeat Timer Interval:
HeartBeat Timeout:
UDP Port:
HeartBeat Messages Sent:
HeartBeat Messages Received:
Remote system version:  5(1)
Delay-Restore timer:    90 seconds

Example of the show vlt detail Command
Dell_VLTpeer1# show vlt detail
Local LAG Id  Peer LAG Id  Local Status  Peer Status  Active VLANs
------------  -----------  ------------  -----------  ------------
100           100          UP            UP           10, 20, 30
127           2            UP            UP           20, 30

Dell_VLTpeer2# show vlt detail
Local LAG Id  Peer LAG Id  Local Status  Peer Status  Active VLANs
------------  -----------  ------------  -----------  ------------
2             127          UP            UP           20, 30
100           100          UP            UP           10, 20, 30

Example of the
Example of the show vlt statistics Command
Dell_VLTpeer1# show vlt statistics
VLT Statistics
---------------
HeartBeat Messages Sent:     987
HeartBeat Messages Received: 986
ICL Hello's Sent:            148
ICL Hello's Received:        98

Dell_VLTpeer2# show vlt statistics
VLT Statistics
---------------
HeartBeat Messages Sent:     994
HeartBeat Messages Received: 978
ICL Hello's Sent:            89
ICL Hello's Received:        89

VLT Sample Configurations
To configure VLT, configure a backup link and interconnect trunk, create a VLT domain, confi
Verify that the port channels used in the VLT domain are assigned to the same VLAN.
Dell_VLTpeer1# show vlan id 10
Codes: * - Default VLAN, G - GVRP VLANs, P - Primary, C - Community, I - Isolated
Q: U - Untagged, T - Tagged
   x - Dot1x untagged, X - Dot1x tagged
   G - GVRP tagged, M - Vlan-stack, H - Hyperpull tagged

NUM  Status  Description  Q  Ports
10   Active               U  Po110(Te 0/5)
                          T  Po100(Te 0/6,7)

Configuring Virtual Link Trunking (VLT Peer 2)
Configure the backup link.
Verifying a Port-Channel Connection to a VLT Domain (From an Attached Access Switch) On an access device, verify the port-channel connection to a VLT domain. Dell_TORswitch(conf)# show running-config interface port-channel 11 ! interface Port-channel 11 switchport channel-member TenGigE 0/1,2 no shutdown Troubleshooting VLT To help troubleshoot different VLT issues that may occur, use the following information.
Description — Behavior at Peer Up — Behavior During Run Time — Action to Take
(previous row, continued) ... that the MAC address is the same on both units.
• Behavior at Peer Up: The VLT peer does not boot up. The VLTi is forced to a down state. A syslog error message is generated.
• Behavior During Run Time: The VLT peer does not boot up. The VLTi is forced to a down state. A syslog error message is generated.
Version ID mismatch
• Behavior at Peer Up: A syslog error message and an SNMP trap are generated.
• Behavior During Run Time: A syslog error message and an SNMP trap are generated.
22 NPIV Proxy Gateway
The N-port identifier virtualization (NPIV) Proxy Gateway (NPG) feature provides FCoE-FC bridging capability on the FN 2210S Aggregator, allowing server CNAs to communicate with SAN fabrics over the FN 2210S Aggregator.
Converged Network Adapter (CNA) ports on servers connect to the FX2 chassis Ten-Gigabit Ethernet ports and log in to an upstream FC core switch through the N port. Server fabric login (FLOGI) requests are converted into fabric discovery (FDISC) requests before being forwarded to the FC core switch. Servers use CNA ports to connect over FCoE to an Ethernet port in ENode mode on the NPIV proxy gateway.
NPIV Proxy Gateway: Terms and Definitions The following table describes the terms used in an NPG configuration on the Aggregator. Table 22. Aggregator with the NPIV Proxy Gateway: Terms and Definitions Term Description FC port Fibre Channel port on the Aggregator that operates in autosensing, 2, 4, or 8-Gigabit mode. On an NPIV proxy gateway, an FC port can be used as a downlink for a server connection and an uplink for a fabric connection.
Term Description an upstream FCoE switch operating as an FCF. FIP keepalive messages maintain the connection between an FCoE initiator and an FCF. NPIV N-port identifier virtualization: The capability to map multiple FCoE links from downstream ports to a single upstream FC link. principal switch The switch in a fabric with the lowest domain number. The principal switch accesses the master name database and the zone/zone set database.
Configuring an NPIV Proxy Gateway Prerequisite: Before you configure an NPIV proxy gateway (NPG) on an Aggregator, ensure that the following features are enabled. • DCB is enabled by default on the Aggregator. • Autonegotiated DCBx is enabled for converged traffic by default with the Ethernet ports on all Aggregators. • FCoE transit with FIP snooping is automatically enabled when you configure Fibre Channel on the Aggregator.
Fabric Name      SAN_FABRIC
Fabric Id        1002
Vlan Id          1002
Vlan priority    3
FC-MAP           0efc00
FKA-ADV-Period   8
Fcf Priority     128
Config-State     ACTIVE
Oper-State       UP
Members
  Fc 0/9
  Te 0/4

Dell(conf)#do show qos dcb-map DCB_MAP_PFC_OFF
-----------------------
State   : In-Progress
PfcMode : OFF
--------------------
Dell(conf)#
Enabling Fibre Channel Capability on the Switch
Enable the Fibre Channel capability on an Aggregator that you want to configure as an NPG for the Fibre Channel protocol.
Command: priority-pgid dot1p0_group_num dot1p1_group_num dot1p2_group_num dot1p3_group_num dot1p4_group_num dot1p5_group_num dot1p6_group_num dot1p7_group_num
Command Mode: DCB MAP
Restriction: You can enable PFC on a maximum of two priority queues.
Step 2: Apply the DCB map on an Ethernet port or port channel.
Command: dcb-map name
Command Mode: INTERFACE
You cannot apply a DCB map on a port channel. However, you can apply a DCB map on the ports that are members of the port channel.
The port is configured with the PFC and ETS settings in the DCB map, for example:
Dell# interface tengigabitEthernet 0/0
Dell(config-if-te-0/0)# dcb-map SAN_DCB1
Repeat this step to apply a DCB map to more than one port or port channel.
Step 1: Create an FCoE map that contains parameters used in the communication between servers and a SAN fabric.
Command: fcoe-map map-name
Step 2: Configure the association between the dedicated VLAN and the fabric where the desired storage arrays are installed. The fabric and VLAN ID numbers must be the same. Fabric and VLAN ID range: 2–4094.
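Following these steps, a minimal fcoe-map configuration might look like the following sketch. The fabric/VLAN/FC-MAP values are taken from the SAN_FABRIC example elsewhere in this chapter; treat the sequence and prompts as illustrative.

```
Dell(conf)# fcoe-map SAN_FABRIC
Dell(conf-fmap-SAN_FABRIC)# fabric-id 1002 vlan 1002
Dell(conf-fmap-SAN_FABRIC)# fc-map 0efc00
```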
Step 1: Configure a server-facing Ethernet port or port channel with an FCoE map.
Command: interface {tengigabitEthernet slot/port | port-channel num}
Command Mode: CONFIGURATION
Step 2: Apply the FCoE/FC configuration in an FCoE map on the Ethernet port.
Command: fcoe-map map-name
Step 3: Enable the port for FC transmission.
Dell(config-fcoe-name)# keepalive Dell(config-fcoe-name)# fcf-priority 128 Dell(config-fcoe-name)# fka-adv-period 8 5. Enable an upstream FC port: Dell(config)# interface fibrechannel 0/0 Dell(config-if-fc-0)# no shutdown 6. Enable a downstream Ethernet port: Dell(config)#interface tengigabitEthernet 0/0 Dell(conf-if-te-0)# no shutdown Displaying NPIV Proxy Gateway Information To display information on the NPG operation, use the show commands in the following table: Table 23.
Port      Description  Status  Speed       Duplex  Vlan
Te 0/1                 Up      10000 Mbit  Full    1-4094
Te 0/2                 Down    Auto        Auto    1-1001,1003-4094
Te 0/3                 Up      10000 Mbit  Full    1-1001,1003-4094
Te 0/4                 Down    Auto        Auto    1-1001,1003-4094
Te 0/5                 Up      10000 Mbit  Full    1-4094
Te 0/6                 Up      10000 Mbit  Full    1-4094
Te 0/7                 Up      10000 Mbit  Full    1-4094
Te 0/8                 Down    Auto        Auto    1-1001,1003-4094
Fc 0/9    toB300       Up      8000 Mbit   Full    --
Fc 0/10                Up      8000 Mbit   Full    --
Te 0/11                Down    Auto        Auto    --
Te 0/12                Down    Auto        Auto    --
Table 24.
Fcf Priority Config-State Oper-State Members Fc 0/9 Te 0/11 Te 0/12 128 ACTIVE UP Table 25. show fcoe-map Field Descriptions Field Description Fabric-Name Name of a SAN fabric. Fabric ID The ID number of the SAN fabric to which FC traffic is forwarded. VLAN ID The dedicated VLAN used to transport FCoE storage traffic between servers and a fabric over the NPG. The configured VLAN ID must be the same as the fabric ID. VLAN priority FCoE traffic uses VLAN priority 3.
Table 26. show qos dcb-map Field Descriptions Field Description State Complete: All mandatory DCB parameters are correctly configured. In progress: The DCB map configuration is not complete. Some mandatory parameters are not configured. PFC Mode PFC configuration in the DCB map: On (enabled) or Off. PG Priority group configured in the DCB map. TSA Transmission scheduling algorithm used in the DCB map: Enhanced Transmission Selection (ETS).
Field Description Fabric-Map Name of the FCoE map containing the FCoE/FC configuration parameters for the server CNA-fabric connection. Login Method Method used by the server CNA to log in to the fabric; for example: FLOGI - ENode logged in using a fabric login (FLOGI). FDISC - ENode logged in using a fabric discovery (FDISC). Status Operational status of the link between a server CNA port and a SAN fabric: Logged In - Server has logged in to the fabric and is able to transmit FCoE traffic.
Field Description FCF MAC Fibre Channel forwarder MAC: MAC address of Aggregator with the FCF interface. Fabric Intf Fabric-facing Aggregator with the Fibre Channel port (slot/port) on which FCoE traffic is transmitted to the specified fabric. FCoE VLAN ID of the dedicated VLAN used to transmit FCoE traffic from a server CNA to a fabric and configured on both the server-facing Aggregator with the server CNA port.
Table 30. Displaying NPIV Proxy Gateway Information
Command: show interfaces status
Description: Displays the operational status of Ethernet and Fibre Channel interfaces on the Aggregator with the NPG.
NOTE: Although the show interfaces status command displays the Fibre Channel (FC) interfaces with the abbreviated label of 'Fc' in the output, if you attempt to specify an FC interface by using the interface fc command in the CLI, an error message is displayed.
Ethernet ports - up (transmitting FCoE and LAN storage traffic) or down (not transmitting traffic). Fibre Channel ports - up (link is up and transmitting FC traffic) or down (link is down and not transmitting FC traffic), link-wait (link is up and waiting for FLOGI to complete on peer SW port), or removed (port has been shut down). Speed Transmission speed (in Megabits per second) of Ethernet and FC ports, including auto-negotiated speed (Auto).
FC-MAP FCoE MAC-address prefix value - The unique 24-bit MAC address prefix that identifies a fabric. FKA-ADV-period Time interval (in seconds) used to transmit FIP keepalive advertisements. FCF Priority The priority used by a server to select an upstream FCoE forwarder.
show npiv devices brief Command Example
Dell# show npiv devices brief
Total NPIV Devices = 2
-------------------------------------------------------------------------------------------
ENode-Intf  ENode-WWPN               FCoE-Vlan  Fabric-Intf  Fabric-Map  LoginMethod  Status
-------------------------------------------------------------------------------------------
Te 0/11     20:01:00:10:18:f1:94:20  1003       Fc 0/9       fid_1003    FLOGI        LOGGED_IN
Te 0/12     10:00:00:00:c9:d9:9c:cb  1003       Fc 0/10      fid_1003    FDISC        LOGGED_IN
ENode WWNN   : 20:00:00:10:18:f1:94:21
FCoE MAC     : 0e:fc:03:01:02:01
FC-ID        : 01:02:01
LoginMethod  : FLOGI
Secs         : 5593
Status       : LOGGED_IN

ENode[1]:
ENode MAC    : 00:10:18:f1:94:22
ENode Intf   : Te 0/12
FCF MAC      : 5c:f9:dd:ef:10:c9
Fabric Intf  : Fc 0/10
FCoE Vlan    : 1003
Fabric Map   : fid_1003
ENode WWPN   : 10:00:00:00:c9:d9:9c:cb
ENode WWNN   : 10:00:00:00:c9:d9:9c:cd
FCoE MAC     : 0e:fc:03:01:02:02
FC-ID        : 01:02:01
LoginMethod  : FDISC
Secs         : 5593
Status       : LOGGED_IN
Table 35.
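The FCoE MAC shown above is a fabric-provided MAC address (FPMA): the 24-bit FC-MAP prefix followed by the 24-bit FC-ID assigned at login. A quick sketch of that construction (the helper name is ours, not a Dell tool):

```python
def fpma(fc_map: str, fc_id: str) -> str:
    """Concatenate a 24-bit FC-MAP prefix and a 24-bit FC-ID
    into a 48-bit FCoE MAC address in colon-separated form."""
    raw = (fc_map + fc_id).replace(":", "")
    return ":".join(raw[i:i + 2] for i in range(0, len(raw), 2))

# First ENode above: FC-MAP 0e:fc:03, FC-ID 01:02:01
print(fpma("0e:fc:03", "01:02:01"))  # 0e:fc:03:01:02:01
```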
Field Description LoginMethod Method used by the server CNA to log in to the fabric; for example, FLOGI or FDISC. Secs Number of seconds that the fabric connection is up. State Status of the fabric connection: logged in. show fc switch Command Example Dell# show fc switch Switch Mode : NPG Switch WWN : 10:00:5c:f9:dd:ef:10:c0 Dell# Table 36. show fc switch Command Description Field Description Switch Mode Fibre Channel mode of operation of an Aggregator.
23 Upgrade Procedures
To find the upgrade procedures, go to the Dell Networking OS Release Notes for your system type to see all the requirements needed to upgrade to the desired Dell Networking OS version. To upgrade your system type, follow the procedures in the Dell Networking OS Release Notes.
Get Help with Upgrades
Direct any questions or concerns about the Dell Networking OS upgrade procedures to the Dell Technical Support Center. You can reach Technical Support:
• On the web: http://support.dell.
24 Debugging and Diagnostics
This chapter contains the following sections:
• Debugging Aggregator Operation
• Software Show Commands
• Offline Diagnostics
• Trace Logs
• Show Hardware Commands
Supported Modes
Standalone, PMUX, VLT
Debugging Aggregator Operation
This section describes common troubleshooting procedures to use for error conditions that may arise during Aggregator operation.
(Up): Interface up   (Dwn): Interface down   (Dis): Interface disabled

Uplink State Group    : 1                  Status: Enabled, Up
Defer Timer           : 10 sec
Upstream Interfaces   : Po 128(Up)
Downstream Interfaces : Te 0/1(Up) Te 0/2(Up) Te 0/3(Dwn) Te 0/4(Dwn) Te 0/5(Up) Te 0/6(Dwn) Te 0/7(Dwn) Te 0/8(Up)

2. Verify that the downstream port channel in the top-of-rack switch that connects to the Aggregator is configured correctly.
Name: TenGigabitEthernet 0/1
802.1QTagged: Hybrid
SMUX port mode: Auto VLANs enabled
Vlan membership:
Q    Vlans
U    1
T    2-4094
Native VlanId: 1

2. Assign the port to a specified group of VLANs (vlan tagged command) and re-display the port mode status.
System Type: PE-FN-410S-IOA
Control Processor: MIPS RMI XLP with 2147483648 bytes of memory, core(s) 1.
128M bytes of boot flash memory.
1 12-port GE/TE (FN)
12 Ten GigabitEthernet/IEEE 802.3 interface(s)
Dell#
Offline Diagnostics
The offline diagnostics test suite is useful for isolating faults and debugging hardware. The diagnostics tests are grouped into three levels:
• Level 0 — Level 0 diagnostics check for the presence of various components and perform essential path verifications.
NOTE: The system reboots when the offline diagnostics complete. This is an automatic process. The following warning message appears when you implement the offline stack-unit command:
Warning - offline of unit will bring down all the protocols and the unit will be operationally down, except for running Diagnostics. Please make sure that stacking/fanout not configured for Diagnostics execution. Also reboot/online command is necessary for normal operation after the offline command is issued.
Auto Save on Crash or Rollover Exception information for MASTER or standby units is stored in the flash:/TRACE_LOG_DIR directory. This directory contains files that save trace information when there has been a task crash or timeout. • On a MASTER unit, you can reach the TRACE_LOG_DIR files by FTP or by using the show file command from the flash://TRACE_LOG_DIR directory.
• show hardware stack-unit {0-5} buffer unit {0-1} port {1-64 | all} bufferinfo View the forwarding plane statistics containing the packet buffer statistics per COS per port. EXEC Privilege mode • show hardware stack-unit {0-5} buffer unit {0-1} port {1-64} queue {0-14 | all} buffer-info View input and output statistics on the party bus, which carries inter-process communication traffic between CPUs.
show hardware stack-unit {0-5} unit {0-0} table-dump {table name} Environmental Monitoring Aggregator components use environmental monitoring hardware to detect transmit power readings, receive power readings, and temperature updates. To receive periodic power updates, you must enable the following command. • Enable environmental monitoring.
To view the programmed alarm thresholds levels, including the shutdown value, use the show alarms threshold command. NOTE: When the ingress air temperature exceeds 61°C, the Status LED turns Amber and a major alarm is triggered.
This message indicates that the specified card is not receiving enough power. In response, the system first shuts down Power over Ethernet (PoE). Troubleshoot an Under-Voltage Condition To troubleshoot an under-voltage condition, check that the correct number of power supplies are installed and their Status light emitting diodes (LEDs) are lit. The following table lists information for SNMP traps and OIDs, which provide information about environmental monitoring hardware and hardware components. Table 38.
Forwarding processor (FP) ASICs provide Ethernet MAC functions, queueing, and buffering, as well as store feature and forwarding tables for hardware-based lookup and forwarding decisions. 1G and 10G interfaces use different FPs.
You can tune buffers at three locations:
1. CSF — Output queues going from the CSF.
2. FP Uplink — Output queues going from the FP to the CSF IDP links.
3. Front-End Link — Output queues going from the FP to the front-end PHY.
Figure 31. Buffer Tuning Points Deciding to Tune Buffers Dell Networking recommends exercising caution when configuring any non-default buffer settings, as tuning can significantly affect system performance. The default values work for most cases. As a guideline, consider tuning buffers if traffic is bursty (and coming from several interfaces). In this case: • Reduce the dedicated buffer on all queues/interfaces. • Increase the dynamic buffer on all interfaces.
  BUFFER PROFILE mode
  buffer dedicated
• Change the maximum number of dynamic buffers an interface can request.
  BUFFER PROFILE mode
  buffer dynamic
• Change the number of packet-pointers per queue.
  BUFFER PROFILE mode
  buffer packet-pointers
• Apply the buffer profile to a CSF to FP link.
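Combining these commands, a custom buffer-profile configuration might be sketched as follows. The profile name and dynamic-buffer value mirror the show-output examples in this section; exact syntax and prompts may vary by release, so treat this as a sketch.

```
Dell(conf)# buffer-profile fp myfsbufferprofile
Dell(conf-buffer-profile)# buffer dynamic 1256
Dell(conf-buffer-profile)# exit
Dell(conf)# interface tengigabitethernet 0/6
Dell(conf-if-te-0/6)# buffer-policy myfsbufferprofile
```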
Queue#  Dedicated Buffer (Kilobytes)  Buffer Packets
0       2.50                          256
1       2.50                          256
2       2.50                          256
3       2.50                          256
4       9.38                          256
5       9.38                          256
6       9.38                          256
7       9.38                          256

Example of Viewing the Buffer Profile Allocations
Dell#show running-config interface tengigabitethernet 0/6
!
interface TenGigabitEthernet 0/6
mtu 9252
switchport
no shutdown
buffer-policy myfsbufferprofile

Example of Viewing the Buffer Profile (Interface)
Dell#show buffer-profile detail int te 0/2
Interface Te 0/2
Buffer-profile fsqueue-fp
Dynamic buffer 1256.
Using a Pre-Defined Buffer Profile
The Dell Networking OS provides two pre-defined buffer profiles, one for single-queue (for example, non-quality-of-service [QoS]) applications and one for four-queue (for example, QoS) applications.
You must reload the system for the global buffer profile to take effect; a message similar to the following displays:
% Info: For the global pre-defined buffer profile to take effect, please save the config and reload the system.
! buffer fp-uplink stack-unit 0 port-set 0 buffer-policy fsqueue-hig buffer fp-uplink stack-unit 0 port-set 1 buffer-policy fsqueue-hig ! Interface range gi 0/1 - 48 buffer-policy fsqueue-fp Dell#show run int Tengig 0/10 ! interface TenGigabitEthernet 0/10 Troubleshooting Packet Loss The show hardware stack-unit command is intended primarily to troubleshoot packet loss. To troubleshoot packet loss, use the following commands.
Port# Drops 1 0 0 2 0 0 3 0 0 4 0 0 5 0 0 6 0 0 7 0 0 8 0 0 :Ingress Drops :IngMac Drops :Total Mmu Drops :EgMac Drops :Egress 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 Dell#show hardware stack-unit --- Ingress Drops --Ingress Drops : IBP CBP Full Drops : PortSTPnotFwd Drops : IPv4 L3 Discards : Policy Discards : Packets dropped by FP : (L2+L3) Drops : Port bitmap zero Drops : Rx VLAN Drops : 0 drops unit 0 port 1 30 0 0 0 0 14 0 16 0 --- Ingress MAC counters--Ingress FCSDrops : 0 Ingress MTUExc
noClus recvd dropped recvToNet rxError rxDatapathErr rxPkt(COS0) rxPkt(COS1) rxPkt(COS2) rxPkt(COS3) rxPkt(COS4) rxPkt(COS5) rxPkt(COS6) rxPkt(COS7) rxPkt(UNIT0) rxPkt(UNIT1) rxPkt(UNIT2) rxPkt(UNIT3) transmitted txRequested noTxDesc txError txReqTooLarge txInternalError txDatapathErr txPkt(COS0) txPkt(COS1) txPkt(COS2) txPkt(COS3) txPkt(COS4) txPkt(COS5) txPkt(COS6) txPkt(COS7) txPkt(UNIT0) :0 :0 :0 :0 :0 :0 :0 :0 :0 :0 :0 :0 :0 :0 :0 :0 :0 :0 :0 :0 :0 :0 :0 :0 :0 :0 :0 :0 :0 :0 :0 :0 :0 :0 The show hard
0 CRC, 0 overrun, 0 discarded Output Statistics: 1649714 packets, 1948622676 bytes, 0 underruns 0 64-byte pkts, 27234 over 64-byte pkts, 107970 over 127-byte pkts 34 over 255-byte pkts, 504838 over 511-byte pkts, 1009638 over 1023-byte pkts 0 Multicasts, 0 Broadcasts, 1649714 Unicasts 0 throttles, 0 discarded, 0 collisions Rate info (interval 45 seconds): Input 00.00 Mbits/sec, 2 packets/sec, 0.00% of line-rate Output 00.06 Mbits/sec, 8 packets/sec, 0.
25 Standards Compliance
This chapter describes standards compliance for Dell Networking products.
NOTE: Unless noted, when a standard cited here is listed as supported by the Dell Networking Operating System (OS), the system also supports predecessor standards. One way to search for predecessor standards is to use the http://tools.ietf.org/ website. Click “Browse and search IETF documents,” enter an RFC number, and inspect the top of the resulting document for obsolescence citations to related RFCs.
General Internet Protocols The following table lists the Dell Networking OS support per platform for general internet protocols. Table 39.
Network Management The following table lists the Dell Networking OS support per platform for network management protocol. Table 41.
RFC# Full Name 2579 Textual Conventions for SMIv2 2580 Conformance Statements for SMIv2 2618 RADIUS Authentication Client MIB, except the following four counters: radiusAuthClientInvalidServerAddresses radiusAuthClientMalformedAccessResponses radiusAuthClientUnknownTypes radiusAuthClientPacketsDropped 3635 Definitions of Managed Objects for the Ethernet-like Interface Types 2674 Definitions of Managed Objects for Bridges with Traffic Classes, Multicast Filtering and Virtual LAN Extensions 2787
RFC# Full Name IEEE 802.1AB The LLDP Management Information Base extension module for IEEE 802.1 organizationally defined discovery information. (LLDP DOT1 MIB and LLDP DOT3 MIB) IEEE 802.1AB The LLDP Management Information Base extension module for IEEE 802.3 organizationally defined discovery information. (LLDP DOT1 MIB and LLDP DOT3 MIB) sFlow.org sFlow Version 5 sFlow.
https://www.force10networks.com/csportal20/MIBs/MIB_OIDs.aspx Some pages of iSupport require a login. To request an iSupport account, go to: https://www.force10networks.com/CSPortal20/Support/AccountRequest.aspx If you have forgotten or lost your account information, contact Dell TAC for assistance.