Dell PowerEdge FN I/O Aggregator Configuration Guide 9.8(0.0)
Notes, cautions, and warnings NOTE: A NOTE indicates important information that helps you make better use of your computer. CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem. WARNING: A WARNING indicates a potential for property damage, personal injury, or death. Copyright © 2015 Dell Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws.
1 About this Guide This guide describes the supported protocols and software features, and provides configuration instructions and examples, for the Dell Networking FN I/O Aggregator running Dell Networking OS version 9.6(0.0). The I/O Aggregator is installed in a Dell PowerEdge FX2 server chassis. For information about how to install and perform the initial switch configuration, refer to the Getting Started Guides on the Dell Support website at http://www.dell.
Related Documents
For more information about the Dell PowerEdge FN I/O Aggregator, refer to the following documents:
• Dell PowerEdge FN I/O Aggregator Command Line Reference Guide
• Dell PowerEdge FN I/O Aggregator Getting Started Guide
• Release Notes for the Dell PowerEdge FN I/O Aggregator
2 Before You Start To install the Aggregator in a Dell PowerEdge FX2 server chassis, use the instructions in the Dell PowerEdge FN I/O Aggregator Getting Started Guide that is shipped with the product. The I/O Aggregator (also known as Aggregator) installs with zero-touch configuration. After you power it on, an Aggregator boots up with default settings and auto-configures with software features enabled.
Stacking mode
Command: stack-unit unit iom-mode stack (CONFIGURATION mode)
Example: Dell(conf)#stack-unit 0 iom-mode stack
Select this mode to configure Stacking mode CLI commands. For more information on the Stacking mode, refer to Stacking.
• Link tracking: Uplink-state group 1 is automatically configured. In uplink-state group 1, server-facing ports auto-configure as downstream interfaces; the uplink port-channel (LAG 128) auto-configures as an upstream interface. Server-facing links are auto-configured to be brought up only if the uplink port-channel is up.
• In VLT mode, port 9 is automatically configured as a VLT interconnect port. VLT domain configuration is automatic.
interval, which is 10 seconds by default. If you have configured a VLAN, you can reduce the defer time by changing the defer-timer value or remove it by using the no defer-timer command. NOTE: If installed servers do not have connectivity to a switch, check the Link Status LED of the uplink ports on the Aggregator. If all LEDs are on, check the LACP configuration on the ToR switch that is connected to the Aggregator to ensure that LACP is correctly configured.
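As a sketch, the defer timer might be adjusted or removed as follows. The uplink-state-group context and the value 20 shown here are assumptions for illustration; verify the exact command placement in the Command Line Reference Guide.
Example of Changing the Defer Timer
Dell(conf)#uplink-state-group 1
Dell(conf-uplink-state-group-1)#defer-timer 20
Dell(conf-uplink-state-group-1)#no defer-timer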
3 Configuration Fundamentals The Dell Networking Operating System (OS) command line interface (CLI) is a text-based interface you can use to configure interfaces and protocols. The CLI is structured in modes for security and management purposes. Different sets of commands are available in each mode, and you can limit user access to modes using privilege levels. In Dell Networking OS, after you enable a command, it is entered into the running configuration file.
• LINE submode is the mode in which you configure the console and virtual terminal lines. NOTE: At any time, entering a question mark (?) displays the available command options. For example, when you are in CONFIGURATION mode, entering the question mark first lists all available commands, including the possible submodes.
CLI Command Mode — Prompt — Access Command
VIRTUAL TERMINAL — Dell(config-line-vty)# — line (LINE Modes)
The following example shows how to change the command mode from CONFIGURATION mode to INTERFACE configuration mode.
Example of Changing Command Modes
Dell(conf)#interface tengigabitethernet 0/2
Dell(conf-if-te-0/2)#
The do Command
You can enter an EXEC mode command from any CONFIGURATION mode (CONFIGURATION, INTERFACE, and so on).
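For example, a show command can be run from INTERFACE mode without returning to EXEC mode. The interface and command shown are illustrative:
Example of the do Command
Dell(conf-if-te-0/2)#do show interfaces tengigabitethernet 0/2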
Obtaining Help
Obtain a list of keywords and a brief functional description of those keywords at any CLI mode using the ? or help command:
• To list the keywords available in the current mode, enter ? at the prompt or after a keyword.
• Entering ? after a prompt lists all of the available keywords. The output of the help command is the same.
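For example, entering ? after the show keyword lists the keywords that can follow it; the CLI prints each keyword with a one-line description. The exact keyword list varies by mode and software release:
Example of Obtaining Help
Dell#show ?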
Short-Cut Key Combination — Action
CNTL-N — Return to more recent commands in the history buffer after recalling commands with CNTL-P or the UP arrow key.
CNTL-P — Recalls commands, beginning with the last command.
CNTL-U — Deletes the line.
CNTL-W — Deletes the previous word.
CNTL-X — Deletes the line.
CNTL-Z — Ends continuous scrolling of command outputs.
Esc B — Moves the cursor back one word.
Esc F — Moves the cursor forward one word.
Esc D — Deletes all characters from the cursor to the end of the word.
The except keyword displays text that does not match the specified text. The following example shows this keyword used in combination with the show stack-unit all stack-ports all pfc details command.
Example of the except Keyword
Dell(conf)#do show stack-unit all stack-ports all pfc details | except 0
Admin mode is On
Admin is enabled
Local is enabled
Link Delay 65535 pause quantum
Dell(conf)#
The find keyword displays the output of the show command beginning from the first occurrence of specified text.
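A sketch of the find keyword; the filter string is illustrative. The displayed output begins at the first line containing the specified text:
Example of the find Keyword
Dell#show running-config | find interface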
4 Data Center Bridging (DCB) On an I/O Aggregator, data center bridging (DCB) features are auto-configured in standalone mode. You can display information on DCB operation by using show commands. NOTE: DCB features are not supported on an Aggregator in stacking mode. Supported Modes Standalone, Stacking, PMUX, VLT Ethernet Enhancements in Data Center Bridging The following section describes DCB.
• 802.1Qbb - Priority-based Flow Control (PFC) • 802.1Qaz - Enhanced Transmission Selection (ETS) • 802.1Qau - Congestion Notification • Data Center Bridging Exchange (DCBx) protocol NOTE: In Dell Networking OS version 9.4.0.x, only the PFC, ETS, and DCBx features are supported in data center bridging.
– If the negotiation fails and PFC is enabled on the port, any user-configured PFC input policies are applied. If no PFC dcb-map has been previously applied, the PFC default setting is used (no priorities configured). If you do not enable PFC on an interface, you can enable the 802.3x link-level pause function. By default, the link-level pause is disabled when you disable DCBx and PFC. If no PFC dcb-map has been applied on the interface, the default PFC settings are used.
Traffic Groupings — Description
Priority group — A group of 802.1p priorities that have the same traffic handling requirements for latency and frame loss.
Group ID — A 4-bit identifier assigned to each priority group. The range is from 0 to 7.
Group bandwidth — Percentage of available bandwidth allocated to a priority group.
Group transmission selection algorithm (TSA) — Type of queue scheduling a priority group uses.
In the Dell Networking OS, ETS is implemented as follows:
• ETS supports groups of 802.1p priorities.
Task — Command — Command Mode
Create a DCB map to specify the PFC and ETS settings for groups of dot1p priorities. — dcb-map name — CONFIGURATION
Configure the PFC setting (on or off) and the ETS bandwidth percentage allocated to traffic in each priority group, or whether priority group traffic should be handled with strict priority scheduling.
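The steps above can be sketched as the following session. The map name, bandwidth percentages, and priority-to-group mapping are illustrative; verify the priority-group and priority-pgid syntax against the Command Line Reference Guide:
Example of Creating a DCB Map
Dell(conf)#dcb-map SAN_DCB_MAP
Dell(conf-dcbmap-SAN_DCB_MAP)#priority-group 0 bandwidth 60 pfc off
Dell(conf-dcbmap-SAN_DCB_MAP)#priority-group 1 bandwidth 40 pfc on
Dell(conf-dcbmap-SAN_DCB_MAP)#priority-pgid 0 0 0 1 0 0 0 0
Here, dot1p priority 3 (commonly used for FCoE) is mapped to priority group 1 with PFC enabled; all other priorities go to priority group 0.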
ETS: Equal bandwidth is assigned to each port queue and each dot1p priority in a priority group. To configure PFC and ETS parameters on an interface, you must specify the PFC mode, the ETS bandwidth allocation for a priority group, and the 802.1p priority-to-priority group mapping in a DCB map. No default PFC and ETS settings are applied to Ethernet interfaces.
Data Center Bridging in a Traffic Flow
The following figure shows how DCB handles a traffic flow on an interface. Figure 3.
• A DCB-MAP policy is applied with PFC disabled. The following example shows a default interface configuration with DCB disabled and link-level flow control enabled.
Enabling Auto-DCB-Enable Mode on Next Reload
To configure the Aggregator so that all interfaces come up in auto-DCB-enable mode with DCB disabled and flow control enabled, use the dcb enable auto-detect on-next-reload command.
Task — Command
Globally enable auto-detection of DCBx and auto-enabling of DCB on all interfaces after switch reload. — dcb enable auto-detect on-next-reload
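A minimal sketch of the command in context; the setting takes effect the next time the switch reloads:
Example of Enabling Auto-DCB-Enable Mode
Dell#configure
Dell(conf)#dcb enable auto-detect on-next-reload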
Configure priority to priority group mapping from priority 0 to priority 7, in order.
4. Exit DCB MAP configuration mode.
DCB-MAP mode
exit
5. Enter interface configuration mode.
CONFIGURATION mode
interface type slot/port
6. Apply the dcb-map with PFC and ETS configurations to both ingress and egress interfaces.
INTERFACE mode
dcb-map map-name
7. Repeat steps 1 to 6 on all PFC and ETS enabled interfaces to ensure lossless traffic service.
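Steps 5 and 6 can be sketched as the following session, applying a previously created map to an interface. The interface number and map name are illustrative:
Example of Applying a DCB Map to an Interface
Dell(conf)#interface tengigabitethernet 0/3
Dell(conf-if-te-0/3)#dcb-map SAN_DCB_MAP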
NOTE: All these configurations are available only in PMUX mode and you cannot perform these configurations in Standalone mode. How Priority-Based Flow Control is Implemented Priority-based flow control provides a flow control mechanism based on the 802.1p priorities in converged Ethernet traffic received on an interface and is enabled by default.
• ETS-assigned bandwidth allocation and scheduling apply only to data queues, not to control queues. • Dell Networking OS supports hierarchical scheduling on an interface. Dell Networking OS control traffic is redirected to control queues as higher priority traffic with strict priority scheduling. After control queues drain out, the remaining data traffic is scheduled to queues according to the bandwidth and scheduler configuration in the dcb-map.
Priority group 2 Assigns traffic to one priority queue with 30% of the link bandwidth. Priority group 3 Assigns traffic to two priority queues with 50% of the link bandwidth and strict-priority scheduling.
DCBx Port Roles The following DCBx port roles are auto-configured on an Aggregator to propagate DCB configurations learned from peer DCBx devices internally to other switch ports: Auto-upstream The port advertises its own configuration to DCBx peers and receives its configuration from DCBx peers (ToR or FCF device). The port also propagates its configuration to other ports on the switch. The first auto-upstream port that is capable of receiving a peer configuration is elected as the configuration source.
NOTE: On a DCBx port, application priority TLV advertisements are handled as follows: • The application priority TLV is transmitted only if the priorities in the advertisement match the configured PFC priorities on the port. • On auto-upstream and auto-downstream ports: – If a configuration source is elected, the ports send an application priority TLV based on the application priority TLV received on the configuration-source port.
Propagation of DCB Information When an auto-upstream or auto-downstream port receives a DCB configuration from a peer, the port acts as a DCBx client and checks if a DCBx configuration source exists on the switch. • If a configuration source is found, the received configuration is checked against the currently configured values that are internally propagated by the configuration source.
Figure 4. DCBx Sample Topology DCBx Prerequisites and Restrictions The following prerequisites and restrictions apply when you configure DCBx operation on a port: • DCBx requires LLDP in both send (TX) and receive (RX) modes to be enabled on a port interface. If multiple DCBx peer ports are detected on a local DCBx interface, LLDP is shut down.
DCBx Error Messages
The following syslog messages appear when an error in DCBx operation occurs.
LLDP_MULTIPLE_PEER_DETECTED: DCBx is operationally disabled after detecting more than one DCBx peer on the port interface.
LLDP_PEER_AGE_OUT: DCBx is disabled as a result of LLDP timing out on a DCBx peer interface.
DSM_DCBx_PEER_VERSION_CONFLICT: A local port expected to receive the IEEE, CIN, or CEE version in a DCBx TLV from a remote peer but received a different, conflicting DCBx version.
Command — Output
To clear PFC TLV counters, use the clear pfc counters {stack-unit unit-number | tengigabitethernet slot/port} command.
show interface port-type slot/port ets {summary | detail} — Displays the ETS configuration applied to egress traffic on an interface, including priority groups with priorities and bandwidth allocation. To clear ETS TLV counters, enter the clear ets counters stack-unit unit-number command.
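A sketch of clearing both counter types; stack-unit 0 is illustrative:
Example of Clearing PFC and ETS TLV Counters
Dell#clear pfc counters stack-unit 0
Dell#clear ets counters stack-unit 0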
Admin mode is on Admin is enabled Remote is enabled Remote Willing Status is enabled Local is enabled Oper status is recommended PFC DCBx Oper status is Up State Machine Type is Feature TLV Tx Status is enabled PFC Link Delay 45556 pause quanta Application Priority TLV Parameters : -------------------------------------FCOE TLV Tx Status is disabled ISCSI TLV Tx Status is disabled Local FCOE PriorityMap is 0x8 Local ISCSI PriorityMap is 0x10 Remote FCOE PriorityMap is 0x8 Remote ISCSI PriorityMap is 0x8 0 In
Field — Description
TLV Tx Status — Status of PFC TLV advertisements: enabled or disabled.
PFC Link Delay — Link delay (in quanta) used to pause specified priority traffic.
Application Priority TLV: FCOE TLV Tx Status — Status of FCoE advertisements in application priority TLVs from the local DCBx port: enabled or disabled.
Application Priority TLV: ISCSI TLV Tx Status — Status of iSCSI advertisements in application priority TLVs from the local DCBx port: enabled or disabled.
6
7
Remote Parameters:
-------------------
Remote is disabled
Local Parameters :
------------------
Local is enabled
TC-grp   Priority#          Bandwidth   TSA
0        0,1,2,3,4,5,6,7    100%        ETS
1                           0%          ETS
2                           0%          ETS
3                           0%          ETS
4                           0%          ETS
5                           0%          ETS
6                           0%          ETS
7                           0%          ETS

Priority#   Bandwidth   TSA
0           13%         ETS
1           13%         ETS
2           13%         ETS
3           13%         ETS
4           12%         ETS
5           12%         ETS
6           12%         ETS
7           12%         ETS

Oper status is init
Conf TLV Tx Status is disabled
Traffic Class TLV Tx Status is disabled
Example of the show interface ets detail Command
Dell# show
7 0% ETS Oper status is init ETS DCBX Oper status is Down Reason: Port Shutdown State Machine Type is Asymmetric Conf TLV Tx Status is enabled Reco TLV Tx Status is enabled 0 Input Conf TLV Pkts, 0 Output Conf TLV Pkts, 0 Error Conf TLV Pkts 0 Input Reco TLV Pkts, 0 Output Reco TLV Pkts, 0 Error Reco TLV Pkts The following table describes the show interface ets detail command fields. Table 5.
Field Description Conf TLV Tx Status Status of ETS Configuration TLV advertisements: enabled or disabled. Reco TLV Tx Status Status of ETS Recommendation TLV advertisements: enabled or disabled. Input Conf TLV pkts, Output Conf TLV pkts, Error Conf TLV pkts Number of ETS Configuration TLVs received and transmitted, and number of ETS Error Configuration TLVs received.
0 1 2 3 4 5 6 7
8   0,1,2,3,4,5,6,7   100%   -   ETS   -
Example of the show interface DCBx detail Command
Dell# show interface tengigabitethernet 0/4 dcbx detail
Dell#show interface te 0/4 dcbx detail
E-ETS Configuration TLV enabled      e-ETS Configuration TLV disabled
R-ETS Recommendation TLV enabled     r-ETS Recommendation TLV disabled
P-PFC Configuration TLV enabled      p-PFC Configuration TLV disabled
F-Application priority for FCOE enabled      f-Application Priority for FCOE disabled
I-Application priority for iSCSI enabl
Field Description DCBx Operational Status Operational status (enabled or disabled) used to elect a configuration source and internally propagate a DCB configuration. The DCBx operational status is the combination of PFC and ETS operational status. Configuration Source Specifies whether the port serves as the DCBx configuration source on the switch: true (yes) or false (no). Local DCBx Compatibility mode DCBx version accepted in a DCB configuration as compatible.
Field Description Application Priority TLV Statistics: Input Appln Priority TLV pkts Number of Application TLVs received. Application Priority TLV Statistics: Output Appln Priority TLV pkts Number of Application TLVs transmitted. Application Priority TLV Statistics: Error Appln Priority TLV Pkts Number of Application TLV error packets received.
Reason — Description
Port Shutdown — Port is shut down. All other reasons for DCBx inoperation, if any, are ignored.
LLDP Rx/Tx is disabled — LLDP is disabled (Admin Mode set to rx or tx only) globally or on the interface.
Waiting for Peer — Waiting for a peer, or the detected peer connection has aged out.
Multiple Peer Detected — Multiple peer connections detected on the interface.
Version Conflict — The DCBx version on the peer is different from the local or globally configured DCBx version.
Reason 52 Description • Incompatible TSA. • Incompatible TC BW. • Incompatible TC TSA.
5 Dynamic Host Configuration Protocol (DHCP) The Aggregator is auto-configured to operate as a dynamic host configuration protocol (DHCP) client. The DHCP server, DHCP relay agent, and secure DHCP features are not supported. DHCP is an application layer protocol that dynamically assigns IP addresses and other configuration parameters to network end-stations (hosts) based on configuration policies determined by network administrators.
DHCPINFORM — A client uses this message to request configuration parameters when it is assigned an IP address manually rather than with DHCP. The server responds by unicast.
DHCPNAK — A server sends this message to the client if it is not able to fulfill a DHCPREQUEST; for example, if the requested address is already in use. In this case, the client starts the configuration process over by sending a DHCPDISCOVER.
EXEC Privilege [no] debug ip dhcp client events [interface type slot/port] The following example shows the packet- and event-level debug messages displayed for the packet transmissions and state transitions on a DHCP client interface.
Dell# renew dhcp int Ma 0/0 Dell#1w2d23h: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_EVT: Interface Ma 0/0 :DHCP RENEW CMD Received in state STOPPED 1w2d23h: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_EVT: Interface Ma 0/0 :Transitioned to state SELECTING 1w2d23h: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_PKT: DHCP DISCOVER sent in Interface Ma 0/0 1w2d23h: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_PKT: Received DHCPOFFER packet in Interface Ma 0/0 with Lease-Ip
You can also manually configure an IP address for the VLAN 1 default management interface using the CLI. If no user-configured IP address exists for the default VLAN management interface and the default VLAN IP address is not in the startup configuration, the Aggregator automatically obtains one using DHCP.
• The default VLAN 1 with all ports configured as members is the only L3 interface on the Aggregator.
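A sketch of manually assigning the VLAN 1 management address from the CLI; the IP address shown is illustrative:
Example of Configuring the VLAN 1 Management IP Address
Dell(conf)#interface vlan 1
Dell(conf-if-vl-1)#ip address 10.1.1.10/24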
NOTE: Management routes added by the DHCP client include the specific routes to reach a DHCP server in a different subnet and the management route. DHCP Client on a VLAN The following conditions apply on a VLAN that operates as a DHCP client: • The default VLAN 1 with all ports auto-configured as members is the only L3 interface on the Aggregator.
Option — Number and Description
Specifies the amount of time that the client is allowed to use an assigned IP address.
DHCP Message Type — Option 53.
• 1: DHCPDISCOVER
• 2: DHCPOFFER
• 3: DHCPREQUEST
• 4: DHCPDECLINE
• 5: DHCPACK
• 6: DHCPNAK
• 7: DHCPRELEASE
• 8: DHCPINFORM
Parameter Request List — Option 55. The client uses this option to tell the server which parameters it requires. It is a series of octets where each octet is a DHCP option code.
Renewal Time — Option 58.
To insert Option 82 into DHCP packets, follow this step. • Insert Option 82 into DHCP packets. CONFIGURATION mode int ma 0/0 ip add dhcp relay information-option remote-id For routers between the relay agent and the DHCP server, enter the trust-downstream option. Releasing and Renewing DHCP-based IP Addresses On an Aggregator configured as a DHCP client, you can release a dynamically-assigned IP address without removing the DHCP client operation on the interface.
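A sketch of releasing and later renewing a dynamically assigned address. The interface choice is illustrative; the guide elsewhere shows the same renew command on interface Ma 0/0:
Example of Releasing and Renewing a DHCP Lease
Dell#release dhcp interface vlan 1
Dell#renew dhcp interface vlan 1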
DHCPINFORM Dell# 0 Example of the show ip dhcp lease Command Dell# show ip dhcp Interface Lease-IP Def-Router ServerId State Lease Obtnd At Lease Expires At ========= ======== ========= ======== ===== ============== ================ Ma 0/0 0.0.0.0/0 0.0.0.0 0.0.0.0 INIT -----NA--------NA---Vl 1 10.1.1.254/24 0.0.0.0 Renew Time ========== ----NA---08-26-2011 16:21:50 10.1.1.
6 FIP Snooping This chapter describes FIP snooping concepts and configuration procedures. Supported Modes Standalone, PMUX, VLT Fibre Channel over Ethernet Fibre Channel over Ethernet (FCoE) provides a converged Ethernet network that allows the combination of storage-area network (SAN) and LAN traffic on a Layer 2 link by encapsulating Fibre Channel data into Ethernet frames.
• FIP discovery: FCoE end-devices and FCFs are automatically discovered. • Initialization: FCoE devices perform fabric login (FLOGI) and fabric discovery (FDISC) to create a virtual link with an FCoE switch. • Maintenance: A valid virtual link between an FCoE device and an FCoE switch is maintained and the link termination logout (LOGO) functions properly. Figure 7.
• Global ACLs are applied on server-facing ENode ports.
• Port-based ACLs are applied on ports directly connected to an FCF and on server-facing ENode ports.
• Port-based ACLs take precedence over global ACLs.
• FCoE-generated ACLs take precedence over user-configured ACLs. A user-configured ACL entry cannot deny FCoE and FIP snooping frames.
The illustration below depicts an Aggregator used as a FIP snooping bridge in a converged Ethernet network. The ToR switch operates as an FCF for FCoE traffic.
legitimate sessions. By default, all FCoE and FIP frames are dropped unless specifically permitted by existing FIP snooping-generated ACLs. FIP Snooping on VLANs FIP snooping is enabled globally on an Aggregator on all VLANs: • FIP frames are allowed to pass through the switch on the enabled VLANs and are processed to generate FIP snooping ACLs. • FCoE traffic is allowed on VLANs only after a successful virtual-link initialization (fabric login FLOGI) between an ENode and an FCF.
FIP Snooping Restrictions The following restrictions apply to FIP snooping on an Aggregator: • The maximum number of FCoE VLANs supported on the Aggregator is eight. • The maximum number of FIP snooping sessions supported per ENode server is 32. To increase the maximum number of sessions to 64, use the fip-snooping max-sessions-per-enodemac command. This is configurable only in PMUX mode.
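A sketch of raising the per-ENode session limit. Per the restriction above, this command is available only in PMUX mode:
Example of Increasing the FIP Snooping Session Limit
Dell(conf)#fip-snooping max-sessions-per-enodemac 64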
Displaying FIP Snooping Information Use the show commands from the table below, to display information on FIP snooping. Command Output show fip-snooping sessions [interface vlan vlan-id] Displays information on FIP-snooped sessions on all VLANs or a specified VLAN, including the ENode interface and MAC address, the FCF interface and MAC address, VLAN ID, FCoE MAC address and FCoE session ID number (FC-ID), worldwide node name (WWNN) and the worldwide port name (WWPN).
Field — Description
ENode MAC — MAC address of the ENode.
ENode Interface — Slot/port number of the interface connected to the ENode.
FCF MAC — MAC address of the FCF.
FCF Interface — Slot/port number of the interface to which the FCF is connected.
VLAN — VLAN ID number used by the session.
FCoE MAC — MAC address of the FCoE session assigned by the FCF.
FC-ID — Fibre Channel ID assigned by the FCF.
Port WWPN — Worldwide port name of the CNA port.
Port WWNN — Worldwide node name of the CNA port.
show fip-snooping fcf Command Description
Field — Description
FCF MAC — MAC address of the FCF.
FCF Interface — Slot/port number of the interface to which the FCF is connected.
VLAN — VLAN ID number used by the session.
FC-MAP — FC-Map value advertised by the FCF.
ENode Interface — Slot/port number of the interface connected to the ENode.
FKA_ADV_PERIOD — Period of time (in milliseconds) during which FIP keep-alive advertisements are transmitted.
No of ENodes — Number of ENodes connected to the FCF.
Number Number Number Number Number of of of of of FLOGO Rejects CVL FCF Discovery Timeouts VN Port Session Timeouts Session failures due to Hardware Config :0 :0 :0 :0 :0 show fip-snooping statistics (port channel) Command Example Dell# show fip-snooping statistics interface port-channel 22 Number of Vlan Requests :0 Number of Vlan Notifications :2 Number of Multicast Discovery Solicits :0 Number of Unicast Discovery Solicits :0 Number of FLOGI :0 Number of FDISC :0 Number of FLOGO :0 Number of Enode Ke
Number of FLOGI Rejects Number of FIP FLOGI reject frames received on the interface. Number of FDISC Accepts Number of FIP FDISC accept frames received on the interface. Number of FDISC Rejects Number of FIP FDISC reject frames received on the interface. Number of FLOGO Accepts Number of FIP FLOGO accept frames received on the interface. Number of FLOGO Rejects Number of FIP FLOGO reject frames received on the interface.
FIP Snooping Example The following figure shows an Aggregator used as a FIP snooping bridge for FCoE traffic between an ENode (server blade) and an FCF (ToR switch). The ToR switch operates as an FCF and FCoE gateway. Figure 9. FIP Snooping on an Aggregator In the above figure, DCBx and PFC are enabled on the Aggregator (FIP snooping bridge) and on the FCF ToR switch. On the FIP snooping bridge, DCBx is configured as follows:
• A server-facing port is configured for DCBx in an auto-downstream role.
acl — enables debugging only for ACL-specific events.
error — enables debugging only for error conditions.
ifm — enables debugging only for IFM events.
info — enables debugging only for information events.
ipc — enables debugging only for IPC events.
rx — enables debugging only for incoming packet traffic.
To turn off debugging event messages, enter the no debug fip-snooping command.
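A sketch of enabling packet-level debugging and then turning all FIP snooping debugging off:
Example of Debugging FIP Snooping
Dell#debug fip-snooping rx
Dell#no debug fip-snooping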
7 Internet Group Management Protocol (IGMP) On an Aggregator, IGMP snooping is auto-configured. You can display information on IGMP by using the show ip igmp command. Multicast is based on identifying many hosts by a single destination IP address. Hosts represented by the same IP address are a multicast group. The internet group management protocol (IGMP) is a Layer 3 multicast protocol that hosts use to join or leave a multicast group.
– One router on a subnet is elected as the querier. The querier periodically multicasts (to all-multicast-systems address 224.0.0.1) a general query to all hosts on the subnet. – A host that wants to join a multicast group responds with an IGMP membership report that contains the multicast address of the group it wants to join (the packet is addressed to the same group).
Figure 12. IGMP version 3 Membership Report Packet Format Joining and Filtering Groups and Sources The below illustration shows how multicast routers maintain the group and source information from unsolicited reports. • The first unsolicited report from the host indicates that it wants to receive traffic for group 224.1.1.1. • The host’s second report indicates that it is only interested in traffic from group 224.1.1.1, source 10.11.1.1.
Leaving and Staying in Groups The below illustration shows how multicast routers track and refresh state changes in response to group-and-source-specific and general queries. • Host 1 sends a message indicating it is leaving group 224.1.1.1 and that the included filter for 10.11.1.1 and 10.11.1.2 are no longer necessary.
Disabling Multicast Flooding If the switch receives a multicast packet that has an IP address of a group it has not learned (unregistered frame), the switch floods that packet out of all ports on the VLAN. To disable multicast flooding on all VLAN ports, enter the no ip igmp snooping flood command in global configuration mode. When multicast flooding is disabled, unregistered multicast data traffic is forwarded to only multicast router ports on all VLANs.
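A minimal CLI sequence for disabling multicast flooding, using the command named above, might look like the following sketch (prompts are illustrative):

```
Dell#configure
Dell(conf)#no ip igmp snooping flood
Dell(conf)#end
```

To restore the default flooding behavior, the ip igmp snooping flood form of the command would be entered in the same mode.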
Last report received Group source list Source address 1.1.1.
8 Interfaces This chapter describes TenGigabit Ethernet interface types, both physical and logical, and how to configure them with the Dell Networking Operating System (OS).
Interface Types The following interface types are supported on an Aggregator.
0 Multicasts, 0 Broadcasts, 0 Unicasts 0 throttles, 0 discarded, 0 collisions, 0 wreddrops Rate info (interval 299 seconds): Input 00.00 Mbits/sec, 0 packets/sec, 0.00% of line-rate Output 00.00 Mbits/sec, 0 packets/sec, 0.00% of line-rate Time since last interface status change: 05:29:16 To view only configured interfaces, use the show interfaces configured command in EXEC Privilege mode. To determine which physical interfaces are available, use the show running-config command in EXEC mode.
The management IP address on the D-fabric provides a dedicated management access to the system. The switch interfaces support Layer 2 traffic over the 10-Gigabit Ethernet interfaces. These interfaces can also become part of virtual interfaces such as VLANs or port channels. For more information about VLANs, refer to VLANs and Port Tagging. For more information about port channels, refer to Port Channel Interfaces.
addresses and IP addresses if it appears in the main routing table of Dell Networking OS. In addition, the proxy address resolution protocol (ARP) is not supported on this interface. For additional management access, the Aggregator supports the default VLAN (VLAN 1) L3 interface in addition to the public fabric D management interface. You can assign the IP address for the VLAN 1 default management interface using the setup wizard or through the CLI.
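As a hedged sketch of the CLI method, the VLAN 1 management address could be assigned as follows; the IP address and mask are placeholders, not values from this guide:

```
Dell#configure
Dell(conf)#interface vlan 1
Dell(conf-if-vl-1)#ip address 10.1.1.10/24
Dell(conf-if-vl-1)#end
```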
Dell# VLAN Membership A virtual LAN (VLAN) is a logical broadcast domain or logical grouping of interfaces in a LAN in which all data received is kept locally and broadcast to all members of the group. In Layer 2 mode, VLANs move traffic at wire speed and can span multiple devices. Dell Networking OS supports up to 4093 port-based VLANs and one default VLAN, as specified in IEEE 802.1Q. VLANs provide the following benefits: • Improved security because you can isolate groups of users into different VLANs.
Figure 15. Tagged Frame Format The tag header contains some key information used by Dell Networking OS: • The VLAN protocol identifier identifies the frame as tagged according to the IEEE 802.1Q specifications (2 bytes). • Tag control information (TCI) includes the VLAN ID (2 bytes total). The VLAN ID can have 4,096 values, but two are reserved. NOTE: The insertion of the tag header into the Ethernet frame increases the size of the frame to more than the 1518 bytes specified in the IEEE 802.3 standard.
member of VLAN 2 and port 0/4 is an untagged member of VLAN 3, the resulting LAG consisting of the two ports is an untagged member of VLAN 2 and a tagged member of VLAN 3.
Adding an Interface to an Untagged VLAN To move an untagged interface from the default VLAN to another VLAN, use the vlan untagged command as shown in the below figure.
5. Configure the tagged VLANs 10 through 15 and untagged VLAN 20 on this port-channel. Dell(conf-if-po-128)#vlan tagged 10-15 Dell(conf-if-po-128)# Dell(conf-if-po-128)#vlan untagged 20 6. Show the running configurations on this port-channel. Dell(conf-if-po-128)#show config ! interface Port-channel 128 portmode hybrid switchport vlan tagged 10-15 vlan untagged 20 shutdown Dell(conf-if-po-128)#end Dell# 7. Show the VLAN configurations.
NOTE: A port channel may also be referred to as a link aggregation group (LAG). Port Channel Definitions and Standards Link aggregation is defined by IEEE 802.3ad as a method of grouping multiple physical interfaces into a single logical interface—a link aggregation group (LAG) or port channel. A LAG is “a group of links that appear to a MAC client as if they were a single link” according to IEEE 802.3ad. In Dell Networking OS, a LAG is referred to as a port channel interface.
channel. If the other interfaces configured in that port channel are configured with a different speed, Dell Networking OS disables them.
Members in this channel: Te 0/9(U) Te 0/10(U) Te 0/11(U) ARP type: ARPA, ARP Timeout 04:00:00 Last clearing of "show interface" counters 04:44:48 Queueing strategy: fifo Input Statistics: 10063 packets, 749248 bytes 8419 64-byte pkts, 0 over 64-byte pkts, 1644 over 127-byte pkts 0 over 255-byte pkts, 0 over 511-byte pkts, 0 over 1023-byte pkts 10063 Multicasts, 0 Broadcasts 0 runts, 0 giants, 0 throttles 0 CRC, 0 overrun, 0 discarded Output Statistics: 61970 packets, 7743149 bytes, 0 underruns 0 64-byte pkt
Create a Multiple-Range Creating a Multiple-Range Prompt Dell(conf)#interface range tengigabitethernet 0/5 - 10 , tengigabitethernet 0/1 , vlan 1 Dell(conf-if-range-te-0/5-10,te-0/1,vl-1)# Exclude a Smaller Port Range If the interface range has multiple port ranges, the smaller port range is excluded from the prompt.
monitor interface command example Dell#monitor interface tengig 0/1 Dell Networking OS uptime is 1 day(s), 4 hour(s), 31 minute(s) Monitor time: 00:00:00 Refresh Intvl.
2. show tdr tengigabitethernet / EXEC Privilege Displays TDR test results. Flow Control Using Ethernet Pause Frames An Aggregator auto-configures to operate in auto-DCB-enable mode (Refer to Data Center Bridging: Auto-DCB-Enable Mode).
– tx on: enter the keywords tx on to send control frames from this port to the connected device when a higher rate of traffic is received. – tx off: enter the keywords tx off so that flow control frames are not sent from this port to the connected device when a higher rate of traffic is received. – negotiate: enable pause-negotiation with the egress port of the peer device. If the negotiate command is not used, pause-negotiation is disabled. NOTE: The default is rx off.
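Applying these keywords in INTERFACE mode might look like the following sketch; the port number and the rx on tx off combination are only examples:

```
Dell(conf)#interface tengigabitethernet 0/1
Dell(conf-if-te-0/1)#flowcontrol rx on tx off
Dell(conf-if-te-0/1)#no shutdown
```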
For example, the VLAN contains tagged members with a link MTU of 1522 and an IP MTU of 1500 and untagged members with a link MTU of 1518 and an IP MTU of 1500. The VLAN's link MTU cannot be higher than 1518 bytes and its IP MTU cannot be higher than 1500 bytes. Auto-Negotiation on Ethernet Interfaces Setting Speed and Duplex Mode of Ethernet Interfaces By default, auto-negotiation of speed and duplex mode is enabled on 10GbE interfaces on an Aggregator.
Fc 0/9  Up   8000 Mbit Full --
Fc 0/10 Up   8000 Mbit Full --
Te 0/11 Down Auto      Auto --
Te 0/12 Down Auto      Auto --
In the above example, several ports display "Auto" in the speed field, including port 0/1. Now, in the below example, the speed of port 0/1 is set to 100 Mb and then its auto-negotiation is disabled.
duplex half interfaceconfig mode Supported CLI not available CLI not available Invalid Input error- CLI not available duplex full interfaceconfig mode Supported CLI not available CLI not available Invalid Input error-CLI not available Viewing Interface Information Displaying Non-Default Configurations The show [ip | running-config] interfaces configured command allows you to display only interfaces that have non-default configurations.
Vlan membership: Q Vlans U 1 --More-- Clearing Interface Counters The counters in the show interfaces command are reset by the clear counters command. This command does not clear the counters captured by any SNMP program. To clear the counters, use the following command in EXEC Privilege mode: Command Syntax Command Mode Purpose clear counters [interface] EXEC Privilege Clear the counters used in the show interface commands for all VLANs, and physical interfaces or selected ones.
feature fc Configuring Fibre Channel Interfaces To configure a Fibre Channel interface, follow these steps. Convert the interfaces 9 and 10 from FC to Ethernet mode.
9 iSCSI Optimization An Aggregator enables internet small computer system interface (iSCSI) optimization with default iSCSI parameter settings (Default iSCSI Optimization Values) and is auto-provisioned to support: iSCSI Optimization: Operation To display information on iSCSI configuration and sessions, use show commands. iSCSI optimization enables quality-of-service (QoS) treatment for iSCSI traffic.
Aggregator is configured to use dot1p priority-queue assignments to ensure that iSCSI traffic in these sessions receives priority treatment when forwarded on Aggregator hardware. Figure 16. iSCSI Optimization Example Monitoring iSCSI Traffic Flows The switch snoops iSCSI session-establishment and termination packets by installing classifier rules that trap iSCSI protocol packets to the CPU for examination.
• Initiator’s IQN (iSCSI qualified name) • Target’s IQN • Initiator’s TCP Port • Target’s TCP Port If no iSCSI traffic is detected for a session during a user-configurable aging period, the session data clears. Synchronizing iSCSI Sessions Learned on VLT-Lags with VLT-Peer The following behavior occurs during synchronization of iSCSI sessions.
To delete a specific IP address from the TCP port, use the no iscsi target port tcp-port-n ip-address address command to specify the address to be deleted. • ip-address specifies the IP address of the iSCSI target. When you enter the no form of the command, and the TCP port you want to delete is one bound to a specific IP address, include the IP address value in the command.
[no] iscsi profile-compellent. The default is: Compellent disk arrays are not detected. NOTE: All these configurations are available only in PMUX mode. Displaying iSCSI Optimization Information To display information on iSCSI optimization, use the show commands detailed in the below table: Table 7. Displaying iSCSI Optimization Information Command Output show iscsi Displays the currently configured iSCSI settings.
Time for aging out:00:00:09:34(DD:HH:MM:SS) ISID:806978696102 Initiator Initiator Target Target Connection IP Address TCP Port IP Address TCPPort ID 10.10.0.44 33345 10.10.0.101 3260 0 Session 1 : ----------------------------------------------------------------------------Target:iqn.2010-11.com.ixia:ixload:iscsi-TG1 Initiator:iqn.2010-11.com.ixia.
10 Isolated Networks for Aggregators An Isolated Network is an environment in which servers can communicate only with the uplink interfaces and not with each other, even though they are part of the same VLAN. If servers in the same chassis need to communicate with each other, they require non-isolated network connectivity between them, or their traffic must be routed through the ToR switch. Isolated Networks can be enabled on a per-VLAN basis.
11 Link Aggregation Unlike IOA Automated modes (Standalone and VLT modes), the IOA Programmable MUX (PMUX) can support multiple uplink LAGs. You can provision multiple uplink LAGs. The I/O Aggregator auto-configures with link aggregation groups (LAGs) as follows: • All uplink ports are automatically configured in a single port channel (LAG 128).
Uplink LAG When the Aggregator power is on, all uplink ports are configured in a single LAG (LAG 128). Server-Facing LAGs Server-facing ports are configured as individual ports by default. If you configure a server NIC in standalone, stacking, or VLT mode for LACP-based NIC teaming, server-facing ports are automatically configured as part of dynamic LAGs. The LAG range 1 to 127 is reserved for server-facing LAGs.
• Creating a Port Channel (mandatory) • Adding a Physical Interface to a Port Channel (mandatory) • Reassigning an Interface to a New Port Channel (optional) • Configuring the Minimum Oper Up Links in a Port Channel (optional) • Configuring VLAN Tags for Member Interfaces (optional) • Deleting or Disabling a Port Channel (optional) Creating a Port Channel You can create up to 128 port channels with four port members per group on the Aggregator. To configure a port channel, use the following commands. 1.
The interface variable is the physical interface type and slot/port information. 2. Double check that the interface was added to the port channel. INTERFACE PORT-CHANNEL mode show config To view the port channel’s status and channel members in a tabular format, use the show interfaces port-channel brief command in EXEC Privilege mode, as shown in the following example.
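Putting the two mandatory steps together, a minimal sketch might look like the following; the port-channel number and member port are placeholders:

```
Dell(conf)#interface port-channel 2
Dell(conf-if-po-2)#channel-member tengigabitethernet 0/5
Dell(conf-if-po-2)#no shutdown
Dell(conf-if-po-2)#show config
```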
When more than one interface is added to a Layer 2-port channel, Dell Networking OS selects one of the active interfaces in the port channel to be the primary port. The primary port replies to flooding and sends protocol data units (PDUs). An asterisk in the show interfaces port-channel brief command indicates the primary port. As soon as a physical interface is added to a port channel, the properties of the port channel determine the properties of the physical interface.
channel-member TenGigabitEthernet 0/8 shutdown Dell(conf-if-po-3)# Configuring the Minimum Oper Up Links in a Port Channel You can configure the minimum links in a port channel (LAG) that must be in “oper up” status to consider the port channel to be in “oper up” status. To set the “oper up” status of your links, use the following command. • Enter the number of links in a LAG that must be in “oper up” status. INTERFACE mode minimum-links number The default is 1.
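For example, to require at least two member links to be in "oper up" status before the port channel itself is reported up, the command might be applied as follows (the port-channel number and threshold are illustrative):

```
Dell(conf)#interface port-channel 128
Dell(conf-if-po-128)#minimum-links 2
```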
Deleting or Disabling a Port Channel To delete or disable a port channel, use the following commands. • Delete a port channel. CONFIGURATION mode no interface portchannel channel-number • Disable a port channel. shutdown When you disable a port channel, all interfaces within the port channel are operationally down also. Configuring Auto LAG You can enable or disable auto LAG on the server-facing interfaces. By default, auto LAG is enabled.
For the interface level auto LAG configurations, use the show interface command.
Configuring the Minimum Number of Links to be Up for Uplink LAGs to be Active You can activate the LAG bundle for uplink interfaces or ports (the uplink port-channel is LAG 128) on the I/O Aggregator only when a minimum number of member interfaces of the LAG bundle is up. For example, based on your network deployment, you may want the uplink LAG bundle to be activated only if a certain number of member interface links is also in the up state.
Optimizing Traffic Disruption Over LAG Interfaces On IOA Switches in VLT Mode When you use the write memory command while an Aggregator operates in VLT mode, the VLT LAG configurations are saved in nonvolatile storage (NVS). By restoring the settings saved in NVS, the VLT ports come up quicker on the primary VLT node and traffic disruption is reduced. The delay in restoring the VLT LAG parameters is reduced (90 seconds by default) on the secondary VLT peer node before it becomes operational.
Monitoring the Member Links of a LAG Bundle You can examine and view the operating efficiency and the traffic-handling capacity of member interfaces of a LAG or port channel bundle. This method of analyzing and tracking the number of packets processed by the member interfaces helps you manage and distribute the packets that are handled by the LAG bundle.
Port-channel 128 is up, line protocol is up Created by LACP protocol Hardware address is 00:01:e8:e1:e1:c1, Current address is 00:01:e8:e1:e1:c1 Interface index is 1107755136 Minimum number of links to bring Port-channel up is 1 Internet address is not set Mode of IP Address Assignment : NONE DHCP Client-ID :lag1280001e8e1e1c1 MTU 12000 bytes, IP MTU 11982 bytes LineSpeed 40000 Mbit Members in this channel: Te0/9 Te0/10 Te 0/11 Te0/12 ARP type: ARPA, ARP Timeout 04:00:00 Last clearing of "show interface" co
show interfaces port-channel 1 Command Example Dell# show interfaces port-channel 1 Port-channel 1 is up, line protocol is up Created by LACP protocol Hardware address is 00:01:e8:e1:e1:c1, Current address is 00:01:e8:e1:e1:c1 Interface index is 1107755009 Minimum number of links to bring Port-channel up is 1 Internet address is not set Mode of IP Address Assignment : NONE DHCP Client-ID :lag10001e8e1e1c1 MTU 12000 bytes, IP MTU 11982 bytes LineSpeed 10000 Mbit Members in this channel: Te 0/12(U) ARP type:
Multiple Uplink LAGs with 10G Member Ports The following sample commands configure multiple dynamic uplink LAGs with 10G member ports based on LACP. 1. Bring up all the ports. Dell#configure Dell(conf)#int range tengigabitethernet 0/1 - 12 Dell(conf-if-range-te-0/1-12)#no shutdown 2. Associate the member ports into LAG-10 and 11.
1000 Active U Po11(Te 0/6) T Po10(Te 0/4-5)
1001 Active T Po11(Te 0/6)
Dell# 5. Show LAG member ports utilization.
12 Layer 2 The Aggregator supports CLI commands to manage the MAC address table: • Clearing the MAC Address Entries • Displaying the MAC Address Table The Aggregator auto-configures with support for Network Interface Controller (NIC) Teaming. NOTE: On an Aggregator, all ports are configured by default as members of all (4094) VLANs, including the default VLAN. All VLANs operate in Layer 2 mode.
• Display the contents of the MAC address table. EXEC Privilege mode NOTE: This command is available only in PMUX mode. show mac-address-table [address | aging-time [vlan vlan-id]| count | dynamic | interface | static | vlan] – address: displays the specified entry. – aging-time: displays the configured aging-time. – count: displays the number of dynamic and static entries for all VLANs, and the total number of entries. – dynamic: displays only dynamic entries.
Figure 17. Redundant NOCs with NIC Teaming MAC Address Station Move When you use NIC teaming, consider that the server MAC address is originally learned on Port 0/1 of the switch (see figure below). If the NIC fails, the same MAC address is learned on Port 0/5 of the switch. The MAC address is disassociated with one port and reassociated with another in the ARP table; in other words, the ARP entry is “moved”. The Aggregator is auto-configured to support MAC Address station moves. Figure 18.
MAC Move Optimization Station-move detection takes 5000ms because this is the interval at which the detection algorithm runs.
13 Link Layer Discovery Protocol (LLDP) Link layer discovery protocol (LLDP) advertises connectivity and management from the local station to the adjacent stations on an IEEE 802 LAN. LLDP facilitates multi-vendor interoperability by using standard management tools to discover and make available a physical topology for network management. The Dell Networking OS implementation of LLDP is based on the IEEE 802.1AB standard.
There are five types of TLVs (as shown in the below table). All types are mandatory in the construction of an LLDPDU except Optional TLVs. You can configure the inclusion of individual Optional TLVs. Type, Length, Value (TLV) Types Type TLV Description 0 End of LLDPDU Marks the end of an LLDPDU. 1 Chassis ID The Chassis ID TLV is a mandatory TLV that identifies the chassis containing the IEEE 802 LAN station associated with the transmitting LLDP agent.
CONFIGURATION versus INTERFACE Configurations All LLDP configuration commands are available in PROTOCOL LLDP mode, which is a sub-mode of the CONFIGURATION mode and INTERFACE mode. • Configurations made at the CONFIGURATION level are global; that is, they affect all interfaces on the system. • Configurations made at the INTERFACE level affect only the specific interface; they override CONFIGURATION level configurations.
To undo an LLDP configuration, precede the relevant command with the keyword no. Advertising TLVs You can configure the system to advertise TLVs out of all interfaces or out of specific interfaces. • If you configure the system globally, all interfaces send LLDPDUs with the specified TLVs. • If you configure an interface, only the interface sends LLDPDUs with the specified TLVs.
Figure 21. Configuring LLDP Optional TLVs The Dell Networking Operating System (OS) supports the following optional TLVs: Management TLVs, IEEE 802.1 and 802.3 organizationally specific TLVs, and TIA-1057 organizationally specific TLVs. Management TLVs A management TLV is an optional TLV subtype. This kind of TLV contains essential management information about the sender. Organizationally Specific TLVs A professional organization or a vendor can define organizationally specific TLVs.
Type TLV Description 5 System name A user-defined alphanumeric string that identifies the system. 6 System description A user-defined alphanumeric string that identifies the system. 7 System capabilities Identifies the chassis as one or more of the following: repeater, bridge, WLAN Access Point, Router, Telephone, DOCSIS cable device, end station only, or other. 8 Management address Indicates the network address of the management interface.
LLDP-MED Capabilities TLV The LLDP-MED capabilities TLV communicates the types of TLVs that the endpoint device and the network connectivity device support. LLDP-MED network connectivity devices must transmit the Network Policies TLV. • The value of the LLDP-MED capabilities field in the TLV is a 2–octet bitmap, each bit represents an LLDP-MED capability (as shown in the following table). • The possible values of the LLDP-MED device type are shown in the following table.
• DSCP value An integer represents the application type (the Type integer shown in the following table), which indicates a device function for which a unique network policy is defined. An individual LLDP-MED network policy TLV is generated for each application type that you specify with the CLI (refer to Advertising TLVs).
• Power Source — there are two possible power sources: primary and backup. The Dell Networking system is a primary power source, which corresponds to a value of 1, based on the TIA-1057 specification. • Power Priority — there are three possible priorities: Low, High, and Critical. On Dell Networking systems, the default power priority is High, which corresponds to a value of 2 based on the TIA-1057 specification. You can configure a different power priority through the CLI.
switchport no shutdown R1(conf-if-te-0/3)#protocol lldp R1(conf-if-te-0/3-lldp)#show config ! protocol lldp R1(conf-if-te-0/3-lldp)# Viewing Information Advertised by Adjacent LLDP Agents To view brief information about adjacent devices or to view all the information that neighbors are advertising, use the following commands. • Display brief information about adjacent devices. show lldp neighbors • Display all of the information that neighbors are advertising.
Configuring LLDPDU Intervals LLDPDUs are transmitted periodically; the default interval is 30 seconds. To configure LLDPDU intervals, use the following command. • Configure a non-default transmit interval.
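As a hedged sketch, the transmit interval is typically set in PROTOCOL LLDP mode with the hello command; the 10-second value is only an example and should be checked against the command reference for your release:

```
R1(conf)#protocol lldp
R1(conf-lldp)#hello 10
R1(conf-lldp)#show config
```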
advertise dot1-tlv port-protocol-vlan-id port-vlan-id advertise dot3-tlv max-frame-size advertise management-tlv system-capabilities system-description multiplier 5 no disable R1(conf-lldp)#no multiplier R1(conf-lldp)#show config ! protocol lldp advertise dot1-tlv port-protocol-vlan-id port-vlan-id advertise dot3-tlv max-frame-size advertise management-tlv system-capabilities system-description no disable R1(conf-lldp)# Clearing LLDP Counters You can clear LLDP statistics that are maintained on an Aggregat
Figure 26. The debug lldp detail Command — LLDPDU Packet Dissection Relevant Management Objects Dell Networking OS supports all IEEE 802.1AB MIB objects. The following tables list the objects associated with: • received and transmitted TLVs • the LLDP configuration on the local agent • IEEE 802.1AB Organizationally Specific TLVs • received and transmitted LLDP-MED TLVs Table 13.
MIB Object Category LLDP Statistics LLDP Variable LLDP MIB Object Description mibMgmtAddrInstanceTxEnable lldpManAddrPortsTxEnable The management addresses defined for the system and the ports through which they are enabled for transmission. statsAgeoutsTotal lldpStatsRxPortAgeoutsTotal Total number of times that a neighbor’s information is deleted on the local system due to an rxInfoTTL timer expiration.
TLV Type TLV Name TLV Variable management address length management address subtype management address interface numbering subtype interface number OID System LLDP MIB Object Remote lldpRemSysCapEnabled Local lldpLocManAddrLen Remote lldpRemManAddrLen Local lldpLocManAddrSubtype Remote lldpRemManAddrSubtype Local lldpLocManAddr Remote lldpRemManAddr Local lldpLocManAddrIfSubtype Remote lldpRemManAddrIfSubtyp e Local lldpLocManAddrIfId Remote lldpRemManAddrIfId Local lldpLoc
Table 16.
TLV Sub-Type TLV Name TLV Variable System LLDP-MED MIB Object 4 Extended Power via MDI Power Device Type Local lldpXMedLocXPoEDevice Type Remote lldpXMedRemXPoEDevice Type Local lldpXMedLocXPoEPSEPo werSource Power Source lldpXMedLocXPoEPDPow erSource Remote lldpXMedRemXPoEPSEP owerSource lldpXMedRemXPoEPDPo werSource Power Priority Local lldpXMedLocXPoEPDPow erPriority lldpXMedLocXPoEPSEPor tPDPriority Remote lldpXMedRemXPoEPSEP owerPriority lldpXMedRemXPoEPDPo werPriority Power Value
14 Port Monitoring The Aggregator supports user-configured port monitoring. See Configuring Port Monitoring for the configuration commands to use. Port monitoring copies all incoming or outgoing packets on one port and forwards (mirrors) them to another port. The source port is the monitored port (MD) and the destination port is the monitoring port (MG). Supported Modes Standalone, PMUX, VLT, Stacking Configuring Port Monitoring To configure port monitoring, use the following commands. 1.
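A hedged sketch of a monitoring session configuration follows; the session number, source port (MD), destination port (MG), and direction are placeholders:

```
Dell(conf)#monitor session 0
Dell(conf-mon-sess-0)#source tengigabitethernet 0/1 destination tengigabitethernet 0/2 direction rx
```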
Example of Viewing Port Monitoring Configuration To display information on currently configured port-monitoring sessions, use the show monitor session command from EXEC Privilege mode.
• A source port (MD) can only be monitored by one destination port (MG).
15 Security Security features are supported on the I/O Aggregator. This chapter describes several ways to provide access security to the Dell Networking system. For details about all the commands described in this chapter, refer to the Security chapter in the Dell PowerEdge FN I/O Aggregator Command Line Reference Guide. Supported Modes Standalone, PMUX, VLT, Stacking Understanding Banner Settings This functionality is supported on the Aggregator.
AAA accounting enables tracking of services that users are accessing and the amount of network resources being consumed by those services. When you enable AAA accounting, the network server reports user activity to the security server in the form of accounting records. Each accounting record comprises accounting attribute/value (AV) pairs and is stored on the access control server.
Configuring Accounting of EXEC and Privilege-Level Command Usage The network access server monitors the accounting functions defined in the TACACS+ attribute/value (AV) pairs. • Configure AAA accounting to monitor accounting functions defined in TACACS+. CONFIGURATION mode aaa accounting system default start-stop tacacs+ aaa accounting commands 15 default start-stop tacacs+ System accounting can use only the default method list.
AAA Authentication Dell Networking OS supports a distributed client/server system implemented through authentication, authorization, and accounting (AAA) to help secure networks against unauthorized access.
– radius: use the RADIUS servers configured with the radius-server host command. – tacacs+: use the TACACS+ servers configured with the tacacs-server host command. 2. Enter LINE mode. CONFIGURATION mode line {aux 0 | console 0 | vty number [... end-number]} 3. Assign a method-list-name or the default list to the terminal line. LINE mode login authentication {method-list-name | default} To view the configuration, use the show config command in LINE mode or the show running-config in EXEC Privilege mode.
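The steps above can be sketched end to end as follows; the method-list name remote-auth and the VTY range are hypothetical:

```
Dell(conf)#aaa authentication login remote-auth radius local
Dell(conf)#line vty 0 9
Dell(config-line-vty)#login authentication remote-auth
```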
Dell(config)# radius-server host x.x.x.x key Dell(config)# tacacs-server host x.x.x.x key To use local authentication for enable secret on the console, while using remote authentication on VTY lines, issue the following commands.
You can configure passwords to control access to the box and assign different privilege levels to users. The Dell Networking OS supports the use of passwords when you log in to the system and when you enter the enable command. If you move between privilege levels, you are prompted for a password if you move to a higher privilege level. Configuration Task List for Privilege Levels The following list has the configuration tasks for privilege levels and passwords.
– password: Enter a string. To change only the password for the enable command, configure only the password parameter. To view the configuration for the enable secret command, use the show running-config command in EXEC Privilege mode. In custom-configured privilege levels, the enable command is always available. No matter what privilege level you entered, you can enter the enable 15 command to access and configure all CLIs.
• reset: return the command to its default privilege mode. To view the configuration, use the show running-config command in EXEC Privilege mode. The following example shows a configuration to allow a user john to view only EXEC mode commands and all snmp-server commands. Because the snmp-server commands are enable level commands and, by default, found in CONFIGURATION mode, also assign the launch command for CONFIGURATION mode, configure, to the same privilege level as the snmp-server commands.
Specifying LINE Mode Password and Privilege You can specify a password authentication of all users on different terminal lines. The user’s privilege level is the same as the privilege level assigned to the terminal line, unless a more specific privilege level is assigned to the user. To specify a password for the terminal line, use the following commands. • Configure a custom privilege level for the terminal lines. LINE mode privilege level level • – level level: The range is from 0 to 15.
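A minimal sketch of a terminal-line password and privilege assignment follows; the VTY range, level, and password string are placeholders:

```
Dell(conf)#line vty 0 4
Dell(config-line-vty)#privilege level 8
Dell(config-line-vty)#password myLinePassw0rd
```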
Transactions between the RADIUS server and the client are encrypted (the users’ passwords are not sent in plain text). RADIUS uses UDP as the transport protocol between the RADIUS server host and the client. For more information about RADIUS, refer to RFC 2865, Remote Authentication Dial-in User Service.
If RADIUS denies authorization, the session ends (RADIUS must not be the last method specified). Applying the Method List to Terminal Lines To enable RADIUS AAA login authentication for a method list, apply it to a terminal line. To configure a terminal line for RADIUS authentication and authorization, use the following commands. • Enter LINE mode. CONFIGURATION mode line {aux 0 | console 0 | vty number [end-number]} • Enable AAA login authentication for the specified RADIUS method list.
Setting Global Communication Parameters for all RADIUS Server Hosts You can configure global communication parameters (auth-port, key, retransmit, and timeout parameters) and specific host communication parameters on the same system. However, if you configure both global and specific host parameters, the specific host parameters override the global parameters for that RADIUS server host. To set global communication parameters for all RADIUS server hosts, use the following commands.
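A sketch of these global settings follows; the key string and timer values are placeholders:

```
Dell(conf)#radius-server key myRadiusKey
Dell(conf)#radius-server retransmit 5
Dell(conf)#radius-server timeout 10
```

Any of these values configured on a specific host with radius-server host would override the global value for that host only.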
• TACACS+ Remote Authentication Specifying a TACACS+ Server Host For a complete listing of all commands related to TACACS+, refer to the Security chapter in the Dell Networking OS Command Reference Guide. Choosing TACACS+ as the Authentication Method One of the login authentication methods available is TACACS+ and the user’s name and password are sent for authentication to the TACACS hosts specified.
aaa authorization commands 15 default tacacs+ none aaa accounting exec default start-stop tacacs+ aaa accounting commands 1 default start-stop tacacs+ aaa accounting commands 15 default start-stop tacacs+ Dell(conf)# Dell(conf)#do show run tacacs+ ! tacacs-server key 7 d05206c308f4d35b tacacs-server host 10.10.10.10 timeout 1 Dell(conf)#tacacs-server key angeline Dell(conf)#%RPM0-P:CP %SEC-5-LOGIN_SUCCESS: Login successful for user admin on vty0 (10.11.9.
Example of Connecting with a TACACS+ Server Host To specify multiple TACACS+ server hosts, configure the tacacs-server host command multiple times. If you configure multiple TACACS+ server hosts, Dell Networking OS attempts to connect with them in the order in which they were configured. To view the TACACS+ configuration, use the show running-config tacacs+ command in EXEC Privilege mode. To delete a TACACS+ server host, use the no tacacs-server host {hostname | ip-address} command.
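The commands just described can be sketched as follows (the addresses are placeholders); with two hosts configured, Dell Networking OS tries 10.10.10.10 first because it was configured first:

```
Dell(conf)#tacacs-server host 10.10.10.10
Dell(conf)#tacacs-server host 10.10.10.11
Dell(conf)#do show running-config tacacs+
Dell(conf)#no tacacs-server host 10.10.10.11
```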
sha1,diffie-hellman-group14-sha1.
Password Authentication : enabled.
Hostbased Authentication : disabled.
RSA Authentication : disabled.
Vty Encryption HMAC Remote IP
Dell(conf)#

To disable SSH server functions, use the no ip ssh server enable command.

Using SCP with SSH to Copy a Software Image

To use secure copy (SCP) to copy a software image through an SSH connection from one switch to another, use the following commands. On the chassis, invoke SCP.
Authentication Method: RADIUS
VTY access-class support? YES
Username access-class support? NO
Remote authorization support? YES (with Dell Networking OS version 6.1.1.0 and later)

Dell Networking OS provides several ways to configure access classes for VTY lines, including:
• VTY Line Local Authentication and Authorization
• VTY Line Remote Authentication and Authorization

VTY Line Local Authentication and Authorization

Dell Networking OS retrieves the access class from the local database.
Dell(conf-ext-nacl)#deny any Dell(conf)# Dell(conf)#aaa authentication login tacacsmethod tacacs+ Dell(conf)#tacacs-server host 256.1.1.
16 Simple Network Management Protocol (SNMP)

Network management stations use SNMP to retrieve or alter management data from network elements. A datum of management information is called a managed object; the value of a managed object can be static or variable. Network elements store managed objects in a database called a management information base (MIB).
agents and managers that are allowed to interact. Communities are necessary to secure communication between SNMP managers and agents; SNMP agents do not respond to requests from management stations that are not part of the community. The Dell Networking OS enables SNMP automatically when you create an SNMP community and displays the following message. You must specify whether members of the community may retrieve values in Read-Only mode. Read-write access is not supported.
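Creating a community as described above might look like the following sketch (the community name is a placeholder); the ro keyword grants the Read-Only access that the Aggregator supports:

```
Dell(conf)#snmp-server community public ro
```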
Operating System Version: 1.0 Application Software Version: E8-3-17-46 Series: I/O-Aggregator Copyright (c) 1999-2012 by Dell Inc. All Rights Reserved. Build Time: Sat Jul 28 03:20:24 PDT 2012 SNMPv2-MIB::sysObjectID.0 = OID: SNMPv2-SMI::enterprises.6027.1.4.2 DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (77916) 0:12:59.16 SNMPv2-MIB::sysContact.0 = STRING: SNMPv2-MIB::sysName.0 = STRING: FTOS SNMPv2-MIB::sysLocation.0 = STRING: SNMPv2-MIB::sysServices.
G - GVRP tagged, M - Vlan-stack NUM Status Description 10 Inactive Q Ports U TenGi 0/2 [Unix system output] > snmpget -v2c -c mycommunity 10.11.131.185 .1.3.6.1.2.1.17.7.1.4.3.1.2.1107787786 SNMPv2-SMI::mib-2.17.7.1.4.3.1.2.1107787786 = Hex-STRING: 40 00 00 00 00 00 00 00 00 00 00 The value 40 is in the first set of 7 hex pairs, indicating that these ports are in Stack Unit 0. The hex value 40 is 0100 0000 in binary. As described, the left-most position in the string represents Port 1.
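The decoding just described can be sketched in Python. The layout assumed here (a fixed number of octets per stack unit, with the most significant bit of each octet representing the lowest-numbered port) follows the description in the text; the 7-octets-per-unit grouping is an assumption based on the "first set of 7 hex pairs" wording:

```python
def decode_port_bitmap(hex_string, octets_per_unit=7):
    """Decode an SNMP port-list Hex-STRING into (stack_unit, port) tuples.

    Each stack unit occupies octets_per_unit octets; within an octet the
    most significant (left-most) bit represents the lowest-numbered port.
    """
    members = []
    for i, octet in enumerate(hex_string.split()):
        value = int(octet, 16)
        for bit in range(8):
            if value & (0x80 >> bit):
                unit = i // octets_per_unit
                port = (i % octets_per_unit) * 8 + bit + 1
                members.append((unit, port))
    return members

# 0x40 = 0100 0000: the second position from the left is set, so the
# member port is Port 2 on Stack Unit 0.
print(decode_port_bitmap("40 00 00 00 00 00 00 00 00 00 00"))  # [(0, 2)]
```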
VlanId Mac Address Type Interface State 1 00:01:e8:06:95:ac Dynamic Tengig 0/7 Active ----------------Query from Management Station--------------------->snmpwalk -v 2c -c techpubs 10.11.131.162 .1.3.6.1.2.1.17.4.3.1 SNMPv2-SMI::mib-2.17.4.3.1.1.0.1.232.6.149.172 = Hex-STRING: 00 01 E8 06 95 AC Example of Fetching Dynamic MAC Addresses on a Non-default VLANs In the following example, TenGigabitEthernet 0/7 is moved to VLAN 1000, a non-default VLAN.
logic is not changed. Because zero is reserved for logical interfaces, physical port numbering starts from 1. The first interface is assigned port number 1; adding one for each subsequent interface means the numbering effectively begins at 2 for interface 0/1. Therefore, the port number for interface 0/3 is 4.
down: Tengig 0/1" 2010-02-10 14:22:39 10.16.130.4 [10.16.130.4]: SNMPv2-MIB::sysUpTime.0 = Timeticks: (8500842) 23:36:48.42 SNMPv2-MIB::snmpTrapOID.0 = OID: IF-MIB::linkDown IF-MIB::ifIndex.1107755009 = INTEGER: 1107755009 SNMPv2-SMI::enterprises.6027.3.1.1.4.1.2 = STRING: "OSTATE_DN: Changed interface state down: Po 1" 2010-02-10 14:22:40 10.16.130.4 [10.16.130.4]: SNMPv2-MIB::sysUpTime.0 = Timeticks: (8500932) 23:36:49.32 SNMPv2-MIB::snmpTrapOID.0 = IF-MIB::linkUp IF-MIB::ifIndex.
SNMP Traps for Link Status To enable SNMP traps for link status changes, use the snmp-server enable traps snmp linkdown linkup command. Standard VLAN MIB When the Aggregator is in Standalone mode, where all the 4000 VLANs are part of all server side interfaces as well as the single uplink LAG, it takes a long time (30 seconds or more) for external management entities to discover the entire VLAN membership table of all the ports.
In standalone mode, there are 4000 VLANs by default, and the SNMP output displays entries for all of them. To view a particular VLAN, issue the SNMP query with the VLAN interface index.
Dell#show interface vlan 1010 | grep “Interface index”
Interface index is 1107526642
Use the output of the above command in the SNMP query.
snmpwalk -Os -c public -v 1 10.16.151.151 1.3.6.1.2.1.17.7.1.4.2.1.4.0.1107526642 mib-2.17.7.1.4.2.1.4.0.
MIB Object / OID / Description:
• chSysCoresTimeCreated, 1.3.6.1.4.1.6027.3.19.1.2.9.1.3: Contains the time at which core files are created.
• chSysCoresStackUnitNumber, 1.3.6.1.4.1.6027.3.19.1.2.9.1.4: Contains the stack unit or processor from which the core file originated.
• chSysCoresProcess, 1.3.6.1.4.1.6027.3.19.1.2.9.1.5: Contains the names of the processes that generated each core file.
17 Stacking

An Aggregator auto-configures to operate in standalone mode. To use an Aggregator in a stack, you must manually configure it using the CLI to operate in stacking mode. Stacking is supported on the FN410S and FN410T Aggregators, with ports 9 and 10 as the stack ports. The Aggregator supports both ring and daisy-chain topologies; only Aggregators of the same type can be stacked together. FN410S and FN410T Aggregators support two-unit in-chassis stacking and stacking of up to six units across chassis.
No record of previous stack mastership is kept when a stack loses power. As the stack reboots, the election process once again determines the Master and Standby switches. As long as the priority has not changed on any member, the stack retains the same Master and Standby.
NOTE: Each stack member's role (including the Master and Standby) can be defined by the user at any time by setting the priority.
1. Insert a cable in port 9 on the first aggregator. 2. Connect the cable to port 10 on the next aggregator. 3. Continue this pattern on up to 6 aggregators. 4. Connect a cable from port 9 on the last aggregator to port 10 on the first aggregator. This creates a ring topology. NOTE: The resulting topology allows the stack to function as a single switch with resilient failover capabilities. Accessing the CLI To configure a stack, you must access the stack master in one of the following ways.
Adding a Stack Unit You can add a new unit to an existing stack both when the unit has no stacking ports (stack groups) configured and when the unit already has stacking ports configured. If the units to be added to the stack have been previously used, they are assigned the smallest available unit ID in the stack. To add a standalone Aggregator to a stack, follow these steps: 1. Power on the switch. 2.
Removing an Aggregator from a Stack

To remove an Aggregator from a stack, follow these steps:
1. Disconnect the stacking cables from the unit. The unit can be powered on or off and can be online or offline.
2. Log on to the CLI and enter Global Configuration mode.
Login: username
Password: *****
Dell> enable
Dell# configure
3. Configure the Aggregator to operate in standalone mode.
CONFIGURATION
stack-unit 0 iom-mode standalone
4.
show system [brief] • Displays the stack groups allocated on a stacked switch. The range is from 0 to 5. show system stack-unit unit-number stack-group configured • Displays the port numbers that correspond to the stack groups on a switch. The valid stack-unit numbers are from 0 to 5. show system stack-unit unit-number stack-group • Displays the type of stack topology (ring or daisy chain) with a list of all stacked ports, port status, link speed, and peer stackunit connection.
Master Switch Fails • Problem: The master switch fails due to a hardware fault, software crash, or power loss. • Resolution: A failover procedure begins: 1. Keep-alive messages from the Aggregator master switch time out after 60 seconds and the switch is removed from the stack. 2. The standby switch takes the master role. Data traffic on the new master switch is uninterrupted. Protocol traffic is managed by the control plane. 3. A member switch is elected as the new standby.
Card Problem — Resolved Dell#show system brief Stack MAC : 00:1e:c9:f1:04:82 -- Stack Info -Unit UnitType Status ReqTyp CurTyp Version Ports ---------------------------------------------------------------------0 Management online PE-FN-410S-IOA PE-FN-410S-IOA 1-0(0-1864) 12 1 Standby online PE-FN-410S-IOA PE-FN-410S-IOA 1-0(0-1864) 12 2 Member not present 3 Member not present 4 Member not present 5 Member not present Stack Unit in Card-Problem State Due to Configuration Mismatch • • Problem: A stack unit
...................................Writing......................................... ................................................................................... ................................................................................... 31972272 bytes successfully copied System image upgrade completed successfully.
config in flash by default Synchronizing data to peer Stack-unit !!!! ....
18 Broadcast Storm Control

On the Aggregator, the broadcast storm control feature is enabled by default on all ports and is disabled on a port when an iSCSI storage device is detected. Broadcast storm control is re-enabled as soon as the connection with an iSCSI device ends. Broadcast traffic on Layer 2 interfaces is limited or suppressed during a broadcast storm. You can view the status of a broadcast storm control operation by using the show io-aggregator broadcast storm-control status command.
19 System Time and Date

The Aggregator auto-configures the hardware and software clocks with the current time and date. If necessary, you can manually set and maintain the system time and date using the CLI commands described in this chapter.
– timezone-name: Enter the name of the timezone. Do not use spaces.
– offset: Enter one of the following:
* a number from 1 to 23 as the number of hours ahead of UTC for the timezone.
* a minus sign (-) followed by a number from 1 to 23 as the number of hours behind UTC.

Example of the clock timezone Command
Dell#conf
Dell(conf)#clock timezone Pacific -8
Dell#

Setting Daylight Savings Time

Dell Networking OS supports setting the system to daylight savings time once or on a recurring basis every year.
clock summer-time time-zone recurring start-week start-day start-month start-time endweek end-day end-month end-time [offset] – time-zone: Enter the three-letter name for the time zone. This name displays in the show clock output. – start-week: (OPTIONAL) Enter one of the following as the week that daylight saving begins and then enter values for start-day through end-time: * week-number: Enter a number from 1 to 4 as the number of the week in the month to start daylight saving time.
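As a sketch of the recurring syntax above (the zone name, dates, and time format are illustrative assumptions; U.S. rules start daylight saving on the second Sunday in March and end it on the first Sunday in November):

```
Dell(conf)#clock summer-time PDT recurring 2 Sun Mar 2:00 1 Sun Nov 2:00
```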
20 Uplink Failure Detection (UFD)

Supported Modes: Standalone, PMUX, VLT, Stacking

Feature Description

UFD provides detection of the loss of upstream connectivity and, if used with network interface controller (NIC) teaming, automatic recovery from a failed link. A switch provides upstream connectivity for devices, such as servers. If a switch loses its upstream connectivity, downstream devices also lose their connectivity.
Figure 28. Uplink Failure Detection How Uplink Failure Detection Works UFD creates an association between upstream and downstream interfaces. The association of uplink and downlink interfaces is called an uplink-state group. An interface in an uplink-state group can be a physical interface or a port-channel (LAG) aggregation of physical interfaces. An enabled uplink-state group tracks the state of all assigned upstream interfaces.
Figure 29. Uplink Failure Detection Example

If only one of the upstream interfaces in an uplink-state group goes down, a specified number of downstream ports associated with the upstream interface are put into a Link-Down state. You can configure this number, which is calculated from the ratio of the upstream port bandwidth to the downstream port bandwidth in the same uplink-state group.
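A hedged sketch of the bandwidth-ratio rule just described (the exact rounding behavior is an assumption; bandwidths are in Gbps):

```python
def downstream_links_to_disable(upstream_bw_gbps, downstream_bw_gbps):
    """Number of downstream ports placed in a Link-Down state when one
    upstream interface of the given bandwidth fails, using the ratio of
    upstream to downstream port bandwidth (integer division assumed)."""
    return upstream_bw_gbps // downstream_bw_gbps

# One failed 40 Gbps uplink over 10 Gbps downstream ports
print(downstream_links_to_disable(40, 10))  # 4
```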
For example, as shown previously, the switch/router with UFD detects the uplink failure and automatically disables the associated downstream link port to the server. The server with NIC teaming detects the disabled link and automatically switches over to the backup link in order to continue to transmit traffic upstream.

Important Points to Remember

When you configure UFD, the following conditions apply.
• You can configure up to 16 uplink-state groups.
enable Dell(conf)#uplink-state-group 1 Dell(conf-uplink-state-group-1)#enable To disable the uplink group tracking, use the no enable command. 3. Change the default timer.
Where port-range and port-channel-range specify a range of ports separated by a dash (-) and/or individual ports/port channels in any order; for example:
upstream gigabitethernet 0/1-2,5,9,11-12
downstream port-channel 1-3,5
• A comma is required to separate each port and port-range entry.
To delete an interface from the group, use the no {upstream | downstream} interface command.
4.
Clearing a UFD-Disabled Interface (in PMUX mode) You can manually bring up a downstream interface in an uplink-state group that UFD disabled and is in a UFD-Disabled Error state. To re-enable one or more disabled downstream interfaces and clear the UFD-Disabled Error state, use the following command. • Re-enable a downstream interface on the switch/router that is in a UFD-Disabled Error State so that it can send and receive traffic.
Displaying Uplink Failure Detection To display information on the UFD feature, use any of the following commands. • Display status information on a specified uplink-state group or all groups. EXEC mode show uplink-state-group [group-id] [detail] • – group-id: The values are 1 to 16. – detail: displays additional status information on the upstream and downstream interfaces in each group. Display the current status of a port or port-channel interface assigned to an uplink-state group.
Hardware is Force10Eth, address is 00:01:e8:32:7a:47 Current address is 00:01:e8:32:7a:47 Interface index is 280544512 Internet address is not set MTU 1554 bytes, IP MTU 1500 bytes LineSpeed 1000 Mbit, Mode auto Flowcontrol rx off tx off ARP type: ARPA, ARP Timeout 04:00:00 Last clearing of "show interface" counters 00:25:46 Queueing strategy: fifo Input Statistics: 0 packets, 0 bytes 0 64-byte pkts, 0 over 64-byte pkts, 0 over 127-byte pkts 0 over 255-byte pkts, 0 over 511-byte pkts, 0 over 1023-byte pkts
Group 3 Dell(conf-uplink-state-group-3)#downstream tengigabitethernet 0/1-2,5,9,11-12 Dell(conf-uplink-state-group-3)#downstream disable links 2 Dell(conf-uplink-state-group-3)#upstream tengigabitethernet 0/3-4 Dell(conf-uplink-state-group-3)#description Testing UFD feature Dell(conf-uplink-state-group-3)#show config ! uplink-state-group 3 description Testing UFD feature downstream disable links 2 downstream TenGigabitEthernet 0/1-2,5,9,11-12 upstream TenGigabitEthernet 0/3-4 Dell#show running-config uplink
21 PMUX Mode of the IO Aggregator

This chapter provides an overview of the PMUX mode.

I/O Aggregator (IOA) Programmable MUX (PMUX) Mode

IOA PMUX is a mode that provides flexibility of operation with added configurability. This involves creating multiple LAGs, configuring VLANs on uplinks and the server side, configuring data center bridging (DCB) parameters, and so forth. By default, IOA starts up in IOA Standalone mode.
Configuring the Commands without a Separate User Account Starting with Dell Networking OS version 9.3(0.0), you can configure the PMUX mode CLI commands without having to configure a new, separate user profile. The user profile you defined to access and log in to the switch is sufficient to configure the PMUX mode commands. The IOA PMUX Mode CLI Commands section lists the PMUX mode CLI commands that you can now configure without a separate user account.
• Assures high availability. As shown in the following example, VLT presents a single logical Layer 2 domain from the perspective of attached devices that have a virtual link trunk terminating on separate chassis in the VLT domain. However, the two VLT chassis are independent Layer2/Layer3 (L2/L3) switches for devices in the upstream network. L2/L3 control plane protocols and system management features function normally in VLT mode.
O - OpenFlow Controller Port-channel LAG L 127 Mode L2 Status up Uptime 00:18:22 128 L2 up 00:00:00 Ports Fo 0/33 Fo 0/37 Fo 0/41 (Up)<<<<<<<
– Separately configure each VLT peer switch with the same VLT domain ID and the VLT version. If the system detects mismatches between VLT peer switches in the VLT domain ID or VLT version, the VLT Interconnect (VLTi) does not activate. To find the reason for the VLTi being down, use the show vlt statistics command to verify that there are mismatch errors, then use the show vlt brief command on each VLT peer to view the VLT version on the peer switch.
– The chassis backup link does not carry control plane information or data traffic. Its use is restricted to health checks only. • Virtual link trunks (VLTs) between access devices and VLT peer switches – To connect servers and access switches with VLT peer switches, you use a VLT port channel, as shown in Overview. – The discovery protocol running between VLT peers automatically generates the ID number of the port channel that connects an access device and a VLT switch.
If the VLTi link fails, the status of the remote VLT Primary Peer is checked using the backup link. If the remote VLT Primary Peer is available, the Secondary Peer disables all VLT ports to prevent loops. If all ports in the VLTi link fail or if the communication between VLTi links fails, VLT checks the backup link to determine the cause of the failure.
Verifying a VLT Configuration To monitor the operation or verify the configuration of a VLT domain, use any of the following show commands on the primary and secondary VLT switches. • Display information on backup link operation. EXEC mode show vlt backup-link • Display general status information about VLT domains currently configured on the switch.
----------------Destination: Peer HeartBeat status: HeartBeat Timer Interval: HeartBeat Timeout: UDP Port: HeartBeat Messages Sent: HeartBeat Messages Received: 10.11.200.
Example of the show vlt role Command Dell_VLTpeer1# show vlt role VLT Role ---------VLT Role: System MAC address: System Role Priority: Local System MAC address: Local System Role Priority: Primary 00:01:e8:8a:df:bc 32768 00:01:e8:8a:df:bc 32768 Dell_VLTpeer2# show vlt role VLT Role ---------VLT Role: System MAC address: System Role Priority: Local System MAC address: Local System Role Priority: Secondary 00:01:e8:8a:df:bc 32768 00:01:e8:8a:df:e6 32768 Example of the show running-config vlt Command Dell
Configuring Virtual Link Trunking (VLT Peer 1) Configure the backup link. Dell_VLTpeer1(conf)#interface ManagementEthernet 0/0 Dell_VLTpeer1(conf-if-ma-0/0)#ip address 10.11.206.23/ Dell_VLTpeer1(conf-if-ma-0/0)#no shutdown Dell_VLTpeer1(conf-if-ma-0/0)#exit Configure the VLT interconnect (VLTi).
Enable VLT and create a VLT domain with a backup-link VLT interconnect (VLTi). Dell_VLTpeer2(conf)#vlt domain 999 Dell_VLTpeer2(conf-vlt-domain)#peer-link port-channel 100 Dell_VLTpeer2(conf-vlt-domain)#back-up destination 10.11.206.23 Dell_VLTpeer2(conf-vlt-domain)#exit Configure the port channel to an attached device.
Dell Networking OS version mismatch
Behavior at peer up: A syslog error message is generated.
Behavior during run time: A syslog error message is generated.
Action to take: Follow the correct upgrade procedure for the unit with the mismatched Dell Networking OS version.

Remote VLT port channel status
Behavior at peer up: N/A
Behavior during run time: N/A
Action to take: Use the show vlt detail and show vlt brief commands to view the VLT port channel status information.

System MAC mismatch
Behavior at peer up: A syslog error message and an SNMP trap are generated.
22 NPIV Proxy Gateway

The N-port identifier virtualization (NPIV) Proxy Gateway (NPG) feature provides FCoE-FC bridging capability on the FN 2210S Aggregator, allowing server CNAs to communicate with SAN fabrics over the FN 2210S Aggregator.
The NPIV proxy gateway aggregates multiple locally connected server CNA ports into one or more upstream N port links, conserving the number of ports required on an upstream FC core switch while providing an FCoE-to-FC bridging functionality. The upstream N ports on an FX2 can connect to the same or multiple fabrics.
Term Description ENode port Port mode of a server-facing Aggregator with the Ethernet port that provides access to FCF functionality on a fabric. CNA port N-port functionality on an FCoE-enabled server port. A converged network adapter (CNA) can use one or more Ethernet ports. CNAs can encapsulate Fibre Channel frames in Ethernet for FCoE transport and de-encapsulate Fibre Channel frames from FCoE to native Fibre Channel.
• The FC-MAP value used to generate a fabric-provided MAC address. • The association between the FCoE VLAN ID and FC fabric ID where the desired storage arrays are installed. Each Fibre Channel fabric serves as an isolated SAN topology within the same physical network. • The priority used by a server to select an upstream FCoE forwarder (FCF priority). • FIP keepalive (FKA) advertisement timeout. NOTE: In each FCoE map, the fabric ID, FC-MAP value, and FCoE VLAN must be unique.
PG:1 TSA:ETS Priorities:4 BW:30 PFC:OFF PG:2 TSA:ETS Priorities:3 BW:40 PFC:ON Default FCoE map Dell(conf)#do show fcoe-map Fabric Name Fabric Id Vlan Id Vlan priority FC-MAP FKA-ADV-Period Fcf Priority Config-State Oper-State Members Fc 0/9 Te 0/4 SAN_FABRIC 1002 1002 3 0efc00 8 128 ACTIVE UP DCB_MAP_PFC_OFF Dell(conf)#do show qos dcb-map DCB_MAP_PFC_OFF ----------------------State :In-Progress PfcMode:OFF -------------------Dell(conf)# Enabling Fibre Channel Capability on the Switch Enable the Fi
Step Task Command Command Mode allocated according to the specified percentages. If a priority group does not use its allocated bandwidth, the unused bandwidth is made available to other priority groups. Restriction: You can enable PFC on a maximum of two priority queues.
Step Task Command Command Mode Repeat this step to apply a DCB map to more than one port or port channel. Creating an FCoE VLAN Create a dedicated VLAN to send and receive Fibre Channel traffic over FCoE links between servers and a fabric over an NPG. The NPG receives FCoE traffic and forwards decapsulated FC frames over FC links to SAN switches in a specified fabric. Step Task Command Command Mode 1 Create the dedicated VLAN for FCoE traffic. interface vlan vlan-id CONFIGURATION Range: 2–4094.
Step Task Command Command Mode Default: None. 5 Configure the priority used by a server CNA to select the fcf-priority priority FCF for a fabric login (FLOGI). Range: 1–255. Default: 128. FCoE MAP 6 Enable the monitoring FIP keepalive messages (if it is disabled) to detect if other FCoE devices are reachable. Default: FIP keepalive monitoring is enabled. keepalive FCoE MAP 7 Configure the time interval (in seconds) used to transmit FIP keepalive advertisements.
Each Aggregator, with the FC port, is associated with an Ethernet MAC address (FCF MAC address). When you enable a fabric-facing FC port, the FCoE map applied to the port starts sending FIP multicast advertisements using the parameters in the FCoE map over server-facing Ethernet ports. A server sees the FC port, with its applied FCoE map, as an FCF port. Step Task Command Command Mode 1 Configure a fabric-facing FC port.
Dell(config)# fcoe-map SAN_FABRIC_A Dell(config-fcoe-name)# fabric-id 1002 vlan 1002 Dell(config-fcoe-name)# description "SAN_FABRIC_A" Dell(config-fcoe-name)# fc-map 0efc00 Dell(config-fcoe-name)# keepalive Dell(config-fcoe-name)# fcf-priority 128 Dell(config-fcoe-name)# fka-adv-period 8 5. Enable an upstream FC port: Dell(config)# interface fibrechannel 0/0 Dell(config-if-fc-0)# no shutdown 6.
Te Te Te Te Te Te Te Te Fc Fc Te Te 0/1 0/2 0/3 0/4 0/5 0/6 0/7 0/8 toB300 0/9 0/10 0/11 0/12 Up Down Up Down Up Up Up Down Up Up Down Down 10000 Mbit Auto 10000 Mbit Auto 10000 Mbit 10000 Mbit 10000 Mbit Auto 8000 Mbit 8000 Mbit Auto Auto Full Auto Full Auto Full Full Full Auto Full Full Auto Auto 1-4094 1-1001,1003-4094 1-1001,1003-4094 1-1001,1003-4094 1-4094 1-4094 1-4094 1-1001,1003-4094 ----- Table 24.
Table 25. show fcoe-map Field Descriptions Field Description Fabric-Name Name of a SAN fabric. Fabric ID The ID number of the SAN fabric to which FC traffic is forwarded. VLAN ID The dedicated VLAN used to transport FCoE storage traffic between servers and a fabric over the NPG. The configured VLAN ID must be the same as the fabric ID. VLAN priority FCoE traffic uses VLAN priority 3. This setting is not user-configurable.
Field Description PFC PFC setting for the priority group: On (enabled) or Off. Priorities 802.1p priorities configured in the priority group.
LoginMethod Secs Status : : : FLOGI 5593 LOGGED_IN ENode[1]: ENode MAC ENode Intf FCF MAC Fabric Intf FCoE Vlan Fabric Map ENode WWPN ENode WWNN FCoE MAC FC-ID LoginMethod Secs Status : : : : : : : : : : : : : 00:10:18:f1:94:22 Te 0/12 5c:f9:dd:ef:10:c9 Fc 0/10 1003 fid_1003 10:00:00:00:c9:d9:9c:cb 10:00:00:00:c9:d9:9c:cd 0e:fc:03:01:02:02 01:02:01 FDISC 5593 LOGGED_IN Table 28.
Switch WWN Dell# : 10:00:5c:f9:dd:ef:10:c0 Table 29. show fc switch Command Description Field Description Switch Mode Fibre Channel mode of operation of an Aggregator. Default: NPG (configured as an NPIV proxy gateway). Switch WWN Factory-assigned worldwide node (WWN) name of the Aggregator. The Aggregator WWN name is not user-configurable. Displaying NPIV Proxy Gateway Information To display information on the NPG operation, use the show commands in the following table: Table 30.
Table 31. show interfaces status Field Descriptions Field Description Port Server-facing 10GbE Ethernet (Te), or fabric-facing Fibre Channel (FC) port with slot/ port information. Description Text description of port. Status Operational status of port: Ethernet ports - up (transmitting FCoE and LAN storage traffic) or down (not transmitting traffic).
VLAN priority FCoE traffic uses VLAN priority 3. This setting is not user-configurable. FC-MAP FCoE MAC-address prefix value - The unique 24-bit MAC address prefix that identifies a fabric. FKA-ADV-period Time interval (in seconds) used to transmit FIP keepalive advertisements. FCF Priority The priority used by a server to select an upstream FCoE forwarder.
ENode-Intf ENode-WWPN FCoE-Vlan Fabric-Intf Fabric-Map LoginMethod Status ------------------------------------------------------------------------------------------------------Te 0/11 FLOGI Te 0/12 FDISC 20:01:00:10:18:f1:94:20 LOGGED_IN 10:00:00:00:c9:d9:9c:cb LOGGED_IN 1003 Fc 0/9 fid_1003 1003 Fc 0/10 fid_1003 Table 34. show npiv devices brief Field Descriptions Field Description Total NPIV Devices Number of downstream ENodes connected to a fabric over the Aggregator with the NPG.
Secs Status : : 5593 LOGGED_IN Table 35. show npiv devices Field Descriptions Field Description ENode [number] Server CNA that has successfully logged in to a fabric over an Aggregator with the Ethernet port in ENode mode. Enode MAC MAC address of a server CNA port. Enode Intf Port number of a server-facing Ethernet port operating in ENode mode. FCF MAC Fibre Channel forwarder MAC: MAC address of Aggregator with the FCF interface.
23 Upgrade Procedures

To find the upgrade procedures, go to the Dell Networking OS Release Notes for your system type to see all the requirements needed to upgrade to the desired Dell Networking OS version. To upgrade your system type, follow the procedures in the Dell Networking OS Release Notes.

Get Help with Upgrades

Direct any questions or concerns about the Dell Networking OS upgrade procedures to the Dell Technical Support Center. You can reach Technical Support:
• On the web: http://support.dell.
24 Debugging and Diagnostics

This chapter contains the following sections:
• Debugging Aggregator Operation
• Software Show Commands
• Offline Diagnostics
• Trace Logs
• Show Hardware Commands

Supported Modes: Standalone, PMUX, VLT

Debugging Aggregator Operation

This section describes common troubleshooting procedures to use for error conditions that may arise during Aggregator operation.
2. Verify that the downstream port channel in the top-of-rack switch that connect to the Aggregator is configured correctly. Broadcast, unknown multicast, and DLF packets switched at a very low rate Symptom: Broadcast, unknown multicast, and DLF packets are switched at a very low rate. By default, broadcast storm control is enabled on an Aggregator and rate limits the transmission of broadcast, unknown multicast, and DLF packets to 1Gbps.
G - GVRP tagged, M - Trunk, H - VSN tagged i - Internal untagged, I - Internal tagged, v - VLT untagged, V - VLT tagged Name: TenGigabitEthernet 0/1 802.1QTagged: Hybrid SMUX port mode: Admin VLANs enabled Vlan membership: Q Vlans U 1 T 2-5,100,4010 Native VlanId: 1 Software show Commands Use the show version and show system stack-unit 0 commands as a part of troubleshooting an Aggregator’s software configuration. Table 37.
NOTE: Diagnostic is not allowed in Stacking mode, including member stacking. Avoid stacking before executing the diagnostic tests in the chassis. Important Points to Remember • You can only perform offline diagnostics on an offline standalone unit. You cannot perform diagnostics if the ports are configured in a stacking group. Remove the port(s) from the stacking group before executing the diagnostic test. • Diagnostics only test connectivity, not the entire data path.
Trace Logs In addition to the syslog buffer, the Dell Networking OS buffers trace messages which are continuously written by various software tasks to report hardware and software events and status information. Each trace message provides the date, time, and name of the Dell Networking OS process. All messages are stored in a ring buffer. You can save the messages to a file either manually or automatically after failover.
• show hardware stack-unit {0-5} buffer unit {0-1} port {1-64 | all} buffer-info View the forwarding plane statistics containing the packet buffer statistics per COS per port. EXEC Privilege mode • show hardware stack-unit {0-5} buffer unit {0-1} port {1-64} queue {0-14 | all} bufferinfo View input and output statistics on the party bus, which carries inter-process communication traffic between CPUs.
• Enable environmental monitoring.
Troubleshoot an Over-Temperature Condition

To troubleshoot an over-temperature condition, use the following information.
1. Use the show environment commands to monitor the temperature levels.
2. Check air flow through the system. Ensure that the air ducts are clean and that all fans are working correctly.
3. After the software has determined that the temperature levels are within normal limits, you can re-power the card safely. To bring the line card back online, use the power-on command in EXEC mode.
OID Name: chSysPortXfpRecvTemp
OID String: .1.3.6.1.4.1.6027.3.10.1.2.5.1.7
Description: Displays the temperature of the connected optics.
NOTE: These OIDs are generated only if the enable optic-info-update-interval command is configured.

Hardware MIB Buffer Statistics

OID Name: fpPacketBufferTable
OID String: .1.3.6.1.4.1.6027.3.16.1.1.4
Description: View the modular packet buffer details per stack unit and the mode of allocation.
.1.3.6.1.4.1.6027.3.16.1.1.
You can configure dynamic buffers per port on both 1G and 10G FPs and per queue on CSFs. By default, the FP dynamic buffer allocation is 10 times oversubscribed. For the 48-port 1G card:
• Dynamic Pool = Total Available Pool (16384 cells) - Total Dedicated Pool = 5904 cells
• Oversubscription ratio = 10
• Dynamic Cell Limit Per Port = 59040/29 = 2036 cells
Figure 31.
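The arithmetic above can be sketched as follows; the pool sizes and the divisor 29 come from the guide's example, while the rounding behavior is an assumption for illustration:

```python
# Dynamic-buffer arithmetic for the 48-port 1G card, per the example above.
TOTAL_POOL_CELLS = 16384      # total available pool
DYNAMIC_POOL_CELLS = 5904     # total pool minus total dedicated pool
OVERSUBSCRIPTION = 10         # default FP oversubscription ratio

# Oversubscribed dynamic pool shared across ports.
dynamic_cells = DYNAMIC_POOL_CELLS * OVERSUBSCRIPTION   # 59040 cells

# Per-port dynamic cell limit; the divisor 29 is taken from the guide,
# and rounding to the nearest cell is an assumption.
per_port_limit = round(dynamic_cells / 29)              # 2036 cells

print(dynamic_cells, per_port_limit)
```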
• Change the dedicated buffers on a physical interface.
BUFFER PROFILE mode
buffer dedicated
• Change the maximum number of dynamic buffers an interface can request.
BUFFER PROFILE mode
buffer dynamic
• Change the number of packet-pointers per queue.
BUFFER PROFILE mode
buffer packet-pointers
• Apply the buffer profile to a CSF to FP link.
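A minimal sketch of creating a profile with the commands listed above and applying it to an interface. The command form used to enter BUFFER PROFILE mode, the profile name, and the parameter values are assumptions for illustration; the buffer-policy interface command appears in the running-config example later in this chapter:

```
Dell(conf)#buffer-profile fp myfsbufferprofile        ! mode-entry syntax and name are assumptions
Dell(conf-buffer-profile)#buffer dynamic 1256         ! maximum dynamic buffer; value illustrative
Dell(conf-buffer-profile)#buffer packet-pointers 256  ! packet-pointers per queue; value illustrative
Dell(conf-buffer-profile)#exit
Dell(conf)#interface tengigabitethernet 0/6
Dell(conf-if-te-0/6)#buffer-policy myfsbufferprofile
```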
6 9.38 256
7 9.38 256
Example of Viewing the Buffer Profile Allocations
Dell#show running-config interface tengigabitethernet 0/6
!
interface TenGigabitEthernet 0/6
mtu 9252
switchport
no shutdown
buffer-policy myfsbufferprofile
Example of Viewing the Buffer Profile (Interface)
Dell#show buffer-profile detail int te 0/2
Interface Te 0/2
Buffer-profile fsqueue-fp
Dynamic buffer 1256.00 (Kilobytes)
Queue# Dedicated Buffer (Kilobytes) Buffer Packets
0 3.00 256
1 3.00 256
2 3.00 256
3 3.00 256
4 3.00 256
5 3.
If you have already applied a custom buffer profile on an interface, the buffer-profile global command fails and a message similar to the following displays:
% Error: User-defined buffer profile already applied. Failed to apply global pre-defined buffer profile. Please remove all user-defined buffer profiles.
Similarly, when you configure buffer-profile global, you cannot apply a buffer profile on any single interface.
• show hardware system-flow layer2 stack-unit {0-5} port-set {0-1} [counters]
• show hardware drops interface [range] interface
• show hardware stack-unit buffer-stats-snapshot unit resource x
• show hardware buffer interface interface {priority-group {id | all} | queue {id | all}} buffer-info
• show hardware buffer-stats-snapshot resource interface interface {priority-group {id | all} | queue {ucast {id | all} | mcast {id | all} | all}}
• show hardware drops interface interface
• clear hardw
Policy Discards
Packets dropped by FP
(L2+L3) Drops
Port bitmap zero Drops
Rx VLAN Drops
--- Ingress MAC counters ---
Ingress FCSDrops
Ingress MTUExceeds
--- MMU Drops ---
Ingress MMU Drops
HOL DROPS(TOTAL)
HOL DROPS on COS0
HOL DROPS on COS1
HOL DROPS on COS2
HOL DROPS on COS3
HOL DROPS on COS4
HOL DROPS on COS5
HOL DROPS on COS6
HOL DROPS on COS7
HOL DROPS on COS8
HOL DROPS on COS9
HOL DROPS on COS10
HOL DROPS on COS11
HOL DROPS on COS12
HOL DROPS on COS13
HOL DROPS on COS14
HOL DROPS on COS15
HOL DROPS on COS
rxError          :0
rxDatapathErr    :0
rxPkt(COS0)      :0
rxPkt(COS1)      :0
rxPkt(COS2)      :0
rxPkt(COS3)      :0
rxPkt(COS4)      :0
rxPkt(COS5)      :0
rxPkt(COS6)      :0
rxPkt(COS7)      :0
rxPkt(UNIT0)     :0
rxPkt(UNIT1)     :0
rxPkt(UNIT2)     :0
rxPkt(UNIT3)     :0
transmitted      :0
txRequested      :0
noTxDesc         :0
txError          :0
txReqTooLarge    :0
txInternalError  :0
txDatapathErr    :0
txPkt(COS0)      :0
txPkt(COS1)      :0
txPkt(COS2)      :0
txPkt(COS3)      :0
txPkt(COS4)      :0
txPkt(COS5)      :0
txPkt(COS6)      :0
txPkt(COS7)      :0
txPkt(UNIT0)     :0
The show hardware stack-unit cpu party-bus statistics co
Output 00.06 Mbits/sec, 8 packets/sec, 0.00% of line-rate
Dell#
Enabling Buffer Statistics Tracking
You can enable the tracking of statistical values of buffer spaces at a global level. The buffer statistics tracking utility operates in the max use count mode, which enables the collection of maximum values of counters. To configure the buffer statistics tracking utility, perform the following step:
1. Enable the buffer statistics tracking utility and enter the Buffer Statistics Snapshot configuration mode.
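A hedged sketch of the step above; the exact command name and mode prompt are assumptions, inferred from the buffer-stats-snapshot keyword used by the show commands in this chapter:

```
Dell(conf)#buffer-stats-snapshot          ! assumed command to enable tracking
Dell(conf-buffer-stats-snapshot)#         ! assumed Buffer Statistics Snapshot mode prompt
```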
---------------------------------------
Q# TYPE Q# TOTAL BUFFERED CELLS
---------------------------------------
MCAST 3 0
Unit 1 unit: 3 port: 21 (interface Fo 1/164)
---------------------------------------
Q# TYPE Q# TOTAL BUFFERED CELLS
---------------------------------------
MCAST 3 0
Unit 1 unit: 3 port: 25 (interface Fo 1/168)
---------------------------------------
Q# TYPE Q# TOTAL BUFFERED CELLS
---------------------------------------
MCAST 3 0
Unit 1 unit: 3 port: 29 (interface Fo 1/172)
-----------------------
Restoring the Factory Default Settings Restoring factory defaults deletes the existing NVRAM settings, startup configuration and all configured settings such as stacking or fanout. To restore the factory default settings, use the restore factory-defaults stack-unit {0-5 | all} {clear-all | nvram} command in EXEC Privilege mode. CAUTION: There is no undo for this command. Important Points to Remember • When you restore all the units in a stack, all units in the stack are placed into stand-alone mode.
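For example, to reset a single standalone unit and clear both NVRAM and all configured settings (the unit number is illustrative):

```
Dell#restore factory-defaults stack-unit 0 clear-all
! the command prompts for confirmation before erasing settings; exact prompt text may vary
```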
25 Standards Compliance
This chapter describes standards compliance for Dell Networking products.
NOTE: Unless noted, when a standard cited here is listed as supported by the Dell Networking Operating System (OS), the system also supports predecessor standards. One way to search for predecessor standards is to use the http://tools.ietf.org/ website. Click "Browse and search IETF documents," enter an RFC number, and inspect the top of the resulting document for obsolescence citations to related RFCs.
General Internet Protocols The following table lists the Dell Networking OS support per platform for general internet protocols. Table 39.
Network Management The following table lists the Dell Networking OS support per platform for network management protocol. Table 41.
RFC# Full Name
radiusAuthClientMalformedAccessResponses
radiusAuthClientUnknownTypes
radiusAuthClientPacketsDropped
3635 Definitions of Managed Objects for the Ethernet-like Interface Types
2674 Definitions of Managed Objects for Bridges with Traffic Classes, Multicast Filtering and Virtual LAN Extensions
2787 Definitions of Managed Objects for the Virtual Router Redundancy Protocol
2819 Remote Network Monitoring Management Information Base: Ethernet Statistics Table, Ethernet History Control Table
RFC# Full Name
FORCE10-PRODUCTS-MIB Force10 Product Object Identifier MIB
FORCE10-SS-CHASSIS-MIB Force10 S-Series Enterprise Chassis MIB
FORCE10-SMI Force10 Structure of Management Information
FORCE10-SYSTEM-COMPONENT-MIB Force10 System Component MIB (enables the user to view CAM usage information)
FORCE10-TC-MIB Force10 Textual Convention
FORCE10-TRAP-ALARM-MIB Force10 Trap Alarm MIB
FORCE10-FIPSNOOPING-MIB Force10 FIP Snooping MIB (based on the T11-FCoE-MIB mentioned in FCBB-5)
FORCE10-DCB-