Traverse/TransNav Planning and Engineering Guide TR5.0.x/TN6.0.x
Copyright © 2011 Force10 Networks, Inc. All rights reserved. Force10 Networks® reserves the right to change, modify, or revise this publication without notice. Trademarks Force10 Networks® and E-Series® are registered trademarks of Force10 Networks, Inc. Traverse, TraverseEdge, TraversePacketEdge, and TransAccess are registered trademarks of Force10 Networks, Inc. Force10, the Force10 logo, and TransNav are trademarks of Force10 Networks, Inc.
CONTENTS

Chapter 1 Traverse Equipment Specifications
• Traverse Dimensions Summary Table
• Traverse Rack Configuration
• Power Consumption
• Power Cabling

Chapter 5 TransNav Management System Requirements
• Solaris Platform for TransNav Management Server
• Solaris Platform Management Server Requirements
• Windows Platform Requirements for TransNav Management Server
• Windows Platform Management Server Requirements
Chapter 1 Traverse Equipment Specifications Introduction This chapter includes the following topics: • Traverse Dimensions Summary Table • Traverse Rack Configuration • Power Consumption • Power Cabling • Fiber Connectors and Cabling • Electrical Coax and Copper Connectors and Cabling • Shelf and Rack Density • Regulatory Compliance For guidelines on card placement in specific Traverse shelves and information on GCM redundancy, see the Operations and Maintenance Guide, Chapter 21—“Card Placement Planning.”
Traverse Dimensions Summary Table The following table gives the dimensions for the Traverse components.

Table 1 Traverse Component Dimensions

Assembly        Height     Width      Depth      Weight Empty   Weight Fully Loaded
Traverse 2000   18.33 in   21.1 in    13.75 in   16 lbs         63 lbs
                46.56 cm   53.6 cm    34.93 cm   7.2 kg         28.58 kg
Traverse 1600   18.33 in   17.25 in   13.75 in   15 lbs         52 lbs
                46.56 cm   43.82 cm   34.93 cm   6.8 kg         23.59 kg
Traverse 600    6.50 in    17.25 in   13.75 in   8 lbs          21 lbs
                16.51 cm   43.82 cm   34.93 cm   3.6 kg         9.53 kg
Traverse Rack Configuration The Traverse 1600 and Traverse 600 shelves install in either a standard 19-in (483 mm) or 23-in (584 mm) wide relay rack. The Traverse 1600 and Traverse 600 shelves require mounting brackets for installation in a 23-in (584 mm) wide rack. The Traverse 2000 shelf installs only in a standard 23-in (584 mm) wide relay rack. To provide proper air flow, 3/8-in (9.5 mm) of space is required between the PDAP and the first (topmost) Traverse shelf assembly.
This figure shows an example of four Traverse 1600 shelves installed with the PDAP-4S in a 19-in (483 mm) wide relay rack.
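The four-shelf layout above can be checked arithmetically against the shelf heights in Table 1. Below is a minimal rack-fit sketch in Python; the PDAP height and the usable rack space are assumptions (verify them against your PDAP hardware and rack specifications):

```python
# A minimal rack-fit sketch based on the Table 1 shelf heights. The PDAP
# height and usable rack space are assumptions; check your hardware specs.

SHELF_HEIGHT_IN = 18.33       # Traverse 1600 shelf height (Table 1)
PDAP_TO_SHELF_GAP_IN = 0.375  # required 3/8-in airflow gap below the PDAP
PDAP_HEIGHT_IN = 2.0          # hypothetical PDAP-4S height; verify
USABLE_RACK_IN = 84.0         # assumed usable space in a 7-ft relay rack

def rack_fit(num_shelves: int) -> bool:
    """Return True if the PDAP plus shelves fit in the usable rack space."""
    total = PDAP_HEIGHT_IN + PDAP_TO_SHELF_GAP_IN + num_shelves * SHELF_HEIGHT_IN
    print(f"{num_shelves} shelves + PDAP: {total:.2f} in of {USABLE_RACK_IN} in")
    return total <= USABLE_RACK_IN

rack_fit(4)  # the four-shelf PDAP-4S configuration shown above
```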
Power Consumption The power draw of a Traverse system depends on its configuration. Starting from a base configuration consisting of the chassis and a fan tray, each added card increases the power draw of the system. A typical single-shelf configuration consumes 745 to 915 watts; fully equipped configurations are normally less than 1400 watts. All Traverse cards operate between -40 and -60 VDC. Important: Carefully plan your power supply capacity.
Table 2 Power Distribution Per Traverse Card (continued)

Component          Card or Component Type                               Watts Per Card / Component
SONET/SDH Cards    GCM with 1-port OC-48 ELR/STM-16 LH DWDM, CH19, 191.  8
                   1-port OC-192 ELR/STM-64 LH ITU DWDM                  90
Electrical Cards   28-port DS1                                           49
                   12-port DS3/E3/EC-1 Clear Channel                     42
                   24-port DS3/E3/EC-1 Clear Channel                     50
                   12-port DS3/EC-1 Transmux                             46
                   21-port E1                                            49
                   UTMX-24                                               48
                   UTMX-48                                               55
                   VT/TU 5G Switch                                       42
                   VT-HD 40G Switch                                      112
Ethernet Cards     4-port GbE (LX or SX) plus 16-port 10/100BaseTX       75
                   4-port GbE CWDM (40 km) plus 16-port 10/100BaseTX     2
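The per-card figures in Table 2 roll up into a shelf power budget and a per-feed current estimate. A minimal sketch, assuming a hypothetical base draw for the chassis plus fan tray (replace it with the documented value for your shelf):

```python
# A minimal power-budget sketch using per-card wattages from Table 2.
# BASE_SHELF_W is an assumed chassis-plus-fan-tray figure; replace it with
# the documented value for your shelf.

CARD_WATTS = {
    "28-port DS1": 49,
    "24-port DS3/E3/EC-1 Clear Channel": 50,
    "VT-HD 40G Switch": 112,
    "4-port GbE (LX or SX) plus 16-port 10/100BaseTX": 75,
}

BASE_SHELF_W = 200  # hypothetical base draw (chassis + fan tray); verify

def shelf_power(cards: list[str]) -> float:
    """Total shelf draw in watts: base plus the sum of installed cards."""
    return BASE_SHELF_W + sum(CARD_WATTS[c] for c in cards)

total_w = shelf_power(["28-port DS1", "VT-HD 40G Switch",
                       "4-port GbE (LX or SX) plus 16-port 10/100BaseTX"])
amps_at_48v = total_w / 48.0  # feed current at the nominal -48 VDC input
print(f"{total_w} W, about {amps_at_48v:.1f} A per feed at -48 VDC")
```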
Power Cabling Redundant central office battery and battery return are connected to the PDAP. The PDAP-2S distributes battery and battery return to up to two Traverse shelves and up to ten pieces of auxiliary equipment in a rack. The PDAP-4S distributes battery and battery return to up to four Traverse shelves and up to five pieces of auxiliary equipment in a rack. Both the PDAP-2S and PDAP-4S have two DC power inputs (Battery ‘A’ and Battery ‘B’).
Electrical Coax and Copper Connectors and Cabling The DS3/E3/EC-1 Clear Channel and DS3/EC-1 Transmux cards are cabled using standard coax cables with BNC or Mini-SMB connectors. Coax cables are connected to the DS3/E3 electrical connector module (ECM) at the main backplane. The 10/100BaseTX, GbE TX plus 10/100BaseTX Combo, other GbE plus 10/100BaseTX Combos, DS1, and E1 cards are cabled using standard twisted-pair copper cables with Telco connectors.
Table 4 Traverse Interface Options and Maximum Densities (continued)

                                 Traverse 2000                          Traverse 1600                          Traverse 600
Service Interface Card           Cards/Shelf  Ports/Shelf  Ports/Rack   Cards/Shelf  Ports/Shelf  Ports/Rack   Cards/Shelf  Ports/Shelf
(card name carried from          16           64/256       256/1024     12           48/192       192/768      4            16/64
prior page)
(card name carried from          16           32/32/256    128/128/1024 12           24/24/192    96/96/768    4            8/8/64
prior page)
1-port 10GbE (dual slot)         9            9            36           7            7            28           —            —
10-port GbE (dual slot)          8            80           320          6            60           240          —            —
4-port OC-
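The ports-per-rack columns in Table 4 follow directly from the per-shelf figures: per-shelf ports scale by the number of shelves per rack (four, as in the rack configurations shown earlier in this chapter). A minimal sketch of that arithmetic:

```python
# Ports-per-rack arithmetic implied by Table 4: per-shelf ports scale by the
# number of shelves per rack (four in the rack layouts shown in this chapter).

SHELVES_PER_RACK = 4  # e.g., four Traverse 1600 shelves with a PDAP-4S

def ports_per_rack(cards_per_shelf: int, ports_per_card: int) -> int:
    """Maximum ports in a rack for one card type."""
    return cards_per_shelf * ports_per_card * SHELVES_PER_RACK

# 10-port GbE on a Traverse 1600: 6 cards/shelf x 10 ports x 4 shelves = 240
print(ports_per_rack(cards_per_shelf=6, ports_per_card=10))
```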
VT Card Interface Options and Maximum Density The following table provides information on the maximum number of VT/VC cards per shelf.
Chapter 2 Compliance Introduction Force10 Networks' goals are the highest levels of quality testing and the most stringent compliance standards that can be achieved. The Force10 Quality Management System is certified to ISO 9001:2008.
ETSI Environmental Standards In addition to the testing required for a CE Mark, Force10’s products are also tested to the following ETSI specifications: • Storage: ETS 300 019-2-1, class T1.2 • Transportation: ETS 300 019-2-2, class T2.3 • Operational: ETS 300 019-2-3, class T3.1 and T3.1E NEBS Compliance and Certification Network Equipment-Building System (NEBS) standards define a rigid and extensive set of performance, quality, environmental, and safety requirements developed by Telcordia.
Most of the requirements specified by Telcordia for the above-listed types of configurations are for a per-channel availability of 99.999%. As required by GR-418-CORE and GR-499-CORE, circuit pack failure rate predictions are performed in accordance with the requirements of TR-332. Also, GR-418-CORE and SR-TSY-001171 are used in the analysis of system availability and other reliability parameters. The current predicted per-channel availability meets the 99.999% requirement.
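As a point of reference, the 99.999% ("five nines") per-channel availability requirement corresponds to roughly five minutes of allowable downtime per year:

```latex
\[
(1 - 0.99999) \times 365.25 \times 24 \times 60
\;\approx\; 5.26\ \text{minutes of downtime per year}
\]
```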
Automatic X-ray inspection is conducted on all circuit boards, with all components and solder joints being inspected. For certain component technologies, such as ball-grid arrays (BGAs), X-ray is the only method that adequately verifies solder quality.
Chapter 3 Network Feature Compatibility Introduction The Traverse system is a gateway solution providing unified feature support for both SONET and SDH networks. Because there are variances between these two network types, this chapter covers the following topics: • Compatibility Matrix for Network Features • Comparative Terminology for SONET and SDH Compatibility Matrix for Network Features Traverse gateway solutions (i.e., ITU_default and ANSI_default) provide features from both SONET and SDH networks.
Comparative Terminology for SONET and SDH The following table provides you with a short list of terms as they relate to the SONET and SDH network feature sets.
Table 2 SONET and SDH Comparative Terminology (continued)

Term              SONET Network                                         SDH Network
STS-3c/AU-4       Contiguous concatenation of 3 STS-1 synchronous       Administrative Unit Level 4 (AU-4)
                  payload envelopes (SPEs) (STS-3c)
STS-3c/VC-4       Contiguous concatenation of 3 STS-1 SPEs (STS-3c)     VC Level 4 (VC-4)
STS-12c/VC-4-4c   Contiguous concatenation of 12 STS-1 SPEs (STS-12c)   Contiguous concatenation of 4 VCs at Level 4 (VC-4-4c)
UPSR/SNCP         Unidirectional Path Switched Ring (UPSR)              Subnetwork Connection Protection (SNCP)
Chapter 4 Protected Network Topologies Introduction This chapter includes the following topics: • Point-to-Point or Linear Chain • Ring • Mesh • Interconnected Ring Topologies • Interconnected Gateway Topologies • Supported Protected Topologies (Summary) • Node and Tunnel Diversity for Low Order Tunneled Services Point-to-Point or Linear Chain A simple point-to-point topology connects two nodes with two fibers.
Ring In a ring configuration, each node is connected to two adjacent nodes. Each node uses two trunk cards (east and west). In a Traverse network, the port on the east card always transmits the working signal clockwise around the ring. The port on the west card always receives the working signal. In ring configurations, each east port is physically connected to the west port of the next node.
Mesh This topology provides a direct connection from one node to every other node in the network. Traffic is routed over a primary path as well as an alternative path in case of congestion or failure.
Single Node Interconnected Rings This topology uses one node to connect two separate rings. The interconnecting node uses four optical ports (two for each ring). Each ring must use two ports on two separate cards (east and west).
Two Node Overlapping Rings This topology connects two rings using a single fiber between two optical cards. At each interconnecting node there are three optical ports: two east and a shared west. Each ring shares the bandwidth of the west port.
The Traverse supports the following protection schemes in two node ring interconnections:
• UPSR <–> UPSR
• UPSR <–> BLSR
• BLSR <–> BLSR
• UPSR <–> SNCP ring
• SNCP ring <–> SNCP ring
• SNCP ring <–> MS-SPRing
• MS-SPRing <–> MS-SPRing
Four Node Interconnected Rings This topology uses four nodes to connect two rings. The links between the interconnecting nodes can be either protected or unprotected. This topology protects traffic within each ring, as well as from any failure on an interconnecting node.
Supported Protected Topologies (Summary) This table summarizes supported topologies and protection schemes for a Traverse network.
Node and Tunnel Diversity for Low Order Tunneled Services Use of Low Order end-to-end tunneled services in your network requires additional planning for node and tunnel diversity. For more information on Low Order end-to-end SONET services, see the TransNav Management System Provisioning Guide, Chapter 28—“Creating SONET Low Order End-to-End Services and Tunnels.”
To ensure node diversity, define an egress point on the head node to ensure tunneled services use separate paths. In the following example, if an egress point of 11 is set on Node CS 133, the tunneled service in red is routed through Node DP 102 to terminate at Node CS 112.
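Node diversity between a working and a protect tunnel can be checked mechanically: the two paths must share no intermediate nodes. A minimal sketch in Python; the protect path and the node "DP 205" are hypothetical:

```python
# Node-diversity check for two tunneled-service paths: the paths are diverse
# if they share no intermediate nodes (endpoints excluded). The protect path
# and "DP 205" are hypothetical.

def node_diverse(path_a: list[str], path_b: list[str]) -> bool:
    """True if the two paths share no intermediate nodes."""
    interior_a = set(path_a[1:-1])
    interior_b = set(path_b[1:-1])
    return interior_a.isdisjoint(interior_b)

working = ["CS 133", "DP 102", "CS 112"]
protect = ["CS 133", "DP 205", "CS 112"]  # hypothetical diverse route
print(node_diverse(working, protect))     # True: no shared intermediate node
```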
Chapter 5 TransNav Management System Requirements Introduction The TransNav management system software package contains both server and client workstation applications. The server functions communicate with the nodes and maintain a database of topology, configuration, fault, and performance data for all nodes in the network. The client workstation application provides the user interface for managing the network.
Management System Deployment The TransNav management system software package contains server applications, client workstation applications, and agent applications that reside on the node.
• See Chapter 7—“IP Address Planning,” In-Band Management with Static Routes for an example and a detailed description.
• See Chapter 7—“IP Address Planning,” Out-of-Band Management with Static Routes for an example and a detailed description.
Control Plane Domain A control plane domain is a set of nodes completely interconnected by the intelligent control plane. One TransNav management system can manage up to 200 nodes in a single control plane domain.
Solaris Platform for TransNav Management Server This table lists the minimum requirements for a Solaris system TransNav management server.
Table 4 Solaris Requirements, TransNav Management Server (continued)

Network size tiers: Small networks (1-50 nodes, up to 10 users); Medium networks (50-100 nodes, up to 20 users); Large networks (100-200 nodes, up to 30 users); Extra-large networks (more than 200 nodes, over 40 users); Mega networks (500-1000 nodes, over 40 users).

PDF Viewer (all network sizes): Adobe® Acrobat® Reader® 9.3 for Solaris, to view product documentation.
Solaris Platform Management Server Requirements This table lists the minimum requirements for a Solaris system TransNav management server, including requirements allowing TN-Xpert to reside on the same workstation server.
Windows Platform Requirements for TransNav Management Server This table lists the minimum requirements for a Windows platform TransNav management server.

Table 6 Windows Requirements, TransNav Management Server

Component   Small networks                                Medium networks                              Large networks
            (1-50 nodes, up to 10 users)                  (50-100 nodes, up to 20 users)               (100-200 nodes, up to 30 users)
System      Dual Core Pentium Class Processor - 2.8 GHz   Dual Core Pentium Class Processor - 3.
Table 6 Windows Requirements, TransNav Management Server (continued)

Component        Small networks     Medium networks    Large networks     Extra-large networks          Mega networks
                 (1-50 nodes,       (50-100 nodes,     (100-200 nodes,    (more than 200 nodes,         (500-1000 nodes,
                 up to 10 users)    up to 20 users)    up to 30 users)    over 40 users)                over 40 users)
Gateway Server   Not applicable     Not applicable     Not applicable     Recommend 2 Gateway servers   Recommend 4 Gateway servers
Windows Platform Management Server Requirements This table lists the minimum requirements for a Windows platform TransNav management server, including requirements allowing TN-Xpert to reside on the same server.
Table 7 Windows Requirements, Management Server with TransNav and TN-Xpert (continued)

Software Operating Environment (all network sizes): Windows XP Professional Service Pack 3, Windows 7, or Windows Server 2008.
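The sizing tiers used across Tables 4 through 7 follow a simple node-count banding. A small helper such as the following (illustrative only; the band edges are taken from the tables above, which overlap at their boundaries) maps a planned network to its tier:

```python
# Map a planned node count to the server-sizing tier named in Tables 4-7.
# The tables overlap at tier boundaries (e.g., 50 nodes); this sketch assigns
# boundary values to the smaller tier.

def sizing_tier(nodes: int) -> str:
    """Return the sizing tier for a given number of managed nodes."""
    if nodes <= 50:
        return "Small"        # 1-50 nodes, up to 10 users
    if nodes <= 100:
        return "Medium"       # 50-100 nodes, up to 20 users
    if nodes <= 200:
        return "Large"        # 100-200 nodes, up to 30 users
    if nodes < 500:
        return "Extra-large"  # more than 200 nodes, over 40 users
    return "Mega"             # 500-1000 nodes, over 40 users

print(sizing_tier(120))  # "Large"
```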
TransNav Management Server GUI Application Requirements A client workstation is required to access the TransNav management server from the graphical user interface (GUI). Force10 recommends installing the application directly on the client workstation for faster initialization, operation, and response time.
TransNav Client and Node GUI Application Requirements The TransNav Client and Node GUI are a subset of the TransNav server GUI. Access to a TransNav management server is required only to download the application to the client workstation or laptop. Information in the Node GUI is obtained directly from the Traverse platform. The Node GUI release must match the corresponding Traverse release to avoid unexpected behavior.
TN-Xpert Client Application Guidelines This table lists the minimum requirements for TN-Xpert client workstations if the TN-Xpert management system resides on the same server as the TransNav management system.

Table 10 TN-Xpert Client GUI Application Requirements

Component       Solaris Client Requirements    Windows Client Requirements
Hardware CPU    Sun SPARC based processor      Windows PC or laptop with a Dual Core Pentium Class Processor - 2.
Chapter 6 TransNav Management System Planning Introduction This chapter outlines a recommended procedure for creating and managing a network using the TransNav management system. SONET networks can also be set up to contain the TN-Xpert management system, allowing you to access both the TransNav and TN-Xpert management systems, Traverse nodes, TE-100 nodes, and TE-206 nodes from a single server. Currently, the TE-206 nodes must be installed using the TN-Xpert management system and have an IP address assigned.
Table 11 Network Configuration Procedure and References (continued) Step Procedure Reference 6 Initialize, then start, the server. Start the Primary server first, then initialize and start the Secondary servers. Software Installation Guide 7 Install, connect, and commission nodes and peripheral equipment according to the network plan.
Table 11 Network Configuration Procedure and References (continued)

Step 11: If necessary, configure equipment, cards, and interfaces.
References: TransNav Management System Provisioning Guide; TraverseEdge 50 User Guide; TraverseEdge 100 User Guide; TransAccess 200 Mux User Guide. SONET systems only: TransNav Xpert Users Guide; TraverseEdge 206 Users Guide.

Step 12: Create services or other applications.
Chapter 7 IP Address Planning Introduction This chapter includes the following information on creating and managing a network using the TransNav management system:
• IP Addresses in a TransNav Network
• IP Addressing Guidelines
• Quality of Service
• Proxy ARP
• In-Band Management with Static Routes
• In-Band Management with Router and Static Routes
• In-Band Management of CPEs Over EOP Links
• Out-of-Band Management with Static Routes
For information on provisioning IP QoS, see the TransNav Management System Provisioning Guide.
Assign the relevant IP addresses through the CLI during node commissioning.

Table 12 IP Address Node Connectivity Parameters

node-id
  Required? Required on every node.
  Description: A user-defined name of the node. Enter alphanumeric characters only. Do not use punctuation, spaces, or special characters.
  Force10 Recommendation: Use the site name or location.

node-ip
  Required? Required on every node.
  Description: This parameter specifies the IP address of the node.
Table 12 IP Address Node Connectivity Parameters (continued)

bp-dcn-gw-ip
  Required? Required for each bp-dcn-ip.
  Description: If the node is connected directly to the management server, this address is the IP gateway of the management server.

ems-ip
  Required? Required if there is a router between this node and the management server.
  Description: This address is the IP address of the TransNav management server.
  Force10 Recommendation: Depends on site practices.
• The bp-dcn-ip and the node-ip of the proxy node must be the same IP address.
• In a proxy network, all of the node-ip addresses must be in the same subnetwork as the bp-dcn-ip of the proxy node.
• Once you plan the network with one node as the proxy, you cannot arbitrarily re-assign another node to be the proxy ARP server.
A subnet check for these rules is sketched after the address list below.
• bp-dcn-gw-ip: This address is in the same subnetwork as the bp-dcn-ip of this node.
• bp-dcn-mask: The address mask of the bp-dcn-ip of this node.
The IP address of the TransAccess Mux has the following characteristics:
• IP address: This IP address can be on the same subnetwork as the node bp-dcn-ip.
• Gateway: This IP address is the bp-dcn-ip of the node.
• Mask: This mask is the address mask of the bp-dcn-ip of the node.
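A minimal sketch of the proxy ARP subnet rule using Python's standard ipaddress module; the addresses are illustrative:

```python
# Verify the proxy ARP planning rule: every node-ip must fall inside the
# subnetwork defined by the proxy node's bp-dcn-ip and mask.
import ipaddress

proxy_subnet = ipaddress.ip_network("10.100.100.0/24")  # bp-dcn-ip + mask
node_ips = ["10.100.100.2", "10.100.100.3", "10.100.101.4"]  # illustrative

for ip in node_ips:
    ok = ipaddress.ip_address(ip) in proxy_subnet
    print(f"{ip}: {'OK' if ok else 'NOT in proxy subnet - replan'}")
```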
Figure 8 IP Quality of Service

See the TransNav Management System Provisioning Guide, Chapter 9—“Creating and Deleting Equipment,” Node Parameters for detailed information about setting up IP Quality of Service in a TransNav-managed network.
Proxy ARP Proxy address resolution protocol (ARP) is the technique in which one host, usually a router, answers ARP requests intended for another machine. By faking its identity, the router accepts responsibility for routing packets to the real destination. Using proxy ARP in a network helps machines on one subnet reach remote subnets without configuring routing or a default gateway. Proxy ARP is defined in RFC 1027.
In-Band Management with Static Routes In-band management with static routes means the management server is directly connected by static route to one node (called the management gateway node), and the data communications channel (DCC) carries the control and management data. In this simple example, the TransNav management server (EMS server) is connected to a management gateway node (Node 1) using the Ethernet interface on the back of the shelf.
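The static-route pattern in these in-band examples is one route per node-ip, each pointing at the management gateway node. A small planning helper, with illustrative addresses drawn from the examples in this chapter:

```python
# Static routes an EMS server needs for in-band management: one route per
# node-ip, each via the management gateway node (MGN). Addresses follow the
# illustrative examples in this chapter.

MGN_GATEWAY = "172.169.0.1"   # next hop toward the management gateway node
NODE_MASK = "255.255.255.0"   # mask used in the examples in this chapter
NODE_IPS = ["10.100.100.1", "10.100.100.2", "10.100.100.3"]

for node_ip in NODE_IPS:
    # Each line corresponds to one static route entry on the server or router.
    print(f"destination {node_ip} mask {NODE_MASK} gateway {MGN_GATEWAY}")
```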
In-Band Management with Router and Static Routes In this example, the management server (EMS server, 172.169.0.10, mask 255.255.255.0) is connected by static route to a router that, in turn, is connected to the management gateway node (Node 1). The server communicates with the other nodes in-band using the DCC. Routes are added for each node-ip to the EMS server (destination, mask, gateway):
• 10.100.100.1 255.255.255.0 172.169.0.1
• 10.100.100.2 255.255.255.0 172.169.0.1
• 10.100.100.3 255.255.255.0 172.169.0.
In-Band Management of CPEs Over EOP Links In this example, the management server is connected by static route to a router that, in turn, is connected to the management gateway node (Node 1). The server communicates to the other nodes in-band using the DCC, including the node that has CPE devices attached (Node 3).
The EoPDH cards are connected by EOP links through the electrical cards to the CPEs as shown below.

Figure 13 Connecting CPEs through EOP Links

See the topic IP Addresses in a TransNav Network for detailed information about assigning IP addresses in a TransNav-managed network.
Out-of-Band Management with Static Routes Out-of-band management with static routes means that the management server is directly connected by static route to each node through the Ethernet interface on the back of each shelf. In this example, the management server communicates with each node directly or through a router. Routes are added for each node-ip to the router (destination, mask, gateway):
• 10.100.100.2 255.255.255.0 172.169.0.2
• 10.100.100.3 255.255.255.0 172.170.0.2
Chapter 8 Network Time Protocol (NTP) Sources Introduction This chapter includes the following information on managing a Traverse network: • NTP Sources in a Traverse Network • NTP Sources on a Ring Topology • NTP Sources on a Linear Chain Topology NTP Sources in a Traverse Network Network Time Protocol provides an accurate time of day stamp for performance monitoring and alarm and event logs.
NTP Sources on a Ring Topology Force10 recommends using the adjacent nodes as the primary and secondary NTP sources in a ring configuration. Use the Management Gateway Node (MGN) or the node closest to the MGN as the primary source and the other adjacent node as the secondary source. The following example shows NTP sources in a ring topology.
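That recommendation can be stated mechanically for a ring: each node takes the ring neighbor closer to the MGN (or the MGN itself) as its primary NTP source and its other neighbor as secondary. A minimal sketch with a hypothetical four-node ring:

```python
# Pick NTP sources on a ring per the recommendation above: each node's
# primary source is the adjacent node closer to the MGN (or the MGN itself),
# and its secondary source is the other neighbor. Names are hypothetical.

RING = ["MGN", "Node B", "Node C", "Node D"]  # adjacency follows list order

def ring_hops(a: str, b: str) -> int:
    """Shortest hop count between two nodes around the ring."""
    d = abs(RING.index(a) - RING.index(b))
    return min(d, len(RING) - d)

def ntp_sources(node: str) -> tuple[str, str]:
    i = RING.index(node)
    left = RING[i - 1]
    right = RING[(i + 1) % len(RING)]
    # The neighbor nearer the MGN becomes the primary source.
    if ring_hops(left, "MGN") <= ring_hops(right, "MGN"):
        return left, right
    return right, left

for n in RING[1:]:
    primary, secondary = ntp_sources(n)
    print(f"{n}: primary NTP = {primary}, secondary NTP = {secondary}")
```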
Chapter 9 Network Cable Management Introduction This chapter includes the following topics: • Fiber Optic Cable Routing • Copper/Coax Cable Management Fiber Optic Cable Routing A fiber cable management tray (for MPX-specific cables) is integrated into the fiber optic backplane cover for routing fiber optic cables. Cable management bars (for copper, coax, and SCM fiber cables) are customer-installable on the rear of the shelf.
Fiber optic cables route out the bottom of the Traverse 600 shelf for horizontal central office rack installation.
Copper/Coax Cable Management Copper and coax cable routing is as follows: • Traverse 1600 and Traverse 2000 Copper and Coax Cable Routing • Traverse 600 Copper and Coax Cable Routing Traverse 1600 and Traverse 2000 Copper and Coax Cable Routing Copper and coax cables tie-wrap to the cable management bar(s), route out to the right or left side of the Traverse shelf (from the rear view), and continue routing up the rack to intermediate patch panels.
The following image shows Traverse shelves with two cable management bars each, Mini-SMB cabling, and ECMs. An opening with a protruding cover in the left-most cover routes DCN Ethernet and RS-232 cables.
Figure 16 Traverse 600 Shelf Horizontal Installation—Cable Routing (callouts: DCN Ethernet and RS-232 cable opening; coax and copper cables route out the bottom and to the right or left)