QFX5130 Switch Datasheet
Product Overview
The QFX5130 Switch offers a high-density, cost-optimized 1 U 400GbE fixed-configuration platform ideal for enterprise and cloud data centers and for campus distribution/core networks where cloud services are being added. These services require higher network bandwidth per rack, as well as flexibility, making the 10/25/40/100/400GbE interface options ideal for server and intra-fabric connectivity. The QFX5130 is an optimal choice for spine-and-leaf deployments in enterprise, service provider, and cloud provider data centers.
Coupled with the widespread adoption of overlay technologies, the QFX5130 lays a strong foundation for your evolving business and network needs, offering deployment versatility to future-proof your network investment.
Product Description
The Juniper Networks® QFX5130 Switch is a next-generation, fixed-configuration spine-and-leaf switch that offers flexible, cost-effective, high-density 400GbE, 200GbE, 100GbE, 50GbE, 40GbE, 25GbE, and 10GbE interfaces for server and intra-fabric connectivity.
A versatile, future-proofed solution for today’s data centers, the QFX5130 leverages the power of a fully programmable chipset to support and deliver a diverse set of use cases. It supports advanced Layer 2, Layer 3, and Ethernet VPN (EVPN)-Virtual Extensible LAN (VXLAN) features. For large public cloud providers, which were early adopters of high-performance servers to meet explosive workload growth, the QFX5130 supports very large, dense, and fast 400GbE IP fabrics based on proven Internet-scale technology. For enterprise data center customers seeking investment protection as they transition their server farms from 10GbE to 25GbE, the QFX5130 also provides a high-radix, native 100GbE/400GbE EVPN-VXLAN spine option with lower power consumption and a smaller footprint.
The QFX5130 supports diverse use cases such as neural networks for AI applications (including autonomous driving), disaggregated storage, high-frequency trading, packet brokering, and over-the-top streaming services. Delivering 25.6 Tbps of bidirectional bandwidth, the switch is optimally designed for spine-and-leaf deployments in enterprise, high-performance computing (HPC), service provider, and cloud data centers.
The QFX5130-32CD offers 32 ports in a low-profile 1 U form factor. High-speed interfaces support a wide variety of port configurations, including 400GbE, 100GbE, 40GbE, 25GbE, and 10GbE. The QFX5130-32CD is equipped with two AC or DC power supplies, providing 1+1 redundancy when both power supplies are present. Six hot-swappable fans offer front-to-back (AFO) or back-to-front (AFI) airflow options, providing 5+1 redundancy.
The QFX5130 includes an Intel Xeon D-1500 processor to drive the control plane, which runs the Junos® OS Evolved operating system software.
Product Highlights
The QFX5130 includes the following capabilities. Please refer to the Specifications section for currently shipping features.
Native 400GbE Configuration
The QFX5130-32CD offers 32 native 400GbE ports in a 1 U form factor. The high-speed ports also support a wide variety of configurations, including 100GbE and 40GbE.
High-Density Configurations
The QFX5130 is optimized for high-density fabric deployments, providing options for 32 ports of 400GbE/100GbE/40GbE, 64 ports of 50GbE, or 128 ports of 25GbE/10GbE with breakout cables.
Flexible Connectivity Options
The QFX5130 offers a choice of interface speeds for server and intra-fabric connectivity, providing deployment versatility and investment protection.
Key Product Differentiators
Increased Scale and Buffer
The QFX5130 provides enhanced scale with up to 1.24 million routes, 80,000 firewall filters, and 160,000 media access control (MAC) addresses. It supports high numbers of egress IPv4/IPv6 rules by programming matches in egress ternary content addressable memory (TCAM) along with ingress TCAM.
132 MB Shared Packet Buffer
Today’s cloud-native applications depend critically on buffer capacity to absorb congestion and prevent packet drops. The QFX5130 has a 132 MB shared packet buffer that is dynamically allocated to congested ports.
Programmability
The QFX5130 revolutionizes performance for data center networks by providing a programmable software-defined pipeline in addition to the comprehensive feature set provided in the Juniper Networks QFX5120 Switch line. The QFX5130 uses a compiler-driven switch data plane with full software program control to enable and serve a diverse set of use cases, including in-band telemetry, fine-grained filtering for traffic steering, traffic monitoring, and support for new protocol encapsulations.
Power Efficiency
Built on a low-power 7 nm process, the QFX5130 draws a typical 373 W, delivering higher speed and port density on chip at lower power consumption.
Features and Benefits
- Automation and programmability: The QFX5130-32CD supports a number of network automation features for plug-and-play operations, including zero-touch provisioning (ZTP), Network Configuration Protocol (NETCONF), Juniper Extension Toolkit (JET), Junos telemetry interface, operations and event scripts, automation rollback, and Python scripting (see the automation sketch following this list).
- Cloud-level scale and performance: The QFX5130 supports best-in-class cloud-scale L2/L3 deployments with a low latency of 630 ns and superior scale and performance. This includes L2 support for 160,000 MAC addresses and Address Resolution Protocol (ARP) learning, which scales up to 64,000 entries at 500 frames per second. It also includes L3 support for 1.24 million longest prefix match (LPM) routes and 160,000 host routes on IPv4. Additionally, the QFX5130 supports 610,000 LPM routes and 80,000 host routes on IPv6, 128-way equal-cost multipath (ECMP) routes, and filter support for 80,000 ingress and 18,000 egress exact-match filtering rules. The QFX5130 supports up to 128 link aggregation groups, 4096 VLANs, and jumbo frames of 9216 bytes. Junos OS Evolved provides configurable options through a CLI, enabling each QFX5130 to be optimized for different deployment scenarios.
- VXLAN overlays: The QFX5130 is capable of both L2 and L3 gateway services. Customers can deploy overlay networks to provide L2 adjacencies for applications over L3 fabrics. The overlay networks use VXLAN in the data plane and EVPN or Open vSwitch Database (OVSDB) for programming the overlays, which can operate without a controller or be orchestrated with an SDN controller.
- IEEE 1588 PTP boundary clock with hardware timestamping*: IEEE 1588 PTP transparent/boundary clock is supported on the QFX5130, enabling accurate and precise sub-microsecond timing in today’s data center networks. In addition, the QFX5130 supports hardware timestamping; timestamps in Precision Time Protocol (PTP) packets are captured and inserted by an onboard field-programmable gate array (FPGA) on the switch at the physical (PHY) level.
- Data packet timestamping*: When the optional data packet timestamping feature is enabled, select packets flowing through the QFX5130 are timestamped with references to the recovered PTP clock. When these packets are received by nodes in the network, the timestamping information can be mirrored to monitoring tools to identify network bottlenecks that cause latency. This analysis can also be used for legal and compliance purposes in areas such as financial trading, video streaming, and research.
- RoCEv2: As a switch capable of transporting data as well as storage traffic over Ethernet, the QFX5130 provides an IEEE data center bridging (DCB) converged network between servers with disaggregated flash storage arrays or an NVMe-enabled storage area network (SAN). The QFX5130 offers a full-featured DCB implementation that provides strong monitoring capabilities on the top-of-rack switch, allowing SAN and LAN administration teams to maintain clear separation of management. RDMA over Converged Ethernet version 2 (RoCEv2) transit switch functionality, including priority-based flow control (PFC) and Data Center Bridging Capability Exchange (DCBX), is included as part of the default software.
- Junos OS Evolved features: The QFX5130 switch supports features such as L2/L3 unicast, EVPN-VXLAN*, BGP add-path, RoCEv2 and congestion management, multicast, 128-way ECMP, dynamic load balancing capabilities, enhanced firewall capabilities, and monitoring.
- Junos OS Evolved architecture: Junos OS Evolved is a native Linux operating system that incorporates a modular design of independent functional components and enables individual components to be upgraded independently while the system remains operational. Component failures are localized to the specific component involved and can be corrected by upgrading and restarting that specific component without having to bring down the entire device. The switch’s control plane and data plane processes can run in parallel, maximizing CPU utilization, providing support for containerization, and enabling application deployment using LXC or Docker.
- Retained state: State is the retained information or status pertaining to physical and logical entities. It includes both operational and configuration state, comprising committed configuration, interface state, routes, and hardware state, all held in a central database called the distributed data store (DDS). State information remains persistent, is shared across the system, and is supplied during restarts.
- Feature support: All key networking functions such as routing, bridging, management software, and management plane interfaces, as well as APIs such as CLI, NETCONF, JET, Junos telemetry interface, and the underlying data models, resemble those supported by the Junos operating system. This ensures compatibility and eases the transition to Junos Evolved.
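The following is a minimal sketch of the NETCONF-based automation workflow referenced above, using the Junos PyEZ (junos-eznc) Python library. It assumes NETCONF over SSH is enabled on the switch; the hostname, credentials, interface name, and description are placeholder values, not part of this datasheet.

```python
# Minimal PyEZ (junos-eznc) sketch: connect to a QFX5130 over NETCONF,
# read device facts, and stage a small configuration change.
# Hostname, credentials, and the interface used are placeholders.
from jnpr.junos import Device
from jnpr.junos.utils.config import Config

with Device(host="qfx5130-leaf1.example.net", user="netops", passwd="secret") as dev:
    # Device facts exposed by PyEZ (model, version, hostname, ...)
    print(dev.facts["hostname"], dev.facts["version"])

    with Config(dev, mode="exclusive") as cu:
        # Example change: add a description to a 400GbE fabric port (set-style syntax)
        cu.load('set interfaces et-0/0/0 description "spine uplink"', format="set")
        cu.pdiff()                              # show the candidate diff
        cu.commit(comment="automation example")  # commit with a comment
```

The same change could equally be driven by an operations script on-box or orchestrated through ZTP at first boot; the sketch only illustrates the off-box NETCONF path.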
Deployment Options
Data Center Fabric Deployments
The QFX5130-32CD can be deployed as a universal device in cloud data centers to support 100GbE and 200GbE server access and 400GbE spine-and-leaf configurations, optimizing data center operations by using a single device across multiple network layers (see Figure 1). The QFX5130-32CD can also be deployed in more advanced overlay architectures like an EVPN-VXLAN fabric. Depending on where tunnel terminations are desired, the QFX5130-32CD can be deployed in either the Edge Routed Bridging (ERB) design (Figure 2) or the Bridged Overlay (Figure 3) architecture. Juniper offers complete flexibility and a range of data center fabric designs that cater to data centers of different sizes and scales built by cloud operators, service providers, and enterprises. Here are the main data center design options where the QFX5130 can be used as a server leaf, spine node, or border-leaf node:
- Architecture 1: Edge Routed Bridging (ERB): an EVPN-VXLAN design with a distributed anycast IP gateway supporting L2 and L3 for enterprises and 5G Telco Cloud. This type of design offers a combination of L2 stretch between multiple leaf/ToR switches and L2 active/active multihoming to the server with MAC-VRF EVI L2 virtualization support, as well as L3 IP VRF virtualization at the leaf/ToR through Type-5 EVPN-VXLAN routes. In data center use cases, this design provides redundant, optimized connectivity for servers/compute nodes, blade centers, IP storage nodes running RoCEv2, and other appliances.
- Architecture 2: Bridged Overlay (BO): an EVPN-VXLAN design using MAC-VRF instances and different EVPN service types (vlan-aware, vlan-bundle, vlan-based). In this case, a first-hop IP gateway external to the fabric can be used, for example on a firewall or on existing external DC gateway routers. In this design, the DC fabric offers L2 active/active multihoming using ESI-LAG and fabric-wide L2 stretch between the leaf/ToR nodes.
- Architecture 3: Seamless Data Center Interconnect (DCI) for the ERB fabric design: a DCI border-leaf design with seamless Type-2 to Type-2 EVPN-VXLAN to EVPN-VXLAN tunnel stitching (RFC 9014) and Type-5 to Type-5 EVPN-VXLAN tunnel stitching support. With this design, the data center gains geographical redundancy for applications deployed in the private cloud DC. The QFX5130 is also used as a border-leaf node in this design.
- Architecture 4: Collapsed Spine design with ESI-LAG support and anycast IP: in this case, a pair of QFX5130-32CD switches is deployed with a back-to-back connection, without a spine layer. L2 active/active multihoming using ESI-LAG provides server NIC high availability, along with an anycast IP gateway.
Management, Monitoring, and Analytics
Data Center Fabric Management
Juniper® Apstra provides operators with the power of intent-based network design to help ensure changes required to enable data center services can be delivered rapidly, accurately, and consistently. Operators can further benefit from the built-in assurance and analytics capabilities to resolve Day 2 operations issues quickly.
Apstra key features are:
- Automated and zero-touch deployment
- Continuous fabric validation
- Fabric life-cycle management
- Troubleshooting using advanced telemetry
For more information on Apstra, see www.juniper.net/us/en/products/network-automation/apstra/apstra-system.html
Campus Fabric Deployments
EVPN-VXLAN for Campus Core, Distribution, and Access
The QFX5130 switches can be deployed in campus distribution/core layer networks using 100GbE/400GbE ports to support technologies such as EVPN multihoming and campus fabric.
Juniper offers complete flexibility in choosing any of the following validated EVPN-VXLAN designs that cater to networks of different sizes, scale, and segmentation requirements:
- EVPN multihoming (collapsed core or distribution): A collapsed core architecture combines the core and distribution layers into a single switch, turning the traditional three-tier hierarchical network into a two-tier network. EVPN multihoming on a collapsed core eliminates the need for Spanning Tree Protocol (STP) across campus networks by providing link aggregation capabilities from the access layer to the core layer. This topology is best suited for small to medium distributed enterprise networks and allows for consistent VLANs across the network. It uses ESI (Ethernet Segment Identifier) LAG (link aggregation), a standards-based approach.
- Campus fabric core-distribution: When EVPN-VXLAN is configured across the core and distribution layers, the result is a campus fabric core-distribution architecture, which can be configured in two modes: centrally routed or edge-routed bridging overlay. This architecture lets an administrator move toward a campus fabric IP Clos without a forklift upgrade of all access switches in the existing network, while bringing the advantages of a campus fabric and providing an easy way to scale out the network.
- Campus fabric IP Clos: When EVPN-VXLAN is configured on all layers, including access, the design is called the campus fabric IP Clos architecture. This model is also referred to as “end-to-end,” given that VXLAN tunnels are terminated at the access layer. The availability of VXLAN at the access layer provides the opportunity to bring policy enforcement and microsegmentation closest to the source, using standards-based Group Based Policy (GBP) to segment traffic even within a VLAN. GBP tags are assigned dynamically to clients as part of the RADIUS transaction by the Mist cloud NAC. This topology works for small, medium, and large campus architectures that need macro- and microsegmentation.
In all these EVPN-VXLAN deployment modes, QFX5130 switches can be used in the distribution or core layer, as seen in Figure 4. All three topologies are standards-based and hence interoperable with third-party vendors.
Managing AI-Driven Campus Fabric with the Juniper Mist Cloud
Juniper Mist Wired Assurance brings cloud management and Mist AI to campus fabric. It sets a new standard that moves away from traditional network management towards AI-driven operations, while delivering better experiences to connected devices. The Juniper Mist Cloud streamlines deployment and management of campus fabric architectures by allowing:
- Automated deployment and zero touch deployment (ZTD)
- Anomaly detection
- Root cause analysis
For more information, read the Juniper Mist Wired Assurance datasheet.
Port Combinations | Switch | Deployment |
128 x 100GbE | QFX5130-32CD | 100GbE access or leaf |
128 x 25GbE | QFX5130-32CD | 25GbE access or leaf with 25GbE breakout |
128 x 10GbE + 2 SFP+ ports | QFX5130-32CD | 10GbE access or leaf with 10GbE breakout |
64 x 50GbE | QFX5130-32CD | 50GbE access or leaf |
32 x 400GbE | QFX5130-32CD | 400GbE spine or leaf |
32 x 100GbE | QFX5130-32CD | 100GbE spine or leaf |
32 x 40GbE | QFX5130-32CD | 40GbE access or leaf |
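As a rough illustration of how the port combinations above follow from the 32 physical QSFP-DD cages, the short Python sketch below multiplies the port count by common breakout ratios; the ratios are inferred from the table and are not an exhaustive list of supported breakout modes.

```python
# How the port-combination rows above follow from 32 physical QSFP-DD cages:
# each 400GbE port can run natively or break out into lower-speed lanes.
# Breakout ratios below are the ones implied by the table, used for illustration.
breakouts = {
    "400GbE": 1,   # native
    "100GbE": 4,   # 4 x 100GbE per 400GbE port -> 128 ports
    "50GbE": 2,    # 2 x 50GbE per port -> 64 ports
    "25GbE": 4,    # 4 x 25GbE per port -> 128 ports
    "10GbE": 4,    # 4 x 10GbE per port -> 128 ports
}
for speed, lanes in breakouts.items():
    print(f"{speed:>7}: {32 * lanes} ports")
```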
Architecture and Key Components
The QFX5130 can be used in L3 fabrics and L2 networks. You can choose the architecture that best suits your deployment needs and easily adapt and evolve as requirements change over time. The QFX5130 serves as the universal building block for these switching architectures, enabling data center operators to build cloud networks in their own way.
Layer 3 fabric: For customers looking to build scale-out data centers, a Layer 3 spine-and-leaf Clos fabric provides predictable, nonblocking performance and scale characteristics.
A two-tier fabric built with QFX5130 switches as leaf devices and Juniper Networks QFX10000 modular switches in the spine can scale to support up to 128 x 40GbE ports, or 128 x 25GbE/10GbE server ports, in a single fabric.
Junos OS Evolved ensures a high feature and bug fix velocity and provides first-class access to system state, allowing customers to run DevOps tools, containerized applications, management agents, specialized telemetry agents, and more.
Junos Telemetry Interface
The QFX5130 supports Junos telemetry interface, a modern telemetry streaming tool that provides performance monitoring in complex, dynamic data centers. Streaming data to a performance management system lets network administrators measure trends in link and node utilization and troubleshoot issues such as network congestion in real time.
Junos telemetry interface provides:
- Application visibility and performance management by provisioning sensors to collect and stream data and analyze the application and workload flow path through the network
- Capacity planning and optimization by proactively detecting hotspots and monitoring latency and microbursts
- Troubleshooting and root cause analysis via high frequency monitoring and correlating overlay and underlay networks
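As a simple illustration of the utilization trending described above, the sketch below computes average link utilization from two streamed octet-counter samples. The function name, sample values, and interval are hypothetical and only show the arithmetic a telemetry collector might apply to Junos telemetry data.

```python
# Illustrative only: deriving average link utilization from two streamed
# interface octet-counter samples, as a telemetry collector might do.
def utilization_pct(bytes_t0: int, bytes_t1: int, interval_s: float, speed_bps: float) -> float:
    """Average utilization (%) of a link over one sampling interval."""
    bits = (bytes_t1 - bytes_t0) * 8
    return 100.0 * bits / (interval_s * speed_bps)

# Two hypothetical samples of an et- interface counter, 10 seconds apart, on a 400GbE link.
print(f"{utilization_pct(1_250_000_000_000, 1_700_000_000_000, 10.0, 400e9):.1f}% utilized")
```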
Specifications
Hardware
Specification | QFX5130-32CD |
System throughput | Up to 25.6 Tbps (bidirectional) |
Forwarding capacity | 5.68 billion packets per second |
Port density | 128 x 10/25GbE; 64 x 50/200GbE; 32 x 40/100/400GbE |
SFP+/SFP28 | 2 small form-factor pluggable plus (SFP+) transceiver ports for in-band network management |
Dimensions (W x H x D) | 17.26 x 1.72 x 21.1 in. (43.8 x 4.3 x 53.59 cm) |
Rack units | 1 U |
Weight | 24.5 lb (11.11 kg) with power supplies and fans installed |
Operating system | Junos OS Evolved |
CPU | Intel Xeon D-1500 |
Power | 2 AC or DC power supplies (1+1 redundant) |
Cooling | 6 hot-swappable fans with front-to-back (AFO) or back-to-front (AFI) airflow (5+1 redundant) |
Total packet buffer | 132 MB |
Recommended software version | Junos OS Evolved 20.3R1 and later |
Warranty | Juniper standard one-year warranty |
Software
- MAC addresses per system: 160,000
- VLAN IDs: 4000 (QFX5130-32CD)
- Number of link aggregation groups (LAGs): 128
- Number of ports per LAG: 64
- Firewall filters: up to 80,000 ACLs
- IPv4 unicast routes: 1.24 million* prefixes; 160,000 host routes
- IPv6 unicast routes: 610,000 prefixes; 80,000 host routes
- ARP entries: 32,000 (tunnel mode); 64,000 (non-tunnel mode)
- Neighbor Discovery Protocol (NDP) entries: 32,000 (tunnel mode); 64,000 (non-tunnel mode)
- Generic routing encapsulation (GRE) tunnels: 1000
- Jumbo frame: 9216 bytes
- Traffic mirroring: 8 destination ports per switch
Layer 2 Features
- STP—IEEE 802.1D (802.1D-2004)
- Rapid Spanning Tree Protocol (RSTP) (IEEE 802.1w); MSTP (IEEE 802.1s)
- Bridge protocol data unit (BPDU) protect*
- Loop protect
- Root protect
- RSTP and VLAN Spanning Tree Protocol (VSTP) running concurrently
- VLAN—IEEE 802.1Q VLAN trunking
- Routed VLAN interface (RVI)
- Port-based VLAN
- MAC address filtering
- Static MAC address assignment for interface
- MAC learning disable
- Link Aggregation and Link Aggregation Control Protocol (LACP) (IEEE 802.3ad)
- IEEE 802.1AB Link Layer Discovery Protocol (LLDP)
Link Aggregation
- LAG load sharing algorithm—bridged or routed (unicast or multicast) traffic:
- IP: source IP address (SIP), destination IP address (DIP), TCP/UDP source port, TCP/UDP destination port
- L2 and non-IP: MAC SA, MAC DA, Ethertype, VLAN ID, source port
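The sketch below illustrates the principle behind the load-sharing fields listed above: hashing a flow's header fields to select a LAG member, so packets of one flow stay on one link while different flows spread across members. The CRC32-based hash is an illustrative stand-in, not the switch's actual hardware hash.

```python
# Illustrative LAG member selection: hash the header fields listed above and
# pick a member link. The real ASIC hash differs; this only demonstrates that
# one flow always maps to the same member while distinct flows spread out.
import zlib

def lag_member(src_ip: str, dst_ip: str, proto: int, sport: int, dport: int, members: int) -> int:
    key = f"{src_ip}|{dst_ip}|{proto}|{sport}|{dport}".encode()
    return zlib.crc32(key) % members  # deterministic per flow

print(lag_member("10.0.0.1", "10.0.1.9", 6, 49152, 443, members=4))
print(lag_member("10.0.0.2", "10.0.1.9", 6, 49153, 443, members=4))
```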
Layer 3 Features
- Static routing
- OSPF v1/v2
- OSPF v3
- Filter-based forwarding
- Virtual Router Redundancy Protocol (VRRP)*
- IPv6
- Virtual routers
- Loop-free alternate (LFA)
- BGP (Advanced Services or Premium Services license)
- IS-IS (Advanced Services or Premium Services license)
- Dynamic Host Configuration Protocol (DHCP) v4/v6 relay
- VR-aware DHCP
- IPv4/IPv6 over GRE tunnels (interface-based with decap/encap only)
Multicast
- Internet Group Management Protocol (IGMP) v1/v2
- Multicast Listener Discovery (MLD) v1/v2
- IGMP proxy, querier
- IGMP v1/v2/v3 snooping
- Intersubnet multicast using IRB interface
- MLD snooping
- Protocol Independent Multicast (PIM): PIM-SM, PIM-SSM, PIM-DM, PIM-Bidir
- Multicast Source Discovery Protocol (MSDP)
Security and Filters
- Secure interface login and password
- Secure boot
- RADIUS
- TACACS+
- Ingress and egress filters: Allow and deny, port filters, VLAN filters, and routed filters, including management port filters and loopback filters for control plane protection
- Filter actions: Logging, system logging, reject, mirror to an interface, counters, assign forwarding class, permit, drop, police, mark
- SSH v1, v2
- Static ARP support
- Storm control, port error disable, and autorecovery
- Control plane denial-of-service (DoS) protection
- Image rollback
Quality of Service (QoS)
- L2 and L3 QoS: Classification, rewrite, queuing
- Rate limiting:
- Ingress policing: 1 rate 2 color, 2 rate 3 color
- Egress policing: Policer, policer mark down action
- Egress shaping: Per queue, per port
- 12 hardware queues per port (8 unicast and 4 multicast)
- Strict priority queuing (LLQ), shaped-deficit weighted round-robin (SDWRR), weighted random early detection (WRED)
- 802.1p remarking
- Layer 2 classification criteria: Interface, MAC address, Ethertype, 802.1p, VLAN
- Congestion avoidance capabilities: WRED
- Trust IEEE 802.1p (ingress)
- Remarking of bridged packets
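To make the single-rate, two-color ingress policing listed above concrete, the following token-bucket sketch admits traffic within a configured rate and burst and colors the excess. The class name, rate, and burst values are arbitrary examples, not device defaults or Junos configuration.

```python
# Sketch of the single-rate, two-color policing concept: a token bucket admits
# packets within the configured rate/burst and marks (or drops) the excess.
class SingleRateTwoColorPolicer:
    def __init__(self, rate_bps: float, burst_bytes: int):
        self.rate = rate_bps / 8.0      # token fill rate in bytes/second
        self.burst = burst_bytes        # bucket depth in bytes
        self.tokens = float(burst_bytes)
        self.last = 0.0

    def color(self, now: float, pkt_len: int) -> str:
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_len <= self.tokens:
            self.tokens -= pkt_len
            return "green"              # conforming: forward
        return "red"                    # exceeding: drop or re-mark

# Jumbo frames arriving faster than a 100 Mbps contract: green until the
# burst allowance drains, then red.
policer = SingleRateTwoColorPolicer(rate_bps=1e8, burst_bytes=20_000)
print([policer.color(t * 0.0001, 9216) for t in range(5)])
```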
EVPN-VXLAN
- EVPN support with VXLAN transport
- EVPN pure Type-5 route support with symmetric inter-IRB routing
- All-active multihoming support for EVPN-VXLAN (ESI-LAG, also known as EVPN-LAG)
- Multiple EVIs (EVPN instances), also known as multiple MAC-VRFs, for MAC advertisement
- MAC-VRF (EVI) support for multiple EVPN service types: vlan-based, vlan-aware, vlan-bundle
- ARP/ND suppression (proxy ARP/ND)
- Ingress multicast replication
- Fabric-wide IGMPv2 snooping support using EVPN route Type-6
- IGMPv2 snooping support for L2 multihoming scenarios using EVPN route Type-7 and Type-8
- IP prefix advertisement using EVPN with VXLAN encapsulation
- Symmetric inter-IRB routing using RT2/MAC-IP (Integrated Routing and Bridging in Ethernet VPN (EVPN))
- IP Prefix Advertisement in Ethernet VPN (EVPN-VXLAN)
- Data Center Interconnect (DCI) using seamless EVPN-VXLAN to EVPN-VXLAN tunnel stitching (Interconnect Solution for Ethernet VPN (EVPN) Overlay Networks, RFC 9014)
- EVPN Optimized Inter-Subnet Multicast (OISM) forwarding (draft-ietf-bess-evpn-irb-mcast)
- Multicast assisted replication (AR), with AR-leaf and AR-spine roles (Optimized Ingress Replication Solution for EVPN, draft-ietf-bess-evpn-optimized-ir)
- Network Virtualization Overlay Solution Using Ethernet VPN (EVPN), RFC 8365: MAC-VRF instances supported with vlan-based, vlan-aware, and vlan-bundle service types in an EVPN-VXLAN fabric
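For reference, the VXLAN data plane used by the features above carries a 24-bit VNI in an 8-byte header (RFC 7348). The sketch below builds and parses that header in Python; the VNI value is an arbitrary example.

```python
# VXLAN header per RFC 7348: flags(1 byte) + reserved(3) + VNI(3) + reserved(1).
# This sketch only illustrates the encapsulation format; the VNI is arbitrary.
import struct

VXLAN_FLAG_VNI_VALID = 0x08  # "I" flag: VNI field is valid

def build_vxlan_header(vni: int) -> bytes:
    return struct.pack("!B3s3sB", VXLAN_FLAG_VNI_VALID, b"\x00" * 3,
                       vni.to_bytes(3, "big"), 0)

def parse_vni(header: bytes) -> int:
    return int.from_bytes(header[4:7], "big")

hdr = build_vxlan_header(vni=5010)
assert len(hdr) == 8 and parse_vni(hdr) == 5010
print(hdr.hex())
```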
Data Center Bridging (DCB)
- Explicit congestion notification (ECN)
- Priority-based flow control (PFC)—IEEE 802.1Qbb
High Availability
- Bidirectional Forwarding Detection (BFD)
- Uplink failure detection (UFD)*
Visibility and Analytics
- Switched Port Analyzer (SPAN)
- Remote SPAN (RSPAN)
- Encapsulated Remote SPAN (ERSPAN)
- sFlow v5
- Junos telemetry interface
Management and Operations
- Role-based CLI management and access
- CLI via console, telnet, or SSH
- Extended ping and traceroute
- Junos OS Evolved configuration rescue and rollback
- SNMP v1/v2/v3
- Junos OS Evolved XML management protocol
- High frequency statistics collection
- Automation and orchestration
- Zero-touch provisioning (ZTP)
- Python
- Junos OS Evolved event, commit, and OP scripts
- Juniper Apstra Management, Monitoring, and Analytics for Data Center Fabrics
- Juniper Mist Wired Assurance for Campus
Standards Compliance
IEEE Standards
- IEEE 802.1D
- IEEE 802.1w
- IEEE 802.1
- IEEE 802.1Q
- IEEE 802.1p
- IEEE 802.1ad
- IEEE 802.3ad
- IEEE 802.1AB
- IEEE 802.3x
- IEEE 802.1Qbb
- IEEE 802.1Qaz
T11 Standards
- INCITS T11 FC-BB-5
Environmental Ranges
Parameters | QFX5130-32CD |
Operating temperature | 32° to 104° F (0° to 40° C) |
Storage temperature | -40° to 158° F (-40° to 70° C) |
Operating altitude | Up to 6000 feet (1828.8 meters) |
Relative humidity operating | 5 to 90% (noncondensing) |
Relative humidity nonoperating | 5 to 95% (noncondensing) |
Seismic | Designed to meet GR-63, Zone 4 earthquake requirements |
Power Consumption
Parameters | QFX5130-32CD |
Maximum power draw | 220-240 V: 839 W |
Typical power draw | 220-240 V: 373 W |
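A back-of-the-envelope calculation based on the typical draw above gives per-port power and annual energy. The electricity rate used for the cost estimate is an assumed example value, not a datasheet figure.

```python
# Back-of-the-envelope figures from the typical draw above. The electricity
# rate is an assumed example value, not part of the datasheet.
TYPICAL_W = 373
PORTS_400G = 32

watts_per_400g_port = TYPICAL_W / PORTS_400G      # ~11.7 W per 400GbE port
annual_kwh = TYPICAL_W * 24 * 365 / 1000          # ~3267 kWh per year
annual_cost = annual_kwh * 0.10                   # at an assumed $0.10/kWh

print(f"{watts_per_400g_port:.1f} W/port, {annual_kwh:.0f} kWh/yr, ~${annual_cost:.0f}/yr")
```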
Safety and Compliance
Safety
- CAN/CSA-C22.2 No. 60950-1 Information Technology Equipment—Safety
- UL 60950-1 Information Technology Equipment—Safety
- EN 60950-1 Information Technology Equipment—Safety
- IEC 60950-1 Information Technology Equipment—Safety (All country deviations)
- EN 60825-1 Safety of Laser Products—Part 1: Equipment Classification
Security
- FIPS/CC*
- TAA
Electromagnetic Compatibility
- 47 CFR Part 15, (FCC) Class A
- ICES-003 Class A
- EN 55022/EN 55032, Class A
- CISPR 22/CISPR 32, Class A
- EN 55024
- CISPR 24
- EN 300 386
- VCCI Class A
- AS/NZS CISPR 32, Class A
- KN32/KN35
- BSMI CNS 13438, Class A
- EN 61000-3-2
- EN 61000-3-3
- ETSI
- ETSI EN 300 019: Environmental Conditions & Environmental Tests for Telecommunications Equipment
- ETSI EN 300 019-2-1 (2000)—Storage
- ETSI EN 300 019-2-2 (1999)—Transportation
- ETSI EN 300 019-2-3 (2003)—Stationary Use at Weather- protected Locations
- ETSI EN 300 019-2-4 (2003)—Stationary Use at Non Weather-protected Locations
- ETS 300753 (1997)—Acoustic noise emitted by telecommunications equipment
Telco
- Common Language Equipment Identifier (CLEI) code
Environmental Compliance
- Restriction of Hazardous Substances (ROHS) 6/6
- Silver PSU efficiency
- Recycled material
- Waste Electronics and Electrical Equipment (WEEE)
- Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH)
- China Restriction of Hazardous Substances (ROHS)
Juniper Networks Services and Support
Juniper Networks is the leader in performance-enabling services that are designed to accelerate, extend, and optimize your high-performance network. Our services allow you to maximize operational efficiency while reducing costs and minimizing risk, achieving a faster time to value for your network. Juniper Networks ensures operational excellence by optimizing the network to maintain required levels of performance, reliability, and availability. For more details, please visit www.juniper.net/us/en/products-services.
Product Number | Description |
Hardware | |
QFX5130-32CD-AFI | QFX5130 (hardware with base software), 32 QSFP-DD/QSFP+/QSFP28 ports, redundant fans, 2 AC power supplies, back-to-front airflow |
QFX5130-32CD-AFO | QFX5130 (hardware only; software services sold separately), 32 QSFP-DD/QSFP+/QSFP28 ports, redundant fans, 2 AC power supplies, front-to-back airflow |
QFX5130-32CD-D-AFI | QFX5130 (hardware only; software services sold separately), 32 QSFP-DD/QSFP+/QSFP28 ports, redundant fans, 2 DC power supplies, back-to-front airflow |
QFX5130-32CD-D-AFO | QFX5130 (hardware only; software services sold separately), 32 QSFP-DD/QSFP+/QSFP28 ports, redundant fans, 2 DC power supplies, front-to-back airflow |
JPSU-1600W-1UACAFI | QFX5130-32CD-AFI 1 U AC power supply unit |
JPSU-1600W-1UACAFO | QFX5130-32CD-AFO 1 U AC power supply unit |
JPSU-1600W-1UDCAFI | QFX5130-32CD-D-AFI 1 U DC power supply unit |
JPSU-1600W-1UDCAFO | QFX5130-32CD-D-AFO 1 U DC power supply unit |
QFX5220-32CD-4PRMK | 4-Post Rack Mount Kit for QFX5130-32CD |
QFX5220-32CD-FANAI | Airflow in (AFI) back-to-front airflow fans for QFX5130-32CD |
QFX5220-32CD-FANAO | Airflow out (AFO) front-to-back airflow fans for QFX5130-32CD |
Software License SKUs | |
S-QFX5K-C3-A1-X (X=3,5) | Base L3 Software Subscription (X Years; X=3,5) License for QFX5130-32CD |
S-QFX5K-C3-A2-X (X=3,5) | Advanced Software Subscription (X Years; X=3,5) License for QFX5130-32CD |
S-QFX5K-C3-P1-X (X=3,5) | Premium Software Subscription (X Years; X=3,5) License for QFX5130-32CD |
About Juniper Networks
At Juniper Networks, we are dedicated to dramatically simplifying network operations and driving superior experiences for end users. Our solutions deliver industry-leading insight, automation, security and AI to drive real business results. We believe that powering connections will bring us closer together while empowering us all to solve the world’s greatest challenges of well-being, sustainability and equality.
1000680 - 007 - EN OCTOBER 2023