NVIDIA/Mellanox MMA1B00-E100 Compatible 100G QSFP28 SR4 Optical Transceiver – 100m, MMF, 850nm, MPO-12, DOM
The MMA1B00-E100 compatible QSFP28 optical transceiver is a high-performance, 4-channel 100G SR4 module designed for short-reach InfiniBand EDR and 100GbE applications. It supports transmission distances up to 100 meters over OM4 multimode fiber (MMF) using an MPO-12/UPC connector.
Each of the four channels operates at data rates up to 25.78Gb/s, delivering an aggregate bandwidth of 100Gbps. The module is fully compliant with IEEE 802.3bm (100GBASE-SR4), SFF-8636, and the QSFP28 MSA specifications, ensuring broad compatibility and ease of integration.
With full Digital Optical Monitoring (DOM) via I²C interface, this pluggable transceiver offers real-time diagnostics of key operating parameters. Qualified for use in InfiniBand EDR systems, it is ideal for high-performance computing, data center aggregation, and GPU-based architectures. Commonly deployed in Mellanox SB7800 switches and ConnectX-5 adapters.
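The DOM data is exposed through the module's I²C EEPROM using the register map defined in SFF-8636. As a hedged sketch (the register offsets below follow the SFF-8636 lower memory page; the raw read itself is platform-specific, e.g. via `ethtool -m` on Linux), a few of the key diagnostics can be decoded like this:

```python
import struct

# Decode a subset of the SFF-8636 lower memory page (bytes 0-127), as read
# from the transceiver's I2C EEPROM (7-bit address 0x50). Offsets per SFF-8636:
#   bytes 22-23: module temperature, signed 16-bit, units of 1/256 degC
#   bytes 26-27: supply voltage, unsigned 16-bit, units of 100 uV
#   bytes 34-41: RX optical power, one unsigned 16-bit word per lane, 0.1 uW units
def decode_dom(page: bytes) -> dict:
    temp_c = struct.unpack_from(">h", page, 22)[0] / 256.0
    vcc_v = struct.unpack_from(">H", page, 26)[0] * 100e-6
    rx_mw = [struct.unpack_from(">H", page, 34 + 2 * ch)[0] * 1e-4
             for ch in range(4)]
    return {"temperature_c": temp_c, "vcc_v": vcc_v, "rx_power_mw": rx_mw}

# Synthetic page for illustration: 30.5 degC, 3.3 V, 0.5 mW on all four RX lanes
page = bytearray(128)
struct.pack_into(">h", page, 22, int(30.5 * 256))
struct.pack_into(">H", page, 26, 33000)              # 3.3 V in 100 uV units
for ch in range(4):
    struct.pack_into(">H", page, 34 + 2 * ch, 5000)  # 0.5 mW in 0.1 uW units

print(decode_dom(bytes(page)))
```

Host software typically compares these readings against the alarm and warning thresholds stored elsewhere in the EEPROM to flag degrading links before they fail.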
Technical Specifications – NVIDIA/Mellanox MMA1B00-E100 Compatible 100G QSFP28 SR4 Transceiver
• Compatibility: NVIDIA/Mellanox InfiniBand EDR
• Model: MMA1B00-E100
• Form Factor: QSFP28
• Maximum Data Rate: 103.125Gbps (4 × 25.78Gbps)
• Wavelength: 850nm
• Maximum Cable Distance:
  ◦ 100 meters over OM4 MMF
  ◦ 70 meters over OM3 MMF
• Connector: MTP/MPO-12 UPC
• Fiber Type (Media): Multimode Fiber (MMF)
• Transmitter Type: VCSEL (Vertical-Cavity Surface-Emitting Laser)
• Receiver Type: PIN photodiode
Optical & Electrical Performance
• Transmit Power (TX): -8.4 to 2.4dBm
• Receiver Sensitivity: < -10.3dBm
• Power Budget: 1.9dB
• Receiver Overload: 2.4dBm
• Extinction Ratio: > 2dB
• Power Consumption: ≤ 2.5W
• Modulation (Electrical): 4 × 25Gb/s NRZ
• Modulation (Optical): 4 × 25Gb/s NRZ
• DDM/DOM: Supported (Digital Diagnostics Monitoring)
• Operating Temperature: 0°C to 70°C
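The 1.9dB power budget quoted above can be sanity-checked from the other figures: it is the worst-case transmit power minus the receiver sensitivity. A minimal sketch of that arithmetic, plus the dBm↔mW conversion used when interpreting DOM power readouts (which are reported in mW):

```python
import math

# Worst-case link power budget = minimum TX power - receiver sensitivity (dBm).
tx_min_dbm = -8.4    # lower end of the TX power range above
rx_sens_dbm = -10.3  # receiver sensitivity above
budget_db = tx_min_dbm - rx_sens_dbm
print(f"link power budget: {budget_db:.1f} dB")  # matches the 1.9dB spec

# dBm <-> mW conversions for interpreting DOM optical power values.
def dbm_to_mw(dbm: float) -> float:
    return 10 ** (dbm / 10)

def mw_to_dbm(mw: float) -> float:
    return 10 * math.log10(mw)

print(f"{tx_min_dbm} dBm = {dbm_to_mw(tx_min_dbm):.3f} mW")
```

The budget must cover connector losses and fiber attenuation across the link, which is why the reach drops from 100m on OM4 to 70m on the lower-bandwidth OM3.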
Standards & Applications
• Protocols:
  ◦ QSFP28 MSA
  ◦ SFF-8636
  ◦ IEEE 802.3bm
• Applications:
  ◦ InfiniBand 100G EDR
  ◦ High-speed interconnects in data centers, GPU clusters, and HPC environments