PCIe Gen 1 thru 7 internal cables

By ecady | August 25, 2021

PCIe interface signaling and physical-layer implementation has usually involved mostly PCBAs and various edge connectors. However, each signaling-rate generation and its associated connectivity have had standard and non-standard cabling solutions that support a variety of applications and related topologies. Even the older ISA, EISA, PCI, and PCI-X interface standards implemented various internal flat cables using their older-generation edge connectors, well before the various PCIe CEM edge connectors. Many of those early applications included internal cabled power-distribution options, often combined within the IO ribbon-cable legs.

Newer standards like CXL 1.0 also use the latest PCIe CEM connector, and thus even more internal cables, as PCIe 6.0 and PCIe 7.0 applications may roll out this year or early next year.

The earliest PCIe 1.0 applications included production testbeds for PCIe add-in boards, motherboards, server chassis, and backplane extenders. Attention to wire termination became even more important.

PCIe 2.0 x8 internal cables at 5 GT/s per lane were attached to paddleboards with PCIe CEM standard edge connectors, usually forming a full-bus extender in a loose-bundle, flat-to-oval 40G link cable assembly. New and better twin-axial cable supported longer link reaches; see the example from Meritec, circa 2007, below.
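As a quick back-of-the-envelope aside, the 40G figure is simply the raw per-lane rate times the lane count. The sketch below tabulates raw versus post-encoding throughput for a few common generations and lane widths, using the published per-lane rates and line encodings; Gen 6/7 FLIT and FEC overhead is neglected for simplicity.

```python
# Back-of-the-envelope PCIe link bandwidth per generation.
# Per-lane rates and encodings are the published values; Gen 6/7 FLIT/FEC
# overhead is neglected here for simplicity.

GENERATIONS = {
    # gen: (GT/s per lane, encoding efficiency)
    "1.0": (2.5, 8 / 10),     # 8b/10b
    "2.0": (5.0, 8 / 10),     # 8b/10b
    "3.0": (8.0, 128 / 130),  # 128b/130b
    "4.0": (16.0, 128 / 130),
    "5.0": (32.0, 128 / 130),
    "6.0": (64.0, 1.0),       # PAM4, 1b/1b FLIT mode (FEC overhead ignored)
    "7.0": (128.0, 1.0),
}

def link_bandwidth_gbps(gen: str, lanes: int) -> tuple[float, float]:
    """Return (raw, effective) aggregate bandwidth in Gb/s for a link."""
    rate, efficiency = GENERATIONS[gen]
    raw = rate * lanes
    return raw, raw * efficiency

if __name__ == "__main__":
    for gen, lanes in [("2.0", 8), ("4.0", 16), ("6.0", 16)]:
        raw, eff = link_bandwidth_gbps(gen, lanes)
        print(f"PCIe {gen} x{lanes}: {raw:g}G raw, {eff:.1f}G effective")
    # PCIe 2.0 x8  -> 40G raw, the "40G link" cable assembly above
    # PCIe 4.0 x16 -> 256G raw, the "256G links" mentioned later
```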

PCIe 2.0 applications included various embedded-computer planar cables for cPCI and ATCA interconnects across various form factors and topologies. Well-shielded designs supported external flat and round full-bus twin-axial cable applications.

PCIe 3.0 internal cable solutions were prolific, as many new types of applications emerged beyond embedded bus extenders and test adapters. With better crosstalk control, internal flat foil-shielded twin-axial cables were a major step up in the interconnect solution set within server and storage boxes at that time. Application-specific foldable solutions and shipments have extended into PCIe 4.0 inside-the-box applications, and these internal flat foil-shielded cables are still used for many 16 GT/s, 25 Gb/s, and 32 GT/s NRZ-per-lane applications.

Later, most tightly bent foil-shielded twin-axial cable assemblies could not achieve well-margined link budget and reach requirements, especially at 56G PAM4 or 112G PAM4 per lane and higher rates, because each creased and bent fold consumes 0.5 dB or more of the link budget. Straight internal cable assemblies performed better.
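To see why those folds matter, here is a minimal sketch of how fold penalties eat into the reach of an internal cable assembly. The channel budget, connector loss, and per-meter cable loss below are assumed, illustrative numbers, not values from this article; only the roughly 0.5 dB per fold comes from the text above.

```python
# Rough illustration of fold penalties versus reach. The budget, connector,
# and per-meter loss numbers are assumed placeholders for illustration only.

CHANNEL_BUDGET_DB = 36.0      # assumed end-to-end insertion-loss budget at Nyquist
CONNECTOR_LOSS_DB = 1.5       # assumed loss per connector/termination
CABLE_LOSS_DB_PER_M = 20.0    # assumed raw-cable loss per meter at Nyquist
FOLD_PENALTY_DB = 0.5         # "0.5 dB or more" per creased fold, per the text

def max_reach_m(folds: int) -> float:
    """Estimate the cable length (m) that still fits the loss budget."""
    remaining = CHANNEL_BUDGET_DB - 2 * CONNECTOR_LOSS_DB - folds * FOLD_PENALTY_DB
    return max(remaining, 0.0) / CABLE_LOSS_DB_PER_M

for folds in (0, 2, 4, 8):
    print(f"{folds} folds -> ~{max_reach_m(folds):.2f} m of cable budget left")
```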

PCIe 4.0 16 GT/s NRZ per lane x16 internal cables with CEM connectors supported 256G links circa 2017. Finer-pitch ribbon cables required better wire-termination design, especially symmetrical SI structures and processes for ground-wire termination. To facilitate better and faster automated optical inspection on in-line production test equipment, manufacturers often use clear polymer materials like silicone.

PCIe 5.0 x16 to GenZ 4C 1.1 connector adapter cable assemblies have various power-delivery options compatible with PCIe CEM r5.0 32 GT/s NRZ-per-lane edge connectors. The five RSVD pins are important in supporting the so-called Flex Bus system. 12 V and 48 V power options are specified as internal cabled options. The GenZ SFF TA 1002 smaller form-factor connector types work fine at 56G PAM4, and several work at 112G PAM4 per lane. See the concept image below, circa 2019.
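The 12 V versus 48 V choice matters mostly for conductor sizing and conduction loss. The quick comparison below is a sketch with assumed example values (the 300 W load and 20 mΩ cable resistance are not figures from this article); it simply shows how the higher voltage cuts the current an internal power cable must carry.

```python
# Why a 48 V internal power option is attractive: for the same delivered power,
# current (and I^2*R conduction loss) drops sharply. The 300 W load and 20 mOhm
# round-trip cable resistance are assumed example values.

LOAD_W = 300.0
CABLE_RESISTANCE_OHM = 0.020  # assumed round-trip resistance of the power leg

for volts in (12.0, 48.0):
    amps = LOAD_W / volts
    loss_w = amps ** 2 * CABLE_RESISTANCE_OHM
    print(f"{volts:>4.0f} V rail: {amps:5.1f} A, ~{loss_w:4.1f} W lost in the cable")
```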

PCIe 6.0 64 GT/s PAM4-per-lane specifications are nearly complete and scheduled for release within 2021. Many new internal cable assembly and connector applications and products are being developed; for example, PCIe 6.0 CEM x16 connector to multiple M.2 connector adapter cables or harnesses are in development. One wonders whether the SFF TA 1002 x32 connector, or another type, will become the next PCIe 7.0 CEM connector for the newest PCIe internal cable designs.
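Such an x16-to-multiple-M.2 harness is essentially a lane-bifurcation exercise. The sketch below shows one hypothetical way the sixteen lanes of a CEM slot could be split into four x4 M.2 legs; it is illustrative only, not a published pinout or mapping.

```python
# Hypothetical bifurcation of a x16 CEM slot into four x4 M.2 legs.
# The grouping below is illustrative only; real harnesses follow the host's
# supported bifurcation settings and the actual connector pinouts.

LANES_PER_M2 = 4
TOTAL_LANES = 16

def bifurcate(total: int, group: int) -> list[list[int]]:
    """Split lane numbers 0..total-1 into contiguous groups of `group` lanes."""
    return [list(range(i, i + group)) for i in range(0, total, group)]

for port, lanes in enumerate(bifurcate(TOTAL_LANES, LANES_PER_M2)):
    print(f"M.2 leg {port}: CEM lanes {lanes}")
```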

New smaller form-factor system packaging requirements include very hot interiors with tighter routing requirements inside the box. Many more internal high-speed IO cable assemblies are being designed for two generations of performance capability, such as 53G & 106G, 56G & 112G, or 128G & 224G. 8-, 16-, and 32-lane links seem to have very good growth forecasts, especially internal pluggable connector cables with an SFF TA 1002 type connector on the other end.

Fortunately, at least one major supplier has developed a new type of very high-performance twinax flat raw cable capable of supporting PCIe 6.0 64 GT/s and potentially PCIe 7.0 128 GT/s internal cable assemblies, as well as external 56/112G-per-lane DAC applications. Luxshare Tech’s new Optamax™ twinax cable has been proven to support accurate and stable SI performance when folded as well as in active-bending applications. They offer a large amount of test data that exceeds many corporate testing regimes.

It seems their simulation models were very close to their actual physical measurements, even through all the considerable testing regimes. This is a special achievement, as the industry has struggled a lot over the last two years with faulty 100G signaling modeling and simulations that differ markedly from subsequent real measurement results, so kudos to their team!
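One common way to express that kind of model-to-measurement correlation is the worst-case or RMS deviation between simulated and measured insertion loss over the band of interest. The sketch below does this with invented curves; the frequency points and loss values are made up purely for illustration.

```python
# Sketch of model-to-measurement correlation on an insertion-loss curve.
# The simulated and measured values below are invented for illustration only.

freqs_ghz = [4, 8, 12, 16, 20, 24, 28]                       # frequency points, GHz
sim_il_db = [-3.1, -5.8, -8.4, -11.2, -14.0, -17.1, -20.3]   # simulated S21, dB
meas_il_db = [-3.0, -5.9, -8.6, -11.0, -14.3, -17.4, -20.1]  # measured S21, dB

deviations = [abs(s - m) for s, m in zip(sim_il_db, meas_il_db)]
rms_dev = (sum(d * d for d in deviations) / len(deviations)) ** 0.5

print(f"Worst-case deviation: {max(deviations):.2f} dB")
print(f"RMS deviation:        {rms_dev:.2f} dB")
```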

This type of breakthrough-performance raw cable enables the various tight routings of inside-the-box cable assemblies within smaller and tighter form factors like PECL, EDSFF, OCP NIC, Ruler, and several others.

It seems that using the best and latest conductor and dielectric insulation materials, symmetric designs, tolerance control, process control, full inline SI testing, automated optical inspection, and histograms is supporting a growing family of cable types that have been put through multiple Telcordia, TIA/EIA, ISO, and Tier 1 user-lab testing regimes per application set. The optimized wrap appears to protect the signal right to its termination points, and the dielectric insulation has fine symmetrical memory. See the balanced option, 2021, below.

The Optamax Twinax family feature set includes:

PCIe 7.0 128 GT/s PAM4-per-lane internal cable solutions may include inside-the-box optical interconnect options like COBO OBO or different CPO types. The copper Optamax twinax internal cable could probably support potential PCIe 7.0 128 GT/s and 128G PAM4 short-reach inside-the-box, and maybe inside-the-rack, applications. For now, this is at least a nascent Birds of a Feather development effort.

Some observations

The trend toward higher-speed signaling and wider 16- and 32-lane IO PHY interfaces will likely greatly increase the requirement for more power and control circuits. A smaller-footprint GenZ internal interconnect system that supports 256-lane interfaces may better support some new hyperscaler data-center systems, while PCIe supports only 128 lanes.

Higher-volume internal cable usage may demand very fast production-line ramps, similar to consumer high-speed cabling manufacturing methodologies. Sources need to deliver quality and reliability while meeting product lifecycle cost and margin goals.

Currently, the CXL accelerator link uses the latest PCIe CEM connector revision. This CXL link is primarily an internal connector and cabling application, as GenZ has an agreement with the CXL Consortium to serve primarily as an external link interface for inter-rack topologies. But will CXL developers also use SFF TA 1002 connectors and cables, or other types, in order to achieve PCIe 7.0 128 GT/s-per-lane performance? And will GenZ, or both, take over from PCIe?
