Signal Integrity, Then and Now

Monday, June 15, 2020
When I compare "Signal Integrity" now to what we practiced in the 1980s, it is unrecognizable. Integration has consumed the old tasks, giving way to new ones, and so the next generation will not practice the Signal Integrity of the previous.

Not surprisingly, SI methodology has adapted to keep pace with a 10,000x increase in data rate and a 100,000x increase in integration. Indeed, I've seen major shifts in the practice of SI about every 10 years. Amidst that much change, it's important to understand what is still relevant and what isn't. Since the older practice of Signal Integrity is well-documented elsewhere, here we'll take a quick and instructive look at how and why SI changed.

In the Beginning

In the earliest days of SI, signal integrity problems surprised engineers. Why did the system work with Vendor A's component but not with Vendor B's pin-compatible version of the same thing? Why did certain data patterns cause failures? Why was this PCB route with fewer layers non-functional when the netlist hadn't changed? Why did this component pass the IC tester yet fail in the customer's system?

Prior to large-scale integration, static timing was done at the PCB level and signaling delays were ignored. In time, edge rates became fast enough for signals to "see" the PCB traces, and "Signal Integrity" was born. PCB-level interconnect now had to be managed and accounted for; it wasn't enough to simply connect traces, we now had to think about how they were connected.

Truth be told, many SI groups were on-call firefighters waiting for the next elusive issue to show up. "SI Methodology" was an oxymoron, as most would acknowledge work needed to be done, but few were sure what that work was. When the urgent issues slowed down, we worked on "process" issues to ensure designs were "done right the first time". I even snuck a short break in the action to write the IBIS Specification, but then had to get back to work.

My favorite photo from the time showed a few engineers holding a cardboard sign that read "Will Do Analog Simulation for Food". The term "Signal Integrity" was not yet established, so we differentiated ourselves by saying we work on "the analog side of digital" - something particularly relevant to asynchronous signaling.

Asynchronous or Synchronous?

Today's systems are synchronous (i.e., signals are qualified by a "clock"), but this was not always true. Many of my first projects used "asynchronous" signals that qualify themselves; high means one thing and low means another. And so all edge transitions must be clean and "monotonic" to ensure logic does not trigger falsely. Furthermore, as signals flow through paths of varying delay, "race conditions" can cause intermittent logic glitches. Minimum and maximum delays must be juggled carefully because signals are always valid, all the time (see the sketch below). Although we had a few years of debate about which type of signaling was more efficient, increasing asynchronous complexity caused synchronous systems to prevail.
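
To make that min/max juggling concrete, here is a minimal sketch in Python. The idea is simply that when the arrival windows of two converging asynchronous signals overlap, their arrival order is indeterminate and the logic can glitch. The delay values are illustrative assumptions, not from any particular design.

# A minimal sketch of asynchronous race checking. Delay values below are
# illustrative assumptions.

def race_possible(a_min_ns: float, a_max_ns: float,
                  b_min_ns: float, b_max_ns: float) -> bool:
    """True if the two arrival windows overlap - the arrival order is
    then indeterminate and converging logic can glitch."""
    return a_min_ns <= b_max_ns and b_min_ns <= a_max_ns

# Data can arrive between 4 and 7 ns; its qualifier between 6 and 9 ns.
# The windows overlap, so sometimes data leads and sometimes it lags.
print(race_possible(4.0, 7.0, 6.0, 9.0))   # True -> potential race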

Synchronous systems define moments in time when digital signals are valid and can be sampled. If you think about it, the idea that signals are not always valid is partly a concession to Signal Integrity. Signals need time to get to where they're going and then to stabilize. We call this "time of flight" or "flight time". There was a time when most digital engineers didn't believe flight time existed; signals were either a "1" or a "0" - at least that's how they looked on a Logic Analyzer.

Enter the Nanosecond

One constant throughout the digital decades is signal propagation speed in PCBs, or about 6" per nanosecond. Because 6" is the average length of a PCB route, flight times remained irrelevant until the era of the nanosecond. When did that occur? Although we did simulate 8 MHz designs, at 33 MHz one nanosecond consumed 3% of the cycle time and became relevant. As this was the frequency of the original PCI bus, I inherited the task of persuading the design community that signals need time to propagate and hence require a timing budget. Furthermore, signals in a system transition differently than the clean "standard load" RC-like timings assumed by the IC design process.
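
As a rough illustration of that tipping point, the short Python sketch below applies the 3% rule of thumb and the 6"-per-nanosecond constant from above to a few bus frequencies. The specific frequencies and the 3% threshold are the ones discussed here; treat this as a back-of-the-envelope check, not a timing methodology.

# A minimal sketch of when a route's flight time becomes "relevant",
# assuming the ~3% rule of thumb described above.

PROPAGATION_IN_PER_NS = 6.0   # signal speed in PCB traces, ~6" per ns
RELEVANCE_FRACTION = 0.03     # flight time "matters" at ~3% of the cycle

def flight_time_ns(route_length_in: float) -> float:
    """Time of flight for a PCB route of the given length, in ns."""
    return route_length_in / PROPAGATION_IN_PER_NS

def is_relevant(route_length_in: float, bus_mhz: float) -> bool:
    """True if flight time exceeds 3% of the bus cycle time."""
    cycle_ns = 1e3 / bus_mhz
    return flight_time_ns(route_length_in) >= RELEVANCE_FRACTION * cycle_ns

for mhz in (8, 33, 66, 100):
    print(f'{mhz:>4} MHz: 6" route relevant = {is_relevant(6.0, mhz)}')
# 8 MHz  -> False (1 ns is ~0.8% of a 125 ns cycle)
# 33 MHz -> True  (1 ns is ~3.3% of a 30 ns cycle)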

System flight times and standard load timings are inherently different, as shown in Figure 1. Driver (red), Receiver (blue), and Standard Load (green) waveforms reveal that IC design's timing assumption (green's threshold crossing) matches neither the driver nor the receiver threshold crossings - not even to within several nanoseconds. As such, to resolve system-level timing paths, a new SI task quantified the time difference between driving the standard load and the system implementation (see the sketch following Figure 1). This is why SI tools optionally extract the standard load from the IBIS file and simulate it in parallel with the system simulation.

Figure 1: Comparing Driver (red), Receiver (blue) and Standard Load (green) Waveforms
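
For illustration, here is a hedged sketch of that flight-time correction in Python. The waveform arrays and the threshold are hypothetical placeholders; in an actual flow the tool derives the standard-load waveform from the IBIS file and the receiver waveform from the system simulation, then differences the two threshold crossings.

# A sketch of the standard-load flight-time correction, assuming we have
# time/voltage arrays from two simulations and a logic threshold (vth).
# All inputs are hypothetical placeholders.

def crossing_time(times, volts, threshold):
    """Linearly interpolate the first time the waveform crosses threshold."""
    for i in range(len(times) - 1):
        v0, v1 = volts[i], volts[i + 1]
        if (v0 - threshold) * (v1 - threshold) <= 0 and v0 != v1:
            t0, t1 = times[i], times[i + 1]
            return t0 + (threshold - v0) * (t1 - t0) / (v1 - v0)
    raise ValueError("waveform never crosses threshold")

def flight_time(t_sys, v_recv, t_std, v_std, vth):
    """Flight time = system receiver crossing minus standard-load crossing."""
    return crossing_time(t_sys, v_recv, vth) - crossing_time(t_std, v_std, vth)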

Standard load versus system-level SI simulation is synonymous with the era of "Common Clock" interfacing, where a single clock is distributed to all devices. In contrast with asynchronous systems, clock edges now determine when signals are driven and when they are latched as valid. During the common clock era, I encountered my first fractions of nanoseconds working out the timings for Intel's 66 MHz processor. Using my 3% metric, a ½ nanosecond is now relevant (1/66 MHz * 3% ~= 0.5 ns), launching us into the world of picoseconds. As data rates increased, measurement tools and simulators quickly rose to the challenges of accurately handling picoseconds.

Common Clock methodology prevailed for almost a decade until around 100 MHz, when flight times and clock distribution skews consumed too much of the cycle. A new method of signaling with a fresh approach to clocking was required.

Source-Synchronous and Slicing Picoseconds

"Source-Synchronous" signaling steps in when Common Clock skews, cycle times and flight times become irreconcilable. In source-synchronous signaling each device now transmits both signals and clocks, thus replacing common clock skews with matched output delays. However, to remove signal propagation skews, PCB layout must match route lengths (flight times). Decades later, thousands of DDR DQ (data) and DQS (strobe) signals have been carefully matched, proving we have fully embraced - if not exploited - the idea that signals require time to propagate through PCB traces.

Given the interplay of IC and system-level variables, source-synchronous timing and implementation are complex. Multiple design teams - IC, package, module, layout, system, and firmware - must implement their tasks correctly. Thankfully much of the complexity is automated in EDA tools, and so functional implementations abound. Nevertheless, design success hinges on tens of picoseconds - bringing us to the edge of what we can reliably engineer.

Clever engineers continue to find ways to increase the frequency of source-synchronous interfaces, as exemplified by DDR memory's 20x data rate increase. Yet DDR not only brings implementation challenges, it also requires significant PCB real estate. And this is where the problem lies. Around the year 2000 - the same year DDR was introduced - gates became cheaper than PCB traces. Integration invited signal integrity inside the chip, becoming a key enabler for serial links.

Swimming in the Serial Stream

When high-speed attached itself to IC integration, data rates scaled rapidly. Welcome to serial interfacing: technology that redefines high-speed. In less time than it took DDR to advance 20x, serial links have increased data rates 50x. As Figure 2 shows, serial technology defined the knee of the high-speed curve (red). The same PCB traces engineers once dismissed as irrelevant now routinely hold 5 bits in every inch of trace (green) - a figure worked out in the sketch after Figure 2.

Figure 2: High-Speed Metrics, Then and Now
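
The 5-bits-per-inch figure falls out of the same 6"-per-nanosecond constant. A minimal Python sketch, assuming a 30 Gbps data rate for illustration:

# How many bits physically occupy an inch of PCB trace at a given data
# rate, assuming ~6" per ns propagation. The 30 Gbps rate is illustrative.

PROPAGATION_IN_PER_NS = 6.0

def bits_per_inch(data_rate_gbps: float) -> float:
    """Bits in flight per inch of trace at the given data rate."""
    bit_time_ns = 1.0 / data_rate_gbps        # 1 Gbps -> 1 ns per bit
    bit_length_in = PROPAGATION_IN_PER_NS * bit_time_ns
    return 1.0 / bit_length_in

print(bits_per_inch(30.0))   # 5.0 -> five bits in every inch of trace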

Granted "Serial" is a bit of a misnomer. Serial links routinely parallelize out to x16 wide, yet DDR's careful matching of lengths and skews disappear. While some continue to match serial lanes out of habit, they soon find out the journey into serial is one of possibility rather than complexity - provided basic items are addressed. Integration has fueled serial advances in the form of equalization, even though it is rarely exploited to its fullest potential. And that, in itself, is a testament to the extensibility of serial technology.

While serial links required new SI design skills, model formats, and tools, the majority of the disruption is over. That accomplished, we're free to focus on a new and simplified set of SI skills, as articulated throughout this series on Signal Integrity, In Practice.

In Conclusion

For good and natural reasons, high-speed interfacing, and hence the practice of Signal Integrity, has advanced through a series of transitions. If you were an electron running around in PCB traces over the last forty years, you would have found yourself first struggling for identity, then racing against your contemporaries, and finally burning out only to be revived after arriving at your destination. Sounds like the normal course of life, doesn't it?

SiSoft engineers have been center stage in every SI era and transition, first booting up the world of SI with common clock tools and then setting the benchmark for source-synchronous methodology automation in QSI. Not much for sitting still, they then established award-winning tools for the serial revolution in QCD. While someone once asked me if SiSoft's tag line "We Are Signal Integrity" is an overstatement, I have never viewed it as such. On the contrary, Signal Integrity has been fortunate to have capable caretakers ready to reach forward to what's ahead. And this same spirit will carry us into the next generation of SI, beyond the "then" and into the "now".

Donald Telian, SiGuys - Guest Blogger 6/15/2020
