Author: Donald Telian, SiGuys - Guest Blogger
It's early February, there's over a foot of snow outside my office, and the power is out. A good day to think about Power Integrity. Power matters. But why here, in a series on Signal Integrity? Good question.
Power Integrity (PI) and Signal Integrity (SI) are intimately intertwined. Here are a few reasons why:
1. PI and SI problems cause similar intermittent issues in systems
2. PI problems can cause SI problems, and vice versa
3. Because of 1 and 2 above, the problems are often solved by the same engineer
4. Because of 1, 2, and 3 above, SI and PI tools tend to be developed by and marketed to the same people
However, my favorite explanation came from one of the inventors of HSPICE. He quoted Seymour Cray (high-speed signaling pioneer and the visionary behind Cray supercomputers) as saying that someday it will all come down to our ability to deliver power to our ICs because, if you think about it, a high-speed signal is really just the clean delivery of power from one component to another. Advice from a guy who lived it.
The Origins of Power Integrity
Obviously, every design does manage power in some way. Every product design quantifies its power requirements and generates, distributes, stores and decouples the necessary voltages to within certain tolerances. By definition, this has been going on since the beginning of electronics – with much of the challenge handled by design rules and experience.
Over time, integration combined with high-frequency switching forced an incredible amount of power to move quickly through a very small space. Heat sinks helped dissipate power-related heat that didn’t fit in that small space, but that’s more of a “DC-ish” type of problem. In time an “AC-ish” problem emerged, as low-voltage circuits required consistent voltage levels to be held to tight tolerances. So what’s the mechanism for stabilizing voltage fluctuation amidst substantial and increasing high-frequency switching currents?
This high-frequency challenge forced us to think about real-time “Power Distribution Networks” (PDNs); power-delivery circuits that must present a low impedance from power source to IO buffer over a wide frequency range. Around the year 2000, this concept began to work its way around the electronics industry. I was fortunate to be working in the EDA field on a team that released the first open-market tools to address this PDN challenge, and even recall the conference call when I suggested the term “Power Integrity” (PI). It carried instant and clear meaning, as we were all people who normally worked on “Signal Integrity”. While we articulated the PI challenge and the methodology required to address it correctly, the tools didn’t immediately find their way into every team’s design process. “PI”, like its older cousin “SI”, would need time to mature.
Power Integrity Methodologies
As PI has matured, additional tools, design rules, consultants, and books have emerged to address the PI challenge – some with competing approaches. Many in the industry, including myself, liken this transitional time to the way SI practices slowly solidified – yet about 20 years behind. That leaves us somewhat in the “wild-wild-west” era of Power Integrity, with an array of tools and opinions. So hang in there, because no doubt a clearly-defined and easy-to-understand-and-implement approach will emerge for the mainstream. In the meantime, the different approaches have been beneficial for the edges of the design space: low-volume/high-cost and high-volume/low-cost product design. Engineers working on these types of products are good at pioneering solutions that are later implemented in tools and methodologies for the rest of us.
The three methodologies currently in use to manage Power Integrity include:
Design Rules and Guard-banding. This is still the dominant methodology, and it has been with us for some time. Components specify their voltage tolerance and advise us on decoupling capacitor values and placement. Power supplies specify their voltages, tolerances, and load capacity. And many SI Methodologies budget guard-bands for “power effects” in their timing and/or eye margins. These methods continue to work well in numerous situations and design flows.
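Why do capacitor values and placement matter so much? A real decoupling capacitor is only “capacitive” up to its self-resonant frequency; above that, its parasitic inductance (ESL) dominates and its impedance rises again. The sketch below is my illustration, not from the article, and the 100 nF / 10 mΩ ESR / 1 nH ESL values are assumed, typical-looking numbers for a small MLCC:

```python
import math

def cap_impedance(f_hz, c_farads, esr_ohms, esl_henries):
    """|Z| of a decoupling cap modeled as a series R-L-C (ESR + ESL)."""
    x_c = 1.0 / (2 * math.pi * f_hz * c_farads)   # capacitive reactance
    x_l = 2 * math.pi * f_hz * esl_henries        # inductive reactance (ESL)
    return math.sqrt(esr_ohms**2 + (x_l - x_c)**2)

# Assumed illustrative part: 100 nF MLCC, 10 mOhm ESR, 1 nH ESL
c, esr, esl = 100e-9, 0.010, 1e-9
f_res = 1.0 / (2 * math.pi * math.sqrt(esl * c))  # self-resonant frequency
print(f"self-resonance: {f_res/1e6:.1f} MHz")     # ~15.9 MHz
for f in (1e6, f_res, 100e6):
    z = cap_impedance(f, c, esr, esl)
    print(f"{f/1e6:8.1f} MHz -> |Z| = {z*1000:.1f} mOhm")
```

At self-resonance the impedance bottoms out at the ESR; well below or above it the capacitor is far less effective, which is why PDNs combine multiple capacitor values, and why placement (which adds mounting inductance) matters.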
PDN Target Impedance. This methodology separates the PI and SI problems, and solves them independently. Based on dynamic current consumption and component voltage tolerances, a “target impedance” (R=V/I) is determined. The PCB’s PDN is meshed and populated with sufficient decoupling capacitors to maintain this impedance across a wide frequency range. The assumption is that once your PDN is stabilized, signals can be delivered cleanly and SI analysis can proceed as it always did (with ideal power supplies and guardbands). This PI methodology was first published in 1994 at EPEP by Larry Smith – 3 years before I articulated the emerging practice of “Signal Integrity Engineering”.
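The arithmetic behind target impedance is simple: divide the allowed ripple voltage by the expected transient current. A minimal sketch, where the 1.0 V rail, 5% ripple budget, and 10 A transient are my illustrative assumptions, not figures from the article:

```python
def target_impedance(v_rail, ripple_pct, i_transient_a):
    """Z_target = allowed ripple voltage / dynamic (transient) current."""
    return (v_rail * ripple_pct / 100.0) / i_transient_a

# Assumed example: 1.0 V rail, 5% ripple budget, 10 A transient step
z = target_impedance(1.0, 5.0, 10.0)
print(f"Z_target = {z*1000:.0f} mOhm")  # 5 mOhm
```

The hard part, of course, is not this division but holding that few-milliohm impedance from DC up through the frequencies where the ICs actually draw current.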
SI/PI Co-Design. Thanks to ever-increasing compute power, various tools can now extract and simulate the complete SI and PI interaction, aiming to solve both problems simultaneously. By definition, this methodology must include a significant portion of a design to be successful. An example would be simulating a x16 DRAM implementation: all 16 data bits and their strobes amidst a 3D extraction of all its PCB routes and power structures. This assumes the DRAM vendor has provided a “power-aware” model of its internal power structures along with IO models that not only connect to that internal PDN but also capture how transistor behavior changes with power fluctuation. No simple task.
Given this backdrop of the maturing PI landscape, what is happening in practice?
Power Integrity, in Perspective
So how big is the issue? Does every design need to manage and simulate its PI? How does this impact my SI simulation?
No doubt, finding the correct answer to these questions involves assessing how close you are to the “edges of the design space” I mentioned earlier. At this point in time, larger companies tend to have a verification step that executes the PDN Target Impedance methodology prior to fabrication. And certainly purveyors of the very large ICs and FPGAs do a tremendous amount of PI work to determine how their components must be decoupled inside and outside the chip. And as always, the very high-volume product designs deploy sophisticated PI methodologies to minimize component count, PCB layers, assembly/test costs, etc.
While I’m certainly interested in studying and modeling the complete SI/PI interaction, I’m still waiting to see something published that makes a compelling case for it. Yes, of course non-ideal power changes a signal – we’ve known and simulated that for decades. What I’m waiting to see is the borderline application that requires it. History suggests that if a component requires a complex and expensive methodology to get designed in, that component does not succeed in the open market. If you’ve found a compelling scenario here, please share it in the Comments section below.
Solving PI Problems in Hardware
The most famous of all SI/PI design problems is “ground bounce”, and/or its cousin “voltage droop”. This happens when certain data patterns switch so much current through an IC’s (under-designed and inductive) power structure that the reference levels (voltage or ground) become unstable (V=Ldi/dt), adversely affecting the performance of other IOs sharing that voltage rail. These temporary voltage shifts can cause a timing error (due to a very slow edge) or the transmission of a false logic level (i.e., a static “0” “bounces” high enough to be perceived as a “1” at its receiver). As shown in Figure 1, if you experience an intermittent failure that is data-dependent, the problem is either ground bounce or crosstalk – depending on whether the data in question interact within an IC or on the PCB, respectively.
Figure 1 is helpful for isolating sporadic Signal Integrity (SI) issues in hardware. The flowchart presumes you have identified the problematic signal(s) but have not yet determined their root cause. In general, crosstalk and ground bounce problems (at left) are intermittent, while the single-line transmission problems (at right) occur every time a signal switches. Though I first published this flowchart more than 20 years ago, it is still relevant today. The good news is that over the years we’ve added capabilities to adapt a driver’s edge rate and strength (a la DDRx) and equalization (a la Serial Links), increasing our ability to fix these types of problems in software.
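To put rough numbers on the V = L·di/dt mechanism behind ground bounce, here is a small sketch; the 2 nH shared inductance, 16 outputs, 20 mA per output, and 1 ns edge are my illustrative assumptions, not measurements from the article:

```python
def ground_bounce(l_henries, di_amps, dt_seconds):
    """Inductive bounce voltage: V = L * di/dt."""
    return l_henries * di_amps / dt_seconds

# Assumed scenario: 2 nH shared ground path, 16 outputs
# each switching 20 mA simultaneously in 1 ns
n_outputs, di_per_output, dt = 16, 0.020, 1e-9
v = ground_bounce(2e-9, n_outputs * di_per_output, dt)
print(f"bounce = {v:.2f} V")  # 0.64 V
```

Even these modest numbers produce well over half a volt of bounce, easily enough to push a static “0” past a receiver’s threshold on a low-voltage rail, which is why data-pattern-dependent failures point so strongly at this mechanism.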