Author: Donald Telian, SiGuys - Guest Blogger
Day one at my
first engineering job and my manager is explaining the problem I need to
solve. Signals need to transmit in a new configuration at a faster data rate than before, and it isn’t clear if it will work. Little did I know how many times
I’d get handed that same problem in the decades to come. But this time was different. Fresh out of college, my mind was full of
circuit theory, network nodal analysis and differential equations, so I proposed
a solution a bit beyond my mathematical abilities. Somewhat amused, he smiled and said: “Are you
kidding? We use simulators for that.”
Integrity Engineers use simulators and models like carpenters use hammers and
nails. These are the tools of our trade,
and with practice we discover how to use them to build things. Our learning curve involves juggling parameters,
nuances, inputs and outputs. Simulators are also quirky, so bug fixes and software updates are the norm; the deeper mathematics has been abstracted behind a user interface, so there’s no way for users to fix what has already been compiled.
Yet when everything
is working right, simulators open the door to explore and tackle electronics’
greatest challenges. Overstated? Maybe.
If you use a Flight Simulator correctly, you arrive at your destination
without crashing your airplane. But
you’re not actually there. When you use
an SI Simulator correctly you gain insight into how to design and build your
product and, unlike the airplane, you are largely “there”. The ability to hunt for solutions across manufacturing variables prior to hardware is powerful and has consistently enabled us to figure out the next great thing.

Yet simulation, by definition, is not the real thing. So
how much can I trust it? How can I get
good at it? Is there a way to figure out
if simulation output is correct? …or
even reasonable? And to what degree? Furthermore, how can I perform meaningful
simulations when models are bad or not available?
A Good Simulator

Math is precise, and simulators use math. Lots
of it. So then why do simulation outputs
differ tool by tool? Under the hood,
simulators deploy differing techniques to trade off performance, accuracy, convergence,
and throughput – in addition to using unique configuration and interface paradigms. Because simulators compete on the open
market, there’s a good amount of variety – be it “secret sauce” or “snake oil”
– lubricating how they function. Looking
from the outside, I typically characterize simulators as “cold” (conservative),
“hot” (optimistic), or somewhere in between.
Experience and Measurement
Correlation help you understand where you are in that spectrum. And Bogatin’s
Rule #9 helps you stay in the realm of reality.
So how can I
recognize a “good” simulator? Given the
rapid changes in electronics I’ve learned to judge them based on two factors:
(1) how the simulator functions now, and (2) how it will function in the future. While (1) can be determined with a good
evaluation and verified references, (2) requires you to examine the vision,
skills, investment, support systems, and staff size of a potential vendor. That said, as with other realms of technology,
don’t underestimate the energy and breakthroughs that happen at unproven startups. And while it seems exotic to be at a big
company using “in-house” tools, history suggests that non-EDA companies
eventually decide making and maintaining tools is not their core competency, causing
those tools to become unsupported and out-of-date.
SI simulation is performed using the four methods shown in Figure 1. There was a day when SPICE was the only solution, but
in 2004 faster Convolution and Statistical computation methods arrived on the
open market. These techniques, combined
with the SerDes
equalization capabilities that motivated them, represent the most
significant SI advancements I have witnessed – providing a way to extend the
lifespan of copper interconnect on PCBs.
Figure 1 reveals that the primary difference between the techniques is the number of bits you can
simulate in a reasonable amount of time.
With the increasing relevance of BER prediction, the number of bits
simulated becomes an important differentiator.
While some would argue the accuracy trade-offs between the methods, I
view all methods as useful in providing solutions for certain tasks. Most simulators implement all four methods, and
it’s important to know how to use them all because models often dictate which
type of simulation must be used.
Figure 1: Four Common Types of Active System SI Simulation
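To see why bit count is such a differentiator for BER prediction, consider a back-of-the-envelope estimate. This is a minimal sketch assuming Gaussian noise and the standard Q-factor approximation; the function names are mine, not from any particular simulator:

```python
import math

def ber_from_q(q: float) -> float:
    """BER estimate for Gaussian noise: BER = 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q / math.sqrt(2.0))

def bits_for_confidence(ber: float, errors: int = 3) -> float:
    """Rough bit count needed to observe `errors` errors at a given BER."""
    return errors / ber

# A Q-factor near 7 lands around 1e-12 BER.  Verifying that bit-by-bit
# would take trillions of simulated bits, which is why the Statistical
# and Convolution methods matter so much.
target_ber = ber_from_q(7.0)
```

At a target BER of 1e-12, observing even a handful of errors requires on the order of 10^12 bits, far beyond what transistor-level SPICE can simulate in reasonable time.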
An early method, Peak Distortion Analysis (PDA), extended the usefulness of SPICE yet
was mostly a stopgap solution until Convolution arrived. PDA parses the interconnect to determine a
worst-case bit pattern, providing the ability to study performance-limiting
scenarios using a shorter SPICE simulation.
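The core of the PDA idea can be sketched in a few lines. This assumes an LTI channel, NRZ signaling, and a pulse response already sampled once per unit interval; the values below are hypothetical, not from any real channel:

```python
import numpy as np

def pda_worst_case(pulse_ui: np.ndarray, main_idx: int):
    """Peak Distortion Analysis sketch.

    pulse_ui: single-bit (pulse) response sampled once per unit interval.
    main_idx: index of the main cursor sample.
    Returns the worst-case high-level amplitude at the sampling point and
    the +/-1 aggressor pattern that produces it.
    """
    isi = np.delete(pulse_ui, main_idx)      # all cursors except the main one
    worst = pulse_ui[main_idx] - np.sum(np.abs(isi))
    pattern = np.where(isi >= 0, -1, 1)      # each bit chosen so its ISI subtracts
    return worst, pattern

# Hypothetical pulse response: one pre-cursor, main cursor, three post-cursors
pulse = np.array([0.05, 1.0, 0.20, -0.10, 0.03])
worst, bits = pda_worst_case(pulse, main_idx=1)  # worst = 1.0 - 0.38 = 0.62
```

The returned pattern is the worst-case bit sequence to drive in a short SPICE simulation, which is precisely the shortcut PDA provides.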
PDA lives on in most simulators yet has largely been supplanted by the faster methods described above.

Regardless of the type of analysis chosen, as I discovered my first day on the job,
simulators enable insight that would otherwise be inaccessible. But simulators by themselves are like cars
unable to move until you put in the gas.
Which brings us to our next topic:
Models. Models are the gas that empowers
the simulator to do something useful.
The World of Models

When I begin a Project, the first thing I work on is models.
When the model type is new and/or hard to get, I might even work on
models before the Project starts. Good
models are imperative so, as unglamorous as it is, step one is to procure and
validate them. As one engineer put it,
“Models are like underwear. No one sees
them and you don’t want to think about them, but they are essential and go on
first.” Sorry for that imagery, but it’s true.

Some vendors don’t have models for their products while others keep them locked up
in Fort Knox (translation: they are difficult to acquire
in a reasonable amount of time). My
favorite vendors let you download models from their websites using click-through
license agreements. Figure out which
vendors are protective and slow and act accordingly. And if you’re a model maker, realize that a
great model your customers cannot obtain in a reasonable amount of time is the
same as no model at all.
I once finished a Project one month before a troublesome vendor’s model finally arrived. If that seems impossible, these are important
paragraphs for you. Succeeding at Signal
Integrity Engineering will require you to produce meaningful and actionable
data even when you can’t get a certain model.
The secret: develop a proxy model that is conservative (pessimistic) yet
reasonable (practical) and bounds the behaviors you will see when the model
finally arrives. To make this mental
jump you must move beyond thinking that the device defines the model because,
in practice, the model defines the device.
If you think about it, all devices start that way; behaviors are desired
and expected, yet not built yet.

For active models (Tx/Rx), when you can’t get a model for one end of a signal path the most common solution is to substitute the model from the other end. However, now that all simulators have IBIS
and AMI template models, it’s reasonable to build the missing model yourself using the component’s datasheet. Another good solution is “spec” models. These models capture the boundary behaviors
defined by a Specification and are often used when the only thing you know
about a device is that it is compliant. Figure
2 illustrates SATA spec model behaviors, highlighting voltage (left) and edge
rate (right) ranges for fast, typical, and slow (red, green, and blue,
respectively) corners. If spec models
are not provided with your simulator, they can be built by inserting the Specification’s
characteristics into template or IBIS models.
Figure 2: SATA Spec Model Characteristics,
Voltage (left) and Time (right)
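A corner-based spec-model stimulus of this sort takes very little code to approximate. The corner values below are illustrative placeholders, not actual SATA numbers; substitute the swing and edge-rate ranges from the Specification you are modeling:

```python
import numpy as np

# Illustrative corner values only -- NOT actual SATA numbers.
CORNERS = {              # (swing in V, edge duration in ps)
    "fast":    (0.60, 50.0),
    "typical": (0.50, 100.0),
    "slow":    (0.40, 150.0),
}

def edge_waveform(swing_v: float, edge_ps: float, t_ps: np.ndarray) -> np.ndarray:
    """Linear-ramp rising edge centered at t=0: a minimal spec-model stimulus."""
    ramp = np.clip(t_ps / edge_ps + 0.5, 0.0, 1.0)
    return swing_v * (ramp - 0.5)        # swing centered about 0 V

t = np.linspace(-200.0, 200.0, 401)      # time axis in ps
edges = {name: edge_waveform(v, tr, t) for name, (v, tr) in CORNERS.items()}
```

Sweeping a design against all three corner waveforms is the simulation analog of the fast/typical/slow curves in Figure 2.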
Working without the models you need brings out the creative
side of SI. If you have a large
passive assembly (e.g., multiple cables, connectors, and PCBs) that is built and not performing well, the simplest – if not the most accurate – way to model it may be to measure the end-to-end path using a Vector Network Analyzer (VNA), or to send it to someone who can. This will produce S-parameters which may in
themselves (perhaps converted to TDR) reveal where the problem is. The measurement produces a model you can place
into your system simulation with confidence the nuances of the passives are
modeled correctly. The only downside is that it’s difficult to tolerance that type of model; however, I have adjusted characteristics by changing S-parameter reference impedances.
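The S-parameter-to-TDR conversion mentioned above can be sketched with an inverse FFT. This is a bare-bones illustration on synthetic data; real VNA data also needs DC extrapolation, band-edge windowing, and causality care that is glossed over here:

```python
import numpy as np

def s11_to_step_tdr(f_hz: np.ndarray, s11: np.ndarray):
    """Turn a one-port S11 sweep (uniform grid starting at DC) into a step
    TDR trace via inverse real FFT."""
    df = f_hz[1] - f_hz[0]
    impulse = np.fft.irfft(s11)                        # impulse reflection response
    t = np.arange(impulse.size) / (impulse.size * df)  # time axis in seconds
    return t, np.cumsum(impulse)                       # step response vs time

# Synthetic check: one mismatch, reflection coefficient 0.2, 1 ns round trip
f = np.linspace(0.0, 50e9, 1001)
gamma, t_rt = 0.2, 1e-9
s11 = gamma * np.exp(-2j * np.pi * f * t_rt)
t, tdr = s11_to_step_tdr(f, s11)   # tdr settles near 0.2 after 1 ns
```

On measured data, the location where the step TDR trace departs from the baseline points at where the problem impedance sits along the path.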
What about that connector model that isn’t available? Aside from the ideas of measuring it or switching
to a vendor that has a model, how about constructing a boundary model? A connector is just interconnect, and
interconnect is a series of impedances, propagation delays, and perhaps
couplings. After seeing enough connector models, trends emerge that can be mimicked with a few transmission lines,
ideally parameterized to cover the boundaries (Figures
1 and 2). Again, prove the system works in a conservative scenario and you’ve substantially reduced the risk that it won’t work in the real one. Don’t forget,
“all models are
wrong, but some are useful”. Learn
to make useful models when vendor models cannot be obtained.
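One way to build such a boundary model is to cascade ideal lossless transmission-line sections, each described by an impedance and delay, using ABCD matrices. A sketch under those assumptions; the impedance/delay values are hypothetical, not from any real connector:

```python
import numpy as np

def tline_abcd(z0: float, delay_s: float, f_hz: float) -> np.ndarray:
    """ABCD matrix of one lossless transmission-line section."""
    theta = 2.0 * np.pi * f_hz * delay_s
    return np.array([[np.cos(theta),            1j * z0 * np.sin(theta)],
                     [1j * np.sin(theta) / z0,  np.cos(theta)]])

def s21_mag(sections, f_hz: float, zref: float = 50.0) -> float:
    """|S21| of cascaded (z0, delay) sections in a zref reference system."""
    m = np.eye(2, dtype=complex)
    for z0, d in sections:
        m = m @ tline_abcd(z0, d, f_hz)
    a, b, c, d = m.ravel()
    return abs(2.0 / (a + b / zref + c * zref + d))

# Hypothetical connector boundary model: pin / body / pin impedance profile.
connector = [(42.0, 15e-12), (55.0, 40e-12), (42.0, 15e-12)]
loss_at_5ghz = s21_mag(connector, 5e9)
```

Parameterize the section impedances and delays (say, +/-10%) to sweep the boundaries, which is exactly the conservative-corner exercise described above.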
While you might be inclined to trust a vendor model, be forewarned: they can lead you astray too. Never use anyone’s model
until you qualify
it – particularly S-parameters. Some vendors reliably deliver quality models,
while others are learning how. Expect to participate in the process of ensuring your models are good, and budget time to do so.
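A first pass at qualifying S-parameters can be automated. The sketch below checks two common sanity conditions, passivity and reciprocity; causality checking and passivity enforcement are deeper topics left out here, and the function name is mine:

```python
import numpy as np

def qualify_sparams(s: np.ndarray, tol: float = 1e-6) -> dict:
    """Two quick sanity checks on an S-parameter sweep shaped (nfreq, n, n).

    Passivity:   largest singular value at every frequency must be <= 1.
    Reciprocity: passive interconnect should satisfy S == S-transposed.
    """
    max_sv = max(np.linalg.svd(s[k], compute_uv=False)[0] for k in range(s.shape[0]))
    recip_err = float(np.max(np.abs(s - np.transpose(s, (0, 2, 1)))))
    return {"passive": max_sv <= 1.0 + tol,
            "reciprocal": recip_err <= tol,
            "max_singular_value": float(max_sv)}
```

A model that fails either check will inject energy or asymmetry the real hardware doesn’t have, so flag it back to the vendor before building simulations on top of it.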
The best SI
Engineers learn how to manage the imperfect world of models and produce
meaningful results; results that may not be “perfect” yet are accurate enough
to inform and guide the design process.
Here’s the challenge: know when
your simulation results are “good enough”.
Good enough to be actionable, provide design insight, and remove
risk. As one SI group manager warns:
“I’ve worked with a few SI engineers who were considered technical experts yet
ineffective simply because they got bogged down studying details. These “scientists” studied and analyzed
everything, worked to the nth degree of simulation accuracy, yet were unable to
come up with the answer in a timely fashion (if at all).”
Simulators delegate complex mathematics to computers, allowing us to focus on design tasks. History has proven this partnership’s ability
to advance our understanding and definition of “high-speed”. The practice of Signal Integrity requires
proficiency with simulators, along with an understanding of their strengths and
weaknesses. Simulators require good
models to be useful, and often model development and qualification must be performed
by the SI Engineer. For those who brave
the learning curve the rewards are substantial.
Climb this mountain and solve the next great thing.