True design-for-manufacturability critical to 65-nm design success
Dwayne Burek
EE Times
(11/07/2007 5:29 EST)
True design for manufacturability (DFM) has become more critical at the 65-nm
technology node and below because the critical dimensions of on-chip
structures have shrunk to the point where the same absolute physical variation
can result in a relatively large electrical variation.
At 65 nm and below, lithographic effects become the biggest contributor to
manufacturing variability. The problem is that the features (structures) on
the silicon chip are now smaller than the wavelength of the light used to
create them. If a feature were replicated as-is in the photomask, the
corresponding shape appearing on the silicon would drift farther and farther
from the ideal as feature sizes decrease at each newer technology node.
The way this is currently addressed in conventional design flows is to
postprocess the GDSII file with a variety of resolution enhancement techniques
(RET), such as optical proximity correction (OPC) and phase-shift masks. For
example, the physical design tool modifies the GDSII file by augmenting
existing features or adding new features, known as subresolution assist
features, to obtain better printability. This means that if the tool projects
that the printing process will be distorted in a certain way, it can add its
own distortion in the opposite direction, attempting to make the two
distortions cancel each other out.
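
To make the idea concrete, the following sketch shows rule-based edge biasing
in the spirit of OPC. The Edge representation, the bias table and its numbers
are illustrative assumptions, not any vendor's actual algorithm or file format.

    # Illustrative sketch of rule-based OPC-style edge biasing: each polygon
    # edge is nudged outward or inward by a bias looked up from the local line
    # width and spacing, so the printed shape lands closer to the drawn intent.
    from dataclasses import dataclass

    @dataclass
    class Edge:
        x0: float
        y0: float
        x1: float
        y1: float
        width: float   # local line width (nm)
        space: float   # spacing to nearest neighbor (nm)

    # Hypothetical bias rules: (max_width, max_space, bias_nm).
    BIAS_RULES = [
        (70.0, 90.0, +4.0),   # narrow, dense lines need the largest correction
        (70.0, 1e9,  +2.0),   # narrow but isolated
        (1e9,  1e9,   0.0),   # wide features left alone
    ]

    def bias_for(edge: Edge) -> float:
        for max_w, max_s, bias in BIAS_RULES:
            if edge.width <= max_w and edge.space <= max_s:
                return bias
        return 0.0

    def apply_bias(edge: Edge) -> Edge:
        """Shift a vertical edge horizontally by its bias (simplified geometry)."""
        b = bias_for(edge)
        return Edge(edge.x0 + b, edge.y0, edge.x1 + b, edge.y1,
                    edge.width + b, edge.space - b)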
The issue is that every structure in the design is affected by other
structures in close proximity. That is, if two geometric shapes are created
in the GDSII file and photomask in isolation, these shapes print in a certain
way. But if the same shapes are now located in close proximity to each other,
interference effects between these shapes modify each of the shapes, often in
nonintuitive ways. The results of all these effects are variations in timing,
noise, power consumption and, ultimately, yield.
Manufacturing and yield problems typically fall into four main categories:
catastrophic, parametric, systematic (feature-driven) and statistical
(random). Catastrophic problems, such as a missing via, cause the chip to fail
completely. By comparison, parametric problems leave the chip functioning but
out of its specified range: a 500-MHz device that runs at only 300 MHz, for
example, or a part specified to consume less than 5 W of power that actually
consumes 8 W. The origins of both catastrophic and
parametric problems can be subdivided into systematic (feature-driven)
effects and statistical (random) occurrences.
A true DFM-aware solution has to address each of these problem categories,
which means that it must be able to model all systematic and statistical
effects during implementation, analysis, optimization and verification.
The way to reach acceptable performance and yield goals is to make the entire
design flow, including cell characterization, IC implementation, analysis,
optimization and sign-off, DFM-aware. Within such a flow, manufacturability
issues are understood and addressed at the most appropriate and efficient
step, creating tighter links between design and manufacturing so that design
intent feeds forward to manufacturing, while fab data feeds back to design.
Design tools (particularly the implementation, analysis and optimization
engines) have traditionally been rules-based. That means they were provided
with a set of rules and they analyzed and modified the design to ensure that
none of the rules were violated. In today's ultra-deep submicron
technologies, however, these rules no longer reflect the underlying physics
of the fabrication process. Even if the design tools meticulously follow all
of the rules provided by the foundry, the ensuing chips can still exhibit
parametric (or even catastrophic) problems.
To address these problems, tools now need to employ model-based techniques.
This means that the tools model the way in which the chips will actually be
fabricated. In lithographic simulations, for example, the tools model the way
in which light will pass through the photomasks and any lenses, as well as
how it will react with the chemicals on the surface of the silicon chip and
how the resulting structures will be created.
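
The toy model below illustrates the flavor of such a simulation: the drawn
layout is rasterized, blurred with a Gaussian point-spread function standing
in for the calibrated optical model, and thresholded to approximate the resist
response. The kernel width and threshold are invented for illustration;
production tools use foundry-calibrated optical and resist models.

    # Toy model-based litho check: convolve a rasterized mask with a Gaussian
    # point-spread function, then threshold the aerial image to predict the
    # printed contour.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def predicted_contour(mask: np.ndarray, blur_sigma_px: float = 2.0,
                          resist_threshold: float = 0.35) -> np.ndarray:
        """mask: 2-D array of 0/1 pixels (drawn layout). Returns predicted print."""
        aerial_image = gaussian_filter(mask.astype(float), sigma=blur_sigma_px)
        return aerial_image > resist_threshold

    # A line narrower than the blur radius prints with rounded, shortened ends,
    # exactly the kind of distortion OPC tries to pre-compensate.
    mask = np.zeros((64, 64))
    mask[30:34, 10:54] = 1.0   # a 4-pixel-wide line
    print(predicted_contour(mask).sum(), "pixels predicted vs", int(mask.sum()), "drawn")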
A true DFM-aware design environment begins with DFM-aware characterization.
This involves taking the various files associated with the standard-cell
libraries, along with the process design kit and DFM data and models provided
by the foundry, and then characterizing the libraries with respect to process
variations and lithographic effects to create statistical probability density
functions in the context of timing, power, noise and yield. As part of this
process, a variety of technology rules are automatically extracted and/or
generated for use by downstream tools.
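
A minimal sketch of what such characterization produces is shown below: Monte
Carlo sampling of two assumed process parameters through a linear sensitivity
model yields a delay probability density (summarized here by its mean and
sigma) for a hypothetical cell. The parameter names, sensitivities and numbers
are placeholders, not a foundry PDK format.

    # Sketch of DFM-aware cell characterization via Monte Carlo sampling.
    import numpy as np

    def characterize_delay(nominal_ps: float, n_samples: int = 10_000,
                           sens_L: float = 0.8, sens_Vth: float = 0.5,
                           sigma_L: float = 1.0, sigma_Vth: float = 1.0,
                           rng=np.random.default_rng(0)):
        dL   = rng.normal(0.0, sigma_L,   n_samples)   # channel-length variation (a.u.)
        dVth = rng.normal(0.0, sigma_Vth, n_samples)   # threshold-voltage variation (a.u.)
        delays = nominal_ps + sens_L * dL + sens_Vth * dVth
        return delays.mean(), delays.std()             # mean/sigma summarizing the PDF

    mu, sigma = characterize_delay(nominal_ps=42.0)    # hypothetical NAND2 cell
    print(f"NAND2 delay ~ N({mu:.1f} ps, {sigma:.2f} ps)")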
A true DFM-aware characterization environment also provides yield scoring for
individual cells, taking into account chemical mechanical polishing effects
and using techniques like critical-area analysis to account for random
particulate defects. This allows the model characterization process to
provide both sensitivity and robustness metrics that can be subsequently
exploited by the implementation, analysis and optimization engines. By
knowing the delay or leakage sensitivity of each cell, for example, the
implementation tool can optimize critical timing paths by avoiding such
cells, or by altering their placement to minimize such sensitivity.
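
The yield-scoring part of this can be illustrated with the standard Poisson
limited-yield model, in which the expected fault count is the critical area
weighted by the defect density in each defect-size bin. The areas and
densities below are made-up numbers.

    # Sketch of yield scoring with critical-area analysis:
    # Y = exp(-sum over defect sizes of A_crit(size) * D(size)).
    import math

    def cell_yield(critical_area_um2: dict, defect_density_per_um2: dict) -> float:
        """Poisson limited-yield estimate for one cell."""
        expected_faults = sum(critical_area_um2[s] * defect_density_per_um2[s]
                              for s in critical_area_um2)
        return math.exp(-expected_faults)

    # Hypothetical critical areas (um^2) and defect densities (defects/um^2).
    a_crit = {"small": 0.12, "medium": 0.05, "large": 0.01}
    d0     = {"small": 1e-4, "medium": 3e-5, "large": 5e-6}
    print(f"cell yield score: {cell_yield(a_crit, d0):.6f}")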
Conventional synthesis engines perform their selections and optimizations
based on the timing, area and power characteristics of the various cells in
the library, coupled with the design constraints provided by the designer. In
a DFM-aware environment, the synthesis engine also takes into account each
cell's noise and yield characteristics, as well as the process and
lithographic variability of the cells in the library and the way these
variations affect each cell's timing, power, noise and yield.
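
A hedged sketch of how such a synthesis cost function might look follows; the
cell data, weights and the idea of folding delay sigma and a yield score into
the cost are illustrative assumptions rather than any specific tool's
heuristic.

    # Sketch of DFM-aware cell selection: alongside the usual delay/area/power
    # trade-off, the cost penalizes delay sensitivity and a low yield score.
    from dataclasses import dataclass

    @dataclass
    class CellVariant:
        name: str
        delay_ps: float
        area_um2: float
        leakage_nw: float
        delay_sigma_ps: float   # from DFM-aware characterization
        yield_score: float      # 0..1, from critical-area / litho scoring

    def cost(c: CellVariant, w_delay=1.0, w_area=0.1, w_leak=0.05,
             w_sigma=2.0, w_yield=5.0) -> float:
        return (w_delay * c.delay_ps + w_area * c.area_um2 + w_leak * c.leakage_nw
                + w_sigma * c.delay_sigma_ps + w_yield * (1.0 - c.yield_score))

    variants = [
        CellVariant("NAND2_X1", 45.0, 1.0, 3.0, 4.0, 0.9990),
        CellVariant("NAND2_X2", 38.0, 1.6, 6.5, 2.5, 0.9995),
    ]
    print(min(variants, key=cost).name)   # picks the faster, more robust variant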
With regard to the physical design portion of the flow, as was noted earlier,
every structure in the design is affected by its surrounding environment in
the form of other structures in close proximity, often in nonintuitive ways.
This requires the placement tool to be lithography aware and to heed the
limitations and requirements of the downstream manufacturing RET tools.
Similarly, embedding lithographic simulation capability in the routing engine
allows it to identify patterns that must be avoided and locations where the
layout must be modified to avoid creating lithography hotspots that
downstream RET cannot fix. The combination of lithographic-aware placement
and routing helps minimize the need for postlayout RET, and increases the
effectiveness of any RET that is required.
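
One simple way to picture a litho-aware routing check is pattern matching
against a library of known-bad neighborhoods, as sketched below. The 3x3
bitmap pattern is invented for illustration; real hotspot libraries are
derived from lithographic simulation and are far richer.

    # Sketch of a pattern-based hotspot check a litho-aware router might run:
    # a window of the local layout is matched against known-bad patterns.
    import numpy as np

    HOTSPOT_PATTERNS = [
        np.array([[1, 0, 1],
                  [1, 1, 1],
                  [1, 0, 1]]),   # e.g. a jog squeezed between two dense lines
    ]

    def has_hotspot(layout: np.ndarray) -> bool:
        h, w = layout.shape
        for p in HOTSPOT_PATTERNS:
            ph, pw = p.shape
            for r in range(h - ph + 1):
                for c in range(w - pw + 1):
                    if np.array_equal(layout[r:r+ph, c:c+pw], p):
                        return True   # router should rip up and re-route here
        return False

    tile = np.zeros((8, 8), dtype=int)
    tile[2:5, 3:6] = HOTSPOT_PATTERNS[0]
    print(has_hotspot(tile))   # True: this routing tile needs to be modified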
A true DFM-aware design environment must enable the analysis and optimization
of timing, power, noise and yield effects. First, consider timing. Each
element forming a path through the chip, such as a wire segment, a via or a
cell (logic gate), has a delay associated with it. These delays vary as a function
of process, voltage and temperature. Traditional design environments have
been based on worst-case analysis engines, such as static timing analysis
(STA). STA assumes worst-case delays for the different paths: for example,
that all of the delays forming a particular path are simultaneously at their
minimum or maximum, which is both unrealistic and pessimistic. To address
these issues, a DFM-aware design environment must employ statistical-based
approaches using, for example, a Statistical Static Timing Analyzer (SSTA).
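
The sketch below shows the core arithmetic of an SSTA-style engine under the
simplifying assumption of independent Gaussian arc delays: delays are summed
exactly and combined at path-merge points with Clark's max approximation. The
delay numbers are illustrative.

    # Sketch of SSTA-style delay propagation with Gaussian (mean, sigma) delays.
    import math

    def stat_sum(a, b):
        """(mu, sigma) of the sum of two independent Gaussian delays."""
        return a[0] + b[0], math.hypot(a[1], b[1])

    def stat_max(a, b):
        """Clark's approximation for the max of two independent Gaussian delays."""
        theta = math.hypot(a[1], b[1]) or 1e-12
        alpha = (a[0] - b[0]) / theta
        phi = math.exp(-0.5 * alpha * alpha) / math.sqrt(2 * math.pi)   # normal pdf
        Phi = 0.5 * (1 + math.erf(alpha / math.sqrt(2)))                # normal cdf
        mu = a[0] * Phi + b[0] * (1 - Phi) + theta * phi
        ex2 = ((a[0]**2 + a[1]**2) * Phi + (b[0]**2 + b[1]**2) * (1 - Phi)
               + (a[0] + b[0]) * theta * phi)
        return mu, math.sqrt(max(ex2 - mu * mu, 0.0))

    # Two converging paths into one flop: gate + wire delays in ps (mean, sigma).
    path_a = stat_sum((120.0, 8.0), (35.0, 3.0))
    path_b = stat_sum((110.0, 12.0), (40.0, 4.0))
    arrival = stat_max(path_a, path_b)
    print(f"arrival ~ N({arrival[0]:.1f} ps, {arrival[1]:.1f} ps)")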
A key aspect of a true DFM-aware design environment is that DFM-aware
analysis is of limited use without a corresponding DFM-aware optimization
capability. To perform variability-aware timing optimization, for example,
the DFM-aware SSTA engine must account for sensitivity and criticality.
In traditional STA, the more critical path is the one that affects the
circuit delay the most; that is, the one with the most negative slack. By
comparison, in DFM-aware SSTA, the most critical path is the one with the
highest probability of affecting the circuit delay the most. It is for this
reason that DFM-aware SSTA optimizations must be based on functions such as a
criticality metric that is used to determine the critical paths: the paths
with the most likelihood of becoming the limiting factor.
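
A simple way to estimate such a criticality metric is Monte Carlo sampling of
path delays and counting how often each path sets the worst arrival time, as
in the sketch below (which assumes independent Gaussian paths and hypothetical
statistics).

    # Sketch of a criticality metric: each path's criticality is the fraction
    # of Monte Carlo samples in which it is the limiting path.
    import numpy as np

    def criticality(path_mu, path_sigma, n_samples=100_000,
                    rng=np.random.default_rng(1)):
        mu = np.asarray(path_mu)
        sigma = np.asarray(path_sigma)
        samples = rng.normal(mu, sigma, size=(n_samples, len(mu)))
        winners = samples.argmax(axis=1)   # limiting path in each sample
        return np.bincount(winners, minlength=len(mu)) / n_samples

    # Path A has the worse mean arrival time, but path B's larger sigma makes
    # it the limiter in a significant fraction of outcomes.
    print(criticality(path_mu=[155.0, 150.0], path_sigma=[3.0, 12.0]))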
In addition to timing analysis and optimization, all of the other analysis
and optimization engines (leakage power, noise and yield) must also employ
variability-aware statistical techniques to efficiently account for
variability. Using these techniques, it is possible to make the design more
robust and less sensitive to variations, thereby maximizing yield throughout
the lifespan of the device.
Lastly, the environment must provide DFM-aware sign-off verification. In this
stage, the DFM-optimized design is passed to a suite of verification engines
for checks such as design rule check (DRC) and lithography process check
(LPC). Once again, all of these engines must analyze and verify the design
with respect to process variations and lithographic effects in the context of
timing, power, noise and yield. Because many manufacturability issues are
difficult to encode as hard-and-fast rules, the physical verification
environment must accommodate model-based solutions. Furthermore, a huge
amount of design data needs to be processed, so the verification solution
must be efficient and scalable.
A key requirement of a true DFM design flow is that it employs a unified data
model and all of the implementation, analysis and optimization engines have
immediate and concurrent access to exactly the same data. What this means in
real terms is that at the same time as the router is laying down a track, for
example, the RC parasitics are being extracted; delay, power, noise and yield
calculations are being performed; the signal integrity of that route is being
evaluated; and the router is using this data to automatically and invisibly
make any necessary modifications.
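
The following sketch conveys the unified-data-model idea with an observer
pattern: the analysis engines subscribe to one shared design database, so a
routing change is visible to them the moment it is made. The class and method
names are invented for illustration.

    # Sketch of a unified data model: router and analyses share one database,
    # and every routing change immediately triggers incremental updates.
    from typing import Callable, List

    class DesignDatabase:
        def __init__(self):
            self._observers: List[Callable[[str], None]] = []
            self.routes = {}

        def register(self, callback: Callable[[str], None]):
            self._observers.append(callback)

        def add_route(self, net: str, track):
            self.routes[net] = track
            for notify in self._observers:   # analyses see the change at once
                notify(net)

    db = DesignDatabase()
    db.register(lambda net: print(f"extract RC parasitics for {net}"))
    db.register(lambda net: print(f"update statistical timing/noise/yield on {net}"))
    db.add_route("clk_net", track=("M3", 12))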
By integrating DFM within the implementation flow, potential design
iterations caused by separate point-tool approaches are eliminated. Any
design decisions or tradeoffs are done within the context of the whole
design. Thus, any core improvements, such as area reduction and dynamic and
static power reduction, are immediately accessible, and designers can ensure
that potential DFM consequences do not interfere with or degrade those
benefits. After the design has been completed, automated DFM-aware sign-off
verification prior to tapeout can be performed using the DRC/LVS/litho
engines.
To achieve acceptable performance and yield goals at the 65-nm technology
node and below, the entire design flow must become DFM aware. This includes
DFM-aware characterization; DFM-aware implementation, analysis and
optimization; and DFM-aware sign-off verification. A true DFM-aware
environment is one that accounts for process variability and lithographic
effects in the context of timing, power, noise and yield at every stage of
the flow. This begins with the characterization of the cell library,
continues through implementation, analysis and optimization and ends with
sign-off verification.
http://www.eetimes.com/news/design/showArticle.jhtml?articleID=202803596&pgno=1