Characterisation to silicon
To get the most out of design for manufacturability, a design environment
needs much more than just post-processing of design files, as Dwayne Burek
explains
Until now, design for manufacturability (DFM) in silicon design has meant
post-processing the GDSII file with a variety of DFM fixes, followed by
resolution enhancement techniques (RET) such as optical proximity correction.
This is no longer viable for chips created at the 65nm technology node and
below. To achieve acceptable performance and yield goals, the entire design
flow has to become aware of the needs of DFM, and this affects
characterisation, implementation, analysis and optimisation, and –
ultimately – sign-off verification.
A true DFM-aware design environment must be able to model all systematic and
statistical effects during implementation, analysis, optimisation and
verification.
Requirements
Design tools have traditionally been rules-based, but today, these rules no
longer reflect the underlying physics of the fabrication process. Even if the
design tools meticulously follow all of the rules provided by the foundry,
the ensuing chips can still exhibit parametric (or even catastrophic)
problems.
Tools now need to employ model-based techniques: for example, modelling how
light passes through the photomask and any lenses, how it reacts with the
photoresist chemicals on the surface of the silicon chip and how the
resulting structures are formed, then feeding that information back into the
design environment.
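As a rough illustration of what "model-based" means here, the following
Python sketch stands in for the optical model with a Gaussian blur plus a
resist threshold; the sigma and threshold values are invented calibration
numbers, and production OPC models are far more sophisticated.

import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_print(mask, blur_sigma=2.0, resist_threshold=0.5):
    """Predict the printed pattern for a binary (0/1) mask layout.

    blur_sigma approximates optical blur in pixels; resist_threshold is
    the intensity above which the resist is assumed to develop. Both are
    hypothetical calibration values.
    """
    aerial_image = gaussian_filter(mask.astype(float), sigma=blur_sigma)
    return (aerial_image > resist_threshold).astype(int)

# Feed the result back: compare drawn versus predicted-printed geometry.
mask = np.zeros((64, 64), dtype=int)
mask[30:34, 10:54] = 1                     # a thin drawn wire
printed = simulate_print(mask)
deviating = int(np.abs(printed - mask).sum())
print(f"pixels deviating from drawn layout: {deviating}")

Even this toy model reproduces effects such as line-end pull-back that a
purely rules-based flow would miss.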
Characterisation
In a reversal of the traditional approach, the design environment now
has to start with characterisation of the process. This takes the various
files associated with the standard cell libraries – along with the process
design kit and DFM data and models provided by the foundry – and then
characterises process variations and lithographic effects to create
statistical models for timing, power, noise and yield. As part of this
process, a variety of technology rules are automatically extracted and/or
generated for use by downstream tools.
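A minimal sketch of what such statistical characterisation might look like,
assuming a toy first-order delay model; the parameter names, sigmas and
sensitivities below are illustrative stand-ins, not foundry data.

import numpy as np

rng = np.random.default_rng(0)
N = 10_000

# Hypothetical process variations (zero-mean).
d_vth = rng.normal(0.0, 0.03, N)    # threshold-voltage shift (V)
d_leff = rng.normal(0.0, 0.02, N)   # normalised channel-length variation

# Toy linear sensitivity model for one cell's delay (ps).
nominal_delay = 50.0
delay = nominal_delay * (1 + 4.0 * d_vth + 6.0 * d_leff)

# The statistical timing model handed to downstream tools: mean + sigma.
print(f"delay mean = {delay.mean():.2f} ps, sigma = {delay.std():.2f} ps")

In the same spirit, power, noise and yield figures per cell would be
characterised as distributions rather than single corner numbers.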
This characterisation also provides key yield data for individual cells,
taking into account chemical mechanical polishing (CMP) effects and using
techniques like critical-area analysis (CAA) to account for random particle
defects. By knowing the delay or leakage sensitivity of each cell, for
example, the implementation tool can optimise critical timing paths by
avoiding such cells or by altering their placement to minimise such
sensitivity.
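Critical-area analysis typically feeds a Poisson-style yield model,
Y = exp(-Σ Di·Ai), where Di is the density of defect type i and Ai the area
in which such a defect would be fatal. A minimal sketch, with made-up defect
densities and critical areas:

import math

# (defect density per cm^2, critical area in cm^2) per defect type;
# the numbers are illustrative only.
defect_data = {
    "metal1 shorts": (0.10, 0.004),
    "metal1 opens":  (0.05, 0.006),
    "via failures":  (0.20, 0.001),
}

lam = sum(d * a for d, a in defect_data.values())  # expected fatal defects
yield_estimate = math.exp(-lam)
print(f"random-defect yield estimate: {yield_estimate:.4f}")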
Implementation
Conventional synthesis engines perform their selections and optimisations
based on the timing, area and power characteristics of the various cells in
the library, coupled with the design constraints provided by the designer. In
a DFM-aware environment, the synthesis engine additionally takes into account
yield and variability characteristics (process and lithographic) of the cells
forming the library, resulting in a design that is more robust and less
sensitive to process variations.
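As a sketch of how such a selection might weigh yield and variability
alongside the traditional metrics, consider the following; the cells,
numbers and cost weights are entirely hypothetical.

# Each cell: name, delay (ps), power (uW), delay sigma (ps), yield score.
cells = [
    ("NAND2_X1_dense", 42.0, 1.1, 6.0, 0.92),
    ("NAND2_X1_litho_friendly", 45.0, 1.2, 2.5, 0.99),
]

def cost(delay, power, sigma, yld, w=(1.0, 0.5, 2.0, 30.0)):
    """Lower is better; variability and yield loss are penalised."""
    return w[0]*delay + w[1]*power + w[2]*sigma + w[3]*(1.0 - yld)

best = min(cells, key=lambda c: cost(*c[1:]))
print(f"selected cell: {best[0]}")

Here the litho-friendly variant wins despite its slower nominal delay,
because its lower sigma and higher yield more than compensate.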
The problem is that every structure in the design is affected by its
surrounding environment, so this requires the placement tool to be
lithography-aware and to heed the limitations and requirements of the
downstream RET tools.
Analysis
Traditional design environments have been based on worst-case analysis
engines, such as static timing analysis (STA), which assumes the worst-case
delays for the different paths. STA assumes, for example, that all of the
delays forming a particular path are simultaneously at their minimum or
maximum values, which is both unrealistic and pessimistic. To address these
issues, a DFM-aware design
environment must employ statistical-based approaches using, for example, a
statistical static timing analyser (SSTA).
In traditional STA, the most critical path is the one that affects the
circuit delay the most; i.e. the one with the most negative slack. By
comparison, in DFM-aware SSTA the most critical path is the one with the
highest probability of affecting the circuit delay the most. Therefore, the
SSTA optimisations must be based on the paths with the greatest likelihood of
becoming the limiting factor.
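A Monte Carlo sketch of the contrast, assuming independent Gaussian stage
delays (real SSTA engines propagate distributions analytically and model
correlations); the path shapes and numbers are invented for illustration.

import numpy as np

rng = np.random.default_rng(1)
N = 100_000

# Two hypothetical paths: (mean, sigma) per stage, in ps.
path_a = [(30, 1), (30, 1), (30, 1)]   # slower nominal, tight spread
path_b = [(29, 4), (29, 4), (29, 4)]   # faster nominal, wide spread

def sample(path):
    return sum(rng.normal(m, s, N) for m, s in path)

da, db = sample(path_a), sample(path_b)

# Nominal-delay ranking picks path A as critical (90 vs 87 ps);
# SSTA instead asks how likely each path is to limit the circuit.
print(f"P(path B is actually the critical path) = {(db > da).mean():.2f}")

With these numbers, the "faster" path B still turns out to be the limiting
path roughly a third of the time, which a worst-case-only analysis would
never reveal.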
Sign-off verification
The environment must also provide DFM-aware sign-off verification. In this
stage, the DFM-optimised design is passed to a suite of verification engines,
for checks such as design rule checking (DRC) and lithography process checks
(LPC). Once again, all of these engines must analyse and verify the design
for process variations and lithographic effects in the context of timing,
power, noise and yield.
Because many manufacturability issues are difficult to encode as hard and
fast rules, the physical verification environment must also accommodate
model-based solutions. Furthermore, a huge amount of design data needs to be
processed, so the verification solution must be efficient and scalable.
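To make the rules-versus-models distinction concrete, here is a toy
rule-based check of the kind DRC performs, a minimum-spacing test on
rectangles; the geometry and rule value are invented. A model-based LPC step
would replace the hard rule with a simulated printability check, like the
lithography sketch earlier.

from itertools import combinations

MIN_SPACING = 3  # hypothetical minimum metal spacing, in grid units

# Rectangles as (x0, y0, x1, y1).
shapes = [(0, 0, 10, 2), (0, 4, 10, 6), (13, 0, 15, 6)]

def spacing(a, b):
    """Edge-to-edge gap between two non-overlapping rectangles."""
    dx = max(b[0] - a[2], a[0] - b[2], 0)
    dy = max(b[1] - a[3], a[1] - b[3], 0)
    return max(dx, dy) if (dx == 0 or dy == 0) else (dx*dx + dy*dy) ** 0.5

for a, b in combinations(shapes, 2):
    gap = spacing(a, b)
    if gap < MIN_SPACING:
        print(f"violation: {a} vs {b}, spacing {gap} < {MIN_SPACING}")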
DFM-aware design
Magma’s QuickCap NX and SiliconSmart DFM engines can provide a full
model-based characterisation environment for timing, power, noise and yield,
including support for lithographic and process variation effects (see
figure 2).
The DFM-aware implementation, analysis and optimisation comes from the Talus
platform, which employs DFM-aware engines such as Talus DFM, Quartz SSTA,
Quartz RC (variable-aware parasitic extraction) and the Quartz DRC-litho
sign-off engine.
These tools use a unified data model and all of the implementation, analysis
and optimisation engines have immediate and concurrent access to exactly the
same data. At the same time as the router is laying down a track, for
example, the RC parasitics are being extracted, delay, power, noise and yield
calculations are being performed, the signal integrity of that route is being
evaluated, and the router is using this data to automatically make any
necessary modifications.
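One way to picture such a unified data model is an in-memory design database
that notifies incremental analysis engines on every edit; the sketch below is
a purely hypothetical structure, not Magma's implementation.

# Hypothetical unified data model with incremental-analysis callbacks.
class DesignDatabase:
    def __init__(self):
        self.routes = []
        self.observers = []  # analysis engines sharing the same data

    def subscribe(self, engine):
        self.observers.append(engine)

    def add_route(self, route):
        self.routes.append(route)
        for engine in self.observers:
            engine.on_edit(self.routes, route)  # sees the edit immediately

class ToyExtractor:
    def on_edit(self, routes, route):
        # Stand-in for RC extraction: capacitance scales with wire length.
        print(f"extracted C for route of length {route['length']}")

db = DesignDatabase()
db.subscribe(ToyExtractor())
db.add_route({"net": "clk", "length": 120})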
By integrating DFM into the implementation flow, the design iterations needed
by separate tools are no longer necessary, and any design decisions or
trade-offs are made within the context of the whole design.
After the design has been completed, automated DFM-aware sign-off
verification prior to tape-out can be performed using the Quartz
DRC/LVS/Litho engines.
Conclusion
To achieve acceptable performance and yield goals at the 65nm technology node
and below, the entire design flow must become DFM aware. This means having
DFM-aware characterisation, implementation, analysis and optimisation as well
as DFM-aware sign-off verification.
About the Author:
Dwayne Burek is Product Director for the Design Implementation Business Unit
at Magma.
http://www.cieonline.co.uk/cie2/articlen.asp?pid=1486&id=15808