VDEC D2T Symposium

June 29th (Tue.), 2010, 10:00-18:30
Takeda Hall, 5th floor of the Takeda Building, The University of Tokyo


Abstracts

Cost-Effective Test Methodology for Analog and Mixed-Signals in SoCs
Mohamed Abbas (University of Tokyo)
Testing of analog and mixed-signal (AMS) circuits in SoCs is not yet as mature as testing of the purely digital parts. Although research in this area started long ago, test methods for analog circuits are still mostly ad hoc. The difficulty of accurately manipulating analog signals and the lack of accurate, practical fault models and analog ATPG are among the challenges in testing AMS systems. The migration from purely analog to digitally assisted analog design could be a breakthrough, allowing the analog part to be tested through the digital complement of the design.
As part of the research activities in D2T-VDEC, we introduced a signature-based test for digitally assisted analog designs, including automatic analog test pattern generation (AATPG) for a digitally assisted adaptive equalizer. In this talk, I will explore our test methodology using two application examples, a digitally assisted adaptive equalizer and a digitally calibrated pipelined ADC, focusing more on the latter application.
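For illustration only, the following sketch conveys the general flavor of a signature-based test of a digitally calibrated pipelined ADC: rather than measuring analog signals directly, it checks where the digital calibration loop settled against bounds derived from a fault-free design. The coefficient names and bounds are hypothetical, not the actual D2T-VDEC method.

```python
# Illustrative sketch of a signature-based test, not the authors' actual
# method: it assumes the digital calibration coefficients of a
# digitally-calibrated pipelined ADC are observable after calibration
# converges, and that golden bounds for each coefficient are known from
# fault-free simulation. All names below are hypothetical.

GOLDEN_BOUNDS = {  # per-coefficient (low, high) ranges for a fault-free design
    "stage1_gain": (0.95, 1.05),
    "stage2_gain": (0.95, 1.05),
    "stage1_offset": (-0.02, 0.02),
}

def signature_test(calibration_codes):
    """Pass/fail the analog front end by checking where the digital
    calibration loop settled, instead of measuring analog signals."""
    failures = [name for name, value in calibration_codes.items()
                if not (GOLDEN_BOUNDS[name][0] <= value <= GOLDEN_BOUNDS[name][1])]
    return ("pass", []) if not failures else ("fail", failures)

# Example: a stage-1 gain defect pushes its calibration code out of range.
verdict, bad = signature_test({"stage1_gain": 1.12,
                               "stage2_gain": 1.01,
                               "stage1_offset": 0.0})
print(verdict, bad)  # fail ['stage1_gain']
```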

Board-Level Fault Diagnosis Methods to Target No-Trouble-Found Failures
Krishnendu Chakrabarty (Duke University)
Increasing integration densities and high operating speeds are leading to subtle manifestations of defects at the board level, resulting in the confounding problem of "No Trouble Found" outcomes during diagnosis. New board-level diagnosis strategies are needed to address this problem and to improve product quality. In this talk, the speaker will show how Bayesian inference can form the basis for a new board-level diagnosis framework that identifies, with high confidence, faulty devices or faulty modules within a device on a failing board. Bayesian inference offers a powerful probabilistic method for pattern analysis, classification, and decision making under uncertainty. The inference technique is applied by first generating a database of fault syndromes, obtained using fault-insertion test at the module pin level on a fault-free board, and then using the database together with the observed erroneous behavior of a failing board to infer the most likely faulty device.
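As a rough illustration of the inference step (not the speaker's actual framework), the sketch below scores candidate faulty devices against an observed pass/fail syndrome under a naive-Bayes independence assumption; the syndrome database and device names are hypothetical.

```python
# A minimal sketch of Bayesian fault inference, under simplifying assumptions
# not stated in the abstract: each fault syndrome is a vector of binary
# pass/fail outcomes, outcomes are treated as conditionally independent given
# the faulty device (naive Bayes), and per-device fail probabilities were
# estimated from the fault-insertion database.

import math

# Hypothetical database: P(syndrome bit = fail | device is faulty), learned
# from fault-insertion test at the module pin level on a fault-free board.
FAIL_PROB = {
    "device_A": [0.9, 0.8, 0.1],
    "device_B": [0.1, 0.7, 0.9],
    "device_C": [0.5, 0.1, 0.2],
}
PRIOR = {d: 1.0 / len(FAIL_PROB) for d in FAIL_PROB}  # uniform prior

def posterior(observed):  # observed: list of 0 (pass) / 1 (fail)
    """Return P(device is faulty | observed syndrome) for each candidate."""
    log_post = {}
    for dev, probs in FAIL_PROB.items():
        ll = sum(math.log(p if o else 1.0 - p) for p, o in zip(probs, observed))
        log_post[dev] = math.log(PRIOR[dev]) + ll
    z = max(log_post.values())
    weights = {d: math.exp(v - z) for d, v in log_post.items()}
    total = sum(weights.values())
    return {d: w / total for d, w in weights.items()}

print(posterior([1, 1, 0]))  # device_A emerges as the most likely culprit
```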
In the second part of the talk, a generic fault-diagnosis method based on an error-flow dictionary will be presented to identify the root cause of functional failures on a chip or board. Error propagation mimics the actual dataflow in a circuit and thus reflects the native (functional) mode of circuit operation. In contrast to conventional fault syndromes, error flow captures failure information in terms of circuit functionality, which significantly facilitates the diagnosis of functional failures. In the proposed diagnosis procedure, error flows are first learned from a good circuit by intentionally inserting faults; the root cause of a failing circuit is then determined by comparing the pre-learned error flows with the error flow observed from the failing circuit. The similarity of two error flows is evaluated from the length of their longest common subsequence, as in string matching. Results will be presented for an industrial communication circuit.
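The longest-common-subsequence comparison itself is standard; a minimal sketch, with a hypothetical dictionary of pre-learned error flows, might look as follows.

```python
# A small sketch of the similarity measure named in the abstract: two error
# flows are treated as sequences (here, strings of module identifiers along
# the propagation path) and scored by the length of their longest common
# subsequence. The candidate dictionary below is hypothetical.

def lcs_length(a, b):
    """Classic dynamic-programming longest common subsequence."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
    return dp[len(a)][len(b)]

def diagnose(observed_flow, dictionary):
    """Return the candidate root cause whose pre-learned error flow is
    most similar to the observed one."""
    return max(dictionary,
               key=lambda cause: lcs_length(observed_flow, dictionary[cause]))

error_flow_dictionary = {          # learned by intentional fault insertion
    "fault_at_module_M1": "M1-M3-M5-OUT",
    "fault_at_module_M2": "M2-M3-M4-OUT",
}
print(diagnose("M1-M3-OUT", error_flow_dictionary))  # fault_at_module_M1
```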

A Tool Chain for Dependable VLSI Design - A Challenge to Soft-Error-Tolerant VLSI Systems -
Hiroto Yasuura (Kyushu University)
In the CREST project "Fundamental Technology of Dependable VLSI System", we are working on modeling, correction, and recovery techniques for dependable VLSI. For soft errors caused by neutrons, which are no longer negligible in advanced VLSI systems, we are modeling soft errors at the system, RT, logic, and circuit levels and defining an index of dependability at each level. We are also developing techniques to estimate these indexes, as well as a tool chain for analyzing the dependability of VLSI systems. In this talk, the goal of our research and the results obtained so far will be presented.

Ultra Dependable VLSI Processor Architecture
Shuichi Sakai (University of Tokyo)
Innovative architectural and software technologies for substantially improving microprocessor dependability, proposed by the speaker's team in the CREST project, will be presented and discussed.

Robust System Design in Scaled CMOS and Beyond
Subhasish Mitra (Stanford University)
Complex systems are an indispensable part of all our lives. The impact of malfunctions in these systems continues to grow as systems become more complex, interconnected, and pervasive. Robust system design is required to ensure that future systems perform correctly despite rising levels of complexity and increasing disturbances. Hardware failures, in particular, are a growing concern because:
- Existing validation and test methods barely cope with today’s complexity. New techniques will be essential to minimize the effects of defects and design flaws going forward.
- For coming generations of silicon ICs, several failure mechanisms, largely benign in the past, are becoming visible at the system level. A large class of future systems will require tolerance of hardware errors during their operation.
- Emerging nanotechnologies are inherently prone to high rates of imperfections. Nevertheless, such technologies are being seriously explored to build the highly energy-efficient systems of the future. The inherent imperfections must be overcome before such nanotechnologies can be harnessed with practical benefits to society.
This talk will address these outstanding challenges, ranging from immediate concerns blocking progress today to major obstacles in exploratory nanotechnologies, as described below:
- Thorough validation and test despite enormous complexity.
- Tolerance and prediction of hardware failures.
- Correct circuit operation in emerging nanotechnologies prone to imperfections.

Enabling Robust Systems through On-Line Test and Diagnosis
Shawn Blanton (Carnegie Mellon University)
Producing reliable integrated systems is becoming extremely difficult due to (1) the increasing variability and uncertainty inherent in advancing fabrication technologies, (2) the worsening effects of various wear-out mechanisms, and (3) environmental disturbances (e.g., soft errors). Solutions centering on traditional approaches involving redundant resources are likely to be impractical for a number of reasons involving power, chip area, and performance. Because of this trend, it is widely believed that approaches enabling chips to self-X (where X = {monitor, diagnose, calibrate, compensate, heal, etc.}) will be needed to ensure reliable operation over a chip's expected lifetime. In collaboration with Prof. Mitra of Stanford University, our work in this area focuses on developing methodologies that enable integrated systems to (a) self-monitor, (b) self-diagnose, and (c) self-compensate for various non-idealities.
In this talk, I will describe how state-of-the-art diagnosis of failing ICs is used today to extract valuable information about design, manufacturing, and test itself. Although tremendously beneficial, such diagnosis requires a fault simulator, significant amounts of design data, and powerful compute servers, making it infeasible to implement traditional effect-cause diagnosis within a system. We are therefore developing efficient cause-effect (i.e., fault-dictionary) based approaches for performing in-system diagnosis. I will conclude the talk by describing approaches for developing efficient fault dictionaries for on-chip implementation.
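As a toy illustration of the cause-effect idea (not the actual on-chip encoding), a fault dictionary precomputed off-line maps each modeled fault to the pass/fail signature it produces, so in-system diagnosis reduces to a table lookup; all fault names and signatures below are hypothetical.

```python
# A minimal illustration of cause-effect (fault-dictionary) diagnosis: the
# dictionary maps each modeled fault to the pass/fail signature it produces
# under a fixed test set, so an in-system monitor only needs a table lookup
# rather than a fault simulator and design data.

# Hypothetical dictionary, precomputed off-line by fault simulation;
# signatures are bit strings over the applied tests (1 = test fails).
FAULT_DICTIONARY = {
    "net_n17_stuck_at_0": "1010",
    "net_n42_stuck_at_1": "0110",
    "bridge_n5_n9":       "1100",
}

def in_system_diagnose(observed_signature):
    """Return all modeled faults whose precomputed signature matches the
    observed pass/fail behavior; an empty list means 'unmodeled fault'."""
    return [fault for fault, sig in FAULT_DICTIONARY.items()
            if sig == observed_signature]

print(in_system_diagnose("0110"))  # ['net_n42_stuck_at_1']
```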

VLSI Design and Education Center (VDEC), The University of Tokyo