Exploiting Timing Error Locality to Maximize Power Efficiency of Microprocessors

Advisor: Prof. Russ Joseph

Circuit-level timing speculation has been proposed as a technique to reduce dependence on conservative design margins, eliminating their power and performance overheads. Recent work has proposed microarchitectural methods to dynamically detect and recover from timing errors in processor logic. To a large extent this work has relied on statistical error models and has not evaluated the potential disparity of error rates at the level of static instructions.

In this project, we examine gate-level hardware models that reveal pronounced locality in instruction-level error rates, arising from value locality and the data dependence structure within an execution pipeline. We propose timing error prediction to dynamically anticipate timing errors at the instruction level, and an error padding technique to avoid the full recovery cost of timing errors.
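The prediction idea can be sketched as a small PC-indexed table of saturating counters, in the style of a bimodal branch predictor. This is a toy Python model under our own assumptions; the table size, counter width, and names are illustrative and not details from the project.

```python
# Toy sketch: a PC-indexed timing error predictor built from 2-bit
# saturating counters, analogous to a bimodal branch predictor.
# TABLE_BITS and the indexing scheme are illustrative assumptions.

TABLE_BITS = 10
TABLE_SIZE = 1 << TABLE_BITS

class TimingErrorPredictor:
    def __init__(self):
        # Counters start at 1 ("weakly predict no error").
        self.counters = [1] * TABLE_SIZE

    def _index(self, pc):
        # Drop the low bits (word alignment) and fold into the table.
        return (pc >> 2) & (TABLE_SIZE - 1)

    def predict(self, pc):
        # Predict a timing error when the counter is in the upper half.
        return self.counters[self._index(pc)] >= 2

    def update(self, pc, error_occurred):
        # Strengthen or weaken the counter based on the observed outcome.
        i = self._index(pc)
        if error_occurred:
            self.counters[i] = min(3, self.counters[i] + 1)
        else:
            self.counters[i] = max(0, self.counters[i] - 1)
```

In use, the pipeline would consult `predict(pc)` before issue (e.g. to apply padding only to error-prone static instructions) and call `update(pc, ...)` when the error-detection logic resolves.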


System Modeling and Optimization

Advisor: Prof. Russ Joseph

Hard faults in SRAM arrays due to random effects will become increasingly prevalent in future technology generations. For branch predictor arrays and other structures which track speculative state, probability models that relate performance to a given fault rate could help designers evaluate cost-benefit tradeoffs in developing more resilient arrays. This project offers a first attempt at analytic models of SRAM structure accuracy under hard failures.

Our approach applies classification and mixture models to concisely model the impact that failure rate has on overall predictor performance for a specified application.
We believe that these types of models provide useful projections and help to build insight that may be used to reduce the adverse effects of SRAM defects in these structures.
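A minimal version of such a mixture model treats each table entry as a two-way mixture of healthy and faulty states, with faulty entries degraded to chance-level prediction. The function below is our own illustrative sketch (the name, parameters, and chance-level default are assumptions, not the project's actual model).

```python
def expected_accuracy(entry_weights, entry_accuracies, fault_rate,
                      faulty_accuracy=0.5):
    """Expected predictor accuracy under random hard faults.

    Each table entry i receives a fraction entry_weights[i] of lookups
    and, when healthy, predicts with accuracy entry_accuracies[i].
    With probability fault_rate an entry is faulty and is assumed to
    predict at chance level (faulty_accuracy).
    """
    return sum(w * ((1 - fault_rate) * a + fault_rate * faulty_accuracy)
               for w, a in zip(entry_weights, entry_accuracies))
```

Given per-entry weights and accuracies profiled for a specific application, sweeping `fault_rate` projects how predictor performance degrades without simulating each fault map.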

Dynamic Cache Resizing and Voltage Scaling Using a Machine Learning Algorithm

(Note: course project for Machine Learning) Advisor: Prof. Bryan Pardo

Large data caches are introduced to decrease cache misses, but since most cache blocks remain idle in typical applications, the extra capacity comes at the cost of additional power consumption. Moreover, as circuit feature sizes shrink, leakage power has become a growing concern in processor design. Motivated by this, cache resizing techniques have been developed to increase power efficiency.

Machine learning gives us the opportunity to uncover hidden relationships between a program's memory profile and the cache capacity it actually needs. An intuitive idea is therefore to exploit characteristics of the memory profile to build a more accurate and efficient cache resizing policy. In this project we applied profiles of memory access instructions and a machine learning algorithm (a genetic algorithm) to build rules mapping a specific memory access profile to the optimal cache capacity. Moreover, since the policy decision rule is independent of the hardware implementation of the cache structure, we can also easily introduce voltage scaling, by adding a voltage parameter, to further reduce leakage power in the processor.
