CIM: A reliable metric for evaluating program phase classifications

Sreekumar V. Kodakara, Jinpyo Kim, David J. Lilja, Douglas Hawkins, Wei-Chung Hsu, Pen-Chung Yew

Research output: Contribution to journal › Article › peer-review

5 Scopus citations

Abstract

We propose the use of the Confidence Interval of estimated Mean (CIM), a metric based on statistical sampling theory, to evaluate the quality of a given phase classification and to compare different phase classification schemes. Previous research on phase classification used the Weighted Average of Coefficient of Variation (CoVwa) to estimate phase classification quality. We found that the phase quality indicated by CoVwa could be inconsistent across different phase classifications. We explain the reasons behind this inconsistency and demonstrate it using data from several SPEC CPU2000 benchmark programs. We show that CIM correctly estimates the quality of a phase classification with a meaningful statistical interpretation.
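The two metrics contrasted in the abstract can be illustrated with a short sketch. This is not the paper's exact formulation, only a minimal example of the underlying statistics: the coefficient of variation of a set of samples, and a normal-approximation confidence interval of the estimated mean (the quantity CIM is built on). The sample values below are hypothetical per-interval CPI measurements for one classified phase.

```python
import math
import statistics

def cov(samples):
    # Coefficient of Variation: standard deviation relative to the mean.
    return statistics.stdev(samples) / statistics.fmean(samples)

def ci_of_mean(samples, z=1.96):
    # Confidence interval of the estimated mean under a normal
    # approximation (~95% for z = 1.96): mean +/- z * standard error.
    m = statistics.fmean(samples)
    half = z * statistics.stdev(samples) / math.sqrt(len(samples))
    return (m - half, m + half)

# Hypothetical per-interval CPI samples from one classified phase.
phase_cpi = [1.02, 0.98, 1.05, 1.01, 0.99, 1.03]
lo, hi = ci_of_mean(phase_cpi)
```

A tighter interval around the phase mean indicates that intervals grouped into the phase behave homogeneously, which is the intuition behind using the confidence interval as a quality measure.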

Original language: English (US)
Journal: IEEE Computer Architecture Letters
Volume: 6
Issue number: 1
DOIs
State: Published - Jan 2007

Bibliographical note

Funding Information:
This work was supported in part by NSF grants CCF-0541162 and EIA-0220021, Intel, the University of Minnesota Digital Technology Center, and the Minnesota Supercomputing Institute.

Copyright:
Copyright 2011 Elsevier B.V., All rights reserved.

