Statistical sampling, especially stratified random sampling, is a promising technique for estimating the performance of a benchmark program without executing the complete program on a microarchitecture simulator or a real machine. The accuracy of the performance estimate and the simulation cost depend on three parameters: the interval size, the sample size, and the number of phases (or strata). The optimum values of these parameters depend on the performance behavior of the program and on the microarchitecture configuration being evaluated. In this paper, we quantify the effect of these three parameters and their interactions on the accuracy of the performance estimate and on the simulation cost. We use the Confidence Interval of the estimated Mean (CIM), a metric derived from statistical sampling theory, to measure the accuracy of the performance estimate; we also discuss why CIM is an appropriate metric for this analysis. We use the total number of instructions simulated and the total number of samples measured as cost parameters. Finally, we characterize 21 SPEC CPU2000 benchmarks based on our analysis.
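To make the CIM metric concrete, the following sketch shows the textbook stratified-sampling estimator of a population mean (e.g. per-interval CPI) and its confidence-interval half-width. The function name `stratified_mean_ci`, the input layout, and the numeric values are illustrative assumptions, not taken from the paper; the formulas are the standard ones from statistical sampling theory, including the finite-population correction.

```python
import math

def stratified_mean_ci(strata, z=1.96):
    """Estimate a population mean and its confidence-interval half-width
    from per-stratum samples (standard stratified-sampling formulas).

    strata: list of (N_h, samples_h) pairs, where N_h is the number of
    intervals in stratum h and samples_h is a list of measured values
    (e.g. per-interval CPI) sampled from that stratum.
    z: normal quantile for the desired confidence level (1.96 ~ 95%).
    """
    N = sum(N_h for N_h, _ in strata)          # total number of intervals
    mean = 0.0
    var = 0.0
    for N_h, xs in strata:
        n_h = len(xs)
        W_h = N_h / N                          # stratum weight
        xbar = sum(xs) / n_h                   # within-stratum sample mean
        s2 = sum((x - xbar) ** 2 for x in xs) / (n_h - 1)
        mean += W_h * xbar
        # Variance of the stratified mean estimator,
        # with the finite-population correction (1 - n_h / N_h).
        var += W_h ** 2 * (s2 / n_h) * (1 - n_h / N_h)
    return mean, z * math.sqrt(var)

# Hypothetical example: a low-CPI phase (100 intervals) and a
# high-CPI phase (300 intervals), four samples from each.
strata = [(100, [1.0, 1.1, 0.9, 1.0]),
          (300, [2.0, 2.1, 1.9, 2.0])]
mean, half_width = stratified_mean_ci(strata)
```

Here `mean` is the weighted estimate (0.25 x 1.0 + 0.75 x 2.0 = 1.75) and `half_width` is the CIM-style accuracy measure: a smaller half-width for the same number of sampled instructions indicates a better choice of interval size, sample size, and stratum count.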