TY - GEN
T1 - A compiler framework for supporting speculative multicore processors
AU - Yew, Pen-Chung
PY - 2007
Y1 - 2007
N2 - As multi-core technology is currently being deployed in the computer industry primarily to limit power consumption and improve throughput, continued performance improvement of a single application on such systems remains an important and challenging task. Because of the shortened on-chip communication latency between cores, using thread-level parallelism (TLP) to improve the number of instructions executed per clock cycle, i.e., to improve ILP performance, has been shown to be effective for many general-purpose applications. However, because of the program characteristics of these applications, effective speculative schemes at both the thread and instruction level are crucial. Processors that support speculative multithreading have been proposed for some time now. However, efforts to develop compilation techniques for this type of processor have only begun recently. Some of these techniques would require efficient architectural support. The jury is still out on how much performance improvement can be achieved for general-purpose applications on this kind of architecture. In this talk, we focus on a compiler framework that supports thread-level parallelism with the help of control and data speculation for general-purpose applications. This compiler framework has been implemented in the Open64 compiler and includes support for efficient data dependence and alias profiling, loop selection schemes, as well as speculative compiler optimizations and effective recovery code generation schemes to exploit thread-level parallelism in loops and the remaining code regions.
UR - http://www.scopus.com/inward/record.url?scp=38048998699&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=38048998699&partnerID=8YFLogxK
U2 - 10.1007/978-3-540-74309-5_1
DO - 10.1007/978-3-540-74309-5_1
M3 - Conference contribution
AN - SCOPUS:38048998699
SN - 9783540743088
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 1
BT - Advances in Computer Systems Architecture - 12th Asia-Pacific Conference, ACSAC 2007, Proceedings
PB - Springer Verlag
T2 - 12th Asia-Pacific Computer Systems Architecture Conference, ACSAC 2007
Y2 - 23 August 2007 through 25 August 2007
ER -