A performance evaluation of the two classes of real-time distributed CPU scheduling algorithms (sender-initiated and receiver-initiated) has been conducted, with emphasis on several factors, including network load variance, task laxity, communication costs, and mean arrival rate. The results show the pivotal role of task laxity, communication cost, and network load variance in determining the performance of each algorithm. When the algorithms are run on a fully connected network, receiver-initiated algorithms outperform sender-initiated algorithms for low-laxity tasks. When task laxity is high, both classes perform well and the difference between them is not significant. Owing to the low communication overhead of a fully connected network, the receiver-initiated algorithm can exploit a frequent polling scheme to maintain up-to-date information on peers' CPU surplus; consequently, it achieves a relatively low overall miss ratio.
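To illustrate the receiver-initiated polling scheme described above, the following is a minimal sketch in Python: an underloaded node polls its fully connected peers and pulls a task from the first peer reporting a queue above a surplus threshold. All names, the threshold value, and the transfer policy are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of receiver-initiated load sharing on a fully
# connected network. An underloaded receiver polls peers for their CPU
# surplus and migrates one task from the first overloaded peer found.

THRESHOLD = 2  # queue length above which a node is considered overloaded

class Node:
    def __init__(self, name, queue_len):
        self.name = name
        self.queue_len = queue_len

    def surplus(self):
        # Answer a poll: report how many tasks exceed the threshold.
        return max(0, self.queue_len - THRESHOLD)

    def transfer_task(self):
        # Hand one queued task over to the polling receiver.
        self.queue_len -= 1

def receiver_poll(receiver, peers):
    """When underloaded, poll each peer and pull one task if any
    peer reports a surplus; return the donor's name, or None."""
    for peer in peers:
        if peer.surplus() > 0:
            peer.transfer_task()
            receiver.queue_len += 1
            return peer.name
    return None

nodes = [Node("A", 0), Node("B", 5), Node("C", 1)]
donor = receiver_poll(nodes[0], nodes[1:])
print(donor, nodes[0].queue_len, nodes[1].queue_len)  # → B 1 4
```

Because the receiver polls only when it has spare capacity, frequent polling keeps its view of peers' surplus current at the cost of extra messages, which is cheap on a fully connected network; this is the trade-off the abstract credits for the low overall miss ratio.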