Dynamic task scheduling using online optimization

Babak Hamidzadeh, Lau Ying Kit, David J. Lilja

Research output: Contribution to journal · Article · peer-review



Algorithms for scheduling independent tasks onto the processors of a multiprocessor system must trade off processor load balance, memory locality, and scheduling overhead. Most existing algorithms, however, do not adequately balance these conflicting factors. This paper introduces the Self-Adjusting Dynamic Scheduling (SADS) class of algorithms, which use a unified cost model to account explicitly for these factors at runtime. A dedicated processor performs scheduling in phases, maintaining a tree of partial schedules and incrementally assigning tasks to the least-cost partial schedule. A scheduling phase terminates whenever any processor becomes idle, at which time the partial schedules are distributed to the processors. An extension of the basic SADS algorithm, called DBSADS, controls the scheduling overhead by giving higher priority to partial schedules with more task-to-processor assignments. These algorithms are compared to two distributed scheduling algorithms within a database application on an Intel Paragon distributed-memory multiprocessor system.
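The abstract's core mechanism, growing a tree of partial schedules and always expanding the least-cost one, can be sketched as a best-first search. The paper's actual unified cost model is not given here, so the sketch below is a hypothetical illustration: `cost_fn` stands in for that model (the example usage plugs in a simple makespan-style load-balance cost), and the function names and data layout are assumptions, not the authors' implementation.

```python
import heapq

def sads_phase(tasks, num_procs, cost_fn):
    """One scheduling phase in the spirit of SADS (hypothetical sketch):
    best-first search over a tree of partial schedules, always expanding
    the least-cost partial schedule until a complete schedule is found."""
    # A partial schedule is a tuple of per-processor task tuples.
    root = (0.0, 0, tuple(() for _ in range(num_procs)))
    heap = [root]          # priority queue keyed on cost
    counter = 1            # tie-breaker so schedules are never compared
    while heap:
        cost, _, partial = heapq.heappop(heap)
        n_assigned = sum(len(p) for p in partial)
        if n_assigned == len(tasks):
            return partial  # least-cost complete schedule reached
        task = tasks[n_assigned]  # next unassigned task, in arrival order
        # Branch: try placing the task on each processor.
        for p in range(num_procs):
            child = tuple(
                procs + (task,) if i == p else procs
                for i, procs in enumerate(partial)
            )
            heapq.heappush(heap, (cost_fn(child), counter, child))
            counter += 1
    return None

# Example with a makespan-style cost (maximum per-processor load);
# task values are treated as execution times.
schedule = sads_phase([3, 1, 2], 2,
                      lambda s: max((sum(p) for p in s), default=0.0))
```

With a cost that never decreases as tasks are added (such as the makespan above), the first complete schedule popped from the queue is a least-cost one, which is what makes the best-first expansion order pay off.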

Original language: English (US)
Pages (from-to): 1151-1163
Number of pages: 13
Journal: IEEE Transactions on Parallel and Distributed Systems
Issue number: 11
State: Published - Nov 1 2000

