Shortest Processing Time First and Hadoop - Archive ouverte HAL
Conference Paper, Year: 2016

Shortest Processing Time First and Hadoop

Abstract

Big data has revealed itself as a powerful tool for many sectors, ranging from science to business. Distributed data-parallel computing is now commonplace: using a large number of computing and storage resources makes it possible to process data at a previously unattainable scale. However, developing large-scale distributed big data processing requires tackling many challenges, one of the most complex being scheduling. Shortest Processing Time First (SPT) is a classic scheduling policy used in many systems, as it is known to be an optimal online scheduling policy for minimizing the average flowtime. We therefore integrated this policy into Hadoop, a framework for big data processing, and built a prototype implementation. This paper describes this integration, as well as the test results obtained on our testbed.
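The prototype itself is described in the full paper and is not reproduced here. As an illustration only, the following minimal Java sketch shows the SPT rule the abstract refers to, under the simplifying assumptions (not taken from the paper) of a single machine, jobs all released at time zero, and processing times known in advance: jobs are dispatched in non-decreasing order of processing time, which minimizes the average flowtime in that setting. The job names and durations are made up for the example.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

// Illustrative sketch only, not the paper's Hadoop integration.
public class SptDemo {
    record Job(String id, long processingTime) {}

    public static void main(String[] args) {
        // Queue ordered by processing time: the SPT rule.
        PriorityQueue<Job> queue = new PriorityQueue<>(
                Comparator.comparingLong(Job::processingTime));
        queue.add(new Job("j1", 8));
        queue.add(new Job("j2", 2));
        queue.add(new Job("j3", 5));

        long clock = 0, totalFlowtime = 0;
        List<String> order = new ArrayList<>();
        while (!queue.isEmpty()) {
            Job next = queue.poll();          // shortest pending job first
            clock += next.processingTime();   // completion time of this job
            totalFlowtime += clock;           // flowtime = completion - release (0 here)
            order.add(next.id());
        }
        System.out.println("SPT order: " + order);                        // [j2, j3, j1]
        System.out.println("Average flowtime: " + (double) totalFlowtime / 3); // (2+7+15)/3 = 8.0
    }
}
```

Running any other order of these three jobs yields an average flowtime strictly greater than 8.0, which is the property that motivates using SPT in the scheduler.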
Main file: bare_conf.pdf (122.35 KB)
Origin: files produced by the author(s)

Dates and versions

hal-01308183, version 1 (27-04-2016)

Identifiers

  • HAL Id: hal-01308183, version 1

Cite

Laurent Bobelin, Patrick Martineau, Haiwu He. Shortest Processing Time First and Hadoop. 3rd IEEE International Conference on Cyber Security and Cloud Computing (CSCloud 2016), Jun 2016, Beijing, China. ⟨hal-01308183⟩
197 Views
1655 Downloads
