Conference paper, 2016

Towards Memory-Optimized Data Shuffling Patterns for Big Data Analytics

Abstract

Big data analytics is an indispensable tool in transforming science, engineering, medicine, healthcare, finance and ultimately business itself. With the explosion of data sizes and the need for shorter time-to-solution, in-memory platforms such as Apache Spark are gaining popularity. However, this introduces important challenges, among which data shuffling is particularly difficult: on the one hand, it is a key part of the computation with a major impact on overall performance and scalability, so its efficiency is paramount; on the other hand, it needs to operate with scarce memory in order to leave as much memory as possible available for data caching. In this context, scheduling data transfers so that both dimensions of the problem are addressed simultaneously is non-trivial. State-of-the-art solutions often rely on simple approaches that yield sub-optimal performance and resource usage. This paper contributes a novel shuffle data transfer strategy that dynamically adapts to the computation with minimal memory utilization, which we outline as a series of design principles.
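
For readers unfamiliar with the trade-off the abstract describes, the sketch below is a minimal Spark example in Scala (not the strategy proposed in the paper) that contrasts a shuffle of raw records with one that combines values map-side, and shows the unified memory settings through which shuffle execution competes with data caching. The application name, dataset, and configuration values are illustrative assumptions.

import org.apache.spark.sql.SparkSession

// Minimal sketch (illustrative only): contrasts a shuffle that moves raw
// records with one that combines values map-side before the transfer.
object ShuffleSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("shuffle-sketch")            // hypothetical application name
      .master("local[*]")
      // Unified memory region shared by shuffle execution and block caching;
      // the values here are illustrative, not recommendations from the paper.
      .config("spark.memory.fraction", "0.6")
      .config("spark.memory.storageFraction", "0.5")
      .getOrCreate()
    val sc = spark.sparkContext

    // Toy key-value dataset: one million records spread over 1024 keys.
    val pairs = sc.parallelize(1 to 1000000).map(i => (i % 1024, 1L))

    // groupByKey shuffles every record and buffers whole groups on the
    // reduce side, so shuffle memory pressure grows with the data size.
    val grouped = pairs.groupByKey().mapValues(_.sum)

    // reduceByKey combines values per key before the shuffle, so far less
    // data crosses the network and less memory is needed during the transfer.
    val reduced = pairs.reduceByKey(_ + _)

    println(s"grouped: ${grouped.count()} keys, reduced: ${reduced.count()} keys")
    spark.stop()
  }
}

Both jobs compute the same per-key counts; they differ only in how much data the shuffle must buffer and transfer under the shared memory budget, which is the dimension the paper's transfer strategy targets.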
Main file: short.pdf (89.16 KB). Origin: files produced by the author(s).

Dates and versions

hal-01355227, version 1 (22-08-2016)

Cite

Bogdan Nicolae, Carlos Costa, Claudia Misale, Kostas Katrinis, Yoonho Park. Towards Memory-Optimized Data Shuffling Patterns for Big Data Analytics. CCGrid’16: 16th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, May 2016, Cartagena, Colombia. pp.409-412, ⟨10.1109/CCGrid.2016.85⟩. ⟨hal-01355227⟩