
Hierarchical Clustering

Abstract: Hierarchical clustering is a recursive partitioning of a dataset into clusters at an increasingly finer granularity. Motivated by the fact that most work on hierarchical clustering was based on providing algorithms, rather than optimizing a specific objective, Dasgupta framed similarity-based hierarchical clustering as a combinatorial optimization problem, where a “good” hierarchical clustering is one that minimizes a particular cost function [23]. He showed that this cost function has certain desirable properties: to achieve optimal cost, disconnected components (namely, dissimilar elements) must be separated at higher levels of the hierarchy, and when the similarity between data elements is identical, all clusterings achieve the same cost. We take an axiomatic approach to defining “good” objective functions for both similarity- and dissimilarity-based hierarchical clustering. We characterize a set of admissible objective functions having the property that when the input admits a “natural” ground-truth hierarchical clustering, the ground-truth clustering has an optimal value. We show that this set includes the objective function introduced by Dasgupta. Equipped with a suitable objective function, we analyze the performance of practical algorithms, as well as develop better and faster algorithms for hierarchical clustering. We also initiate a beyond worst-case analysis of the complexity of the problem and design algorithms for this scenario.
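For concreteness, Dasgupta's objective charges each pair of points (i, j) with similarity w(i, j) a cost of w(i, j) times the number of leaves under the lowest common ancestor of i and j in the hierarchy T, i.e. cost(T) = Σ_{i,j} w(i, j) · |leaves(T[i ∨ j])|, to be minimized. A minimal sketch of this cost computation (the nested-tuple tree representation and helper names are ours, not from the paper):

```python
def dasgupta_cost(tree, similarity):
    """Dasgupta's cost of a hierarchical clustering.

    tree: a binary hierarchy as nested 2-tuples; leaves are hashable points.
    similarity: dict mapping frozenset({i, j}) -> nonnegative weight w(i, j).
    Returns sum over pairs (i, j) of w(i, j) * |leaves(lca(i, j))|.
    """
    def leaves(t):
        # A leaf is any non-tuple node; internal nodes are (left, right).
        return [t] if not isinstance(t, tuple) else leaves(t[0]) + leaves(t[1])

    def cost(t):
        if not isinstance(t, tuple):
            return 0
        left, right = leaves(t[0]), leaves(t[1])
        n = len(left) + len(right)  # subtree size = |leaves(lca)| for pairs split here
        # Each pair separated at this node has its lca exactly here.
        split = sum(similarity.get(frozenset((i, j)), 0)
                    for i in left for j in right)
        return n * split + cost(t[0]) + cost(t[1])

    return cost(tree)
```

On three points where only a and b are similar (w(a, b) = 1), the hierarchy that first merges a and b costs 2, while the one merging a with c first costs 3, illustrating the property the abstract mentions: separating dissimilar elements high in the tree is cheaper.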
Document type: Journal articles
Contributor: Vincent Cohen-Addad
Submitted on: Wednesday, November 20, 2019 - 10:00:54 AM
Last modification on: Monday, August 3, 2020 - 3:44:10 AM




Vincent Cohen-Addad, Varun Kanade, Frederik Mallmann-Trenn, Claire Mathieu. Hierarchical Clustering. Journal of the ACM (JACM), Association for Computing Machinery, 2019, 66 (4), pp.1-42. ⟨10.1145/3321386⟩. ⟨hal-02371814⟩
