Abstract
The increase in data volume is challenging the suitability of non-distributed and non-scalable algorithms, despite advancements in hardware. Clustering is one example of this challenge. Because optimal clustering algorithms scale poorly with increased data volume or are intrinsically non-distributed, accurate clustering of large datasets is increasingly resource-heavy, relying on substantial and expensive compute nodes. This forces users to choose between accuracy and scalability. In this work, we introduce HiErArchical Data Splitting and Stitching (HEADSS), a Python package designed to facilitate clustering at scale. By automating the splitting and stitching, it enables repeatable handling and removal of edge effects. We implement HEADSS in conjunction with HDBSCAN, achieving orders-of-magnitude reductions in single-node memory requirements for both non-distributed and distributed implementations, with the latter offering similar order-of-magnitude reductions in total run times while recovering analogous accuracy. Furthermore, our method establishes a hierarchy of features by using a subset of the clustering features to split the data.
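To make the split-cluster-stitch idea concrete, the sketch below tiles a toy two-dimensional dataset along one feature with overlapping regions, clusters each tile independently with HDBSCAN, and keeps each point's label only from the tile whose core region contains it. This is a minimal illustration of the general workflow under simplifying assumptions, not the HEADSS API: the number of tiles, the overlap fraction, and the stitching rule are chosen here for brevity, and the `hdbscan` package is assumed to be installed.

```python
# Illustrative sketch only: a minimal split-cluster-stitch workflow in the
# spirit of HEADSS, NOT the package's actual API. Tile count, overlap fraction
# and the "keep the label from the tile whose core contains the point" rule
# are simplifying assumptions for this example.
import numpy as np
import hdbscan

rng = np.random.default_rng(0)
# Toy 2D data: three Gaussian blobs.
X = np.vstack([
    rng.normal(loc=(0, 0), scale=0.3, size=(500, 2)),
    rng.normal(loc=(3, 0), scale=0.3, size=(500, 2)),
    rng.normal(loc=(0, 3), scale=0.3, size=(500, 2)),
])

def split(X, n_tiles=2, overlap=0.25, axis=0):
    """Split X into overlapping tiles along one feature (a subset of the
    clustering features), returning (index_array, core_interval) pairs."""
    lo, hi = X[:, axis].min(), X[:, axis].max()
    width = (hi - lo) / n_tiles
    tiles = []
    for i in range(n_tiles):
        core_lo, core_hi = lo + i * width, lo + (i + 1) * width
        pad = overlap * width
        mask = (X[:, axis] >= core_lo - pad) & (X[:, axis] <= core_hi + pad)
        tiles.append((np.where(mask)[0], (core_lo, core_hi)))
    return tiles

# Cluster each tile independently, then stitch: keep a point's label only
# from the tile whose *core* region contains it, so duplicates created by
# the overlap (and their edge effects) are discarded.
final_labels = np.full(len(X), -1)
offset = 0
for idx, (core_lo, core_hi) in split(X, n_tiles=2, overlap=0.25, axis=0):
    labels = hdbscan.HDBSCAN(min_cluster_size=25).fit_predict(X[idx])
    in_core = (X[idx, 0] >= core_lo) & (X[idx, 0] <= core_hi)
    keep = in_core & (labels >= 0)
    final_labels[idx[keep]] = labels[keep] + offset  # avoid label collisions
    offset += labels.max() + 1 if labels.max() >= 0 else 0

print(np.unique(final_labels))  # -1 marks noise / unassigned points
```

Because each tile is clustered independently, the per-tile HDBSCAN calls are the natural unit to parallelise in a distributed setting, which is where the memory and run-time savings described above would come from.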
| Field | Value |
| --- | --- |
| Original language | English |
| Article number | 100709 |
| Pages (from-to) | 1-9 |
| Number of pages | 9 |
| Journal | Astronomy and Computing |
| Volume | 43 |
| Early online date | 8 Apr 2023 |
| DOIs | |
| Publication status | Published - 21 Apr 2023 |
Keywords
- methods: data analysis
- methods: statistical
- methods: miscellaneous
- methods: numerical