Improved methodology for longitudinal Web analytics using Common Crawl

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

Common Crawl is a multi-petabyte longitudinal dataset containing over 100 billion web pages, widely used as a source of language data for sequence-model training and in web science research. Each of its constituent archives is on the order of 75 TB in size. Using it for research, particularly for longitudinal studies, which necessarily involve multiple archives, is therefore very expensive in compute time and in storage space and/or web bandwidth. Two new methods for mitigating this problem are presented here, based on exploiting and extending the much smaller (under 200 GB compressed) index which is available for each archive. By adding Last-Modified timestamps to the index we enable longitudinal exploration using only a single archive. By comparing the distribution of index features for each of the 100 segments into which each archive is divided with their distribution over the whole archive, we have identified the least and most representative segments for a number of recent archives. This allows the segment(s) most representative of an archive to be used as proxies for the whole. We illustrate this approach in an analysis of changes in URI length over time, leading to an unanticipated insight into how the creation of Web pages has changed over time.
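The segment-selection idea in the abstract can be sketched in a few lines: compare the distribution of an index feature (here, URI length) within each segment against its distribution over the whole archive, and rank segments by similarity. This is an illustrative sketch only; the specific divergence measure (Jensen-Shannon), the bucket width, and all function names are assumptions, not details taken from the paper.

```python
import math
from collections import Counter

def length_distribution(uri_lengths, bucket=10, max_bucket=50):
    """Bucket URI lengths (bucket width in characters, capped at max_bucket
    buckets) and normalise the counts to a probability distribution."""
    counts = Counter(min(l // bucket, max_bucket) for l in uri_lengths)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def js_divergence(p, q):
    """Jensen-Shannon divergence between two sparse distributions (dicts).
    Zero when the distributions are identical; bounded above by 1 bit."""
    m = {k: 0.5 * (p.get(k, 0.0) + q.get(k, 0.0)) for k in set(p) | set(q)}
    def kl(a, b):
        return sum(a[k] * math.log2(a[k] / b[k]) for k in a if a[k] > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def rank_segments(segment_lengths, archive_lengths):
    """Rank segments by how closely their URI-length distribution matches
    the whole archive's: lowest divergence = most representative."""
    whole = length_distribution(archive_lengths)
    scores = {seg: js_divergence(length_distribution(lengths), whole)
              for seg, lengths in segment_lengths.items()}
    return sorted(scores.items(), key=lambda kv: kv[1])
```

In practice the feature values would come from the per-archive index files rather than being passed in as Python lists, and the same comparison could be run over any other index feature, not just URI length.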
Original language: English
Title of host publication: WebSci '24: Proceedings of the 16th ACM Web Science Conference 2024
Number of pages: 11
Publication status: Accepted/In press - 31 Jan 2024
Event: 16th ACM Web Science Conference 2024 - Stuttgart, Germany
Duration: 21 May 2024 - 24 May 2024
Conference number: 16


Conference: 16th ACM Web Science Conference 2024
Abbreviated title: WebSci 2024


