Tradeoffs in XML Database Compression

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

Large XML data files, or XML databases, are now a common way to distribute scientific and bibliographic data, and storing such data efficiently is an important concern. A number of approaches to XML compression have been proposed in the last five years. The most competitive approaches employ one or more statistical text compressors based on PPM or arithmetic coding, in which some of the context is provided by the XML document structure. The purpose of this paper is to investigate the relationship between the extant proposals in more detail. We review the two main statistical modeling approaches proposed so far and evaluate their performance on two representative XML databases. Our main finding is that while a recently proposed multiple-model approach can provide better overall compression for large databases, it uses much more memory and converges more slowly than an older single-model approach.
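The contrast the abstract draws can be illustrated with a toy experiment. The sketch below (not the paper's implementation) routes the text under each XML element name to its own adaptive order-0 model ("multiple-model") versus feeding everything to one shared model ("single-model"), and compares the ideal code length each scheme would achieve. The sample document and all names are illustrative assumptions; real systems in this space use higher-order PPM contexts, but the routing-by-structure idea is the same.

```python
# Toy illustration (assumed, not from the paper): XML structure as
# compression context. Text under each element tag goes to its own
# adaptive order-0 model vs. one shared model for the whole document.
import math
import xml.etree.ElementTree as ET
from collections import Counter, defaultdict

def code_length(texts):
    """Ideal adaptive order-0 code length in bits, Laplace-smoothed
    over a 256-symbol alphabet (what an arithmetic coder would pay)."""
    counts = Counter()
    total = 0
    bits = 0.0
    for text in texts:
        for ch in text:
            # adaptive model: probability estimated from symbols seen so far
            p = (counts[ch] + 1) / (total + 256)
            bits += -math.log2(p)
            counts[ch] += 1
            total += 1
    return bits

# Illustrative bibliographic fragment (element names are hypothetical).
doc = """<bib>
  <rec><author>Knuth</author><year>1997</year></rec>
  <rec><author>Cormen</author><year>2009</year></rec>
  <rec><author>Aho</author><year>1986</year></rec>
</bib>"""

root = ET.fromstring(doc)
by_element = defaultdict(list)
for elem in root.iter():
    if elem.text and elem.text.strip():
        by_element[elem.tag].append(elem.text.strip())

# Single-model: one adaptive model sees all the text.
single = code_length([t for ts in by_element.values() for t in ts])
# Multiple-model: a fresh adaptive model per element name.
multiple = sum(code_length(ts) for ts in by_element.values())
print(f"single-model bits: {single:.1f}, multiple-model bits: {multiple:.1f}")
```

On a fragment this small, the per-element models pay a start-up cost (each begins from a uniform distribution), which mirrors the paper's finding that the multiple-model approach converges more slowly; its advantage only appears once each element's model has seen enough data to specialize.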
Original language: English
Title of host publication: Data Compression Conference '06
Pages: 392-401
Number of pages: 10
Publication status: Published - 2006