Comparative Analysis of Arithmetic Coding Computational Complexity
Said, Amir
HPL-2004-75
Keyword(s): entropy coding; compression; complexity
Abstract: Some long-held assumptions about the most demanding computations in arithmetic coding are now obsolete due to new hardware. For instance, it is no longer advantageous to replace multiplication, which can now be done with high precision in a single CPU clock cycle, with comparisons and table-based approximations. A good understanding of the cost of arithmetic coding computations is needed to design efficient implementations for current and future processors. In this work we profile these computations by comparing the running times of many implementations, changing at most one part at a time and preventing small effects from being masked by much larger ones. For instance, we test arithmetic operations ranging from 16-bit integers to 48-bit floating point, and renormalization outputs ranging from a single bit to 16 bits. To evaluate the complexity of adaptive coding we compare static models and different adaptation strategies. We observe that significant speed gains are possible if we do not insist on updating the code immediately after each coded symbol. The results show that the fastest techniques are those that effectively use the CPU's hardware: full-precision arithmetic, byte output, table look-up decoding, and periodic updating. The comparison with other well-known methods shows that the version incorporating all these accelerations is substantially faster, and that its speed approaches that of Huffman coding.
Notes: Copyright IEEE 2004. Published in the IEEE Data Compression Conference, 23-25 March 2004, Snowbird, Utah, USA.
21 Pages
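To make the accelerations named in the abstract concrete, here is a minimal sketch of an encoder that uses full-precision 32-bit arithmetic and byte-wise renormalization. This is not the paper's implementation: it is a carry-less range encoder in the style popularized by Subbotin, assuming a caller-supplied cumulative-frequency model with total counts no larger than 2^16, and the names (RangeEncoder, rc_encode, and so on) are illustrative.

```c
#include <stdint.h>
#include <stdio.h>

#define TOP (1u << 24)   /* renormalize once the top byte is settled */
#define BOT (1u << 16)   /* minimum range; caps the model's total count */

typedef struct {
    uint32_t low;    /* base of the current coding interval */
    uint32_t range;  /* width of the current coding interval */
    FILE    *out;
} RangeEncoder;

static void rc_init(RangeEncoder *rc, FILE *out)
{
    rc->low   = 0;
    rc->range = 0xFFFFFFFFu;
    rc->out   = out;
}

/* Encode one symbol occupying counts [cum, cum + freq) out of total,
 * where total <= BOT. One division and two multiplications use the
 * CPU's full-precision integer hardware; renormalization ships whole
 * bytes rather than single bits. */
static void rc_encode(RangeEncoder *rc, uint32_t cum, uint32_t freq,
                      uint32_t total)
{
    uint32_t r = rc->range / total;
    rc->low  += r * cum;
    rc->range = r * freq;

    /* Carry-less renormalization: emit a byte whenever the top byte of
     * low can no longer change, or force the range back up when it
     * underflows (second clause of the while condition). */
    while ((rc->low ^ (rc->low + rc->range)) < TOP ||
           (rc->range < BOT && ((rc->range = -rc->low & (BOT - 1)), 1)))
    {
        fputc((int)(rc->low >> 24), rc->out);
        rc->low   <<= 8;
        rc->range <<= 8;
    }
}

static void rc_flush(RangeEncoder *rc)
{
    /* Emit enough bytes of low for the decoder to finish unambiguously. */
    for (int i = 0; i < 4; i++) {
        fputc((int)(rc->low >> 24), rc->out);
        rc->low <<= 8;
    }
}
```

The periodic updating highlighted in the abstract would sit in the caller's model, not in the coder itself: accumulate symbol counts as usual, but rebuild the cum/freq tables only once every N coded symbols instead of after every one, trading a small loss in compression for a large reduction in per-symbol work.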