Sharable Parallel Image Compression Algorithms

Objective

The objective of this activity is to design, develop, implement, and test adaptive image compression techniques on massively parallel architectures in an attempt to meet NASA's existing and potential needs of sharing archival images across satellite and terrestrial networks.

Approach

Figure 1. Segmentation results (a) using QT, (b) using CRP-ABC (smallest blocks are in black)

Image compression is generally achieved by identifying and then reducing redundancies in the image.  A commonly used approach is to replace "large" smooth areas in the image with "patches" (or blocks) that require fewer bits to represent.  For example, in Figure 1, a large part of the background and the shoulder area of Lena can be encoded with blocks larger than those required for detailed areas, such as the hair.  Consequently, the more large blocks used, the higher the achievable compression ratio, since fewer blocks are needed to encode the same image.  The difficulty of this approach, however, lies in determining the appropriate sizes, shapes, and placement of the blocks so as to maintain the quality of the compressed image.  As demonstrated in Figure 1b, we have developed a Variable Block-Size (VBS) image coding algorithm that, with the aid of a Constrained Resource Planning search engine, produces a very flexible placement of blocks and uses far fewer blocks than a conventional approach (Fig. 1a).
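To make this intuition concrete, the following is a minimal sketch (in Python with NumPy, not the project's coder) of the basic test behind variable block sizes: a block whose pixels vary little can be represented cheaply, for instance by its mean value, while a detailed block cannot. The tolerance value and the mean-value coding are illustrative assumptions.

    import numpy as np

    def smooth_enough(block: np.ndarray, tol: float = 25.0) -> bool:
        """True if the block's pixel variance is low enough to encode it
        as a single large block (e.g. by its mean value).  The tolerance
        is an assumed, illustrative value."""
        return float(np.var(block)) <= tol

    # Example: a flat background patch qualifies for one large block,
    # while a textured patch (e.g. hair) would be split further.
    rng = np.random.default_rng(0)
    flat = np.full((16, 16), 120.0) + rng.normal(0.0, 1.0, (16, 16))
    hair = rng.normal(120.0, 30.0, (16, 16))
    print(smooth_enough(flat))   # True  -> encode as one 16x16 block
    print(smooth_enough(hair))   # False -> use smaller blocks here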

The primary advantage of the VBS technique for image compression lies in its adaptability to local image characteristics. VBS divides an image into blocks of various sizes, so that each block can be compressed accordingly.  The goal of this project is to provide a sharable framework for the VBS algorithm on a parallel platform. Many existing image compression techniques, such as JPEG, Wavelet Transform Compression, Fractal Image Compression, and Vector Quantization, are either "block-based" or can adopt the VBS approach to achieve higher compression rates. However, VBS generally requires significantly more computation to maintain visual quality, a computational load that makes parallel machines both appropriate and necessary. This project has developed such VBS image compression software using both PVM and MPI (two of the most popular parallel message-passing libraries) on the IBM SP2 at the Maui High Performance Computing Center (MHPCC).

Conventionally, segmenting images into blocks of variable sizes in VBS techniques is accomplished with the well-known quad-tree (QT) decomposition (Figure 1a), which divides an image into square blocks whose sizes are powers of two. QT decomposition is favored for its tree-structure representation, which allows efficient bit storage for encoding the tree and a straightforward binary split/merge decision at each node. Nevertheless, QT's rigid restriction on placing blocks at regular locations (cornered at multiples of 2^k) often conflicts with the arbitrary locations of the low-detail (or high-detail) regions in a typical image. Consequently, QT uses more, and smaller, blocks to encode these areas. A VBS algorithm should allow blocks to be placed at arbitrary locations and permit more flexible block shapes to adapt to these unpredictable image characteristics.
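The sketch below illustrates the conventional QT decomposition described above: square, power-of-two blocks anchored at fixed grid positions, split recursively whenever a block is too detailed. The variance-based split criterion and its threshold are illustrative assumptions, not the project's code.

    import numpy as np

    def quadtree(img: np.ndarray, r: int, c: int, size: int,
                 tol: float = 25.0, min_size: int = 1):
        """Return (row, col, size) leaf blocks covering img[r:r+size, c:c+size]."""
        block = img[r:r + size, c:c + size]
        if size <= min_size or np.var(block) <= tol:
            return [(r, c, size)]
        half = size // 2
        leaves = []
        for dr in (0, half):            # four children, cornered at fixed
            for dc in (0, half):        # power-of-two grid positions
                leaves += quadtree(img, r + dr, c + dc, half, tol, min_size)
        return leaves

    rng = np.random.default_rng(0)
    img = np.full((64, 64), 128.0)
    img[:16, :16] += rng.normal(0.0, 30.0, (16, 16))   # one detailed corner
    blocks = quadtree(img, 0, 0, 64)
    print(len(blocks), "leaf blocks")   # flat areas stay large, the corner splits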

Previous VBS implementations have largely neglected the complicated contentions that result from flexible block shapes and placements. The simple example in Figure 2 illustrates such a contention, where the area shown is to be covered using either 1x1 or 2x2 blocks. Most VBS algorithms would choose the 2x2 block B1 first and then be forced to use six 1x1 blocks. The better solution, however, is clearly to use the two 2x2 blocks B2 and B3 plus two 1x1 blocks on top. In other words, the contention among B1 ~ B4 needs to be considered.
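The following small sketch reproduces this contention on a hypothetical 10-cell region shaped like the one described above (two cells sitting on top of a 2x4 strip). A greedy tiler that commits to the first large block it finds uses seven blocks, while an exhaustive search over the same region finds the four-block solution; both tilers are illustrative, not the project's code.

    import numpy as np

    # Hypothetical region: two cells on top of a 2x4 strip (10 cells total).
    region = np.array([[0, 1, 1, 0],
                       [1, 1, 1, 1],
                       [1, 1, 1, 1]], dtype=bool)

    BLOCKS = [(2, 2), (1, 1)]          # try large blocks before small ones

    def fits(cover, r, c, h, w):
        """A block fits if it lies inside the region and covers no used cell."""
        if r + h > cover.shape[0] or c + w > cover.shape[1]:
            return False
        window = cover[r:r + h, c:c + w]
        return region[r:r + h, c:c + w].all() and not window.any()

    def greedy_tiling():
        """Scan row-major; at each uncovered cell place the largest block that fits."""
        cover = np.zeros_like(region)
        placed = []
        for r in range(region.shape[0]):
            for c in range(region.shape[1]):
                if region[r, c] and not cover[r, c]:
                    for h, w in BLOCKS:
                        if fits(cover, r, c, h, w):
                            cover[r:r + h, c:c + w] = True
                            placed.append((r, c, h, w))
                            break
        return placed

    def best_tiling():
        """Exhaustive search over this tiny region: fewest blocks wins."""
        best = [None]
        def recurse(cover, placed):
            idx = np.argwhere(region & ~cover)
            if len(idx) == 0:
                if best[0] is None or len(placed) < len(best[0]):
                    best[0] = list(placed)
                return
            r, c = idx[0]                   # next uncovered cell
            for h, w in BLOCKS:
                if fits(cover, r, c, h, w):
                    cover[r:r + h, c:c + w] = True
                    placed.append((r, c, h, w))
                    recurse(cover, placed)
                    placed.pop()
                    cover[r:r + h, c:c + w] = False
        recurse(np.zeros_like(region), [])
        return best[0]

    print("greedy blocks:", len(greedy_tiling()))   # 7 (one 2x2 + six 1x1)
    print("best blocks:  ", len(best_tiling()))     # 4 (two 2x2 + two 1x1)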

Based on these observations, four major features are implemented in our algorithm to maximize the benefits of the VBS approach:

  1. flexible block placement;
  2. arbitrary scanning or sweeping order;
  3. the use of rectangular blocks; and,
  4. a CRP-based search engine that strategically selects blocks for placement, using the Constrained Resource Planning (CRP) methodology and its associated planning engine.

The Adaptive Block-size Compression (ABC) problem of placing variable-size blocks at arbitrary locations turns out to be difficult (NP-complete), and we therefore tackle it with a proven planning and scheduling paradigm, Constrained Resource Planning (CRP). The resulting CRP-ABC algorithm iteratively determines the best block size and then the most beneficial location for placing the block, until the image is filled with non-overlapping blocks. It achieves a better PSNR/compression-ratio trade-off by resolving the competition among neighboring blocks. In our work, we chose Vector Quantization as the encoding scheme for the blocks to further demonstrate the advantages of this approach.
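A hedged sketch of this iterative placement loop is given below: at each step every candidate block size and location is scored, the most beneficial placement is committed, and the process repeats until the image is tiled. The benefit measure (area per unit of mean-replacement distortion), the candidate sizes, and the stride-limited candidate locations are simplifying assumptions and do not reproduce the CRP engine itself.

    import numpy as np

    SIZES = [(8, 8), (8, 4), (4, 8), (4, 4)]   # rectangular candidates allowed

    def placement_benefit(img, cover, r, c, h, w):
        """Benefit of an uncovered, in-bounds block; None if it cannot be placed."""
        if r + h > img.shape[0] or c + w > img.shape[1]:
            return None
        if cover[r:r + h, c:c + w].any():
            return None
        distortion = float(np.var(img[r:r + h, c:c + w])) + 1e-6
        return (h * w) / distortion            # favour large, smooth blocks

    def abc_like_tiling(img, stride=4):
        """Repeatedly commit the single most beneficial placement until covered."""
        cover = np.zeros(img.shape, dtype=bool)
        placed = []
        while not cover.all():
            best = None
            for h, w in SIZES:
                for r in range(0, img.shape[0], stride):
                    for c in range(0, img.shape[1], stride):
                        b = placement_benefit(img, cover, r, c, h, w)
                        if b is not None and (best is None or b > best[0]):
                            best = (b, r, c, h, w)
            _, r, c, h, w = best
            cover[r:r + h, c:c + w] = True
            placed.append((r, c, h, w))
        return placed

    img = np.random.default_rng(1).normal(128, 20, (32, 32))
    print(len(abc_like_tiling(img)), "blocks placed")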

The algorithm is then parallelized by dividing the image into equal-sized partitions with overlapping boundaries. Each processor handles the partition assigned to it, following the same principles described above. Interprocessor communication occurs when blocks are placed in the overlapping areas.
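As an illustration of this partitioning scheme, the sketch below (assuming mpi4py as the MPI binding) splits the image into equal row strips with an overlapping halo, runs a placeholder per-partition compression step, and exchanges the blocks that fall in the shared boundary with the neighboring rank. The halo width and the placeholder compression step are assumptions for illustration, not the project's code.

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, nprocs = comm.Get_rank(), comm.Get_size()
    HALO = 8                                   # overlap width in pixels (assumed)

    # Rank 0 loads (here: fabricates) the image and broadcasts it.
    img = np.random.default_rng(0).normal(128, 20, (256, 256)) if rank == 0 else None
    img = comm.bcast(img, root=0)

    # Equal-sized row partitions with overlapping boundaries
    # (assumes nprocs divides the number of rows evenly).
    rows = img.shape[0] // nprocs
    lo = max(rank * rows - HALO, 0)
    hi = min((rank + 1) * rows + HALO, img.shape[0])
    strip = img[lo:hi]

    # Placeholder for the per-partition VBS/ABC compression step.
    local_blocks = [(lo, 0, strip.shape[0], strip.shape[1])]

    # Exchange block placements that fall in the overlapping boundary
    # with the next-ranked neighbour (pickle-based send/recv for simplicity).
    if rank + 1 < nprocs:
        comm.send(local_blocks, dest=rank + 1, tag=0)
    if rank > 0:
        neighbour_blocks = comm.recv(source=rank - 1, tag=0)
        # ... resolve overlaps between neighbour_blocks and local_blocks ...

    all_blocks = comm.gather(local_blocks, root=0)
    if rank == 0:
        print(sum(len(b) for b in all_blocks), "block lists gathered")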

Accomplishment(s)

 

Significance

Modern space and earth exploratory technology has enabled the rapid acquisition of huge amounts of scientific measurement data, most of which are image and video signals. Some data also require visualization in image form. As a result, image databases are growing at an astonishing rate, exerting a profound impact on both storage and transmission. Thus, the need for an efficient image compression scheme, or a suite of complementary schemes with varying advantages, remains essential for data archiving and sharing. Many existing image compression techniques, including JPEG, Vector Quantization, and Fractal Compression, are block-based. Despite the importance of developing a better variable block-size coding algorithm, little success has been reported on optimizing both the block sizes/shapes and their placement.  By removing the restriction imposed by QT, we developed a series of adaptive block compression algorithms with demonstrated capability of achieving better compression by reducing the number of blocks required. These ABC algorithms provide a simple yet powerful framework for exploiting variable block schemes. Other approaches, such as Vector Quantization, wavelet transforms, and JPEG, may be used in a complementary manner with our scheme to generate even better results, or to form a broad suite of algorithms for different types of images.

Point(s) of Contact

David Y. Y. Yun
University of Hawaii
dyun@spectra.eng.hawaii.edu
808-956-7627

Chao-wen Chen
University of Hawaii
cwchen@spectra.eng.hawaii.edu
808-956-7249