The previous article discussed the original PANCROMA method for gap filling Landsat ETM+ SLC-Off images. This method transfers data from one or more SLC-On images (called the Adjust images) to fill the missing regions in the SLC-Off image (called the Reference image). The method depends on an accurate histogram match in order to prevent striping due to systematic differences in surface radiance levels between the two images. Although this can be effective, as the examples in that article showed, the quality of the gap fill depends heavily on the quality of the histogram match.
An alternative Landsat gap fill technique that does not depend on histogram matching is the Hayes Interpolation method. This procedure uses a computational approach to solving the gap filling problem. Instead of directly substituting the missing pixels from the Adjust image into the Reference image, it computes the brightness level of each missing pixel in the Reference image using information from a collection of corresponding pixels in the Adjust image.
Like the Transfer method, the Hayes gap fill method requires exactly matched Reference and Adjust images, produced exactly as described in the previous article. Once you have the matched images, the gap filling procedure is similar, with a few important differences. Start by opening the Reference (grayscale) image (the one with the gaps), followed by the Adjust image. When the images are loaded, select 'Gap Fill' | 'Gap Fill Hayes Interpolation Method'. A Gap Fill Data Form will appear. You can then select the algorithm parameters: the 'Search Extents', the 'Gap Threshold' and the 'Comparison Radius'. The default values are 20, 2 and 4, respectively. The Search Extents defines the size of the sliding window within which PANCROMA conducts its pixel-wise computations. PANCROMA searches the target plus-or-minus the Search Extents, so the window is twice the Search Extents value on a side. The Comparison Radius determines how close in brightness a searched pixel must be to the target pixel in order to be considered 'similar'. The default value is 4, meaning that pixels within 4 brightness levels of the target are considered similar. For example, if the target has value 83 and the Comparison Radius is 4, then pixels with values 79 through 87 will be treated as similar for purposes of the computation. The Gap Threshold determines which pixels in the Reference image will be computed: any pixel with a brightness value less than the Gap Threshold is considered a missing pixel and PANCROMA will attempt to fill it. After you select your values, the program computes the gap filled image. Note that no histogram match is computed for the Hayes method, as it does not generally benefit the result.
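PANCROMA's exact computation is not published here, but the parameter descriptions above suggest one plausible implementation. The sketch below is an assumption-laden illustration, not PANCROMA's actual code: for each gap pixel it takes the mean of the valid Reference pixels whose Adjust-image brightness falls within the Comparison Radius of the Adjust pixel at the gap location.

```python
import numpy as np

def hayes_fill(reference, adjust, search_extents=20, gap_threshold=2,
               comparison_radius=4):
    """Illustrative Hayes-style interpolation (not PANCROMA's code).

    For each gap pixel (brightness < gap_threshold), search a window of
    +/- search_extents around the same location in the Adjust image for
    pixels whose brightness is within comparison_radius of the Adjust
    pixel at the gap location, then average the Reference-image values
    at those coordinates.
    """
    filled = reference.astype(np.float64)
    rows, cols = reference.shape
    e = search_extents
    # Only interior pixels are computed; a band of width e around the
    # edges is left untouched (see the fringe discussion below).
    for r in range(e, rows - e):
        for c in range(e, cols - e):
            if reference[r, c] >= gap_threshold:
                continue  # not a gap pixel
            target = int(adjust[r, c])  # brightness at the same spot in Adjust
            win_adj = adjust[r - e:r + e + 1, c - e:c + e + 1].astype(int)
            win_ref = reference[r - e:r + e + 1, c - e:c + e + 1]
            # 'Similar' pixels: within comparison_radius of the target,
            # and not themselves gap pixels in the Reference image.
            mask = (np.abs(win_adj - target) <= comparison_radius) & \
                   (win_ref >= gap_threshold)
            if mask.any():
                filled[r, c] = win_ref[mask].mean()
    return filled.astype(reference.dtype)
```

The double loop makes the cost explicit: every gap (or collar) pixel triggers a full window scan, which is why the parameter choices discussed next matter so much for run time.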
You must be careful when selecting the input parameters, as your choice will affect both the quality of the gap filled image and the processing time (the higher the quality, the greater the time). For example, increasing the Search Extents increases the number of candidate pixels for the comparison computation. In general, the larger the Search Extents, the better the pixel interpolation. However, a large Search Extents value will rapidly bog down processing, because the extents must be searched for every gap pixel in the image, plus every black collar pixel, since there is no way to distinguish a black gap pixel from a black collar pixel. Choosing too small a window will result in failure to find any matching pixels. (The optimum combination of Search Extents and Threshold cannot be computed and must be determined empirically.) Although the algorithm has been streamlined for computational speed, whatever the size of the Search Extents, the Hayes method is slower than the Transfer method.
Another reason to keep the Search Extents as small as possible is that the edge of the window will eventually bump against the edge of the image. Since the target pixel is offset from the edge by the Search Extents, the result is that there will exist a band of uncomputed pixels on each side of the image. If you are processing a Landsat scene, this will not matter much because this band will be in the collar area. If you are computing a subset image, however, this will result in a "fringe" around the edges of the image. This fringe area will be gap filled by transferring pixels from the Adjust image using the Transfer method. This may result in a slight mismatch of color tones at the fringe area only.
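Since the uncomputed band is Search Extents pixels wide on each side, the fraction of a subset lost to the fringe (before the Transfer-method touch-up) can be estimated ahead of time. The helper below is my own back-of-the-envelope sketch, not a PANCROMA function:

```python
def fringe_fraction(width, height, search_extents):
    """Fraction of the image inside the uncomputed border band.

    The Hayes window cannot be centered closer than search_extents
    pixels to any edge, so that band falls back to the Transfer method.
    """
    interior_w = max(width - 2 * search_extents, 0)
    interior_h = max(height - 2 * search_extents, 0)
    return 1.0 - (interior_w * interior_h) / (width * height)
```

With the default Search Extents of 20, a 1000 x 1000 subset has roughly 8 percent of its pixels in the fringe, while for a full Landsat scene several thousand pixels on a side the band amounts to only about 1 percent, most of it in the collar.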
Your selection of a Comparison Radius value will also have a pronounced effect on the resulting gap filled image quality. Choosing a Comparison Radius of zero means that only pixels that exactly match the target pixel will be included in the computation. However, this may mean that no matching pixels will be discovered within the search window. In this case, PANCROMA increases the size of the search window and repeats the process up to two more times. Repeated failures to find any matching pixels will slow down the computation. If no match is found, then no gap fill is computed, resulting in a black pixel in the processed image. On the other hand, if the Comparison Radius is set too large, the computation will not be very discriminating. This will result in blurring within the filled gap and loss of detail.
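The retry behavior can be pictured with a short sketch. The tripling growth schedule and both function names here are my assumptions; the article says only that the window is enlarged and the search repeated up to two more times.

```python
import numpy as np

def similar_mean(reference, adjust, r, c, extents, radius, threshold):
    """Mean of valid Reference pixels similar to adjust[r, c], or None."""
    rows, cols = reference.shape
    r0, r1 = max(r - extents, 0), min(r + extents + 1, rows)
    c0, c1 = max(c - extents, 0), min(c + extents + 1, cols)
    win_adj = adjust[r0:r1, c0:c1].astype(int)
    win_ref = reference[r0:r1, c0:c1]
    mask = (np.abs(win_adj - int(adjust[r, c])) <= radius) & \
           (win_ref >= threshold)
    return float(win_ref[mask].mean()) if mask.any() else None

def fill_with_retry(reference, adjust, r, c, extents, radius, threshold):
    # Try the original window, then up to two progressively larger ones
    # (the growth factors are hypothetical).
    for grow in (1, 2, 3):
        value = similar_mean(reference, adjust, r, c, extents * grow,
                             radius, threshold)
        if value is not None:
            return value
    return 0.0  # no match found after retries: the pixel stays black
```

The final `return 0.0` is what produces the black spots mentioned later: when even the enlarged windows contain no similar pixel, the gap pixel is left unfilled.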
It is generally a good idea to create a set of small subset Reference and Adjust images for empirically determining the optimal gap fill settings before attempting to fill entire scenes. If you are able to crop out the image collars entirely, the processing speed will increase significantly. This is often difficult with Landsat scenes because the path direction is not aligned with true north, so cropping out all of the collar area may also remove a lot of useful image.
The set of sample images shown to the right is taken from a Row 91 Path 76 Landsat image of southern New Zealand. This is a difficult terrain with a lot of mountains, snow and some cloud cover. Each pairwise run took 1540 seconds to process using the default settings. After processing, I prepared three RGB color composites. The first is of the unprocessed Reference image, showing the gaps. The second image was gap filled using the Transfer method. The third was gap filled using the Hayes interpolation. I also included a detail section of the Hayes-processed image below the full scene. In general, the Hayes image exhibits much better matching of color tones than the Transfer image. Detail in the filled gaps is fairly good as a result of the relatively small Comparison Radius value that I used. There are a few black spots in the former gap areas, the result of the algorithm failing to find any pixel matches at all. These could have been eliminated by increasing the Search Extents, but the already considerable processing time would have increased further.
The Hayes method generally produces a much better match of pixel brightness levels between the gap fill pixels and the Reference image pixels, and a better image overall. However, it generally requires greatly increased computational time compared to the Transfer method. This method, like all gap-filling schemes, struggles with clouds and snow.