
   
Classifying orthoimages

Figure 4 shows a 480m by 340m section of an orthoimage of an area of the Rocky Mountains. Included within the image are regions of pine trees, brush, talus, rock cliffs, and snow. The pine trees are surrounded by an understory consisting of dirt, grass, and shrub. Portions of talus, cliff, and snow are in shadow. Each of these classes of surface cover has a distinct coloration. Given the panchromatic brightness at each pixel and the corresponding surface type, it is straightforward to produce a relatively accurate color version of the image.
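The paper does not spell out the colorization step, so the following is a minimal sketch under stated assumptions: each class gets one hand-picked base color (the BASE_COLORS values below are hypothetical, not from the paper), which is modulated by the deshaded panchromatic brightness at each pixel.

import numpy as np

# Hypothetical per-class base colors (RGB in [0, 1]); the paper does not
# give its color table, so these values are illustrative only.
BASE_COLORS = {
    "pine":  (0.13, 0.30, 0.12),
    "brush": (0.40, 0.45, 0.22),
    "talus": (0.55, 0.52, 0.48),
    "cliff": (0.45, 0.42, 0.40),
    "snow":  (0.95, 0.95, 0.98),
}

def colorize(brightness, class_map, class_names):
    """Produce an RGB image by scaling each pixel's class color by its
    panchromatic brightness.

    brightness  -- 2-D float array of deshaded brightness in [0, 1]
    class_map   -- 2-D int array of class indices (same shape)
    class_names -- sequence mapping indices to keys of BASE_COLORS
    """
    rgb = np.zeros(brightness.shape + (3,))
    for idx, name in enumerate(class_names):
        mask = class_map == idx
        base = np.asarray(BASE_COLORS[name])
        rgb[mask] = brightness[mask, None] * base  # modulate base color
    return rgb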


  
Figure 4: 480m by 340m section of an orthoimage of the Rocky Mountains

Image brightness can yield a rough categorization of these regions: pine is dark, talus is mid-gray, and snow is bright. A quantitative examination of image values, however, quickly demonstrates that thresholding cannot adequately separate the classes of interest, no matter how carefully the thresholds are chosen. Computer vision techniques based on 2-D shape analysis are also unlikely to succeed, given the complexity of the images. Instead, we have successfully used a pattern classification approach similar to that used to classify multi-spectral satellite data.

For each pixel in the deshaded orthoimage, we computed eight features:

1. pixel brightness
2. average neighborhood brightness
3. minimum neighborhood brightness
4. maximum neighborhood brightness
5. elevation
6. slope
7. aspect
8. angle to southern occluder
Features 2-4 allow consideration of brightness within a local context. Features 5-8 are computed by interpolating 30m DEM values. Feature 7 measures the direction a given point on a slope is facing, an important determinant of vegetation cover. Feature 8 measures the angle from a given point to the southern skyline; larger values increase the likelihood that the point was in shadow when the image was acquired.
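As a concrete illustration, here is a minimal sketch of these eight feature computations. The neighborhood window size, the grid orientation (rows increasing southward, columns eastward), and the aspect convention are assumptions; the paper does not specify them.

import numpy as np
from scipy import ndimage

def pixel_features(image, dem, cell=1.0, radius=2):
    """Stack the eight per-pixel features listed above into one array.

    image  -- deshaded orthoimage brightness (2-D float array)
    dem    -- elevation interpolated to image resolution (same shape)
    cell   -- ground spacing of the grid, in meters
    radius -- half-width of the neighborhood window (hypothetical; the
              paper does not state the window size)
    """
    size = 2 * radius + 1
    mean_b = ndimage.uniform_filter(image, size)   # feature 2
    min_b = ndimage.minimum_filter(image, size)    # feature 3
    max_b = ndimage.maximum_filter(image, size)    # feature 4

    # Features 6-7: slope and aspect from DEM finite differences,
    # assuming rows increase southward and columns eastward.
    dzdy, dzdx = np.gradient(dem, cell)
    slope = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
    # Compass direction of the downhill gradient (one common convention).
    aspect = np.degrees(np.arctan2(-dzdx, dzdy)) % 360.0

    # Feature 8: elevation angle from each point to the southern skyline,
    # found by scanning down each column toward the south.
    occ = np.zeros(dem.shape)
    nrows = dem.shape[0]
    for r in range(nrows - 1):
        south = dem[r + 1:]                              # terrain to the south
        dist = cell * np.arange(1, nrows - r)[:, None]   # ground distance
        occ[r] = np.degrees(np.arctan2(south - dem[r], dist)).max(axis=0)

    return np.stack([image, mean_b, min_b, max_b, dem,
                     slope, aspect, occ], axis=-1)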

For each class, several hundred image locations were selected manually to form a training set, a process that required only a few minutes. Statistics on the distributions of feature values in the training set were determined and used to form the discriminant functions for a maximum-likelihood Bayes classifier [20]. This classifier was then used to categorize each pixel location in the full orthoimage. A final decluttering step reclassified very small regions based on the dominant surrounding class.
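The paper does not give the discriminant functions explicitly; the sketch below shows one standard form of a Gaussian maximum-likelihood classifier, plus a simple declutter pass. Equal class priors, the min_size threshold, and the function names (train_gaussian_ml, classify, declutter) are assumptions for illustration.

import numpy as np
from scipy import ndimage

def train_gaussian_ml(samples, labels):
    """Estimate per-class Gaussian parameters from training pixels.

    samples -- (n_samples, n_features) training feature vectors
    labels  -- (n_samples,) integer class labels
    """
    params = {}
    for c in np.unique(labels):
        x = samples[labels == c]
        cov = np.cov(x, rowvar=False)
        params[c] = (x.mean(axis=0), np.linalg.inv(cov),
                     np.linalg.slogdet(cov)[1])
    return params

def classify(feature_image, params):
    """Label each pixel with the class of maximum Gaussian
    log-likelihood (equal priors assumed)."""
    h, w, d = feature_image.shape
    x = feature_image.reshape(-1, d)
    best = np.full(x.shape[0], -np.inf)
    out = np.zeros(x.shape[0], dtype=int)
    for c, (mu, cov_inv, logdet) in params.items():
        diff = x - mu
        # Quadratic discriminant: log N(x|mu,cov) up to a shared constant.
        score = -0.5 * (np.einsum('ij,jk,ik->i', diff, cov_inv, diff) + logdet)
        win = score > best
        out[win], best[win] = c, score[win]
    return out.reshape(h, w)

def declutter(class_map, min_size=20):
    """Reassign connected regions smaller than min_size pixels (a
    hypothetical threshold) to the dominant surrounding class."""
    out = class_map.copy()
    for c in np.unique(class_map):
        labeled, n = ndimage.label(class_map == c)
        for i in range(1, n + 1):
            region = labeled == i
            if region.sum() >= min_size:
                continue
            ring = ndimage.binary_dilation(region) & ~region
            if ring.any():
                out[region] = np.bincount(class_map[ring]).argmax()
    return out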

Classification results are shown in Figure 5. While ground-truth validation has not been performed, spot checks of the results correspond closely to what would be expected from a careful examination of the orthoimage. It is important to note that the classification was accomplished with no hand tuning of parameters or other manual adjustments, other than the selection of training samples.


  
Figure 5: Classification results, with color key

