

U.S. Department of the Interior
U.S. Geological Survey

Reflectance Calibrated Digital Multispectral Video: A Test-Bed for High Spectral and Spatial Resolution Remote Sensing

Publication in Photogrammetric Engineering & Remote Sensing, v. 63, no. 3, p. 224-229

John E. Anderson, Gregory B. Desmond, George P. Lemeshewsky, and Donald R. Morgan


The field of remote sensing will change dramatically in the next few years with the launch of a number of new satellite systems that will provide large quantities of new types of data with higher spatial and spectral resolution. In addition to these space-borne instruments, many existing and planned airborne systems also will provide data of increased spectral and spatial fidelity. Imagery data sets offering spectral channels less than 25 nm wide and ground sample distances of 1 to 5 m will contribute new and complex ways to observe the Earth and its features. High resolution spectral sensing will allow analysts to identify an object, and, based on its radiometric signature, characterize its biological, chemical, and/or physical condition.

For at least a decade, spectral sensing has been maturing as an advanced remote sensing technique. The availability of portable field and laboratory radiometric measuring devices allows calibrations to be performed on data sets, greatly increasing the accuracy of nearly any given image analysis. Higher spatial resolution imagery also will require new strategies to analyze features. Areas that were generalized by coarse resolution sensors soon will be distinguished by the configuration of small details within their structure. For example, vegetation communities that have clear, well-perceived boundaries at 30 m may, at 5 m, be presented as a mosaic of individual canopies, each with a distinct shape and form. New and modified image processing techniques will be needed to augment the traditional approaches to exploiting remotely sensed data of coarser resolution. We must understand the advantages and limitations of high resolution remote sensing before many of these sensors become fully operational. The sheer quantity and size of these high resolution data sets, even with the great strides being made in data handling technologies, pose a challenge for large database applications.

One such applications study is being conducted in support of the U.S. Geological Survey's (USGS) South Florida Ecosystem Program. This program is part of an inter-governmental effort to re-establish and maintain the ecosystem of south Florida (Figure 1). The role of the USGS is to provide sound hydrologic, geologic, and geospatial data and information for ecosystem restoration decision making. This ecosystem has been altered greatly during the last 100 years by a complex water-management system; this system includes levees, canals, and water-control structures that regulate flooding and provide a steady supply of fresh water to urban and agricultural areas. These drainage projects have diverted much of the water that originally flowed slowly southward from Lake Okeechobee through the Everglades. Restoration and management of the Everglades requires understanding and manipulating the amount and timing of water flowing throughout the ecosystem (Lee, 1996).

Digital Multispectral Video
Figure 1. This map shows the parks, preserves, sanctuaries, Native American lands, the Everglades Agricultural Area, and the South Florida Ecosystem Program study area boundary.

Digital multispectral video (DMSV) technology offers a comparatively cost-effective and flexible way to acquire data for numerous applications. It also provides an innovative means to test exploitation strategies using multisensor digital data sets that cover a broad areal extent and that possess a range of spatial and spectral resolutions. Multiple digital video cameras with two-dimensional (2-D) charge-coupled device (CCD) arrays are being integrated for use as part of a single multisensor optical head. The development of these systems over the past 20 years was well documented by researchers such as Thompson (1979), Mausel et al. (1992), and Escobar et al. (1995). Today, several configurations are offered commercially; these feature four cameras with changeable optical filters, and enable the acquisition of specific spectral bands of imagery within the 350 to 950 nm range. The spectral range is governed by the quality and sensitivity of the silicon-based CCD detectors; however, bandwidths as narrow as 10 nm may be achieved if the hardware design considers the interrelated effects of the system optics, for example, filters, lenses, aperture controls, and shutter speeds.

The PE&RS March cover image of the ValuJet Flight 592 crash site in the Florida Everglades was captured using a DMSV system developed by SpecTerra Ltd. and the U.S. Army Corps of Engineers Topographic Engineering Center (CETEC). The CETEC DMSV integrates four Cohu series 4800 cameras into a single optical head that provides an image pixel array of 740 columns by 578 rows. Each camera has a 24-mm focal length and a changeable narrow bandpass interference filter as part of the fore-optics. Combinations of spectral filters provide the system's narrow band, multispectral capability. The cameras are mounted in a rigid frame allowing vertical and horizontal adjustment. In addition, each camera's video output connects to a personal computer via a high performance, multi-channel frame grabber that allows a sequence of frame imagery to be captured along the direction of flight directly to hard disk. Several mass storage devices are integrated for archiving data. The frame grabber also permits real-time viewing of the imagery on a red-green-blue monitor as well as review of post-processed scenes. This allows the entire mission to be evaluated in near-real time in case data need to be reacquired.

Correction and Post Processing of DMSV Data

DMSV imagery is subject to many of the same radiometric and geometric problems associated with more complex spaceborne and airborne sensors. However, the increased use of video as a remote sensing tool has fostered the development and application of a variety of statistical operations for correcting many of these problems. Using field spectral radiometers and/or on-board radiometers to record target spectral reflectance and absorption allows radiometric calibration models to be derived for any given filter bandpass. Neale et al. (1995) describe such methods using a video imaging system operated by Utah State University. Algorithm corrections for brightness problems associated with the bidirectional reflectance distribution function (BRDF) occurring across a sequence of frames were presented in the open literature by Royer et al. (1985), Qi et al. (1995), Richards (1993), and Jensen et al. (1995), to name a few. Available radiometric normalization methods range from complex deterministic models requiring atmospheric parameters as input, to more empirical models based on simple spatial statistical operations.
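
The following is a minimal sketch of the kind of empirical calibration described above: ground targets measured with a field radiometer are regressed against the image digital numbers recorded over those targets, and the resulting per-band linear model converts a frame to apparent reflectance. It assumes a simple first-order model; the target values and frame sizes below are invented for illustration and are not values from the study.

```python
import numpy as np

def fit_band_calibration(target_dns, target_reflectance):
    """Fit reflectance = gain * DN + offset for one filter bandpass."""
    gain, offset = np.polyfit(target_dns, target_reflectance, deg=1)
    return gain, offset

def apply_calibration(band_image, gain, offset):
    """Convert a DN image for one band to apparent surface reflectance."""
    return np.clip(gain * band_image.astype(float) + offset, 0.0, 1.0)

# Invented calibration targets (e.g., dark water, soil, bright panel).
dns = np.array([22.0, 95.0, 210.0])
refl = np.array([0.03, 0.18, 0.55])
gain, offset = fit_band_calibration(dns, refl)

frame = np.random.randint(0, 255, size=(578, 740))   # one DMSV band (rows x cols)
reflectance = apply_calibration(frame, gain, offset)
```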

Whereas the application of radiometric correction methods can improve mosaics of imagery flight-lines and help spectral classification, many problems associated with BRDF can be minimized effectively through detailed mission planning. Manipulation of the relationship between viewing geometry and solar elevation and azimuth parameters at the time of the mission can reduce the effects of brightness differences from frame to frame and across flight-lines. Also, camera apertures and CCD integration times (comparable to shutter speed) should be calibrated before the flight. Lengthy missions may require in-flight adjustments to these parameters to avoid detector saturation as the sun angle increases. Conversely, apertures and integration times may require adjustment for lower sun angles to ensure sufficient illumination of the CCD detector. In-flight adjustments to these parameters on the DMSV are guided by reviewing intensity histograms of the imagery. This allows for optimal CCD image capture depending on the available lighting conditions and optical characteristics of the filters.
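
As a rough illustration of the histogram-guided exposure checks mentioned above, the sketch below inspects the intensity histogram of an 8-bit frame and flags saturation or under-exposure so that aperture or integration time could be adjusted. The thresholds, clipping fractions, and frame data are illustrative assumptions, not operational DMSV settings.

```python
import numpy as np

def exposure_report(frame, low=5, high=250, max_fraction=0.02):
    """Return a simple verdict from the fraction of clipped dark/bright pixels."""
    counts, _ = np.histogram(frame, bins=256, range=(0, 256))
    total = frame.size
    dark = counts[:low].sum() / total       # fraction of near-black pixels
    bright = counts[high:].sum() / total    # fraction of near-saturated pixels
    if bright > max_fraction:
        return "reduce aperture or shorten integration time"
    if dark > max_fraction:
        return "open aperture or lengthen integration time"
    return "exposure acceptable"

frame = np.random.randint(0, 256, size=(578, 740), dtype=np.uint8)
print(exposure_report(frame))
```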

Geometric distortion in the DMSV interlaced images typically results from two sources. First, because the long axis of each camera's CCD is oriented parallel to the line of flight, aircraft motion produces a relative displacement between the respective interlaced lines of the two individual frames. This effect results from the normal delay (usually 1/60 of a second) in recording the scan lines of the two frames that produce the final image, combined with the positional change of the aircraft induced by roll, pitch, and yaw. Second, there is misalignment between frames due to changing viewing geometry as each camera ray converges on a slightly different portion of the target of interest. Solutions to the problems associated with these distortions have been addressed by Pickup et al. (1995) and Mitchell et al. (1995). They describe methods that apply correction algorithms to the affected imagery, based on the spatial autocorrelation function of the imagery data.
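
The sketch below is a simplified stand-in, not the Pickup et al. (1995) or Mitchell et al. (1995) algorithms, for measuring the along-track displacement between the two interlaced fields of a frame: it splits the frame into odd and even scan lines and searches for the column shift that maximizes their cross-correlation. The frame data and search window are assumptions for illustration.

```python
import numpy as np

def field_displacement(frame, max_shift=20):
    """Estimate the column offset between the even and odd fields of a frame."""
    even = frame[0::2, :].astype(float).mean(axis=0)   # column profile, even lines
    odd = frame[1::2, :].astype(float).mean(axis=0)    # column profile, odd lines
    even -= even.mean()
    odd -= odd.mean()
    best_shift, best_corr = 0, -np.inf
    for shift in range(-max_shift, max_shift + 1):
        corr = np.sum(even * np.roll(odd, shift))
        if corr > best_corr:
            best_shift, best_corr = shift, corr
    return best_shift

frame = np.random.randint(0, 256, size=(578, 740)).astype(np.uint8)
print(field_displacement(frame))
```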

Although these methods correct within-frame geometry, imagery is still distorted with respect to ground geometry. Absolute geometric correction requires positional information on the aircraft as well as ground survey points. Portable GPS receivers have made obtaining positional information routine and highly accurate. In addition to field systems, small aerial GPS systems can provide positional data that are encoded in the header of the digital imagery as it is captured. This position usually is referenced to the principal point of each frame with a GPS coordinate. Currently, research is underway to design and implement a large format 2-D, calibrated CCD array. This would introduce a metric interior orientation, similar to film-based camera systems, complete with fiducial references and a unique calibration report.
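
As a hedged sketch of absolute geometric correction from surveyed points, the example below fits a first-order (affine) transform mapping image (column, row) coordinates to ground coordinates by least squares. Operational DMSV rectification would also use the aircraft position and attitude; the control points here are invented placeholders.

```python
import numpy as np

def fit_affine(image_xy, ground_xy):
    """Solve ground = [col, row, 1] @ A for the six affine coefficients."""
    n = image_xy.shape[0]
    design = np.hstack([image_xy, np.ones((n, 1))])            # n x 3
    coeffs, *_ = np.linalg.lstsq(design, ground_xy, rcond=None)
    return coeffs                                              # 3 x 2 matrix

# Hypothetical ground control points (image pixels vs. surveyed easting/northing).
image_pts = np.array([[10, 20], [700, 30], [50, 560], [720, 550]], dtype=float)
ground_pts = np.array([[500100.0, 2845000.0], [500310.0, 2845010.0],
                       [500115.0, 2844830.0], [500320.0, 2844835.0]])

A = fit_affine(image_pts, ground_pts)
mapped = np.hstack([image_pts, np.ones((4, 1))]) @ A   # should approximate ground_pts
```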

Spectral Sensing with DMSV

Figure 2. Ground spectrographic measurement for healthy sawgrass is presented as a trace. Image-derived spectral reflectance for healthy, stressed, and dead sawgrass was plotted to show the vegetation changes from exposure to toxic fluids that leaked from the aircraft.
DMSV allows in-depth examination of spectral classification by matching library spectral signatures with image signatures. Spectral classifiers have been used effectively with hyperspectral data for military and geologic analyses for years. However, only recently has the utility of this type of analytical procedure been demonstrated with vegetation data. Because of the dynamic nature of vegetation, multispectral video permits the fast acquisition of imagery coincident with the collection of a spectral signature catalogue that is critical to effective spectral matching and classification. DMSV imagery and spectral reflectance measurements for Everglades plant communities are being collected as part of the USGS study investigating the resistance vegetation provides to water flow through the marsh system. Ground spectra are used to develop calibration models for the narrow band video imagery so vegetation classification can be performed. DMSV imagery missions are flown concurrently or within the same solar window as the collection of the field measurements, providing spectral reflectance signatures and field truth for the generation of high quality data sets. The DMSV imagery will be used as part of a multi-temporal, multi-resolution data set to accomplish the sensor fusion portion of the project.

The DMSV data will be used to develop vegetation classification strategies that can be transferred to coarser resolution sensors to monitor the entire Everglades system. One approach is to first classify the DMSV imagery at its fine resolution and then use that classification to produce vegetation class labels for individual pixels at coarser resolutions. At reduced resolution (30 m, for example), associating a group of DMSV pixels and an aggregate class label with the corresponding Landsat TM multispectral pixel can provide training data (that is, spectral signature and class) for multilayer feedforward neural network classifiers. While there may be many pixel samples with heterogeneous class labels that describe mixtures of DMSV vegetation classes, a neural network classifier may be robust to these types of class definitions and its output may be an indicator of class mixing (Paola and Schowengerdt, 1995). DMSV data will be valuable for evaluating the trained classifier performance.
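
A minimal sketch of that training-data idea is shown below: fine-resolution DMSV class labels are aggregated (here by simple majority vote) over the footprint of each 30 m pixel, and the coarse-resolution spectral vectors paired with those aggregate labels train a feedforward neural network. All arrays are synthetic, the use of scikit-learn's MLPClassifier is an assumption for illustration, and the block size, band count, and network shape are placeholders rather than project parameters.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def aggregate_labels(fine_labels, block):
    """Majority-vote fine-resolution class labels within each coarse pixel footprint."""
    rows, cols = fine_labels.shape
    coarse = np.zeros((rows // block, cols // block), dtype=int)
    for i in range(coarse.shape[0]):
        for j in range(coarse.shape[1]):
            window = fine_labels[i * block:(i + 1) * block, j * block:(j + 1) * block]
            coarse[i, j] = np.bincount(window.ravel()).argmax()
    return coarse

# Synthetic stand-ins: a DMSV classmap with 4 classes and 6-band TM spectral vectors.
fine_labels = np.random.randint(0, 4, size=(300, 300))
coarse_labels = aggregate_labels(fine_labels, block=30)   # one label per 30 m pixel
tm_pixels = np.random.rand(coarse_labels.size, 6)         # matching TM spectra

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(tm_pixels, coarse_labels.ravel())
print(clf.predict(tm_pixels[:5]))
```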

An example of spectral sensing with multi-band video is presented using the image of the ValuJet crash site. The multispectral composite on the cover was recorded in August 1996 (almost three months after the tragic disaster that killed all 110 passengers aboard) at an altitude of 1 km above ground level providing a 30 cm pixel resolution. Four 25-nm wide bandpass interference filters centered at 450, 550, 650, and 770 nm (visible and near infrared) were selected as the input spectral wavebands. The imagery was calibrated using the ground reflectance spectra and imagery digital numbers as input to regression correction models generated for each camera and filter band. As indicated in the scene, vegetation stress due to jet fuel and caustic hydraulic fluid is manifested as reflectance and absorption changes in the sawgrass (C. jamaicense) dominated plant community. Figure 2 provides an example of a normal, healthy ground spectral signature for sawgrass plotted with the signatures derived from the image. Obvious changes in the plant leaf reflectance and absorption characteristics are indicated in all imagery wavebands for stressed and dead sawgrass. Spectral classification, based on matching library signatures with the imagery pixel signatures, was performed to derive the classmap presented in Figure 3. The classification shows three zones of vegetation adversely affected by the aircraft's impact and subsequent release of toxic fuel and fluids. The classification also shows the effect on vegetation of air boats used in the recovery of victims and wreckage. Air boat trails allowed the transport of toxic liquids from the crash zone as evidenced by the stressed vegetation along their course.
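
To illustrate the signature-matching step, the sketch below assigns each pixel the library class with the smallest spectral angle. The spectral angle is only one possible matcher and may differ from the classifier actually used; the four-band library values are invented placeholders for healthy, stressed, and dead sawgrass, not measured spectra from the study.

```python
import numpy as np

def spectral_angle_classify(image, library):
    """Assign each pixel the library class with the smallest spectral angle."""
    rows, cols, bands = image.shape
    pixels = image.reshape(-1, bands).astype(float)
    pixels /= np.linalg.norm(pixels, axis=1, keepdims=True) + 1e-12
    lib = library / (np.linalg.norm(library, axis=1, keepdims=True) + 1e-12)
    angles = np.arccos(np.clip(pixels @ lib.T, -1.0, 1.0))   # pixels x classes
    return angles.argmin(axis=1).reshape(rows, cols)

# Invented 4-band (450, 550, 650, 770 nm) reflectance library.
library = np.array([
    [0.04, 0.09, 0.05, 0.45],   # healthy sawgrass: red absorption, strong NIR
    [0.06, 0.10, 0.09, 0.30],   # stressed sawgrass
    [0.10, 0.14, 0.15, 0.20],   # dead sawgrass
])
image = np.random.rand(578, 740, 4)          # calibrated 4-band reflectance frame
classmap = spectral_angle_classify(image, library)
```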

Conclusion

Figure 3. Classification map showing affected vegetation zones based on training samples developed using a spectral library and DMSV imagery signatures.
The maturity of digital multispectral CCD camera technology for remote sensing has accelerated alongside advances in personal computing. The development of this technology has allowed the remote sensing community to evaluate present methods as well as develop new strategies that can be applied to future high resolution sensors. Using multi-band video imagery as a test-bed for how the remote sensing community can exploit these new data is proving to be extremely valuable. The capability of these systems allows us to investigate the spatial and spectral attributes of high resolution data. As the demand for and application of high spatial and spectral resolution data grow, DMSV will provide a technology to complement the future's more sophisticated remote sensing systems.


References

Escobar, D.E., J.H. Everitt, J.R. Noriega, I. Cavazos, and M.R. Davis. 1995. "A multispectral digital video computer cyclone for Dee as a research tool." ASPRS 15th Biennial Workshop on Videography and Color Photography in Resource Assessment. Paul Mausel, ed. ASPRS: Bethesda, MD.

Jensen, J.R., K. Rutchey, M.S. Koch, and S. Narumalani. 1995. "Inland wetland change detection in the Everglades Water Conservation Area 2A using a time series of normalized remotely sensed data." PE&RS 61 (1995): 199-299.

Lee, J.K., 1996. "Vegetation affects water movement in the Florida Everglades." USGS Fact Sheet FS-147-96.

Mausel, P.W., J.H. Everitt, D.E. Escobar, and D.J. King. 1992. "Airborne videography: current status and future perspectives." PE&RS 58 (1992): 1189-1195.

Mitchell, T.A., J. Qi, T. Clarke, M.S. Moran, C.M.U. Neale, and R.A. Schowengerdt. 1995. "Geometric rectification of multi-temporal multiband videographic imagery." ASPRS 15th Biennial Workshop on Videography and Color Photography in Resource Assessment. Paul Mausel, ed. ASPRS: Bethesda, MD.

Neale, C.M.U., J. Qi, M.S. Moran, P. Pinter, T. Clarke, T. Mitchell, S. Sundararaman, and R. Ahmed. 1995. "Methods of radiometric calibration and reflectance determinations from airborne multispectral digital video imagery." ASPRS 15th Biennial Workshop on Videography and Color Photography in Resource Assessment. Paul Mausel, ed. ASPRS: Bethesda, MD.

Paola, J.D., and R.A. Schowengerdt. 1995. "A detailed comparison of back-propagation neural network and maximum-likelihood classifiers for urban land use classification." IEEE Trans. Geosci. Remote Sensing 33 (July 1995): 981-996.

Pickup, G., G.N. Bastin, V.H. Chewings, and D.M. Jacobs. 1995. "Correction and classification procedures for assessing rangeland vegetation cover with airborne video data," ASPRS 15th Biennial Workshop on Videography and Color Photography in Resource Assessment. Paul Mausel, ed. ASPRS: Bethesda, MD.

Qi, J., T. Mitchell, M.S. Moran, P. Pinter, T. Clarke, C.M.U. Neale, S. Sundararaman, and R. Ahmed. 1995. "Normalization of bidirectional effect in videographic imagery." ASPRS 15th Biennial Workshop on Videography and Color Photography in Resource Assessment. Paul Mausel, ed. ASPRS: Bethesda, MD.

Richards, J.A. 1993. Remote Sensing Digital Image Analysis: An Introduction. 2nd ed.  New York: Springer-Verlag.

Royer, A., P. Vincent, and F. Bonn. 1985. "Evaluation and correction of viewing angle effects on satellite measurements of bi-directional reflectance." PE&RS 51 (1985): 1899-1914.

Thompson, L.L. 1979. "Remote sensing using solid state array technology." PE&RS 45 (1979): 47-55.
 



For more information contact:

Greg Desmond, U.S. Geological Survey, MS 521 National Center, Reston, VA 20192; email: gdesmond@usgs.gov.

(Greg Desmond and George Lemeshewsky are affiliated with USGS and are working with the Bureau's South Florida Ecosystem Program. John Anderson, PhD, and Don Morgan are affiliated with the U.S. Army Topographic Engineering Center and are providing DMSV support to USGS.)

Related information:

SOFIA Project: Land Characteristics from Remote Sensing


