
Visibility

Automated observing systems are being installed at airports across the nation. The primary system, the Automated Surface Observing System (ASOS), is found at both towered airports and major non-towered facilities. The Automated Weather Observing System (AWOS) is located at many additional airports. Both systems are collections of electronic sensors feeding a computer that creates observations for users. Minute-by-minute observations are available to pilots through ground-to-air radio or telephone. National data circuits relay Special and Hourly observations to remote users. This training module focuses on how automated systems determine visibility and clarifies the differences between human and automated visibility reports.

Understanding Human Visibility

Prevailing visibility is the value reported in non-automated surface observations. A human observer determines it by identifying objects and landmarks at known distances in a full 360-degree circle around the observation point. The greatest visibility observed over 50% or more of that circle is the prevailing visibility. If a sector of the circle differs significantly from the prevailing visibility, the observer may add a remark reflecting that difference. Pilots must realize, however, that nearly half the area around an airport may have lower conditions than the reported prevailing visibility.
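The 50% rule above can be sketched in a few lines of code. This is an illustrative simplification, not the official observing procedure: it assumes the horizon circle is divided into equal sectors, each with a single visibility value.

```python
# Sketch of the prevailing-visibility rule (illustrative, not the
# official procedure). Each list entry is the visibility in statute
# miles over one equal sector of the horizon circle.
def prevailing_visibility(sector_vis):
    """Greatest visibility equaled or exceeded over at least half the circle."""
    n = len(sector_vis)
    # Test candidate values from highest to lowest; the first value that
    # at least half the sectors meet or exceed is prevailing.
    for v in sorted(set(sector_vis), reverse=True):
        if sum(1 for s in sector_vis if s >= v) * 2 >= n:
            return v

# Five of eight sectors see 5 miles or better, so 5 miles is prevailing
# even though three sectors are down to 2 miles.
print(prevailing_visibility([5, 5, 5, 5, 5, 2, 2, 2]))  # -> 5
```

Note how the second half of the example illustrates the caution in the text: nearly half the circle can be worse than the reported value.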

Visual contrast is a key factor in determining how far and clearly you can see. Ground observers have the benefit of seeing dark objects against the lighter background of the sky. A pilot's view is generally toward the ground, providing a darker, more limited contrast background. This lack of contrast makes it much more difficult to identify objects. Even on days with reports of good surface visibility, pilots have missed "seeing" other aircraft or finding airports. Surface visibility may not always match well with flight visibility.

The human observer on or near the ground often has more restricted visibility than pilots. The ground observer may not see the true horizon when determining visibility, but rather looks slightly upward over trees, ridges, and buildings. This upward view can cause the observed horizon to merge with low stratus and force an artificially low visibility. If an observer's view is blocked within 8 degrees of the horizon, a cloud layer at 500 feet would preclude the observer from seeing targets more than 0.75 mile from the observation point. A cloud layer at 3,000 feet would create a 4-mile limit (see Figure 1, Visibility Limit Diagram).
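The numbers above follow from simple trigonometry, assuming the stated 8-degree blocked angle: a ceiling at height h hides any target beyond roughly h divided by the tangent of 8 degrees. This reconstruction of the figure's geometry is the author's reading of the text, not an official formula.

```python
import math

# Assumed geometry behind Figure 1: if the line of sight is blocked
# below 8 degrees above horizontal, a ceiling at height h (feet) hides
# targets farther away than d = h / tan(8 deg), converted to miles.
def max_target_distance_miles(ceiling_ft, blocked_deg=8.0):
    return ceiling_ft / math.tan(math.radians(blocked_deg)) / 5280.0

print(round(max_target_distance_miles(500), 2))   # -> 0.67 (the text rounds to 0.75 mile)
print(round(max_target_distance_miles(3000), 2))  # -> 4.04 (about 4 miles)
```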

At night, observers must be able to adapt their eyes to darkness to determine accurate visibility. At major airports, nearby lights are often too bright to allow eyes to adapt completely to darkness. Light scattering, sun angle, altitude, and individual visual acuity all affect the ability to "see" in the atmosphere. When haze, smoke, light precipitation, fog, or snow is in the atmosphere, pilots in flight may encounter visibility distinctly different from the ground observer's.

Automated Sensor Operation

The visibility sensor does not directly measure how far one can "see"; instead, it measures the clarity of the air. ASOS converts a sensor-derived measure of clarity into a visibility corresponding to what the human eye would see. This concept of converting sensor measurements into a visibility value is called Sensor Equivalent Visibility (SEV).
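One widely used clarity-to-visibility conversion is Koschmieder's relation, which links the atmospheric extinction coefficient to the distance at which a target's contrast falls below a perception threshold. The sketch below uses the 5% threshold adopted for meteorological optical range; the exact conversion ASOS applies is not specified in this module, so treat this as a representative example rather than the ASOS algorithm.

```python
import math

# Koschmieder's relation (a sketch, not the exact ASOS algorithm):
# visibility = -ln(threshold) / sigma, where sigma is the extinction
# coefficient (per mile) and threshold is the contrast the eye can
# still detect (5% is the meteorological-optical-range convention).
def sensor_equivalent_visibility(extinction_per_mile, threshold=0.05):
    return -math.log(threshold) / extinction_per_mile

# Clearer air (smaller extinction) yields a larger equivalent visibility.
print(round(sensor_equivalent_visibility(1.0), 1))  # -> 3.0 miles
print(round(sensor_equivalent_visibility(0.3), 1))  # -> 10.0 miles
```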

Since an automated system measures air clarity, the sensor always reports true horizontal surface visibility. The sensor is not affected by terrain, site location, buildings, trees, bright lights, or cloud layers near the surface. Every automated sensor measures visibility the same way, site to site. Once users learn to interpret the transmitted automated visibilities, they can expect similarly consistent values from all systems throughout the country.

ASOS employs a Belfort model 6220 forward scatter visibility meter to measure the clarity of the air. The system cants the transmitter and receiver at a small angle, preventing direct light from striking the receiver. ASOS projects light from a Xenon flash lamp in a cone-shaped beam. The receiver measures only the light scattered forward.

The more moisture, dust, snow, rain, or other particles in the light beam, the more light is scattered. The sensor measures the return every 30 seconds. The visibility value transmitted is the average of the 1-minute values from the past 10 minutes. The sensor samples only a small segment of the atmosphere, about 0.75 feet. To "broaden" the evaluation, an algorithm (mathematical logic) processes the air that passed through the sensor during the past 10 minutes to provide a representative visibility. This algorithm generally provides a visibility representative of conditions within 2-3 miles of the site.

Naturally, the more uniform the weather, the more accurate the automated visibility. How quickly can ASOS respond to rapidly changing conditions? ASOS employs a special processing algorithm, the "harmonic" mean, to improve system responsiveness in rapidly changing conditions.

Each minute ASOS processes the most recent 10 minutes of sensor data to obtain representative visibility. When the visibility drops suddenly (in one minute) from 7 miles to 1 mile, it takes about 3 minutes for the 10-minute mean values to register 3 miles and transmit a Special observation.

"Specials" alert users to a significant change. A total of 9 minutes will pass before the algorithm lowers the visibility to 1 mile. When the visibility rapidly improves from 1 mile to 7 miles, ASOS generates a Special observation after 6 minutes, when the harmonic mean reaches the 2-mile threshold. After 10 minutes, ASOS will report 7 miles. Why longer to improve? By using the harmonic mean, where lower values have a greater impact than higher values, the visibility is more slowly improved and more quickly lowered. This feature adds a margin of safety and buffers rapid changes when the visibility is fluctuating widely and quickly.

The current reportable ASOS visibility values in statute miles are: <1/4, 1/4, 1/2, 3/4, 1, 1 1/4, 1 1/2, 1 3/4, 2, 2 1/2, 3, 4, 5, 6, 7, 8, 9, 10+.

Siting the visibility sensor is critical. If the sensor is located in an area that favors fog or blowing dust, or is near water, it may report conditions not representative of the entire airport. Airports covering a large area or situated near lakes or rivers may need multiple sensors to provide a representative observation.
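A raw sensor-equivalent value must be mapped onto the reportable values listed above. The helper below rounds down to the next reportable value; that conservative rounding rule is an assumption for illustration, not a statement of the exact ASOS rounding logic.

```python
import bisect

# Reportable ASOS visibility values (statute miles), from the text.
REPORTABLE = [0.25, 0.5, 0.75, 1, 1.25, 1.5, 1.75, 2, 2.5, 3, 4, 5, 6, 7, 8, 9, 10]

def to_reportable(vis_miles):
    """Map a raw visibility onto the reportable scale.
    Rounding *down* is an assumed, conservative convention."""
    if vis_miles < REPORTABLE[0]:
        return "<1/4"
    if vis_miles >= REPORTABLE[-1]:
        return "10+"
    # Largest reportable value not exceeding the measurement.
    return REPORTABLE[bisect.bisect_right(REPORTABLE, vis_miles) - 1]

print(to_reportable(2.7))   # -> 2.5
print(to_reportable(0.1))   # -> <1/4
print(to_reportable(12.0))  # -> 10+
```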

Sensor and Eye Discrepancies

Although automated sensors are more objective and consistent than human observers, they are not perfect. There are times when the perception of the human eye and the sensor's clarity measurement do not match. Human observers face physical limitations, such as viewing angle, available objects, contrast, and individual eye response, in determining a "representative visibility." ASOS can determine visibility only by sampling the air moving through the sensor. Thus, there will be times when ASOS and human observations differ.

One condition that heavily affects the human eye is bright backscattered light, which sharply reduces visibility. These conditions usually occur during the daytime, when clouds, fog, light snow, flurries, or light drizzle reflect sunlight in the atmosphere. It is comparable to your automobile's headlights shining into fog or snow. The brightly reflected light may blind you and limit your visibility. Yet the lights of an approaching vehicle seem to penetrate the fog; you can see the approaching vehicle farther into the fog.

The difference arises because your vehicle's headlights are reflected back toward your eyes (backscattered), while the approaching vehicle's light is scattered toward you (forward scattered). Research has shown that under these conditions the visibility difference between forward-scattered and backscattered light is about 2:1 (see Figure 2, Bright Day Diagram).

If conditions are bright enough for a pilot or a controller to need sunglasses, you can expect the automated systems to report a visibility approximately twice what the human eye perceives. If ASOS reports a 4-mile visibility, you can expect a human observer to report around 2 miles. When denser clouds reduce the light, the visibilities of the human observer and the automated sensor compare quite closely. Visibilities reported by observers on a hazy day may also be lower than those reported by ASOS; again, brightness seems to be part of the problem. Yet pilots often find flight visibility better than the reported ground visibility.

I took off from an airport where the observer reported the visibility at 5 miles in haze. My co-pilot noted that at just 100 feet off the ground the visibility really was 10 to 15 miles. The automated system was reporting 10+. What was the difference? Possibly a slightly different sun angle or a limitation of the surface observer due to surrounding trees. At night, human observers seek distant lights to measure visibility. The human observer is using forward scattered light, the same principle applied by the ASOS sensor. Therefore, visibilities tend to match more closely between observers and ASOS at night.

For many years, pilots have reported flight visibilities different from those of ground observers, and those differences will continue. No single-point visibility may be right for the larger area surrounding an airport, especially in changing weather conditions. When these variations occur, pilots should submit pilot reports (PIREPS) of in-flight weather, including visibilities. PIREPS are especially important when the pilot's flight weather differs significantly from the reported surface weather.

Summary

It is difficult to determine a visibility that is representative for everyone. Many limitations affect even well-trained, experienced observers in determining representative visibility. ASOS was designed to measure the clarity of the air objectively, moving away from the more subjective evaluations of human observers. Users who understand the performance of automated systems can successfully use the information to make proper decisions.

References

Bradley, J.J. and Imbembo, S.M., 1985: "Automated Visibility Measurements for Airports," American Institute of Aeronautics and Astronautics: Preprints, Reno, NV.

Bradley, J.J. and Lewis, Richard, 1993: "Representativeness and Responsiveness of Automated Weather Systems," Fifth International Conference on Aviation Weather Systems, AMS, Vienna, VA, pp. 163-167.

Clark, P., 1995: "Automated Surface Observations, New Challenges - New Tools," Sixth Conference on Aviation Weather Systems: Preprints, Dallas, TX.

Humphreys, W.J., 1964: Physics of the Air, Dover Publications, Inc.

Middleton, W.E., 1935: Visibility in Meteorology, University of Toronto Press.

Minnaert, M., 1964: Light and Color, Dover Publications, Inc.

U.S. Dept. of Commerce, NOAA, 1992: ASOS Users Guide, Government Printing Office.

Questions

  1. How does ASOS measure visibility?
  2. What limitations does a human observer face in determining accurate visibility?
  3. Identify at least two situations in which ASOS and the human observer may perceive visibility differently.
  4. Name at least two human-observer limitations that do not affect sensor-derived visibilities.
  5. Describe the difference between prevailing visibility and ASOS visibility.

July 1995

