Environmental Management, Volume 55, Issue 6, pp 1446–1456

Can a Rapid Underwater Video Approach Enhance the Benthic Assessment Capability of the National Coastal Condition Assessment in the Great Lakes?

  • Julie E. Lietz
  • John R. Kelly
  • Jill V. Scharold
  • Peder M. Yurista

DOI: 10.1007/s00267-015-0475-3


Abstract

Over 400 sites were sampled in the nearshore of the U.S. Great Lakes during the U.S. National Coastal Condition Assessment (NCCA) field survey in summer 2010. Underwater video images were recorded in addition to routine NCCA benthic assessment measures. This paper has two objectives: (1) to develop a process to evaluate video performance with acceptance criteria, exploring reasons for poor images, and (2) to use acceptable videos in an example application with invasive mussels, evaluating the enhancement potential of video to supplement traditional grab sampling. A standard hierarchical protocol was developed to rank video performance based on quality and clarity. We determined controllable and uncontrollable factors affecting video performance. Moreover, specific thresholds limiting video were identified: >0.5/m for light extinction and >3.5 µg/L for chlorophyll a concentration. To demonstrate the utility and enhancement potential of video sampling, observed dreissenid presence from excellent videos (221 of 362) was compared with NCCA benthic taxonomy, in the context of the statistically based NCCA survey. Including video increased the overall area estimate of the U.S. Great Lakes nearshore with invasive mussels by about 15 % compared to PONAR alone; 44 % (7570 km2) of the surveyed region had mussels. The proportion of the nearshore area having mussels varied from low (3.5 %) in Lake Superior to >50 % in the lower lakes. PONAR and video have unique strengths and weaknesses as sampling tools in the Great Lakes nearshore environment, but when paired were complementary and thus provided a more thorough benthic condition assessment at lake and regional scales.

Keywords

Underwater video · National Coastal Condition Assessment · Benthic condition · Dreissenid mussels · PONAR

Introduction

The U.S. National Coastal Condition Assessment (NCCA) is a statistically based survey for defined target coastal resources and is the principal basis for the associated National Coastal Condition Report (NCCR) (e.g., U.S. EPA 2012). The NCCA, like other National Aquatic Resource Surveys (NARS) conducted by U.S. Environmental Protection Agency’s (U.S. EPA) Office of Water, was developed to enable consistent and comprehensive reporting on the Nation’s water resources (streams, rivers, lakes, wetlands, coastal ecosystems), as directed by the Clean Water Act (CWA) and Clean Water Action Plan. The overarching goal of the NCCA is to establish the “condition” of coastal waters, nationally and regionally, via a simple set of core measures and metrics and serve as a basis for evaluating long-term change. The NCCA survey has been conducted in an evolving and expanding manner for about two decades along the U.S. marine coastline, and is now established by U.S. EPA and partners in a 5-year repeating cycle. In 2010, the opportunity arose to include the Great Lakes coastal zone for the first time as part of the NCCA statistical design, field survey, and assessment.

The timing was opportune because recognition of a serious information gap for the Great Lakes coastal zone has been growing (e.g., Environment Canada and U.S. EPA 2009; Mackey and Goforth 2005; SOLEC 1996). The vulnerability of the Great Lakes coastal zone to a range of stressors (e.g., Bertram et al. 2003; Niemi and Kelly 2007; Shear et al. 2003) has been the driving concern. The ability to make sound coastal assessments in the Great Lakes has been hindered by a lack of regular, spatially comprehensive, statistically robust, and consistent monitoring. Overall, the situation has promoted the call for development of a bi-national, integrated nearshore framework under the recently revised Great Lakes Water Quality Agreement (GLWQA) between the U.S. and Canada (United States and Canada 2012). Thus, the prospect of an NCCA-style probability survey of the entire shoreline of the Great Lakes was welcomed as a viable region-wide approach that might help fill the information gap needed for resource assessment, protection, and restoration.

The NCCA survey in 2010 collected a medley of water quality, fish tissue contaminant, and benthic quality (taxonomy, bioassays, sediment chemistry) indicators and also used underwater video sampling as an intended test case to provide a strong visual archive and supplement to traditional benthic sampling. Given our experience and understanding of the bottom substrate variability in the nearshore and embayment environments (from soft-bottom depositional areas to hard sands to rock), we thought it a good opportunity to evaluate underwater video as a seemingly simple supplemental measure. Underwater video also had the potential to enhance the assessment by providing a landscape-level view of sample sites, and by contributing observational data such as substrate type, vegetation, fish, and particularly invasive mussels (Dreissena polymorpha and Dreissena rostriformis).

We specifically focus in this paper on a comparison of video and PONAR data of dreissenid presence/absence as an example application of how underwater video can enhance the Great Lakes benthic assessment. In the past, benthos sampling of soft sediments has been conducted in the (marine-coastal) NCCA effort using a Young-modified Van Veen grab sampler (U.S. EPA 2001a), PONAR, or Ekman dredge (Van Rein et al. 2009). Although PONAR sampling is a common technique for benthic assemblages in freshwater (Nalepa et al. 1988; Powers and Robertson 1967), processing samples takes time and expertise. Grab samplers are also limited to sampling soft sediments such as sand, silt, clay, or mud (Van Rein et al. 2009). In addition, a small grab sample does not always give insight into the broader benthic habitat (i.e., density and types of vegetation, density of large boulders, etc.). Underwater video’s macroscopic scale provides a different perspective on the otherwise cryptic benthic environment, in addition to creating a permanent record that can be re-examined for change over time (Van Rein et al. 2009). Video sampling by drop-down camera can yield data regarding invertebrate and fish assemblages, in addition to providing habitat information (Van Rein et al. 2009). The cost of video sampling is relatively low (Custer and Custer 1997; Ozersky et al. 2011; Schaner et al. 2009; Van Rein et al. 2009) and video images are fairly quick to process (Coakley et al. 1997; Ozersky et al. 2009, 2011; Van Rein et al. 2009). These attributes made video sampling attractive for a recurring, expansive monitoring effort such as the NCCA.

We focused on dreissenid mussels to test the enhancement capabilities of video sampling because dreissenids (1) can be easily detected by video, (2) are stationary, and (3) serve as an important factor influencing the ecological condition of an area. Dreissenids are known to cause movement of particulate matter from the water column into the sediments through filter feeding and waste excretion (Klerks et al. 1996). This has caused a cascade of effects such as increased phosphorus (Hecky et al. 2004), clearer water (Holland 1993; Howell et al. 1996; Klerks et al. 1996), phytoplankton decline (Caraco et al. 1997; Nicholls and Hopkins 1993), increased Cladophora growth (Auer et al. 2010), among others. Therefore, knowing where mussels are present, and in what relative abundance, is a useful indicator of coastal condition for the NCCA.

This paper examines the use of underwater video images captured in the field effort across >400 Great Lakes survey sites in 2010. In doing so, we developed a standardized process to view and analyze video, ascertained what general information video could provide, and examined what factors seemed to most affect video performance. In addition, we determined in what ways the observed video data could be applied to the overall findings of the NCCA survey. Specifically, we examined how video detection of dreissenid mussels compared to PONAR results, in order to provide an example of ways video data can add both quantitative and qualitative information to a data set.

Materials and Methods

Survey Design

The target population was defined by a GIS frame to represent the coastal water resource of the Great Lakes. Briefly, the nearshore zone was defined as a narrow zone from the shoreline (buffered to include river mouths and smaller indentations with waters continuous with the lake) to the 30-m-depth contour, unless it first reached a maximum of 5 km from shore. The entire nearshore survey frame, the GIS-explicit approximation of the target resource, covered 18,881 km2 (12 %) of the U.S. area of the five lakes. The 2010 NCCA was conducted for the whole U.S. Great Lakes coastal zone, with an approximate shoreline length of 6100 km, measured as a simplified linear waterline. The sample population for the survey was about 45 nearshore sites per lake, with 10 % of the sites revisited within the summer season. Sample collection occurred from May to September 2010. The embayment frame was a subset of the nearshore defined by geometric size and shape criteria to represent a target population of area close to shore and vulnerable to landscape-derived stressors, with more restricted water circulation and, as a population, shallower water than the more physically open nearshore. Embayment polygons were semi-enclosed (geomorphically) areas in small to medium harbors and bays from ~1 to <100 km2. Larger bays like Green Bay and Saginaw Bay had sites as part of the nearshore frame. The embayment area was 929 km2, or 4.9 % of the nearshore zone, and sampling included 151 sites across the Great Lakes region. NCCA site locations for both nearshore and embayment populations were selected as a probability survey, using generalized random tessellation stratification (Stevens and Olsen 2004). Additionally, as an extension to the NCCA, sites were also video sampled by partners in National Park Service (NPS) waters. In evaluations of video performance, 362 videos were used (nearshore sites, embayment sites, NPS sites, and revisited sites).
When comparing video findings directly to estimates based on PONAR data, we only used 2010 NCCA design sites: 309 videos.

Sample Collection and Processing

Underwater Video-Collection in the Field

All video sampling was done with a SeaViewer Sea-Drop 950 color camera (500 lines of horizontal resolution) with a Unified Sea-Light™ LED light and a Sea-DVR mini digital video recorder. At each site, the camera was lowered to the bottom on a cable. Once a clear image of the station bottom was observed on the Sea-DVR screen, researchers held the camera as still as possible and began recording. Recording duration was at least 1 min. In certain low-light conditions, the camera would automatically switch to infrared mode; however, researchers were advised to turn on the camera light to see if image quality improved. If the image was unviewable or poor in quality, the procedure was repeated (U.S. EPA 2009a). The camera was not secured to a frame, and the area surveyed at each site was variable. For this reason, data retrieved from video images were strictly reported on a presence/absence basis, with the understanding that (similar to PONAR) absence findings could be false. We recognize the benefit of, and need for, a more standardized sampling method and known sample area; developing the means for improved performance is a goal for future NCCA surveys and part of the reason for the analysis reported in this paper.

Video Performance-Development of Protocols for Viewing, Rating, and Using Images

Videos were variable in quality and clarity, and it was evident we needed a way to classify video performance. We assessed quality by determining how well the video image showed features of interest such as bottom type, vegetation, fish (e.g., Gobies), and invasive dreissenid mussels. In addition to evaluating video quality, images were also rated on their clarity. Focusing on clarity was important for determining the relationship between video rating and water clarity parameters. This comparison could not be done with the video quality rating because quality did not always reflect clarity. For example, a video image may have been perfectly clear, but operator error kept the camera too high in the water column for confident claims about dreissenid mussels; this video, although clear, would be rated marginal for quality. Videos were therefore assigned both quality and clarity ratings of excellent, marginal, or poor (Fig. 1). Finally, we wanted to apply our observed dreissenid video data to the NCCA benthic data set, to see if it could supplement or enhance the findings. Here again, the video quality rating could not serve as a filter for which videos to use: some videos received a marginal rating, even when dreissenid mussel presence was certain, because the viewer was not confident in the presence or absence of Gobies. To correct for this, all videos were assigned a third rating based on a specific observation (in our study, dreissenid mussels) (Table 1). Only excellent videos were used in analysis.
Fig. 1

Flow chart designed to assign underwater videos a quality and clarity rating. Three video ratings are possible: excellent (information should be used for analysis), marginal (information should be used with caution for analysis), and poor (information should not be used for analysis)

Table 1

This table is designed to assign underwater videos a rating based on a specific observation of interest

Rating: Description

Excellent: Confident claims can be made about “X”a

Marginal: Confident claims cannot be made about “X”, because the image quality is marginal/poor or the image view is so narrow that the entire site may not be represented

Poor: The image is not usable for detecting “X”

Three video ratings are possible: excellent (information should be used for analysis), marginal (information should be used with caution for analysis), and poor (information should not be used for analysis)

a“X” denotes the specific observation the viewer wants to use video image data for, such as dreissenid mussels, Gobies, species of vegetation, etc.
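The observation-specific rating in Table 1 can be summarized as a small decision rule. The sketch below is illustrative only (the function and parameter names are our own, not part of the NCCA protocol):

```python
def observation_rating(image_usable: bool, confident_in_x: bool) -> str:
    """Assign the observation-specific rating from Table 1.

    image_usable: False when the image cannot be used to detect "X" at all.
    confident_in_x: True when confident claims can be made about "X"
    (e.g., dreissenid mussel presence/absence).
    """
    if not image_usable:
        return "poor"
    return "excellent" if confident_in_x else "marginal"
```

Under this rule a clear image with uncertain mussel detection is "marginal", matching the filtering used before analysis (only "excellent" videos retained).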

Benthos

The NCCA measures a suite of indicators (U.S. EPA 1992), and we selected several to use in this investigation. PONAR samples for benthic community taxonomy were used to make qualitative and quantitative comparisons to some video results. Sediment and benthic macroinvertebrates were collected using a standard PONAR grab (box size 22.9 cm × 22.9 cm with depth of 9 cm) with removable top screens (U.S. EPA 2009a). Processing samples for benthic macroinvertebrates was completed in accordance with NCCA manual guidelines (U.S. EPA 2009a, b).

Water Clarity

Water clarity measures were used to examine aspects that might affect the light environment and performance of the video. A vertical light profile for each site was obtained using a photosynthetically active radiation (PAR) meter attached to a data logger such as a LI-COR® (U.S. EPA 2009a). Vertical profiles were used to calculate a site’s extinction coefficient (Kd, expressed as m−1) assuming Beer’s law with exponential loss over depth, following standard modeling techniques (U.S. EPA 2009a). Low Kd values mean high light penetration and therefore clearer water. Water was collected for chlorophyll a analysis (µg/L) using a Niskin, Van Dorn, or Kemmerer bottle (U.S. EPA 2009a). Samples were processed and analyzed according to NCCA manual guidelines (U.S. EPA 2009a, b).
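The Kd calculation described above follows from Beer’s law, I(z) = I0·exp(−Kd·z), so ln(PAR) is linear in depth with slope −Kd. A minimal sketch of the fit, using a synthetic profile rather than survey data:

```python
import numpy as np

def extinction_coefficient(depths_m, par):
    """Estimate Kd (per m) from a vertical PAR profile assuming Beer's law.
    A least-squares line fit to (depth, ln PAR) has slope -Kd."""
    slope, _intercept = np.polyfit(np.asarray(depths_m),
                                   np.log(np.asarray(par)), 1)
    return -slope

# Synthetic clear-water profile generated with Kd = 0.4/m (illustrative only)
depths = np.array([0.5, 1.0, 2.0, 4.0, 6.0, 8.0])
par = 1500.0 * np.exp(-0.4 * depths)
kd = extinction_coefficient(depths, par)  # recovers ~0.4/m
```

With real profiles, surface readings affected by wave focusing are often excluded before the fit; the survey followed the NCCA manual's standard procedure.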

Statistics

Differences in Kd and chlorophyll a concentration at sites with excellent, marginal, or poor video rankings were compared using the Kruskal–Wallis test (non-parametric ANOVA). Post hoc multiple comparison tests were conducted using Dunn’s test. To determine whether capturing an excellent video was related to water clarity across all sample sites, a logistic regression was used: video clarity (excellent = 1 and marginal/poor = 0) was regressed against Kd and chlorophyll a concentration. To determine whether detection of dreissenid mussels in video images was related to dreissenid abundance across all sample sites, a second logistic regression was used: detection of dreissenids in video images (yes = 1 and no = 0) was regressed against number of dreissenids per m2 for excellent videos only. SYSTAT v13 was used for statistical analysis (Systat Software, Inc., San Jose, California, USA, www.sigmaplot.com).
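The analyses were run in SYSTAT, but the Kruskal–Wallis comparison can be illustrated with SciPy; the grouped Kd values below are invented for the example, not survey data (Dunn's post hoc test is not in SciPy and would require a package such as scikit-posthocs):

```python
from scipy import stats

# Invented Kd values (per m) grouped by video clarity rating
kd_excellent = [0.15, 0.20, 0.25, 0.28, 0.30, 0.35]
kd_marginal = [0.45, 0.50, 0.55, 0.60, 0.65, 0.70]
kd_poor = [0.90, 0.95, 1.05, 1.10, 1.20, 1.30]

# Non-parametric one-way test: do the three ratings differ in Kd rank?
h_stat, p_value = stats.kruskal(kd_excellent, kd_marginal, kd_poor)
```

Because the test compares mean ranks rather than means, a significant H statistic says the rating groups draw from different Kd distributions, which is why the histograms (rather than the group means) are emphasized in the figures below.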

For overall assessment purposes and comparisons of results for either the whole nearshore or an embayment sub-population, analyses were area-weighted following survey statistical design specifications using R-survey software of Kincaid and Olsen (2012). Estimates of the resource population’s mean abundances (for numerical results) or percentage of area with presence (using categorical results) used pre-defined design site weightings to calculate area-weighted values (mean and variance) based on the actual sites that were sampled for each estimate (Stevens and Olsen 2003, 2004). The number of sites included in various comparisons and analyses varied; we specify the relevant data set when results are compared. Survey population results were compared for differences using a standard z test (Snedecor and Cochran 1967).
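For categorical results such as mussel presence, the design-weighted estimate reduces to a weighted proportion: each site's design weight is the area (km2) it represents, and the estimate is the weighted fraction of sites with presence. A minimal sketch with hypothetical weights (the actual analysis used the survey design weights and variance estimators in the R software cited above):

```python
# Hypothetical site list: design weight = area represented, in km2
sites = [
    {"weight_km2": 50.0, "mussels": True},
    {"weight_km2": 75.0, "mussels": False},
    {"weight_km2": 40.0, "mussels": True},
    {"weight_km2": 85.0, "mussels": False},
]

# Weighted proportion of the resource area with mussels present
area_total = sum(s["weight_km2"] for s in sites)
area_with = sum(s["weight_km2"] for s in sites if s["mussels"])
pct_with_mussels = 100 * area_with / area_total  # 36 % for these numbers
```

The proper survey estimators additionally produce design-based variances for the confidence intervals reported in the Results.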

Results and Discussion

Overall

A total of 362 videos were collected, representing 78 % of all sites visited in the Great Lakes for the NCCA survey in 2010: 128 in Lake Michigan, 87 in Lake Superior, 48 in Lake Huron, 43 in Lake Erie, and 56 in Lake Ontario. Of all videos, 55 % received a video quality rating of excellent, 35 % marginal, and 10 % poor.

Aspects Affecting Video Performance

Not all videos had the quality necessary for confident use. The reasons for rating videos marginal or poor were classified as controllable or uncontrollable. Of the marginal and poor videos, 32 % had controllable reasons for their rating: (1) the view was not close enough to the bottom and/or rocks (86.8 %); (2) the camera light needed adjustment (7.5 %); (3) the camera angle was poor (3.8 %); and (4) the camera moved too fast (1.9 %). Adjusting deployment procedures to correct for these controllable reasons could yield more usable images. In general, high current flows were not often encountered or mentioned in field notes. Currents could be additional complicating factors in areas within connecting channels (not sampled in 2010) or at the edges of selected river mouths. Such situations might be considered controllable by technique modifications (drifting with the current, deploying on a weighted bottom sled, etc.), but they were not further considered here.

Conversely, 68 % of the marginal and poor videos were classified as having uncontrollable reasons for their rating: (1) the image was turbid or had a dark green or brown tint (36.9 %); (2) heavy vegetation blocked the view or increased particulates when agitated by the camera (25.2 %); (3) Goby-like fish were not oriented so that a dorsal spot could be confidently seen (18.9 %); and (4) the video had other infrequent issues such as mechanical problems, waves making the image too bouncy, or sunlight rendering the image too bright (18.9 %). The most frequent problem, turbidity and dark tints, is a common difficulty with video sampling (Van Rein et al. 2009). Researchers have found it difficult to detect small organisms like the Round Goby in video images when water quality parameters were poor (Schaner et al. 2009; Wilson et al. 2006). Advances in technology have been made to improve imaging in turbid conditions (Spink and Read 2011). Whether such technology, or something similar, is feasible and cost effective for the NCCA needs to be determined.

To further examine the relationship between video performance and water clarity, video clarity ratings were compared with Kd values to identify what specific conditions result in a marginal or poor image. The Kruskal–Wallis test showed that mean ranks of Kd values per site differed significantly among excellent, marginal, and poor video images (H = 63.999, 2 df, P < 0.05). Kd values were significantly higher (less light penetration) at sites where marginal (Q = 5.081, 2 df, P < 0.05) and poor (Q = 7.030, 2 df, P < 0.05) images were collected than at sites with excellent images. In addition, Kd values were significantly higher at sites with poor images than at sites with marginal images (Q = 3.324, 2 df, P < 0.05). Examining the histogram and means (Fig. 2) of Kd values across the three ratings indicated a possible threshold for capturing an excellent video of approximately Kd ≤ 0.5/m. Logistic regression further supported that video clarity was inversely related to Kd (χ2 = 59.007, df = 1, P < 0.05). The logistic model predicted that Kd = 0.5/m would give a 58.9 % chance of capturing an excellent video (Fig. 3).
Fig. 2

Histogram (left) of extinction coefficient (Kd) values at sites where excellent, marginal, and poor videos were collected. Extinction coefficient (right), Kd (mean ± SD) of sites where excellent, marginal, and poor videos were collected. Differing letters indicate significant difference among treatments (Kruskal–Wallis, H = 93.999, 2 df, P < 0.05; Dunn’s Test, P < 0.05). Since the Kruskal–Wallis test does not compare means, the histogram is a more accurate representation of the data. The means are presented as a visual aid for determining the marginal/poor video threshold of Kd = 0.5/m

Fig. 3

Logistic regression model predicting the probability of capturing an excellent video based on extinction coefficient (Kd) values (χ2 = 59.007, df = 1, P < 0.05)

Chlorophyll a concentration at sites was examined to further explore video image clarity. The Kruskal–Wallis test showed that mean ranks of chlorophyll a concentration per site differed significantly among excellent, marginal, and poor video images (H = 81.830, 2 df, P < 0.05). Chlorophyll a concentrations were significantly higher at sites where marginal (Q = 6.354, 2 df, P < 0.05) and poor (Q = 7.764, 2 df, P < 0.05) images were collected than at sites with excellent images. In addition, chlorophyll a concentrations were significantly higher at sites with poor images than at sites with marginal images (Q = 2.724, 2 df, P < 0.05). The histogram and means (Fig. 4) of chlorophyll a concentrations across the three ratings indicated a possible threshold for capturing an excellent video of approximately 3.5 µg/L. Results of the logistic regression further supported that video clarity was related to chlorophyll a concentration (χ2 = 99.231, df = 1, P < 0.05). The model showed that a chlorophyll a concentration of 3.5 µg/L would result in a 53.5 % chance of capturing an excellent video (Fig. 5).
Fig. 4

Histogram (left) of chlorophyll a concentration at sites where excellent, marginal, and poor videos were collected. Chlorophyll a concentration (right) (mean ± SD) of sites where excellent, marginal, and poor videos were collected. Differing letters indicate significant difference among treatments (Kruskal–Wallis, H = 81.830, 2 df, P < 0.05; Dunn’s Test, P < 0.05). Since the Kruskal–Wallis test does not compare means, the histogram is a more accurate representation of the data. The means are presented as a visual aid for determining the marginal/poor video threshold of 3.5 µg/L

Fig. 5

Logistic regression model predicting the probability of capturing an excellent video based on chlorophyll a concentrations (χ2 = 99.231, df = 1, P < 0.05)

Examining how Kd and chlorophyll a concentration jointly influenced video clarity showed how the ability to capture an excellent video image was limited by water quality parameters related to clarity (Fig. 6a). Most excellent videos were associated with low Kd values and low chlorophyll a concentrations. Applying both proposed threshold values encompassed the majority of excellent images, as seen by the shaded region (Fig. 6a, b). Four sites with excellent images fell outside the shaded region, exceeding the thresholds for both Kd and chlorophyll a concentration (Fig. 6a). Sites within the threshold bounds highlight the need to acknowledge the effect of environmental limitations on capturing high quality videos (Fig. 6b). To further illustrate such constraints, we estimated the average values for chlorophyll a and Kd for the whole set of NCCA sites as 3.84 µg/L and 0.33/m, respectively; cumulative distribution functions showed that about 10–20 % of the survey area would exceed either threshold. Evaluating the embayment sub-resource as distinct from the nearshore yielded average embayment values significantly higher than for the nearshore alone: 4.96 µg/L and 0.68/m for chlorophyll a and Kd, respectively, and thus above the identified video thresholds. Cumulative distribution functions showed that about 30–50 % of the embayment area (only <5 % of the nearshore size) exceeded either threshold.
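Applying the two clarity thresholds jointly amounts to a simple screen on each site. The sketch below flags sites where excellent video is plausible; the first site's values are invented, while the other two reuse the survey-wide and embayment mean values from the text:

```python
# Thresholds identified in the text
KD_MAX = 0.5    # extinction coefficient, per m
CHL_MAX = 3.5   # chlorophyll a, ug/L

sites = [
    {"kd": 0.20, "chl": 1.80},  # invented clear-water site: within both bounds
    {"kd": 0.33, "chl": 3.84},  # survey-wide means: chlorophyll a exceeds
    {"kd": 0.68, "chl": 4.96},  # embayment means: both thresholds exceeded
]

# A site passes only if it is within BOTH threshold bounds (shaded box, Fig. 6)
video_feasible = [s["kd"] <= KD_MAX and s["chl"] <= CHL_MAX for s in sites]
```

This mirrors the finding that average embayment conditions sit outside the bounds, so video success there is inherently more limited.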
Fig. 6

a (Right) Extinction coefficient (Kd) and chlorophyll a concentration values at sites where excellent, marginal, and poor videos were collected. The solid line represents the chlorophyll a threshold of 3.5 µg/L. All points right of the solid line would be excluded. The dotted line represents the extinction coefficient threshold of 0.5 Kd. All points above this line would be excluded. The shaded box represents samples within both threshold bounds. b (Left) Extinction coefficient (Kd) and chlorophyll a concentration values, within the threshold bounds, at sites where excellent, marginal, and poor videos were collected. The majority (80 %) of videos collected within these bounds were rated excellent. The numbers of sites within each category are as follows: excellent (160), marginal (36), and poor (4)

A Complement to PONARs

In total, 49 sites that were not sampled by PONAR had an excellent video captured. Approximately 73 % of these sites had visible rock substrate, which likely prevented a successful PONAR grab. The ability to sample hard bottom is an important piece of the coastal condition assessment; it is often lost with grab-based sampling, leaving the picture incomplete. Invasive dreissenid mussels have been found in higher densities on substrates such as bedrock, cobble, and wreck debris (Coakley et al. 1997).

Video sampling detected dreissenids at eight sites where PONAR sampling did not, and at 17 sites where PONAR sampling was unsuccessful. PONAR sampling may have missed dreissenid detection at those eight sites if their distribution was patchy, or if they were found solely on hard substrates (Coakley et al. 1997). At 32 sites, video sampling did not detect dreissenids even though PONAR data showed they were present. Half of these 32 sites had fewer than 5 dreissenids per PONAR grab. Dreissenid abundance does influence video detection ability (χ2 = 14.576, df = 1, P < 0.05). The logistic regression model predicted a 34 % chance of dreissenid detection from video sampling when abundance in a full-size PONAR was 5 (Fig. 7), which is equivalent to ~96 mussels/m2. As mussel abundance increases, so does the video’s mussel detection ability (Fig. 7).
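The conversion from a PONAR count to areal density follows directly from the grab opening given in the Methods (22.9 cm × 22.9 cm):

```python
# PONAR grab opening from the Methods: 22.9 cm x 22.9 cm
grab_side_m = 0.229
grab_area_m2 = grab_side_m ** 2       # ~0.0524 m2 sampled per grab

# 5 mussels in one full-size grab scales to the ~96 mussels/m2 cited above
count_per_grab = 5
density_per_m2 = count_per_grab / grab_area_m2  # ~95 mussels per m2
```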
Fig. 7

Logistic regression model predicting the probability of detecting dreissenid mussels in video images based on dreissenid abundance (χ2 = 14.576, df = 1, P < 0.05)

Enhancement Potential to Benthic Condition Assessment

Deciding what sampling gear to use in an ecological study affects the scope of the resulting data and, in turn, the overall findings of the study. For the NCCA, benthic grab sampling offered data regarding sediment type, color, and odor. In addition, the sample was processed to identify benthic macroinvertebrates, which were used as indicators of ecosystem health. While such small-scale sampling can contribute good large-scale information, one must recognize the limitations of a grab sample in fully representing the entirety of a site. Vegetation, rock, and organisms can be present in patches, and therefore could be easily missed by such a sampling technique. Toward this point, we make a brief assessment using mussels as our example to show how video affected quantitative estimates of percentage area coverage by mussels.

A strength of the NCCA survey, with its explicit spatially balanced statistical design foundation, is that it can be used to estimate the fraction of a total resource in a certain condition. In this case, we estimated the portion of the total area surveyed that had populations of live dreissenid mussels. For the whole U.S. nearshore region (including embayments), PONAR sampling estimated that 38 % of the entire zone had dreissenids. When all successful video samples were used to supplement the PONAR sampling, the estimate rose to 44 %, representing an area with mussels present of 7570 km2 (95 % confidence interval of ±7 %). The increase with supplemental video was significant at p = 0.057 (one-sided Z test). This difference is quite likely conservative. As we noted, there was a detection limit for video when dreissenids were present at low densities. Moreover, improving controllable aspects of video performance remains a goal, as more high-quality results might also slightly elevate estimates. Related to this, there were some sites where a few mussel shells were noted but we did not have enough confidence in the images to categorize the site as having live mussels present. Overall, video results raised regional nearshore average estimates by 15 %, and by significantly more than that in a couple of lakes (Huron and Ontario).

Figure 8 graphically compares results, with and without supplementary video results added to PONAR-based estimates, for the whole region’s nearshore and embayments, as well as for the nearshore of each individual lake. This illustration is not meant to show a correlation or regression between the two sets (they are not independent). Instead it simply highlights that in most cases video inclusion raised the overall area estimate, i.e., points mostly fall well above a 1:1 line. Thus, video sampling tended to increase the estimated area occupied by dreissenids compared to PONAR sampling alone. Lakes Erie, Ontario, and Michigan showed the highest occupancy (54–59 %, with video included). Lake Superior showed the lowest percent occupancy (by both PONAR and video). Even though video supplemented many clear-water Lake Superior sites that could not be sampled by PONAR, including a number of rocky-bottom sites, the estimated area with dreissenid mussels present did not increase, because dreissenids were not observed on hard substrate there. Three of the four Lake Superior sites where dreissenids were detected were soft-bottom sites in the Duluth-Superior Harbor area (a known zebra/quagga mussel area, Grigorovich et al. 2008). The only other site with a dreissenid mussel (a sandy nearshore environment east of Chequamegon Bay, WI) had only one specimen identified in a PONAR sample.
Fig. 8

Estimated area with invasive mussels detected by PONAR sampling and taxonomic analyses, compared to estimates supplemented with excellent video mussel detection. Data are for statistically derived population estimates for U.S. Great Lakes region for nearshore population (N = 246 sites), embayment sub-population (N = 156 sites), and for whole nearshore (both nearshore and embayment sites) computed by lake. Dotted line depicts 1:1 relationship

Therefore, PONAR-based surveys will slightly underestimate the actual extent of dreissenid mussels in many, but not all, nearshore areas. The embayments as a whole (a sub-population within the nearshore) had a slightly higher presence of mussels. The video supplement was less effective for embayments in that it did not significantly increase the area-occupied estimate (48 vs. 50 % of the embayment sub-area; Fig. 8). As previously noted, embayments as a group had higher Kd (lower water clarity), and thus video success was more limited (per Fig. 3) than in the clearer open nearshore. In contrast, hard substrate, which cannot be sampled by PONAR, was more prevalent in the open, more exposed areas of the nearshore. We noted above that where PONAR samples were not taken but video was collected, the large majority of sites showed rocky bottom. Enhancement of benthic condition assessment in the Great Lakes with our video protocol was in this case, and generally will be, more pronounced for the open nearshore than for the backwater, depositional, softer-bottom sediments more characteristic of embayments.

PONAR and video samples were obtained at similar proportions of the total sites (76–78 %), but the two sets of successfully sampled sites only partially overlapped. Video was successful at a high proportion of sites without PONAR; therefore the union of their sampling sites was a larger set than that of PONAR alone. The area un-surveyed by PONAR, 22 % of the total, had a significant presence of mussels (17 of 30 excellent videos), which otherwise would have gone undetected. On the other hand, the mussel detection rate by video was lower where PONAR samples were collected, because there were more marginal and poor videos, due principally to water clarity. Thus, to optimize mussel detection for future NCCA surveys: (1) sampling by video should be a high priority where PONAR sampling is not successful, and (2) video methods should consider options for overcoming poor water clarity.
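The effect of taking the union of the two gears' detections can be sketched as follows; the site IDs, weights, and detections are hypothetical, and the real survey applies design weights from the spatially balanced sample rather than the equal weights used here.

```python
# Hypothetical site-level detections. Each site carries a survey weight
# (km^2 of nearshore it represents); the union of PONAR and video
# detections drives the combined area-occupied estimate.
sites = {
    # site_id: (weight_km2, ponar_detect, video_detect)
    "A": (50.0, True,  True),
    "B": (50.0, False, True),   # video detects where PONAR missed or failed
    "C": (50.0, True,  False),  # PONAR detects below video's detection limit
    "D": (50.0, False, False),
}

total_area = sum(w for w, _, _ in sites.values())
ponar_area = sum(w for w, p, _ in sites.values() if p)
combined_area = sum(w for w, p, v in sites.values() if p or v)

print(ponar_area / total_area)     # 0.5 occupied, PONAR alone
print(combined_area / total_area)  # 0.75 occupied, PONAR plus video
```

The union can only raise (never lower) the occupancy estimate relative to either gear alone, which is why the paired design tends to yield a larger, and likely more accurate, occupied-area estimate.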

Summary

We highlighted a number of features suggesting that video is a complementary sampling gear that produced both quantitative and qualitative results. Video sampling helped prevent missing data in areas that could not be sampled by PONAR because of substrate composition, yielding more landscape-level qualitative observations. Because video sampling provides a broad look at site heterogeneity, it lets the researcher know whether the PONAR sample adequately represented the site (Hewitt et al. 1998). If not, video image data can compensate for the PONAR's limitations by preventing a false absence or a no-data result. Missing data (no sample) at a site increases overall population uncertainty, in terms of statistical confidence intervals, and thus hinders spatial or trend analyses in assessing differences. Therefore, quantitative estimates of dreissenid mussels, for example, can be enhanced when a PONAR and a video sample are collected in tandem. Hewitt et al. (1998) noted that this type of paired sample design is appropriate for fast-paced studies examining macrofaunal communities and environmental characteristics. Although we identified some limitations in video performance, we provide guidance on how performance could be improved, and succeeding surveys should try to do so. We recognize, however, that like all other sampling techniques, video sampling cannot be used in all locations. Overall, video sampling was a quick and useful method that contributed valuable data to the coastal assessment of the Great Lakes.

Acknowledgments

We thank scientists at the U.S. EPA Office of Research and Development—specifically, Will Bartsch (ORISE Participant) and Tim Corry at the Mid-Continent Ecology Division (Duluth), Tony Olsen and Tom Kincaid at the Western Ecology Division (Corvallis), and John Kiddon at the Atlantic Ecology Division (Narragansett)—for support on various aspects of the frame development, survey design, field pilot survey and camera testing, and crew training. Many of these colleagues generously offered assistance with aspects of data management and analysis, including statistical summaries/analysis of this initial NCCA for the Great Lakes. John Kiddon performed the Kd calculations for the Great Lakes data presented here. John McCauley at the Gulf Ecology Division (Gulf Breeze, FL), Greg Collianni (OW), and Tony Olsen were critical in helping to motivate and include the NCCA-Great Lakes effort. We thank EPA’s Office of Water (OW), specifically Greg Collianni, Sarah Lehmann, Treda Grayson, and Hugh Sullivan, as well as their OW contractors, and numerous staff from the Great Lakes States who coordinated and/or conducted the extensive field sampling in 2010, including collection of the video images. Contractors to OW completed the taxonomic analyses and water quality analyses we used for comparison with video results. Mari Nord at EPA Region 5 has been a great liaison between ORD research efforts and the collection/use of the data by Great Lakes States in Regions 2, 3, and 5. Paul Horvatin, Paul Bertram, Glenn Warren, and Beth Hinchey Malloy at the Great Lakes National Program Office (GLNPO)/Region 5 were instrumental in helping to develop and maintain the relationship between ORD, the Regions, OW, and the States as a strong partnership to adopt the NCCA approach in surveying the Great Lakes nearshore; they also were key in securing Great Lakes Restoration Initiative funding to make possible the Great Lakes 2010 enhancements, including the underwater video tool evaluated here.
Beth kindly offered review comments of a draft of the manuscript. Julie (Barker) Lietz was an ORISE participant at EPA, during which she conducted the video analysis and developed the standard processing/rating protocols presented in this study. This work was supported in part by an appointment to the ORISE participant research program supported by an interagency agreement between EPA and DOE (IA 92298301); the views expressed in this paper are those of the authors and do not necessarily reflect the views or policies of the U.S. EPA.

Copyright information

© Springer Science+Business Media New York (outside the USA) 2015

Authors and Affiliations

  • Julie E. Lietz
    • 1
    • 2
  • John R. Kelly
    • 1
  • Jill V. Scharold
    • 1
  • Peder M. Yurista
    • 1
  1. Mid-Continent Ecology Division, National Health and Environmental Effects Research Laboratory, U.S. Environmental Protection Agency, Office of Research and Development, Duluth, USA
  2. ORISE (Oak Ridge Institute for Science and Education) Participant, Oak Ridge, USA