
4. SELECTING TOPICS FOR DETAILED ANALYSES

FIRST PANEL MEETING

In accordance with contract provisions, once the specifications development flowchart was developed (see chapter 3), members of the research team met with the panel to seek its approval of the process and to identify topics for detailed analyses. The meeting took place on March 1, 1999, at the Turner-Fairbank Highway Research Center (TFHRC) in McLean, VA. The minutes of that meeting are included in appendix E.

Two major goals were planned for the meeting. The first goal was to present the preliminary specifications development flowchart to the panel members, discuss their comments and input, and obtain approval from the panel to proceed with a final process flowchart. The second goal was to determine the specific topics that the panel wanted to include for detailed analyses in the project. The minutes in appendix E indicate the process that was followed during the meeting.

With regard to the first goal, the researchers obtained input from the panel members and it was agreed that some modifications would be made to the initial flowchart. These changes were included in the final flowcharts shown in chapter 3. Concerning the second goal, there was discussion on a number of potential topics; however, there was not sufficient time for the panel to select the most desirable items for further study. It was therefore agreed that the principal investigator would distribute a survey form to the panel members to solicit their rankings of the various topics to be analyzed during the project.

SURVEY OF TOPICS FOR DETAILED ANALYSES

The principal investigator distributed a survey form to the panel members to determine a priority ranking for the various topics that were candidates for detailed analyses. The survey form that was distributed is shown in figure 5.

Of the 20 survey forms distributed (19 State representatives plus the COTR), 18 were returned. Two ranking methods were used. The first asked the respondents to group the topics into three categories: highest priority, next highest priority, and lowest priority. In summarizing these results, 5 points, 3 points, and 1 point were assigned to the topics in each category, respectively. The second ranking method asked the respondents to rank the topics in decreasing order from highest to lowest priority. In summarizing these results, 10 points were assigned to the highest priority topic, with the points decreasing to 9, 8, 7, and so on down to 1 point for the tenth topic. Zero points were assigned to any topic that was not among the respondent's top 10.
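To make the two scoring schemes concrete, the short Python sketch below tallies a single hypothetical ballot under both methods; the topic groupings and ordering shown are invented purely for illustration.

    # Illustrative tally of one hypothetical ballot under the two ranking methods.
    # Method 1: topics grouped as highest / next highest / lowest priority,
    # scored 5, 3, and 1 points, respectively (unlisted topics receive 0).
    ballot_method1 = {
        "highest": [10, 1, 7, 8],
        "next_highest": [6, 2, 4, 5],
        "lowest": [3, 11, 9, 12],
    }
    points_method1 = {topic: 0 for topic in range(1, 13)}
    for group, value in (("highest", 5), ("next_highest", 3), ("lowest", 1)):
        for topic in ballot_method1[group]:
            points_method1[topic] = value

    # Method 2: the 10 highest priority topics listed in decreasing order,
    # scored 10, 9, ..., 1 points; topics not listed receive 0.
    ballot_method2 = [10, 1, 7, 8, 6, 2, 4, 5, 3, 11]
    points_method2 = {topic: 0 for topic in range(1, 13)}
    for position, topic in enumerate(ballot_method2):
        points_method2[topic] = 10 - position

    print(points_method1)
    print(points_method2)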

Figure 5. Survey sent to panel members.

FAX TO: ______[contact]_______  FROM: ____________________

Optimal Acceptance Procedures for Statistical Specifications

Complete the tables below using two different ranking methods. Keep in mind that all items in the flowcharts will be addressed in the manual and the report. Some will just be addressed in general conceptual terms, while others will need to include detailed analyses to support recommendations.

First: List the 4 highest priority numbered items in the table shown, followed by the 4 items with the next highest priority and, finally, the 4 items with the lowest priority. You may include write-in items in your priority rankings.

Priority                 Numbered Items From the List

Highest (list 4)

Next Highest (list 4)

Lowest (list 4)

Second: Rank the 10 highest priority numbered items in decreasing order, from most important, 1, to least important, 10. You may include write-in items in your priority rankings.

Priority Ranking (1-10)    Numbered Item From the List
1 (highest)
2
3
4
5
6
7
8
9
10 (lowest)

Third: Cross out any of the bulleted items that you do not feel need to be included.

Fourth: Fax your ratings (and pages with crossed-out bullets) to Jim Burati at 864-656-2670.

Figure 5. Survey sent to panel members (continued).

List of Possible Topics for Further Analysis

  1. Analysis of the Percent Within Limits (PWL) approach, including:
    • Bias and precision of the PWL estimates versus sample size
    • Precision in OC curves for PWL versus sample size
    • Precision of average project PWL versus number of project lots
    • Precision of individual payments based on PWL
    • Precision of average project payment versus number of project lots
    • Effects of non-normal populations (bimodal and skewed)
  2. Analysis of the Average Absolute Deviation (AAD) approach, including:
    • Bias and precision of the AAD estimates versus sample size
    • Methodology for developing and presenting AAD OC curves
    • Precision in OC curves for AAD versus sample size
    • Precision of individual payments based on AAD
    • Effects of non-normal populations (bimodal and skewed)
  3. Analysis of the sample mean (mean) acceptance approach, including:
    • Bias and precision of the mean estimates versus sample size
    • Precision in OC curves for mean versus sample size
    • Precision of average project mean versus number of project lots
    • Precision of individual payments based on mean
    • Precision of average project payment versus number of project lots
    • Effects of non-normal populations (bimodal and skewed)
  4. Analysis of the Conformal Index (CI) approach, including:
    • Bias and precision of the CI estimates versus sample size
    • Methodology for developing and presenting CI OC curves
    • Precision in OC curves for CI versus sample size
    • Precision of individual payments based on CI
    • Effects of non-normal populations (bimodal and skewed)
  5. Analysis of the single sample variability (X̄ ± ks) approach, including:
    • Bias and precision of the X̄ ± ks estimates versus sample size
    • Methodology for developing and presenting X̄ ± ks OC curves
    • Precision in OC curves for X̄ ± ks versus sample size
    • Precision of individual payments based on X̄ ± ks
    • Effects of non-normal populations (bimodal and skewed)
  6. Analysis of the moving average (m) approach for acceptance, including:
    • Bias and precision of the m estimates versus sample size
    • Investigation of the possibility of developing and presenting m OC curves
    • Methods for applying price adjustments when using m
    • Precision of individual payments based on m
    • Precision of average project payment versus number of project lots
    • Effects of non-normal populations (bimodal and skewed)

    Note: Some of the bulleted items for moving averages may not be possible to determine.

  7. Analysis of methods for determining lot pay factors for individual acceptance properties
  8. Analysis of methods for determining composite lot pay factors when multiple acceptance properties are used
  9. Analysis of the use of Bayesian procedures that incorporate information from prior lots or prior projects into the acceptance decision for the current lot
  10. Analysis of procedures for verifying or validating contractor and agency test results, including:
    • Use of the F-test and t-test (AASHTO QA Guide Spec.)
    • Use of a single agency test and the mean and range of contractor tests (AASHTO QA Guide Spec.)
    • Use of a maximum allowable difference between individual agency and contractor tests
  11. Analysis of various individual "bells and whistles," that is, additional provisions that are used in conjunction with the traditional acceptance approaches, for example:
    • Use of payment based on PWL but with no price reductions applied if all individual tests are within the limits
    • Use of sample mean for acceptance, but also placing wider limits on individual test results
    • Use of limits on sample range or standard deviation in addition to limits on the sample average
    • Other provisions:
      _______________________________________________________________
      _______________________________________________________________
  12. Other major items for analyses:
    _______________________________________________________________
    _______________________________________________________________

Survey Results

A summary of the survey responses is provided in table 4 for the first ranking method and in table 5 for the second ranking method. The same results are shown in graphical form, from highest to lowest priority, in figures 6 and 7 for the first and second ranking methods, respectively.

Table 4. Survey results for the first ranking method.
          Topic
Agency      1     2     3     4     5     6     7     8     9    10    11    12
FHWA        1     3     1     3     1     3     5     5     5     3     1     5*
CT          3     1     3     5     1     3     5     5     1     5     3
ID          5     1     3     3     3     1     5     5     1     5     3
IL          5     3     5     3     3     5     1     1     1     5     3
KS          5     5     3     5     3     5     1     1     1     3     1     3?
LA          5     1     1     1     3     3     5     5     5     3     3
MN          5     1     1     3     5     5     3     5     1     3     3
NV          3     5     3     1     3     3     5     5     1     5     1
NJ          3     3     1     3     1     3     5     5     1     5     1     5+
NY          3     3     1     1     3     1     5     5     3     5     5
ON          5     1     3     1     1     3     5     5     3     5     3
OR          5     5     3     1     1     3     5     3     1     5     3
PA          5     5     3     5     3     1     3     3     1     5     1
SC          5     3     1     1     1     3     5     5     3     5     3
TX          5     5     3     3     3     5     1     0     5     3     0
VA          0     0     0     0     0     0     5     5     0     5     0
WA          5     3     3     3     1     5     5     3     1     5     1
WI          5     3     1     3     5     3     5     1     1     5     3
Total      73    51    39    45    41    55    74    67    35    80    38    13

* Procedures for determining acceptable alpha and beta risks

? Listed an item 12 in the ranking, but did not identify it

+ Establishment of the relationship between quality/performance/value

Table 5. Survey results for the second ranking method.
           Topic
Agency       1      2      3      4      5      6      7      8      9     10     11     12
FHWA         2      5      0      6      0      3      9      7      8      4      1     10*
CT           6      2      3     10      0      5      8      7      1      9      4
ID          10      2      5      4      3      1      8      7      0      9      6
IL           9      5      7      4      3     10      2      2      0      8      6
KS          10      9      5      7      4      8      2      1      0      6      0      3?
LA          10      2      1      0      6      3      8      9      7      5      4
MN           9      2      1      3      7     10      5      8      0      6      4
NV           5      8      6      0      3      4     10      9      2      7      1
NJ           5      3      0      4      0      6      9      8      1      7      2     10+
NY           6      5      1      2      4      0      9     10      3      7      8
ON          10      0      3      2      1      6      8      7      5      9      4
OR          10      7      3      2      1      4      9      6      0      8      5
PA          10      9      5      8      4      2      6      3      1      7      0
SC          10      4      2      0      1      3      7      8      5      9      6
TX           9     10      5      3      4      8      2      0      7      6      0
VA           0      0      0      0      0      0      9      9      0      9      0
WA         7.5      5      5      5    2.5    7.5    9.5    2.5      1    9.5      0
WI          10      5      1      6      9      4      7      2      0      8      3
Total    138.5     83     53     66   52.5   84.5  127.5  105.5     41  133.5     54     23

* Procedures for determining acceptable alpha and beta risks

? Listed an item 12 in the ranking, but did not identify it

+ Establishment of the relationship between quality/performance/value


Figure 6. Graphical presentation of survey results for the first ranking method.


Figure 7. Graphical presentation of survey results for the second ranking method.

Table 6 shows the rankings from the two different methods and the overall ranking, which is the average of the rankings from the two ranking methods.

Table 6. Overall rankings of the survey topics.
Topic   Analysis of ...                                                                    First Ranking   Second Ranking   Overall
                                                                                           Method          Method           Ranking
10      Procedures for verifying or validating contractor's and agency's test results           1               2              1
1       PWL approach                                                                            3               1              2
7       Methods for determining lot pay factors for individual acceptance properties            2               3              3
8       Methods for determining composite pay factors when multiple properties are used         4               4              4
6       Moving average approach                                                                 5               5              5
2       AAD approach                                                                            6               6              6
4       CI approach                                                                             7               7              7
3       Sample mean approach                                                                    9               9              8 (tie)
5       Sample variability approach                                                             8              10              8 (tie)
11      Various "bells and whistles"                                                           10               8              8 (tie)
9       Use of Bayesian procedures                                                             11              11             11
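
The overall rankings in table 6 follow directly from the point totals in tables 4 and 5: each method's totals are converted to ranks, the two ranks are averaged for each topic, and the averaged ranks are ranked again, with ties sharing a position (topics 3, 5, and 11 each average 9 and therefore share position 8). The Python sketch below reproduces that calculation from the totals reported above; topic 12 (write-ins) is excluded.

    # Reproduce the overall rankings in table 6 from the point totals
    # in tables 4 and 5 (topics 1 through 11).
    totals_method1 = {1: 73, 2: 51, 3: 39, 4: 45, 5: 41, 6: 55,
                      7: 74, 8: 67, 9: 35, 10: 80, 11: 38}
    totals_method2 = {1: 138.5, 2: 83, 3: 53, 4: 66, 5: 52.5, 6: 84.5,
                      7: 127.5, 8: 105.5, 9: 41, 10: 133.5, 11: 54}

    def ranks(totals):
        """Rank the topics: 1 = highest point total, 2 = next highest, and so on."""
        ordered = sorted(totals, key=totals.get, reverse=True)
        return {topic: position + 1 for position, topic in enumerate(ordered)}

    r1, r2 = ranks(totals_method1), ranks(totals_method2)
    average = {topic: (r1[topic] + r2[topic]) / 2 for topic in totals_method1}

    # Overall rank = position of each averaged rank; tied topics share a position.
    for topic in sorted(average, key=average.get):
        overall = 1 + sum(1 for value in average.values() if value < average[topic])
        print(f"Topic {topic:2d}: method 1 rank {r1[topic]:2d}, "
              f"method 2 rank {r2[topic]:2d}, overall rank {overall}")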

As would be expected, the two ranking methods produced very similar results. The clear winners were the topics related to verifying or validating the contractor's test results, the percent within limits (PWL) approach, and the determination of payment factors. There was a considerable drop-off between this group and the moving average, average absolute deviation (AAD), and conformal index (CI) approaches.

Two additional topics were proposed, each by a single respondent: "procedures for determining acceptable α and β risks" and "establishment of the relationship between quality, performance, and value." Each of these topics would require considerable effort and, indeed, would constitute a major research project in its own right. It was not possible to address them within the time and resources allocated for the current project.

TOPICS SELECTED FOR DETAILED ANALYSES

Table 6 identifies the priority topics that, in the opinion of the panel, required detailed analyses during the current project. The priority items selected by the panel were:

  • Analysis of the procedures for verifying or validating the contractor's and agency's test results.
  • Analysis of the use of PWL as the quality measure.
  • Analysis of the methods for determining lot pay factors for individual acceptance properties.
  • Analysis of the methods for determining the composite payment factor when multiple acceptance properties are used.

These are essentially the same topics that were identified from the process flowcharts in chapter 3. Those topics were:

  • What quality measure should be used for individual quality characteristics?
  • What payment relationships should be used for individual quality characteristics?
  • How should multiple quality characteristics be combined into a single payment factor?
  • What procedures should be used to verify the contractor's test results if they are to be used in the acceptance and payment decision?

The only difference is that the panel members were interested primarily in the PWL quality measure, whereas the flowcharts indicate that a quality measure must be selected but do not imply that it must be PWL. It was therefore decided to conduct initial analyses on several potential quality measures, but to concentrate the detailed analyses on the PWL measure, provided that the initial analyses confirmed it as the recommended quality measure.
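
For reference, the Python sketch below illustrates how the candidate quality measures named above can be estimated from a lot's test results. The definitions of AAD (average absolute deviation from the target) and CI (root mean square deviation from the target) are the standard ones; the PWL estimate shown uses a simple normal-distribution approximation of the quality index rather than the beta-distribution-based tables normally used in practice, the mean-minus-ks check is shown for a single lower limit only, and the test data, specification values, and k value are hypothetical.

    import math
    import statistics

    # Hypothetical lot of test results and specification values (illustrative only).
    tests = [92.1, 94.3, 90.8, 95.0, 93.2]
    lower_limit, target = 90.0, 93.0

    mean = statistics.mean(tests)
    std_dev = statistics.stdev(tests)      # sample standard deviation (n - 1)

    # Percent within limits (PWL), single lower limit: quality index
    # Q = (mean - L) / s, converted here with the normal distribution as a
    # rough approximation of the usual beta-distribution-based estimate.
    quality_index = (mean - lower_limit) / std_dev
    pwl_estimate = 100 * 0.5 * (1 + math.erf(quality_index / math.sqrt(2)))

    # Average absolute deviation (AAD) from the target value.
    aad = statistics.mean(abs(x - target) for x in tests)

    # Conformal index (CI): root mean square deviation from the target value.
    ci = math.sqrt(statistics.mean((x - target) ** 2 for x in tests))

    # Single-limit mean-and-variability check: accept if mean - k*s clears the
    # lower limit (k is a specification constant; 1.0 is used here arbitrarily).
    k = 1.0
    passes_variability_check = (mean - k * std_dev) >= lower_limit

    print(f"PWL (approx.) = {pwl_estimate:.1f}, AAD = {aad:.2f}, CI = {ci:.2f}, "
          f"mean - ks clears limit: {passes_variability_check}")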

Each of the bulleted items listed above is presented in depth in subsequent chapters.
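
As a brief illustration of the first of those topics, the sketch below carries out the F-test and t-test comparison of contractor and agency test results referred to under item 10 of the survey (the AASHTO QA guide specification approach). The data and the significance level are assumptions chosen for illustration, and the pooled-variance t-test is used throughout to keep the sketch short; in the guide specification the form of the t-test depends on the outcome of the F-test.

    import statistics
    from scipy import stats

    # Hypothetical contractor and agency test results for one lot (illustrative only).
    contractor = [93.1, 94.8, 92.5, 95.2, 93.9, 94.4]
    agency = [92.7, 95.5, 93.8]
    alpha = 0.01   # significance level assumed here for illustration

    # F-test: do the contractor and agency variances differ significantly?
    var_c, var_a = statistics.variance(contractor), statistics.variance(agency)
    f_stat = max(var_c, var_a) / min(var_c, var_a)
    df_num = (len(contractor) if var_c >= var_a else len(agency)) - 1
    df_den = (len(agency) if var_c >= var_a else len(contractor)) - 1
    p_f = 2 * stats.f.sf(f_stat, df_num, df_den)     # two-sided p-value

    # t-test: do the contractor and agency means differ significantly?
    t_stat, p_t = stats.ttest_ind(contractor, agency, equal_var=True)

    print(f"F = {f_stat:.2f} (p = {p_f:.3f}), t = {t_stat:.2f} (p = {p_t:.3f})")
    if p_f < alpha or p_t < alpha:
        print("Difference detected: investigate before using contractor results")
    else:
        print("Contractor results validated by agency results at this level")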
