
Statistically-Based Validation of Computer Simulation Models in Traffic Operations and Management - Discussion 1

Laurence R. Rilett
Clifford H. Spiegelman*
Texas A&M University and Texas Transportation Institute

The authors are to be commended for a timely article. With the recent advances in intelligent transportation systems (ITS) deployment, the corresponding availability of large traffic databases, and the increased use of traffic microsimulation models by transportation engineers, the issues examined are important. However, we would like to take this opportunity to point out some additional issues and research questions related to the proposed methodology.

The success of this study is attributable to a number of factors, and determining the portability of the methodology to other situations, even similar ones, is a matter for further study. The modeling effort rests on several inputs: field data, technical expertise, and network development. In the assessment of CORSIM, a large set of factors is evaluated, many of which are inputs to CORSIM. Thus, we wonder: had a group with less expertise than the paper's authors changed the signal timing, would it have been as successful? Had a junior engineer developed the network input to CORSIM, would the model have performed as well? Had the data been collected at a different location, would the predictions have been as accurate? How accurate do the input data have to be for the model to make quality predictions?

In almost all areas of science, engineering, and life, making a prediction and then having that prediction judged accurate is powerful evidence that the method used to make the prediction is a good one. Still, one wonders how many successes, and what proportion of successes, are needed to validate a model. Babe Ruth gave a famous baseball prediction when he pointed to the center field stands at Wrigley Field during the 1932 World Series and predicted that he would hit a home run, which he promptly did. Does this mean that he could repeat the feat anytime he wanted? What percentage of the time, and out of how many such pointing attempts, would he need to be right? We think issues like those addressed in this paper can be handled by developing an appropriate statistical theory for field experiments.
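To make the question concrete, the minimal sketch below (in Python) asks how many successful field predictions, out of a given number of attempts, would be needed before chance alone becomes an implausible explanation. The no-skill baseline rate, the trial counts, and the significance level are hypothetical illustration values, not figures from the paper.

# A sketch of the kind of field-experiment statistics alluded to above: an
# exact binomial test of whether a forecaster's hit rate exceeds a "no-skill"
# baseline.  All numerical inputs below are illustrative assumptions.
from math import comb

def p_value_at_least(k_successes, n_trials, p0):
    """P(X >= k) for X ~ Binomial(n_trials, p0): the chance that a no-skill
    predictor does at least this well by luck alone."""
    return sum(
        comb(n_trials, k) * p0**k * (1 - p0) ** (n_trials - k)
        for k in range(k_successes, n_trials + 1)
    )

def successes_needed(n_trials, p0, alpha=0.05):
    """Smallest number of successful predictions out of n_trials that would
    reject the no-skill hypothesis at significance level alpha."""
    for k in range(n_trials + 1):
        if p_value_at_least(k, n_trials, p0) <= alpha:
            return k
    return n_trials + 1  # not attainable even with a perfect record

if __name__ == "__main__":
    # One success in one trial (the "called shot") is not persuasive unless
    # the baseline chance of success is already very small.
    print(p_value_at_least(1, 1, p0=0.10))        # 0.10, not significant at the 5% level
    # How many correct calls out of 20 attempts would be convincing?
    print(successes_needed(20, p0=0.10, alpha=0.05))

The point of the sketch is simply that a single successful prediction carries weight only in proportion to how unlikely success would be without skill; a formal theory for validation experiments would need to make that baseline explicit.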

In this study only a select number of parameters were calibrated, while the majority were left at their default values. In fact, only three types of changes were made: new sinks/sources were added to the network, the free-flow speed on one link was reduced, and some entry volumes (more correctly, λ values) were adjusted to better reflect downstream measurements. Interestingly, the behavioral parameters (e.g., gap acceptance, car-following headway) went untouched, which runs counter to typical practice. It is often assumed that the field values are relatively accurate, and the behavioral parameters (e.g., gap acceptance, driver aggressiveness) are calibrated so that the modeled output and the traffic data are similar (see the references below). Regardless, the question of transferability arises: is the validation methodology appropriate for all locations, or just for those locations where the default parameters apply? At a minimum, further study is required before statements such as “…with careful calibration and tuning, CORSIM output will match field observations and be an effective predictor” can be made with confidence. In our opinion it is easy to envision situations where no amount of expert manipulation of the input will allow CORSIM to be used, because the behavior modeling (i.e., the default behavioral parameters) in CORSIM would not apply to the drivers in the traffic network being simulated. Two related questions arise from this argument: 1) which parameters should be calibrated and which should be left alone, and 2) when is it reasonable to calibrate high-fidelity microsimulation models, such as CORSIM, with relatively low-fidelity data?
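As a concrete illustration of the calibration question, the sketch below (Python) searches over a small set of behavioral parameters to minimize the mismatch between simulated and observed link volumes. The wrapper run_corsim, the parameter names and bounds, and the observed counts are hypothetical stand-ins, not part of the paper or of CORSIM's actual interface; genetic-algorithm calibration (e.g., Cheu et al. 1998; Lee et al. 2001) follows the same evaluate-and-compare pattern.

import random
from math import sqrt

# Observed hourly link volumes (veh/h); illustrative values only.
OBSERVED = {"link_1": 820.0, "link_2": 640.0, "link_3": 1110.0}

# Candidate behavioral parameters and search bounds; names are illustrative.
BOUNDS = {
    "mean_gap_acceptance_s": (2.0, 5.0),
    "car_following_headway_s": (0.8, 2.0),
    "driver_aggressiveness": (0.5, 1.5),
}

def run_corsim(params):
    """Hypothetical wrapper that would run the simulator with the candidate
    parameters and return simulated link volumes.  A toy response is used
    here so the sketch is self-contained and runnable."""
    throughput = (3600.0 / params["car_following_headway_s"]) / 2400.0
    boldness = params["driver_aggressiveness"] / (params["mean_gap_acceptance_s"] / 3.5)
    scale = 0.6 * throughput + 0.4 * boldness
    return {link: vol * scale for link, vol in OBSERVED.items()}

def rmse(simulated, observed):
    """Root-mean-square error between simulated and observed volumes."""
    return sqrt(sum((simulated[k] - observed[k]) ** 2 for k in observed) / len(observed))

def calibrate(n_trials=500, seed=1):
    """Plain random search over the parameter bounds; a genetic algorithm
    would follow the same evaluate-and-compare loop."""
    rng = random.Random(seed)
    best_params, best_err = None, float("inf")
    for _ in range(n_trials):
        candidate = {name: rng.uniform(lo, hi) for name, (lo, hi) in BOUNDS.items()}
        err = rmse(run_corsim(candidate), OBSERVED)
        if err < best_err:
            best_params, best_err = candidate, err
    return best_params, best_err

if __name__ == "__main__":
    params, err = calibrate()
    print(f"best RMSE = {err:.1f} veh/h")
    for name, value in params.items():
        print(f"  {name} = {value:.2f}")

A sketch of this kind also makes the fidelity mismatch visible: the objective function is driven entirely by aggregate volume counts, so very different behavioral parameter combinations can produce nearly identical fits.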

Lastly, the authors make two implicit assumptions in their prediction methodology that need to be identified explicitly. The first is that the entering volumes and turning movements are fixed, which implies that the origin-destination movements are fixed. For small networks, such as the test bed, this may be reasonable. However, for larger networks, better signal timing will lead to an increase in capacity; in congested networks such as Chicago, where there can be significant latent demand, this would normally lead to an increase in observed volume. This is one of the reasons before-and-after studies of major transportation improvements are so difficult. In fairness to the authors, they did perform a sensitivity analysis on demand based on the volume counts observed after the change. The point remains, however, that research is required on when this assumption of constant demand can be made. Intuitively, the relationship between demand and network capacity would be important for both the after analysis and the traffic signal optimization itself.
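A simple way to frame the demand question is to repeat the before/after comparison across a range of demand scalings and check whether the predicted benefit of the new timing plan survives. The sketch below does this with simulate_average_delay, a hypothetical stand-in for a wrapper around the simulation model; the delay constants and demand factors are illustrative only.

def simulate_average_delay(timing_plan, demand_factor):
    """Hypothetical stand-in for a simulation run: returns network average
    delay (s/veh) for a timing plan at a scaled demand level.  The constants
    are illustrative, chosen so the new plan is better at current demand but
    more sensitive to demand growth."""
    base = 42.0 if timing_plan == "old" else 35.0
    growth_sensitivity = 60.0 if timing_plan == "old" else 180.0
    return base + growth_sensitivity * max(0.0, demand_factor - 1.0) ** 2

if __name__ == "__main__":
    # Does the predicted improvement survive if latent demand materializes?
    for factor in (1.00, 1.05, 1.10, 1.15, 1.20):
        before = simulate_average_delay("old", factor)
        after = simulate_average_delay("new", factor)
        reduction = 100.0 * (before - after) / before
        print(f"demand x{factor:.2f}: {before:5.1f} -> {after:5.1f} s/veh "
              f"({reduction:4.1f}% delay reduction)")

In this toy setting the delay reduction shrinks as latent demand materializes, which is exactly the kind of result a constant-demand analysis cannot reveal.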

The second assumption is that the routes chosen by drivers remain constant, as reflected in the fixed turning percentages. Intuitively, if there is a significant change in signal timing, drivers will change their routes if they can find a faster way to reach their destinations. The fact that the authors observed the turning percentages change after the new signal timing was implemented lends some credence to this argument. As with the demand assumption, the constant route assignment assumption needs to be studied so that the conditions under which it can be made are known. Obviously, if either assumption is invalid, the potential of the proposed methodology could be limited.
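One way to probe the constant-route-choice assumption with field data is to test whether turning-movement counts at an approach differ between the before and after periods. The sketch below uses a chi-square test of homogeneity via SciPy; the counts are hypothetical illustration values, not data from the study.

from scipy.stats import chi2_contingency

# Turning-movement counts (veh) at one approach; rows are study periods and
# columns are left / through / right.  Illustrative values, not study data.
counts = [
    [145, 612, 98],   # before the new signal timing
    [176, 588, 91],   # after the new signal timing
]

chi2, p_value, dof, _expected = chi2_contingency(counts)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Turning proportions changed; the constant-route assumption is doubtful here.")
else:
    print("No significant change detected in turning proportions at this approach.")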

In closing, we are pleased to see that traffic microsimulation model validation is receiving the research attention it deserves. We hope that the work discussed in this paper will spur further research in this area and that the important questions raised in the article will be adequately addressed.

REFERENCES

Benekohal, R.F. 1991. Procedure for Validation of Microscopic Traffic Flow Simulation Models. Transportation Research Record 1320:190–202.

Cheu, R., X. Jin, K. Ng, and Y. Ng. 1998. Calibration of FRESIM for the Singapore Expressway Using Genetic Algorithm. Journal of Transportation Engineering 124, 6:526–35.

Davis, C.F. and T.A. Ryan. 1981. Comparison of NETSIM Results with Field Observations and Webster Predictions for Isolated Intersections. The Application of Traffic Simulation Models. Special Report 194. Washington, DC: Transportation Research Board.

Lee, D., X. Yang, and P. Chandrasekar. 2001. Parameter Calibration for PARAMICS Using Genetic Algorithm. Paper presented at the 80th Annual Meeting of the Transportation Research Board, Washington, DC, January.

Prevedouros, P.D. and Y. Wang. 1999. Simulation of a Large Freeway/Arterial Network with CORSIM, INTEGRATION and WATSim. Paper presented at the 78th Annual Meeting of the Transportation Research Board, Washington, DC, January.

Radwan, A.E., F. Naguib, and J. Upchurch. 1991. Assessment of the Traffic Experimental and Analysis Simulation Computer Model Using Field Data. Transportation Research Record 1320:216–26.

Rakha, H., M. Van Aerde, L. Bloomberg, and X. Huang. 1998. Construction and Calibration of a Large-Scale Microsimulation Model of the Salt Lake Area. Transportation Research Record 1644:93–102.

Rilett, L.R., K. Kim, and B. Raney. 2000. A Comparison of the Low Fidelity TRANSIMS and High Fidelity CORSIM Highway Simulation Models Using ITS Data. Transportation Research Record 1739:1–8.

Wang, Y. and P.D. Prevedouros. 1998. Comparison of INTEGRATION, TSIS/CORSIM, and WATSim in Replicating Volumes and Speeds on Three Small Networks. Transportation Research Record 1644:80–92.

Address for Correspondence

Clifford H. Spiegelman, Department of Statistics, Room 3143, Texas A&M University, College Station, TX 77845-3143. Email: cliff@stat.tamu.edu.