3. Traffic Analysis Tools Incorporating Weather

3.1 Overview

No single universally correct method exists for conducting traffic analyses because not all traffic analyses share the same objectives. Objectives are an important deciding factor in choosing a type of traffic analysis. For instance, if a study's objective is to simulate traffic flow for multiple regions of a State, analysts need to choose a traffic analysis tool that can handle the amount of data involved in the study. The three main types of analysis are macroscopic, mesoscopic, and microscopic. This section defines each analysis type and discusses their modeling capabilities and limitations.

3.1.1 Macroscopic Analysis

A fundamental principle of transportation operations is that flow, speed, and density are all related to each other: the conditions of any two determine the third traffic stream parameter. For instance, if drivers on a highway are able to travel at their free-flow speeds and maximum density has not been reached, then traffic will flow smoothly. When users incorporate macroscopic simulation models into their traffic analyses, they are analyzing the relationship among these three traffic stream parameters. The word "macroscopic" suggests "large scale," and accordingly, a key feature of most macroscopic models is their ability to model large study areas. Using the flow, speed, and density measures of a large network, macroscopic models can provide simple representations of the traffic behavior in that network. Because these models do not require detailed data such as driver characteristics, model setup can be done quickly and the simulation can output results in a timely manner (Traffic Analysis Toolbox Volume I – Alexiadis et al., 2004).
Although the ability of macroscopic models to output a simple representation of traffic flow in a timely manner is a benefit, it is also a limitation. Macroscopic models can simulate traffic stream parameters (i.e., flow, speed, and density) over a large scope, but they cannot model detailed individual vehicle movements (e.g., saturation headway and startup lost time). The FHWA Traffic Analysis Toolbox lists the commonly used macroscopic simulation models, which can be found at the following link:
Note that no literature was found on weather-related analysis using macroscopic modeling. This is because weather impacts cannot be effectively analyzed using macroscopic tools. As a result, this chapter does not discuss incorporating weather in macroscopic analysis.

3.1.2 Mesoscopic Analysis

Macroscopic models can only provide so much detail in simulating real-world traffic conditions. In some cases, research requires more in-depth simulation results; this is where mesoscopic models come into play. These models can model large study areas while providing users with more detailed information than macroscopic models. For instance, users can model diversion routes from major roadways (e.g., freeways and highways) to other road types (e.g., signalized arterials), which could not be accomplished with macroscopic models. With these capabilities come weaknesses. One key limitation of mesoscopic models is the inability to model detailed operational strategies such as a coordinated traffic network. This operational strategy involves programming the traffic signals at several intersections so that the flow of traffic is optimized (i.e., drivers do not receive a red signal at each intersection they approach). Such an operation would be better suited to a microscopic model because it requires more detailed data. Mesoscopic models simulate real-world traffic behavior with higher accuracy than macroscopic models, but with lower accuracy than microscopic models. Commonly used mesoscopic simulation models include those from the DYNASMART and DynaMIT families. Recent studies of inclement weather impacts on the transportation system have incorporated mesoscopic analysis with these models.
The FHWA Traffic Analysis Toolbox provides additional information on mesoscopic simulation models at the following link:
3.1.3 Microscopic Analysis

When research requires highly detailed analysis of real-world traffic behavior, users turn to microscopic simulation models. These models simulate the movement of individual vehicles using car-following models, longitudinal motion models (e.g., acceleration and deceleration models), gap-acceptance models, and lane-changing models. Microscopic models allow users to simulate the stochastic nature of traffic: the drivers you share the road with do not drive in the same manner as you, and their thinking patterns and comfort levels vary with each traffic scenario presented to them. Incorporating driver behavior data is essential to simulate traffic conditions with the highest accuracy. The ability of microscopic models to simulate traffic behavior with high accuracy is a benefit but also a weakness. To achieve such a high level of accuracy, microscopic simulation models require substantial amounts of roadway geometry, traffic control, traffic pattern, and driver behavior data. These data requirements limit users to modeling smaller networks than those that can be modeled in macroscopic and mesoscopic analyses, and they also cause each simulation run to take a long time to output results. The FHWA Traffic Analysis Toolbox lists commonly used microscopic simulation models, including CORSIM, VISSIM, AIMSUN, and PARAMICS. For additional information on microscopic simulation tools, please refer to the following sites:
3.2 Mesoscopic Analysis

As noted, this chapter does not cover macroscopic analysis because weather impacts cannot be effectively analyzed using macroscopic tools. Mesoscopic analysis is an emerging method for simulating and studying traffic. As explained in the Section 3.1 overview, macroscopic simulation is a highly aggregated method for analyzing traffic that assumes all vehicles on the roadway have the same characteristics. However, this method is not appropriate for predicting and understanding changes happening at the vehicular level. Microscopic analysis looks at every individual vehicle and its unique characteristics. This method makes identifying changes and conflicts among vehicles easier; however, it requires a great deal of computing power and is most effective in smaller geographic networks. One advantage of mesoscopic analysis is the ability to analyze larger geographic areas than microscopic analysis while still providing some of the detailed data that macroscopic analysis cannot provide. Mesoscopic analysis also allows for the analysis of road segments, multiple routes within a network, basic signalized intersections, freeways, and ramps. The major disadvantage of mesoscopic analysis is the heavy data requirement: it requires almost as much data as microscopic simulation, and for large geographic regions the data requirements are comparable to those of transportation planning studies. Another disadvantage is that some complex traffic features, such as sophisticated traffic signals, currently cannot be simulated well.

3.2.1 Mesoscopic Traffic Simulation Model Setup

There are a number of software packages available for mesoscopic traffic modeling. In the United States, the more commonly used software packages are the Traffic Estimation and Prediction Systems (TrEPS) tools, formerly known as Dynamic Traffic Assignment (DTA) tools. These tools include DynaMIT-P, DynaMIT-X, DYNASMART-P, DYNASMART-X, and DynusT.
Some other mesoscopic tools are listed below. This TAT module focuses primarily on TrEPS tools.
A typical network in any of these programs might look something like Figure 3-1. It clearly shows the extent of the geographic area that this model can analyze: an area much larger than a typical microscopic simulation would be able to handle.

Figure 3-1. Sample DynusT Network in Portland, OR (DynusT, 2010)

The basic principles of traffic simulation vary little from one type of traffic simulation to the next. The overall procedure of developing and applying traffic simulation modeling to a traffic analysis consists of seven steps, four of which are part of the model setup. A flow chart of the model setup for traffic analysis is shown in Figure 3-2.

Figure 3-2. Flow Chart of Model Set Up

3.2.1.1 Project Scope

An important first step for any project is assessing its scope. When it comes to traffic modeling, thoroughly scoping out the project can be very useful in deciding which traffic analysis tool best fits the goals of the project. Projects that require modeling of large geographic areas, faster computing times, or networks with several routes for drivers to take should consider using mesoscopic modeling, the drawback being a loss of fidelity in the output results. The microscopic simulation section further elaborates on this topic.

3.2.1.2 Data Collection

Setting up the model cannot be done until all of the necessary data are complete. All of the model setup information falls into one of three groups: Network, Control, or Movement.

Network: The network data contain all of the links and nodes that geographically build the network. Links represent roadways, and nodes are points on the map where multiple roads connect. A node could be an intersection, a freeway ramp, or simply a point where the road curves, making for a more accurate representation of the roadway network.

Control: The control data are needed for intersections where signals or signs are used to govern vehicle movements.
The data would include information on the location and timing of traffic signals or the locations of Stop or Yield signs. It also includes data for ramp metering or variable message sign (VMS) information being provided to drivers.

Movement: The movement data are also necessary for intersection control and define how a vehicle moves when at an intersection. These data work hand-in-hand with the Control data to accurately move vehicles throughout the network.

3.2.1.3 Base Model Development

To complete this step, all of the data that were collected need to be organized and formatted correctly into the proper program-specific input data so that the modeling tool will be able to read and use the data without problems. Mesoscopic traffic models can be very data intensive and require a large amount of input data in order to build an entire network.

3.2.1.4 Error Checking

The primary purpose of error checking is to ensure that the model being developed will accurately simulate what is occurring or will occur in real networks. Data calibration and validation will be discussed in detail in the coming sections.

3.2.2 Data Preparation

To prepare a mesoscopic simulation model, data are typically needed for estimating supply and demand parameters. Data for supply parameters are speed, volume, and density from each segment type of the transportation network being studied. Data for demand parameters (i.e., the origin-destination demand matrix) are the historical origin-destination (OD) matrix and observed counts. Conducting data preparation allows for quality assurance in the input data of the study. It is made up of review, error checking, and the reduction of the data collected in the field (Dowling et al., 2004). Data verification and validation should be performed during the data preparation step. The following are data verification and validation checks:
3.2.3 Traffic Model Calibration for Normal Conditions

Calibrating the mesoscopic traffic model is based on the same principles as calibration of any model. Real data such as vehicle counts, speed studies, and travel time data; the Highway Capacity Manual standards; and historic origin and destination demand data are all useful for calibrating and validating the model. Figure 3-3 briefly outlines the process of calibrating supply and demand data for a mesoscopic simulation. Figure 3-3(a) shows calibration as a three-step process: disaggregated-level, sub-network-level, and system-level calibration. The disaggregated-level calibration deals with calibrating the speed-density relationships and the capacity over entire segments. As shown in Figure 3-3(b), the sub-network calibration involves more specific procedures: it is the process of estimating and calibrating demand at the segment level. Finally, system-level calibration is calibrating supply and demand parameters at the network scale to ensure that supply and demand match up with one another.

Figure 3-3. Calibration of Both Supply and Demand Data Process Flow Chart

Some specific methods for calibrating both the supply and demand parameters are discussed further below.

3.2.3.1. Supply Parameters

Supply parameters are the characteristics of a roadway that describe its ability to move vehicles: capacity, speed, and flow. Supply parameters are determined by some type of traffic flow model, which dictates how vehicles move in general. Individual vehicle movements are not simulated in mesoscopic models; instead, mesoscopic tools use a macroscopic traffic flow model. The following model is the traffic flow model used by the DynaMIT software to determine the supply parameters (Park et al., 2004). This model uses the speed-flow relationship.
Where:
qobs = field flow (vph)

This is a two-regime function, meaning that free-flow conditions and non-free-flow conditions are treated separately. In the free-flow regime, u is the free-flow speed, while curve fitting is done to estimate speed in the second regime. The parameters α and β are determined by plotting and examining the collected data. DYNASMART and DynusT use similar traffic flow models. As noted, to calibrate the supply parameters for normal conditions, speed, flow, and density data from a real network during normal conditions need to be collected. The data should then be input into one of the model equations shown above. The model equation can be transformed into a linear model so that linear regression analysis can be performed and values for the traffic flow model parameters can be determined (i.e., α and β in the speed-flow relationship equation). This process can be repeated if necessary to improve the accuracy of the supply parameters. This process is well illustrated in the supply parameter calibration procedure.
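As an illustration of the linear-regression step, the sketch below fits the shape parameter of a modified Greenshields speed-density model by log-transforming it into a linear form. This is a simplified stand-in for the DynaMIT/DYNASMART two-regime models: the single-parameter form, the function name, and the synthetic data are assumptions for illustration, not the tools' exact equations.

```python
import numpy as np

def calibrate_speed_density(speeds, densities, free_flow_speed, jam_density):
    """Estimate the shape parameter alpha of a modified Greenshields model,
    u = u_f * (1 - k/k_jam)^alpha, by log-linear regression:
    ln(u/u_f) = alpha * ln(1 - k/k_jam)."""
    u = np.asarray(speeds, dtype=float)
    k = np.asarray(densities, dtype=float)
    # Keep observations strictly inside the model's valid range.
    mask = (k > 0) & (k < jam_density) & (u > 0) & (u <= free_flow_speed)
    x = np.log(1.0 - k[mask] / jam_density)  # regressor
    y = np.log(u[mask] / free_flow_speed)    # response
    # Least-squares fit through the origin: y = alpha * x.
    return float(np.dot(x, y) / np.dot(x, x))

# Synthetic observations generated with alpha = 2 should recover ~2.
k_obs = np.array([10, 30, 50, 80, 110, 140])
u_obs = 65.0 * (1 - k_obs / 180.0) ** 2
print(round(calibrate_speed_density(u_obs, k_obs, 65.0, 180.0), 2))  # 2.0
```

Repeating the fit on refreshed field data, as the text describes, simply means re-running this regression as more speed-density observations accumulate.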
3.2.3.2. Demand Parameters

A 2008 study by Park et al., titled Online Implementation of DynaMIT: A Prototype Traffic Estimation and Prediction Program, clearly describes the procedures for calibrating demand parameters during normal conditions. Their procedure is summarized in Section 3.2.3. As mentioned in the data preparation section, initial origin-destination matrices can be obtained by updating historical OD matrices. The update typically implements a gravity model with observed traffic counts. Optimization can then be used to check that the OD matrices are convergent. These new OD matrices are then run through the simulation to replace the original OD matrices. The resulting matrices are once again optimized for convergence and run through the simulation again. This process can be repeated as many times as necessary until the resulting matrices are well calibrated and accurately reflect the real network being simulated.

3.2.4. Calibration for Weather Impacts

As discussed, understanding the impact of weather on a transportation system could greatly improve transportation management, overall mobility, and efficiency. To include the impacts of weather in a traffic simulation, the supply and/or demand parameters are adjusted to better reflect either 1) what the roadway can offer in terms of capacity or travel speed or 2) the number of travelers commuting between origins and destinations and via which routes.

3.2.4.1. Supply Parameters

Determining the supply parameters for traffic analysis during inclement weather is a two-step process: first, Weather Adjustment Factors (WAFs) must be determined, and then they must be calibrated. WAFs are used to reduce supply parameters to a level that is more appropriate for the inclement weather conditions being observed. The three weather parameters used to determine WAFs are visibility, rain intensity, and snow intensity.
Visibility is measured in miles, and both rain and snow intensities are measured in inches per hour. A WAF then needs to be calculated for each supply parameter in the traffic flow model for each weather condition. The WAF is calculated as:

Where,
Fi = WAF for parameter i

Under inclement weather, the supply parameters are calculated as follows:

Where,
F' = weather-adjusted parameter

To calibrate the WAFs, the following calculation should be performed and the results analyzed using linear regression analysis:

Where,
F' = weather-adjusted parameter

3.2.4.2. Demand Parameters

The demand parameters are origins, destinations, and demand volumes. If weather causes drivers to change their destination, change the time at which they make their trip, or change the number of drivers using a particular route to complete their travel, then the demand parameters for that particular weather event will differ from the normal condition. Understanding these decisions would require understanding the behavior of every driver on the road. For the purposes of calibrating the simulation, historic driver and weather data can be paired and calibrated to generate factors similar to the WAFs used in the supply parameter calibration section. One method for determining and calibrating demand parameters is outlined by Samba and Park (2009). This study is one of the first attempts to analyze the impact of inclement weather on demand parameters. In their study, they proposed a probabilistic approach to determine the average percent reduction of traffic demand under rainy and snowy conditions for seven sites surrounding major central business districts in Minnesota and Virginia. Factors including time of day and varying precipitation intensity were incorporated in the analysis.
Weather data for Minnesota and Virginia were collected from the National Oceanic and Atmospheric Administration hourly precipitation reports for the years 2006 and 2007. Traffic data were grouped into 1-hour intervals to be consistent with the weather data format. They separated traffic data by month and parsed out weekends, holidays, and any other non-inclement weather days that produced atypical Average Daily Traffic (ADT) values or hourly volume curves (e.g., atypical ADT resulting from construction). A spreadsheet of the inclement weather days was then developed for the analysis. The non-inclement weather days were studied so that the mean volume and standard deviation could be produced for a typical dry day for each month. The resulting mean was used as a baseline to compute the percent difference of inclement weather volume for each hour. Equation 3-6 presents the percent difference equation used in the analysis. A 95 percent confidence value can be determined using the resulting standard deviation. The 95 percent confidence value is used to calculate threshold values above and below the mean within which volumes are expected to fall 95 percent of the time. If the absolute percent difference of a precipitation day's hourly volume exceeds the threshold value, then it can be stated with 95 percent confidence that the volume falls outside the expected range and can be attributed to the impact of inclement weather. The percent difference can be used to predict the changes to traffic demand caused by inclement weather conditions. Users can follow the proposed procedure:
This procedure can be best explained with an example:
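As a hedged sketch of this screening logic (the volumes below are made up, and the helper name is ours, not Samba and Park's), an hour's wet-weather volume can be tested against the dry-day 95 percent confidence band as follows:

```python
import statistics

def screen_wet_hour(dry_day_volumes, wet_hour_volume):
    """Compute the percent difference of an inclement-weather hourly volume
    from the dry-day baseline mean, and flag it when it falls outside the
    95 percent confidence band (+/- 1.96 sample standard deviations)."""
    mean = statistics.mean(dry_day_volumes)
    std = statistics.stdev(dry_day_volumes)
    pct_diff = 100.0 * (wet_hour_volume - mean) / mean
    # 95 percent band expressed as a percent of the dry-day mean.
    threshold_pct = 100.0 * 1.96 * std / mean
    return pct_diff, abs(pct_diff) > threshold_pct

# Dry-day volumes for one hour-of-day across five comparable days,
# compared against an 850 vph rainy-hour observation.
pct, significant = screen_wet_hour([1000, 1040, 980, 1020, 960], 850)
print(pct, significant)  # -15.0 True
```

A 15 percent drop well exceeds the roughly 6 percent band implied by this baseline, so the reduction would be attributed to the weather rather than to ordinary day-to-day variation.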
3.2.5. Performance Measures

To ensure that the model is working properly and providing reliable results, it needs to be validated against real data from the network being modeled. This is done by comparing the mesoscopic model outputs with field measurements. The following parameters can be established as performance measures for mesoscopic traffic analyses:
It is important that a performance measure be appropriate (i.e., it can provide an adequate representation of at least one objective established in the analysis) and measurable.

3.2.6. Weather Model Implementation and Analysis

This section presents mesoscopic model implementation and analysis under inclement weather conditions.

3.2.6.1. Supply Parameter Calibration

The two methods for determining and calibrating supply parameters are (1) obtaining inclement weather data and analyzing it by fitting it to the traffic flow models or (2) applying Weather Adjustment Factors (WAFs). The direct curve-fitting method estimates parameters of a traffic flow model directly, using flow and speed data observed during inclement weather conditions. Figure 3-4 shows the steps for this process. First, the data need to be sorted by weather condition. A speed-flow curve can then be plotted and the traffic flow model parameters determined. Note that Kjam represents jam density, and alpha and beta are calibration parameters of the mesoscopic traffic model.

Figure 3-4. Supply Parameter Calibration Procedure

Table 3-1 shows the supply parameters for inclement weather conditions that were estimated for the DynaMIT program using Hampton Roads data by following the supply parameter calibration procedure shown in Figure 3-4.
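The first step of the Figure 3-4 procedure, sorting observations by weather condition, can be sketched as follows. The record layout and the low-density free-flow-speed estimate are illustrative assumptions, not the procedure's exact mechanics; the remaining model parameters would then be fitted to each group's speed-flow curve.

```python
from collections import defaultdict
from statistics import mean

def free_flow_speed_by_weather(records, low_density_threshold=15.0):
    """Group field observations by weather condition, then estimate each
    condition's free-flow speed from its low-density observations.
    Records are hypothetical (weather_label, speed_mph, density_vpm) tuples."""
    by_weather = defaultdict(list)
    for weather, speed, density in records:
        if density < low_density_threshold:  # near free-flow conditions
            by_weather[weather].append(speed)
    return {w: mean(speeds) for w, speeds in by_weather.items()}

obs = [("clear", 66, 10), ("clear", 64, 12), ("clear", 40, 90),
       ("light rain", 58, 10), ("light rain", 60, 12)]
print(free_flow_speed_by_weather(obs))
```

The high-density clear-weather record is excluded from the free-flow estimate, so each weather group's fitted curve starts from a defensible free-flow speed.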
The WAF method can also be used to calibrate the supply parameters for inclement weather conditions; this method was discussed in Section 3.2.4. The WAF method is a regression model capturing the impacts of different weather conditions (e.g., normal, light rain, moderate rain, and light snow). The regression model is developed on the basis of speed-density relationship functions estimated for various weather conditions. Table 3-2 presents the calibrated coefficients of the weather adjustment factors.
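To make the WAF regression concrete, the sketch below applies a linear weather adjustment factor of the form F = b0 + b1·visibility + b2·rain + b3·snow to a supply parameter. The coefficient values are placeholders chosen for illustration, not the calibrated values in Table 3-2.

```python
def weather_adjusted_parameter(base_value, coeffs, visibility_mi, rain_in_hr, snow_in_hr):
    """Scale a supply parameter (e.g., free-flow speed or capacity) by a
    weather adjustment factor modeled as a linear function of visibility
    and precipitation intensities. Coefficients are assumed placeholders."""
    b0, b1, b2, b3 = coeffs
    waf = b0 + b1 * visibility_mi + b2 * rain_in_hr + b3 * snow_in_hr
    return base_value * waf

# Hypothetical coefficients; moderate rain as defined in the text:
# 1-mile visibility, 0.2 in/hr rain, no snow, on a 65 mph free-flow speed.
speed = weather_adjusted_parameter(65.0, (0.90, 0.01, -0.20, -0.30), 1.0, 0.2, 0.0)
print(round(speed, 2))  # 56.55
```

Each supply parameter gets its own calibrated coefficient set, so the same call is repeated for capacity, speed, and any other parameter the traffic flow model uses.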
The results of implementing weather adjustment factors were illustrated by Dong et al. (2010) when they simulated a stretch of the I-95 corridor and nearby arterial networks in Maryland from Washington, D.C. to Baltimore. The three scenarios simulated were a clear and normal weather day, a moderate rain event, and a heavy rain event. Moderate rain is defined as visibility of 1 mile and rain intensity of 0.2 inch/hour, while heavy rain is defined as visibility of 0.5 mile and rain intensity of 0.5 inch/hour. Supply parameters for these weather conditions were adjusted using WAFs from Table 3-3. As shown in Figure 3-5, the time-varying network travel times became longer as weather conditions worsened. This is due to rain reducing capacity and saturation flow rates.

Figure 3-5. Travel Time Comparisons under Three Weather Conditions

3.2.6.2. Demand Calibration

The estimation of demand parameters can be approached in two ways: (1) by adjusting demand "on the fly" or (2) by developing a mathematical representation of how demand may change. The first method calibrates demand (i.e., the OD matrix) by reactively updating OD demand based on real-time sensor counts during inclement weather conditions. This makes sense because studies have shown few changes during morning peak hours but some reductions during non-peak hours under inclement weather conditions. An example of the real-time operation of the DynaMIT program is its application in the Hampton Roads area of Virginia (Park et al., 2004). While the real-time implementation was not conducted during inclement weather conditions, dynamic OD estimation was implemented based on real-time sensor counts. Thus, demand (i.e., the OD matrix) was updated using observed traffic counts to reflect changes in network travel behaviors, including weather, recurrent, and non-recurrent events.
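The reactive OD-updating loop of the first method can be sketched in miniature as a fixed-point iteration. The `simulate` callback is a hypothetical stand-in for a full mesoscopic model run, and the single-ratio scaling is a deliberate oversimplification: real dynamic OD estimation solves a constrained optimization against many link counts.

```python
def calibrate_od(od_seed, simulate, observed_total, max_iter=10, tol=0.01):
    """Iteratively scale an OD matrix until the simulated count matches
    the observed sensor count. `simulate(od)` returns the total simulated
    count for the current OD matrix (a placeholder for a model run)."""
    od = [row[:] for row in od_seed]
    for _ in range(max_iter):
        simulated_total = simulate(od)
        ratio = observed_total / simulated_total
        if abs(ratio - 1.0) < tol:  # OD matrix has converged
            break
        od = [[cell * ratio for cell in row] for row in od]
    return od

# Toy demand model: the simulated count simply equals total OD demand.
result = calibrate_od([[200, 300], [100, 150]], lambda od: sum(map(sum, od)), 1000)
print(round(sum(map(sum, result))))  # 1000
```

During a rain event, the observed counts themselves embed the weather's demand effect, which is why this reactive loop needs no explicit weather term.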
The second method is less clear because no single model perfectly describes the factors that determine whether or not a person will decide to make a trip during various weather conditions; it is not clear what impact weather has on a driver's decision to make a trip. This method uses a probabilistic demand adjustment factor based on empirical data observed during inclement weather conditions, as described earlier in Section 3.2.4.2. Samba and Park (2009) applied this method using traffic and weather data from several locations in Virginia and Minnesota. The resulting probabilities of significant volume change under inclement weather, as well as the averages and ranges of volume changes, are shown in Table 3-3. Note that a volume change is considered significant when the volume reduction is larger than the normal traffic volume variation at a given location.
It should be noted that these estimated demand adjustment values may not be suitable for traffic operators conducting their own analyses because the values are specific to the study area (i.e., using these demand adjustment values elsewhere will not adequately reflect the change in demand due to inclement weather). Traffic operators can follow the procedure presented in Section 3.2.4.2 to obtain their own regionalized, inclement weather-based demand adjustment values.

3.2.6.3. Implementation and Analysis

A mesoscopic model under inclement weather conditions has not been implemented in the real world. Instead, a simulation study was conducted to demonstrate the impact of weather and control strategies (Mahmassani et al., 2009). In the study, the calibrated supply parameters were input to the I-95 corridor and adjacent arterial networks. The supply parameters were further adjusted for inclement weather conditions, including a rain event and a rain-plus-VMS event. Thus, the three scenarios, including the normal condition, are (1) a clear and normal day, (2) a rain event, and (3) a rain-plus-VMS event. The third scenario includes driver information to encourage route changes. In the implementation, Dong et al. (2010) assumed the weather event shown in Figure 3-6 during the morning peak hours.

Figure 3-6. Weather Events during Peak Hour

The results of these three simulations are seen in Figure 3-7. It is very clear that the rain scenario had the worst performance, as indicated by the low travel speeds. The rain-with-VMS case did not perform as well as the clear-day base case, but came close to the same speed at some points in the simulation.

Figure 3-7. Link Speed Comparisons

Similar studies could be performed for the other weather-responsive transportation management strategies listed previously to determine what impacts those measures have on traffic flow.
Additionally, because every weather event and every roadway is unique, these simulations could be run during an event to help determine what actions should be taken at that time for that specific event.

3.3. Microscopic Analysis

Of the three types of analysis discussed in this module, microscopic analysis, based on individual vehicle movement, is the finest representation of the transportation system. Microscopic simulation models simulate the movement of individual vehicles, which can be done by using car-following models, lane-changing models, and gap-acceptance models. Utilizing microscopic simulation, users can input detailed traffic data into the analysis, thereby creating an opportunity to incorporate diversity in vehicle and driver characteristics, enabling accurate simulation of real-world traffic.

3.3.1. Microscopic Traffic Simulation Model Set Up

The overall procedure of developing and applying microscopic traffic simulation modeling to a traffic analysis consists of seven steps, four of which are part of the model setup. FHWA constructed a flow chart of the model setup for microscopic analysis, which is depicted in Figure 3-2 in Section 3.2.1.

3.3.1.1. Project Scope

The project scope consists of five tasks: 1) define project purpose, 2) identify influence areas, 3) select approach, 4) select model, and 5) estimate staff time (Holm et al., 2007). It is appropriate to present questions in this section that would help identify the study breadth, i.e., the influence areas. Questions that can be asked pertaining to inclement weather are:
The criteria used for selecting the analytical tool are tied to the analytical approach. FHWA states that key criteria for choosing a modeling tool include technical capabilities, input/output interfaces, user training/support, and ongoing software enhancements (Dowling et al., 2004). To gain accuracy in simulating driver behavior during inclement weather conditions, it is necessary to choose a microscopic modeling tool such as CORSIM, VISSIM, PARAMICS, INTEGRATION, or AIMSUN2. The final task within the project scope is establishing the staff time for the project. Performing this task provides assurance that the project can be completed with the chosen approach in the allotted time. Creating a schedule for key project milestones is one approach to accomplishing this task.

3.3.1.2. Data Collection

Required input data for microscopic simulations vary based on the analytical tool and modeling application. In most microscopic modeling applications, required input data include the following:
For many transportation microscopic analyses, detailed data on vehicle and driver characteristics (e.g., vehicle length and driver aggressiveness) also need to be included in modeling applications. Since field data of such caliber are difficult to collect, they are established as default values in most microscopic traffic analytical tools. When possible, data collection should be managed in such a manner that there is consistency among the datasets (Road and Traffic Authority, 2009). For instance, travel time and queue length should be recorded during the same time period as traffic count data.

3.3.1.3. Base Model Development

Once the required data are collected, one can proceed to develop a microscopic simulation model. A successful model entails the development of a model blueprint: the link-node diagram. The link-node diagram is a visual representation of the study area in terms of links and nodes. It can be created within the microscopic simulation tool or with other tools such as CAD programs. Once the blueprint is established, the model can be built in the following sequence: coding links and nodes, establishing link geometries, adding traffic control data at appropriate nodes, coding travel demand data, adding driver behavior data, and selecting the control parameters that will be used to run the model (Dowling et al., 2004).

3.3.1.4. Error Checking

Before any model runs are made, it is both essential and beneficial to perform error checking so that the calibration process does not output distorted results. Calibrating model parameters relies on the assurance that major errors in demand and network coding are found and removed before proceeding with the model setup (Dowling et al., 2004). Error checking is carried out in three primary stages: 1) software error checking, 2) input coding error checking, and 3) animation review to find obscure input errors (Holm et al., 2007).

3.3.2. Data Preparation

Conducting data preparation allows for quality assurance in the input data of the study. It is made up of review, error checking, and the reduction of the data collected in the field. The main purpose is to check for any discrepancies in the data (e.g., breaks in geometric continuity, unexplained large gains or losses in traffic volume, and unrealistic speeds for roadway segments) before proceeding with the analysis. Failure to do so can result in false outputs generated by the model. Please refer to FHWA's Traffic Analysis Toolbox Volume III for detailed data verification and validation checks.

3.3.3. Traffic Model Calibration for Normal Conditions

Traffic model calibration is the process of fine-tuning the data inputs that represent characteristics of the vehicle and driver. This is executed by comparing and adjusting absolute measures (clear, definitive, and measurable parameters) such as flow rate and mean speed. Figure 3-8 presents a flow chart of the overall calibration and validation process for microscopic traffic simulation modeling.

Figure 3-8. Calibration and Validation Process Flow Chart

3.3.3.1. Simulation Model Set Up

The first part of the calibration and validation process involves the setup of the simulation model. It involves tasks identical to those in the microscopic traffic simulation setup previously discussed; please refer to that section for the model setup procedure.

3.3.3.2. Initial Evaluation

The default parameter set (i.e., uncalibrated parameters) is the focus of this stage of the calibration process. A feasibility test is required at this stage. This test is conducted to ensure that the field data are well represented by the distribution of the simulation results. If the default parameter set produces acceptable results (results that accurately reproduce field conditions), the calibration and validation procedure may be skipped and further analysis can be conducted with the default parameter set.
Two steps are involved in the feasibility test. First, the user performs multiple runs of the simulation model using the default parameter set. Then consecutive comparisons to the calibration data are made. Figure 3-9 presents the initial evaluation process.

Figure 3-9. Initial Evaluation Process

Multiple runs should be performed because a simulation does not produce exactly the same results on each run. The random seed number is the reason runs produce similar but not identical outputs. It is important to include the random seed number in microscopic simulation models because it acts as the decision-maker: it determines the speeds at which vehicles travel, the types of vehicles included in the simulation, and the paths the vehicles take. Without the random seed number, the simulation loses the stochastic nature of real-world conditions. Conducting multiple runs is a must, but how many runs should there be? Too many runs are not harmful in themselves, but they cost time spent on additional simulation runs; too few runs raise questions as to whether field conditions are well represented by the runs that have been conducted. The user can perform the following procedure to estimate the minimum number of simulation runs:
Typically, the user should perform four simulation runs in the first step of the procedure and then analyze the calibration data from these runs. This gives the user a feel for the distribution of simulation results and supports an educated guess as to how many simulation runs the analysis requires. The user can estimate the sample standard deviation using the calibration data from Step 1 and the following equation:

S = sqrt[ Σ (x − x̄)² / (N − 1) ]

Where,
x = output value for each simulation repetition
x̄ = mean of the output values
N = number of simulation repetitions

Calculating the sample standard deviation gives a sense of the spread of a set of data around its average value. Typically, a 95 percent confidence level is used in model simulation. Choosing a confidence level allows the user to decide the accuracy of the results: at a 95 percent confidence level, the user is 95 percent confident that field conditions fall within the range set by the simulation outcomes across repetitions. To find the minimum number of simulation repetitions, the user can use Table 3-4, which lists confidence levels ranging from 90 percent to 99 percent and the minimum number of repetitions corresponding to those confidence levels.
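The run-count estimate above can be sketched in code. The pilot outputs below are hypothetical, and the formula N ≥ (t·S / (C/2))², with a Student's t value for the chosen confidence level, is a standard sample-size rule assumed here as a stand-in for a Table 3-4 lookup, not a quote from the handbook.

```python
# Sketch of the minimum-run estimate: compute the sample standard
# deviation S from a few pilot runs, then size the study so the
# confidence interval half-width stays within a tolerated error.
import math
import statistics

def sample_std(outputs):
    """Sample standard deviation S of the pilot simulation outputs (N-1 denominator)."""
    return statistics.stdev(outputs)

def min_runs(outputs, half_width, t_value=3.182):
    """Estimate the minimum number of repetitions.

    half_width -- tolerated error around the mean (half the interval C)
    t_value    -- Student's t for the chosen confidence level and the
                  pilot degrees of freedom (3.182 is t for 95%, 3 d.o.f.)
    """
    s = sample_std(outputs)
    return math.ceil((t_value * s / half_width) ** 2)

pilot = [41.2, 39.8, 43.1, 40.5]   # hypothetical travel times (s) from 4 pilot runs
print(sample_std(pilot))           # spread of the pilot outputs
print(min_runs(pilot, half_width=2.0))
```

The four-run pilot mirrors Step 1 of the procedure; a wider tolerated error or a lower confidence level reduces the required number of repetitions.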
To use Table 3-4, a user must calculate the ratio of the confidence interval to the sample standard deviation, C1-α/S, where 1 − α is the selected confidence level and α is the significance level. If the confidence level is 95 percent, then α equals 0.05. With the minimum number of simulation runs estimated and conducted, the user must determine the validity of the default parameter set. This can be done with a histogram or an X-Y plot; both are visual aids for judging the worth of the default parameter set. A histogram shows the frequency of data points and allows the modeler to view the distribution of the simulation output data. For a feasible default parameter set, the field data must fall within the distribution formed by the simulation output data. Unlike a histogram, an X-Y plot does not require the frequency of a particular value, only the level or location of each simulation output. On an X-Y plot, each data point shows where two variables, in this case two performance measures, intersect. An example may help show how a user can determine the feasibility of the default parameter set. Assume that the field-collected data for performance measures 1 and 2 range from 62 to 86 and from 8.3 to 15.2, respectively, as outlined by the dark-shaded box in Figure 3-10.

Figure 3-10. Example of X-Y plot with Acceptable Results

3.3.3.3. Initial Calibration

Initial calibration consists of three steps: 1) identifying calibration parameters, 2) sampling different cases within a determined range, and 3) verifying whether the determined ranges are appropriate. The calibration parameters are values that the user places in the simulation model so that the simulation conditions accurately represent field conditions; they vary depending on the modeling application and the simulation model.
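Returning to the Figure 3-10 example, the X-Y plot box check can be expressed numerically: the default set is treated as feasible when the box spanned by the field data falls inside the envelope of the simulation outputs. The simulation points below are hypothetical; only the field ranges come from the example above.

```python
# Crude numeric stand-in for the visual X-Y plot feasibility check.
def box_feasible(sim_points, field_x_range, field_y_range):
    """True if the field-data box lies within the simulation-output envelope."""
    xs = [p[0] for p in sim_points]
    ys = [p[1] for p in sim_points]
    return (min(xs) <= field_x_range[0] and field_x_range[1] <= max(xs)
            and min(ys) <= field_y_range[0] and field_y_range[1] <= max(ys))

# One (measure 1, measure 2) pair per simulation repetition (hypothetical).
sim = [(58, 7.9), (71, 10.4), (80, 12.8), (90, 16.0), (66, 9.1)]
print(box_feasible(sim, (62, 86), (8.3, 15.2)))   # field ranges from Figure 3-10 example
```

A plotted check is usually preferable in practice, since it also reveals clustering and outliers that a simple min/max envelope hides.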
In CORSIM, calibration parameters range from the mean value of lost start-up time, to the minimum deceleration for a lane change, to the desired free-flow speed. Once the calibration parameters are identified, a user should begin sampling from the combinations of the parameter set. The number of combinations can be quite large, to the point of exceeding millions; sampling them exhaustively would be impractical because the process could last years. To avoid this obstacle, a user can apply an algorithm called Latin Hypercube Design (LHD), which reduces the number of combinations that need to be analyzed while still maximizing parameter coverage. As in the previous step of determining the feasibility of the default parameter set, a user needs to conduct multiple runs using the selected parameter sets. Doing so allows the user to simulate the stochastic nature of field conditions. After conducting multiple runs, the user needs to determine whether the parameter sets provide acceptable ranges. The procedures used to validate the default parameter set can be used here. Whichever procedure is used, histogram or X-Y plot, the field conditions must fall within the 90 percent confidence level for the parameter set range to be considered acceptable.

3.3.3.4. Feasibility Test and Adjustment

If the parameter set range from the initial calibration was determined unacceptable (field measures did not fall within the 90 percent distribution of the range), then a feasibility test is necessary to find an appropriate parameter set range for the main calibration process. This can be conducted by two methods: X-Y plots or a statistical method known as Analysis of Variance (ANOVA). These two methods allow the user to identify the key calibration parameters. When using X-Y plots, the user should look for a relationship between each calibration parameter and its corresponding measure.
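One crude numeric stand-in for this visual screening is the correlation coefficient between sampled parameter values and the corresponding measure; a strong |r| suggests a relationship. The paired samples below are hypothetical, and the report itself relies on plots and ANOVA rather than a correlation statistic.

```python
# Pearson correlation as a rough relationship screen for one parameter.
import math

def pearson_r(xs, ys):
    """Pearson correlation between parameter samples and a performance measure."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

lost_time = [1.0, 1.5, 2.0, 2.5, 3.0]   # hypothetical sampled parameter values (s)
delay = [18.2, 21.0, 24.1, 26.8, 30.2]  # simulated delay for each sample (s/veh)
print(abs(pearson_r(lost_time, delay)) > 0.8)   # strong relationship suggests a key parameter
```

Note that correlation only detects roughly linear relationships; the visual and ANOVA checks described here remain the methods the procedure prescribes.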
If the data points on the plot show a relationship, then that X-Y plot belongs to a key parameter. If the data points are scattered and no relationship can be made out, then the calibration parameter is not considered a key parameter. As with the X-Y plots, the user should look for a relationship between the calibration parameter and the corresponding measure when using ANOVA. This method provides statistical outputs such as the Sums of Squares, the F-statistic, and the p-value. For the purpose of identifying key parameters, the p-value is the most important statistic in ANOVA: if the p-value of a parameter is less than the significance level (e.g., 0.05), then that parameter is a key calibration parameter. Once the key parameters are identified, the user can adjust the calibration parameter ranges so that they reflect field conditions. This can be done by shifting the parameter range on the X-Y plot used in the key parameter identification step; the range should be shifted so that the field data are well represented. A feasibility test needs to be conducted again once the parameter ranges are adjusted.

3.3.3.5. Parameter Calibration

Once an acceptable parameter range is determined, the next step is to select the parameter set that best represents the data collected from the field. An optimization method, such as a Genetic Algorithm (GA), is used to complete this task. The GA uses a string of digits, known as a chromosome. These digits, which are generated at random, correspond to calibration parameter values; when the digits are generated, the parameter values are generated as well, and a randomized simulation run can then be completed.

3.3.3.6. Evaluation of Parameter Set

For this step, the user should assess the performance of the calibrated parameter set against another parameter set. This allows the user to confirm that the calibrated parameter set outputs more accurate results than the alternative.
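A toy sketch of such an assessment, under hypothetical numbers: outputs from repeated runs under each parameter set are compared against a field measurement, and a set is judged acceptable when the field value falls inside the middle 90 percent of its outputs (the same acceptable-region idea used later in validation).

```python
# Compare two parameter sets against a field measurement using a
# central-90-percent acceptable region over repeated-run outputs.
def in_acceptable_region(outputs, field_value, coverage=0.90):
    """True if field_value lies within the central `coverage` of the outputs."""
    data = sorted(outputs)
    tail = (1.0 - coverage) / 2.0
    lo = data[int(tail * (len(data) - 1))]
    hi = data[int((1.0 - tail) * (len(data) - 1))]
    return lo <= field_value <= hi

field_speed = 41.7  # hypothetical field-measured mean speed (mph)
default_runs = [47.2, 48.0, 48.9, 49.6, 50.3, 51.1, 51.8, 52.4, 53.0, 54.1]
calibrated_runs = [39.0, 40.1, 40.8, 41.3, 41.9, 42.4, 43.0, 43.6, 44.2, 45.5]
print(in_acceptable_region(default_runs, field_speed))     # default set misses the field value
print(in_acceptable_region(calibrated_runs, field_speed))  # calibrated set covers it
```

The percentile indexing here is a simplification; a histogram, as the procedure recommends, conveys the same comparison visually.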
Comparing the default parameter set to the calibrated parameter set can complete the evaluation step. As in previous steps, the user should conduct multiple runs for each model using a different parameter set and check the feasibility of each model using a histogram.

3.3.3.7. Model Validation

Validation against untried data is the last step of the model calibration and validation procedure. It is imperative to conduct this step because, if successful, it shows that the calibrated parameter set is versatile. In the validation process, the model is tested with the untried data: multiple runs are conducted with random seed numbers and the calibrated parameter set. The user then creates a histogram of the simulation output data to provide a visual representation of it. The field data must fall within the 90 percent confidence interval region (the acceptable region) in order to validate the fine-tuning applied to the parameter set during calibration. Figure 3-11 shows the histogram format used for model validation.

Figure 3-11. Concept of Acceptable Region

For additional information on the calibration and validation procedure, please refer to the Microscopic Simulation Model Calibration and Validation Handbook (Park and Won, 2006) and Traffic Analysis Toolbox Volume IV: Guidelines for Applying CORSIM Microsimulation Modeling Software (Holm et al., 2007).

3.3.4. Calibration for Weather Impacts

Calibration for weather impacts is no easy feat, but it is possible in microscopic traffic simulation modeling. This section discusses the methods that others have used to calibrate supply and demand parameters for inclement weather in existing microscopic traffic simulation models.

3.3.4.1 Supply Parameters

Supply parameters are defined as the conditions that the roadway offers to drivers, such as roadway capacity.
Roadway capacity in particular has been the focus of several past and recent studies on the effect of inclement weather on the transportation system. Based on a study conducted by FHWA (2009), it was recommended that longitudinal models (i.e., car-following, deceleration, and acceleration models) and other models be used to reflect field conditions during inclement weather. To calibrate for weather impacts in microscopic traffic analyses, weather conditions are used as the basis for adjusting macroscopic traffic stream parameters. These weather conditions are represented by Weather Adjustment Factors (WAFs). The macroscopic traffic stream parameters that are adjusted by these WAFs include the following: free-flow speed (uf), speed at capacity (uc), and saturation flow at capacity (qc). For each parameter, the WAF is a function of precipitation type, precipitation intensity, and visibility level, as seen in Equation 3-8:

WAF = f(i, v), for each precipitation type    (3-8)

Where,
i = precipitation intensity (cm/h)
v = visibility level

A user can determine the microscopic traffic simulation parameters after estimating the macroscopic traffic stream parameters that were adjusted using WAFs. The calibration of longitudinal motion in the traffic stream can be accomplished in the microscopic simulation analysis through steady-state and nonsteady-state modeling. Steady-state (stationary) conditions occur when traffic remains relatively constant over a short period of time and distance. Nonsteady-state analysis involves the movement of vehicles from one state to another; when calibrating for nonsteady-state behavior, the user can simulate capacity reduction, which results from traffic conditions such as congestion and capacity loss during lost start-up time. Calibration of steady-state and nonsteady-state behaviors can be performed with car-following models, which describe the behavior of drivers following a lead vehicle.
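To make the pieces concrete, the sketch below applies a hypothetical rain WAF to free-flow speed and uses the result in a simple constant-headway (Pipes-style) car-following speed rule. The WAF value, the headway rule, and all numbers are illustrative assumptions, not the FHWA (2009) models.

```python
# Weather-adjusted desired speed fed into a minimal car-following rule:
# the follower travels at the lower of its (WAF-adjusted) desired speed
# and a speed limited by the gap to the lead vehicle.
def follower_speed(gap_ft, free_flow_mph, waf, min_headway_s=1.5):
    """Follower speed (mph) under a desired-speed cap and a constant-headway cap."""
    desired_mph = free_flow_mph * waf          # weather-adjusted desired speed
    ft_per_s_per_mph = 5280.0 / 3600.0         # 1 mph = about 1.4667 ft/s
    gap_limited_mph = gap_ft / (min_headway_s * ft_per_s_per_mph)
    return min(desired_mph, gap_limited_mph)

# Large gap: desired speed governs; an assumed rain WAF of 0.9 lowers it.
print(follower_speed(gap_ft=500.0, free_flow_mph=65.0, waf=1.0))  # 65.0
print(follower_speed(gap_ft=500.0, free_flow_mph=65.0, waf=0.9))  # 58.5
# Short gap: the headway rule governs regardless of weather.
print(round(follower_speed(gap_ft=100.0, free_flow_mph=65.0, waf=1.0), 1))
```

In a real calibration, the WAF-adjusted parameters (uf, uc, qc, kj) would instead be mapped onto the parameters of the specific car-following model the simulation package uses.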
Currently, several car-following models exist in various microsimulation software packages. These include:
One can calibrate car-following models with the adjusted macroscopic traffic stream parameters: free-flow speed (uf), speed at capacity (uc), saturation flow at capacity (qc), and jam density (kj). Because these macroscopic traffic stream parameters are influenced by inclement weather, using them allows users to calibrate the car-following models under inclement weather conditions. The calibration procedure for steady-state behavior in these car-following models is as follows (Hranac et al., 2006):
Additional methods of calibrating for weather impacts exist, including deceleration modeling, acceleration modeling, gap acceptance modeling, and lane-changing modeling. For more information on the feasibility of incorporating weather-related factors into these analyses, please refer to the FHWA report (2009) Microscopic Analysis of Traffic Flow in Inclement Weather at: http://ntl.bts.gov/lib/32000/32500/32539/tfiw_final.pdf.

3.3.4.2. Demand Parameters

Very few studies have been successful in modeling the impacts of inclement weather on demand (e.g., traffic volume) because of the random nature of driver behavior. Studies that analyzed the impact of inclement weather on traffic conditions have typically assumed that demand remains at the volumes observed under normal, dry conditions. This approach quantifies the effect of inclement weather, but the accuracy of the modeled traffic demand suffers. A probabilistic approach, as discussed in Section 3.2.4.2, can be used to determine the average percent reduction in traffic demand under inclement weather, such as rainy and snowy conditions. Please refer to Section 3.2.4.2 for the procedure used to calibrate demand parameters for inclement weather.

3.3.4.3. Driver Behavior Parameters

Driver behaviors influence the supply and demand parameters. For instance, a driver who decides not to travel affects traffic demand, and a driver who slows down during inclement weather will exhibit lower speed, longer saturation headway, and greater lost start-up time. Saturation headway and lost start-up time are considered driver behavior parameters because they develop in response to supply and demand traffic parameters. Calibrating these two parameters is possible, but the procedure differs by analysis. Users can obtain such driver behavior parameters (e.g., saturation headway and/or lost start-up time) from the field during inclement weather conditions.
3.3.5. Performance Measures

The purpose of conducting simulation models is to recreate field conditions and to evaluate the impacts of untried strategies against the base case. To adequately assess the performance of a transportation facility, a user should select performance measures for the analysis. These measures give insight into how well a project meets its traffic operations objectives. The performance measures used in mesoscopic traffic analyses (see Section 3.2.5) can also be used in microscopic traffic analyses, including travel time, speed, delay, queue, stops, and density. The user may also establish a reliability measure (e.g., travel time variance) as a performance measure for microscopic traffic analyses. Travel time variance is easier to obtain in microscopic traffic analysis than in mesoscopic analysis because microscopic analysis can model the movement of individual vehicles. Performance measures should, of course, be measurable and relate to the objectives of the project.

3.3.6. Model Implementation and Analysis

Performing model implementation involves model development, including model setup, data preparation, and calibration and validation. Please refer back to the procedure for simulation model setup discussed in Section 3.3.1. Because the focus of this discussion is on weather model implementation, it is worth reviewing the procedures that others have adopted to incorporate the effects of inclement weather into their analyses. Since there is no universally "correct" method of modeling inclement weather impacts, procedures will vary.

3.3.6.1. Data Preparation

Modeling the impact of inclement weather in traffic analyses is a complex task. Therefore, users should reduce the data to avoid increasing the complexity of the analysis.
This can be accomplished by setting limits on what will be analyzed, such as restricting the number of lanes or datasets in the analysis. Use the goals and objectives of the analysis to decide which values are important and should remain in the analysis.

3.3.6.2 Calibration

To calibrate for weather impacts, a user can apply macroscopic traffic stream parameters that have been adjusted to reflect inclement weather conditions in microscopic simulation modeling. Weather Adjustment Factors (WAFs) can be used to adjust the key macroscopic traffic stream parameters: free-flow speed, speed at capacity, and saturation flow at capacity. Please refer to Section 3.3.4 for more information on calibration for weather impacts.

3.3.6.3. Model Implementation of Weather Impacts

Studies that have attempted to model the impact of inclement weather on the transportation system do not all use the same approach or method for model implementation. Some studies incorporate weather impacts by making assumptions about how traffic conditions react to weather, while others use actual field observations. This section presents a model implementation of weather impacts that uses updated microscopic simulation parameters reflecting traffic conditions under inclement weather. These parameters include:
For example, Lieu and Lin (2004) modified these microscopic traffic parameters by assuming reductions or increases to account for inclement weather conditions. In the weather case, which consisted of wet and slushy weather/road surface conditions, free-flow speed was reduced by 20 percent, and queue discharge headway and lost start-up time were increased by 20 percent relative to the base condition. Three scenarios were analyzed in the study: 1) the base case, 2) the weather case, and 3) the weather case with signal retiming. After one hour of simulation, speed was seen to drop significantly in the weather case: where average speed was 25 mph for a major-street demand of 1,450 vph in the base case, it was 16 mph for a reduced major-street demand of 1,230 vph in the weather case. Figure 3-12 shows the relationship between traffic speed and demand volume for 1 hour of simulation for the "weather without signal retiming" case and the "weather with signal retiming" case.

Figure 3-12. Relationship between Speed and Volume for Base and Weather Cases

Based on the analysis, implementing signal retiming specifically for inclement weather conditions improves traffic conditions when demand volumes on the major street are between 1,100 vph and 1,700 vph. As seen in Figure 3-13, a demand volume of 1,230 vph coincides with a travel speed of 19 mph when signal retiming is in effect for inclement weather conditions; without signal retiming, the travel speed at this demand volume is 16 mph. A user can develop optimal signal timing plans for inclement weather conditions and evaluate those plans using microscopic models that reflect those conditions. For more information on this method, please refer to Case Study 2.
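The ±20 percent adjustments reported for Lieu and Lin's weather case amount to simple scaling of the base-condition parameters. In the sketch below, the base values themselves are hypothetical; only the percentage adjustments come from the study as described above.

```python
# Lieu and Lin-style weather-case adjustments applied to assumed base values.
base = {
    "free_flow_speed_mph": 40.0,       # hypothetical arterial base value
    "queue_discharge_headway_s": 2.0,  # hypothetical base value
    "lost_startup_time_s": 2.0,        # hypothetical base value
}
weather = {
    "free_flow_speed_mph": base["free_flow_speed_mph"] * 0.80,              # -20%
    "queue_discharge_headway_s": base["queue_discharge_headway_s"] * 1.20,  # +20%
    "lost_startup_time_s": base["lost_startup_time_s"] * 1.20,              # +20%
}
print(weather)
```

These adjusted values would then replace the base-condition parameters in the weather-case simulation runs.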
United States Department of Transportation - Federal Highway Administration