Office of Operations
21st Century Operations Using 21st Century Technologies

3. Traffic Analysis Tools Incorporating Weather

3.1 Overview

No single method of traffic analysis is correct for every study, because traffic analyses do not all share the same objectives. Objectives are a key factor in choosing a type of traffic analysis. For instance, if a study's objective is to simulate traffic flow across multiple regions of a State, analysts need a traffic analysis tool that can handle the volume of data involved. The three main types of analysis are macroscopic, mesoscopic, and microscopic. This section defines each analysis type and discusses its modeling capabilities and limitations.

3.1.1 Macroscopic Analysis

If there is one thing to take away from learning about transportation operations, it is that flow, speed, and density are all related: the conditions of any two determine the third traffic stream parameter. For instance, if drivers on a highway can travel at their free-flow speeds and density remains below its maximum, traffic will flow smoothly. When users incorporate macroscopic simulation models into their traffic analyses, they are analyzing the relationship among these three traffic stream parameters.

When we consider the word "macroscopic," we think "large scale." Accordingly, a key feature of most macroscopic models is their ability to model large study areas. Using the flows, speeds, and density measures of a large network, macroscopic models can provide simple representations of the traffic behavior in that network. Because these models do not require detailed data such as driver characteristics, model set up can be done quickly and the simulation can output results in a timely manner (Traffic Analysis Toolbox Volume I – Alexiadis et al., 2004).

Although the ability of macroscopic models to output a simple representation of traffic flow in a timely manner is considered a benefit, it is considered a limitation as well. Macroscopic models can simulate traffic stream parameters (i.e., flow, speed, and density) on a large scope, but they cannot model detailed behavior in individual vehicle movements (e.g., saturation headway and lost startup time).

The FHWA Traffic Analysis Toolbox lists the commonly used macroscopic simulation models.

Note that no literature was found on weather-related analysis using macroscopic modeling, because weather impacts cannot be effectively analyzed using macroscopic tools. As a result, this chapter does not discuss incorporating weather into macroscopic analysis.

3.1.2 Mesoscopic Analysis

Macroscopic models can only provide so much detail in simulating real world traffic conditions. In some cases, research requires more in-depth simulation results. This is where mesoscopic models come into play. These models have the ability to model large study areas but they provide users with more detailed information than macroscopic models.

More detailed traffic scenarios can be modeled using mesoscopic simulation models. For instance, users have the capability to model diversion routes from major roadways (e.g., freeways and highways) to other road types (e.g., signalized arterial). This could not be accomplished using macroscopic models.

With these capabilities come weaknesses. One key limitation of mesoscopic models is the inability to model detailed operational strategies such as a coordinated traffic network. This operational strategy involves programming the traffic signals at several intersections so that the flow of traffic is optimized (i.e., drivers do not receive a red signal at each intersection they approach). Such an operation is better suited to a microscopic model because it requires more detailed data. In short, mesoscopic models simulate real-world traffic behavior more accurately than macroscopic models, but less accurately than microscopic models.

Commonly used mesoscopic simulation models include those from the DYNASMART and DynaMIT families. Recent studies of inclement weather impacts on the transportation system have incorporated mesoscopic analysis with these models. The FHWA Traffic Analysis Toolbox provides additional information on mesoscopic simulation models.

3.1.3 Microscopic Analysis

When research requires highly detailed analysis of real-world traffic behavior, users turn to microscopic simulation models. These models simulate the movement of individual vehicles using car-following models, longitudinal motion models (e.g., acceleration and deceleration models), gap-acceptance models, and lane-changing models.

Microscopic models allow users to simulate the stochastic nature of traffic. The drivers sharing the road do not all drive in the same manner; their thinking patterns and comfort levels vary with each traffic scenario presented to them. Incorporating driver behavior data is therefore essential to simulating traffic conditions with the highest accuracy.

The ability of microscopic models to simulate traffic behavior with high accuracy is a benefit but also a weakness. To gain such a high level of accuracy, microscopic simulation models require substantial amounts of roadway geometry, traffic control, traffic pattern, and driver behavior data. Providing this amount of data limits users to modeling smaller networks than those that can be modeled in macroscopic and mesoscopic analyses, and it also makes each simulation run considerably slower to produce results.

The FHWA Traffic Analysis Toolbox lists commonly used microscopic simulation models, including CORSIM, VISSIM, AIMSUN, and PARAMICS, and provides additional information on microscopic simulation tools.

3.2 Mesoscopic Analysis

As noted, this chapter does not cover macroscopic analysis as weather impacts cannot be effectively analyzed using macroscopic tools.

Mesoscopic analysis is an emerging method for simulating and studying traffic. As explained in the 3.1 overview section, macroscopic simulation is a highly aggregated method for analyzing traffic that assumes all vehicles on the roadway have the same characteristics. However, this method is not appropriate to predict and understand changes happening at the vehicular level. Microscopic analysis looks at every individual vehicle and its unique characteristics. This method makes identifying changes and conflicts among vehicles easier; however, it requires a great deal of computing power and is most effective in smaller geographic networks.

One advantage of mesoscopic analysis is the ability to analyze larger geographic areas than microscopic analysis while still providing some of the detailed data that macroscopic analysis cannot provide. Mesoscopic analysis also allows for the analysis of road segments, multiple routes within a network, basic signalized intersections, freeways and ramps.

The major disadvantage to mesoscopic analysis is the heavy data requirement. Mesoscopic analysis requires almost as much data as microscopic simulation and for large geographic regions the data requirements are comparable to those of transportation planning studies. Another disadvantage to mesoscopic analysis is that some complex traffic features currently cannot be simulated well, such as sophisticated traffic signals.

3.2.1 Mesoscopic Traffic Simulation Model Setup

There are a number of software packages available for mesoscopic traffic modeling. In the United States, the more commonly used software are the Traffic Estimation and Prediction Systems (TrEPS) tools, formerly known as Dynamic Traffic Assignment (DTA) tools. These tools include DynaMIT-P, DynaMIT-X, DYNASMART-P, DYNASMART-X, and DynusT. Some other mesoscopic tools are listed below. This TAT module focuses primarily on TrEPS tools.

Mesoscopic Traffic Simulation Software Options

CONTRAM (Continuous Traffic Assignment Model) http://www.contram.com

DynaMIT-P, DynaMIT-X, DYNASMART-P, DYNASMART-X, DynusT: http://www.dynamictrafficassignment.org

VISTA (Visual Interactive System for Transport Algorithms) http://www.vistatransport.com/ (revision date June 18, 2009)

(Source: Traffic Analysis Toolbox Volume II – Jeannotte et al., 2004)

A typical network in any of these programs might look something like Figure 3-1. It clearly shows the extent of the geographic area that this model can analyze: an area much larger than a typical microscopic simulation would be able to handle.

Figure 3-1. Sample DynusT Network in Portland, OR (DynusT, 2010)
Sample image of a large geographic area's road network. The image provides a scale example of the type of location that can be analyzed by the mesoscopic DynusT model.

The basic principles of traffic simulation vary little from one type of traffic simulation to the next. The overall procedure of developing and applying traffic simulation modeling to a traffic analysis consists of seven steps, four of which are part of the model set up. A flow chart of the model set up for traffic analysis is shown in Figure 3-2.

Figure 3-2. Flow Chart of Model Set Up
The overall procedure of developing and applying traffic simulation modeling to a traffic analysis entails four steps that are part of the model set up. These are Project Scope (which includes defining the project purpose, identifying influence areas, selecting the model, and estimating staff time), Data Collection, Base Model Development, and Error Checking.
(Source: Modified from Traffic Analysis Toolbox Volume III – Dowling et al., 2004)

3.2.1.1 Project Scope

An important first step for any project is assessing its scope. When it comes to traffic modeling, thoroughly scoping out the project can be very useful in deciding what traffic analysis tool is best for the goals of the project. Projects that require modeling of large geographic areas, faster computing times, or networks with several routes for drivers to take should consider using mesoscopic modeling; the drawback being a loss in fidelity in the output results. The microscopic simulation section further elaborates on this topic.

3.2.1.2 Data Collection

Setting up the model cannot be done until all of the necessary data have been collected. All of the model setup information falls into one of three groups: Network, Control, or Movement.

Network:

The network data contain all of the links and nodes that geographically build the network. Links represent roadways, and nodes represent points on the map where multiple roads connect. A node could be an intersection, a freeway ramp, or simply a point where the road curves to make for a more accurate representation of the roadway network.

Control:

The control data are needed for intersections where signals or signs are used to govern vehicle movements. The data would include information on the location and timing of traffic signals or the locations of Stop or Yield signs. It also includes data for ramp metering or variable message sign (VMS) information being provided to drivers.

Movement:

The movement data are also necessary for intersection control and define how a vehicle moves when at an intersection. These data work hand-in-hand with the Control data to accurately move vehicles throughout the network.
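These three data groups might be organized in code as sketched below. This is an illustration only; the class and field names are invented for this example and do not correspond to any particular mesoscopic package's input format.

    from dataclasses import dataclass

    @dataclass
    class Node:                      # a point where roads connect or bend
        node_id: int
        x: float                     # map coordinates
        y: float
        control: str = "none"        # e.g., "signal", "stop", "yield", "ramp_meter"

    @dataclass
    class Link:                      # a roadway segment between two nodes
        link_id: int
        from_node: int
        to_node: int
        length_mi: float
        lanes: int

    @dataclass
    class Movement:                  # how vehicles may move through a node
        at_node: int
        from_link: int
        to_link: int
        allowed: bool = True         # e.g., False for a prohibited left turn

    # A minimal network: two links meeting at a signalized node.
    nodes = {1: Node(1, 0.0, 0.0),
             2: Node(2, 1.0, 0.0, control="signal"),
             3: Node(3, 2.0, 0.0)}
    links = {10: Link(10, 1, 2, 1.0, lanes=2),
             20: Link(20, 2, 3, 1.0, lanes=2)}
    movements = [Movement(at_node=2, from_link=10, to_link=20)]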

3.2.1.3 Base Model Development

To complete this step, all of the collected data need to be organized and formatted into the proper program-specific input data so that the modeling tool can read and use them without problems. Mesoscopic traffic models can be very data intensive and require a large amount of input data to build an entire network.

3.2.1.4 Error Checking

The primary purpose for Error Checking is to ensure that the model being developed will accurately simulate what is occurring or will occur in real networks. Data calibration and validation will be discussed in detail in the coming sections.

3.2.2 Data Preparation

To prepare a mesoscopic simulation model, data are typically needed for estimating supply and demand parameters. Data for supply parameters are speed, volume, and density from each segment type of the transportation network that is being studied. Data for demand parameters (i.e., origin destination demand matrix) are the historical origin-destination (OD) matrix and observed counts.

Conducting data preparation allows for quality assurance in the input data of the study. It is made up of review, error checking, and the reduction of the data collected in the field (Dowling et al., 2004). Data verification and validation should be performed during the data preparation step. The following are data verification and validation checks:

Data Verification and Data Validation
  • Geometric and control data should be reviewed for apparent violations of design and/or traffic engineering practices. Sudden breaks in geometric continuity (such as a short block of a two-lane street sandwiched in between long stretches of a four-lane street) may also be worth checking with people who are knowledgeable about local conditions. Breaks in continuity and violations of design standards may be indicative of data collection errors.
  • Internal consistency of counts should be reviewed. Upstream counts should be compared to downstream counts. Unexplained large jumps or drops in the counts should be reconciled.
  • Floating car run results should be reviewed for realistic segment speeds.
  • Counts of capacity and saturation flow should be compared to the HCM estimates for these values. Large differences between field measurements and the HCM warrant double checking the field measurements and the HCM computations.

(Source: Traffic Analysis Toolbox Volume III – Dowling et al., 2004)



3.2.3 Traffic Model Calibration for Normal Conditions

Calibrating the mesoscopic traffic model is based on the same principle as calibration of any model. Real data such as vehicle counts, speed studies, and travel time data; the Highway Capacity Manual standards; and historical origin-destination demand data are all useful for calibrating and validating the model. Figure 3-3 briefly outlines the process of calibrating supply and demand data for a mesoscopic simulation. Figure 3-3(a) shows calibration as a three-step process: disaggregated-level, sub-network-level, and system-level calibration. Disaggregated-level calibration addresses the speed-density relationships and capacity of individual segments. As shown in Figure 3-3(b), sub-network calibration follows more specific procedures: it is the process of estimating and calibrating demand at the segment level. Finally, system-level calibration adjusts supply and demand parameters at the network scale to ensure that supply and demand match one another.

Figure 3-3 Calibration of Both Supply and Demand Data Process Flow Chart
(a) Framework for supply and demand calibration
Flow diagram describes the framework for supply and demand calibration.
(b) Subnetwork calibration
Flow diagram describes the framework for subnetwork calibration.
(Kunde, 2000)

Some specific methods for calibrating both the supply and demand parameters are discussed further below.

3.2.3.1. Supply Parameters

Supply parameters are the characteristics of a roadway that describe its ability to move vehicles: capacity, speed, and flow. Supply parameters are determined by a traffic flow model, which dictates how vehicles move in general. Individual vehicle movements are not simulated in mesoscopic models; instead, mesoscopic simulators move vehicles according to a macroscopic traffic flow model.

The following model is the traffic flow model used by the DynaMIT software to determine the supply parameters (Park et al., 2004). This model uses the speed-flow relationship.

Equation 3-1. $q = q_{obs}$ and $u = u_f$, if $q_{obs}/u_{obs} \leq k_0$

Equation 3-2. $q = u_{obs}\left[k_{jam}\left(1 - (u_{obs}/u_f)^{1/\alpha}\right)^{1/\beta} + k_0\right]$, if $q_{obs}/u_{obs} > k_0$

Where:

q_obs = field flow (vph)
u_obs = field speed (mph)
q = estimated flow (vph)
u = estimated speed (mph)
u_f = free flow speed (mph)
k_0 = free flow density (veh/mile/lane)
k_jam = jam density (veh/mile/lane)
α and β = calibration parameters

This is a two-regime function, meaning that free-flow and non-free-flow conditions are treated separately. In the free-flow regime, u is the free-flow speed, while curve fitting is used to estimate speed in the second regime. The parameters α and β are determined by plotting and examining the collected data. DYNASMART and DynusT use similar traffic flow models.
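A small sketch may help make the two-regime function concrete. The function below implements Equations 3-1 and 3-2 as reconstructed above; the function and argument names are this example's own, and the sample parameter values are taken from Table 3-1 (later in this section) purely for illustration.

    def estimate_flow(q_obs, u_obs, u_f, k0, k_jam, alpha, beta):
        """Two-regime speed-flow model (Equations 3-1 and 3-2, sketch).
        q_obs: field flow (vph); u_obs: field speed (mph); u_f: free-flow
        speed (mph); k0: free-flow density; k_jam: jam density (veh/mi/ln)."""
        k_obs = q_obs / u_obs                  # observed density
        if k_obs <= k0:
            return q_obs, u_f                  # Equation 3-1: free-flow regime
        # Equation 3-2: congested regime; recover density from the speed-density
        # curve, then flow = speed * density. (Equation 3-2 specifies only the
        # flow; the estimated speed is taken here as the field speed.)
        k_est = k_jam * (1.0 - (u_obs / u_f) ** (1.0 / alpha)) ** (1.0 / beta) + k0
        return u_obs * k_est, u_obs

    # Congested observation on a 3+ lane basic freeway (Table 3-1 parameters):
    q, u = estimate_flow(q_obs=1800, u_obs=40,
                         u_f=66, k0=15, k_jam=160, alpha=5.33, beta=1.65)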

As noted, to calibrate the supply parameters for normal conditions, speed, flow, and density data from a real network during normal conditions need to be collected. The data should then be input into one of the model equations shown above. The model equation could be transformed into a linear model so that linear regression analysis can be performed and values for traffic flow model parameters can be determined (i.e., α and β in speed-flow relationship equation). This process can be repeated if necessary to improve the accuracy of the supply parameters. This process is well illustrated in the supply parameter calibration procedure.

Supply Parameter Calibration Procedure

Step 1 Process observation data

Step 1.1 Categorize the traffic data (speed and occupancy), for each location.
Step 1.2 Convert occupancy into density
Step 1.3 For each location, perform Steps 2 through 5.
Step 2 Fit the data into a dual-regime model. For initial kbp of 10 vpmpl, do the following:
Step 2.1 Divide the data set into two subsets based on the initial kbp, that is, the first- and second-regime observations.
Step 2.2 For the first regime, the free-flow speed, uf , is estimated as the mean of the speeds. Root mean squared error for speeds is also calculated.
Step 2.3 For the second regime, set vo and kjam based on the observations, that is, the minimum speed observed and maximum density observed.
Step 2.4 Transform the second regime data, speed and density, as follows:

Equation. $Y = \ln(v_i - v_0)$ and $X = \ln(1 - k_i/k_{jam})$. Let $b = \ln(v_f - v_0)$.

Step 2.5 Perform linear regression of the function $Y = \alpha X + b$ to estimate α and b.
Step 2.6 Recover vf from the estimated b, that is, $v_f = e^{b} + v_0$
Step 2.7 Calculate R-squared value for the second regime.
Step 2.8 Calculate difference in estimated speeds at the joint of two regimes by comparing uf in the first regime and the modeled speed value at kbp in the second regime.

Step 3 Increase kbp by 1 vpmpl and repeat Step 2.1 to 2.8 until kbp becomes 30 vpmpl.

Step 4 Find the optimal value of kbp based on Measures of Effectiveness (MOEs) of the fitted models for each regime and joint fit observations for the entire models.

Step 5 Choose the function that best fits the data set for each weather condition.

(Source: Incorporating Weather Impacts in Traffic Estimation and Prediction Systems, Mahmassani et al., 2009)
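The boxed procedure lends itself to a compact implementation. The sketch below follows Steps 2 through 4 under the assumption, consistent with the transform in Step 2.4, of a modified Greenshields second regime, v = v0 + (vf − v0)(1 − k/kjam)^α; speed and density are numpy arrays, and all function names are invented for this example.

    import numpy as np

    def fit_second_regime(v, k, v0, k_jam):
        """Steps 2.4-2.7: log-transform the second-regime data and fit
        Y = alpha * X + b by least squares."""
        m = (v > v0) & (k < k_jam)          # keep points where the logs exist
        Y = np.log(v[m] - v0)               # Y = ln(v_i - v_0)
        X = np.log(1.0 - k[m] / k_jam)      # X = ln(1 - k_i / k_jam)
        alpha, b = np.polyfit(X, Y, 1)      # Step 2.5: slope alpha, intercept b
        v_f = np.exp(b) + v0                # Step 2.6: v_f = e^b + v_0
        resid = Y - (alpha * X + b)
        r2 = 1.0 - resid.var() / Y.var()    # Step 2.7: goodness of fit
        return alpha, v_f, r2

    def fit_dual_regime(speed, density):
        """Steps 2-4: sweep the breakpoint density k_bp from 10 to 30 vpmpl
        and keep the best-fitting dual-regime model."""
        best = None
        for k_bp in range(10, 31):
            first = density <= k_bp                       # Step 2.1: split regimes
            if not first.any() or (~first).sum() < 3:
                continue
            u_f = speed[first].mean()                     # Step 2.2
            v0 = speed[~first].min()                      # Step 2.3
            k_jam = density[~first].max()
            alpha, v_f, r2 = fit_second_regime(speed[~first], density[~first],
                                               v0, k_jam)
            if best is None or r2 > best["r2"]:
                best = dict(k_bp=k_bp, u_f=u_f, v0=v0, k_jam=k_jam,
                            alpha=alpha, v_f=v_f, r2=r2)
        return best

Step 5 would repeat this fit separately for each weather condition's data set and keep the best-fitting function for each.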



3.2.3.2. Demand Parameters

A 2008 study by Park et al., titled Online Implementation of DynaMIT: A Prototype Traffic Estimation and Prediction Program, clearly describes the procedures for calibrating demand parameters under normal conditions; their procedure is summarized here. As mentioned in the data preparation section, initial origin-destination (OD) matrices can be obtained by updating historical OD matrices, typically with a gravity model and observed traffic counts. Optimization can then be used to check that the OD matrices are convergent. These new OD matrices replace the original OD matrices and are run through the simulation. The resulting matrices are once again optimized for convergence and run through the simulation again. This process is repeated as many times as necessary until the resulting matrices are well calibrated and accurately reflect the real network being simulated.
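A highly simplified sketch of this loop is shown below. The `assign` argument stands in for an entire simulation run that maps an OD matrix to link counts, and the convergence test here is a crude total-count ratio; a real implementation would adjust individual OD cells (e.g., with a gravity model) rather than apply one global scale factor.

    import numpy as np

    def calibrate_od(od, assign, observed_counts, max_iter=20, tol=0.02):
        """Iteratively adjust an OD matrix until simulated link counts
        approach observed counts (illustrative sketch only)."""
        for _ in range(max_iter):
            simulated = assign(od)                       # run the simulation
            ratio = observed_counts.sum() / max(simulated.sum(), 1e-9)
            if abs(ratio - 1.0) <= tol:                  # convergence check
                return od
            od = od * ratio                              # scale demand and repeat
        return od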

3.2.4. Calibration for Weather Impacts

As discussed, understanding the impact of weather on a transportation system could greatly improve transportation management, overall mobility, and efficiency. In order to include the impacts of weather in a traffic simulation, the supply and/or demand parameters are adjusted to better reflect either 1) what the roadway can offer for capacity or travel speed or 2) the number of travelers commuting between origins and destinations and via which routes.

3.2.4.1. Supply Parameters

Determining the supply parameters for traffic analysis during inclement weather is a two-step process: first, Weather Adjustment Factors (WAFs) must be determined, and then they must be calibrated.

Weather Adjustment Factors (WAFs) are used to reduce supply parameters to a level that is more appropriate for the inclement weather conditions being observed. The three weather parameters used to determine WAFs are visibility, rain intensity, and snow intensity. Visibility is measured in miles and both rain and snow intensities are measured in inches per hour. A WAF then needs to be calculated for each supply parameter in the traffic flow model for each weather condition.

For each supply parameter i, the WAF is calculated as:

Equation 3-3. $F_i = \beta_0 + \beta_1 v + \beta_2 r + \beta_3 s + \beta_4 rv + \beta_5 sv$

Where,

Fi = WAF for parameter i
v = visibility (miles)
r = precipitation intensity of rain (inches/hour)
s = precipitation intensity of snow (inches/hour)
β0, β1, β2, β3, β4, β5 = coefficients

Under inclement weather, the supply parameters are calculated as follows:

Equation 3-4. $F' = F_o \times F_i$

Where,

F' = weather adjusted parameter
Fo = normal weather parameter
Fi = weather adjustment factor

In order to calibrate the WAFs the following calculation should be done and the results should be analyzed using linear regression analysis.

Equation 3-5. $F_i = F' / F_o$

Where,

F' = weather adjusted parameter
Fo = normal weather parameter
Fi = weather adjustment factor
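Read together, Equations 3-3 and 3-4 reduce to a few lines of arithmetic, as the minimal sketch below shows. The function names and the sample coefficient values are placeholders, not values from any calibrated model.

    def waf(v, r, s, beta):
        """Equation 3-3: weather adjustment factor from visibility v (miles),
        rain intensity r and snow intensity s (inches/hour)."""
        b0, b1, b2, b3, b4, b5 = beta
        return b0 + b1 * v + b2 * r + b3 * s + b4 * r * v + b5 * s * v

    def adjust_parameter(f_normal, f_i):
        """Equation 3-4: weather-adjusted parameter F' = F_o * F_i."""
        return f_normal * f_i

    # Hypothetical coefficients and a light-rain condition:
    beta = (0.9, 0.01, -0.4, -1.5, 0.0, 0.0)           # placeholder values
    f_i = waf(v=5.0, r=0.1, s=0.0, beta=beta)          # 5-mile visibility, light rain
    capacity_wet = adjust_parameter(2000.0, f_i)       # reduced capacity (vphpl)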

3.2.4.2. Demand Parameters

The demand parameters are origins, destinations, and demand volumes. If weather causes drivers to change their destination, change the time at which they make their trip, or results in a change in the number of drivers using a particular route in order to complete their travel, then the demand parameters for that particular weather event would be unlike the normal condition. Understanding these decisions would require understanding the behavior of every driver on the road. For the purposes of calibrating the simulation, historic driver and weather data can be paired up and calibrated to generate factors similar to the Weather Adjustment Factors used in the supply parameter calibration section.

One method for determining and calibrating demand parameters is outlined by Samba and Park (2009). This study is one of the first attempts to analyze the impact of inclement weather on demand parameters. The authors proposed a probabilistic approach to determine the average percent reduction of traffic demand under rainy and snowy conditions for seven sites surrounding major central business districts in Minnesota and Virginia. Factors including time of day and varying precipitation intensity were incorporated in the analysis. Weather data for Minnesota and Virginia were collected from the National Oceanic and Atmospheric Administration hourly precipitation reports for the years 2006 and 2007. Traffic data were grouped into 1-hour intervals to be consistent with the weather data format. The authors separated traffic data by month and parsed out weekends, holidays, and any other non-inclement-weather days that produced atypical Average Daily Traffic (ADT) values or hourly volume curves (e.g., atypical ADT resulting from construction). A spreadsheet of the inclement weather days was then developed for the analysis. The non-inclement weather days were studied so that the mean volume and standard deviation could be computed for a typical dry day in each month. The resulting mean was used as the baseline to compute the percent difference of inclement weather volume for each hour. Equation 3-6 presents the percent difference equation used in the analysis.

Equation 3-6. $\text{Percent Difference} = \dfrac{\text{Inclement Weather Volume} - \text{Dry Baseline Average Volume}}{\text{Dry Baseline Average Volume}}$

A 95 percent confidence value can be determined with the use of the yielded standard deviation. The 95 percent confidence value is used in analyses to calculate a threshold value that is above and below the mean that is expected 95 percent of the time. If the absolute percent difference of a precipitation day's hourly volume exceeds the threshold value then it can be stated with 95 percent confidence that the volume falls outside the expected range and can be attributed to the impact of inclement weather. The percent difference can be used to predict the changes to traffic demand that are caused by inclement weather conditions. Users can follow the proposed procedure:

Traffic Volume Reduction Procedure

Step 1: Determine precipitation impact probability. Depending on the amount of rain or snow, V percent can be determined from the maximum statistically significant volume.

Step 2: Determine a median percent reduction in traffic demand at a given hour PM-HR. The median percent reduction can be obtained using Equation 3-2.

Step 3: Determine traffic volume reduction as:

Volume Reduction Δ = V percent × PM-HR

Where,

V percent = probability of significant volume reduction
PM-HR = the median percent reduction from the mean at a specific hour


(Source: Samba and Park, 2009)


This procedure can be best explained with an example:

Volume Reduction Example Problem

Step 1: For a light snow event, the "V percent" value is 76 percent.

Step 2: Using Equation 3-2, the median percent volume reduction due to a snow event between 4 p.m. and 5 p.m. is 32 percent.

Step 3: A traffic volume reduction of 24.3 percent is obtained from 76 percent × 32 percent.

A traffic engineer can use multiple simulation runs to consider day-to-day variability in traffic demands. Assume a total of 50 replications will be made. As seen above, Step 1 determined a V percent of 76 percent; thus, traffic volume will change significantly with a 76 percent probability, and 38 of the 50 runs will have reduced traffic demand volumes. Furthermore, Step 2 shows a 32 percent reduction in traffic demand. Those 38 cases will therefore be evaluated with the 32 percent reduced traffic demand, while the remaining 12 runs will be made with the original traffic demand. Inclement weather impact can be estimated from these 50 runs using a distribution of selected measures of effectiveness (e.g., average speed or average travel time).


(Source: Samba and Park, 2009)
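The run-allocation logic in the example above can be sketched in a few lines. Assumptions: the function name and structure are invented, demand is represented as a single scalar volume, and the split of runs is deterministic (38 of 50), matching the worked example.

    def weather_demand_runs(base_volume, v_pct, median_reduction, n_runs=50):
        """Allocate simulation runs per the procedure above (sketch): a v_pct
        share of runs carries the median demand reduction; the rest keep the
        original demand."""
        n_reduced = round(v_pct * n_runs)          # e.g., 0.76 * 50 = 38 runs
        reduced = base_volume * (1.0 - median_reduction)
        return [reduced] * n_reduced + [base_volume] * (n_runs - n_reduced)

    # Light snow example: V percent = 76%, median hourly reduction = 32%.
    demands = weather_demand_runs(base_volume=1000.0, v_pct=0.76,
                                  median_reduction=0.32)
    # -> 38 runs at 680 vph and 12 runs at 1000 vph; selected MOEs are then
    #    summarized across all 50 runs.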


3.2.5. Performance Measures

To ensure that the model is working properly and providing reliable results, it needs to be validated against real data from the network being modeled. This is done by comparing the mesoscopic model outputs and field measurements. The following parameters can be established as performance measures for mesoscopic traffic analyses:

  • Travel Time;
  • Speed;
  • Delay;
  • Queue;
  • Stops; and
  • Density.

It is important that a performance measure be appropriate (i.e., it can provide an adequate representation of at least one objective established in the analysis) and measurable.

3.2.6. Weather Model Implementation and Analysis

This section presents mesoscopic model implementation and analysis under inclement weather conditions.

3.2.6.1. Supply Parameters Calibration

The two methods for determining and calibrating supply parameters are (1) obtaining inclement weather data and analyzing it by fitting it to the traffic flow models or (2) applying Weather Adjustment Factors (WAFs).

The direct curve-fitting method estimates parameters of a traffic flow model directly, using flow and speed data observed during inclement weather conditions. Figure 3-4 shows the steps for this process. First, the data need to be sorted by weather condition. A speed-flow curve can then be plotted and the traffic flow model parameters determined. Note that kjam represents jam density, and alpha and beta are calibration parameters of the mesoscopic traffic model.

Figure 3-4. Supply Parameter Calibration Procedure
Flow chart describes the supply parameter calibration procedure: 1) Segment classification; 2) Assign inclement weather condition (including capacity, free flow speed, and k0 estimation); 3) Data analysis (including flow-speed curve fitting and kjam, alpha, and beta calibration); 4) Satisfactory fit? 4a) If no, return to data analysis; 4b) if yes, these are the inclement weather condition supply parameters.
(Source: Modified from Field Evaluation of DynaMIT in Hampton Roads. Park et al., 2004)

Table 3-1 shows the supply parameters for inclement weather conditions that were estimated for the DynaMIT program using Hampton Roads data by following the supply parameter calibration procedure shown in Figure 3-4.

Table 3-1. Calibrated Supply Parameters
Segment Type | vf (mph) | kjam (vpmpl) | α | β | Capacity (vphpl) | k0 (vpmpl)
1-lane basic freeway | 61 | 160 | 5.21 | 1.68 | 2000 | 15
3 and 3+ lane basic freeway | 66 | 160 | 5.33 | 1.65 | 2100 | 15
2-lane merging area | 61 | 160 | 4.91 | 1.68 | 2150 | 15
3 and 3+ lane merging area | 64 | 160 | 5.16 | 1.71 | 2100 | 15
2-lane diverging area | 59 | 160 | 11.17 | 2.2 | 2000 | 15
3 and 3+ lane diverging area (I-64 and I-564) | 61 | 160 | 11.18 | 2.23 | 2100 | 15
3 and 3+ lane diverging area (I-264) | 56 | 160 | 5.16 | 1.85 | 2100 | 15
Weaving area (I-64 and I-564) | 61 | 160 | 9.62 | 2.05 | 2000 | 15
Weaving area (I-264) | 56 | 160 | 8.52 | 2.07 | 1900 | 15
Ramps | 46 | 150 | 1.82 | 1.52 | 1900 | 10


The WAF method can also be used to calibrate the supply parameters for inclement weather conditions; the method was discussed in Section 3.2.4. The WAF method is a regression model capturing the impacts of different weather conditions (e.g., normal, light rain, moderate rain, and light snow). The regression model is developed on the basis of speed-density relationship functions estimated for various weather conditions. Table 3-2 presents calibrated coefficients of the weather adjustment factors.

Table 3-2. Weather Adjustment Factor Coefficients
Input Data | Traffic Properties | β0 | β1 | β2 | β3 | β4 | β5
Traffic Flow Model | 1. Speed-intercept (mph) | 0.91 | 0.009 | -0.404 | -1.455 | 0 | 0
Traffic Flow Model | 2. Minimal speed (mph) | 1 | 0 | 0 | 0 | 0 | 0
Traffic Flow Model | 3. Density break point (pcpmpl) | 0.83 | 0.017 | -0.555 | -3.785 | 0 | 0
Traffic Flow Model | 4. Jam density (pcpmpl) | 1 | 0 | 0 | 0 | 0 | 0
Traffic Flow Model | 5. Shape term alpha | 1 | 0 | 0 | 0 | 0 | 0
Link | 6. Maximum service flow rate (pcphpl or vphpl) | 0.85 | 0.015 | -0.505 | -3.932 | 0 | 0
Link | 7. Saturation flow rate (vphpl) | 0.91 | 0.009 | -0.404 | -1.455 | 0 | 0
Link | 8. Posted speed limit adjustment margin (mph) | 0.91 | 0.009 | -0.404 | -1.455 | 0 | 0
Left-Turn Capacity | 9. g/c ratio | 0.91 | 0.009 | -0.404 | -1.455 | 0 | 0
2-way Stop Sign Capacity | 10. Saturation flow rate for left-turn vehicles | 0.91 | 0.009 | -0.404 | -1.455 | 0 | 0
2-way Stop Sign Capacity | 11. Saturation flow rate for through vehicles | 0.91 | 0.009 | -0.404 | -1.455 | 0 | 0
2-way Stop Sign Capacity | 12. Saturation flow rate for right-turn vehicles | 0.91 | 0.009 | -0.404 | -1.455 | 0 | 0
4-way Stop Sign Capacity | 13. Discharge rate for left-turn vehicles | 0.91 | 0.009 | -0.404 | -1.455 | 0 | 0
4-way Stop Sign Capacity | 14. Discharge rate for through vehicles | 0.91 | 0.009 | -0.404 | -1.455 | 0 | 0
4-way Stop Sign Capacity | 15. Discharge rate for right-turn vehicles | 0.91 | 0.009 | -0.404 | -1.455 | 0 | 0
Yield Sign Capacity | 16. Saturation flow rate for left-turn vehicles | 0.91 | 0.009 | -0.404 | -1.455 | 0 | 0
Yield Sign Capacity | 17. Saturation flow rate for through vehicles | 0.91 | 0.009 | -0.404 | -1.455 | 0 | 0
Yield Sign Capacity | 18. Saturation flow rate for right-turn vehicles | 0.91 | 0.009 | -0.404 | -1.455 | 0 | 0
(Source: Incorporating Adverse Weather Impacts in Dynamic Traffic Simulation-Assignment Models: Methodology and Application, Dong et al., 2010)


The results of implementing weather adjustment factors were illustrated by Dong et al. (2010) when they simulated a stretch of the I-95 corridor and nearby arterial networks in Maryland from Washington, D.C., to Baltimore. The three scenarios simulated were a clear and normal weather day, a moderate rain event, and a heavy rain event. Moderate rain is defined as visibility of 1 mile and rain intensity of 0.2 inch/hour, while heavy rain is defined as visibility of 0.5 mile and rain intensity of 0.5 inch/hour. Supply parameters for these weather conditions were adjusted using WAFs from Table 3-2. As shown in Figure 3-5, the time-varying network travel times became longer as weather conditions became worse, because rain reduces capacity and saturation flow rates.

Figure 3-5. Travel Time Comparisons under Three Weather Conditions
Graph illustrates the correlation between increased travel time and worsening weather conditions.
(Source: Incorporating Adverse Weather Impacts in Dynamic Traffic Simulation-Assignment Models: Methodology and Application, Dong et al., 2010)
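As a worked check of the WAF arithmetic, the speed-intercept coefficients in the first row of Table 3-2 can be applied to the two rain scenarios defined above. The snippet assumes the linear WAF form of Equation 3-3 with the interaction terms zero, as the zero-valued β4 and β5 columns in Table 3-2 imply.

    # Speed-intercept coefficients from Table 3-2 (b4 = b5 = 0):
    b0, b1, b2, b3 = 0.91, 0.009, -0.404, -1.455

    def speed_waf(visibility_mi, rain_in_hr, snow_in_hr=0.0):
        return b0 + b1 * visibility_mi + b2 * rain_in_hr + b3 * snow_in_hr

    moderate = speed_waf(1.0, 0.2)    # 0.91 + 0.009 - 0.0808 = 0.838
    heavy = speed_waf(0.5, 0.5)       # 0.91 + 0.0045 - 0.202 = 0.713

So the modeled speed-intercept drops to roughly 84 percent of its clear-weather value in moderate rain and 71 percent in heavy rain, which is consistent with the longer travel times shown in Figure 3-5.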

3.2.6.2. Demand Calibration

The estimation of demand parameters can be approached two ways: (1) by adjusting demand "on-the-fly" or (2) by developing a mathematical representation of how demand may change.

The first method calibrates demand (i.e., the OD matrix) by reactively updating OD demand based on real-time sensor counts during inclement weather conditions. This approach is reasonable because studies have shown little change in demand during morning peak hours but some reductions during non-peak hours under inclement weather conditions. An example of real-time operation of the DynaMIT program is its application in the Hampton Roads area of Virginia (Park et al., 2004). While the real-time implementation was not conducted during inclement weather conditions, dynamic OD estimation was implemented based on real-time sensor counts. Thus, demand (i.e., the OD matrix) was updated using observed traffic counts to reflect changes in network travel behaviors, including weather, recurrent, and non-recurrent events.

The second method is less clear-cut because no single model perfectly describes the factors that determine whether a person will decide to make a trip during various weather conditions. This method uses a probabilistic demand adjustment factor based on empirical data observed during inclement weather conditions, as described in Section 3.2.4.2. Samba and Park (2009) applied this method using traffic and weather data from several locations in Virginia and Minnesota. The resulting probabilities of significant volume change under inclement weather, as well as averages and ranges of volume changes, are shown in Table 3-3. Note that a volume change is deemed significant when the volume reduction is larger than the normal traffic volume variation at a given location.

Table 3-3. Demand Adjustment due to Inclement Weather Conditions
Weather Condition | Probability of Weather-Impacted Volumes | Volume Reduction Range | Average Volume Reduction
Light Rain | 16.6% | 1.0% to 6.3% | 2.32%
Heavy Rain | 31.3% | 3.1% to 4.4% | 3.75%
Light Snow | 76.0% | 10.6% to 56.2% | 28.80%
Heavy Snow | 42.9% | 4.7% to 30.4% | 13.30%
(Source: Probabilistic Modeling of Inclement Weather Impacts on Traffic Volume, Samba and Park, 2009)


It should be noted that these estimated demand adjustment values may not be suitable for traffic operators conducting their own analyses because these values correlate to a specific study area (i.e., using these demand adjustment values will not adequately reflect the change in demand due to inclement weather). Traffic operators could follow the procedure presented in Section 3.2.4.2 to obtain their own inclement weather-based regionalized demand adjustment values.

3.2.6.3. Implementation and Analysis

A mesoscopic model under inclement weather conditions has not been implemented in the real world. Instead, a simulation study was implemented to demonstrate the impact of weather and control strategies (Mahmassani et al., 2009).

In the study, the calibrated supply parameters were applied to the I-95 corridor and adjacent arterial networks and further adjusted for inclement weather conditions. Three scenarios were simulated: (1) a clear and normal day, (2) a rain event, and (3) a rain event with VMS. In the third scenario, variable message signs provide drivers with information to help them change routes.

In the implementation, Dong et al. (2010) assumed the weather event shown in Figure 3-6 during the morning peak hours.

Figure 3-6. Weather Events during Peak Hour
Graph illustrates increasing rain intensity with decreasing visibility, followed by decreasing rain intensity with increasing visibility, during the morning peak period.
(Source: Incorporating Adverse Weather Impacts in Dynamic Traffic Simulation-Assignment Models: Methodology and Application, Dong et al., 2010)

The results of these three simulations are seen in Figure 3-7. It is very clear that the rain scenario had the worst performance, as indicated by the low travel speeds. The rain with VMS case did not perform as well as the clear day base case, but did come close to the same speed at some points in the simulation.

Figure 3-7. Link Speed Comparisons
Graph illustrates the comparison of traffic under varying weather conditions (clear, VMS case, and rain) and shows that while traffic did slow in the VMS case, it was much slower in the rain case.
(Source: Incorporating Adverse Weather Impacts in Dynamic Traffic Simulation-Assignment Models: Methodology and Application, Dong et al., 2010)

Similar studies could be performed for other weather responsive transportation management strategies that were listed previously to determine what impacts those measures have on traffic flow. Additionally, because every weather event and every roadway is unique, these simulations could be run during an event to help determine what actions should be taken at that time for that specific event.

3.3. Microscopic Analysis

Of the three types of analysis discussed in this module, microscopic analysis, based on individual vehicle movement, is the finest representation of the transportation system. Microscopic simulation models simulate the movement of individual vehicles, which can be done by using car-following models, lane-changing models, and gap acceptance models. Utilizing microscopic simulation, users can input detailed traffic data into the analysis, thereby creating an opportunity to incorporate diversity in vehicles and driver characteristics, enabling accurate simulation of real-world traffic.

3.3.1. Microscopic Traffic Simulation Model Set Up

The overall procedure of developing and applying microscopic traffic simulation modeling to a traffic analysis consists of seven steps, four of which are part of the model set up. FHWA constructed a flow chart of the model set up for microscopic analysis, which is depicted in Figure 3-2 in Section 3.2.1.

3.3.1.1. Project Scope

The project scope consists of five tasks: 1) Define project purpose, 2) Identify influence areas, 3) Select approach, 4) Select model, and 5) Estimate staff time (Holm et al., 2007). At this stage it is appropriate to pose questions that help identify the study breadth (the influence areas). Questions pertaining to inclement weather include:

  • How large is the study area that is being analyzed for inclement weather impacts?
  • What weather-related resources are available to the analyst?
  • What measures of effectiveness (MOEs) will be required to analyze the inclement weather impacts?

The criteria used for selecting the analytical tool are tied to the analytical approach. FHWA states that key criteria for choosing a modeling tool include technical capabilities, input/output interfaces, user training/support, and ongoing software enhancements (Dowling et al., 2004). To gain accuracy in simulating driver behavior during inclement weather conditions, it is necessary to choose a microscopic modeling tool such as CORSIM, VISSIM, PARAMICS, INTEGRATION and AIMSUN2.

The final task within the project scope is establishing the staff time for the project. Performing this task helps ensure that the project can be completed with the chosen approach in the allotted time. Creating a schedule for key project milestones is one approach to accomplishing this task.

3.3.1.2. Data Collection

Required input data for microscopic simulations vary based on the analytical tool and modeling application. In most microscopic modeling applications, required input data include the following:

  • Road geometry (lanes, lengths, curvature);
  • Traffic controls (signs, signal timing);
  • Demands (entry volumes, turning volumes, O-D table); and
  • Calibration data (traffic counts and performance data such as speed, queues).

For many transportation microscopic analyses, detailed data on vehicle and driver characteristics (e.g., vehicle length and driver aggressiveness) also need to be included in modeling applications. Since field data of such caliber are difficult to collect, they are established as default values in most microscopic traffic analytical tools.

When possible, data collection should be managed in such a manner that there is consistency in the datasets (Road and Traffic Authority, 2009). For instance, travel time and queue length should be recorded at the same time period as traffic count data.

3.3.1.3. Base Model Development

Once the required data are collected, one can proceed to develop a microscopic simulation model. A successful model starts with the development of a model blueprint (the link-node diagram). The link-node diagram is a visual representation of the study area in terms of links and nodes. It can be created within the microscopic simulation tool or with other tools such as CAD programs. Once the blueprint is established, the model can be built in the following sequence: coding links and nodes, establishing link geometries, adding traffic control data at appropriate nodes, coding travel demand data, adding driver behavior data, and selecting control parameters that will be used to run the model (Dowling et al., 2004).

3.3.1.4. Error Checking

Before any model runs are made it is both essential and beneficial to perform error checking so that the calibration process does not output distorted results. Calibrating model parameters relies on the assurance that major errors in demand and network coding are found and removed before proceeding in the model set up (Dowling et al., 2004). Error checking is carried out in three primary stages: 1) software error checking, 2) input coding error checking, and 3) animation review to find obscure input errors (Holm et al., 2007).

3.3.2. Data Preparation

Conducting data preparation allows for quality assurance in the input data of the study. It is made up of review, error checking, and the reduction of the data collected in the field. The main purpose is to check for any discrepancies in the data (e.g., breaks in geometric continuity, unexplained large gains or losses in traffic volume, and unrealistic speeds for roadway segments) before proceeding with the analysis. Failure to do so can result in false outputs generated in the model. Please refer to FHWA's Traffic Analysis Toolbox Volume III for detailed data verification and validation checks.

3.3.3. Traffic Model Calibration for Normal Conditions

Traffic model calibration is the process of fine-tuning the data inputs that represent characteristics of the vehicle and driver. This is executed by comparing and adjusting absolute measures (clear, definitive, and measurable parameters) such as flow rate and mean speed. Figure 3-8 presents a flow chart of the overall calibration and validation process for microscopic traffic simulation modeling.

Figure 3-8. Calibration and Validation Process Flow Chart
Flow chart depicts the calibration and validation process.
(Source: Park and Won, 2006)

3.3.3.1. Simulation Model Set Up

The first part of the calibration and validation process involves the set up of the simulation model. It involves tasks identical to those in the microscopic traffic simulation set up previously discussed. Please refer to that section for the model set up procedure.

3.3.3.2. Initial Evaluation

The default parameter set (i.e., uncalibrated parameters) is the focus of this stage of the calibration process. A feasibility test is required at this stage. This is a test that is conducted to ensure that the field data is well represented by the distribution of the simulation results. If the default parameter set produces acceptable results (results that accurately reproduce field conditions), the calibration and validation procedure may be skipped and further analysis can be conducted with the default parameter set.

Two steps are involved in the feasibility test. First, the user needs to perform multiple runs of the simulation model using the default parameter set. Then consecutive comparisons to calibration data should be made. Figure 3-9 presents the initial evaluation process.

Figure 3-9. Initial Evaluation Process
Three step initial evaluation entails 1) Input Value (input file with default parameter set; 2) Trial with default parameter set; and 3) Output Value (feasibility of default parameter set by using histogram).
(Source: Park and Won, 2006)

Multiple runs should be performed because a simulation does not produce exactly the same results on each run. The randomly generated seed number causes the outputs to be similar but not identical across runs. The random seed is important in microscopic simulation models because it acts as the decision-maker: it determines the speeds at which vehicles travel, the types of vehicles included in the simulation, and the paths the vehicles take. Without the random seed, the simulation loses the stochastic nature of real-world conditions.

Conducting multiple runs is a must, but how many runs should there be? Too many runs are not necessarily bad, but they cost time in unnecessary simulation; too few raise questions as to whether field conditions are well represented by the runs that have been conducted. The user can perform the following procedure to estimate the minimum number of simulation runs:

  1. Execute a few simulation repetitions;
  2. Estimate the sample standard deviation;
  3. Select an appropriate confidence level; and
  4. Calculate minimum number of simulation runs.

Typically, the user should perform four simulation runs in the first step of the procedure and then analyze the calibration data from these runs. This is done so that the user can get a feel for what the distribution of simulation results will be and can make an educated guess as to how many simulation runs are required for the analysis.

The user can estimate the sample standard deviation using the calibration data from Step 1 and the following equation:

Equation 3-7. $S^2 = \dfrac{\sum (x - \bar{x})^2}{N - 1}$

Where,

x = output value for each simulation repetition
x̄ = average value of all simulation repetitions
N = number of simulation repetitions
S = standard deviation

One can get a sense of the difference between a set of data and the average value by calculating the sample standard deviation.

Typically, a 95 percent confidence level is used in simulation modeling. Selecting a confidence level allows the user to decide the accuracy of the results: at a 95 percent confidence level, the user is 95 percent confident that field conditions fall within the range set by the simulation outcomes across repetitions.

In finding the minimum number of simulation repetitions, the user can use Table 3-4, which consists of confidence levels ranging from 90 percent to 99 percent and the minimum number of repetitions corresponding to those confidence levels.

Table 3-4. Minimum Number of Simulation Repetitions
C1-α/S | Selected Confidence Level | Minimum Number of Repetitions
0.5 | 99% | 130
0.5 | 95% | 83
0.5 | 90% | 64
1 | 99% | 36
1 | 95% | 23
1 | 90% | 18
1.5 | 99% | 18
1.5 | 95% | 12
1.5 | 90% | 9
2 | 99% | 12
2 | 95% | 8
2 | 90% | 6
(Source: Park and Won, 2006)


To use Table 3-4, a user must calculate the ratio C1-α/S, where C1-α is the desired confidence interval at the selected confidence level (1 − α) and S is the standard deviation from Equation 3-7. For example, a 95 percent confidence level corresponds to α = 0.05.
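The procedure and lookup can be combined into a short helper. The sketch below computes S from a few pilot runs with Equation 3-7, forms the ratio, and rounds it down to the nearest tabulated row of Table 3-4; the names and the conservative rounding rule are this example's own choices.

    import statistics

    # Table 3-4, keyed by (C1-alpha/S ratio, confidence level) -> minimum runs.
    TABLE_3_4 = {
        (0.5, 0.99): 130, (0.5, 0.95): 83, (0.5, 0.90): 64,
        (1.0, 0.99): 36,  (1.0, 0.95): 23, (1.0, 0.90): 18,
        (1.5, 0.99): 18,  (1.5, 0.95): 12, (1.5, 0.90): 9,
        (2.0, 0.99): 12,  (2.0, 0.95): 8,  (2.0, 0.90): 6,
    }

    def minimum_runs(pilot_outputs, interval_c, confidence=0.95):
        s = statistics.stdev(pilot_outputs)        # Equation 3-7 (sample S)
        ratio = interval_c / s                     # C1-alpha / S
        rows = [r for r, _ in TABLE_3_4 if r <= ratio] or [0.5]   # round down
        return TABLE_3_4[(max(rows), confidence)]

    # Four pilot runs of average speed (mph) and a desired 2-mph interval:
    n = minimum_runs([52.1, 54.3, 51.7, 53.9], interval_c=2.0)   # ratio ~1.5 -> 12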

A user has now estimated the minimum number of simulation runs required for the analysis and has conducted them. Next, the user has to determine the validity of the default parameter set. This can be done with a histogram or an X-Y plot; both are visual aids that help the user judge the worth of the default parameter set.

The histogram shows the frequency of data points and allows the modeler to view the distribution of simulation output data. For the default parameter set to be feasible, the field data must fall within the distribution of the simulation output data.

Unlike a histogram, an X-Y plot does not require the frequency of a particular value; it requires the level or location of each simulation output data point. On an X-Y plot, each data point shows where two variables, in this case two performance measures, intersect. An example may help show how a user can determine the feasibility of the default parameter set. Assume that the field-collected data for performance measures 1 and 2 range from 62 to 86 and from 8.3 to 15.2, respectively, as outlined by the dark-shaded box in Figure 3-10.

Figure 3-10. Example of X-Y plot with Acceptable Results
Scatter plot. Assume that the field-collected data for performance measures 1 and 2 range from 62 to 86 and 8.3 to 15.2, which is outlined in a dark-shaded box. The light-shaded box represents the 90 percent confidence interval region of simulation output data, and can be seen overlapping the dark box, meaning that the default parameter set is feasible.
(Source: Park and Won, 2006)

3.3.3.3. Initial Calibration

Initial calibration consists of three steps: 1) identifying calibration parameters, 2) sampling different cases within a determined range, and 3) verifying whether the determined ranges are appropriate. Calibration parameters vary depending on the modeling applications.

The calibration parameters are values that the user sets in the simulation model so that the simulated conditions accurately represent field conditions. In CORSIM, for example, calibration parameters range from the mean value of lost start-up time to the minimum deceleration for a lane change to the desired free-flow speed.

Once the calibration parameters are identified, a user should begin sampling from the combinations of the parameter set. The number of combinations can be quite large, to the point of exceeding millions; in that case, sampling all combinations would be impractical because the process could take years. To avoid this obstacle, a user can apply an algorithm called Latin Hypercube Design (LHD), which reduces the number of combinations that need to be analyzed while still maximizing coverage of the parameter space, as sketched below.
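The minimal LHD sampler below shows the idea: each parameter range is cut into n equal strata, one value is drawn per stratum, and the strata are shuffled independently so every sampled parameter set pairs different strata across parameters. The ranges shown are hypothetical CORSIM-style values, not recommendations.

    import random

    def latin_hypercube(ranges, n_samples, seed=42):
        """Latin Hypercube sampling over calibration parameter ranges
        (sketch): returns n_samples parameter sets covering each range."""
        rng = random.Random(seed)
        columns = []
        for lo, hi in ranges:
            width = (hi - lo) / n_samples
            # one draw per stratum, then shuffle the stratum order
            col = [lo + (i + rng.random()) * width for i in range(n_samples)]
            rng.shuffle(col)
            columns.append(col)
        return list(zip(*columns))       # one tuple per candidate parameter set

    # Hypothetical ranges: mean start-up lost time (s), desired free-flow speed (mph)
    candidate_sets = latin_hypercube([(1.0, 3.0), (55.0, 70.0)], n_samples=10)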

Like the previous step of determining the feasibility of the default parameter set, a user needs to conduct multiple runs using the selected parameter sets. Doing so will allow the user to simulate the stochastic nature of field conditions. After conducting multiple runs, the user needs to determine if the parameter sets provide acceptable ranges. Procedures used to validate the default parameter set can be used here. Whichever procedure is used, histogram or X-Y plot, the field conditions must fall within the 90 percent confidence level in order for the parameter set range to be considered acceptable.

3.3.3.4. Feasibility Test and Adjustment

If the parameter set range from the initial calibration was determined unacceptable (field measures did not fall within the 90 percent distribution of the range), then a feasibility test is necessary to find an appropriate parameter set range for the main calibration process. This can be conducted by two methods: X-Y plots or the statistical method known as Analysis of Variance (ANOVA). Both methods allow the user to identify the key calibration parameters. When using X-Y plots, the user should look for a relationship between each calibration parameter and its corresponding measure. If the data points on the plot show a relationship, then that X-Y plot identifies a key parameter; if the data points are scattered and no relationship can be made out, then the calibration parameter is not considered a key parameter. Similarly, the user should look for a relationship between the calibration parameter and the corresponding measure when using ANOVA. This method provides statistical outputs such as sums of squares, the F-statistic, and the p-value. For the purpose of identifying key parameters, the p-value is the most important statistic: if the p-value of a parameter is less than the significance level (1 minus the selected confidence level), then that parameter is a key calibration parameter.

Once the key parameters are identified, the user can adjust the calibration parameter ranges so that they reflect field conditions. This can be done by simply shifting the parameter range on the X-Y plot used in the key parameter identification step; the range should be shifted so that the field data are well represented. A feasibility test then needs to be conducted again with the adjusted parameter ranges.

3.3.3.5. Parameter Calibration

Once an acceptable parameter range is determined, the next step is to select the parameter set that best represents data collected from the field. An optimization method, such as a Genetic Algorithm (GA), can be used to complete this task. The GA encodes each candidate parameter set as a string of digits known as a chromosome; the digits, generated at random, correspond to calibration parameter values. When the digits are generated, the parameter values are set, and a simulation run can then be completed with them.
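A bare-bones GA loop is sketched below; `fitness` stands in for running the simulation with a candidate parameter set and scoring its output against field data (higher is better). Everything here, selection scheme included, is a simplified illustration rather than the GA used by any particular calibration tool.

    import random

    def genetic_calibration(ranges, fitness, generations=30, pop_size=20, seed=7):
        """Calibrate parameters with a toy genetic algorithm (sketch)."""
        rng = random.Random(seed)

        def random_value(lo, hi):
            return lo + rng.random() * (hi - lo)

        # Each chromosome is a list of parameter values within the given ranges.
        pop = [[random_value(lo, hi) for lo, hi in ranges] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            survivors = pop[: pop_size // 2]                   # selection
            children = []
            while len(survivors) + len(children) < pop_size:
                a, b = rng.sample(survivors, 2)
                child = [rng.choice(genes) for genes in zip(a, b)]   # crossover
                i = rng.randrange(len(ranges))                       # mutation
                child[i] = random_value(*ranges[i])
                children.append(child)
            pop = survivors + children
        return max(pop, key=fitness)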

3.3.3.6. Evaluation of Parameter Set

For this step, the user should assess the performance of the calibrated parameter set against another parameter set. This allows the user to confirm that the calibrated parameter set outputs more accurate results than the alternative. Comparing the default parameter set to the calibrated parameter set can be completed in this evaluation step. As in previous steps, the user should conduct multiple runs for each model using a different parameter set and check the feasibility of each model using a histogram.

3.3.3.7. Model Validation

Validation against untried data is the last step of the model calibration and validation procedure. This step is imperative because, if successful, it shows that the calibrated parameter set is versatile. In the validation process, the model is tested with the untried data: multiple runs are conducted with random seed numbers and the calibrated parameter set. The user then creates a histogram of the simulation output data to provide a visual representation. The field data must fall within the 90 percent confidence interval region (the acceptable region) in order to validate the fine-tuning applied to the parameter set. Figure 3-11 shows the histogram format used for model validation.

Figure 3-11. Concept of Acceptable Region
Histogram showing a bell curve. Vertical dashed lines indicate the 5 percent, 50 percent, and 95 percent points on the curve; the areas outside the 5-95 percent region are labeled with red X's, and the areas to the left and right of the 50 percent point are marked with blue circles and are labeled 'acceptable region.'
(Source: Park and Won, 2006)
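The acceptance-region check in Figure 3-11 can be expressed in a few lines. The following is a minimal sketch, assuming 100 seeded runs with the calibrated parameter set on the untried data; the simulated outputs and field measure are illustrative placeholders.

```python
# Minimal sketch: does the field measure fall inside the 90 percent
# acceptable region (5th to 95th percentile) of the simulation outputs?
import numpy as np

rng = np.random.default_rng(7)
simulated = rng.normal(58.0, 2.5, size=100)  # 100 seeded runs (illustrative)
field_value = 56.4                           # measure observed in the field

low, high = np.percentile(simulated, [5, 95])  # bounds of the 90% region
if low <= field_value <= high:
    print(f"valid: field value {field_value} lies in [{low:.1f}, {high:.1f}]")
else:
    print(f"not valid: field value {field_value} outside [{low:.1f}, {high:.1f}]")
```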

For additional information on the calibration and validation procedure, please refer to Microscopic Simulation Model Calibration and Validation Handbook (Park and Won, 2006) and Traffic Analysis Toolbox Volume IV: Guidelines for Applying CORSIM Microsimulation Modeling Software (Holm et al., 2007).

3.3.4. Calibration for Weather Impacts

Calibration for weather impacts is no easy feat, but it is possible in microscopic traffic simulation modeling. This section discusses the methods that others have used to calibrate supply and demand parameters for inclement weather in existing microscopic traffic simulation models.

3.3.4.1. Supply Parameters

Supply parameters are defined as the conditions that the roadway can offer to drivers, such as roadway capacity. These parameters have been the focus of several past and recent studies on the effects of inclement weather on the transportation system.

Based on the study conducted by FHWA (2009), it was recommended that longitudinal models (i.e., car-following, deceleration, and acceleration models) and other models be used to reflect field conditions during inclement weather. To calibrate for weather impacts in microscopic traffic analyses, weather conditions are used as the basis for the adjustment of macroscopic traffic stream parameters. These weather conditions are represented by Weather Adjustment Factors (WAFs). The macroscopic traffic stream parameters that are adjusted by these WAFs include the following: free-flow speed (uf), speed at capacity (uc), and saturation flow at capacity (qc). Weather Adjustment Factors are a function of precipitation type, intensity level, and visibility level as seen in Equation 3-8:

Equation 3-8. WAF = α1 + α2·i + α3·i² + α4·v + α5·v² + α6·iv

Where,

i = Precipitation intensity (cm/h)
v = Visibility level (km)
iv = Interaction between precipitation and visibility
α1, α2, α3, α4, α5, α6 = Calibrated model parameters
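The following is a minimal sketch of applying Equation 3-8, assuming hypothetical calibrated alpha values; actual coefficients must come from calibration against weather-specific field data and will differ by traffic stream parameter and precipitation type.

```python
# Minimal sketch: compute a Weather Adjustment Factor per Equation 3-8
# and apply it to a macroscopic traffic stream parameter.
def weather_adjustment_factor(i, v, alphas):
    """WAF from precipitation intensity i (cm/h) and visibility v (km)."""
    a1, a2, a3, a4, a5, a6 = alphas
    return a1 + a2 * i + a3 * i**2 + a4 * v + a5 * v**2 + a6 * i * v

# Illustrative coefficients only (not from the report).
ALPHAS_FREE_FLOW_SPEED = (0.91, -0.09, 0.01, 0.005, -0.0002, 0.002)

waf = weather_adjustment_factor(i=0.5, v=5.0, alphas=ALPHAS_FREE_FLOW_SPEED)
adjusted_uf = waf * 65.0  # adjust a 65 mph free-flow speed for the weather
print(f"WAF = {waf:.3f}, adjusted free-flow speed = {adjusted_uf:.1f} mph")
```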

A user can determine the microscopic traffic simulation parameters after estimating the macroscopic traffic stream parameters that were adjusted using WAFs.

Calibration of longitudinal motion in the traffic stream can be accomplished in microscopic simulation analysis through steady-state modeling and nonsteady-state modeling. Steady-state (stationary) conditions occur when traffic remains relatively constant over a short period of time and distance. Nonsteady-state analysis involves the movement of vehicles from one state to another. When calibrating for nonsteady-state behavior, the user can simulate capacity reduction resulting from traffic conditions such as congestion and the capacity lost during startup time. Calibration of steady-state and nonsteady-state behaviors can be performed with the use of car-following models.

Car-following models explain the behavior of drivers in vehicles that follow the lead vehicle. Currently, several car-following models exist in various microsimulation software packages. These include:

  • The Pitt model (CORSIM);
  • The Wiedemann 74 and 99 models (VISSIM);
  • The Gipps model (AIMSUN2);
  • The Fritzsche model (PARAMICS); and
  • The Van Aerde model (INTEGRATION).

One can calibrate car-following models with the use of the adjusted macroscopic traffic stream parameters: free-flow speed (uf), speed at capacity (uc), saturation flow at capacity (qc), and jam density (kj). Remember, these macroscopic traffic stream parameters are influenced by inclement weather, so using them allows users to calibrate the car-following models under inclement weather conditions.

The calibration procedure for steady-state behavior in these car-following models is as follows (Hranac et al., 2006):

  1. Define the functional form to be calibrated;
  2. Identify the dependent and the independent variables;
  3. Define the optimum set of parameters; and
  4. Develop an optimization technique to compute the set of parameter values.
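The following is a minimal sketch of steps 3 and 4, substituting a simple Greenshields speed-density relation for the steady-state form of a specific car-following model; the weather-adjusted observations and starting values are illustrative placeholders.

```python
# Minimal sketch: least-squares calibration of a steady-state
# speed-density relation to weather-adjusted observations.
import numpy as np
from scipy.optimize import minimize

# Illustrative density-speed observations consistent with a reduced
# free-flow speed under rain.
k_obs = np.array([10.0, 40.0, 80.0, 120.0, 160.0])  # density (veh/mi)
u_obs = np.array([51.0, 43.0, 33.0, 22.0, 11.0])    # speed (mph)

def model_speed(params, k):
    """Greenshields steady state: u = uf * (1 - k / kj)."""
    uf, kj = params
    return uf * (1.0 - k / kj)

def sse(params):
    """Objective: sum of squared errors between model and observations."""
    return np.sum((model_speed(params, k_obs) - u_obs) ** 2)

result = minimize(sse, x0=[60.0, 180.0], method="Nelder-Mead")
uf_fit, kj_fit = result.x
print(f"calibrated uf = {uf_fit:.1f} mph, kj = {kj_fit:.0f} veh/mi")
```

The same least-squares structure applies when the functional form is replaced with the steady-state equation of a specific car-following model such as the Pitt, Gipps, or Van Aerde models.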

Additional methods of calibrating for weather impacts exist. These methods include: deceleration modeling, acceleration modeling, gap acceptance modeling, and lane-changing modeling. For more information on the feasibility of incorporating weather-related factors into these analyses, please refer to the FHWA report (2009) Microscopic Analysis of Traffic Flow in Inclement Weather at: http://ntl.bts.gov/lib/32000/32500/32539/tfiw_final.pdf.

3.3.4.2. Demand Parameters

Very few studies have been successful in modeling the impacts of inclement weather on demand (e.g., traffic volume) because of the random nature of driver behavior. Studies that analyzed the impact of inclement weather on traffic conditions typically assumed that demand remains at volumes observed under normal, dry conditions. This approach quantifies the effect of inclement weather, but the accuracy of traffic demand behavior suffers.

A probabilistic approach, as discussed in Section 3.2.4.2, can be used to determine the percent of average reduction in traffic demand under inclement weather, such as rainy and snowy conditions. Please refer to Section 3.2.4.2 to obtain the procedure that was used to calibrate demand parameters for inclement weather.
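The following is a minimal sketch of such a probabilistic demand reduction, assuming hypothetical average reduction percentages for rain and snow; actual values should be derived using the procedure in Section 3.2.4.2.

```python
# Minimal sketch: sample percent reductions in demand under inclement
# weather and apply them to a dry-weather demand volume.
import numpy as np

rng = np.random.default_rng(11)

DRY_DEMAND = 1450  # vph under normal, dry conditions (illustrative)

# Illustrative reduction distributions: (mean percent reduction, spread).
REDUCTION = {"rain": (0.05, 0.02), "snow": (0.15, 0.05)}

for weather, (mean_cut, sd_cut) in REDUCTION.items():
    cuts = rng.normal(mean_cut, sd_cut, size=1000).clip(0.0, 1.0)
    demand = DRY_DEMAND * (1.0 - cuts)
    print(f"{weather}: mean demand = {demand.mean():.0f} vph "
          f"(avg reduction {100 * cuts.mean():.1f}%)")
```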

3.3.4.3. Driver Behavior Parameters

Driver behaviors influence the supply and demand parameters. For instance, a driver who decides not to travel affects traffic demand, and a driver who slows during inclement weather will exhibit lower speed, longer saturation headway, and greater lost startup time. Saturation headway and lost startup time are considered driver behavior parameters because they develop in response to supply and demand traffic conditions. Calibrating for these two parameters can be accomplished, but the procedure will differ based on the analysis. Users can obtain such driver behavior parameters (e.g., saturation headway and/or lost startup time) from the field during inclement weather conditions.
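The following is a minimal sketch of deriving these two parameters from field-observed queue discharge headways at a signalized approach, assuming the common convention that vehicles in the fifth queue position and beyond discharge at the saturation headway; the headway values are illustrative of a wet-weather observation.

```python
# Minimal sketch: estimate saturation headway and lost startup time
# from field-observed queue discharge headways.
import numpy as np

# Headways (s) of the first 8 queued vehicles crossing the stop bar.
headways = np.array([3.8, 3.1, 2.7, 2.4, 2.3, 2.4, 2.3, 2.4])

# Convention: vehicles 5 onward are assumed to discharge at saturation.
saturation_headway = headways[4:].mean()
lost_startup_time = (headways[:4] - saturation_headway).sum()

print(f"saturation headway = {saturation_headway:.2f} s")
print(f"lost startup time = {lost_startup_time:.2f} s")
```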

3.3.5. Performance Measures

The purpose of conducting simulation modeling is to recreate field conditions and to evaluate the impacts of untried strategies against the base case. In order to adequately assess the performance of a transportation facility, a user should select performance measures for the analysis. These measures give insight into the performance of a project's traffic operations objectives. The performance measures used in mesoscopic traffic analyses (see Section 3.2.5) can also be used in microscopic traffic analyses. Such performance measures include travel time, speed, delay, queue, stops, and density. The user may also establish a reliability measure (e.g., travel time variance) as a performance measure for microscopic traffic analyses. Travel time variance is easier to obtain for microscopic traffic analysis than for mesoscopic analysis because microscopic traffic analysis has the capability to model the individual movement of vehicles.

When choosing performance measures, the user should ensure that they are measurable and related to the objectives of the project.
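The following is a minimal sketch of computing travel time variance and a related reliability index from per-vehicle travel times, which a microscopic model can report directly; the travel time values are illustrative placeholders.

```python
# Minimal sketch: reliability measures from per-vehicle travel times.
import numpy as np

travel_times = np.array([312, 298, 335, 301, 350, 322, 290, 341, 310, 327], float)

mean_tt = travel_times.mean()
variance = travel_times.var(ddof=1)        # travel time variance
p95 = np.percentile(travel_times, 95)
buffer_index = (p95 - mean_tt) / mean_tt   # one common reliability index

print(f"mean = {mean_tt:.0f} s, variance = {variance:.0f} s^2, "
      f"buffer index = {buffer_index:.2f}")
```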

3.3.6. Model Implementation and Analysis

Performing model implementation involves model development. This includes: model setup, data preparation, and calibration and validation. Please refer back to the procedure for simulation model setup discussed in Section 3.3.1.

Because the focus of this discussion is on weather model implementation, it is imperative to discuss the procedures that others have adopted to incorporate the effects of inclement weather in their analyses. Since a universally "correct" method of modeling to determine inclement weather impacts is nonexistent, procedures will vary.

3.3.6.1. Data Preparation

Modeling the impact of inclement weather in traffic analyses is a complex task. Therefore, users should reduce the data to avoid increasing the complexity of the analysis. This can be accomplished by setting limitations on what will be analyzed, such as restricting the number of lanes or datasets in the analysis. Use the objectives and goals of the analysis to decide which values are important and should remain in the analysis.

3.3.6.2. Calibration

To calibrate for weather impacts, a user can use macroscopic traffic stream parameters that have been adjusted to reflect inclement weather conditions in microscopic simulation modeling. Weather Adjustment Factors (WAFs) can be used to adjust the key macroscopic traffic stream parameters: free-flow speed, speed at capacity, and saturation flow at capacity. Please refer to Section 3.3.4 for more information on the calibration for weather impacts.

3.3.6.3. Model Implementation of Weather Impacts

Studies that have attempted to model the impact of inclement weather on the transportation system do not all use the same approach or method for model implementation. Some studies incorporate weather impacts by making assumptions as to how traffic conditions react to weather while other studies use actual field observations.

This section presents a model implementation of weather impacts by using updated microscopic simulation parameters reflecting traffic conditions under inclement weather conditions. These parameters include:

  • Free-flow speeds on roadways (mph);
  • Maximum acceleration and deceleration rates (ft/s²);
  • Gap acceptance for car-following, lane changing, and turning(s);
  • Queue discharge headways at intersection(s); and
  • Lost startup time at intersection(s).

For example, Lieu and Lin (2004) modified these microscopic traffic parameters by assuming reductions or increases to account for inclement weather conditions. The weather case, which consisted of wet and slushy weather/road surface conditions, had free-flow speed reduced by 20 percent and had queue discharge headway and lost startup time increased by 20 percent from those of the base condition. Three scenarios were analyzed in the study: 1) base case, 2) weather case, and 3) weather case with signal retiming. After one hour of simulation, speed was seen to drop significantly in the weather case: where average speed was 25 mph for a major-street demand of 1,450 vph in the base case, it fell to 16 mph for a reduced major-street demand of 1,230 vph in the weather case. Figure 3-12 shows the relationship of traffic speed and demand volume for 1 hour of simulation for the "weather without signal retiming" case and the "weather with signal retiming" case.

Figure 3-12. Relationship between Speed and Volume for Base and Weather Cases
Graph depicts the relationship between speed and volume for base and weather cases. The graph indicates noticeable benefits from weather with signal retiming versus weather without signal retiming.
(Source: Lieu and Lin, 2004)

Based on the analysis, implementing signal retiming specifically for inclement weather conditions improves traffic conditions if demand volumes on the major street are between 1,100 vph and 1,700 vph. As seen in Figure 3-12, a demand volume of 1,230 vph coincides with a travel speed of 19 mph when signal retiming is put into effect for inclement weather conditions. For this demand volume, travel speed is 16 mph when signal retiming is not implemented.
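The following is a minimal sketch of the parameter adjustments assumed in the Lieu and Lin (2004) weather case (free-flow speed reduced by 20 percent; queue discharge headway and lost startup time increased by 20 percent); the base values are illustrative placeholders rather than values from the study.

```python
# Minimal sketch: apply assumed weather-case adjustments to a base
# parameter set, following the percentages in Lieu and Lin (2004).
base = {
    "free_flow_speed_mph": 45.0,
    "queue_discharge_headway_s": 2.0,
    "lost_startup_time_s": 2.0,
}

weather = dict(base)
weather["free_flow_speed_mph"] *= 0.8        # 20 percent reduction
weather["queue_discharge_headway_s"] *= 1.2  # 20 percent increase
weather["lost_startup_time_s"] *= 1.2        # 20 percent increase

for name in base:
    print(f"{name}: base = {base[name]:.2f}, weather = {weather[name]:.2f}")
```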

A user can develop optimal signal timing plans for inclement weather conditions and evaluate those plans using microscopic models that reflect inclement weather conditions. For more information on this method please refer to Case Study 2.

