SPARK latest release v2.01-031113 - Last updated 6 November 2003.  
 

Frequently Asked Questions

This is a list of frequently asked questions (FAQ) for VisualSPARK users. If you need help with something that is not covered by the SPARK Reference Manual, the SPARK Atomic Class API, the SPARK Problem Driver API, or this FAQ, please email us.

This FAQ is intended to supplement, not replace, the SPARK documentation. Before emailing us a question, you should first check to see if the topic is covered in the various manuals.



SPARK Language

How to port atomic classes from SPARK 1.x to SPARK 2.x?

The syntax used to define the callback functions has changed in SPARK 2.x in order to support new features such as the extended callback mechanism and multivalued inverses. To make defining the callback functions easier for the user, the header file "spark.h" defines C preprocessor macros that hide the implementation details of argument passing as well as the function prototype. Note that the syntax of the EQUATIONS and FUNCTIONS blocks in the parsed portion of the atomic class has not changed.

To demonstrate how to use the new macros, let's port the simple "sum.cc" atomic class from SPARK 1.x to SPARK 2.x. The following code snippet shows this atomic class as found in the globalclass directory of the SPARK 1.02 distribution.

#ifdef SPARK_PARSER

PORT a	"Summand 1" ;
PORT b	"Summand 2" ;
PORT c	"Sum" ;

EQUATIONS {
	c = a + b ;
}

FUNCTIONS {
	a = sum_subtract( c, b ) ;
	b = sum_subtract( c, a ) ;
	c = sum_add( a, b ) ;
}

#endif /* SPARK_PARSER */
#include "spark.h"

double sum_subtract ( ARGS )
{
    ARGDEF(0, c);
    ARGDEF(1, b);
    double a;

    a = c - b;
    return a ;
}

double sum_add ( ARGS )
{
    ARGDEF(0, a);
    ARGDEF(1, b);
    double c;

    c = a + b;
    return c;
}

In order to port this atomic class to the syntax required by SPARK 2.x you need to:

  • use the EVALUATE preprocessor macro to declare the callback function in place of the explicit function prototype, and
  • use the RETURN preprocessor macro to return the new value for the target variables.
The listing below shows the implementation of the "sum.cc" atomic class with the modifications required for SPARK 2.x.

#ifdef SPARK_PARSER

PORT a	"Summand 1" ;
PORT b	"Summand 2" ;
PORT c	"Sum" ;

EQUATIONS {
	c = a + b ;
}

FUNCTIONS {
	a = sum_subtract( c, b ) ;
	b = sum_subtract( c, a ) ;
	c = sum_add( a, b ) ;
}

#endif /* SPARK_PARSER */
#include "spark.h"

EVALUATE( sum_subtract )
{
    ARGDEF(0, c);
    ARGDEF(1, b);
    double a;

    a = c - b;
    RETURN( a );
}

EVALUATE( sum_add )
{
    ARGDEF(0, a);
    ARGDEF(1, b);
    double c;

    c = a + b;
    RETURN( c );
}

As in SPARK 1.x the macro ARGDEF is used to define an argument through its 0-based position in the argument list as specified in the FUNCTIONS block. The position of the first element in the list is thus referred to with the index 0, the position of the second element with the index 1, and so forth.

A synonymous macro named ARGUMENT, with the same syntax, was added in SPARK 2.x. For example, the following code snippet defines the argument in position 0 of the argument list of the callback sum_subtract as a const reference to a SPARK::TArgument object named c. It is exactly equivalent to using the ARGDEF macro as in the previous code snippet.

EVALUATE( sum_subtract )
{
    ARGUMENT(0, c);
    ...
}

In SPARK 2.x the RETURN macro must be used to return the value of the target variable in place of the C++ language keyword return followed by the new double value.

Finally, if the atomic class you want to port to SPARK 2.x defines a PRED function in the FUNCTIONS block along with the main EVALUATE callback function, then you need to:

  • replace the PRED language keyword in the FUNCTIONS block with the new PREDICT language keyword,
  • use the PREDICT preprocessor macro to declare the callback function in place of the explicit function prototype, and
  • use the RETURN preprocessor macro to "return" the new value for the target variables, as sketched below.
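
For example, a predict callback for the "sum.cc" class might look as follows (a sketch that simply follows the pattern of the EVALUATE callbacks above; the callback name sum_add__predict is illustrative and must match the name used with the PREDICT keyword in the FUNCTIONS block):

PREDICT( sum_add__predict )
{
    ARGUMENT(0, a);
    ARGUMENT(1, b);
    double c;

    c = a + b;
    RETURN( c );
}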

Consult this article for a more detailed explanation of how to port atomic classes from SPARK 1.x to SPARK 2.x.

How to return the calculated values in a multi-valued inverse?

The RETURN preprocessor macro mimics the C++ language keyword return used in SPARK 1.x. However, it works only with single-valued inverses since it assumes that only one value at a time can be returned from a callback function. Therefore, we added a new mechanism in SPARK 2.x to return the new value for each target variable of a callback function. The target variables are the variables listed on the left-hand side of the '=' sign of each inverse specified in the FUNCTIONS block.

Unlike in SPARK 1.x, it is now possible in SPARK 2.x to gain write access to the target variables, much as the ARGUMENT macro grants you read access to the argument variables. This is accomplished with the preprocessor macro TARGET, whose syntax is similar to that of the ARGUMENT macro. For each target variable you need to specify:

  • the 0-based position of the target variable in the target list (i.e., starting at zero for the first variable), and
  • the name of the reference to the SPARK::TTarget object representing the target variable.
Assigning the new calculated value(s) to each target variable ensures the proper data flow across the set of unknown variables in the problem. Note that declaring the target variable in the body of the callback function with the TARGET macro and assigning the new value to it is equivalent to using the RETURN macro. Also note that the SPARK::TTarget class does not grant you read access to the current value of the target variable, as this would break the topological dependency implied by the FUNCTIONS block and used during the graph-theoretic processing in setupcpp. However, it is possible to access all other properties of a target variable, such as its name, its unit, its past values, and its absolute tolerance.

We demonstrate the usage of the TARGET macro in the following code snippet, which shows the multivalued atomic class "root2.cc". This atomic class computes the two real roots of a quadratic polynomial. Consult Section 4 of the SPARK Reference Manual for more details on this atomic class and its implementation.

// root2.cc 
// Multi-valued object that returns the 2 roots (root_plus, root_minus) 
// of a quadratic polynomial of the form a*x^2 + b*x + c = 0 
///////////////////////////////////////////////////////////////////////// 

#ifdef SPARK_PARSER 

PORT a; 
PORT b; 
PORT c; 
PORT root_plus; 
PORT root_minus; 

EQUATIONS { 
    a*x^2 + b*x + c = 0;
}

FUNCTIONS { 
    root_plus, root_minus = root2__mroot2( a, b, c ) ;
}

#endif /* SPARK_PARSER */ 
#include "spark.h" 

EVALUATE( root2__mroot2 ) 
{ 
    ARGUMENT( 0, a ); 
    ARGUMENT( 1, b ); 
    ARGUMENT( 2, c ); 
    TARGET( 0, root_plus );
    TARGET( 1, root_minus );
    
    double discriminantx = b*b - 4.0*a*c; 
    if (discriminantx < 0.0) { // Atomic class error 
        REQUEST__ABORT( "Cannot compute complex roots." );
    } 
    
    double square_discriminant = sqrt( discriminantx ); 
    
    root_plus = (-b + square_discriminant)/(2.0*a); 
    root_minus = (-b - square_discriminant)/(2.0*a); 
}

The target variables in the "root2.cc" atomic class are root_plus and root_minus. Within the body of the EVALUATE callback function root2__mroot2, we declare the two target variables with the statements:

TARGET( 0, root_plus );
TARGET( 1, root_minus );

At the end of the callback function we assign the new values to the target variables from the following expressions that evaluate to double values:

root_plus = (-b + square_discriminant)/(2.0*a); 
root_minus = (-b - square_discriminant)/(2.0*a); 

Note that if the discriminant is negative, we send an abort request to the solver to interrupt the simulation. This error checking is necessary because SPARK has no native support for complex numbers.

When are the PRED_FROM_LINK variables updated?

The PRED_FROM_LINK variables are the variables in the problem definition that are specified with the keyword PRED_FROM_LINK, like the variable linkA in the following code snippet.

LINK linkA ... PRED_FROM_LINK=linkB;

  • The PRED_FROM_LINK mechanism is invoked only if the target variable linkA is a break variable in the problem under study.
  • The PRED_FROM_LINK variables are updated at the beginning of each simulation step after the prepare step callbacks are fired for all components. In our example the current value of the fromLink variable linkB is assigned to the current value of the target variable linkA.
  • This prediction mechanism for the break variables happens before any components are evaluated and only once per step. At the end of the step the predicted values are likely to have been overridden with the solution values computed for each break variable for the current step.
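
As an illustration of this update sequence (with hypothetical values): if linkB holds 4.5 at the beginning of step n, then 4.5 is copied into the break variable linkA before any component is evaluated, serving as the prediction for the iterative solution; by the end of the step, linkA will normally hold the converged solution value instead.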

When are the INPUT_FROM_LINK variables updated?

The INPUT_FROM_LINK variables are the variables in the problem definition that are specified with the keyword INPUT_FROM_LINK, like the variable linkA in the following code snippet.

LINK linkA ... INPUT_FROM_LINK=linkB;

  • INPUT_FROM_LINK variables are input variables to the computational model, i.e., they are not computed by the SPARK solver. In that sense they play a role similar to that of the INPUT and PARAMETER variables, except that they get their values through a different mechanism.
  • The current value of an INPUT_FROM_LINK variable will always be the value of the fromLink variable from the last successfully computed step, i.e., the value of linkA in our example will always be the value of linkB from one step back. Using the SPARK notation for accessing the first past value: linkA = linkB[1].
  • This requires that, for the calculation of the initial step, the fromLink variable linkB be specified at the previous time step in order to ensure consistent initialization. The value will then be propagated correctly to the variable linkA when solving the initial step. Note that specifying a value directly for linkA, in an input file for example, is useless, as the SPARK solver will always propagate the value from linkB, even at the initial step.
  • When reporting an INPUT_FROM_LINK variable at the end of a successful step, the reported value will show the current value of the fromLink variable, although this is not the value that was used for the computation. The value used for the computation is always the value of the fromLink variable from one step back.
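
As an illustration of the last two bullets (with hypothetical values): if linkB was computed as 4.2 at step n-1 and as 4.5 at step n, then the value of linkA used while solving step n is 4.2, whereas the report written at the end of step n shows 4.5 for linkA.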

How to write comments in the SPARK-readable files?

Comments can be specified in input files, preference files (*.prf) and run-control files (*.run) in SPARK by using either one of the two supported formats:
  • /* C-style comments */
  • // C++-style comments

Note that C-style comments can span multiple lines, whereas C++-style comments extend only to the end of the current line. However, there is one major limitation of the SPARK comment mechanism: each starting comment delimiter such as '/*' or '//' must either be preceded by a white space (to indicate a new token) or appear at the beginning of a line. This is a major difference from the behavior of the C preprocessor. A comment delimiter appearing within a token will not be detected by SPARK.

// this is a valid comment
some text// this is NOT a comment
some more text //another valid comment
some more text/* this is NOT a valid comment*/ more text after comment
some more text /* this is a valid comment*/ more text after comment
some more text /* this is still a valid comment*/more text after comment

Which problem variables are reported at runtime?

Only the variables specified with the REPORT keyword in the problem definition files (i.e., *.pr, *.cm, and *.cc files) or the variables with a write URL containing the REPORT tag will be reported. You can also use the map file mechanism to override the static write URLs for any problem variables.
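
For example, a link declared along the following lines in a *.pr file would be reported (a sketch mirroring the elided LINK declarations shown elsewhere in this FAQ; see the SPARK Reference Manual for the complete declaration syntax):

LINK linkA ... REPORT ;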

How to specify the values in the native input files?

SPARK native input files must respect a predetermined format to ensure correct parsing at runtime.
  • The first non-comment line in the file must specify the number of columns and then provide a name without spaces for each column.
  • The following lines list the values for each column at the time stamp specified in the first column. Note that the time stamps must be specified in strictly increasing order.
  • The values for each time stamp must be specified on the same row. It is possible to have comments in the row as long as they don't span multiple lines.
  • The '*' meta-character in the first column indicates that the same values as at the previous time stamp are to be kept until the end of the simulation.
  • Similarly, the '*' meta-character specified in any column but the first one, after the first row, indicates that the previous value should be repeated for this column. E.g., in the following example of an input file, the variable y will also get the value 2.0 at the time stamp 1.0. It is illegal to use the '*' character in the first row since there is no previous value to propagate; doing so will produce a runtime error.
    // Various comments pertaining to the time-varying values
    3        x         y         z
    0.0      1.0       2.0       3.0
    1.0      1.1       *         2.4
    // More comments possible 
    3.0      *         5.0       3.0   /* Comments */ 
    *
    
  • Finally, the '*' meta-character used as the first and only time stamp indicates that the values are constant for the entire duration of the simulation.
    // List of constant values
    2        A         B
    *        1.0       2.0
    
    Of course, when using the '*' meta-character as the first time stamp, it is no longer possible to specify the '*' meta-character in place of a value, as all values must be explicitly specified.

SPARK Runtime Controls

Runtime control parameters are specified in the *.run file using the so-called preference settings syntax, also used in the *.prf file. In this section we discuss issues pertaining to the runtime control parameters only, independent of the file format.

How to perform a simulation with adaptive time stepping?

In order to perform a simulation with adaptive time stepping you should set the key VariableTimeStep to 1 in the *.run file. By default, it is set to 0, which will perform a simulation with constant time step.

(  ...
   VariableTimeStep    ( 1   ())
   MinTimeStep         ( 1.0e-6 ())
   MaxTimeStep         ( 10.0 ())
   ...
)

When performing a simulation with variable time stepping, it is recommended to also set the minimum and maximum allowed time steps using the keys MinTimeStep and MaxTimeStep.

Setting the key VariableTimeStep to 1 in the *.run file tells the solver that it should adapt the time step to respond to various time-related requests. For example, the solver will try to synchronize the global time with the user-requested meeting points or the various report and input events. However, the most important application of the adaptive time stepping operation is in conjunction with the new integrator classes that provide automatic integration error control. These new integrator atomic classes adapt the simulation time step so that the estimated integration error in the dynamic variables satisfies the prescribed relative tolerance specified in the global settings section of the preference settings file.

The new integrator atomic classes can be found in the globalclass directory of your VisualSPARK installation. Their filenames follow the pattern "integrator_*.cc". Since they define the same port interface as the other integrator classes, they can be used readily, simply by changing the name of the integrator class in the DECLARE statements.
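
For example, switching a problem to one of the error-controlled integrators amounts to changing the class name in the DECLARE statement (a sketch; the class name integrator_euler and the object name x_integrator are hypothetical, and the actual class names can be read off the "integrator_*.cc" files in globalclass), where the original declaration might have read DECLARE integrator x_integrator ;

DECLARE integrator_euler x_integrator ;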

How to specify an "infinite" final time?

When using the '*' meta-character for the FinalTime entry, you essentially tell the SPARK solver to run the simulation until "infinity". However, because of the fixed-length representation of numbers on computers, infinity really means some very large number. Therefore, the simulation will run until the value of the global time variable reaches the largest possible floating-point number in double precision. With the standard 8-byte representation of the double type this corresponds approximately to 1.7E+308.

(  ...
   FinalTime    ( *   ())
   ...
)
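
That largest value corresponds to the DBL_MAX constant of the C standard library; the following standalone snippet prints it (illustrative only):

#include <cfloat>
#include <cstdio>

int main()
{
    // Largest finite double-precision value, approximately 1.7977E+308
    std::printf( "%g\n", DBL_MAX );
    return 0;
}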

How to generate reports for all computed steps?

Specifying a ReportCycle of zero forces the simulator to report the solution values at each computed step. A negative report cycle value, however, is not meaningful.

(  ...
   FirstReport    ( 0.0   ())
   ReportCycle    ( 0.0   ())
   ...
)

SPARK Preference Settings

The preference settings are specified in the *.prf file using the so-called preference settings syntax, which relies on pairs of parentheses "( )" to delimit the various tokens. See the SPARK Reference Manual for more information on the syntax. In this section we discuss issues pertaining to the preference settings only, independent of the file format.

How to deal with bad numerics?

When SPARK detects bad numerics (i.e., Not a Number or an infinite number) in a strongly-connected component, you can try setting the CheckBadNumericsFlag in the component section of the preference settings file to 1 in order to activate a more robust numerical treatment that is likely to cure the problem. Essentially, SPARK will check for bad numerics in the residual functions at each iteration and halve the relaxation coefficient until the bad numerics situation is no longer detected in the resulting residuals. This mode of operation is very costly because it happens at the innermost level of the nonlinear solver. Therefore, it is not selected by default and you should select it only for the components that require it.

(  
   ComponentSettings ( 
      0 ( ...
        CheckBadNumericsFlag  ( 1   ())
        ...
      )
   )
)

For example, the previous code snippet sets the robust bad numerics checking mode on for the component 0. This setting might be required to solve highly nonlinear equation systems that are likely to produce bad numerics in the residuals if the full Newton step is applied.
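
Conceptually, the recovery strategy behaves like the following sketch (illustrative C++ code for the halving loop described above, not SPARK's actual implementation):

#include <cmath>
#include <cstddef>
#include <functional>
#include <vector>

// True if any residual is NaN or +/-Inf.
static bool has_bad_numerics( const std::vector<double>& r )
{
    for ( double v : r )
        if ( !std::isfinite( v ) ) return true;
    return false;
}

// Halve the relaxation coefficient until the residuals evaluated at the
// new iterate are finite again (or the coefficient becomes too small).
std::vector<double> relaxed_newton_step(
    const std::vector<double>& x,
    const std::vector<double>& dx,
    const std::function<std::vector<double>( const std::vector<double>& )>& residual,
    double lambda,
    double min_lambda )
{
    std::vector<double> x_new( x.size() );
    for ( ;; ) {
        for ( std::size_t i = 0; i < x.size(); ++i )
            x_new[i] = x[i] + lambda * dx[i];
        if ( !has_bad_numerics( residual( x_new ) ) || lambda <= min_lambda )
            return x_new;
        lambda *= 0.5;  // halve the relaxation coefficient and retry
    }
}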

How to deal with no convergence?

For each strongly-connected component SPARK solves a system of nonlinear equations for the set of break variables identified by the setup program during the graph-algorithmic analysis. By default the nonlinear solver uses the Newton method with a fixed relaxation coefficient, typically set to 1. The Newton method is locally q-quadratically convergent, which means that if the initial approximation is good enough then it will be improved rapidly until the desired precision is achieved.

However, applying the full Newton step at each iteration might be unsatisfactory when the initial approximation is not good enough. Such situations might lead to poor convergence or even divergence of the solution process. In order to ensure finding a solution of the system of nonlinear equations from almost any starting point, SPARK implements globally convergent strategies derived from the Newton method. Such methods are usually referred to as quasi-Newton methods. They behave like the Newton method close to the solution, thus retaining its local convergence rate, whereas away from the solution they rely on a step dictated by a global method to ensure that a reasonable step is taken.

In SPARK these strategies are called the step control methods. They are responsible for determining the direction and the length of the step that the nonlinear solver takes at each iteration. In particular, they adapt the relaxation coefficient to ensure global convergence. SPARK implements the following step control methods, listed from the least robust and cheapest strategy to the most robust and most computationally expensive one:

  • [0] Fixed relaxation coefficient strategy (default)
  • [1] Basic halving strategy
  • [2] Linesearch backtracking strategy
  • [3] Affine invariant backtracking strategy
When the default step control method does not converge, you should try one of the more robust methods by specifying the corresponding value with the key StepControlMethod in the component section of the preference settings file. For each strategy listed above, the value used to identify it in the preference settings file is shown in square brackets.

(  
   ComponentSettings ( 
      0 ( ...
        StepControlMethod  ( 2   ())
        RelaxationCoefficient ( 1.0 ())
        MinRelaxationCoefficient ( 1e-10 ())
        ...
      )
   )
)

For example, setting the step control method to 2 in the previous code snippet selects the linesearch backtracking strategy for component 0. This strategy will use a relaxation coefficient of 1.0 close to the solution, and away from the solution it is allowed to reduce the relaxation coefficient down to the minimum value of 1.0e-10. Such settings are typical of highly nonlinear systems of equations.

Finally, it is fundamental that the model equations be continuously differentiable; otherwise the convergence assumptions underlying the Newton method break down and SPARK will not be able to compute the solution of the nonlinear equations. The way the equations are formulated in the atomic classes is critical to guaranteeing the differentiability of the system. The model writer should also always be aware of the possibility that the Jacobian matrix derived from the set of equations may be, or may become, singular or ill-conditioned. Most numerical problems can be avoided by paying enough attention to the formulation of the equations during the design phase. Typically it is good practice to avoid:

  • possible divisions by zero,
  • discontinuities in the return values,
  • expressions that will generate partial derivatives likely to become infinite, and
  • expressions that have a limited validity domain likely to generate a NaN.
The remaining pathological numerical problems arising at runtime are then dealt with algorithmically using a different solution method and the various recovery strategies implemented in SPARK.
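
For instance, an atomic class callback can guard a limited validity domain and abort cleanly instead of letting the residual evaluate to a NaN or an infinite number (a sketch using the SPARK 2.x macros introduced earlier; the inverse name ratio__quotient and its arguments are illustrative):

EVALUATE( ratio__quotient )
{
    ARGUMENT( 0, num );
    ARGUMENT( 1, den );
    double q;

    // Guard against a division by zero before it produces bad numerics.
    if ( den == 0.0 ) {
        REQUEST__ABORT( "Division by zero in ratio__quotient." );
    }

    q = num / den;
    RETURN( q );
}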

How to prevent early convergence after the prediction step?

The nonlinear solver always checks for convergence after the prediction phase, i.e., after firing the predict callbacks assigned to the break variables. This is accomplished by checking the residual functions associated with the break variables against the specified tolerance multiplied by the prediction safety factor. By default the prediction safety factor is set to 0.01, so this prediction convergence test will only succeed when the residuals are significantly smaller than the prescribed tolerance.

However, this test does not ensure that the desired precision is achieved in all the unknown variables of the component in question, since "small" residual functions do not imply a certain number of significant digits in the solution; that depends entirely on the scaling of the residual functions. This is especially an issue with highly nonlinear equations. In order to prevent such early and unsatisfactory convergence from being detected, you can set the value of the prediction safety factor in the global settings section of the preference settings file using the key PredictionSafetyFactor.

(  
   GlobalSettings ( ...
      PredictionSafetyFactor  ( 0.0   ())
      ...
   )
)

For example, setting the prediction safety factor to 0.0 will ensure that the nonlinear solver never detects convergence after the prediction phase unless all residuals are exactly equal to zero. When the "convergence" diagnostic level is set in the runtime controls file, the result of the prediction phase is indicated with the iteration count 0 in the run log file.

How to relax the convergence test on the non-break variables?

After each iteration the nonlinear solver checks for convergence by comparing the solution increments since the last iteration against the specified tolerance. The iterative process stops only when the increments of both the break variables and the non-break variables are smaller than the prescribed tolerance multiplied by some safety factor.

In some cases it is impossible to obtain the desired precision for some of the non-break variables because the error is amplified from the break variables in such a way that the smallest change in the solution of a break variable causes the increments of a dependent non-break variable in the same component to be larger than the requested tolerance. Such unfortunate numerical situations are likely to occur with SPARK when solving highly nonlinear problems because of its solution methodology, whereby only the break variables forming the cut set are solved iteratively, as opposed to the entire set of unknowns in each strongly-connected component.

To deal with such situations we recommend that:

  • either you make the cut set bigger by forcing some of the offending variables to become break variables using the BREAK_LEVEL language construct,
  • or you relax the convergence test on the non-break variables by specifying a safety factor larger than 1, the default value.
The latter is achieved using the key NormalUnknownSafetyFactor in the global settings section of the preference settings file.

(  
   GlobalSettings ( ...
      NormalUnknownSafetyFactor  ( 100.0   ())
      ...
   )
)

For example, setting the safety factor for the non-break variables (also referred to as the normal unknowns) to 100 relaxes the convergence test by 2 orders of magnitude. If the requested tolerance is 1.0e-6, you should expect 6 significant digits in the solution of the break variables but only 4 significant digits in the solution of the non-break variables.

Such precision is of course only achieved for the variables whose values lie far away from their respective absolute tolerance. Consult the SPARK Reference Manual for more information on the convergence tests and the meaning of the various safety factors and tolerances involved.


SPARK Requests

Requests are processed at runtime during the simulation of a problem, i.e., within the TProblem::Simulate() method. Requests can be sent from the callbacks in the various atomic classes, from the driver function, or internally from within the SPARK solver.

What are internal and external requests?

A request is referred to as an internal request if the "to" and "from" SPARK::TProblem addresses in the SPARK::THeader constructor are the same. In all other cases the request is considered external. In general, internal requests have to comply with more restrictions than external requests in order to prevent unwanted behavior at runtime. Consult the SPARK Atomic Class API for more information.

How to request a time step?

A set_time_step request can be sent to specify the time step to use for the next dynamic step. The desired time step must be strictly larger than zero. The time step effectively used for the next dynamic step might be smaller than the requested one in case a request for a smaller time step was also sent (e.g., to synchronize with the next meeting point internally), but it will never be larger.
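
For example (with hypothetical values), if a callback requests a time step of 60 s while the solver internally requires a step of 45 s to synchronize with the next meeting point, the next dynamic step will use 45 s, the smaller of the two.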

How to request a stopping time?

A set_stop_time request can be sent to request a stopping time ahead of the current time and earlier than the final time specified in the runtime control parameters. The stopping time is cleared when it is reached by the simulation, so if you restore the simulation to an earlier state the stopping time will no longer be active.

How to request a meeting point?

A set_meeting_point request can be sent to request a meeting point ahead of the current time. The meeting points are stored internally in the solver. They are used to synchronize the global time with the next event when the solver is operating in variable time stepping mode.

Only the meeting points before the final time specified in the *.run file will be tracked. If the requested meeting point is after the final time, it will be discarded. In order to clear all the meeting points, you should send a clear_meeting_points request.
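
For example (with hypothetical values), if the final time in the *.run file is 3600 s, a set_meeting_point request for t = 1800 s will be tracked, whereas a request for t = 4000 s will be discarded.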