SYNOPSIS

lps2lts-mc [OPTION]… input.lps

DESCRIPTION

lps2lts-mc performs multi-core reachability and LTL model checking on the labelled transition system of a compiled specification provided in input.lps. Specifications are in lps format and are commonly generated by mcrl22lps(1). Because the lps frontend depends on libraries that are not thread-safe, this tool uses processes instead of threads (see --procs). LTS output is not supported by this tool; use the distributed tools instead.

OPTIONS

--procs=NUMBER

Number of processes to use for state space exploration (default: NUMBER=NUM_CORES). Maximum is 64.

--strategy=TYPE

Select an exploration strategy. Two kinds of algorithms are available: reachability and LTL model checking. The multi-core LTL algorithms (Nested DFS) are implemented in a swarmed fashion, albeit with a shared state storage. On top of that, MCNDFS and ENDFS offer work sharing between threads and can deliver speedups for some models. Note that the LTL algorithms require Büchi automata as input. Such a model can be provided directly by the language frontend, i.e., DiVinE property models are supported, or by the LTL layer (see --ltl). Note finally that strict reachability exploration orders are not guaranteed by all multi-core search strategies. TYPE can be one of the following options (an example invocation is given after the list):

bfs

explore state space in breadth-first order (default).

sbfs

explore state space in strict breadth-first order.

dfs

explore state space in relaxed depth-first order. Relaxed means that the inclusion check on the set of visited states is executed immediately after generating a state. This saves stack space and improves the performance of parallel reachability analysis.

ndfs

multi-core swarmed Nested Depth-First Search (Courcoubetis et al.).

nndfs

multi-core swarmed New Nested Depth-First Search (Schwoon, Esparza).

ldfs

Multi-Core Nested Depth-First Search (Laarman, Langerak, van de Pol, Weber, Wijs).

endfs

Multi-Core Nested Depth-First Search by Evangelista et al. The algorithm has been adapted with the cyan color encoding and early cycle detection, as described in "Variations on Multi-Core Nested Depth-First Search" (Laarman, van de Pol). Note that ENDFS requires a repair procedure; NNDFS is chosen by default. Alternatives can be selected by providing a list of strategies, for example "endfs,mcndfs" yields the NMC-NDFS algorithm described in the Variations paper (with load balancing). Finally, multiple levels of ENDFS can also be combined: "endfs,endfs,nndfs".

cndfs

New CNDFS algorithm. Multi-Core Nested Depth-First Search (Evangelista, Laarman, Petrucci, van de Pol).

tarjan

Tarjan’s sequential SCC algorithm. Depth-First Search and Linear Graph Algorithms (Tarjan).

renault

Renault’s SCC algorithm. Parallel Explicit Model Checking for Generalized Büchi Automata (Renault et al.).

ufscc

The UFSCC SCC algorithm. Multi-Core On-The-Fly SCC Decomposition (Bloemen, Laarman, van de Pol).

dfsfifo

DFS-FIFO detects non-progress cycles if the front-end provides a progress state label or progress transition labels. Additionally, it verifies weak LTL properties with superior scalability to the *NDFS algorithms. LTSmin automatically identifies weak LTL formulae when provided with a property (via the --ltl option).
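
For example, a multi-core accepting-cycle search with the cndfs strategy might be started as follows (prop.ltl is an illustrative file name for a file containing an LTL formula, see --ltl and ltsmin-ltl(5)):

lps2lts-mc --procs=8 --strategy=cndfs --ltl=prop.ltl input.lps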

--perm=TYPE

Select the transition permutation, which is used to guide different threads to different parts of the state space. A good permutation can significantly speed up bug hunting. TYPE can be one of the following options; each has different properties in terms of performance and effectiveness, summarized as (perf./eff.). An example invocation is given after the list:

dynamic

use "fresh successor heuristics" described in "Variations on Multi-Core Nested Depth-First Search" (Laarman, van de Pol). (decent/very good for bug hunting). Default for LTL.

sort

sort on the unique id of the successor state (decent/good)

random

use multiple fixed random permutation schemes per worker (decent/good).

rr

randomized using a full random sort order on the states. This is more random than the previous option, but requires more pre-computation time for the random array (decent/almost perfect).

shift

shift the order of transitions by a fixed value per worker (fast/decent).

otf

sort according to a dynamic permutation scheme (slow/perfect).

none

use the same exploration order for all workers. Default for reachability.
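
For example, to hunt for deadlocks with a different randomized exploration order per worker (an illustrative invocation):

lps2lts-mc --procs=16 --strategy=dfs --perm=random -d input.lps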

--state=TYPE

Select type of data structure for storing visited states. TYPE can be one of the following options:

table

use a lockless hash table.

tree

use a lockless tree-compressed table (default). In many cases the tree can compress states to two integers regardless of their length. To efficiently accommodate more than 4*10^9 states, the table is split into a root and a leaf table, whose respective sizes can be adjusted using the --ratio option.

cleary-tree

use a lockless tree-compressed hash table with a parallel Cleary table to store roots. The compressed size of a state can approach one integer with this approach. The leaf table is stored as a standard tree table (two integers per subtree), which typically can be much smaller (up to the square root of the root table size). To control the relative size of the leaf table, use the --ratio option.

--ratio=NUMBER

Log_2 ratio between the root and leaf table in tree compression. The ratio can theoretically be as low as the square root of the root table size (see --size). This will, however, only work if the state vectors are perfectly combinatorial with respect to their variable values. In most cases the leaf table grows larger, but we found that a factor of four (--ratio=2) works well for over 75% of the BEEM models. (default: NUMBER=2).

--size=NUMBER

Log_2 hash table size in elements (default: NUMBER=24). This is also used for the internal node table of the tree.
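
For example, to explore a large state space with Cleary-tree compression, a 2^28-element root table and a 2^26-element leaf table (the sizes are illustrative and should be tuned to the model and available memory):

lps2lts-mc --state=cleary-tree --size=28 --ratio=2 input.lps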

--zobrist=NUMBER

Save time by using Zobrist incremental state hashing. NUMBER defines the (log_2) size of the random number table (default: 0=OFF). Larger tables give better hash distributions, but cause more cache misses due to their memory size. Experiments have shown that small tables (2^6) suffice for good distributions (equal to or better than Jenkins’ hash). Improvements are only noticeable for fast state generators, like DiVinE 2.2 (dve22lts-mc(1)).

--max=NUMBER

Maximum search depth.

--progress-states

Forces DFS-FIFO to use progress state labels, even if progress transition labels are present.

--strict

Forces DFS-FIFO to use strict BFS ordering for finding shorter lassos.

--proviso=PROVISO

Change the proviso implementation for partial-order reduction in the presence of safety or liveness properties. Parallel LTL is only supported with the cndfs proviso in the cndfs search strategy. Sequentially, LTL is also supported with the stack proviso in the ndfs search strategy. PROVISO can be one of:

force-none

Disables the ignoring proviso check (only useful for benchmarking or bug hunting).

closed-set

The closed-set proviso is the default; it requires almost no extra work or memory. It might, however, result in less reduction than the stack proviso. It works with both the dfs and (p/s)bfs exploration strategies.

stack

The stack proviso is the proviso used, for example, by the SPIN model checker. It requires some extra work and memory, but may result in a better reduction than the closed-set proviso. It works only with a dfs or ndfs search strategy (for safety or LTL properties, respectively).

cndfs

A special parallel cycle proviso is used to enable POR with multi-core LTL model checking using the cndfs search strategy.

-n, --no-exit

Do not exit when an error is found. Just count errors. Error counts are printed with -v.

-d, --deadlock

Find state with no outgoing transitions. Returns with exit code 1 if a deadlock is found, 0 or 255 (error) otherwise.

-i, --invariant=PREDFILE|PREDEXPRESSION

Find state where the invariant is violated. The file PREDFILE contains an expression in a simple predicate language (see ltsmin-pred(5)). Its contents can also be entered directly as a PREDEXPRESSION. Returns with exit code 1 if a violation is found, 0 or 255 (error) otherwise.

-a, --action=STRING

Find state with an outgoing transition of type STRING. Returns with exit code 1 if the action is found, 0 or 255 (error) otherwise.

--trace='FILE'.gcf

When finding a deadlock state or a transition labelled with a certain action, write a trace to 'FILE'.gcf, beginning from the initial state. Traces can be pretty-printed with ltsmin-printtrace(1).
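
For example, to search for deadlocks and write a counterexample trace, or to search for an action (the file and action names are illustrative; the action must be declared in the specification):

lps2lts-mc -d --trace=deadlock.gcf input.lps
lps2lts-mc -a error --trace=error.gcf input.lps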

PINS Options

--labels

Print state variable, type and value names, and state and action labels. Then exit. Useful for writing predicate (--invariant), LTL (--ltl), CTL/CTL* (--ctl), and mu-calculus (--mu) expressions.

--matrix

Print the dependency matrix and exit.

-c, --cache

Enable caching of greybox calls.

If this option is used, the state space generator makes calls to the short version of the greybox next-state function and memoizes the results. If the next-state function is expensive this will yield substantial speedups.

--pins-guards

Use guards in combination with the long next-state function to speed up the next-state function.

--allow-undefined-edges

Allow values for edge variables in atomic predicates to be unreachable.

--allow-undefined-values

Allow undefined values in atomic predicates for enums.

-r, --regroup=SPEC

Enable regrouping optimizations on the dependency matrix.

SPEC is a comma-separated sequence of transformations <(T,)+> which are applied in this order to the dependency matrix. An example invocation is given after the list. The following transformations T are available:

gs

Group Safely; macro for "gc,gr,cw,rs"; almost always a win.

ga

Group Aggressively (row subsumption); macro for "gc,rs,ru,cw,rs"; can be a huge win, but in some cases causes slower state space generation.

gsa

Group Simulated Annealing; macro for "gc,gr,csa,rs"; almost always a win; usually better than gs.

gc

Group Columns; macro for "cs,cn".

gr

Group Rows; macro for "rs,rn".

cs

Column Sort; sort columns lexicographically.

cn

Column Nub; (temporarily) group duplicate columns, thereby making ca more tractable. Requires cs.

cw

Column sWap; minimize distance between columns by swapping them heuristically. This reordering improves performance of the symbolic data structures.

ca

Column All permutations; try to find the column permutation with the best cost metric. Potentially, this is an expensive operation.

csa

Column Simulated Annealing; minimize distance between columns by swapping them using simulated annealing.

rs

Row Sort; sort rows lexicographically.

rn

Row Nub; remove duplicate rows from the dependency matrix. This is always a win. Requires rs.

ru

Row sUbsume; try to remove more rows than nubbing, thereby trading speed for memory. Requires rs.

w2W

Over-approximate all must-write to may-write. May-write supports the copy (-) dependency.

r2+

Over-approximate read to read+write. Allows read dependencies to also subsume write dependencies.

w2+

Over-approximate must-write to read+write. Allows must-write dependencies to also subsume read dependencies.

W2+

Over-approximate may-write to read+write. Allows may-write dependencies to also subsume read dependencies.

-2r

Over-approximate copy to read. May be useful for testing whether the dependency matrix is correct.

rb4w

Use special heuristics to move read dependencies before write dependencies. Often a win in symbolic state space generation.

mm

Writes metrics of the selected (sr, sw, sc) matrix to stdout. The following metrics are printed:

  1. Event span: the total distance between the minimum and maximum column of rows.

  2. Normalized event span: the event span divided by the size of the matrix (rows x columns).

  3. Weighted event span: the event span weighted by a moment signifying the location of the span (we use moment 1; see Siminiceanu et al.).

  4. Normalized weighted event span: the weighted event span divided by the size of the matrix (rows x columns).

sr

Select the read matrix for cs, csa, cw, ca, rs, bcm, bs, bk, vcm, vacm, vgps and mm.

sw

Select the write matrix (default) for cs, csa, cw, ca, rs, bcm, bs, bk, vcm, vacm, vgps and mm. The write matrix is the default selection, because only write dependencies can create new nodes in decision diagrams. A bad variable order in the write matrix thus leads to a large number of peak nodes during reachability analysis. A bad variable order in the read matrix can also lead to a slow reachability analysis, but typically not as severe as a bad variable order in the write matrix; it causes many recursive calls to the relational product operation. Typically it is best that read dependencies are moved to the top DD levels, i.e. leftmost in the read matrix.

sc

Select the combined matrix for cs, csa, cw, ca, rs, bcm, bs, bk, vcm, vacm, vgps and mm. The combined matrix is the logical or of the read and write matrix.

bg

Use a bipartite graph (default) for bcm, bk, bs, vcm, vacm and vgps.

tg

Create a total graph of the bipartite graph for bcm, bk, bs, vcm, vacm and vgps. This adds more vertices and edges thus increasing computation time, but sometimes provides a better ordering.

Below, the sparse matrix algorithms prefixed with b are only available when LTSmin is compiled with Boost. Algorithms prefixed with v are only available when LTSmin is compiled with ViennaCL.

bcm

Apply Boost’s Cuthill-McKee ordering.

bk

Apply Boost’s King ordering.

bs

Apply Boost’s Sloan ordering.

vcm

Apply ViennaCL’s Cuthill-McKee ordering.

vacm

Apply ViennaCL’s advanced Cuthill-McKee ordering.

vgps

Apply ViennaCL’s Gibbs-Poole-Stockmeyer ordering.

f

Apply FORCE ordering.
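
For example, to apply the safe grouping macro followed by the read-before-write heuristic (an illustrative combination; whether it pays off depends on the model):

lps2lts-mc -r gs,rb4w input.lps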

--row-perm=PERM

Apply row permutation PERM, where PERM is a sequence of row numbers, separated by a comma. E.g. the vector 2,1,0 will swap row 2 with row 0.

--col-perm=PERM

Apply column permutation PERM, where PERM is a sequence of column numbers, separated by a comma. E.g. the vector 2,1,0 will swap column 2 with column 0.

--col-ins=PAIRS

Insert columns before other columns in the dependency matrix.

PAIRS is a comma-separated sequence of pairs <(C.C',)+>. E.g. --col-ins=1.0 will insert column 1 before column 0. Each pair contains a source column C and a target column C'. During the application of the whole sequence, C always refers to the column number before the application of the whole sequence, while the target column C' always refers to the column number during the application of the sequence. This means that, for example, in --col-ins=2.0,1.0, first column 2 is inserted at position 0, then column 1 is inserted at position 0. The result is that the original column 2 ends up at position 1. Another important detail is that when --col-ins is used, all source columns are temporarily "removed" from the dependency matrix during reordering, i.e. when the -r, --regroup option is given. After reordering is done, the columns are inserted at the desired target positions. In other words, the reordering algorithms given by the -r, --regroup option are applied only to the dependency matrix with the source columns removed.

--sloan-w1=WEIGHT1

Use WEIGHT1 as the first weight for the Sloan algorithm, see https://www.boost.org/doc/libs/1_66_0/libs/graph/doc/sloan_ordering.htm.

--sloan-w2=WEIGHT2

Use WEIGHT2 as the second weight for the Sloan algorithm, see https://www.boost.org/doc/libs/1_66_0/libs/graph/doc/sloan_ordering.htm.

--graph-metrics

Print Boost’s and ViennaCL’s graph metrics (only available when LTSmin is compiled with Boost or ViennaCL).

--regroup-exit

Exit with 0 when regrouping is done.

--regroup-time

Print timing information of each transformation, given in sequence --regroup (-r).

--mucalc=FILE|FORMULA

Compute a parity game for the mu-calculus formula.

The mu-calculus formula is provided in the file FILE or directly as a string FORMULA. The syntax and tool support are described in ltsmin-mucalc(5).

--ltl=LTLFILE|LTLFORMULA

Compute the cross product of a Büchi automaton and the specification

LTLFILE is a file containing a Linear Temporal Logic formula (see ltsmin-ltl(5)); its content can also be provided directly as LTLFORMULA. This formula is converted to a Büchi automaton. Then the synchronous cross product with the original specification is computed on-the-fly. A state label is added to encode accepting states.
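
For example, with an inline formula (the state variable phase is illustrative; use --labels to list the variables available in your model):

lps2lts-mc --ltl='[] <> (phase == 1)' --strategy=cndfs input.lps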

--ltl-semantics=spin|textbook|ltsmin

Change the semantics of the cross product generated using --ltl

Three options are available, the default is automatically chosen based on the atomic predicates in the formula.

spin

Use the same semantics as the SPIN model checker. From the source state all transitions are generated. Then, state predicates are evaluated on the source state. The Büchi automaton moves according to these predicates. Deadlocks in the LTS cause the Büchi automaton to progress independently. This option is incompatible with edge-based atomic predicates. This option is the default if no edge-based atomic predicates are found in the LTL formula.

textbook

Use textbook semantics. A new initial state is generated with an outgoing transition to the original initial state. Predicates are evaluated on the target state and the Büchi automaton moves according to these predicates. Deadlocks in the LTS do NOT cause the Büchi automaton to progress independently. This option is incompatible with edge-based atomic predicates.

ltsmin

Same as the spin semantics, but deadlocks in the LTS do NOT cause the Büchi automaton to progress independently. This option is the default if edge-based atomic predicates are found in the LTL formula.

--por=heur|del

Activate partial-order reduction

Partial-Order Reduction (POR) can reduce the state space when searching for deadlocks (-d) or accepting cycles (--ltl). An example invocation is given after the list. Two POR algorithms are available:

heur

Uses a cost-based heuristic beam search to find the smallest stubborn set.

del

Uses Valmari’s deletion algorithm to find the smallest stubborn set by iteratively removing transitions while maintaining the constraints.
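
For example, to combine a deadlock search with the heuristic stubborn-set algorithm (an illustrative invocation):

lps2lts-mc -d --por=heur input.lps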

--weak

Use weak commutativity in partial-order reduction, possibly yielding better reductions.

--leap

Use leaping partial-order reduction, by combining several disjoint stubborn sets sequentially.

Environment Variables

LTSmin supports the following list of environment variables.

Table 1. Environment Variables:

Name             Unit      Description
LTSMIN_MEM_SIZE  bytes     Sets the amount of system memory to the given value.
LTSMIN_NUM_CPUS  constant  Sets the number of CPUs to the given value.

The variables LTSMIN_MEM_SIZE and LTSMIN_NUM_CPUS are particularly relevant when neither sysconf(3) nor cgroups(7) can properly detect these limits, e.g. when LTSmin runs on Travis CI in a Docker container.
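
For example, to limit the tool to 4 CPUs and 8 GiB of memory when automatic detection fails (the values are illustrative):

LTSMIN_NUM_CPUS=4 LTSMIN_MEM_SIZE=8589934592 lps2lts-mc input.lps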

Development Options

--grey

Make use of GetTransitionsLong calls.

A language module can have three next state calls: GetTransitionsAll, GetTransitionsLong and GetTransitionsShort. The first call is used by default, the second call is used when this flag is passed and the third form is used if --cache is enabled. This allows all three calls in a language module to be tested.

--write-state

Write the full state vector.

General Options

-v

Increase the level of verbosity

-q

Be quiet; do not print anything to the terminal.

--debug=<file.c>

Enable debugging output for file.c (option allowed multiple times).

--version

Print version string of this tool.

-h, --help

Print help text

--usage

Print short usage summary.

File Formats

The following file formats are supported:

  • Directory format (*.dir, *.dz and *.gcf)

  • Vector format (*.dir, *.gcd, *.gcf)

  • Binary Coded Graphs (*.bcg)

  • Aldebaran Format (*.aut)

  • FSM Format (*.fsm)

  • MRMC/Prism (*.tra+*.lab)

  • PGSolver format (*.pg)

If a tool operates in streaming mode then support for file formats is limited, as can be seen in the following table:

Format  Streaming mode  Load/Store mode
DIR     R/W             R/W
VEC     R/W             R/W
BCG     W               R/W
AUT     W               R/W
FSM     W               W
TRA     -               R/W
PG      -               W

The directory format uses multiple files to store an LTS. The various extensions indicate how these files are stored in the underlying file system. The *.dir format uses multiple files in a directory without compression. If the LTS has one edge label, no state labels and does not store state vectors, then these files are backwards compatible. Neither the *.dz nor the *.gcf format is backwards compatible. Both formats use compression: the first uses a directory for the files, the second interleaves the files into a single file.

If you try to open a *.dir with the old mCRL tools and you get the error message:

wrong file version: 0

then the directory is probably compressed. If that happens then you may convert the directory by typing the command:

ltsmin-convert bad.dir good.dir

LTL MODEL CHECKING

LTL model checking requires a Büchi cross product with accepting states marked by accepting state labels, and a search strategy that takes these labels into account (see the ndfs / scc strategies of the sequential and multi-core tools).

For generating a cross product, the following options are supported:

  1. The input specification is already combined with a Büchi automaton, and states are appropriately marked with accepting labels. Currently two frontends support this functionality: SpinS (see documentation on the prom tools) and DiVinE (see documentation on the dve tools).

  2. The LTSmin tools can build the cross product of the input specification and a Büchi automaton themselves (through a PINS2PINS layer enabled with the --ltl option), and will annotate cross-product states appropriately when they are accepting. See also the --labels option in the PINS tools.

Note
Combination with the Partial Order Reduction PINS2PINS layer (--por) requires the latter option.
Note
As of LTSmin release 1.9, our default LTL semantics mimic those of SPIN/DiVinE. Before that, LTSmin implemented textbook semantics. See the documentation on the enumerative tools for more information.
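
For example, a typical workflow first linearises an mCRL2 specification and then runs the multi-core accepting-cycle search on the cross product (the file names are illustrative; prop.ltl contains an LTL formula as described under --ltl):

mcrl22lps spec.mcrl2 spec.lps
lps2lts-mc --strategy=cndfs --procs=8 --ltl=prop.ltl spec.lps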

EXIT STATUS

0

Successful termination.

1

Counter example found.

255

Some error occurred.

SUPPORT

Send questions, bug reports, comments and feature suggestions to the LTSmin Support Team.

SEE ALSO