prom2lts-mc [OPTION]… input.prom


prom2lts-mc performs multi-core reachability and LTL model checking on the labelled transition system from a compiled specification provided in input.prom or input.prom.spins. LTS output is not supported by this tool; use the distributed tools instead.


Use SpinS (distributed as a submodule of LTSmin) to compile a PROMELA model:

spins -o3

The optional flag -o3 turns off control flow optimizations.

On the resulting compiled SpinS module, all SpinS-related LTSmin tools can be used:

prom2lts-sym -rgs --order=chain leader.etf
prom2lts-mc --assert -prr --threads=16
prom2lts-seq --por -d --trace=t.gcf

These three examples perform, respectively:

  • full symbolic reachability with chaining order (--order) and regrouping (-rgs), storing the state space in ETF format,

  • a randomized (-prr) parallel (--threads) search for assertion violations (--assert) in the model, and

  • a search for deadlocks (-d), storing the first counterexample in t.gcf (--trace).

If a never claim is present in the model, a Buchi automaton is created. LTSmin can directly perform LTL model checking on such models:

prom2lts-mc --strategy=cndfs --threads=16
prom2lts-seq --por --proviso=color --strategy=scc

These two examples perform LTL model checking using: multi-core NDFS (cndfs), and a sequential SCC-based algorithm with partial order reduction (--por and --proviso, where different provisos are available). Again one can provide additional options to store traces, etc.

See the man pages of the respective tools for further options.

SpinS is an adaptation of the SpinJa model checker, which generates C code implementing the PINS interface instead of Java code.



Number of threads to use for state space exploration (default: NUMBER=NUM_CORES). Maximum is 64.


Select an exploration strategy. Two kinds of algorithms are available: LTL model checking and reachability. The multi-core LTL algorithms (Nested DFS) are implemented in a swarmed fashion, albeit with shared state storage. On top of that, MCNDFS and ENDFS offer work sharing between threads and can deliver speedups for some models. Note that the LTL algorithms require Buchi automata as input. Such a model can be provided directly by the language frontend (e.g., DiVinE property models are supported) or by the LTL layer (see --ltl). Note finally that strict reachability exploration orders are not guaranteed by all multi-core search strategies. TYPE can be one of the following options:


explore state space in breadth-first order (default).


explore state space in strict breadth-first order.


explore state space in relaxed depth-first order. Relaxed means that the inclusion check on the set of visited states is executed immediately after generating a state. This saves stack space and improves the performance of parallel reachability analysis.
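As an illustration (a sketch in Python, not LTSmin's actual implementation), the relaxed order tests set membership at generation time, so the search stack never holds duplicate states:

```python
def dfs_relaxed(initial, successors):
    """DFS where the visited-set inclusion check runs as soon as a
    state is generated; only fresh states are ever pushed."""
    visited = {initial}
    stack = [initial]
    order = []
    while stack:
        state = stack.pop()
        order.append(state)
        for succ in successors(state):
            if succ not in visited:  # check immediately after generation
                visited.add(succ)
                stack.append(succ)
    return order

# Tiny diamond-shaped graph: 0 -> 1, 0 -> 2, 1 -> 3, 2 -> 3
graph = {0: [1, 2], 1: [3], 2: [3], 3: []}
```

In a strict DFS both copies of state 3 would be pushed and filtered only when popped; here state 3 enters the stack once, bounding stack size by the number of unvisited states.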


multi-core swarmed Nested Depth-First Search (Courcoubetis et al.).


multi-core swarmed New Nested Depth-First Search (Schwoon, Esparza).


Multi-Core Nested Depth-First Search (Laarman, Langerak, van de Pol, Weber, Wijs).


Multi-Core Nested Depth-First Search by Evangelista et al. The algorithm has been adapted with the cyan color encoding and early cycle detection, as described in "Variations on Multi-Core Nested Depth-First Search" (Laarman, van de Pol). Note that ENDFS requires a repair procedure, NNDFS is chosen by default. Alternatives can be chosen by providing a list of strategies, for example: "endfs,mcndfs", yields the NMC-NDFS algorithm as described in the Variations paper (with load balancing). Finally, we also allow multiple levels of ENDFS to be combined: "endfs,endfs,nndfs".
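For example, the combined repair strategies described above are passed as a comma-separated list (the model file name here is illustrative):

```shell
# NMC-NDFS: ENDFS with MCNDFS as the repair procedure (hypothetical model file)
prom2lts-mc --strategy=endfs,mcndfs --threads=16 model.prom.spins

# Two levels of ENDFS with an NNDFS repair at the bottom
prom2lts-mc --strategy=endfs,endfs,nndfs model.prom.spins
```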


New CNDFS algorithm. Improved Multi-Core Nested Depth-First Search (Evangelista, Laarman, Petrucci, van de Pol).


Tarjan’s sequential SCC algorithm. Depth-First Search and Linear Graph Algorithms (Tarjan).


Renault’s SCC algorithm. Parallel Explicit Model Checking for Generalized Büchi Automata (Renault et al.).


The UFSCC SCC algorithm. Multi-Core On-The-Fly SCC Decomposition (Bloemen, Laarman, van de Pol).


DFS-FIFO detects non-progress cycles if the front-end provides a progress state label or progress transition labels. Additionally, it verifies weak LTL properties with better scalability than the *NDFS algorithms. LTSmin automatically identifies weak LTL formulae when provided with a property (via the --ltl option).


Select the transition permutation, which is used to guide different threads to different parts of the state space. A good permutation can significantly speed up bug hunting. TYPE can be one of the following options; each has different properties in terms of performance and effectiveness, summarized as (perf./eff.):


use "fresh successor heuristics" described in "Variations on Multi-Core Nested Depth-First Search" (Laarman, van de Pol). (decent/very good for bug hunting). Default for LTL.


sort on the unique id of the successor state (decent/good)


use multiple fixed random permutation schemes per worker (decent/good).


randomized using a full random sort order on the states. This is more random than the previous option, but requires more pre-computation time for the random array (decent/almost perfect).


shift the order of transitions by a fixed value per worker (fast/decent).


sort according to a dynamic permutation scheme (slow/perfect).


use the same exploration order for all workers. Default for reachability.


Select type of data structure for storing visited states. TYPE can be one of the following options:


use a lockless hash table.


use a lockless tree-compressed table (default). In many cases the tree can compress states to two integers regardless of their length. To efficiently accommodate more than 4*10^9 states, the table is split into a root and a leaf table, whose respective sizes can be adjusted using the --ratio option.


use a lockless tree-compressed hash table with a parallel Cleary table to store roots. The compressed size of a state can approach one integer with this approach. The leaf table is stored as a standard tree table (two integers per subtree), which can typically be much smaller (up to the square root of the root table size). To control the respective size of the leaf table, use the --ratio option.


Log_2 ratio between the root and leaf table in tree compression. The ratio can theoretically be as low as the square root of the root table size (see --size). This will however only work if the state vectors are perfectly combinatorial with respect to their variable values. In most cases the leaf table will grow larger, but we found that a factor of four (--ratio=2) works well for over 75% of the BEEM models. (default: NUMBER=2)
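As a back-of-the-envelope sketch (our reading of --size and --ratio, not LTSmin code), the two table sizes relate as follows:

```python
def tree_tables(size, ratio):
    """Entry counts implied by --size (log2 of the root table) and
    --ratio (log2 difference between the root and leaf tables)."""
    root = 2 ** size
    leaf = 2 ** (size - ratio)
    return root, leaf

# Defaults: --size=24, --ratio=2; the leaf table is a factor four
# smaller than the root table.
root, leaf = tree_tables(24, 2)
```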


Log_2 hash table size in elements (default: NUMBER=24). This is also used for the internal node table of the tree.


Save time by using Zobrist incremental state hashing. NUMBER defines the (log_2) size of the random number table (default: 0=OFF). Larger tables mean better hash distributions, but more cache misses due to their memory size. Experiments have shown that small tables (2^6) suffice for good distributions (equal to or better than Jenkins' hash). Improvements are only noticeable for fast state generators, like DiVinE 2.2 (dve22lts-mc(1)).
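The idea behind Zobrist hashing can be sketched as follows (illustrative Python, not LTSmin's implementation): each (slot, value) pair gets a random bitstring, the hash of a state is the XOR of these bitstrings, and changing one variable updates the hash with two XORs instead of a full recomputation:

```python
import random

random.seed(42)
SLOTS, VALUES = 8, 256  # vector length and value domain (illustrative sizes)
TABLE = [[random.getrandbits(64) for _ in range(VALUES)] for _ in range(SLOTS)]

def zobrist(state):
    """Full hash: XOR one random bitstring per (slot, value) pair."""
    h = 0
    for slot, value in enumerate(state):
        h ^= TABLE[slot][value]
    return h

def zobrist_update(h, slot, old, new):
    """Incremental hash after state[slot] changes from old to new:
    XOR out the old contribution, XOR in the new one."""
    return h ^ TABLE[slot][old] ^ TABLE[slot][new]

state = [1, 2, 3, 4, 5, 6, 7, 8]
h = zobrist(state)
state[3] = 42
h = zobrist_update(h, 3, 4, 42)  # two XORs instead of a full pass
```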


Maximum search depth.


Forces DFS-FIFO to use progress state labels, even if progress transition labels are present.


Forces DFS-FIFO to use strict BFS ordering for finding shorter lassos.


Change the proviso implementation for partial order reduction in the presence of safety or liveness properties. Parallel LTL is only supported with the cndfs proviso in the cndfs search strategy. Sequentially, LTL is also supported with the stack proviso in the ndfs search strategy. PROVISO can be one of:


Disables the ignoring proviso check (only useful for benchmarking or bug hunting).


The closed set proviso is the default proviso which requires almost no extra work/memory. It might however result in less reduction than the stack proviso. It works with both the dfs and (p/s)bfs exploration strategies.


The stack proviso is the proviso used, for example, by the SPIN model checker. It requires some extra work/memory but may result in a better reduction than closedset. It works only for a dfs or ndfs search strategy (for safety or LTL properties, respectively).


A special parallel cycle proviso is used to enable POR with multi-core LTL model checking using the cndfs search strategy.

-n, --no-exit

Do not exit when an error is found. Just count errors. Error counts are printed with -v.

-d, --deadlock

Find state with no outgoing transitions. Returns with exit code 1 if a deadlock is found, 0 or 255 (error) otherwise.


Find state where the invariant is violated. The file PREDFILE contains an expression in a simple predicate language (see ltsmin-pred(5)). Its contents can also be entered directly as a PREDEXPRESSION. Returns with exit code 1 if a violation is found, 0 or 255 (error) otherwise.

-a, --action=STRING

Find state with an outgoing transition of type STRING. Returns with exit code 1 if the action is found, 0 or 255 (error) otherwise.


When finding a deadlock state or a transition labelled with a certain action, write a trace to 'FILE'.gcf, beginning from the initial state. Traces can be pretty-printed with ltsmin-printtrace(1).

PINS Options


Print state variable, type and value names, and state and action labels. Then exit. Useful for writing predicate (--invariant), LTL (--ltl), CTL/CTL* (--ctl), and mu-calculus (--mu) expressions.


Print the dependency matrix and exit.

-c, --cache

Enable caching of greybox calls.

If this option is used, the state space generator makes calls to the short version of the greybox next-state function and memoizes the results. If the next-state function is expensive this will yield substantial speedups.


Use guards in combination with the long next-state function to speed up the next-state function.


Allow values for edge variables in atomic predicates to be unreachable.


Allow undefined values in atomic predicates for enums.

-r, --regroup=SPEC

Enable regrouping optimizations on the dependency matrix.

SPEC is a comma-separated sequence of transformations <(T,)+> which are applied in this order to the dependency matrix. The following transformations T are available:


Group Safely; macro for "gc,gr,cw,rs"; almost always a win.


Group Aggressively (row subsumption); macro for "gc,rs,ru,cw,rs"; can be a huge win, but in some cases causes slower state space generation.


Group Simulated Annealing; macro for "gc,gr,csa,rs"; almost always a win; usually better than gs.


Group Columns; macro for "cs,cn".


Group Rows; macro for "rs,rn".


Column Sort; sort columns lexicographically.


Column Nub; (temporarily) group duplicate columns, thereby making ca more tractable. Requires cs.


Column sWap; minimize distance between columns by swapping them heuristically. This reordering improves performance of the symbolic data structures.


Column All permutations; try to find the column permutation with the best cost metric. Potentially, this is an expensive operation.


Column Simulated Annealing; minimize distance between columns by swapping them using simulated annealing.


Row Sort; sort rows lexicographically.


Row Nub; remove duplicate rows from the dependency matrix. This is always a win. Requires rs.


Row sUbsume; try to remove more rows than nubbing, thereby trading speed for memory. Requires rs.


Over-approximate all must-write to may-write. May-write supports the copy (-) dependency.


Over-approximate read to read+write. Allows read dependencies to also subsume write dependencies.


Over-approximate must-write to read+write. Allows must-write dependencies to also subsume read dependencies.


Over-approximate may-write to read+write. Allows may-write dependencies to also subsume read dependencies.


Over-approximate copy to read. May be useful for testing whether the dependency matrix is correct.


Use special heuristics to move read dependencies before write dependencies. Often a win in symbolic state space generation.


Writes metrics of the selected (sr, sw, sc) matrix to stdout. The following metrics are printed:

  1. Event span: the total distance between the minimum and maximum column of rows.

  2. Normalized event span: the event span divided by the size of the matrix (rows x columns).

  3. Weighted event span: the event span weighted by a moment that signifies the location of the span within the row (we use moment 1; see Siminiceanu et al.).

  4. Normalized weighted event span: the weighted event span divided by the size of the matrix (rows x columns).
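For a concrete reading of metrics 1 and 2, here is a small sketch (the weighted variants additionally factor in a positional moment and are omitted; we also read "distance" literally as max minus min, which may differ from LTSmin's exact definition):

```python
def event_spans(matrix):
    """Event span metrics for a 0/1 dependency matrix (list of rows).
    The span of a row is the distance between its first and last
    nonzero column, taken here as max - min."""
    rows, cols = len(matrix), len(matrix[0])
    total = 0
    for row in matrix:
        deps = [j for j, x in enumerate(row) if x]
        if deps:
            total += deps[-1] - deps[0]
    return total, total / (rows * cols)

matrix = [[1, 0, 1, 0],   # span 2 (columns 0..2)
          [0, 1, 1, 0],   # span 1 (columns 1..2)
          [0, 0, 0, 1]]   # span 0 (single column)
```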


Select the read matrix for cs, csa, cw, ca, rs, bcm, bs, bk, vcm, vacm, vgps and mm.


Select the write matrix (default) for cs, csa, cw, ca, rs, bcm, bs, bk, vcm, vacm, vgps and mm. The write matrix is the default selection, because only write dependencies can create new nodes in decision diagrams. A bad variable order in the write matrix thus leads to a large number of peak nodes during reachability analysis. A bad variable order in the read matrix can also lead to a slow reachability analysis, but typically not as severe as a bad variable order in the write matrix. Slow reachability analysis due to a bad variable order in the read matrix causes many recursive calls to the relational product operation. Typically it is best that read dependencies are moved to the top DD level, thus leftmost in the read matrix.


Select the combined matrix for cs, csa, cw, ca, rs, bcm, bs, bk, vcm, vacm, vgps and mm. The combined matrix is the logical or of the read and write matrix.


Use a bipartite graph (default) for bcm, bk, bs, vcm, vacm and vgps.


Create a total graph of the bipartite graph for bcm, bk, bs, vcm, vacm and vgps. This adds more vertices and edges thus increasing computation time, but sometimes provides a better ordering.

Below, the sparse matrix algorithms prefixed with b are only available when LTSmin is compiled with Boost. Algorithms prefixed with v are only available when LTSmin is compiled with ViennaCL.


Apply Boost’s Cuthill-McKee ordering.


Apply Boost’s King ordering.


Apply Boost’s Sloan ordering.


Apply ViennaCL’s Cuthill-McKee ordering.


Apply ViennaCL’s advanced Cuthill-McKee ordering.


Apply ViennaCL’s Gibbs-Poole-Stockmeyer ordering.


Apply FORCE ordering.


Apply row permutation PERM, where PERM is a sequence of row numbers, separated by a comma. E.g. the vector 2,1,0 will swap row 2 with row 0.


Apply column permutation PERM, where PERM is a sequence of column numbers, separated by a comma. E.g. the vector 2,1,0 will swap column 2 with column 0.


Insert columns before other columns in the dependency matrix.

PAIRS is a comma-separated sequence of pairs <(C.C',)+>. E.g. --col-ins=1.0 will insert column 1 before column 0. Each pair contains a source column C and a target column C'. During the application of the whole sequence, C always refers to the column number before the application of the whole sequence, while the target C' always refers to the column number during the application of the whole sequence. This means that, for example, in --col-ins=2.0,1.0, first column 2 is inserted at position 0, then column 1 is inserted at position 0. The result is that the original column 2 ends up at position 1. Another important detail is that when --col-ins is used, all source columns are temporarily "removed" from the dependency matrix during reordering, i.e. when the -r, --regroup option is given. After reordering is done, the columns are inserted at the desired target positions. In other words, the reordering algorithms given by the option -r, --regroup are only applied to the dependency matrix with the source columns removed.
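This reading of the insertion semantics can be checked with a small simulation over column labels (a sketch, not LTSmin code):

```python
def col_ins(columns, pairs):
    """Simulate --col-ins: all source columns are removed first, then
    each (source, target) pair inserts the source at the target index,
    in the order the pairs are given."""
    sources = {src for src, _ in pairs}
    result = [c for c in columns if c not in sources]
    for src, tgt in pairs:
        result.insert(tgt, src)
    return result

# --col-ins=2.0,1.0 on a four-column matrix
col_ins([0, 1, 2, 3], [(2, 0), (1, 0)])
```

The result places the original column 2 at position 1, matching the worked example above.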


Use WEIGHT1 as the first weight for the Sloan algorithm (see Boost's sloan_ordering documentation).


Use WEIGHT2 as the second weight for the Sloan algorithm (see Boost's sloan_ordering documentation).


Print Boost’s and ViennaCL’s graph metrics (only available when LTSmin is compiled with Boost or ViennaCL).


Exit with 0 when regrouping is done.


Print timing information for each transformation given in the sequence passed to --regroup (-r).


Compute a parity game for the mu-calculus formula.

The mu-calculus formula is provided in the file FILE or directly as a string FORMULA. The syntax and tool support are described in ltsmin-mucalc(5).


Compute the cross product of a Buchi automaton and the specification.

LTLFILE is a file containing a Linear Temporal Logic formula (see ltsmin-ltl(5)); its content can also be provided directly as LTLFORMULA. The formula is converted to a Buchi automaton, and the synchronous cross product with the original specification is then computed on-the-fly. A state label is added to encode accepting states.


Change the semantics of the cross product generated using --ltl.

Three options are available, the default is automatically chosen based on the atomic predicates in the formula.


Use semantics equal to those of the SPIN model checker. From the source state all transitions are generated; then state predicates are evaluated on the source state, and the Buchi automaton moves according to these predicates. Deadlocks in the LTS cause the Buchi automaton to progress independently. This option is incompatible with edge-based atomic predicates. It is the default if no edge-based atomic predicates are found in the LTL formula.


Use textbook semantics. A new initial state is generated with an outgoing transition to the initial state. Now, predicates are evaluated on the target state and the Buchi automaton moves according to these predicates. Deadlocks in the LTS do NOT cause the Buchi to progress independently. This option is incompatible with edge-based atomic predicates.


Same as spin semantics, but now deadlocks in the LTS do NOT cause the Buchi to progress independently. This option is the default if edge-based atomic predicates are found in the LTL formula.


Activate partial-order reduction.

Partial-Order Reduction (POR) can reduce the state space when searching for deadlocks (-d) or accepting cycles (--ltl). Two POR algorithms are available:


Uses a cost-based heuristic beam search to find the smallest stubborn set.


Uses Valmari’s deletion algorithm to find the smallest stubborn set by iteratively removing transitions while maintaining the constraints.


Use weak commutativity in partial-order reduction, possibly yielding better reductions.


Use leaping partial-order reduction, by combining several disjoint stubborn sets sequentially.

Environment Variables

LTSmin supports the following environment variables.

Table 1. Environment Variables:

Name              Unit   Description
LTSMIN_MEM_SIZE   bytes  Sets the amount of system memory to the given value.
LTSMIN_NUM_CPUS   count  Sets the number of CPUs to the given value.

The variables LTSMIN_MEM_SIZE and LTSMIN_NUM_CPUS are particularly relevant when neither sysconf(3) nor cgroups(7) can properly detect these limits, e.g. when LTSmin runs on Travis CI in a Docker container.
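For example, the limits can be pinned by hand before a containerized run (the 8 GiB and 4 CPU figures are hypothetical, and we assume LTSMIN_MEM_SIZE is given in bytes):

```shell
# Hypothetical resource limits; LTSMIN_MEM_SIZE assumed to be in bytes
export LTSMIN_MEM_SIZE=$((8 * 1024 * 1024 * 1024))  # 8 GiB
export LTSMIN_NUM_CPUS=4
```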

Development Options


Make use of GetTransitionsLong calls.

A language module can have three next state calls: GetTransitionsAll, GetTransitionsLong and GetTransitionsShort. The first call is used by default, the second call is used when this flag is passed and the third form is used if --cache is enabled. This allows all three calls in a language module to be tested.


Write the full state vector.

General Options


Increase the level of verbosity.


Be quiet; do not print anything to the terminal.


Enable debugging output for file.c (option allowed multiple times).


Print version string of this tool.

-h, --help

Print help text.


Print short usage summary.

File Formats

The following file formats are supported:

  • Directory format (*.dir, *.dz and *.gcf)

  • Vector format (*.dir, *.gcd, *.gcf)

  • Binary Coded Graphs (*.bcg)

  • Aldebaran Format (*.aut)

  • FSM Format (*.fsm)

  • MRMC/Prism (*.tra+*.lab)

  • PGSolver format (*.pg)

If a tool operates in streaming mode then support for file formats is limited.

The directory format uses multiple files to store an LTS. The various extensions indicate how these files are stored in the underlying file system. The *.dir format uses multiple files in a directory without compression. If the LTS has one edge label, no state labels, and does not store state vectors, then these files are backwards compatible. Neither the *.dz nor the *.gcf format is backwards compatible; both use compression. The first uses a directory for the files; the second interleaves the files into a single file.

If you try to open a *.dir with the old mCRL tools and you get the error message:

wrong file version: 0

then the directory is probably compressed. If that happens then you may convert the directory by typing the command:

ltsmin-convert bad.dir good.dir


LTL model checking requires a Buchi cross product with accepting states marked with accepting state labels, and a search strategy which takes these labels into account (see ndfs / scc strategies of the sequential and multi-core tools).

For generating a cross product, the following options are supported:

  1. The input specification is already combined with a Buchi automaton, and states are appropriately marked with accepting labels. Currently two frontends support this functionality: SpinS (see documentation on prom tools), and DiVinE (see documentation on dve tools).

  2. LTSmin tools can build the cross product of input specification and Buchi automaton (through a PINS2PINS layer enabled with the --ltl option) themselves, and will annotate cross-product states appropriately when they are accepting. See also --labels option in the PINS tools.

Combination with the Partial Order Reduction PINS2PINS layer (--por) requires the latter option.
As of LTSmin release 1.9, the default LTL semantics mimic those of SPIN/DiVinE. Before that, LTSmin implemented textbook semantics. See the documentation on the enumerative tools for more information.



Successful termination.


Counter example found.


Some error occurred.


Send questions, bug reports, comments and feature suggestions to the LTSmin Support Team.