OMNeT++ supports parallel distributed simulation. This page describes how to set it up with your simulation model.
Before getting deep into installing and configuring, assess the parallelization potential of your simulation model: depending on the model, you may well get a massive slowdown instead of a speedup, regardless of your efforts. You have been warned.
Also consider whether you really need parallel simulation. While you are developing your model, parallel simulation makes debugging much harder. If you are running your simulation to get results (using Cmdenv), you will probably need several runs with different parameters and different seeds. Those runs can execute simultaneously, independently of each other, and this is a much easier (and usually more efficient) way to exploit your multi-core system than parallel simulation.
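For instance, independent runs can be launched concurrently from the shell. The sketch below is illustrative only: the config name MyConfig and the run numbers are placeholders, and the -u Cmdenv and -r options assume a reasonably recent OMNeT++ version.

```shell
# Launch several independent runs of the same simulation, one per core.
# Each run uses a different run number (and thus different seeds/parameters).
for run in 0 1 2 3; do
    ./cqn -u Cmdenv -c MyConfig -r $run &
done
wait   # wait for all background runs to finish
```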
OMNeT++ supports several communication mechanisms ("transport layers") between partitions of the simulation. However, named pipes and file-based communication are only suitable for experimentation and debugging. To get good performance, you need to compile OMNeT++ with MPI support, and use that.
Install an MPI package (we used LAM/MPI, but MPICH would probably do as well), and make sure OMNeT++'s ./configure script finds it (you may need to tweak the MPI-related settings in configure.user), then (re)compile OMNeT++.
There are many free and commercial MPI implementations; we tried DeinoMPI, with good results. See the relevant thread in the mailing list archives for details.
Configuring the simulation in omnetpp.ini
First, read the Parallel Distributed Simulation chapter in the OMNeT++ Manual, and try out the parallel CQN demo.
Turning on parallel simulation, choosing MPI as the communication mechanism, and selecting the null-message protocol as the synchronization algorithm:

parallel-simulation = true
parsim-communications-class = "cMPICommunications"
parsim-synchronization-class = "cNullMessageProtocol"
Then you need to partition your model (if this step is missing, the simulation will only use one CPU). Assign different partition IDs to the parts of the model that you want to run on different processors (you can use wildcards):
*.tandemQueue[0]**.partition-id = 0
*.tandemQueue[1]**.partition-id = 1
*.tandemQueue[2]**.partition-id = 2
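Putting the settings together, a minimal omnetpp.ini sketch could look like this (the network name ClosedQueueingNetA and the tandemQueue submodule names are assumptions based on the CQN example; substitute your own):

```ini
[General]
network = ClosedQueueingNetA                        # assumed network name
parallel-simulation = true
parsim-communications-class = "cMPICommunications"
parsim-synchronization-class = "cNullMessageProtocol"

# assign parts of the model to partitions (wildcards allowed)
*.tandemQueue[0]**.partition-id = 0
*.tandemQueue[1]**.partition-id = 1
*.tandemQueue[2]**.partition-id = 2
```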
Running the model under MPI:
First, build your simulation program with Cmdenv. (With Tkenv you are likely to get a "Tk_Init failed: no display name and no $DISPLAY environment variable" error; see below.)
Use the mpirun command (part of MPI), which takes care of launching the processes on the different processors of the MPI virtual machine. The -p argument of the OMNeT++ simulation program is not needed; it is only used with pipe- or file-based communication, not with MPI. An example:

mpirun -np 3 ./cqn

This runs the cqn simulation in three processes (-np 3). The -np argument must correspond to the partition IDs you used in omnetpp.ini (highest ID + 1). Be sure to have both the parallel-simulation = true and the parsim-communications-class = "cMPICommunications" lines in omnetpp.ini; otherwise you will just be running three identical instances of the same sequential simulation, or they will not communicate using MPI.
configure script doesn't recognize MPI
Check the MPI_... settings in configure.user; you probably need to set them manually to get MPI recognized.
MPI-related problems

Verify that you can compile and run a simple MPI program, such as this one:
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int numPartitions, myRank;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numPartitions);
    MPI_Comm_rank(MPI_COMM_WORLD, &myRank);
    printf("Hi! I am process %d out of %d\n", myRank, numPartitions);
    MPI_Finalize();
    return 0;
}
Save it as mpitest.c, compile it (mpicc -o mpitest mpitest.c), and try running it (mpirun -np 3 ./mpitest). You should get something like:
Hi! I am process 1 out of 3
Hi! I am process 0 out of 3
Hi! I am process 2 out of 3
Tk_Init failed: no display name and no $DISPLAY environment variable
This error means that the Tkenv GUI could not be started on the remote MPI nodes. You will usually want to run parallel simulations under Cmdenv (without a GUI): just change USERIF_LIBS in the makefile to $(CMDENV_LIBS), and re-link the simulation.
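For example, the corresponding line in the generated makefile would become something like this (the exact variable name may differ between OMNeT++ versions):

```makefile
USERIF_LIBS = $(CMDENV_LIBS)
```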
If you really want the Tkenv GUIs of the remote nodes to pop up on your screen, you need to make sure that the processes launched by MPI have the DISPLAY environment variable set and pointing to your X display (e.g. DISPLAY=myhost:0), and that the X server accepts incoming connections from those nodes as well (check the xhost command). But there is usually little reason to run MPI programs with a GUI.
Parsim error: no doPacking() function for type xxxx or its base class
You have to provide doPacking() and doUnpacking() functions for the given type. Please check the ParsimDoPacking page.
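As a rough sketch (the struct MyState and its fields are hypothetical, and the exact cCommBuffer signatures may vary between OMNeT++ versions), such functions typically look like this:

```cpp
// Hypothetical user-defined type transferred between partitions
struct MyState {
    int count;
    double load;
};

// Tell parsim how to serialize/deserialize MyState via a cCommBuffer,
// which provides pack()/unpack() overloads for primitive types.
void doPacking(cCommBuffer *buffer, const MyState& s)
{
    buffer->pack(s.count);
    buffer->pack(s.load);
}

void doUnpacking(cCommBuffer *buffer, MyState& s)
{
    buffer->unpack(s.count);
    buffer->unpack(s.load);
}
```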
Before going deeply into running your simulation in parallel, you may want to do a back-of-the-envelope calculation regarding the feasibility of getting any speedup out of it. Calculate the coupling factor

lambda = L * E / (tau * P)

where L is the lookahead (in simulated seconds), E the event density (events per simulated second), tau the latency of the communication layer (in seconds), and P the performance (events processed per second). Lambda should be well above 1.0 (10..20 or larger).
For practical guidance for applying the above see ParsimEfficiency.