Here are the results of the Pallas MPI Benchmarks:
| Interconnection         | MPI implementations                  | Number of nodes | Processes per node |
|-------------------------|--------------------------------------|-----------------|--------------------|
| Fast Ethernet           | NT-MPICH, MPICH.NT, MPI/Pro, WMPI    | 8               | 1                  |
| Gigabit Ethernet        | NT-MPICH, MPICH.NT, MPI/Pro, WMPI 1) | 8               | 1                  |
| Shared Memory           | NT-MPICH, MPICH.NT, WMPI 2)          | 1               | 8                  |
| Fast & Gigabit Ethernet | NT-MPICH                             | 8               | 1                  |
| SCI, Gigabit Ethernet   | SCI-MPICH, NT-MPICH                  | planned         |                    |
1) We had to disable some internal optimizations in MPI/Pro for message sizes up to 256 kB (via the command-line argument -msti_tcp_pin_size 300000), because otherwise we observed severe performance drops in some benchmarks at message sizes between 64 kB and 256 kB.
2) We could not benchmark MPI/Pro because our evaluation license had unexpectedly expired.
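For context on what these runs measure: ping-pong-style benchmarks such as those in the Pallas suite derive bandwidth from the round-trip time of a message exchange between two processes, crediting the message with half the round trip for each direction. A minimal sketch of that calculation follows; the function name, the timing value, and the use of 10^6 bytes per MB are illustrative assumptions, not figures from our measurements.

```python
# Sketch of how a ping-pong benchmark converts a timed message
# exchange into a bandwidth figure. All values here are hypothetical.

def pingpong_bandwidth(msg_bytes: int, round_trip_s: float) -> float:
    """Return bandwidth in MB/s, assuming the message travels one way
    in half the measured round-trip time (1 MB = 10**6 bytes here)."""
    one_way_s = round_trip_s / 2.0
    return (msg_bytes / one_way_s) / 1e6

# Example: a 256 kB message with a hypothetical 50 ms round trip
bw = pingpong_bandwidth(256 * 1024, 0.050)
print(f"{bw:.1f} MB/s")  # -> 10.5 MB/s
```

This also illustrates why the 64 kB to 256 kB range in footnote 1) matters: a constant per-message stall in that size window depresses the reported bandwidth for exactly those message sizes.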