
Revisions №57861

branch: master (№57861)
Committed by: Vikram K. Mulligan
GitHub commit link: 9f069d8ecc1be01e (pull request #525)
Difference from previous tested commit: code diff
Commit date: 2015-05-21 16:28:32

Merge pull request #525 from RosettaCommons/vmullig/mpi_integration_test

Adding MPI-mode integration tests

This modifies integration.py to create a special MPI class of integration tests. Running integration.py with the old options preserves the old behaviour (non-MPI tests). Running with the --mpi_tests flag runs only those tests that have a command.mpi file in addition to their command file. The command.mpi file specifies an MPI-mode test (and contains the explicit call to mpirun). If no --extras=xxx flag is provided, then --mpi_tests implies --extras=mpi.

Tasks:
-- Modify the bundlegridsampler_design_nstruct_mode test to have an MPI variant.
-- Modify integration.py to add the --mpi_tests flag.
-- Modify integration.py to run MPI tests ONLY in cases for which a command.mpi file exists.
-- Modify the command.mpi scripts to split output by process.
-- Modify the MPI JobDistributors to have a sequential job assignment option (each job sent to each slave in sequence), set to false by default.
-- Add documentation for this type of test. (This was added to wiki.rosettacommons.org. When are we going to move the documentation on testing Rosetta to the Gollum wiki?)

Put off until later:
-- Modify integration.py to respect the -j flag (i.e. if the user specifies -j 10, and jobs 1 and 2 each launch 4 processes, count 8 jobs running and do not launch another 4-process job). For now, the -j flag launches N jobs, each of which could launch multiple processes (slightly risky).
-- Add support for alternatives to the mpirun command, if there's a need for this.

Two important notes about this:
-- MPI behaviour is inherently stochastic, since different processes can finish in different orders and request jobs in different orders. Integration tests must be designed with this in mind. Note that in some cases, calling mpirun with -np 2 (just one master and one slave) can solve this, though this might not test everything that you want to test.
-- I, um, ahem, don't actually know Python. It looks like pretty much any other programming language, though, so I THINK I'm modifying integration.py properly.
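The flag behaviour described above ("--mpi_tests implies --extras=mpi unless --extras is given") can be sketched as follows. This is not the actual integration.py code; the function name and use of argparse are assumptions for illustration only:

```python
import argparse

def parse_options(argv):
    """Hypothetical sketch of the flag handling described above."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--mpi_tests", action="store_true",
                        help="Run only tests that provide a command.mpi file.")
    parser.add_argument("--extras", default=None,
                        help="Build extras, e.g. --extras=mpi.")
    options = parser.parse_args(argv)
    # --mpi_tests implies --extras=mpi, but only when the user
    # did not pass an explicit --extras=xxx flag.
    if options.mpi_tests and options.extras is None:
        options.extras = "mpi"
    return options
```

For example, `parse_options(["--mpi_tests"])` yields `extras == "mpi"`, while an explicit `--extras` value is left untouched.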
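The selection rule (an MPI test is a test directory containing a command.mpi file in addition to its command file) could look roughly like this. Again a sketch, not the real integration.py; the function name and directory layout are assumptions:

```python
import os

def find_tests(tests_root, mpi_tests=False):
    """Return the names of test directories to run.

    In MPI mode, keep only tests that provide a command.mpi file
    alongside the usual command file; otherwise keep every test
    directory that has a command file.
    """
    wanted = "command.mpi" if mpi_tests else "command"
    selected = []
    for name in sorted(os.listdir(tests_root)):
        test_dir = os.path.join(tests_root, name)
        if not os.path.isdir(test_dir):
            continue
        # An MPI test must still have the ordinary command file.
        if mpi_tests and not os.path.isfile(os.path.join(test_dir, "command")):
            continue
        if os.path.isfile(os.path.join(test_dir, wanted)):
            selected.append(name)
    return selected
```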
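The sequential job assignment option mentioned in the tasks matters because it makes the job-to-process mapping deterministic, which helps with the stochasticity noted above. A minimal Python sketch of the idea (the actual JobDistributors are C++; this helper and its name are purely illustrative):

```python
def assign_jobs_sequentially(jobs, n_slaves):
    """Round-robin each job to each slave in sequence.

    Unlike the default first-come-first-served model, where whichever
    slave requests work first gets the next job, this mapping depends
    only on the job index, so reruns produce identical assignments.
    """
    assignment = {rank: [] for rank in range(1, n_slaves + 1)}
    for i, job in enumerate(jobs):
        rank = 1 + (i % n_slaves)  # rank 0 is reserved for the master
        assignment[rank].append(job)
    return assignment
```

With one master and one slave (mpirun -np 2), every job goes to the single slave, which is why that configuration sidesteps the ordering nondeterminism entirely.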

...