We do our best to insulate our users from needing to think about parallel issues in their applications. However, even with our best efforts there are still times when you need to perform your own MPI communication... but even in those cases we provide helper functions.
Today we changed the way you access those helper functions. In the past they used a "global" MPI_COMM_WORLD. This created quite a bit of trouble when we wanted to run many MOOSE-based applications on separate communicators (such as with MultiApp). With today's change, every MOOSE-based object has access to the "correct" communicator it is supposed to use.
So, in the past you might have used functions that looked like this:
libMesh::n_processors()
libMesh::processor_id()
Parallel::max()
Parallel::send()
Now you will access these same functions using:
n_processors()
processor_id()
_communicator.max()
_communicator.send()
Basically, every MOOSE object gained two functions, n_processors() and processor_id(), and a _communicator member variable that can be used to call our MPI helpers. For a complete list of the functions you can call on _communicator, please see the libMesh Communicator Doxygen.
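To illustrate the pattern, here is a minimal, self-contained sketch. The Communicator and ParallelObject classes below are hypothetical stand-ins (reduced to serial no-ops so the example compiles without MPI or libMesh); they only mimic the shape of the real API, in which _communicator wraps whichever MPI communicator the object was created with.

```cpp
#include <algorithm>
#include <vector>

// Hypothetical stand-in for libMesh's Parallel::Communicator, reduced to a
// serial no-op so this sketch compiles without MPI. In a real MOOSE build,
// _communicator wraps the MPI communicator the object was constructed with.
class Communicator
{
public:
  unsigned int size() const { return 1; } // number of ranks (1 in serial)
  unsigned int rank() const { return 0; } // this rank's id (0 in serial)

  // Parallel max reduction; with a single rank the value is already the max.
  void max(double & /*value*/) const {}
};

// Plays the role of the base class every MOOSE object now inherits from:
// it carries a reference to the communicator the object should use,
// instead of reaching for a global MPI_COMM_WORLD.
class ParallelObject
{
public:
  explicit ParallelObject(const Communicator & comm) : _communicator(comm) {}

  unsigned int n_processors() const { return _communicator.size(); }
  unsigned int processor_id() const { return _communicator.rank(); }

protected:
  const Communicator & _communicator;
};

// Example object performing its own parallel work through the member
// communicator rather than a global one.
class MaxFinder : public ParallelObject
{
public:
  using ParallelObject::ParallelObject;

  double globalMax(const std::vector<double> & local) const
  {
    double m = *std::max_element(local.begin(), local.end());
    _communicator.max(m); // would reduce across all ranks in a real run
    return m;
  }
};
```

The key design point is that the communicator travels with the object: when a MultiApp hands sub-applications their own communicators, every object automatically performs its reductions on the right one.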
Your applications have been modified to reflect these changes.
The update_and_rebuild_libmesh.sh script has been modified to remove the configure flag for creating the COMM_WORLD we no longer depend on. From this moment forward you will get compile errors if you attempt to use one of the old functions. All of our testing infrastructure is being updated to reflect this as well, so you will see errors there too if you try to use one of the old functions.
If you have any questions, please email the mailing list.