| MPI_COMM_JOIN(fd, intercomm) | |
| IN fd | socket file descriptor | 
| OUT intercomm | new intercommunicator (handle) | 
MPI_COMM_JOIN is intended for MPI implementations that exist in an environment supporting the Berkeley Socket interface [14,17]. Implementations that exist in an environment not supporting Berkeley Sockets should provide the entry point for MPI_COMM_JOIN and should return MPI_COMM_NULL.
This call creates an intercommunicator from the union of two MPI processes which are connected by a socket. MPI_COMM_JOIN should normally succeed if the local and remote processes have access to the same implementation-defined MPI communication universe.
 
 
 
Advice to users. An MPI implementation may require a specific communication medium for MPI communication, such as a shared memory segment or a special switch. In this case, it may not be possible for two processes to successfully join even if there is a socket connecting them and they are using the same MPI implementation. (End of advice to users.)
 
 
 
Advice to implementors. A high-quality implementation will attempt to establish communication over a slow medium if its preferred one is not available. If implementations do not do this, they must document why they cannot do MPI communication over the medium used by the socket (especially if the socket is a TCP connection). (End of advice to implementors.)
 
fd is a file descriptor representing a socket of type SOCK_STREAM (a two-way reliable byte-stream connection). Non-blocking I/O and asynchronous notification via SIGIO must not be enabled for the socket. The socket must be in a connected state. The socket must be quiescent when MPI_COMM_JOIN is called (see below). It is the responsibility of the application to create the socket using standard socket API calls.
MPI_COMM_JOIN must be called by the process at each end of the socket. It does not return until both processes have called MPI_COMM_JOIN. The two processes are referred to as the local and remote processes.
MPI uses the socket to bootstrap creation of the intercommunicator, and for nothing else. Upon return from MPI_COMM_JOIN, the file descriptor will be open and quiescent (see below).
If MPI is unable to create an intercommunicator, but is able to leave the socket in its original state, with no pending communication, it succeeds and sets intercomm to MPI_COMM_NULL.
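For illustration only, the following sketch shows the client side of a join in a POSIX/Berkeley socket environment. The helper name, host, and port are placeholders and are not part of the MPI binding; only MPI_Comm_join and the standard socket calls are defined elsewhere.

```c
#include <mpi.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <string.h>
#include <unistd.h>

/* Hypothetical client-side helper: connect a blocking SOCK_STREAM socket
   to a known address, then join with the process at the other end. */
int join_as_client(const char *host, int port, MPI_Comm *intercomm)
{
    struct sockaddr_in addr;
    int fd = socket(AF_INET, SOCK_STREAM, 0);   /* blocking by default,
                                                    as MPI_COMM_JOIN requires */
    if (fd < 0) return -1;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(port);
    inet_pton(AF_INET, host, &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        close(fd);
        return -1;
    }

    /* The socket is connected and quiescent: nothing has been written yet. */
    int err = MPI_Comm_join(fd, intercomm);

    /* MPI_COMM_NULL indicates that no intercommunicator could be created,
       e.g. because the two processes do not share an MPI communication
       universe; the socket is left open and quiescent either way. */
    if (err == MPI_SUCCESS && *intercomm != MPI_COMM_NULL) {
        /* intercommunicator ready for use */
    }
    return err;
}
```

The server side would be symmetric: create, bind, and listen on a socket, accept a connection, and pass the accepted descriptor to MPI_COMM_JOIN.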
The socket must be quiescent before MPI_COMM_JOIN is called and after MPI_COMM_JOIN returns. More specifically, on entry to MPI_COMM_JOIN, a read on the socket will not read any data that was written to the socket before the remote process called MPI_COMM_JOIN. On exit from MPI_COMM_JOIN, a read will not read any data that was written to the socket before the remote process returned from MPI_COMM_JOIN. It is the responsibility of the application to ensure the first condition, and the responsibility of the MPI implementation to ensure the second. In a multithreaded application, the application must ensure that one thread does not access the socket while another is calling MPI_COMM_JOIN, and that MPI_COMM_JOIN is not called concurrently on the same socket by more than one thread.
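One way (not mandated by MPI) for the application to satisfy the entry-side quiescence condition is an explicit handshake on the socket immediately before joining. The helper below is a hypothetical sketch; it assumes all earlier application data on the socket has already been consumed, so the peer's one-byte marker is the last unread byte.

```c
#include <mpi.h>
#include <unistd.h>

/* Hypothetical helper: both processes call this after finishing their own
   pre-join traffic on fd.  Each side writes a one-byte marker and waits for
   the peer's marker, so that on entry to MPI_COMM_JOIN no data written
   before the peer's call remains unread. */
int join_when_quiescent(int fd, MPI_Comm *intercomm)
{
    char token = 0;
    if (write(fd, &token, 1) != 1) return -1;   /* last pre-join write */
    if (read(fd, &token, 1) != 1)  return -1;   /* consume peer's marker */

    /* Assuming all earlier data was already read, the socket is now
       quiescent on both ends. */
    return MPI_Comm_join(fd, intercomm);
}
```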
 
 
 
Advice to implementors. MPI is free to use any available communication path(s) for MPI messages in the new communicator; the socket is only used for the initial handshaking. (End of advice to implementors.)
 
  
MPI_COMM_JOIN uses non-MPI communication to do its work. The interaction of non-MPI communication with pending MPI communication is not defined. Therefore, the result of calling MPI_COMM_JOIN on two connected processes (see Section Releasing Connections for the definition of connected) is undefined.
  
The returned communicator may be used to establish MPI communication with additional processes, through the usual MPI communicator creation mechanisms.
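For example, the two joined processes might merge the intercommunicator into an intracommunicator and build further communicators from it. The helper below is a sketch, not part of the binding; is_server is an application-defined flag choosing the rank ordering in the merge.

```c
#include <mpi.h>

/* Sketch: treat the intercommunicator returned by MPI_COMM_JOIN like any
   other intercommunicator.  Here the two joined processes are merged into a
   single intracommunicator, which can then be used with the usual
   communicator creation calls (MPI_COMM_SPLIT, MPI_COMM_DUP, ...). */
void merge_joined(MPI_Comm intercomm, int is_server)
{
    MPI_Comm intracomm;
    int rank, size;

    MPI_Intercomm_merge(intercomm, /* high = */ is_server, &intracomm);

    MPI_Comm_rank(intracomm, &rank);
    MPI_Comm_size(intracomm, &size);   /* size == 2: the two joined processes */

    MPI_Comm_free(&intracomm);
}
```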