%--------------------------------------------------------------------------------
\begin{center}
\begin{tabular}{|p{15cm}|}
\hline
\vspace{-5mm} \section{mpimod.f90 / mpimod\_stub.f90} \vspace{-5mm}
\\
\hline
\vspace{1mm} {\bf General} The module {\module mpimod.f90} contains the interface
subroutines
to MPI (Message Passing Interface) needed for (massively) parallel computing. Several MPI
routines are called from the module. The interfaces to other modules are provided by numerous
subroutines whose names start with {\sub mp}. Subroutines from {\module mpimod.f90} are
called in several other modules. There are no direct calls to MPI outside of
{\module mpimod.f90}. This encapsulation makes it possible to
use {\module mpimod\_stub.f90} for single-CPU runs without
changing any other part of the model code (see the sketch following this overview).
The selection is done automatically by MoSt or manually
by editing "Most15/puma/src/make\_puma".  \vspace{3mm} 
\\
\hline
\vspace{1mm} {\bf Input/Output} {\module mpimod.f90} and {\module mpimod\_stub.f90}
do not use any extra input or
output files. No namelist input is required. \vspace{3mm} \\
\hline
\vspace{2mm} {\bf Structure} Internally, the subroutines of {\module mpimod.f90} use the
FORTRAN-90 module
{\module mpimod}, which in turn uses the global common module {\module pumamod} from
{\module pumamod.f90} and the MPI module {\module mpi}. {\module mpimod\_stub.f90}
does not use any module. The following subroutines are included in {\module
mpimod.f90}:

\begin{center}
\begin{tabular}{l p{2cm} l}
Subroutine & &Purpose \\
&& \\
{\sub mpbci} && broadcast 1 integer \\
{\sub mpbcin} & &broadcast n integers \\
{\sub mpbcr} & &broadcast 1 real \\
{\sub mpbcrn} & &broadcast n reals \\
{\sub mpbcl} && broadcast 1 logical \\
{\sub mpscin} & &scatter n integers \\
{\sub mpscrn} && scatter n reals \\
{\sub mpscgp} && scatter grid point field \\
{\sub mpgagp} && gather grid point field \\
{\sub mpgallgp} && gather grid point field to all \\
{\sub mpscsp} & &scatter spectral field \\
{\sub mpgasp} && gather spectral field \\
{\sub mpgacs} && gather cross section \\
{\sub mpgallsp} && gather spectral field to all \\
{\sub mpsum} && sum spectral field \\
{\sub mpsumsc} && sum and scatter spectral field \\
{\sub mpsumr} && sum n reals \\
{\sub mpsumbcr}& & sum and broadcast n reals \\
{\sub mpstart} & &initialize MPI \\
{\sub mpstop} & &finalize MPI \\
\end{tabular}
\end{center}
\vspace{3mm} \\
\hline
\end{tabular}
\end{center}
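
The encapsulation described under {\bf General} can be illustrated with the simplest
wrapper, the broadcast of one integer. The following is a minimal sketch, not the actual
source: the root rank, the communicator ({\tt MPI\_COMM\_WORLD} here, while the model may
keep its own communicator in {\module pumamod}), and the error variable {\tt mpinfo} are
assumptions.

\begin{verbatim}
! Sketch of the parallel version (mpimod.f90): broadcast one
! integer from the root process to all other processes.
subroutine mpbci(k)
   use mpi
   implicit none
   integer :: k
   integer :: mpinfo                         ! error flag (assumed name)
   call mpi_bcast(k,1,MPI_INTEGER,0,MPI_COMM_WORLD,mpinfo)
end subroutine mpbci

! Sketch of the stub version (mpimod_stub.f90): on a single CPU
! there is nothing to broadcast, so the routine is a no-op.
subroutine mpbci(k)
   implicit none
   integer :: k
end subroutine mpbci
\end{verbatim}

Because every caller only sees the {\sub mpbci} interface, linking against
{\module mpimod\_stub.f90} instead of {\module mpimod.f90} is sufficient to obtain a
single-CPU executable without touching the calling code.
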

\newpage

\begin{center}
\begin{tabular}{|p{15cm}|}
\hline
\begin{center}
\begin{tabular}{l p{2cm} l}
Subroutine & &Purpose \\
&& \\
{\sub mpreadgp}& & read and scatter grid point field \\
{\sub mpwritegp}& & gather and write grid point field \\
{\sub mpwritegph} && gather and write (with header) grid point field \\
{\sub mpreadsp} & &read and scatter spectral field \\
{\sub mpwritesp} &&gather and write spectral field \\
{\sub mpi\_info} && give information about setup \\
{\sub mpgetsp}   && read spectral array from restart file \\
{\sub mpgetgp}   && read gridpoint array from restart file \\
{\sub mpputsp}   && write spectral array to restart file \\
{\sub mpputgp}   && write gridpoint array to restart file \\
{\sub mpmaxval}  && compute maximum value of an array \\
{\sub mpsumval}  && compute sum of all array elements \\
\end{tabular}
\end{center}

\vspace{3mm} \\

\hline
\end{tabular}
\end{center}
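
The I/O wrappers listed above combine communication with file access: on a parallel run
the root process gathers a distributed field and performs the actual read or write. The
following is a minimal sketch of the gather-and-write pattern, loosely modelled on
{\sub mpwritegp}; the argument list, array dimensions, data type, and unit handling are
assumptions, not the actual interface.

\begin{verbatim}
! Sketch of a gather-and-write wrapper: each process holds its
! local part of a grid point field; the root process collects the
! full field and writes it to the (already opened) unit kunit.
subroutine mpwritegp_sketch(kunit,pf,klen,npro)
   use mpi
   implicit none
   integer :: kunit, klen, npro     ! unit, local length, number of processes
   real    :: pf(klen)              ! local part of the grid point field
   real    :: zf(klen*npro)         ! full field, used on the root only
   integer :: mpinfo, mypid

   call mpi_comm_rank(MPI_COMM_WORLD,mypid,mpinfo)
   call mpi_gather(pf,klen,MPI_REAL,zf,klen,MPI_REAL,0, &
                   MPI_COMM_WORLD,mpinfo)
   if (mypid == 0) write(kunit) zf

   ! In the stub version the local part is already the full field,
   ! so the routine reduces to:  write(kunit) pf
end subroutine mpwritegp_sketch
\end{verbatim}
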
\newpage
%--------------------------------------------------------------------------------