A collection of commonly used, powerful, portable, parallel functions for scientific computing (mainly mesh management and parallel data transfers). All are based on MPI and written in C. This is a source-code library (not a binary) so that application-dependent parameters, such as the definitions of the element and node structures, can be changed; these parameters are set through an include file (i.e., included during compilation).

**shift_elem.c**- Given a permutation array (a simple integer array specifying which processor each element belongs on), this function shifts the elements across the parallel processors to their destination PE and updates the dual array values accordingly (since the element numbers change).
**shift_node.c**- Given a permutation array (a simple integer array specifying which processor each node belongs on), this function shifts the nodes across the parallel processors to their destination PE and updates the element connectivity array accordingly (since the node numbers change).
**distrib_nodes.c**- Given the element connectivity array, this function determines the nodal permutation array so that, as much as possible, each node lands on the same processor as the elements connected to it. Nodes on partition boundaries are assigned randomly to one of the candidate processors. The nodes can then actually be shifted across the processors with shift_node.c .
**rcb_algor.c**- Performs a Recursive Center Bisection algorithm to partition the elements onto the processors. The result, simple X,Y,Z cuts of the mesh into roughly equal-sized blocks, is returned as an element permutation array, and the elements can then be shifted with shift_elem.c . This is not the best mesh-partitioning algorithm available (see Metis below), but it is quick, and some of my programs (solution projection from one mesh to another) work best if the mesh is cut into X,Y,Z blocks.
**local_nodes.c**- Creates a set of local (on-processor) nodes for each mesh partition based on the element distribution. If the elements are partitioned well, these nodal sets will be minimal in size; if the elements are distributed randomly (badly), these nodal sets may approach the entire set of nodes (quite large).
**ncommsetup.c**- Sets up the set of local (on-processor) nodes based on the element connectivity (similar to local_nodes.c), and pre-computes the connectivity information between this local node set and the distributed global nodes. Also allocates internal data buffers for repetitive transfers of data between these two nodal sets (performed many times in an iterative solver).
**ngather.c**- Performs the transfer of data from the distributed global nodes to the set of local (on-processor) nodes. The data transfer is optimized (reduced number of off-processor sends) if the mesh is partitioned.
**ngather_str.c**- Performs the transfer of data from the distributed global nodes to the set of local (on-processor) nodes. The data transfer is optimized (reduced number of off-processor sends) if the mesh is partitioned. This version transfers an array of Node structures.
**nscatter.c**- Performs the transfer of data from the set of local (on-processor) nodes to the distributed global nodes. The user can have the assembled data added together (a standard finite element assembly operation) or just overwrite one another (a "globalization" of the data). The data transfer is optimized (reduced number of off-processor sends) if the mesh is partitioned.
**nscatter_str.c**- Performs the transfer of data from the set of local (on-processor) nodes to the distributed global nodes. The data transfer is optimized (reduced number of off-processor sends) if the mesh is partitioned. This version transfers an array of Node structures, and there is no assembly option.
**ndealloc_comm.c**- Deallocates the current communication pattern and memory buffers set up by ncommsetup.c.
**Metis**- A very good, and fast, parallel mesh partitioning program written by my colleague George Karypis. Not part of MY library, but thought I'd mention it here since it is relevant to these other functions. Information on Metis (as well as the code itself) can be found here.