03 June 2014

Towards a parallel fem-fenics

These days I have been studying the documentation of the FEniCS project, mainly the FEniCS book, in order to understand the parallel execution features it boasts. This preliminary study is aimed at adding them to the fem-fenics package. First of all I will summarise my findings, then I will comment on the problems I need to address to implement this functionality.

Parallelism in FEniCS

FEniCS implements parallelism in a way that is transparent to the user of the library. Moreover, it scales on different architectures, ranging from multi-core personal computers to distributed clusters. To this end, FEniCS makes use of two paradigms, which can be exploited either separately or together.

The first approach is tailored to shared memory architectures, such as the vast majority of today's PCs and, in many cases, the individual nodes of a computational cluster. The implementation is based on OpenMP, and by adding a single instruction one can enable parallelisation to speed up the matrix assembly phase. It should be noted that this paradigm has little support in the underlying linear algebra libraries, so the solution phase can take advantage of multi-threading only with the PaStiX solver. Since parallel programs in a shared memory model might suffer from race conditions, the mesh is coloured to identify subsets such that no two neighbouring elements belong to the same one. Obviously, the notion of proximity depends on the particular function space, so the colouring algorithm takes it into account. The assembly then proceeds by iterating over colours and splitting their elements among threads: with this technique race conditions are avoided, and the user can enjoy the benefits of parallelisation without incurring unpredictable behaviour.
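To give an idea of how little is required of the user, here is a minimal sketch of a stand-alone DOLFIN C++ program with multi-threaded assembly enabled; the thread count of 4 is arbitrary, and the mesh, the forms and the actual assemble call are omitted.

#include <dolfin.h>

int main ()
{
  // A single global parameter switches on OpenMP-parallel assembly:
  // DOLFIN colours the mesh and splits each colour among the threads.
  dolfin::parameters["num_threads"] = 4;

  // ... define mesh and forms, then call assemble as usual ...

  return 0;
}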

In contrast with the first approach, the second paradigm is based on MPI and addresses the needs of distributed memory architectures. Unfortunately, it is less immediate than the former, since a DOLFIN program must be launched through the MPI execution utility; on the other hand, the code itself need not be modified. In this implementation the mesh is partitioned so that each process gets its own part of it, with an algorithm striving to minimise inter-process communication. With scalability in mind, no single process holds the full matrix; moreover, everything happens behind the scenes, so the user does not need to take care of low-level issues. The distributed memory paradigm is widely supported by the linear algebra back-ends, so it allows the use of several solvers, both iterative and direct. As already noted, this and the previous approach can be combined, for instance by distributing the computation on a cluster and further speeding up the assembly by enabling multi-threading within each node, provided the nodes are multi-core machines.
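As a sketch of the distributed workflow, the fragment below is ordinary serial-looking DOLFIN code; only the launch command changes (the executable name and the number of processes are of course arbitrary).

#include <dolfin.h>

int main ()
{
  // Started as, e.g., "mpirun -np 8 ./poisson", this unmodified code gets a
  // partitioned mesh: each process stores and assembles only its own part.
  dolfin::UnitSquareMesh mesh (64, 64);

  // ... define forms, assemble and solve exactly as in the serial case ...

  return 0;
}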

The implementation in fem-fenics

The shared memory paradigm should be quite straightforward to implement in fem-fenics. I expect to operate on a few functions: the private generate_makefile and the pair assemble and assemble_system. The former needs the proper compilation flag (-fopenmp) added, while the latter two need a new line reading like:

dolfin::parameters["num_threads"] = femfenicsthreads;

The number of threads could be passed to those functions as an argument, but this would break interface compatibility with FEniCS, so it is a poor approach. A better way of addressing the issue is to define a global Octave variable in PKG_ADD and store in it the desired number of concurrent threads to use for the assembly.
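As a rough sketch of this second option, the snippet below shows how the oct-file side of assemble might read such a global variable and forward it to DOLFIN. Both the global variable name femfenicsthreads and the use of symbol_table::global_varval from the Octave C++ API are assumptions about a possible design, not existing fem-fenics code.

#include <octave/oct.h>
#include <octave/symtab.h>
#include <dolfin.h>

// Hypothetical helper for assemble and assemble_system: read the global
// Octave variable set in PKG_ADD and hand it over to DOLFIN before assembly.
static void
set_dolfin_num_threads (void)
{
  octave_value val = symbol_table::global_varval ("femfenicsthreads");
  if (val.is_defined ())
    {
      int femfenicsthreads = val.int_value ();
      dolfin::parameters["num_threads"] = femfenicsthreads;
    }
}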

The implementation of the distributed memory paradigm, instead, seems quite tricky. Basically, Octave does not use MPI, at least not in its core. Nonetheless, there are two Forge packages with this goal, mpi and parallel. I will go through their documentation to understand whether and, if so, how they address the problem of launching an oct-file with mpirun or mpiexec. Even leaving this aspect aside, I still do not know how easily the distributed objects storing matrices and vectors can be accessed to retrieve the data as a whole.

In conclusion, I will initially work on adding shared memory parallelism, at the same time looking deeper into the issues related to the distributed memory paradigm, which I suspect are more numerous than those highlighted here.
