PLUTO  4.0
General Structure

Table of Contents

1 Overview

PLUTO is a freely-distributed software for the numerical solution of mixed hyperbolic/parabolic systems of partial differential equations (conservation laws) targeting high Mach number flows in astrophysical fluid dynamics. The code is designed with a modular and flexible structure whereby different numerical algorithms can be separately combined to solve systems of conservation laws using the finite volume or finite difference approach based on Godunov-type schemes.

Equations are discretized and solved on a structured mesh that can be either static or adaptive. For the latter functionality, PLUTO relies on the Chombo library (freely available at https://commons.lbl.gov/display/chombo/) which provides a distributed infrastructure for parallel calculations over block-structured, adaptively refined grids.

The static grid version of PLUTO is entirely written in the C programming language, while the adaptive mesh refinement (AMR) version also requires C++ and Fortran to compile the interface with Chombo.

A comprehensive description of the code design, physics modules and method implementation may be found, for the static grid version, in Mignone et al., ApJS (2007) 170, 228 and, for the AMR version, in Mignone et al., ApJS (2012) 198, 7.

The code can run from a single workstation up to several thousand processors, using the Message Passing Interface (MPI) to achieve highly scalable parallel performance.

The software is developed at the Dipartimento di Fisica, Torino University in a joint collaboration with INAF, Osservatorio Astronomico di Torino and the SCAI Department of CINECA in Bologna. (http://www.hpc.cineca.it). The code can be freely downloaded at http://plutocode.ph.unito.it/.

The supported physics modules include solvers for classical hydrodynamics, special relativistic hydrodynamics, magnetohydrodynamics (MHD) and (special) relativistic MHD. A number of non-ideal processes can also be included:

Source files are located in the Src/ directory where module files are grouped together inside sub-directories.

PLUTO is developed and maintained by A. Mignone, although a number of people actively contribute to its development with valuable support. In particular:

1.1 Naming Convention and Coding Style

Starting with PLUTO 4, the naming convention has been completely revised to comply with a more consistent coding style. This avoids the confusion between function, macro and global variable names caused by the mixed-up style of previous releases. In brief:

2 Updating Strategy

The primary arrays used as arguments throughout the code are the four-dimensional arrays of cell- and face-centered primitive variables contained inside the Data structure, i.e., Data->Vc and Data->Vs, respectively.

Primitive variables are preferred over conservative ones since they are more convenient for assigning initial and boundary conditions, for piecewise reconstruction inside computational zones and for source term integration. The primary integration loop is located inside main.c, and the Sweep/Unsplit functions perform the actual integration. Time stepping is handled by one of three different algorithms, found under the Src/Time_Stepping/ directory: dimensionally split marching schemes are coded in sweep.c, Runge-Kutta unsplit methods in unsplit.c and the Corner-Transport Upwind (CTU) scheme in unsplit_ctu.c. Schematically, starting with a vector of primitive variables $ \vec{V}^n$ at time level $t=t^n$, a single time step update involves the following sequence of tasks:

3 Parallelization

PLUTO makes use of the MPI library for parallel computations. The parallel interface for the static grid version is located under Src/Parallel, where a compact, reduced subset of the former ArrayLib (AL) has been adapted, optimized and incorporated into the code; see Parallel_page. The parallel API basically provides a) an abstraction for distributed array objects, and b) simple interfaces to the underlying message passing (MPI) routines. The adopted parallelization model is the usual one of distributed arrays augmented with guard cells (ghost points) to handle boundary conditions. The size of the guard cell region is determined by the stencil of the discretized differential operator.

Parallelization for the AMR version of PLUTO is separately handled by the Chombo library which comes with its own independent MPI interface.

4 I/O

Several data formats are available for I/O, for both serial and heavily parallel applications. The main output driver is write_data.c, which provides the basic calling mechanism for standard raw binary, VTK, HDF5 and ASCII data files as well as images (ppm and png). Input from binary files is controlled by restart.c.

Most of these data formats (raw binary, VTK, HDF5) support fully parallel I/O.