Tristan v2
Tristan v2 is a multi-species particle-in-cell code used for kinetic plasma simulations, which also includes a module for treating QED interactions between particles. Particle-in-cell codes solve the equations of motion for the charged particles, coupled to Maxwell's equations for the electromagnetic field defined on a discretized grid. In this exercise, however, we will ignore electromagnetic interactions. We will model photons as a separate particle species, along with the $e^\pm$ pairs, and we will enable different QED reaction channels between them.
Clone the `tristan-v2-mini` repository:
git clone https://github.com/haykh/tristan-v2-mini.git
cd tristan-v2-mini
Before compiling, we first need to configure the code. Make sure you have all the prerequisites either installed or loaded as modules (in case you are working on an HPC cluster).
On a personal Linux machine (or WSL), install the following:
# for apt package managers (debian/ubuntu)
sudo apt install build-essential libhdf5-openmpi-dev hdf5-tools libopenmpi-dev
# for pacman (arch)
sudo pacman -S base-devel gcc gcc-fortran openmpi hdf5-openmpi
On a personal Mac machine:
# brew package manager
brew install gcc hdf5-mpi
On the `stellar` cluster:
module load intel/2021.1.2 intel-mpi/intel/2021.1.1 hdf5/intel-2021.1/intel-mpi/1.10.6
During the configuration stage we will define all the physics modules we want enabled, as well as the so-called *userfile*, which defines the simulation setup (initial/boundary conditions, etc.):
# on personal machine:
python3 configure.py --user=<USERFILE> <PHYSICS_FLAGS> -2d -hdf5
# on the `stellar` cluster (enables optimizations):
python3 configure.py --user=<USERFILE> <PHYSICS_FLAGS> -2d -hdf5 --cluster=stellar
Now the code can be compiled using the `make -j` command. After a successful compilation, a binary file called `tristan-mp2d` will be generated in the `bin/` directory.
<aside> 💡
On a personal machine, setting up the proper dependencies can sometimes be tricky. Because of that, we also provide an option to compile/run the code within a Docker container. If you opt for this route, follow these steps:
After installing `docker` and `docker-compose` on your machine, enter the `docker/` directory in the source code and run `docker-compose up -d`. The first run might take a few minutes.
If everything goes well, a container will be created and started. You can check it by running `docker ps -a`. A container is basically a virtual machine running an isolated environment; all the prerequisites for compiling the code are already installed in it.
To connect to that machine you may either use VSCode, or for simple shell access run `docker exec -it <CONTAINER_ID> zsh` (where your `<CONTAINER_ID>` can be found by running `docker ps -a`).
Within the container, the code will be located in `~/tristan-v2` (it basically mounts the local folder into the virtual container, mirroring all the contents). Then you can configure and compile the code (steps 3 and 4 above) as normal.
To stop the running container, simply head to the same `docker/` directory on your host machine and run `docker-compose down`. This will stop the running VM but will keep the filesystem intact, so the second run of the container should be faster.
If you want to completely remove the image, simply run `docker rmi tristan-v2:latest`. To ensure it's deleted, run `docker images`.
</aside>
Simulation-specific parameters can be passed via input files (without the need to recompile the code). At the very top of any input file you will find two fields, `sizex` and `sizey`. The product of these two numbers sets the number of cores the code will run on. For the simulations in this chapter, anything from 4 to 16 should be fine.
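For instance, an input file whose domain-decomposition block looks like the following (the block name and exact layout here are illustrative; check the sample input files shipped with the code for the real format) would run on 2 × 2 = 4 cores:

```
<node_configuration>

  sizex         = 2    # number of MPI subdomains in x
  sizey         = 2    # number of MPI subdomains in y
```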
Now we are ready to run the code. If you are running on a personal machine, you can simply do `mpiexec -np <NPROC> ./tristan-mp2d -i <INPUTFILE>` from the `bin/` directory. If you are running on `stellar`, use the following submit script:
#!/bin/bash
#SBATCH -n <NPROC>
#SBATCH -t 00:30:00
#SBATCH -J <JOBNAME>
# make sure the paths are correct ...
# ... (i.e., the submit script and the executable/input have to be in the same directory)
EXECUTABLE=tristan-mp2d
INPUT=input.svensson
OUTPUT_DIR=output
REPORT_FILE=report
ERROR_FILE=error
module load intel/2021.1.2
module load intel-mpi/intel/2021.1.1
module load hdf5/intel-2021.1/intel-mpi/1.10.6
mkdir -p $OUTPUT_DIR
srun $EXECUTABLE -i $INPUT -o $OUTPUT_DIR > $REPORT_FILE 2> $ERROR_FILE
Make sure that the number of requested cores `<NPROC>` is equal to the product `sizex * sizey` from your input file. The script can then be submitted with `sbatch`.
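As a quick sanity check before submitting, you can extract the two fields from the input file and compare their product to the core count you are about to request. This is a minimal sketch: the sample input fragment below uses a hypothetical layout, so adapt the `awk` patterns to your actual file.

```shell
# create a tiny sample input fragment (hypothetical layout, for illustration)
cat > input.sample <<'EOF'
  sizex = 2
  sizey = 2
EOF

NPROC=4   # the core count you plan to request

# pull out the two fields and compare their product against NPROC
sx=$(awk '$1 == "sizex" {print $3}' input.sample)
sy=$(awk '$1 == "sizey" {print $3}' input.sample)
if [ $((sx * sy)) -eq "$NPROC" ]; then
  echo "OK: $NPROC cores match a ${sx}x${sy} decomposition"
else
  echo "Mismatch: requested $NPROC cores, input expects $((sx * sy))"
fi
```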
When the run is done, the output will be written to the `output/` directory (or whatever directory was specified with the `-o` flag) in the form of `hdf5` files; you can peek inside any of them with `h5ls -r <FILE>` from the `hdf5-tools` package. For this particular chapter we are only interested in the particle distribution functions (`output/spec/`).
Exercises in this chapter include `python` notebooks located in the `vis/` directory. To run them, you will need to install a few python modules (for nicer plotting and hdf5 reading). All the requirements can be found in `requirements.txt`. The recommended way of installing them is via a `pip` virtual environment (but feel free to use any other convenient method):
- `python3 -m venv .venv`: creates a virtual environment in the `.venv/` directory.
- `source .venv/bin/activate`: activates the environment.
- `which pip`: checks that `pip` now points inside `.venv/`.
- `pip install -r requirements.txt` or `pip install myplotlib dask xarray h5py jupyterlab ipykernel`: installs the required modules.
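As a quick check that the environment from the list above is actually active, verify where `pip` resolves. This is a minimal sketch; it assumes `python3 -m venv` is available, which on Debian/Ubuntu may require the `python3-venv` package.

```shell
# create and activate the virtual environment, then check pip's location
python3 -m venv .venv
. .venv/bin/activate

pip_path=$(which pip)
case "$pip_path" in
  */.venv/bin/pip) echo "virtual environment active: $pip_path" ;;
  *) echo "WARNING: pip resolves outside .venv ($pip_path)" ;;
esac
```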