MAIA bb96820c
Multiphysics at AIA
This section contains information specific to the compute environment of the AIA. As an AIA user, this quickstart will also inform you about some of the peculiarities of the IT infrastructure at the AIA and some best practices.
Use cd ~ to get to your home directory. To create a convenient link to your personal scratch space, run

cd ~/; ln -s /aia/<raid device>/scratch/<your user name> ./

You can then use the command cd ~/scratch to change to your personal scratch directory. You will get the raid device number from Miro Gondrum (Room 001). Alternatively, you can set an environment variable called, e.g., SCRATCH in your .bash_profile, such as

export SCRATCH=/aia/<raid device>/scratch/<your user name>

and use the command cd $SCRATCH to change into your scratch directory.

For the reasons mentioned above, this is what you should store where:
home directory: In general, this is a location for all files generated by a text editor such as:
Do not store files containing simulation results in your home directory!
scratch directory: In general, this is the location for all results generated by a simulation run such as:
Do not store data on your scratch space that cannot be reproduced by a simulation run!
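The scratch-link setup described above can be sketched as follows. This is only an illustration: a temporary directory stands in for the site-specific /aia/<raid device> path, and the user name alice is a placeholder; note that the link is named scratch here so that cd ~/scratch works as described.

```shell
# Sketch only: a temporary directory stands in for /aia/<raid device>,
# and "alice" for <your user name>.
demo=$(mktemp -d)
mkdir -p "$demo/raid/scratch/alice"   # the real target lives on the file server
mkdir -p "$demo/home"                 # stands in for your home directory
cd "$demo/home"
ln -s "$demo/raid/scratch/alice" ./scratch   # name the link "scratch"
ls -l scratch                         # shows the link and its target
```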
Before generating new data on any file server, always make sure that the available free space is sufficiently large by using the unix command df.

Keeping the information above in mind, you can now start installing and using m-AIA: first, clone the git repository (repo) of m-AIA; second, configure m-AIA; third, compile m-AIA; and fourth, run your first simulation, e.g., a testcase or tutorial.
Clone the repo of m-AIA: For this step git is used.
Open the terminal on your local Linux machine:
Connect to a frontend (fe) of the AIA cluster, e.g., fe1:
Tip: When connecting for the first time, answer the host-key prompt Are you sure you want to continue connecting (yes/no/[fingerprint])? with yes.
If you want to isolate your m-AIA simulation project(s), create a folder (yourFolder
) where you want to clone the repository on your scratch and change the directory to this folder:
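These two steps might look like the following sketch, where yourFolder is just an example name and ~/scratch is the link set up earlier (mkdir -p also creates any missing parent directories):

```shell
# "yourFolder" is an example name; ~/scratch is the link set up earlier.
mkdir -p ~/scratch/yourFolder
cd ~/scratch/yourFolder
pwd
```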
Clone the m-AIA repo:
Change the directory to the downloaded Solver
folder and switch to the desired branch you want to work with:
Tip: If you want to work with the main/master branch, you do not need to switch the branch.
Tip: Use git fetch
and then git status
to check the current status.
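As a local toy illustration of switching branches and checking the status, the following uses a throwaway repository in place of the m-AIA clone; the branch name develop is made up, and in a real clone you would check out an existing branch instead of creating one.

```shell
# Throwaway repository; in practice you would run these inside the Solver folder.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "you@example.com"
git config user.name "You"
git commit -q --allow-empty -m "initial commit"
git checkout -q -b develop       # in a real clone: git checkout <existing branch>
git status --short --branch      # first line reports the current branch
```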
Configure m-AIA: This is only guaranteed to work on a frontend like fe1 and on cluster nodes, so run ssh fe1
if you are not connected already.
Run the configure.py
file in the top directory (Solver
folder):
Tip: In the interactive prompt, the inputs 1 and 2 select gnu production mode. You can also use ./configure.py gnu production
directly.
Tip: You can also use the default setting with ./configure.py ? ?
.
This process should take about 1 minute. The print message on the console should look like this:
Compile m-AIA: This is only guaranteed to work on cluster nodes, so run ssh fe1
if you are not connected already.
Check for available resources on the connected frontend:
The print message on the console should look like this:
Allocate a cluster node (interactive session) on the AIA cluster with si queue noNodes hh:mm:ss, which allocates a cluster node for a certain amount of time hh:mm:ss, like 00:15:00 for 15 min. Here, queue is the number of jobs/processors of the node you want to use, e.g. 12, and noNodes is the number of nodes you want to allocate, e.g. 1; for example, si 12 1 00:15:00.
Run the Makefile
in the top directory (Solver
folder) to compile m-AIA in parallel using noCores
processors, here 12:
This process can take 10 to 15 minutes. The print message on the console should look like this:
As a test that m-AIA compiled successfully, open m-AIA's help:
Run your first m-AIA simulation: This is only guaranteed to work on a frontend like fe1 and on cluster nodes.
Check and allocate a cluster node on the AIA cluster with noCores
cores, e.g. 12, if not already:
Tip: By using shosts
again, your username should be listed for one of the nodes of the AIA cluster, including the remaining time for which you have allocated this node.
Go to your simulation project (e.g. a testcase or tutorial) and create a link to the compiled m-AIA executable of step 3:
As a test, check if the link was successfully created by opening m-AIA's help:
Run the simulation in parallel. The number of cores noCores
, here 12, should match the number of cores allocated:
Tip: The property file specifies the most important solver settings (properties) needed to run the simulation.
Tip: Run scancel JobID
to cancel a running job, where the JobID is identified via the table displayed by using shosts
.
Now that the code has been compiled, it can be used to simulate testcases. Some ready-to-use testcases are found in the subfolders of the testcase repository here: http://svn/maia/testcases/. When you are in the AIA network, you can use your browser to open the testcases and have a look at them, or use the terminal to check a testcase out (= copy it) with the following command.
Now create a symbolic link (command ln -s
) to your m-AIA executable maia
(compiled code in yourDirectory
= ~/scratch/yourFolder
) in your chosen testcase folder to avoid unnecessary copies of the executable.
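A sketch of this linking step is shown below, with temporary directories standing in for yourDirectory and the testcase folder; the empty maia file is only a stand-in for the real compiled executable.

```shell
work=$(mktemp -d)
mkdir -p "$work/yourFolder" "$work/testcase"
touch "$work/yourFolder/maia"            # stand-in for the compiled executable
chmod +x "$work/yourFolder/maia"
cd "$work/testcase"
ln -s "$work/yourFolder/maia" ./maia     # link instead of copying the executable
ls -l maia
```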
Once the m-AIA executable has been created and linked, the simulation can be started as follows, where properties_run.toml
is an ASCII file containing the settings to run the m-AIA-based simulation.
Remember that you have to allocate a cluster node (command si
) on the AIA cluster before starting the simulation. This means that for a limited time, e.g. 30 minutes, you will have exclusive access to one of the computation nodes with, for example, 12 cores on the AIA cluster. shosts
is used to get an overview of all cluster nodes and their current status.

You can find m-AIA tools, which are useful, e.g., for pre- and postprocessing, here: https://git.rwth-aachen.de/aia/MAIA/tools
To use ssh to log in to a remote node without the need to provide a password, you first have to generate a key file with ssh-keygen and then add the contents of your key file to the authorized_keys file in your .ssh directory:
Open the terminal
Generate a key in your home directory
Go to your .ssh directory
Append the contents of the key file id_rsa.pub to authorized_keys
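The steps above can be sketched as follows; a temporary directory stands in for your real ~/.ssh so the sketch is safe to run anywhere, whereas on the cluster you would use ~/.ssh itself.

```shell
sshdir=$(mktemp -d)                                    # stands in for ~/.ssh
ssh-keygen -q -t rsa -N "" -f "$sshdir/id_rsa"         # key pair, empty passphrase
cat "$sshdir/id_rsa.pub" >> "$sshdir/authorized_keys"  # authorize your own key
chmod 600 "$sshdir/authorized_keys"
```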