6. Using software in an HPC environment#

In real HPC environments, software is often unavailable, outdated, or in need of specific compiler optimizations for your project. Mastering installation from source is not only about ‘getting it to work’: it is about taking control of your computational environment, maximizing performance, and enabling reproducible science. Software installation is an essential survival skill in HPC, and using the right environment and tools is critical to avoid reinventing the wheel and to use resources more efficiently. Real-life examples are:

  • A production cluster that has GCC 8 but you need C++20 features (GCC 11+).

  • Building MPI or HDF5 with specific flags for performance (or GPUs).

  • Needing newer Python packages, but the cluster limits pip install.

  • The Reproducibility Crisis & Your Research: “Imagine publishing a groundbreaking result, but nobody, not even your future self, can replicate it because the exact software environment is lost. Manually installing and managing software correctly is the first step towards reproducible computational science.”

  • Bleeding Edge vs. Stability: Your research often demands the latest features or bug fixes in a scientific library (e.g., a new algorithm in TensorFlow, a fix in GROMACS). System administrators, prioritizing stability, might offer older versions. Learning to install software yourself gives you access to the cutting edge.

  • Performance is Key: HPC is about speed! Generic software provided by admins might not be compiled with optimizations for the specific processor architecture (like AVX-512) or linked against high-performance libraries (like MKL) available on your cluster. Compiling from source allows you to tailor the build for maximum performance.

  • The “It Doesn’t Exist” Problem: Sometimes, the specialized tool you need simply isn’t available at all via system package managers or modules. Your only option is to build it from source.

  • Dependency Hell: Software rarely lives in isolation. Program A needs Library B version 1.2, but Program C needs Library B version 2.0. How do you manage this without conflicts, especially without root access? This is a core challenge we’ll address. (This sets the stage nicely for Spack).

Typically, software is available through modules, and it can be “loaded” into the current shell environment by using

module load SOFTWARE/VERSION

You can use several versions of the same software, something you cannot do with apt or similar package managers. In the following we will see a specific tool called spack that allows you to install software in an HPC environment and use it, and that is compatible with the module tools (see https://docs.gwdg.de/doku.php?id=en:services:application_services:high_performance_computing:spack_and_modulefiles). It is typical to lose a day or even a week installing and configuring software, so it is better to use the right tools.

NOTE regarding python: My current suggestion is to use uv.

Using tools like spack helps to solve many problems, such as:

  • ✅ Dependency Management — source installs often require satisfying many dependencies manually (painful) while Spack automates this.

  • ✅ Conflicts and Errors (e.g., missing a dependency or wrong compiler).

  • ✅ Uninstalling — (e.g., make uninstall if available, or just delete directories).

  • ✅ Environment Issues — binaries installed outside of default paths won’t be found unless PATH or LD_LIBRARY_PATH is modified.

  • ✅ Parallel Builds — make -j$(nproc) — very practical in HPC!

  • ✅ Simple performance aspect (e.g., using -O3 flags).

Other tools to install/manage packages:

  • Environment Modules (Lmod/Tcl): Managing environment variables (PATH, LD_LIBRARY_PATH, etc.) without conflicts. Basic commands: module avail, module list, module load, module unload, module purge, module spider (or module keyword). Notice that spack can generate module files (spack module tcl refresh -y), bridging the gap between installing with Spack and using the software easily. Even manually installed software should ideally have a module file created for it.

  • Conda: Pros: Easy to use, large package ecosystem (especially Python/R). Cons: Can sometimes have performance issues (binaries not always optimized for HPC hardware/interconnects), environment conflicts with system modules, potential storage bloat. Often better for interactive work or specific workflows than core HPC tasks.

  • uv: https://docs.astral.sh/uv/ A modern, fast Python package and project manager that is becoming the standard.

  • EasyBuild: Another widely used HPC-centric build automation tool, similar in scope to Spack.

  • Containers (Singularity/Apptainer): An alternative way to package and run software with its entire environment. They solve similar problems but work differently (packaging vs. native installation). They are of high importance for portability and reproducibility, especially across different clusters.

6.1. Shell env vars introduction#

Environment variables are dynamic values that affect how processes run on a computer. They are part of the environment in which a process runs and can influence program behavior without requiring code changes. These variables are crucial for configuring system behavior, program execution paths, and runtime behavior.

There are modern tools that automate setting env vars per project, like .env files, direnv, and even dockerfiles.
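As a minimal sketch of this idea (the filename project.env and the variables below are illustrative, not a standard), a per-project settings file can be created once and sourced whenever you work on the project:

```shell
# Create a hypothetical per-project environment file
cat > project.env <<'EOF'
export PROJECT_ROOT="$PWD"
export OMP_NUM_THREADS=4
EOF

# Load the settings into the current shell
source ./project.env
echo "$OMP_NUM_THREADS"   # prints: 4
```

Tools like direnv automate exactly this sourcing step when you cd into the project directory.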

6.1.1. Key Environment Variables#

6.1.1.1. PATH#

The PATH variable is one of the most important environment variables. It specifies a list of directories where the shell looks for executable programs.

# View your current PATH
echo $PATH

# Example output
/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin

Importance:

  • Determines which programs can be run without specifying their full path

  • Affects which versions of programs are executed when multiple versions exist

  • Essential for running commands from any directory without typing the full path

Example usage:

# Adding a directory to PATH temporarily
export PATH=$PATH:/path/to/your/directory

# Adding permanently (in your shell profile file like .bashrc or .zshrc)
echo 'export PATH=$PATH:/path/to/your/directory' >> ~/.bashrc
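To see how PATH lookup actually behaves, here is a small self-contained experiment (the directory $HOME/demo-bin and the script name hello-demo are invented for the demo):

```shell
# Create a private bin directory containing a tiny script
mkdir -p "$HOME/demo-bin"
printf '#!/bin/sh\necho hello-from-demo\n' > "$HOME/demo-bin/hello-demo"
chmod +x "$HOME/demo-bin/hello-demo"

# The shell cannot find it yet: the directory is not in PATH
command -v hello-demo || echo "not found yet"

# Prepend the directory to PATH and try again
export PATH="$HOME/demo-bin:$PATH"
hello-demo   # prints: hello-from-demo
```

Because the directory is prepended, it also wins over any identically named command later in PATH, which is exactly how you select your own versions over the system ones.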

6.1.1.2. LD_LIBRARY_PATH#

LD_LIBRARY_PATH specifies directories where the dynamic linker should look for shared libraries before checking the standard locations.

# View current LD_LIBRARY_PATH
echo $LD_LIBRARY_PATH

Importance:

  • Critical for running programs that depend on specific library versions

  • Essential when installing software in non-standard locations

  • Helps resolve “library not found” errors without system-wide installation

Example usage:

# Setting LD_LIBRARY_PATH temporarily
export LD_LIBRARY_PATH=/path/to/libraries:$LD_LIBRARY_PATH

# Running a program with a specific library path
LD_LIBRARY_PATH=/path/to/libs ./your_program

6.1.1.3. Other Important Environment Variables#

  • PYTHONPATH: Directories where Python looks for modules

  • JAVA_HOME: Base directory of your Java installation

  • LANG and LC_ALL: Control system locale and language settings

  • HOME: Path to the current user’s home directory

  • USER: Current username

6.1.2. Setting and Using Environment Variables#

6.1.2.1. Temporary Setting (Current Session Only)#

export VARIABLE_NAME=value

6.1.2.2. Permanent Setting#

Add to your shell profile file (.bashrc, .zshrc, etc.):

echo 'export VARIABLE_NAME=value' >> ~/.bashrc

6.1.2.3. Using Variables in Scripts#

#!/bin/bash
echo "Your PATH is: $PATH"
if [ -z "$LD_LIBRARY_PATH" ]; then
  echo "LD_LIBRARY_PATH is not set"
else
  echo "Libraries will be searched in: $LD_LIBRARY_PATH"
fi

6.1.3. Practical Applications#

6.1.3.1. Compiling Programs#

  • CPATH/C_INCLUDE_PATH: Directories for C header files

  • LIBRARY_PATH: Directories for static libraries during link time

6.1.3.2. Running Programs with Custom Libraries#

# Running with specific library paths
LD_LIBRARY_PATH=/opt/custom/lib ./my_program

# Compiling with specific include paths
CPATH=/opt/custom/include gcc -o program program.c

6.1.3.3. Debugging Environment Issues#

# Print all environment variables
env

# Check if a program can find its libraries
ldd ./my_program

6.2. Spack: automatic load for programs and containers#

Spack was designed at LLNL and is targeted to simplify the installation and management of HPC programs with many possible versions and dependencies. Check the docs at https://spack.readthedocs.io/.

For now let’s just install it, configure it, and install some simple tools. A spack command table is found at https://spack.readthedocs.io/en/latest/command_index.html . Also, as a tip for the future, it is recommended that you use a spack environment: https://spack.readthedocs.io/en/latest/environments.html

6.2.1. Installation#

Actually, spack is already installed in the computer room. Yet you can use your own spack installation together with an upstream one, configuring it as described in https://spack.readthedocs.io/en/latest/chain.html.

To install spack, go to the Downloads folder (you DO NOT want to put the spack repo inside your class codes repo), and then, following the official docs, just clone the repository

git clone https://github.com/spack/spack.git # clones the repo
cd spack
git checkout v0.23.1 # Checkout the latest stable release
echo $PATH
source share/spack/setup-env.sh # Setup the environment , this command should go in ~/.bashrc
echo $PATH

Now you can check what can be installed

spack list

To be able to use spack easily in the future, it is recommended to add the source command to your ~/.bashrc, so just add the following at the end of the file

    source $HOME/PATHTOREPO/share/spack/setup-env.sh

and then close and open the terminal.

6.2.1.1. NOT NEEDED Troubleshooting the openssl lib#

In our computer room there has been a problem with spack and openssl. It is better to instruct spack to use the local openssl version instead of building one. To do so, add the following to your spack package config file, $SPACK_REPO/etc/spack/packages.yaml:

packages:
    openssl:
        externals:
        - spec: openssl@1.1.1m
          prefix: /usr
        buildable: False

You can check the correct version with the command openssl version. Furthermore, to be able to run your installed programs on several computers with different processors, use the flag target=x86_64 .

6.3. Installing some tools with spack#

Before installing some software, we first need to deactivate the already activated environment, just to play a bit with a clean environment.

echo $PATH
spack env deactivate
echo $PATH

Notice that this will be like a fresh installation, not using the global spack install.

Now let’s install zlib, with several versions

spack info zlib # Get some information about versions, deps, etc
spack install zlib@1.3 target=x86_64_v3
spack install zlib@1.2.13 target=x86_64_v3
spack find

To check the installed software, you can also use the module command (installed when you used spack bootstrap) as

module avail

NOTE: This is not configured yet to use spack so it will not show anything.

Please install two versions of the gsl

spack install gsl@2.7 target=x86_64_v3
spack install gsl@2.6 target=x86_64_v3
spack find

Now you will see that you have two versions of the zlib or gsl. If you want to use one of them, you will load it with spack. To check the change in the environment, first check the PATH, then load, then compare

echo $PATH
echo $C_INCLUDE_PATH
echo $LD_LIBRARY_PATH

Now load the gsl version 2.7,

spack load gsl@2.7

and check the new paths

echo $PATH
echo $C_INCLUDE_PATH
echo $LD_LIBRARY_PATH

If you unload the gsl 2.7, everything goes back to normal,

spack unload gsl@2.7
echo $PATH
echo $C_INCLUDE_PATH
echo $LD_LIBRARY_PATH

To learn more about spack, check the official docs and tutorials. In the following we will use it to play with several packages in parallel programming. Is voro++ available? What about eigen?

6.4. Activating modules#

You can generate modules using (see https://spack.readthedocs.io/en/latest/module_file_support.html)

spack module tcl refresh
spack module lmod refresh

And then setup module to find your modules as

export MODULEPATH=$MODULEPATH:~/Downloads/spack/share/spack/modules/linux-slackware15-x86_64_v3
#export MODULEPATH=$MODULEPATH:~/Downloads/spack/share/spack/modules/linux-slackware15-zen3
module avail

and then you can load using

module load ...

6.5. Using a module/spack package#

Once you have verified that your package is installed, or you have installed it, then you can use it. Let’s use the following program, which simply prints the gsl version in use

#include <stdio.h>
#include <gsl/gsl_version.h>

int main(void) {
    printf("GNU Scientific Library (GSL) Version: %s\n", GSL_VERSION);
    return 0;
}

You can compile it as

gcc gsl.c

and then run it. In the computer room, the gsl version already installed is 2.8. Now load the version 2.7 with spack load gsl@2.7 and recompile as

gcc -I $GSL_ROOT_DIR/include gsl.c

and re-run. Do you get something different? Of course the long -I specification is cumbersome, and we can reconfigure spack to always modify the C_INCLUDE_PATH so that we do not need these things.
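One way to do that reconfiguration (a sketch only; verify the exact schema and key names against the Spack modules documentation, and note that ~/.spack/modules.yaml is the per-user config location) is to extend the prefix_inspections so that loading a package also exports its include directory:

```yaml
# ~/.spack/modules.yaml -- sketch, verify against the Spack docs
modules:
  prefix_inspections:
    include:
      - C_INCLUDE_PATH
      - CPLUS_INCLUDE_PATH
```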

Finally, open a new terminal. You will return to the original global spack installation. Remember, if you want to use your own, you need to configure upstreams and so on.

6.6. Using the global spack installation#

You can use the global installation by adding a new upstream as

spack config --scope site add upstreams:spack-instance-1:install_tree:/mnt/scratch/salafis/v1/spack-packages

Now execute spack find and you will have available all the global and local packages.

6.7. How to install programs from source#

Sometimes you actually need to install programs from source. In this workshop we will learn how to install a program or library from source into the home directory of the user. This is useful when you need a program that is not available on the system you are working on, or when you need a newer version than the installed one. We will use the following programs, some of them already available in the computer room:

Name        Installed version   Latest version
fftw        3.3.4               3.3.10
Eigen C++   3.2.7               3.4.0
voro++      Not installed       0.4.6
g++ 15.x    Not installed       15.1

We will learn how to do it by compiling a package directly, something very useful to know, and also by using a tool aimed at supercomputers, called spack, that simplifies the installation of software and allows having many versions of the same library, something not easily done manually.

Notice that we use the most common/established tools, but there are other tools, like:

  • CMake (cmake .. && make && make install) — It’s very common now (e.g., scientific libraries, Eigen, OpenCV, Trilinos).

  • Meson + Ninja — Even more modern (but more niche).

  • Environment Modules (module load, module avail).

  • Virtual Environments for Python (uv, venv, conda).

6.7.1. Preliminary concepts#

It is important for you to know a little about how your operating system and the shell look for commands. In general, the PATH variable stores the directories where the shell interpreter will look for a given command. To check its contents, you can run the following,

echo $PATH

If, for instance, you want to add another directory to the PATH, like the directory $HOME/local/bin, you can run the following

export PATH=$PATH:$HOME/local/bin

This appends the special directory to the old content of the PATH.

When you want to compile a given program that uses some other libraries, you must specify any extra folder to look for include files, done with the -I flag, and for libraries, done with the -L and -l flags. For instance, let’s assume that you installed some programs in your $HOME/local, the include files inside $HOME/local/include, and the libraries inside $HOME/local/lib. If you want to tell the compiler to use those, you must compile as

g++ -I $HOME/local/include -L $HOME/local/lib programname.cpp -llibname

Finally, whenever you are installing a program from source you must be aware that this reduces to basically the following steps:

  1. Download and untar the file. Enter the unpacked directory.

  2. Read the README/INSTALL files for any important info.

  3. If the program uses cmake, create a build dir and then use cmake to generate the Makefiles:

    mkdir build
    cd build
    cmake ../ -DCMAKE_INSTALL_PREFIX=$HOME/local
    

    On the other hand, if the program uses configure, then configure the system to install on the required path

    ./configure --prefix=$HOME/local
    
  4. Compile and install the program, maybe using many threads

    make -j 4 # uses four threads to compile
    make install
    

    Done. The program is installed.

    Finally, when compiling, do not forget to use the flags -L and -I appropriately.

Setting all the flags and making sure to use the right version is sometimes difficult, so tools like spack aim to manage and simplify this.

6.7.2. Checking the version for already installed programs#

If you are used to apt-get or something related, you can use the package manager to check. But, in general, you can check the versions by looking at the appropriate places on your system. Typically, if you are looking for a library, they are installed under /usr/lib or /usr/local/lib, while include files are installed under /usr/include or /usr/local/include . For instance, if you are looking for library foo, then you should look for the file libfoo.a or libfoo.so . One useful utility for this is the command locate or find .

locate libfoo
find /usr/lib -iname "*libfoo*"

Execute these commands to check the actual versions for fftw, eigen, and git. What versions do you have? If you are looking for a program, or a specific version of a program, you must check if the program exists by executing it. For command line programs you can usually check the version by using the following

programname --version

where programname is the name of the command.

6.7.3. Preparing the local places to install the utilities#

In this case we will install everything under the $HOME/local subdirectory inside your home, so please create it. Remember that the symbol $HOME means your home directory. The utilities will then create the appropriate folders there. NOTE: Better to use the $HOME var instead of ~, which is another way to state the name of your home.
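The layout can be created in one command; the subdirectories (bin, lib, include) match what make install will populate later:

```shell
# Create the conventional prefix layout under $HOME/local
mkdir -p "$HOME/local/bin" "$HOME/local/lib" "$HOME/local/include"

# Verify the result
ls "$HOME/local"
```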

6.7.4. Typical installation algorithm#

  1. Download the source from the project page. This normally implies downloading a tarball (file ending with .tar.gz or .tar.bz2) .

  2. Un-compress the downloaded file. For a tarball, the command will be

    tar xf filename.tar.gz
    
  3. Enter the newly uncompressed folder (almost always cd filename).

  4. READ the README and/or the INSTALL file to check for important info regarding the program. Sometimes these files tell you that the installation is special and that you are required to follow some special steps (that will happen with voro++).

  5. CONFIGURATION: You have two options, each one independent of the other:

    1. If the program has a configure script, then just run

      ./configure --help
      

    to check all the available options. Since we want to install on the $HOME/local directory, then we need to run

    ./configure --prefix=$HOME/local
    

    If you don’t specify the prefix, then the program will be installed in the /usr/bin or /usr/local/bin directories, whatever is the default. If this command ends successfully, it will print some info to the screen and tell you to go to the next step. Otherwise you will need to read the log and fix the errors (like installing a dependency).

    2. If the program uses cmake, a makefile generator and configurator, then you need to do the following:

    mkdir build # creates a build directory to put there the temporary built files
    cd build 
    cmake ../ -DCMAKE_INSTALL_PREFIX:PATH=$HOME/local # configure the building process for the source code located on the parent directory
    
  6. COMPILATION: Now that you have configured your installation, you need to compile by using the GNU make utility (Note: all these build utilities come from the GNU organization and are free software, as in freedom). If you have several cores, you can use them in parallel, assuming that the Makefile and your make version support it:

    make -j 3 # for three cores, but, if you are unsure, just use one core.
    

    Any errors in this stage should be fixed before going to the next one.

  7. INSTALLATION After successful compilation, you can install by using

    make install
    

    This will install the program (libraries, binaries, include files, manual files, etc.) into the prefix directory. If you want to install system-wide (you did not set the prefix), then you need to use sudo make install. In this case you don’t need sudo since you are installing into your own home.

  8. TESTING In this case use a program to test your installation. When you compile your program and you want to use the version that you installed, you need to tell the compiler where to find the libraries/includes, so you need something like

    g++ -L $HOME/local/lib -I $HOME/local/include  programname.cpp -llibname
    
    • -L $HOME/local/lib tells the compiler to look for libraries in the $HOME/local/lib directory.

    • -I $HOME/local/include tells the compiler to look for include files in the $HOME/local/include directory.

    • -llibname tells the compiler to link with the given library. Sometimes it is not needed; sometimes it is crucial. Be careful: if your library is called libfftw, you need to write -lfftw, not -llibfftw.

6.8. Challenges#

6.8.1. Source Code Install Challenges#

Basic Challenge: Download and install a simple program like htop, zlib, or hello-c manually. Install into the ~/mysoftware/ directory.

Medium Challenge: Install a CMake-based library, e.g., Eigen or fmt. Perform an out-of-source build.

Advanced Challenge: Install something with dependencies, like GROMACS or LAMMPS, from source. It needs MPI installed; maybe link it against a manually installed MPI (OpenMPI). You must configure with a custom --prefix and show a working simple simulation.

6.9. Workshop#

For each of the proposed utilities written at the beginning, follow the next procedure:

  • Check the installed version number and compare with the latest one.

  • Install the latest version on your home directory by following the procedure stated above.

  • Run each of the following example programs, but make sure you are using your installed version. Show the compilation line to the instructor.

Important NOTE: for g++, add the flag --program-suffix=-9 to the configure line; this appends -9 to the installed commands (e.g., g++-9) and avoids collisions with the compiler already installed on the system.

6.9.1. Test Programs#

6.9.1.1. fftw#

This is a C code. Save it as testfftw.c and compile with gcc instead of g++ (and remember to link with -lfftw3 -lm).

// From : https://github.com/undees/fftw-example
// This is a C code (save it as testfftw.c)
/* Start reading here */

#include <fftw3.h>

#define NUM_POINTS 128


/* Never mind this bit */

#include <stdio.h>
#include <math.h>

#define REAL 0
#define IMAG 1

void acquire_from_somewhere(fftw_complex* signal) {
  /* Generate two sine waves of different frequencies and
   * amplitudes.
   */

  int i;
  for (i = 0; i < NUM_POINTS; ++i) {
    double theta = (double)i / (double)NUM_POINTS * M_PI;

    signal[i][REAL] = 1.0 * cos(10.0 * theta) +
      0.5 * cos(25.0 * theta);

    signal[i][IMAG] = 1.0 * sin(10.0 * theta) +
      0.5 * sin(25.0 * theta);
  }
}

void do_something_with(fftw_complex* result) {
  int i;
  for (i = 0; i < NUM_POINTS; ++i) {
    double mag = sqrt(result[i][REAL] * result[i][REAL] +
                      result[i][IMAG] * result[i][IMAG]);

    printf("%g\n", mag);
  }
}


/* Resume reading here */

int main() {
  fftw_complex signal[NUM_POINTS];
  fftw_complex result[NUM_POINTS];

  fftw_plan plan = fftw_plan_dft_1d(NUM_POINTS,
                                    signal,
                                    result,
                                    FFTW_FORWARD,
                                    FFTW_ESTIMATE);

  acquire_from_somewhere(signal);
  fftw_execute(plan);
  do_something_with(result);

  fftw_destroy_plan(plan);

  return 0;
}

6.9.1.2. Eigen C++#

These are C++ codes. Save them, compile, run and explain what they do.

    #include <iostream>
    #include <Eigen/Dense>
    #include <Eigen/Core>
    using Eigen::MatrixXd;
    int main()
    {
      //std::cout << EIGEN_MAJOR_VERSION << std::endl;
      std::cout << EIGEN_MINOR_VERSION << std::endl;
      MatrixXd m(2,2);
      m(0,0) = 3;
      m(1,0) = 2.5;
      m(0,1) = -1;
      m(1,1) = m(1,0) + m(0,1);
      std::cout << m << std::endl;
    }

    #include <iostream>
    #include <Eigen/Dense>
    using namespace Eigen;
    int main()
    {
      Matrix2d a;
      a << 1, 2,
        3, 4;
      MatrixXd b(2,2);
      b << 2, 3,
        1, 4;
      std::cout << "a + b =\n" << a + b << std::endl;
      std::cout << "a - b =\n" << a - b << std::endl;
      std::cout << "Doing a += b;" << std::endl;
      a += b;
      std::cout << "Now a =\n" << a << std::endl;
      Vector3d v(1,2,3);
      Vector3d w(1,0,0);
      std::cout << "-v + w - v =\n" << -v + w - v << std::endl;
    }

    #include <iostream>
    #include <Eigen/Dense>
    using namespace std;
    using namespace Eigen;
    int main()
    {
      Matrix3f A;
      Vector3f b;
      A << 1,2,3,  4,5,6,  7,8,10;
      b << 3, 3, 4;
      cout << "Here is the matrix A:\n" << A << endl;
      cout << "Here is the vector b:\n" << b << endl;
      Vector3f x = A.colPivHouseholderQr().solve(b);
      cout << "The solution is:\n" << x << endl;
    }

    #include <iostream>
    #include <Eigen/Dense>
    using namespace std;
    using namespace Eigen;
    int main()
    {
      Matrix2f A;
      A << 1, 2, 2, 3;
      cout << "Here is the matrix A:\n" << A << endl;
      SelfAdjointEigenSolver<Matrix2f> eigensolver(A);
      if (eigensolver.info() != Success) abort();
      cout << "The eigenvalues of A are:\n" << eigensolver.eigenvalues() << endl;
      cout << "Here's a matrix whose columns are eigenvectors of A \n"
           << "corresponding to these eigenvalues:\n"
           << eigensolver.eigenvectors() << endl;
    }

6.9.2. Voro++#

Use the example http://math.lbl.gov/voro++/examples/random_points/

// Voronoi calculation example code
//
// Author   : Chris H. Rycroft (LBL / UC Berkeley)
// Email    : chr@alum.mit.edu
// Date     : August 30th 2011

#include "voro++.hh"
using namespace voro;

// Set up constants for the container geometry
const double x_min=-1,x_max=1;
const double y_min=-1,y_max=1;
const double z_min=-1,z_max=1;
const double cvol=(x_max-x_min)*(y_max-y_min)*(z_max-z_min);

// Set up the number of blocks that the container is divided into
const int n_x=6,n_y=6,n_z=6;

// Set the number of particles that are going to be randomly introduced
const int particles=20;

// This function returns a random double between 0 and 1
double rnd() {return double(rand())/RAND_MAX;}

int main() {
  int i;
  double x,y,z;

  // Create a container with the geometry given above, and make it
  // non-periodic in each of the three coordinates. Allocate space for
  // eight particles within each computational block
  container con(x_min,x_max,y_min,y_max,z_min,z_max,n_x,n_y,n_z,
                false,false,false,8);

  // Randomly add particles into the container
  for(i=0;i<particles;i++) {
    x=x_min+rnd()*(x_max-x_min);
    y=y_min+rnd()*(y_max-y_min);
    z=z_min+rnd()*(z_max-z_min);
    con.put(i,x,y,z);
  }

  // Sum up the volumes, and check that this matches the container volume
  double vvol=con.sum_cell_volumes();
  printf("Container volume : %g\n"
         "Voronoi volume   : %g\n"
         "Difference       : %g\n",cvol,vvol,vvol-cvol);

  // Output the particle positions in gnuplot format
  con.draw_particles("random_points_p.gnu");

  // Output the Voronoi cells in gnuplot format
  con.draw_cells_gnuplot("random_points_v.gnu");
}

On gnuplot do the following:

splot "random_points_p.gnu" u 2:3:4, "random_points_v.gnu" with lines

6.9.3. g++ 9.2#

Just run the command and check the version,

g++-9 --version

Now run any of the special functions examples that required -std=c++17 .