How-To ELOG: a place to document instructions for how to do things
Entry 31 | Thu Dec 13 17:33:54 2018 | S. Prohira | parallel jobs on ruby | Software

On Ruby, users get charged for the full node even if they aren't using all 20 cores, so it's a pain to run a bunch of serial jobs. There is, however, a tool called the parallel command processor (pcp) provided on Ruby (https://www.osc.edu/resources/available_software/software_list/parallel_command_processor) that makes this very simple.

Essentially, you make a text file with one command per line and hand it to the parallel command processor, which runs each line of the file as its own task. The nice thing is that you don't have to think about scheduling: you just give it the file and go, and it will use all cores on the full node in the most efficient way possible.

Below I provide two examples: a very simple one to show how it works, and a more complicated one. In both files I build the command file inside a loop; you don't need to do this, and can make the file some other way if you choose. Note that you can also do this from within an interactive job. More instructions are at the link above.

test.pbs is a minimal example for when you need to run the same command with a value that is incremented 1000 times (i.e. 1000 different tasks).

effvol.pbs is more involved, and shows some important steps for jobs that produce a lot of output, using $TMPDIR, the per-job scratch directory (if you don't know what that is, you probably don't need it). Each command in this file writes its output file to $TMPDIR, which is faster to access than the directories where you normally store your files, so your jobs run faster. At the end of the script, all of the output files from all of the completed tasks are copied to my home directory, because $TMPDIR is deleted after each job. This file also shows how to source a particular bash profile for submitted jobs, in case you need that: some programs behave differently when submitted as batch jobs than when run on the Ruby login nodes.

I recommend reading the link above for more information. The pcp is very useful on Ruby!
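Once you've saved one of the scripts below, submit it like any other PBS job:

    qsub test.pbs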

Attachment 1: test.pbs
#!/bin/bash

#PBS -A PCON0003
#PBS -l walltime=01:00:00
#PBS -l nodes=1:ppn=20


# start from an empty command file (a bare touch would let re-runs append duplicate lines)
> commandfile
for value in {1..1000}
do
    line="/path/to/your_command_to_run $value (arg1) (arg2)..(argn)"
    echo "${line}" >> commandfile
done


module load pcp
mpiexec parallel-command-processor commandfile




Attachment 2: effvol.pbs
#!/bin/bash

#PBS -A PCON0003
#PBS -N effvol
#PBS -l walltime=01:00:00
#PBS -l nodes=1:ppn=20
#PBS -o ./log/out
#PBS -e ./log/err

source /users/PCON0003/osu10643/.bash_batch

cd $TMPDIR
# start from an empty command file
> effvolconf
for value in {1..1000}
do
    line="/users/PCON0003/osu10643/app/geant/app/nrt -m /users/PCON0003/osu10643/app/geant/app/nrt/effvol.mac -f \"$TMPDIR/effvol$value.root\""
    echo "${line}" >> effvolconf
done


module load pcp
mpiexec parallel-command-processor effvolconf


cp $TMPDIR/*summary.root /users/PCON0003/osu10643/doc/root/summary/

Entry 45 | Fri Feb 4 13:06:25 2022 | William Luszczak | "Help! AnitaBuildTools/PueoBuilder can't seem to find FFTW!" | Software

Disclaimer: This might not be the best solution to this problem. I arrived here after a lot of googling and stumbling across this thread with a similar problem for an unrelated project: https://github.com/xtensor-stack/xtensor-fftw/issues/52. If you're someone who actually knows cmake, maybe you have a better solution.

When compiling both pueoBuilder and anitaBuildTools, I have run into a cmake error that looks like:

CMake Error at /apps/cmake/3.17.2/share/cmake-3.17/Modules/FindPackageHandleStandardArgs.cmake:164 (message):
  Could NOT find FFTW (missing: FFTW_LIBRARIES)

(potentially also missing FFTW_INCLUDES). Directing CMake to the pre-existing FFTW installations on OSC does not seem to do anything to resolve this error. From what I can tell, this might be related to how FFTW is built, so to get around it we need to build our own installation of FFTW using cmake instead of the recommended build process. To do this, grab whatever version of FFTW you need from here: http://www.fftw.org/download.html (for example, I needed 3.3.9). Untar the source file into whatever directory you're working in:

    tar -xzvf fftw-3.3.9.tar.gz

Then make a directory to build and install into, and cd into it:
    
    mkdir install
    cd install

Now build using cmake, using the flags shown below.

    cmake -DCMAKE_INSTALL_PREFIX=$(path_to_install_loc) -DBUILD_SHARED_LIBS=ON -DENABLE_OPENMP=ON -DENABLE_THREADS=ON ../fftw-3.3.9

For example, I downloaded and untarred the source file in `/scratch/wluszczak/fftw/`, and my install prefix was `/scratch/wluszczak/fftw/install/`. In principle this installation prefix can be anywhere you have write access, but for the sake of organization I usually try to keep everything in one place.

Once you have configured cmake, go ahead and install:

    make install -j $(nproc)

Where `$(nproc)` expands to the number of cores available; you can substitute a fixed number of threads instead. On OSC I used 4 for compiling the ANITA tools and it finished in a reasonable amount of time.

Once this has finished, cd to your install directory and remove everything except the `include` and `lib64` folders:

    cd $(path_to_install_dir) # You might already be here if you never left
    rm *                      # plain rm skips directories, so include/ and lib64/ survive
    rm -r CMakeFiles

Now we need to rebuild with slightly different flags:

    cmake -DCMAKE_INSTALL_PREFIX=$(path_to_install_loc) -DBUILD_SHARED_LIBS=ON -DENABLE_OPENMP=ON -DENABLE_THREADS=ON -DENABLE_FLOAT=ON ../fftw-3.3.9
    make install -j $(nproc)

At the end of the day, your fftw install directory should have the following files:

    include/fftw3.f  
    include/fftw3.f03
    include/fftw3.h  
    include/fftw3l.f03  
    include/fftw3q.f03 
    lib64/libfftw3f.so          
    lib64/libfftw3f_threads.so.3      
    lib64/libfftw3_omp.so.3.6.9  
    lib64/libfftw3_threads.so
    lib64/libfftw3f_omp.so        
    lib64/libfftw3f.so.3        
    lib64/libfftw3f_threads.so.3.6.9  
    lib64/libfftw3.so            
    lib64/libfftw3_threads.so.3
    lib64/libfftw3f_omp.so.3      
    lib64/libfftw3f.so.3.6.9    
    lib64/libfftw3_omp.so             
    lib64/libfftw3.so.3          
    lib64/libfftw3_threads.so.3.6.9
    lib64/libfftw3f_omp.so.3.6.9  
    lib64/libfftw3f_threads.so  
    lib64/libfftw3_omp.so.3           
    lib64/libfftw3.so.3.6.9

Once fftw has been installed, export your install directory (the one with the include and lib64 folders) to the following environment variable:

    export FFTWDIR=$(path_to_install_loc)
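As a quick sanity check (just a directory listing, assuming the layout shown above), you can confirm that both the double- and single-precision libraries made it into the install:

    ls $FFTWDIR/include/fftw3.h
    ls $FFTWDIR/lib64/libfftw3.so $FFTWDIR/lib64/libfftw3f.so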

Now you should be able to cd to your anitaBuildTools directory (or pueoBuilder directory) and run their associated build scripts:

    ./buildAnita.sh

or:

    ./pueoBuilder.sh

And hopefully your tools will magically compile (or at least, you'll get a new set of errors that are no longer related to this problem).

If you're running into an error that looks like:
        
    CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
    Please set them or make sure they are set and tested correctly in the CMake files:
    FFTWF_LIB (ADVANCED)

then pueoBuilder/anitaBuildTools can't find files that are supposed to be included in your FFTW installation. Try rebuilding FFTW with different flags, according to which files it thinks are missing.

If it seems like pueoBuilder can't find your FFTW installation at all (i.e. you're getting an error like `missing: FFTW_LIBRARIES` or `missing: FFTW_INCLUDES`), check the environment variable that is supposed to point to your local FFTW installation (`$FFTWDIR`) and make sure the correct files are in its `lib64` and `include` subdirectories.

Entry 47 | Mon Apr 24 11:51:42 2023 | William Luszczak | PUEO simulation stack installation instructions | Software

These are instructions I put together as I was first figuring out how to compile PueoSim/NiceMC. This was originally done on machines running CentOS 7, but it has since been replicated on the OSC machines (running RedHat 7.9, I think?). I generally try to avoid any `module load` type prerequisites, instead opting to compile all dependencies from source. You _might_ be able to get this to work by `module load`ing e.g. fftw, but try that at your own peril.

#pueoBuilder Installation Tutorial

This tutorial will guide you through the process of building the tools included in pueoBuilder from scratch, including the prerequisites and any environment variables that you will need to set. This sort of thing is always a bit of a nightmare process for me, so hopefully this guide can help you skip some of the frustration that I ran into. I did not have root access on the system I was building on, so the instructions below are what I had to do to get things working with local installations. If you have root access, then things might be a bit easier. For reference, I'm working on CentOS 7; other operating systems might have different problems that arise.

##Prerequisites
As far as I can tell, the prerequisites that need to be built first are:

-Python 3.9.18 (Apr. 6 2024 edit by Jason Yao, needed for ROOT 6.26-14)
-cmake 3.21.2 (I had problems with 3.11.4)
-gcc 11.1.0 (9.X will not work) (update 4/23/24: If you are trying to compile ROOT 6.30, you might need to downgrade to gcc 10.X, see note about TBB in "Issues I ran into" at the end)
-fftw 3.3.9
-gsl 2.7.1 (for ROOT)
-ROOT 6.24.00
-OneTBB 2021.12.0 (if trying to compile ROOT 6.30)

###CMake
You can download the source files for CMake here: https://cmake.org/download/. Untar the source files with:

    tar -xzvf cmake-3.22.1.tar.gz

Compiling CMake is as easy as following the directions on the website: https://cmake.org/install/, but since we're doing a local build, we'll use the `configure` script instead of the listed `bootstrap` script. As an example, suppose that I downloaded the above tar file to `/scratch/wluszczak/cmake`: 

    mkdir install
    cd cmake-3.22.1
    ./configure --prefix=/scratch/wluszczak/cmake/install
    make
    make install

You should additionally add this directory to your `$PATH` variable:

    export PATH=/scratch/wluszczak/cmake/install/bin:$PATH
    

To check to make sure that you are using the correct version of CMake, run:

    cmake --version

and you should get:

    cmake version 3.22.1

    CMake suite maintained and supported by Kitware (kitware.com/cmake).

### gcc 11.1.0

Download the gcc source from github here: https://github.com/gcc-mirror/gcc/tags. I used the 11.1.0 release, though there is a more recent 11.2.0 release that I have not tried. Once you have downloaded the source files, untar the directory:

    tar -xzvf gcc-releases-gcc-11.1.0.tar.gz

Then install the prerequisites for gcc:
    
    cd gcc-releases-gcc-11.1.0
    contrib/download_prerequisites

One of the guides I looked at also recommended installing flex separately, but I didn't seem to need to do this, and I'm not sure how you would go about it without root privileges, though I imagine it's similar to the process for all the other packages here (download the source, then build by providing an installation prefix somewhere).

After you have installed the prerequisites, create a build directory:

    cd ../
    mkdir build
    cd build

Then configure GCC for compilation like so:

    ../gcc-releases-gcc-11.1.0/configure -v --prefix=/home/wluszczak/gcc-11.1.0 --enable-checking=release --enable-languages=c,c++,fortran --disable-multilib --program-suffix=-11.1

I don't remember why I installed to my home directory instead of the /scratch/ directories used above; in principle the installation prefix can go wherever you have write access. Once configuration finishes, compile gcc with:

    make -j $(nproc)
    make install

Where `$(nproc)` is the number of processing threads you want to devote to compilation. More threads will run faster, but be more taxing on your computer. For reference, I used 8 threads and it took ~15 min to finish. 


Once gcc is built, we need to set a few environment variables:

    export PATH=/home/wluszczak/gcc-11.1.0/bin:$PATH
    export LD_LIBRARY_PATH=/home/wluszczak/gcc-11.1.0/lib64:$LD_LIBRARY_PATH

We also need to make sure cmake uses this compiler:

    export CC=/home/wluszczak/gcc-11.1.0/bin/gcc-11.1
    export CXX=/home/wluszczak/gcc-11.1.0/bin/g++-11.1
    export FC=/home/wluszczak/gcc-11.1.0/bin/gfortran-11.1

If your installation prefix in the configure command above was different, substitute that directory in place of `/home/wluszczak/gcc-11.1.0` for all the above export commands. To easily set these variables whenever you want to use gcc-11.1.0, you can stick these commands into a single shell script:

    #load_gcc11.1.sh
    export PATH=/home/wluszczak/gcc-11.1.0/bin:$PATH
    export LD_LIBRARY_PATH=/home/wluszczak/gcc-11.1.0/lib64:$LD_LIBRARY_PATH

    export CC=/home/wluszczak/gcc-11.1.0/bin/gcc-11.1
    export CXX=/home/wluszczak/gcc-11.1.0/bin/g++-11.1
    export FC=/home/wluszczak/gcc-11.1.0/bin/gfortran-11.1

(again substituting your installation prefix in place of mine). You can then set all these environment variables by simply running:
    
    source load_gcc11.1.sh

Once this is done, you can check that gcc-11.1.0 is properly installed by running:

    gcc-11.1 --version

Note that plain old

    gcc --version

might still point to an older version of gcc. This is fine though. 

###FFTW 3.3.9
Grab the source code for the appropriate version of FFTW from here: http://www.fftw.org/download.html

However, do NOT follow the installation instructions on the webpage. Those instructions might work if you have root privileges, but I personally couldn't get things to work that way. Instead, we're going to build fftw with cmake. Untar the fftw source files:

    tar -xzvf fftw-3.3.9.tar.gz

Make a build directory and cd into it:
    
    mkdir build
    cd build

Now build using cmake, using the flags shown below. For reference, I downloaded and untarred the source file in `/scratch/wluszczak/fftw/` and used `/scratch/wluszczak/fftw/build/` as the install prefix, so adjust the paths accordingly to point to your own directories from the previous step.

    cmake -DCMAKE_INSTALL_PREFIX=/scratch/wluszczak/fftw/build/ -DBUILD_SHARED_LIBS=ON -DENABLE_OPENMP=ON -DENABLE_THREADS=ON ../fftw-3.3.9
    make install -j $(nproc)

Now comes the weird part. Remove everything except the `include` and `lib64` directories in your build directory (if you installed to a different `CMAKE_INSTALL_PREFIX`, the include and lib64 directories might be located there instead; the important thing is to remove everything but leave `include` and `lib64` untouched):

    rm *
    rm -r CMakeFiles

Now rebuild fftw, but with an additional flag:

    cmake -DCMAKE_INSTALL_PREFIX=/scratch/wluszczak/fftw/build/ -DBUILD_SHARED_LIBS=ON -DENABLE_OPENMP=ON -DENABLE_THREADS=ON -DENABLE_FLOAT=ON ../fftw-3.3.9
    make install -j $(nproc)

At the end of the day, your fftw install directory should have the following files:

    include/fftw3.f  
    include/fftw3.f03
    include/fftw3.h  
    include/fftw3l.f03  
    include/fftw3q.f03 
    lib64/libfftw3f.so          
    lib64/libfftw3f_threads.so.3      
    lib64/libfftw3_omp.so.3.6.9  
    lib64/libfftw3_threads.so
    lib64/libfftw3f_omp.so        
    lib64/libfftw3f.so.3        
    lib64/libfftw3f_threads.so.3.6.9  
    lib64/libfftw3.so            
    lib64/libfftw3_threads.so.3
    lib64/libfftw3f_omp.so.3      
    lib64/libfftw3f.so.3.6.9    
    lib64/libfftw3_omp.so             
    lib64/libfftw3.so.3          
    lib64/libfftw3_threads.so.3.6.9
    lib64/libfftw3f_omp.so.3.6.9  
    lib64/libfftw3f_threads.so  
    lib64/libfftw3_omp.so.3           
    lib64/libfftw3.so.3.6.9

Why do we have to do things this way? I don't know, I'm bad at computers. Maybe someone more knowledgeable knows. I found that when I didn't do this step, I'd run into errors that pueoBuilder could not find some subset of the required files (either the ones added by building with `-DENABLE_FLOAT`, or the ones added by building without `-DENABLE_FLOAT`). 

Once fftw has been installed, export your install directory (the one with the include and lib64 folders) to the following environment variable:

    export FFTWDIR=/scratch/wluszczak/fftw/build

Again, substitute your own fftw install prefix in place of `/scratch/wluszczak/fftw/build`.

###gsl 2.7.1
gsl 2.7.1 is needed for the `mathmore` option in ROOT. If you have an outdated version of gsl, ROOT will still compile, but it will skip installing `mathmore` and `root-config --has-mathmore` will return `no`. To fix this, grab the latest source code for gsl from here: https://www.gnu.org/software/gsl/. Untar the files to a directory of your choosing:

    tar -xzvf gsl-latest.tar.gz

For some reason I also installed gsl to my home directory, but in principle you can put it wherever you want. 

    mkdir /home/wluszczak/gsl
    cd gsl-2.7.1   # cd into the untarred source directory (the name will match the version you grabbed)
    ./configure --prefix=/home/wluszczak/gsl
    make
    make check
    make install

To make sure ROOT can find this installation of gsl, you'll again need to set an environment variable prior to building ROOT:

    export GSL_ROOT_DIR=/home/wluszczak/gsl/
    
I also added this to my $PATH variable, though I don't remember if that was required to get things working or not:

    export PATH=/home/wluszczak/gsl/bin/:$PATH 
    export LD_LIBRARY_PATH=/home/wluszczak/gsl/lib:$LD_LIBRARY_PATH
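As a quick sanity check, `gsl-config` (which ships with GSL and lands in the `bin` directory you just added to `$PATH`) can confirm which installation is being picked up:

    gsl-config --version
    gsl-config --prefix   # should print your install prefix, e.g. /home/wluszczak/gsl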

 

###Python 3.9.18
Apr. 6, 2024 edit by Jason Yao:
I was able to follow this entire ELOG to install root 6.24.00, but I also was getting warnings/errors that seem to be related to Python,
so I went ahead and installed Python 3.9.18 (and then a newer version of ROOT just for fun).
Note that even though we can `module load python/3.9-2022.05` on OSC, I am 90% sure that this provided Python instance is no good as far as ROOT is concerned.

Head over to https://www.python.org/downloads/release/python-3918/ to check out the source code.
You can run
    wget https://www.python.org/ftp/python/3.9.18/Python-3.9.18.tgz
to download it on OSC; then, run
    tar -xzvf Python-3.9.18.tgz
    cd Python-3.9.18

Next we will compile Python from source; I followed an online guide for this step.
I wanted to install to `${HOME}/usr/Python-3.9.18/install`, so
    ./configure --prefix=${HOME}/usr/Python-3.9.18/install --enable-shared
Note that we must have the flag `--enable-shared` "to ensure that shared libraries are built for Python. By not doing this you are preventing any application which wants to use Python as an embedded environment from working", according to one guide I found.
(The corresponding error when you try to compile ROOT later on would look like "...can not be used when making a shared object; recompile with -fPIC...")

After configuration,
    make -j8 && make install
(using 8 threads)

After installation, head over to the install directory, and then
    cd bin
You should see `pip3` and `python3.9`. If you run
    ./python3
you should see the Python interactive terminal
    Python 3.9.18 (main, Apr  5 2024, 22:49:51) 
    [GCC 11.1.0] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>>
Add the python3 in this folder to your PATH variable. For example, 
    export PATH=${HOME}/usr/Python-3.9.18/install/bin:${PATH}
 

While we are in the python install directory, we might as well also use the `pip3` there to install numpy:
    ./pip3 install numpy
(I am not sure if this is absolutely needed by ROOT, but probably)

Next comes the important bit. You need to add the `lib/` directory inside your Python installation to the environment variable $LD_LIBRARY_PATH
    export LD_LIBRARY_PATH=${HOME}/usr/Python-3.9.18/install/lib:$LD_LIBRARY_PATH
as suggested on Stack Overflow. Without this step I ran into errors when compiling ROOT.
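As a quick sanity check that the new interpreter is the one being picked up and that numpy imports cleanly:

    python3 --version
    python3 -c "import numpy; print(numpy.__version__)"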

 

###ROOT 6.24.00
Download the specific version of ROOT that you need from here: https://root.cern/install/all_releases/

You might need to additionally install some of the dependencies (https://root.cern/install/dependencies/), but it seems like everything I needed was already installed on my system. 

Untar the source you downloaded:
    
    tar -xzvf root_v6.24.00.source.tar.gz

Make some build and install directories:

    mkdir build install
    cd build

Run CMake, but be sure to enable the fortran, mathmore, and minuit2 options. For reference, I had downloaded and untarred the source files to `/scratch/wluszczak/root`. Your installation and source paths will be different.

    cmake -DCMAKE_INSTALL_PREFIX=/scratch/wluszczak/root/install/ /scratch/wluszczak/root/root-6.24.00/ -Dfortran=ON -Dminuit2=ON -Dmathmore=ON

Note: if you end up with an error related to compiling XROOTD, then add -Dxrootd=OFF to the original cmake command above.

Then proceed to start the build:

    cmake --build . --target install -j $(nproc)
    

If everything has worked, then after the above command finishes you should be able to source the following file to finish setting up ROOT:

    source ../install/bin/thisroot.sh
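To confirm the build picked up the options you asked for, you can query `root-config` (it supports `--has-<feature>` flags):

    root-config --version
    root-config --has-mathmore   # should print "yes"
    root-config --has-minuit2    # should print "yes"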

##pueoBuilder
By this point, you should have working installations of CMake (3.21.2 or newer; the example above used 3.22.1), gcc-11.1.0, fftw 3.3.9, and ROOT 6.24.00. Additionally, the following environment variables should have been set:

    export PATH=/scratch/wluszczak/cmake/install/bin:$PATH

    export PATH=/home/wluszczak/gcc-11.1.0/bin:$PATH
    export LD_LIBRARY_PATH=/home/wluszczak/gcc-11.1.0/lib64:$LD_LIBRARY_PATH

    export CC=/home/wluszczak/gcc-11.1.0/bin/gcc-11.1
    export CXX=/home/wluszczak/gcc-11.1.0/bin/g++-11.1
    export FC=/home/wluszczak/gcc-11.1.0/bin/gfortran-11.1

    export FFTWDIR=/scratch/wluszczak/fftw/build
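Following the same pattern as `load_gcc11.1.sh` above, it can be convenient to collect all of these into one script (a sketch; the file name `load_pueo_env.sh` is my own, and the prefixes are just the examples used throughout this guide):

    #load_pueo_env.sh
    export PATH=/scratch/wluszczak/cmake/install/bin:$PATH
    export PATH=/home/wluszczak/gcc-11.1.0/bin:$PATH
    export LD_LIBRARY_PATH=/home/wluszczak/gcc-11.1.0/lib64:$LD_LIBRARY_PATH
    export CC=/home/wluszczak/gcc-11.1.0/bin/gcc-11.1
    export CXX=/home/wluszczak/gcc-11.1.0/bin/g++-11.1
    export FC=/home/wluszczak/gcc-11.1.0/bin/gfortran-11.1
    export FFTWDIR=/scratch/wluszczak/fftw/build
    source /scratch/wluszczak/root/install/bin/thisroot.sh

Then `source load_pueo_env.sh` in any new shell before building.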

At this point, the hard work is mostly done. Check out pueoBuilder with:

    git clone git@github.com:PUEOCollaboration/pueoBuilder 

set the following environment variables:

    export PUEO_BUILD_DIR=/scratch/wluszczak/PUEO/pueoBuilder
    export PUEO_UTIL_INSTALL_DIR=/scratch/wluszczak/PUEO/pueoBuilder
    export NICEMC_SRC=${PUEO_BUILD_DIR}/components/nicemc
    export NICEMC_BUILD=${PUEO_BUILD_DIR}/build/components/nicemc
    export PUEOSIM_SRC=${PUEO_BUILD_DIR}/components/pueoSim
    export LD_LIBRARY_PATH=${PUEO_UTIL_INSTALL_DIR}/lib:$LD_LIBRARY_PATH

Where $PUEO_BUILD_DIR and $PUEO_UTIL_INSTALL_DIR point to where you cloned pueoBuilder (in my case, `/scratch/wluszczak/PUEO/pueoBuilder`). Now you should be able to just run:

    ./pueoBuilder.sh

Perform a prayer to the C++ gods while you're waiting for it to compile, and hopefully at the end of the day you'll have a working set of PUEO software. 

##Issues I Ran Into
If you already have an existing installation of ROOT, you may still need to recompile to make sure you're using the same c++ standard that the PUEO software is using. I believe the pre-compiled ROOT binaries available through their website are insufficient, though maybe someone else has been able to get those working. 

If you're running into errors about c++ standard or compiler version even after you have installed gcc-11.1.0, then for some reason your system isn't recognizing your local installation of gcc-11.1.0. Check the path variables ($PATH and $LD_LIBRARY_PATH) to make sure the gcc-11.1.0 `bin` directory is being searched.

If you're running into an error that looks like:
        
    CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
    Please set them or make sure they are set and tested correctly in the CMake files:
    FFTWF_LIB (ADVANCED)

then pueoBuilder can't find files that are supposed to be included in your fftw installation. Try rebuilding with different flags, according to which files it thinks are missing.

If it seems like pueoBuilder can't seem to find your fftw installation at all (i.e. you're getting some error that looks like `missing: FFTW_LIBRARIES` or `missing: FFTW_INCLUDES`), check the environment variables that are supposed to point to your fftw installation (`$FFTWDIR`) and make sure there are the correct files in the `lib` and `include` subdirectories. 

Update 6/23/24: The latest version of ROOT (6.30) will fail to compile on OSC unless you manually compile TBB as well. An easy workaround is to simply downgrade to ROOT 6.24; however, if you really need ROOT 6.30, you can follow the instructions below to install TBB and compile ROOT:

You will first need to downgrade to GCC 10.X, since TBB will not compile with GCC 11. This can be done by following the GCC installation instructions above, except starting with GCC 10 source code instead of GCC 11.

To install TBB yourself, download the source code (preferably the .tar.gz file) from here: https://github.com/oneapi-src/oneTBB/releases/tag/v2021.12.0. Move the file to the directory where you want to install TBB and untar it with:

    tar -xzvf oneTBB-2021.12.0.tar.gz

Make some build and install directories:

    mkdir build install
    cd build

Then configure cmake:

    cmake -DCMAKE_INSTALL_PREFIX=/path/to/tbb/install ../oneTBB-2021.12.0

Then compile with:

    cmake --build .

Once this has finished running, you can add the installation to your $PATH and $LD_LIBRARY_PATH variables:

    export PATH=/path/to/tbb/install/bin:$PATH
    export LD_LIBRARY_PATH=/path/to/tbb/install/lib64:$LD_LIBRARY_PATH

You can then proceed as normal, except when compiling ROOT you will need one additional cmake flag (-Dbuiltin_tbb=ON):

    cmake -DCMAKE_INSTALL_PREFIX=/scratch/wluszczak/root/install/ /scratch/wluszczak/root/root-6.24.00/ -Dfortran=ON -Dminuit2=ON -Dmathmore=ON -Dbuiltin_tbb=ON

And hopefully this should work. This process is a little bit more involved than just downgrading ROOT, so try to avoid going down this route unless absolutely necessary.

 

Draft | Sun Sep 17 20:05:29 2017 | Spoorthi Nagasamudram | Some basic
Draft | Thu Apr 27 18:28:22 2017 | Sam Stafford (Also Slightly Jacob) | Installing AnitaTools on OSC | Software

Jacob here; I just want to add how I got AnitaTools to see FFTW:

1) echo $FFTW3_HOME to find where the lib and include dirs are.

2) Next add the following line to the start of cmake/modules/FindFFTW.cmake

'set ( FFTW_ROOT full/path/you/got/from/step/1)'

 

Brief, experience-based instructions on installing the AnitaTools package on the Oakley OSC cluster.

Attachment 1: OSC_build.txt
Installing AnitaTools on OSC
Sam Stafford
04/27/2017

This document summarizes the issues I encountered installing AnitaTools on the OSC Oakley cluster.
I have indicated work-arounds I made for unexpected issues
  I do not know that this is the only valid process
  This process was developed by trial-and-error (mostly error) and may contain superfluous steps
    A person familiar with AnitaTools and cmake may be able to streamline it

Check out OSC's web site, particularly to find out about MODULES, which facilitate access to pre-installed software

export the following environment variables in your .bash_profile  (not .bashrc):

  ROOTSYS                             where you want ROOT to live
     install it somewhere in your user directory; at this time, ROOT is not pre-installed on Oakley as far as I can tell
  ANITA_UTIL_INSTALL_DIR              where you want anitaTools to live
  FFTWDIR                             where fftw is 
    look on OSC's website to find out where it is; you shouldn't have to install it locally

  PATH   should contain $FFTWDIR/bin  and $ROOTSYS/bin
  LD_LIBRARY_PATH should contain     $FFTWDIR/lib    $ROOTSYS/lib     $ANITA_UTIL_INSTALL_DIR/lib
  LD_INCLUDE_PATH should contain     $FFTWDIR/include    $ROOTSYS/include     $ANITA_UTIL_INSTALL_DIR/include
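  For example, a sketch of the corresponding .bash_profile lines (exact prefixes depend on where you installed things):
    export PATH=$FFTWDIR/bin:$ROOTSYS/bin:$PATH
    export LD_LIBRARY_PATH=$FFTWDIR/lib:$ROOTSYS/lib:$ANITA_UTIL_INSTALL_DIR/lib:$LD_LIBRARY_PATH
    export LD_INCLUDE_PATH=$FFTWDIR/include:$ROOTSYS/include:$ANITA_UTIL_INSTALL_DIR/include:$LD_INCLUDE_PATH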

also put in your .bash_profile: (I put these after the exports)
  
    module load gnu/4.8.5     // loads g++ compiler
           (this should automatically load module fftw/3.3.4 also)

install ROOT  - follow ROOT's instructions to build from source.  It's a typical (configure / make / make install) sequence
  you probably need  ./configure --enable-Minuit2

get AnitaTools from github/anitaNeutrino "anitaBuildTool" (see Anita ELOG 672, by Cosmin Deaconu)

Change entry in which_event_reader to ANITA3, if you want to analyze ANITA-3 data
  (at least for now; I think they are developing smarts to make the SW adapt automatically to the anita data "version")

Do ./buildAnita.sh       //downloads the software and attempts a full build/install

    it may fail with a can't-find-fftw error during configure:
      system fails to populate environment variable FFTW_ROOT, not sure why
      add the following line at beginning of anitaBuildTool/cmake/modules/FindFFTW.cmake:
        set( FFTW_ROOT /usr/local/fftw3/3.3.4-gnu)
          (this apparently tricks cmake into finding fftw)
  NOTE: ./buildAnita.sh always downloads the software from github.  IT WILL WIPE OUT ANY CHANGES YOU MADE TO AnitaTools!
      
Do "make" 
    May fail with   /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.14' not found and/or a few other similar messages 
                   or may say c11 is not supported  
      need to change compiler setting for cmake:
        make the following change in anitaBuildTool/build/CMakeCache.txt   (points cmake to the g++ compiler instead of default intel/c++)
          #CMAKE_CXX_COMPILER:FILEPATH=/usr/bin/c++                  (comment this out)
           CMAKE_CXX_COMPILER:FILEPATH=/usr/local/gcc/4.8.5/bin/g++  (add this)  
       (you don't necessarily have to use version gcc/4.8.5, but it worked for me)

Then retry by doing "make"

Once make is completed, do "make install"

A couple of notes:
  Once AnitaTools is built, if you change source, just do make, then make install (from anitaBuildTool) (don't ./buildAnita.sh; see above)
      (actually make install will do the make step if source changes are detected)
  To start AnitaTools over from cmake, delete the anitaBuildTool/build directory and run make (not cmake: make will drive cmake for you)
      (don't do cmake directly unless you know what you're doing; it'll mess things up)
  

Entry 9 | Thu May 11 13:43:46 2017 | Sam Stafford | Notes on installing icemc on OSC | Software
Attachment 1: icemc_setup_osc.txt
A few notes about installing icemc on OSC

Dependencies
  ROOT - download from CERN and install according to instructions
  FFTW - do "module load gnu/4.8.5"   (or put it in your .bash_profile)

The environment variable FFTWDIR must contain the directory where FFTW resides
  in my case this was /usr/local/fftw3/3.3.4-gnu 
  set this up in your .bash_profile (not .bashrc)

I copied my working instance of icemc from my laptop to a folder in my osc space
  Copy the whole icemc directory (maybe it's icemc/trunk, depending on how you installed), EXCEPT for the "output" subdir because it's big and unnecessary
  in your icemc directory on OSC, do "mkdir output"

In icemc/Makefile
  find a statement like this:
    LIBS += -lMathMore $(FFTLIBS) -lAnitaEvent
  and modify it to include the directory where the FFTW library is:
    LIBS += -L$(FFTWDIR)/lib -lMathMore $(FFTLIBS) -lAnitaEvent

  note: FFTLIBS contains the list of libraries (e.g., -lfftw3), NOT the library search paths
  
Compile by doing "make"

Remember you should set up a batch job on OSC using PBS.

Entry 10 | Thu May 11 14:38:10 2017 | Sam Stafford | Sample OSC batch job setup | Software

Batch jobs on OSC are initiated through the Portable Batch System (PBS).  This is the recommended way to run stuff on OSC clusters.
Attached is a sample PBS script that copies files to temporary storage on the OSC cluster (also recommended) and runs an analysis program.
Info on batch processing is at https://www.osc.edu/supercomputing/batch-processing-at-osc.
     This will tell you how to submit and manage batch jobs.
More resources are available at www.osc.edu.

PBS web site: http://www.pbsworks.com

The PBS user manual is at www.pbsworks.com/documentation/support/PBSProUserGuide10.4.pdf.

Attachment 1: osc_batch_jobs.txt
## annotated sample PBS batch job specification for OSC
## Sam Stafford 05/11/2017

#PBS -N j_ai06_${RUN_NUMBER}

##PBS -m abe       ##  request an email on job completion
#PBS -l mem=16GB   ##  request 16GB memory
##PBS -l walltime=06:00:00  ## set this in qsub
#PBS -j oe         ## merge stdout and stderr into a single output log file
#PBS -A PAS0174

echo "run number " $RUN_NUMBER
echo "cal pulser " $CAL_PULSER
echo "baseline file " $BASELINE_FILE
echo "temp dir is " $TMPDIR
echo "ANITA_DATA_REMOTE_DIR="$ANITA_DATA_REMOTE_DIR
set -x


## copy the files from kingbee to the temporary workspace  
##    (if you set up public key authentication between kingbee and OSC, you won't need a password; just google "public key authentication")
mkdir $TMPDIR/run${RUN_NUMBER}   ## make a directory for this run number
scp stafford.16@kingbee.mps.ohio-state.edu:/data/anita/anita3/flightData/copiedBySam/run${RUN_NUMBER}/calEventFile${RUN_NUMBER}.root $TMPDIR/run${RUN_NUMBER}/calEventFile${RUN_NUMBER}.root
scp stafford.16@kingbee.mps.ohio-state.edu:/data/anita/anita3/flightData/copiedBySam/newerData/run${RUN_NUMBER}/gpsEvent${RUN_NUMBER}.root $TMPDIR/run${RUN_NUMBER}/gpsEvent${RUN_NUMBER}.root
scp stafford.16@kingbee.mps.ohio-state.edu:/data/anita/anita3/flightData/copiedBySam/newerData/run${RUN_NUMBER}/timedHeadFile${RUN_NUMBER}.root $TMPDIR/run${RUN_NUMBER}/timedHeadFile${RUN_NUMBER}.root
scp stafford.16@kingbee.mps.ohio-state.edu:/data/anita/anita3/flightData/copiedBySam/newerData/run${RUN_NUMBER}/decBlindHeadFileV1_${RUN_NUMBER}.root $TMPDIR/run${RUN_NUMBER}/decBlindHeadFileV1_${RUN_NUMBER}.root

## set up the environment variables to point to the temporary work space
export ANITA_DATA_REMOTE_DIR=$TMPDIR
export ANITA_DATA_LOCAL_DIR=$TMPDIR
echo "ANITA_DATA_REMOTE_DIR="$ANITA_DATA_REMOTE_DIR

## run the analysis program
cd analysisSoftware
./analyzerIterator06 ${CAL_PULSER} -S1 -Noverlap --FILTER_OPTION=4 ${BASELINE_FILE} ${RUN_NUMBER} -O
echo "batch job ending"

Entry 14 | Mon Sep 18 12:06:01 2017 | Oindree Banerjee | How to get anitaBuildTool and icemc set up and working | Software

First try reading and following the instructions here

https://u.osu.edu/icemc/new-members-readme/

Then e-mail me at oindreeb@gmail.com with your problems 

 

Entry 35 | Tue Feb 26 19:07:40 2019 | Lauren Ennesser | Valgrind command to suppress ROOT warnings

valgrind --suppressions=$ROOTSYS/etc/valgrind-root.supp ./myCode

If you use valgrind to identify potential memory leaks in your code, but use a lot of ROOT objects and functions, you'll notice that ROOT's TObjects trigger a lot of "potential memory leak" warnings. This option will suppress many of those. More info at https://root-forum.cern.ch/t/valgrind-and-root/2ss8506

Entry 42 | Tue Nov 5 16:22:16 2019 | Keith McBride | NASA Proposal fellowships | Other

Here is a useful link for astroparticle grad students regarding proposal opportunities from NASA:

 

https://nspires.nasaprs.com/external/solicitations/summary.do?solId=%7BE16CD59F-29DD-06C0-8971-CE1A9C252FD4%7D&path=&method=init

Full email I received regarding this information was:

_______________________________________________________________________________________________________________________________________________________________________________________________________________________________________________

ROSES-19 Amendment adds a new program element to ROSES-2019: Future Investigators in NASA Earth and Space Science and Technology (FINESST), the ROSES Graduate Student Research Program Element, E.6.

 

Through FINESST, the Science Mission Directorate (SMD) solicits proposals from accredited U.S. universities and other eligible organizations for graduate student-designed and performed research projects that contribute to SMD's science, technology and exploration goals.

 

A Notice of Intent is not requested for E.6 FINESST. Proposals to FINESST are due by February 4, 2020.

 

Potential proposers who wish to participate in the optional pre-proposal teleconference December 2, 2019 from 1:00-2:30 p.m. Eastern Time may (no earlier than 30 minutes prior to the start time) call 1-888-324-3185 (U.S.-only Toll Free) or 1-630-395-0272 (U.S. Toll) and use Participant Passcode: 8018549. Restrictions may prevent the use of a toll-free number from a mobile or free-phone or from telephones outside the U.S. For U.S. TTY-equipped callers or other types of relay service no earlier than 30 minutes before the start of the teleconference, call 711 and provide the same conference call number/passcode. Email HQ-FINESST@mail.nasa.gov any teleconference agenda suggestions and questions by November 25, 2019. Afterwards, questions and responses, with identifying information removed, will be posted on the NSPIRES page for FINESST under "other documents".

 

FINESST awards research grants with a research mentor as the principal investigator and the graduate student listed as the "student participant". Unlike the extinct NASA Earth and Space Science Fellowships (NESSF), the Future Investigators (FIs) are not trainees or fellows. Students with existing NESSF awards seeking a third year of funding may not submit proposals to FINESST. Instead, they may propose to NESSF20R. Subject to a period of performance restriction, some former NESSFs may be eligible to submit proposals to FINESST.

 

On or about November 1, 2019, this Amendment to the NASA Research Announcement "Research Opportunities in Space and Earth Sciences (ROSES) 2019" (NNH19ZDA001N) will be posted on the NASA research opportunity homepage at http://solicitation.nasaprs.com/ROSES2019 and will appear on the RSS feed at: https://science.nasa.gov/researchers/sara/grant-solicitations/roses-2019/

 

Questions concerning this program element may be directed to HQ-FINESST@mail.nasa.gov.

Entry 39 | Thu Jul 11 10:05:37 2019 | Justin Flaherty | Installing PyROOT for Python 3 on Owens | Software

In order to get PyROOT working for Python 3, you must build ROOT with a flag that specifies Python 3 in the installation.  This method will create a folder titled root-6.16.00 in your current directory, so organize things how you see fit. Then the steps are relatively simple:

wget https://root.cern/download/root_v6.16.00.source.tar.gz
tar -zxf root_v6.16.00.source.tar.gz
cd root-6.16.00
mkdir obj
cd obj
cmake .. -Dminuit2=On -Dpython3=On
make -j8

If you wish to build a different version of ROOT, the steps should be the same:

wget https://root.cern/download/root_v<version>.source.tar.gz
tar -zxf root_v<version>.source.tar.gz
cd root-<version>
mkdir obj
cd obj
cmake .. -Dminuit2=On -Dpython3=On
make -j8
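Once the build finishes, a quick sanity check (run from inside the obj directory; this uses the thisroot.sh generated in the build tree):

source bin/thisroot.sh
python3 -c "import ROOT; print(ROOT.gROOT.GetVersion())"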

Draft | Fri Feb 28 13:09:53 2020 | Justin Flaherty | Installing anitaBuildTools on OSC-Owens (Revised 2/28/2020)
Draft | Thu Sep 21 14:30:18 2017 | Julie Rolla | Using/Running XF

Below I've attached a video with some information regarding running XF. Before you start, here's some important information you need to know. 

 

In order to run XF:

XF can now only run on a Windows OS. If you are on a Mac, you can dual-boot with a Windows 10 install that is not activated; this still works. The lack of activation imposes minimal constraints and will not hinder your ability to run XF. Otherwise, you can use a queen bee machine. Note that there are two ways to run XF itself (once you are on a Windows machine): you can run via OSC on Oakley, which has a floating license so you will not need the USB key, or you can use the USB key. Unfortunately, at the moment we only have one USB key, so it may be best to get an account through OSC.

 

Two methods:

1.) If you are not using Oakley, you need the USB key to run.

2.) Log in to Oakley; you do not need the USB key to run.

 

------

For method 1: you can just click on the XFdtd icon. It will pop up with "need to find a valid license"; click "ok", insert the USB key, and hit "retry".

 

For method 2: log in to Oakley by following the steps between **:

** ssh in

put "module load XFdtd" in the command line; this gets you access to the libraries

then put "XFdtd" to load it **

 

————

 

Note: you must have drivers installed to use the USB key. Once you plug it in for the first time, it will tell you what to install.

Note: The genetic algorithm will change the geometry of the detector and XF will check the gain values with those given geometries. 

 

After XF is loaded:

Components are listed on the left; each is different for different simulations.

To put in the geometry of an antenna, click on "parts", click "create new", and choose a type of geometry. Note that you can also go to "file" and import; you can import CAD files this way.

 

"Create simulation": save the project and give it a name, then click "create simulation". This stores all of the geometry and settings of the simulation. Now you could, if you wanted to, browse all of the different types of simulations.

 

How to actually run the simulations:

In this example, Carl set up a planar sensor with a point sensor inside the sphere, and two more sensors on each side of the sphere. Load the simulation and hit the play button at the bottom; note that it should take 20 or 30 minutes to actually simulate. When it is done, you can re-click the play button and it will show you the visual simulation. XF automatically writes the data out when running the simulation; you then either need to parse that, or view the data in XF itself.

 

You can click "file" and export data; you can also export the image. Note that this can give you a "smith chart", which is the gain measurement you're looking for. If you had a far field/zone sensor, then you could get a far-field gain, which is this smith chart. To get the smith chart, hover over the sensor and right-click; this should give you the option of a smith chart if you had the correct sensor. Note that all of this data and information is on the right-hand side under "results", which will pull up all of the sensors; you can right-click each one to gather the actual individual data.

 

Note: the far zone sensor puts sensors completely symmetrically around the object, i.e. if we have a sphere, we will have a larger sphere outside of our conducting sphere/antenna.

Entry 34 | Tue Feb 26 16:19:20 2019 | Julie Rolla | All of the group GitHub account links | Software

ANITA Binned Analysis: https://github.com/osu-particle-astrophysics/BinnedAnalysis

GENETIS Bicone: https://github.com/mclowdus/BiconeEvolution

GENETIS Dipole: https://github.com/hchasan/XF-Scripts

ANITA Build tool: https://github.com/anitaNeutrino/anitaBuildTool

ANITA Hackathon: https://github.com/anitaNeutrino/hackathon2017

ICEMC: https://github.com/anitaNeutrino/icemc

Brian's Github: https://github.com/clark2668?tab=repositories

 

Note that you *may* need permissions for some of these. Please email Lauren (ennesser.1@buckeyemail.osu.edu), Julie (JulieRolla@gmail.com), AND Brian (clark.2668@buckeyemail.osu.edu) if you have any issues with permissions. Please state which GitHub links you are looking to view.

Entry 6 | Tue Apr 25 10:22:50 2017 | Jude Rajasekera | ShelfMC Cluster Runs | Software

Doing large runs of ShelfMC can be time-intensive. However, if you have access to a computing cluster like Ruby or KingBee, where you are given a node with multiple processors, ShelfMC runs can be sped up by utilizing all available processors on a node. The multithread_shelfmc.sh script automates these runs for you. The script and instructions are attached below.

Attachment 1: multithread_shelfmc.sh
#!/bin/bash
#Jude Rajasekera 3/20/2017
shelfmcDir=/users/PCON0003/cond0091/ShelfMC #put your shelfmc directory address here 
 

runName='TestRun' #name of run
NNU=500000 #total NNU per run
seed=42 #initial seed for every run, each processor will receive a different seed (42,43,44,45...)
NNU="$(($NNU / 20))" #calculating NNU per processor, change 20 to however many processors your cluster has per node
ppn=20 #processors per node
########################### make changes for input.txt file here #####################################################
input1="#inputs for ARIANNA simulation, do not change order unless you change ReadInput()"
input2="$NNU #NNU, setting to 1 for unique neutrino"
input3="$seed   #seed Seed for Rand3"
input4="18.0    #EXPONENT, !should be exclusive with SPECTRUM"
input5="1000    #ATGap, m, distance between stations"
input6="4       #ST_TYPE, !restrict to 4 now!"
input7="4       #N_Ant_perST, not to be confused with ST_TYPE above"
input8="2       #N_Ant_Trigger, this is the minimum number of AT to trigger"
input9="30      #Z for ST_TYPE=2"
input10="575     #ICETHICK, thickness of ice including firn, 575m at Moore's Bay"
input11="1      #FIRN, KD: ensure DEPTH_DEPENDENT is off if FIRN is 0"
input12="1.30   #NFIRN 1.30"
input13="$122    #FIRNDEPTH in meters"
input14="1      #NROWS 12 initially, set to 3 for HEXAGONAL"
input15="1      #NCOLS 12 initially, set to 5 for HEXAGONAL"
input16="0      #SCATTER"
input17="1      #SCATTER_WIDTH,how many times wider after scattering"
input18="0      #SPECTRUM, use spectrum, ! was 1 initially!"
input19="0      #DIPOLE,  add a dipole to the station, useful for st_type=0 and 2"
input20="0      #CONST_ATTENLENGTH, use constant attenuation length if ==1"
input21="1000   #ATTEN_UP, this is the conjuction of the plot attenlength_up and attlength_down when setting REFLECT_RATE=0.5(3dB)"
input22="250    #ATTEN_DOWN, this is the average attenlength_down before Minna Bluff measurement(not used anymore except for CONST_ATTENLENGTH)"
input23="4      #NSIGMA, threshold of trigger"
input24="1      #ATTEN_FACTOR, change of the attenuation length"
input25="1      #REFLECT_RATE,power reflection rate at the ice bottom"
input26="0      #GZK, 1 means using GZK flux, 0 means E-2 flux"
input27="0      #FANFLUX, use fenfang's flux which only covers from 10^17 eV to 10^20 eV"
input28="0      #WIDESPECTRUM, use 10^16 eV to 10^21.5 eV as the energy spectrum, otherwise use 17-20"
input29="1      #SHADOWING"
input30="1      #DEPTH_DEPENDENT_N;0 means uniform firn, 1 means n_firn is a function of depth"
input31="0      #HEXAGONAL"
input32="1      #SIGNAL_FLUCT 1=add noise fluctuation to signal or 0=do not"
input33="4.0    #GAINV  gain dependency"
input34="1      #TAUREGENERATION if 1=tau regeneration effect, if 0=original"
input35="3.0    #ST4_R radius in meters between center of station and antenna"
input36="350    #TNOISE noise temperature in Kelvin"
input37="80     #FREQ_LOW low frequency of LPDA Response MHz #was 100"
input38="1000   #FREQ_HIGH high frequency of LPDA Response MHz"
input39="/users/PCON0003/cond0091/ShelfMC/temp/LP_gain_manual.txt     #GAINFILENAME"
###########################################################################################

cd $shelfmcDir #cd to dir containing shelfmc
mkdir $runName #make a folder for run
cd $runName    #cd into run folder
initSeed=$seed 

for (( i=1; i<=$ppn;i++)) #make 20 setup files for 20 processors
do
    mkdir Setup$i #make setup folder i
    cd Setup$i #go into setup folder i
    
    seed="$(($initSeed+$i-1))" #calculate seed for this iteration
    input3="$seed      #seed Seed for Rand3" #save new input line    
    for j in {1..39} #print all 39 input.txt lines
    do
	lineName=input$j
        echo "${!lineName}" >> input.txt #print line to input.txt file
    done
    cd ..
done



pwd=`pwd`

#create job file
echo '#!/bin/bash' >> run_shelfmc_multithread.sh
echo '#PBS -l nodes=1:ppn='$ppn >> run_shelfmc_multithread.sh #change depending on processors per node
echo '#PBS -l walltime=00:05:00' >> run_shelfmc_multithread.sh #change walltime depending on run size, will be 20x shorter than single processor run time
echo '#PBS -N shelfmc_'$runName'_job' >> run_shelfmc_multithread.sh
echo '#PBS -j oe'  >> run_shelfmc_multithread.sh
echo '#PBS -A PCON0003' >> run_shelfmc_multithread.sh #change to specify group
echo 'cd ' $shelfmcDir >> run_shelfmc_multithread.sh 
echo 'runName='$runName  >> run_shelfmc_multithread.sh
for (( k=1; k<=$ppn;k++))
do
    echo './shelfmc_stripped.exe $runName/Setup'$k' _'$k'$runName &' >> run_shelfmc_multithread.sh #execute commands for 20 setup files
done
echo 'wait' >> run_shelfmc_multithread.sh #wait until all runs are finished
echo 'cd $runName' >> run_shelfmc_multithread.sh #go into run folder 
echo 'for (( i=1; i<='$ppn';i++)) #20 iterations' >> run_shelfmc_multithread.sh 
echo 'do' >> run_shelfmc_multithread.sh
echo '  cd Setup$i #cd into setup dir' >> run_shelfmc_multithread.sh
echo '  mv *.root ..' >> run_shelfmc_multithread.sh #move root files to runDir
echo '  cd ..' >> run_shelfmc_multithread.sh
echo 'done' >> run_shelfmc_multithread.sh
echo 'hadd Result_'$runName'.root *.root' >> run_shelfmc_multithread.sh #add all root files
echo 'rm *ShelfMCTrees*' >> run_shelfmc_multithread.sh #delete all partial root files

chmod u+x run_shelfmc_multithread.sh

echo "Run files created"
echo "cd into run folder and do $ qsub run_shelfmc_multithread.sh"
Attachment 2: multithread_shelfmc_walkthrough.txt
This document will explain how to download, configure, and run multithread_shelfmc.sh in order to do large runs on computing clusters.

####DOWNLOAD####
1.Download multithread_shelfmc.sh
2.Move multithread_shelfmc.sh into ShelfMC directory
3.Do $chmod u+x multithread_shelfmc.sh

####CONFIGURE###
1.Open multithread_shelfmc.sh
2.On line 3, modify shelfmcDir to your ShelfMC dir
3.On line 6, add your run name
4.On line 7, add the total NNU
5.On line 8, add an initial seed
6.On line 10, specify number of processors per node for your cluster
7.On lines 12-49, edit the input.txt parameters
8.On line 50, add the location of your LP_gain_manual.txt
9.On line 80, specify a wall time for each run; remember this will be about 20x shorter than ShelfMC on a single processor
10.On line 83, Specify the group name for your cluster if needed
11.Save file

####RUN####
1.Do $./multithread_shelfmc.sh 
2.There should now be a new directory in the ShelfMC dir with 20 setup files and a run_shelfmc_multithread.sh script
3.Do $qsub run_shelfmc_multithread.sh
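4.To monitor the job while it runs, do $qstat -u <your_username> (standard PBS command; <your_username> is a placeholder for your cluster username)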

###RESULT####
1.After the run has completed, there will be a result .root file in the run directory
 
Entry 7 | Tue Apr 25 10:35:43 2017 | Jude Rajasekera | ShelfMC Parameter Space Scan | Software

These scripts allow you to do thousands of ShelfMC runs while varying certain parameters of your choice. As is, the attenuation length, reflection rate, ice thickness, firn depth, and station depth are varied over certain ranges; in total, the whole Parameter Space Scan does 5250 runs on a cluster like Ruby or KingBee. The scripts and instructions are attached below.

Attachment 1: ParameterSpaceScan_instructions.txt
This document will explain how to download, configure, and run a parameter space search for ShelfMC on a computing cluster.
These scripts explore the ShelfMC parameter space by varying ATTEN_UP, REFLECT_RATE, ICETHICK, FIRNDEPTH, and STATION_DEPTH over certain ranges.
The ranges and increments can be found in setup.sh. 

In order to vary STATION_DEPTH, some changes were made to the ShelfMC code. Follow these steps to allow STATION_DEPTH to be an input parameter.
1.cd to ShelfMC directory
2.Do $sed -i -e 's/ATDepth/STATION_DEPTH/g' *.cc
3.Open declaration.hh. Replace line 87 "const double ATDepth = 0.;" with "double STATION_DEPTH;"
4.In functions.cc go to line 1829. This is the ReadInput() method. Add the lines below to the end of this method. 
   GetNextNumber(inputfile, number); // new line for station Depth
   STATION_DEPTH  = (double) atof(number.c_str()); //new line
5.Do $make clean all

#######Script Descriptions########
setup.sh -> This script sets up the necessary directories and setup files for all the runs
scheduler.sh -> This script submits and monitors all jobs. 


#######DOWNLOAD########
1.Download setup.sh and scheduler.sh
2.Move both files into your ShelfMC directory
3.Do $chmod u+x setup.sh and $chmod u+x scheduler.sh

######CONFIGURE#######
1.Open setup.sh
2.On line 4, modify the job name
3.On line 6, modify group name
4.On line 10, specify your ShelfMC directory
5.On line 13, modify your run name
6.On line 14, specify the NNU per run
7.On line 15, specify the starting seed
8.On line 17, specify the number of processors per node on your cluster
9.On lines 19-56, edit the input.txt parameters that you want to keep constant for every run
10.On line 57, specify the location of the LP_gain_manual.txt
11.On line 126, change walltime depending on total NNU. Remember this wall time will be 20x shorter than a single processor run.
12.On line 127, change job prefix
13.On line 129, change the group name if needed 
14.Save file
15.Open scheduler.sh
16.On line 4, specify your ShelfMC directory
17.On line 5, modify run name. Make sure it is the same runName as you have in setup.sh
18.On lines 35 and 39, replace cond0091 with your username for the cluster
19.On line 42, you can pick how many nodes you want to use at any given time. It is set to 6 intially. 
20.Save file 

#######RUN#######
1.Do $qsub setup.sh
2.Wait for setup.sh to finish. This script is creating the setup files for all runs. This may take about an hour.
3.When setup.sh is done, there should be a new directory in your home directory. Move this directory to your ShelfMC directory.
4.Do $screen to start a new screen that the scheduler can run on. This is in case you lose connection to the cluster mid-run.
5.Do $./scheduler.sh to start the script. This script automatically submits jobs and lets you see the status of the runs. It will run for several hours.
6.The scheduler makes a text file of all jobs called jobList.txt in the ShelfMC dir. Make sure to delete jobList.txt before starting a whole new run.


######RESULT#######
1.When Completed, there will be a great amount of data in the run files, about 460GB.  
2.The run directory is organized in a tree; results for particular runs can be found by cd'ing deeper into the tree.
3.In each run directory, there will be a resulting root file, all the setup files, and a log file for the run.
 
Attachment 2: setup.sh
#!/bin/bash
#PBS -l walltime=04:00:00
#PBS -l nodes=1:ppn=1,mem=4000mb
#PBS -N jude_SetupJob
#PBS -j oe
#PBS -A PCON0003
#Jude Rajasekera 3/20/17
#directories
WorkDir=$TMPDIR   
tmpShelfmc=$HOME/shelfmc/ShelfMC #set your ShelfMC directory here

#controlled variables for run
runName='ParamSpaceScanDir' #name of run
NNU=500000 #NNU per run
seed=42 #starting seed for every run, each processor will receive a different seed (42,43,44,45...)
NNU="$(($NNU / 20))" #calculating NNU per processor, change 20 to however many processors your cluster has per node
ppn=5 #number of processors per node on cluster
########################### input.txt file ####################################################
input1="#inputs for ARIANNA simulation, do not change order unless you change ReadInput()"
input2="$NNU #NNU, setting to 1 for unique neutrino"
input3="$seed      #seed Seed for Rand3"
input4="18.0    #EXPONENT, !should be exclusive with SPECTRUM"
input5="1000    #ATGap, m, distance between stations"
input6="4       #ST_TYPE, !restrict to 4 now!"
input7="4 #N_Ant_perST, not to be confused with ST_TYPE above"
input8="2 #N_Ant_Trigger, this is the minimum number of AT to trigger"
input9="30      #Z for ST_TYPE=2"
input10="$T   #ICETHICK, thickness of ice including firn, 575m at Moore's Bay"
input11="1       #FIRN, KD: ensure DEPTH_DEPENDENT is off if FIRN is 0"
input12="1.30    #NFIRN 1.30"
input13="$FT      #FIRNDEPTH in meters"
input14="1 #NROWS 12 initially, set to 3 for HEXAGONAL"
input15="1 #NCOLS 12 initially, set to 5 for HEXAGONAL"
input16="0       #SCATTER"
input17="1       #SCATTER_WIDTH,how many times wider after scattering"
input18="0       #SPECTRUM, use spectrum, ! was 1 initially!"
input19="0       #DIPOLE,  add a dipole to the station, useful for st_type=0 and 2"
input20="0       #CONST_ATTENLENGTH, use constant attenuation length if ==1"
input21="$L     #ATTEN_UP, this is the conjuction of the plot attenlength_up and attlength_down when setting REFLECT_RATE=0.5(3dB)"
input22="250     #ATTEN_DOWN, this is the average attenlength_down before Minna Bluff measurement(not used anymore except for CONST_ATTENLENGTH)"
input23="4 #NSIGMA, threshold of trigger"
input24="1      #ATTEN_FACTOR, change of the attenuation length"
input25="$Rval    #REFLECT_RATE,power reflection rate at the ice bottom"
input26="0       #GZK, 1 means using GZK flux, 0 means E-2 flux"
input27="0       #FANFLUX, use fenfang's flux which only covers from 10^17 eV to 10^20 eV"
input28="0       #WIDESPECTRUM, use 10^16 eV to 10^21.5 eV as the energy spectrum, otherwise use 17-20"
input29="1       #SHADOWING"
input30="1       #DEPTH_DEPENDENT_N;0 means uniform firn, 1 means n_firn is a function of depth"
input31="0 #HEXAGONAL"
input32="1       #SIGNAL_FLUCT 1=add noise fluctuation to signal or 0=do not"
input33="4.0     #GAINV  gain dependency"
input34="1       #TAUREGENERATION if 1=tau regeneration effect, if 0=original"
input35="3.0     #ST4_R radius in meters between center of station and antenna"
input36="350     #TNOISE noise temperature in Kelvin"
input37="80      #FREQ_LOW low frequency of LPDA Response MHz #was 100"
input38="1000    #FREQ_HIGH high frequency of LPDA Response MHz"
input39="/home/rajasekera.3/shelfmc/ShelfMC/temp/LP_gain_manual.txt     #GAINFILENAME"
input40="$SD     #STATION_DEPTH"
#######################################################################################################

cd $TMPDIR   
mkdir $runName
cd $runName

initSeed=$seed
counter=0
for L in {500..1000..100} #attenuation length 500-1000
do
    mkdir Atten_Up$L
    cd Atten_Up$L

    for R in {0..100..25} #Reflection Rate 0-1
    do
        mkdir ReflectionRate$R
        cd ReflectionRate$R
        if [ "$R" = "100" ]; then #fixing reflection rate value
            Rval="1.0"
        else
            Rval="0.$R"
        fi

        for T in {500..2900..400} #Thickness of Ice 500-2900
        do
            mkdir IceThick$T
            cd IceThick$T
            for FT in {60..140..20} #Firn Thickness 60-140
            do
                mkdir FirnThick$FT
                cd FirnThick$FT
                for SD in {0..200..50} #Station Depth
                do
                    mkdir StationDepth$SD
                    cd StationDepth$SD
                    #####Do file operations###########################################
                    counter=$((counter+1))
                    echo "Counter = $counter ; L = $L ; R = $Rval ; T = $T ; FT = $FT ; SD = $SD " #print variables

                    #define changing lines
                    input21="$L     #ATTEN_UP, this is the conjuction of the plot attenlength_up and attlength_down when setting REFLECT_RATE=0.5(3dB)"
                    input25="$Rval    #REFLECT_RATE,power reflection rate at the ice bottom"
                    input10="$T   #ICETHICK, thickness of ice including firn, 575m at Moore's Bay"
                    input13="$FT      #FIRNDEPTH in meters"
                    input40="$SD       #STATION_DEPTH"
		    
		    for (( i=1; i<=$ppn;i++)) #make one setup folder per processor
                    do

                        mkdir Setup$i #make setup folder
                        cd Setup$i #go into setup folder
                        seed="$(($initSeed + $i -1))" #calculate seed for this iteration
                        input3="$seed      #seed Seed for Rand3"

                        for j in {1..40} #print all input.txt lines
                        do
                            lineName=input$j
                            echo "${!lineName}" >> input.txt
                        done
			
                        cd ..
                    done
		    
		    pwd=`pwd`
                    #create job file
		    echo '#!/bin/bash' >> run_shelfmc_multithread.sh
		    echo '#PBS -l nodes=1:ppn='$ppn >> run_shelfmc_multithread.sh
		    echo '#PBS -l walltime=00:05:00' >> run_shelfmc_multithread.sh #change walltime as necessary
		    echo '#PBS -N jude_'$runName'_job' >> run_shelfmc_multithread.sh #change job name as necessary
		    echo '#PBS -j oe'  >> run_shelfmc_multithread.sh
		    echo '#PBS -A PCON0003' >> run_shelfmc_multithread.sh #change group if necessary
		    echo 'cd ' $tmpShelfmc >> run_shelfmc_multithread.sh
		    echo 'runName='$runName  >> run_shelfmc_multithread.sh
		    for (( i=1; i<=$ppn;i++))
		    do
			echo './shelfmc_stripped.exe $runName/'Atten_Up$L'/'ReflectionRate$R'/'IceThick$T'/'FirnThick$FT'/'StationDepth$SD'/Setup'$i' _'$i'$runName &' >> run_shelfmc_multithread.sh
		    done
		   # echo './shelfmc_stripped.exe $runName/'Atten_Up$L'/'ReflectionRate$R'/'IceThick$T'/'FirnThick$FT'/'StationDepth$SD'/Setup1 _01$runName &' >> run_shelfmc_multithread.sh
		    echo 'wait' >> run_shelfmc_multithread.sh
		    echo 'cd $runName/'Atten_Up$L'/'ReflectionRate$R'/'IceThick$T'/'FirnThick$FT'/'StationDepth$SD >> run_shelfmc_multithread.sh
		    echo 'for (( i=1; i<='$ppn';i++)) #20 iterations' >> run_shelfmc_multithread.sh
		    echo 'do' >> run_shelfmc_multithread.sh
		    echo '  cd Setup$i #cd into setup dir' >> run_shelfmc_multithread.sh
		    echo '  mv *.root ..' >> run_shelfmc_multithread.sh
		    echo '  cd ..' >> run_shelfmc_multithread.sh
		    echo 'done' >> run_shelfmc_multithread.sh
		    echo 'hadd Result_'$runName'.root *.root' >> run_shelfmc_multithread.sh
		    echo 'rm *ShelfMCTrees*' >> run_shelfmc_multithread.sh

		    chmod u+x run_shelfmc_multithread.sh # make executable

                    ##################################################################
                    cd ..
                done
                cd ..
            done
            cd ..
        done
        cd ..
    done
    cd ..
done
cd 

mv $WorkDir/$runName $HOME
Attachment 3: scheduler.sh
#!/bin/bash
#Jude Rajasekera 3/20/17

tmpShelfmc=$HOME/shelfmc/ShelfMC #location of Shelfmc
runName=ParamSpaceScanDir #name of run

cd $tmpShelfmc #move to the ShelfMC directory

if [ ! -f ./jobList.txt ]; then #see if there is an existing job file
    echo "Creating new job List"
    for L in {500..1000..100} #attenuation length 500-1000
    do
	for R in {0..100..25} #Reflection Rate 0-1
	do
            for T in {500..2900..400} #Thickness of Ice 500-2900
            do
		for FT in {60..140..20} #Firn Thickness 60-140
		do
                    for SD in {0..200..50} #Station Depth
                    do
		    echo "cd $runName/Atten_Up$L/ReflectionRate$R/IceThick$T/FirnThick$FT/StationDepth$SD" >> jobList.txt
                    done
		done
            done
	done
    done
else 
    echo "Picking up from last job"
fi


numbLeft=$(wc -l < ./jobList.txt)
while [ $numbLeft -gt 0 ];
do
    jobs=$(showq | grep "rajasekera.3") #change username here
    echo '__________Current Running Jobs__________'
    echo "$jobs"
    echo ''
    runningJobs=$(showq | grep "rajasekera.3" | wc -l) #change username here
    echo Number of Running Jobs = $runningJobs 
    echo Number of jobs left = $numbLeft
    if [ $runningJobs -le 6 ];then
	line=$(head -n 1 jobList.txt)
	$line
	echo Submit Job && pwd
	qsub run_shelfmc_multithread.sh
	cd $tmpShelfmc
	sed -i 1d jobList.txt
    else
	echo "Full Capacity"
    fi
    sleep 1
    numbLeft=$(wc -l < ./jobList.txt)
done
  24   Wed Jun 6 17:48:47 2018 Jorge TorresHow to build ROOT 6 on an OSC cluster 

Disclaimer: I wrote this for Owens, which I think will also work on Pitzer. I recommend following Steven's instructions, and using mine if they fail to build. J

1. Submit a batch job so the processing resources are not limited (change the project ID if needed.):

qsub -A PAS0654 -I -l nodes=1:ppn=4,walltime=2:00:00

2. Reset and load the following modules (copy and paste as is):

module reset
module load cmake/3.7.2
module load python/2.7.latest
module load fftw3/3.3.5

3. Do echo $FFTW3_HOME and make sure it prints "/usr/local/fftw3/intel/16.0/mvapich2/2.2/3.3.5". If it doesn't, do

export FFTW3_HOME=/usr/local/fftw3/intel/16.0/mvapich2/2.2/3.3.5

Either way, then do

export FFTW_DIR=$FFTW3_HOME

4. Do the following (change -DCMAKE_INSTALL_PREFIX to point at the directory where you want ROOT installed):

cmake -DCMAKE_C_COMPILER=`which gcc` \
-DCMAKE_CXX_COMPILER=`which g++` \
-DCMAKE_INSTALL_PREFIX=${HOME}/local/oakley/ROOT-6.12.06 \
-DBLAS_mkl_intel_LIBRARY=${MKLROOT}/lib/intel64 \
../root-6.12.06 2>&1 | tee cmake.log

This configures ROOT so it can be built and installed on the machine (takes about 5 minutes).

5. Once it is configured, do the following to build root (takes about 45 min)

make -j4 2>&1 | tee make.log

6. Once it's done, do 

make install

To run it, go into the install directory, then cd bin. Once you're in there you should see a .sh called 'thisroot.sh'. Type 'source thisroot.sh'. You should now be able to type 'root' and it will run. Note that you must source this EVERY time you log into OSC, so the smart thing to do is to put that source line in your .bashrc.
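
For example (a sketch, assuming the install prefix from step 4):

# add to ~/.bashrc so ROOT is set up on every login
source ${HOME}/local/oakley/ROOT-6.12.06/bin/thisroot.sh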

(Second procedure from S. Prohira)

1. download ROOT: https://root.cern.ch/downloading-root (whatever the latest pro release is)

2. put the source tarball somewhere in your directory on ruby and expand it into the "source" folder

3. on ruby, open your ~/.bashrc file and add the following lines:

export CC="/usr/local/gnu/7.3.0/bin/gcc"
export CXX="/usr/local/gnu/7.3.0/bin/g++"
module load cmake
module load python
module load gnu/7.3.0

4. then run: source ~/.bashrc

5. make a "build" directory somewhere else on ruby called 'root' or 'root_build' and cd into that directory.

6. Do: cmake /path/to/source/folder (i.e. the folder you expanded from the .tar file above; it should finish with no errors). Here you can also include any -D flags that you want (such as minuit2 for the ANITA tools).
   - For example, the ANITA tools need you to do: cmake -Dminuit2:bool=true /path/to/source/folder

7. Do: make -j4 (or the way that Jorge did it above, if you want to submit it as a batch job (and not be a jerk running a job on the login nodes like I did)).

8. add the following line to your .bashrc file (or .profile, whatever startup file you prefer):

source /path/to/root/build/directory/bin/thisroot.sh

9. enjoy root!

 

  28   Fri Oct 26 18:08:43 2018 Jorge TorresAnalyzing effective volumesAnalysis

Attaching some scripts that help with processing the effective volumes. This is an extension of what Brian Clark did in a previous post (http://radiorm.physics.ohio-state.edu/elog/How-To/27).

There are 4 files attached:

- veff_aeff2.C and veff_aeff2.mk: veff_aeff2.C produces Veff_des$1.txt ($1 can be A, B, or C). This file contains the following columns: energy, veff, veff_error, veff1 (PA), veff2 (LPDA), veff3 (bicone). However, the energies are not sorted.

- veff.sh: this bash executable runs veff_aeff2.C for all the root output files (that's what the "*" in the executable is for) for a given design (A, B, or C). You need to modify the location of your output files, though. Run it like "./veff.sh A", which will execute veff_aeff2.C and produce the veff text files. Do the same for B or C.

- make_plot.py: takes Veff_des$1.txt, sorts the energies, plots the effective volumes vs. energies, and produces a csv file containing the veffs (just for the sake of copying and pasting into the spreadsheets). Run it like "python make_plot.py".
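
Putting it together, a typical pass over two designs looks like this (with the output-file locations in veff.sh already edited to point at your AraSim outputs):

./veff.sh A           # writes Veff_desA.txt
./veff.sh B           # writes Veff_desB.txt
python make_plot.py   # sorts the energies, writes veffA.csv/veffB.csv, and saves the plots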

 

 

 

Attachment 1: veff.sh
nohup ./veff_aeff2 3000 3000 $1 /users/PAS0654/osu8354/outputs_signal_noise/des"$1"/AraOut.des"$1"_16.5.txt.run* &
nohup ./veff_aeff2 3000 3000 $1 /users/PAS0654/osu8354/outputs_signal_noise/des"$1"/AraOut.des"$1"_17.txt.run* &   
nohup ./veff_aeff2 3000 3000 $1 /users/PAS0654/osu8354/outputs_signal_noise/des"$1"/AraOut.des"$1"_17.5.txt.run* &
nohup ./veff_aeff2 5000 3000 $1 /users/PAS0654/osu8354/outputs_signal_noise/des"$1"/AraOut.des"$1"_18.txt.run* &
nohup ./veff_aeff2 5000 3000 $1 /users/PAS0654/osu8354/outputs_signal_noise/des"$1"/AraOut.des"$1"_18.5.txt.run* &
nohup ./veff_aeff2 7000 3000 $1 /users/PAS0654/osu8354/outputs_signal_noise/des"$1"/AraOut.des"$1"_19.txt.run* &
nohup ./veff_aeff2 7000 3000 $1 /users/PAS0654/osu8354/outputs_signal_noise/des"$1"/AraOut.des"$1"_19.5.txt.run* &
nohup ./veff_aeff2 7000 3000 $1 /users/PAS0654/osu8354/outputs_signal_noise/des"$1"/AraOut.des"$1"_20.txt.run* &

Attachment 2: veff_aeff2.C
//////////////////////////////////////
//To calculate various veff and aeff
//Run as ./veff_aeff2 $RADIUS $DEPTH $DESIGN $FILES...
//where $DESIGN (A, B, or C) names the output text file and $FILES is one or more AraSim root files
//For example ./veff_aeff2 8000 3000 A /data/user/ypan/bin/AraSim/trunk/outputs/AraOut.setup_single_station_2_energy_20.run0.root
//
/////////////////////////////////////


#include <iostream>
#include <fstream>
#include <sstream>
#include <math.h>
#include <string>
#include <stdio.h>
#include <stdlib.h>
#include <vector>
#include <time.h>
#include "TTreeIndex.h"
#include "TChain.h"
#include "TH1.h"
#include "TF1.h"
#include "TF2.h"
#include "TFile.h"
#include "TRandom.h"
#include "TRandom2.h"
#include "TRandom3.h" 
#include "TTree.h"
#include "TLegend.h"
#include "TLine.h"
#include "TROOT.h"
#include "TPostScript.h"
#include "TCanvas.h"
#include "TH2F.h"
#include "TText.h"
#include "TProfile.h"
#include "TGraphErrors.h"
#include "TStyle.h"
#include "TMath.h"
#include <unistd.h>
#include "TVector3.h"
#include "TRotation.h"
#include "TSpline.h"
//#include "TObject.h"
#include "Tools.h"
#include "Constants.h"
#include "Vector.h"
#include "Position.h"
#include "EarthModel.h"
#include "IceModel.h"
#include "Efficiencies.h"
#include "Spectra.h"
#include "Event.h"
#include "Trigger.h"
#include "Detector.h"
#include "Settings.h"
#include "counting.hh"
#include "Primaries.h"
#include "signal.hh"
#include "secondaries.hh"

#include "Ray.h"
#include "RaySolver.h"
#include "Report.h"

using namespace std;
class EarthModel; 

int main(int argc, char **argv){
  //string readfile;
  //readfile = string(argv[1]);
  //readfile = "/data/user/ypan/bin/AraSim/branches/new_geom/outputs/AraOut20.0.root";

  int ifile = 0;
  double totweightsq;
  double totweight;
  int totnthrown;
  int typetotnthrown[12];
  double tottrigeff;
  double sigma[12];
  double typetotweight[12];
  double typetotweightsq[12];
  double totsigmaweight;
  double totsigmaweightsq;
  double volradius;
  double voldepth;
  const int nstrings = 9;
  const int nantennas = 1;
  double veff1 = 0.0;
  double veff2 = 0.0;
  double veff3 = 0.0;
  double weight1 = 0.0;
  double weight2 = 0.0;
  double weight3 = 0.0;
  double veffT[6], vefferrT[6], aeffT[6], aefferrT[6];
  double veffF[3], vefferrF[3], aeffF[3], aefferrF[3];
  double veffNu[2], vefferrNu[2], aeffNu[2], aefferrNu[2];
  double veff, vefferr, aeff, aefferr, aeff2;
  double pnu;

  Detector *detector = 0; 
  //Settings *settings = 0;
  //IceModel *icemodel = 0;
  Event *event = 0;
  Report *report = 0;
  cout<<"construct detector"<<endl;

  //TFile *AraFile=new TFile(readfile.c_str());
  //TFile *AraFile=new TFile((outputdir+"/AraOut.root").c_str());
  TChain *AraTree = new TChain("AraTree");
  TChain *AraTree2 = new TChain("AraTree2");
  TChain *eventTree = new TChain("eventTree");
  //AraTree->SetBranchAddress("detector",&detector);
  //AraTree->SetBranchAddress("settings",&settings);
  //AraTree->SetBranchAddress("icemodel",&icemodel);
  cout << "trees set" << endl;
  for(ifile = 3; ifile < (argc - 1); ifile++){
    AraTree->Add(string(argv[ifile + 1]).c_str());
    AraTree2->Add(string(argv[ifile + 1]).c_str());
    eventTree->Add(string(argv[ifile + 1]).c_str());
  }
  AraTree2->SetBranchAddress("event",&event);
  AraTree2->SetBranchAddress("report",&report);
  cout<<"branch detector"<<endl;

  for(int i=0; i<12; i++) {
    typetotweight[i] = 0.0;
    typetotweightsq[i] = 0.0;
    typetotnthrown[i] = 0;
  }
  
  totweightsq = 0.0;
  totweight = 0.0;
  totsigmaweight = 0.0;
  totsigmaweightsq = 0.0;
  totnthrown = AraTree2->GetEntries();
  cout << "Total number of events: " << totnthrown << endl;
  //totnthrown = settings->NNU;
  //volradius = settings->POSNU_RADIUS;
  volradius = atof(argv[1]);
  voldepth = atof(argv[2]);
  AraTree2->GetEntry(0);
  pnu = event->pnu;
  cout << "Energy " << pnu << endl;
  for(int iEvt2=0; iEvt2<totnthrown; iEvt2++) {
    
    AraTree2->GetEntry(iEvt2);

    double sigm = event->Nu_Interaction[0].sigma;
    int iflavor = (event->nuflavorint)-1;
    int inu = event->nu_nubar;
    int icurr = event->Nu_Interaction[0].currentint;
    
    sigma[inu+2*icurr+4*iflavor] = sigm;
    typetotnthrown[inu+2*icurr+4*iflavor]++;

    if( (iEvt2 % 10000 ) == 0 ) cout << "*";
    if(report->stations[0].Global_Pass<=0) continue;

    double weight = event->Nu_Interaction[0].weight;
    if(weight > 1.0){
        cout << weight << "; " << iEvt2 << endl;
        continue;
    }
//    cout << weight << endl;
    totweightsq += pow(weight,2);
    totweight += weight;
    typetotweight[inu+2*icurr+4*iflavor] += weight;
    typetotweightsq[inu+2*icurr+4*iflavor] += pow(weight,2);
    totsigmaweight += weight*sigm;
    totsigmaweightsq += pow(weight*sigm,2);

    int trig1 = 0;
    int trig2 = 0;
    int trig3 = 0;
    for (int i = 0; i < nstrings; i++){
        if (i == 0 && report->stations[0].strings[i].antennas[0].Trig_Pass > 0) trig1++;
        if (i > 0 && i < 5 && report->stations[0].strings[i].antennas[0].Trig_Pass > 0) trig2++;
        if (i > 4 && i < 9 && report->stations[0].strings[i].antennas[0].Trig_Pass > 0) trig3++;
    }
    if ( trig1 > 0)//phase array
        weight1 += event->Nu_Interaction[0].weight;
    if ( trig2 > 1)//lpda
        weight2 += event->Nu_Interaction[0].weight;
    if (trig3 > 3)//bicone
        weight3 += event->Nu_Interaction[0].weight;
  }


  tottrigeff = totweight / double(totnthrown); 
  double nnucleon = 5.54e29;
  double vtot = PI * double(volradius) * double(volradius) * double(voldepth) / 1e9;
  veff = vtot * totweight / double(totnthrown) * 4.0 * PI;
  //vefferr = sqrt(SQ(sqrt(double(totnthrown))/double(totnthrown))+SQ(sqrt(totweightsq)/totweight));
  vefferr = sqrt(totweightsq) / totweight * veff;
  aeff = vtot * (1e3) * nnucleon * totsigmaweight / double(totnthrown);
  //aefferr = sqrt(SQ(sqrt(double(totnthrown))/double(totnthrown))+SQ(sqrt(totsigmaweightsq)/totsigmaweight));
  //aefferr = sqrt(SQ(sqrt(double(totnthrown))/double(totnthrown))+SQ(sqrt(totweightsq)/totweight));
  aefferr = sqrt(totweightsq) / totweight * aeff;
  double sigmaave = 0.0;

  for(int iflavor=0; iflavor<3; iflavor++) {
    double flavorweight = 0.0;
    double flavorweightsq = 0.0;
    double flavorsigmaave = 0.0;
    int flavortotthrown = 0;
    double temptotweightnu[2] = {0};
    double tempsignu[2] = {0};
    double temptotweight = 0.0;
    for(int inu=0; inu<2; inu++) {
      double tempsig = 0.0;
      double tempweight = 0.0;
      for(int icurr=0; icurr<2; icurr++) {
	tempsig += sigma[inu+2*icurr+4*iflavor];
	tempsignu[inu] += sigma[inu+2*icurr+4*iflavor];
	tempweight += typetotweight[inu+2*icurr+4*iflavor];
	flavorweight += typetotweight[inu+2*icurr+4*iflavor];
	flavorweightsq += typetotweightsq[inu+2*icurr+4*iflavor];
	temptotweight += typetotweight[inu+2*icurr+4*iflavor];
	temptotweightnu[inu] += typetotweight[inu+2*icurr+4*iflavor];
	flavortotthrown += typetotnthrown[inu+2*icurr+4*iflavor];
      }
      //printf("Temp Sigma: "); cout << tempsig << "\n";
      sigmaave += tempsig*(tempweight/totweight);
    }

    flavorsigmaave += tempsignu[0]*(temptotweightnu[0]/temptotweight)+tempsignu[1]*(temptotweightnu[1]/temptotweight);
    veffF[iflavor] = vtot*flavorweight/double(flavortotthrown);
    vefferrF[iflavor] = sqrt(flavorweightsq)/flavorweight;
    //printf("Volume: %.9f*%.9f/%.9f \n",vtot,flavorweight,double(totnthrown));
    aeffF[iflavor] = veffF[iflavor]*(1e3)*nnucleon*flavorsigmaave;
    aefferrF[iflavor] = sqrt(flavorweightsq)/flavorweight;

  }



  for(int inu=0; inu<2; inu++) {
    double tempsig = 0.0;
    double tempweight = 0.0;
    double tempweightsq = 0.0;
    int nutotthrown = 0;
    for(int iflavor=0; iflavor<3; iflavor++) {
      for(int icurr=0; icurr<2; icurr++) {
	tempweight += typetotweight[inu+2*icurr+4*iflavor];
	tempweightsq += typetotweightsq[inu+2*icurr+4*iflavor];
	nutotthrown += typetotnthrown[inu+2*icurr+4*iflavor];
      }
    }

    tempsig += sigma[inu+2*0+4*0];
    tempsig += sigma[inu+2*1+4*0];

    veffNu[inu] = vtot*tempweight/double(nutotthrown);
    vefferrNu[inu] = sqrt(tempweightsq)/tempweight;

    aeffNu[inu] = veffNu[inu]*(1e3)*nnucleon*tempsig;
    aefferrNu[inu] = sqrt(tempweightsq)/tempweight;
    
  }

  // NOTE: veffT/aeffT are never filled above, so the totals accumulated in this
  // loop are not meaningful; they are not written to the output below.
  double totalveff = 0.0;
  double totalaeff = 0.0;
  for(int inu=0; inu<2; inu++) {
    for(int iflavor=0; iflavor<3; iflavor++) {
      int typetotthrown = 0;
      for(int icurr=0; icurr<2; icurr++) {
	typetotthrown += typetotnthrown[inu+2*icurr+4*iflavor];
      }
      totalveff += veffT[iflavor+3*inu]*(double(typetotthrown)/double(totnthrown));
      totalaeff += aeffT[iflavor+3*inu]*(double(typetotthrown)/double(totnthrown));
    }
  }
  aeff2 = veff*(1e3)*nnucleon*sigmaave;
  aeff = aeff2;
  veff1 = weight1 / totnthrown * vtot * 4.0 * PI;
  veff2 = weight2 / totnthrown * vtot * 4.0 * PI;
  veff3 = weight3 / totnthrown * vtot * 4.0 * PI;

  printf("\nvolthrown: %.6f; totweight: %.6f; Veff: %.6f +- %.6f\n", vtot, totweight, veff, vefferr);
  printf("veff1: %.3f; veff2: %.3f; veff3: %.3f\n", veff1, veff2, veff3);
  //string des = string(argv[4]);
  char buf[100];
  std::ostringstream stringStream;
  stringStream << string(argv[3]);
  std::string copyOfStr = stringStream.str();
  snprintf(buf, sizeof(buf), "Veff_des%s.txt", copyOfStr.c_str());
  FILE *fout = fopen(buf, "a+");
  fprintf(fout, "%e, %.6f, %.6f, %.3f, %.3f, %.3f \n", pnu, veff, vefferr, veff1, veff2, veff3);
  fclose(fout);

  return 0;
}
Attachment 3: veff_aeff2.mk
#############################################################################
## Makefile -- New Version of my Makefile that works on both linux
##              and mac os x
## Ryan Nichol <rjn@hep.ucl.ac.uk>
##############################################################################
##############################################################################
##############################################################################
##
##This file was copied from M.readGeom and altered for my use 14 May 2014
##Khalida Hendricks.
##
##Modified by Brian Clark for use on CosTheta_NuTraject on 20 February 2015
##
##Changes:
##line 54 - OBJS = .... add filename.o      .... del oldfilename.o
##line 55 - CCFILE = .... add filename.cc     .... del oldfilename.cc
##line 58 - PROGRAMS = filename
##line 62 - filename : $(OBJS)
##
##
##############################################################################
##############################################################################
##############################################################################
include StandardDefinitions.mk
#Site Specific  Flags
ifeq ($(strip $(BOOST_ROOT)),)
BOOST_ROOT = /usr/local/include
endif
SYSINCLUDES	= -I/usr/include -I$(BOOST_ROOT)
SYSLIBS         = -L/usr/lib
DLLSUF = ${DllSuf}
OBJSUF = ${ObjSuf}
SRCSUF = ${SrcSuf}

CXX = g++

#Generic and Site Specific Flags
CXXFLAGS     += $(INC_ARA_UTIL) $(SYSINCLUDES) 
LDFLAGS      += -g $(LD_ARA_UTIL) -I$(BOOST_ROOT) $(ROOTLDFLAGS) -L. 

# copy from ray_solver_makefile (removed -lAra part)

# added for Fortran to C++


LIBS	= $(ROOTLIBS) -lMinuit $(SYSLIBS) 
GLIBS	= $(ROOTGLIBS) $(SYSLIBS)


LIB_DIR = ./lib
INC_DIR = ./include

#ROOT_LIBRARY = libAra.${DLLSUF}

OBJS = Vector.o EarthModel.o IceModel.o Trigger.o Ray.o Tools.o Efficiencies.o Event.o Detector.o Position.o Spectra.o RayTrace.o RayTrace_IceModels.o signal.o secondaries.o Settings.o Primaries.o counting.o RaySolver.o Report.o eventSimDict.o veff_aeff2.o
CCFILE = Vector.cc EarthModel.cc IceModel.cc Trigger.cc Ray.cc Tools.cc Efficiencies.cc Event.cc Detector.cc Spectra.cc Position.cc RayTrace.cc signal.cc secondaries.cc RayTrace_IceModels.cc Settings.cc Primaries.cc counting.cc RaySolver.cc Report.cc veff_aeff2.cc
CLASS_HEADERS = Trigger.h Detector.h Settings.h Spectra.h IceModel.h Primaries.h Report.h Event.h secondaries.hh #need to add headers which added to Tree Branch

PROGRAMS = veff_aeff2

all : $(PROGRAMS) 

veff_aeff2 : $(OBJS)
	$(LD) $(OBJS) $(LDFLAGS)  $(LIBS) -o $(PROGRAMS) 
	@echo "done."

#The library
$(ROOT_LIBRARY) : $(LIB_OBJS) 
	@echo "Linking $@ ..."
ifeq ($(PLATFORM),macosx)
# We need to make both the .dylib and the .so
	$(LD) $(SOFLAGS)$@ $(LDFLAGS) $(G77LDFLAGS) $^ $(OutPutOpt) $@
ifneq ($(subst $(MACOSX_MINOR),,1234),1234)
ifeq ($(MACOSX_MINOR),4)
ln -sf $@ $(subst .$(DllSuf),.so,$@)
else
$(LD) -dynamiclib -undefined $(UNDEFOPT) $(LDFLAGS) $(G77LDFLAGS) $^ \
$(OutPutOpt) $(subst .$(DllSuf),.so,$@)
endif
endif
else
	$(LD) $(SOFLAGS) $(LDFLAGS) $(G77LDFLAGS) $(LIBS) $(LIB_OBJS) -o $@
endif

##-bundle

#%.$(OBJSUF) : %.$(SRCSUF)
#	@echo "<**Compiling**> "$<
#	$(CXX) $(CXXFLAGS) -c $< -o  $@

%.$(OBJSUF) : %.C
	@echo "<**Compiling**> "$<
	$(CXX) $(CXXFLAGS) -c $< -o  $@

%.$(OBJSUF) : %.cc
	@echo "<**Compiling**> "$<
	$(CXX) $(CXXFLAGS) -c $< -o  $@

# added for fortran code compiling
%.$(OBJSUF) : %.f
	@echo "<**Compiling**> "$<
	$(G77) -c $<


eventSimDict.C: $(CLASS_HEADERS)
	@echo "Generating dictionary ..."
	@ rm -f *Dict* 
	rootcint $@ -c ${INC_ARA_UTIL} $(CLASS_HEADERS) ${ARA_ROOT_HEADERS} LinkDef.h

clean:
	@rm -f *Dict*
	@rm -f *.${OBJSUF}
	@rm -f $(LIBRARY)
	@rm -f $(ROOT_LIBRARY)
	@rm -f $(subst .$(DLLSUF),.so,$(ROOT_LIBRARY))	
	@rm -f $(TEST)
#############################################################################
Attachment 4: make_plot.py
# -*- coding: utf-8 -*-
import numpy as np
import sys
import matplotlib.pyplot as plt
from pylab import setp
from matplotlib.pyplot import rcParams
import csv
import pandas as pd

rcParams['mathtext.default'] = 'regular'

def read_file(finame):
    fi = open(finame, 'r')
    rdr = csv.reader(fi, delimiter=',', skipinitialspace=True)
    table = []
    for row in rdr:
    #    print(row)
        energy = float(row[0])
        veff = float(row[1])
        veff_err = float(row[2])
        veff1 = float(row[3])
        veff2 = float(row[4])
        veff3 = float(row[5])
        row = {'energy':energy, 'veff':veff, 'veff_err':veff_err, 'veff1':veff1, 'veff2':veff2, 'veff3':veff3}
        table.append(row)
    df=pd.DataFrame(table)
    df_ordered=df.sort_values('energy',ascending=True)
 #   print(df_ordered)
    return df_ordered

def beautify_veff(this_ax):
    sizer=20
    xlow = 1.e16 #the lower x limit
    xup = 2.e20 #the upper x limit
    ylow = 1e-3 #the lower y limit
    yup = 6.e1 #the upper y limit
    this_ax.set_xlabel('Energy [eV]',size=sizer) #set the x-axis label
    this_ax.set_ylabel('[V$\Omega]_{eff}$  [km$^3$sr]',size=sizer)
    this_ax.set_yscale('log')
    this_ax.set_xscale('log')
    this_ax.tick_params(labelsize=sizer)
    this_ax.set_xlim([xlow,xup]) #set the x limits of the plot
    this_ax.set_ylim([ylow,yup]) #set the y limits of the plot
    this_ax.grid()
    this_legend = this_ax.legend(loc='upper left')
    setp(this_legend.get_texts(), fontsize=17)
    setp(this_legend.get_title(), fontsize=17)

def main():
    
    """   

    arasim_energies = np.array([3.16e+16, 1e+17, 3.16e+17, 1e+18, 3.16e+18, 1e+19, 3.16e+19, 1e+20])
    arasim_energies2 = np.array([3.16e+16, 1e+17, 3.16e+17, 1e+18, 1e+19, 3.16e+19, 1e+20])
    #arasim_desA_veff = np.array([0.080894,0.290695,0.943223,2.388708,4.070498,6.824112,10.506490,13.969418])
    arasim_desA_veff_s = np.array([0.067384,0.289591,0.996509,2.464464,4.945600,8.735506,13.357300,18.751915])
   # arasim_desA_veff_ice = np.array([0.066291,0.303620,0.927647,2.427554,4.962093,8.465895,13.425852,18.706528])
    arasim_desA_error = np.array([0.008367,0.017401,0.032222,0.084223,0.119240,0.221266,0.272460,0.320456])
  #  arasim_desA_error_ice = np.array([0.008345,0.017748,0.031295,0.083747,0.119370,0.217717,0.273075,0.319872])
   # arasim_desB_veff = np.array([0.124111,0.417102,1.310555,3.648494,4.070E+0,11.675189,18.393961,13.688909])
    arasim_desB_veff_s = np.array([0.064937,0.355747,1.289947,3.821705,8.002805,15.352981,25.391282,18.009545])
   # arasim_desB_veff_ice = np.array([0.080070,0.340927,1.369081,3.938550,8.211407,15.190858,25.066541,18.021033])
    arasim_desB_error = np.array([0.008268,0.019317,0.036926,0.105445,0.152488,0.296617,0.377997,0.314314])
   # arasim_desB_error_ice = np.array([0.009220,0.019144,0.037966,0.107189,0.154430,0.293728,0.375827,0.314244])
    arasim_desA_veff1_sm = np.array([0.054,0.231,0.868,2.239,4.586,8.339,12.910,18.301])
    arasim_desA_veff2_sm = np.array([0.009,0.036,0.098,0.323,0.705,1.167,2.169,3.130])
    arasim_desA_veff3_sm = np.array([0.011,0.074,0.193,0.664,1.208,2.347,3.689,4.814])
    
    arasim_desA_veff1 = np.array([0.053,0.282,1.108,3.462,7.486,14.613,24.682,17.557])
    arasim_desA_veff2 = np.array([0.006,0.040,0.121,0.322,0.636,1.308,1.961,2.708])
    arasim_desA_veff3 = np.array([0.006,0.065,0.188,0.638,1.187,2.270,3.301,4.322])
    
    """
    veff_A=read_file("Veff_desA.txt")
    veff_B=read_file("Veff_desB.txt")
    print("desA is \n", veff_A)
    print("desB is \n", veff_B)

    dfA = veff_A[['veff','veff_err']]
    dfB = veff_B[['veff','veff_err']]
    dfA.to_csv('veffA.csv', sep='\t',index=False)
    dfB.to_csv('veffB.csv', sep='\t',index=False)

    
    fig = plt.figure(figsize=(11,8.5))
    ax1 = fig.add_subplot(1,1,1)
    ax1.plot(veff_A['energy'], veff_A['veff'],'bs-',label='Strawman',markersize=8,linewidth=2)
    ax1.plot(veff_B['energy'], veff_B['veff'],'gs-',label='Punch @100 m',markersize=8,linewidth=2)

    ax1.fill_between(veff_A['energy'], veff_A['veff']-veff_A['veff_err'], veff_A['veff']+veff_A['veff_err'], alpha=0.2, color='red')
    ax1.fill_between(veff_B['energy'], veff_B['veff']-veff_B['veff_err'], veff_B['veff']+veff_B['veff_err'], alpha=0.2, color='red')
    beautify_veff(ax1)
    ax1.set_title("Punch vs Strawman, noise + signal",fontsize=20)
    fig.savefig("desAB.png",edgecolor='none',bbox_inches="tight") #save the figure


    
    fig2 = plt.figure(figsize=(11,8.5))
    ax2 = fig2.add_subplot(1,1,1)
	
    #Triggers plot
    ax2.plot(veff_B['energy'], veff_B['veff1'],'g^-',label='Phased array (Punch)',markersize=8,linewidth=2)
    ax2.plot(veff_A['energy'], veff_A['veff1'],'gs-',label='Phased array (Strawman)',markersize=8,linewidth=2)
    ax2.plot(veff_B['energy'], veff_B['veff2'],'b^-',label='LPDAs (Punch)',markersize=8,linewidth=2)
    ax2.plot(veff_A['energy'], veff_A['veff2'],'bs-',label='LPDAs (Strawman)',markersize=8,linewidth=2)
    ax2.plot(veff_B['energy'], veff_B['veff3'],'y^-',label='Bicones (Punch)',markersize=8,linewidth=2)
    ax2.plot(veff_A['energy'], veff_A['veff3'],'ys-',label='Bicones (Strawman)',markersize=8,linewidth=2)
    ax2.set_title("Triggers contribution, noise + signal",fontsize=20)
    beautify_veff(ax2)
    fig2.savefig("desAB_triggers.png",edgecolor='none',bbox_inches="tight") #save the figure

        
main()
        
        
  36   Mon Mar 4 14:11:07 2019 Jorge TorresSubmitting arrays of jobs on OSCAnalysis

PBS has the option of job arrays in case you want to easily submit multiple similar jobs. The only difference between the jobs is the array index, which you can use in your PBS script to run each task with a different set of input arguments, or for any other operation that requires a unique index.

You need to add the following lines to your submitter file:

#PBS -t array_min-array_max%increment

where "array_min/array_max" are integers that set lower and upper limit, respectively, of your "job loop" and increment lets you set the number of jobs that you want to submit simultaneously. For example:

#PBS -t 1-100%5

submits an array with 100 jobs in it, but the system will only ever have 5 running at one time.

Here's an example of a script that submits to Pitzer a job array (from 2011-3000 in batches of 40) of a script named "make_fits_noise" that uses one core only. Make sure you use "#PBS -m n", otherwise you'll get tons of emails notifying you about your jobs.
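
(The original attachment isn't reproduced here; the following is a minimal sketch of such a submitter. The project ID, walltime, and script path are placeholders.)

#!/bin/bash
#PBS -N make_fits_noise
#PBS -A PAS0654                # placeholder project ID; use your own
#PBS -l nodes=1:ppn=1          # one core only
#PBS -l walltime=02:00:00      # placeholder walltime
#PBS -t 2011-3000%40           # indices 2011-3000, at most 40 running at once
#PBS -m n                      # no email notifications
#PBS -j oe

cd $PBS_O_WORKDIR
./make_fits_noise $PBS_ARRAYID   # each task gets its own array index as the argument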

To delete the whole array, use "qdel JOB_ID[ ]". To delete one single instance, use "qdel JOB_ID[ARRAY_ID]".

More info: https://arc-ts.umich.edu/software/torque/job-arrays/

 

  Draft   Fri Jan 31 10:43:52 2020 Jorge TorresMounting ARA software on OSC through CernVM File SystemSoftware

OSC added ARA's CVMFS repository on Pitzer and Owens. This had already been done on UW's cluster thanks to Ben Hokanson-Fasig and Brian Clark. With CVMFS, all the dependencies are compiled and stored in a single folder (container), meaning that the user can just source the paths to the relevant environment variables and not worry about installing them at all. This is very useful, since it usually takes a considerable amount of time to get those dependencies and properly install/debug the ARA software. To use the software, all you have to do is:

source /cvmfs/ara.opensciencegrid.org/trunk/centos7/setup.sh

To verify that the container was correctly loaded, type

root

and see if the ROOT prompt pops up. You can also go to /cvmfs/ara.opensciencegrid.org/trunk/centos7/source/AraSim and execute ./AraSim

Because it is a container, the permissions are read-only. This means that if you want to make any modifications to the existing code, you'll have to copy the piece of code that you want and change the environment variables of that package, in this case $ARA_UTIL_INSTALL_DIR, which is the destination where you want your executables, libraries, and such installed.
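
As a hypothetical sketch (the copy destination and install directory are placeholders), that workflow might look like:

source /cvmfs/ara.opensciencegrid.org/trunk/centos7/setup.sh                    # load the container environment
cp -r /cvmfs/ara.opensciencegrid.org/trunk/centos7/source/AraSim $HOME/AraSim   # make a writable copy
export ARA_UTIL_INSTALL_DIR=$HOME/ara_install   # install your builds here instead of the read-only container
mkdir -p $ARA_UTIL_INSTALL_DIR
cd $HOME/AraSim                                 # edit and rebuild from your copy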

Libraries and executables are stored here, in case you want to reference those dependencies in your environment variables: /cvmfs/ara.opensciencegrid.org/trunk/centos7/

Even if you're not in the ARA collaboration, you can benefit from this, since ROOT 6 is installed and compiled in the container. To use it, you just need to run the same bash command, and ROOT 6 will be available for you.

Feel free to email any questions to Brian Clark or myself.

--------

Technical notes: 

The ARA software was compiled with gcc version 4.8.5. On OSC, that compiler can be loaded by doing module load gnu/4.8.5. If you're using any other compiler, you'll get a warning telling you that if you compile anything against the ARA software, you may need to add the -D_GLIBCXX_USE_CXX11_ABI=0 flag to your Makefile.
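
For instance (a sketch; the exact variable depends on your Makefile), that flag would be added to your compile flags like:

CXXFLAGS += -D_GLIBCXX_USE_CXX11_ABI=0   # match the gcc 4.8.5 ABI of the CVMFS builds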
