How-To: a place to document instructions for how to do things
  ID   Date   Author   Subject   Project
  31   Thu Dec 13 17:33:54 2018   S. Prohira   Parallel jobs on Ruby   Software

On Ruby, users get charged for the full node even if you aren't using all 20 cores, so running a bunch of serial jobs is a pain. There is, however, a tool called the 'parallel command processor' (pcp) provided on Ruby (https://www.osc.edu/resources/available_software/software_list/parallel_command_processor) that makes it very simple.

Essentially, you make a text file filled with commands, one command per line, and then you give it to the parallel command processor, which runs each line of your text file as an individual task. The nice thing about this is that you don't have to think about it: you just give it the file and go, and it will use all cores on the full node in the most efficient way possible.
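
For instance, a command file might look like this (the program path and arguments here are made up for illustration):

    /path/to/my_program 1 input.txt
    /path/to/my_program 2 input.txt
    /path/to/my_program 3 input.txt
    ...

Each line runs as its own task on one of the node's cores.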

Below I provide two examples: a very simple one to show you how it works, and a more complicated one. In both files, I make the command file inside a loop. You don't need to do this; you can make the file in some other way if you choose to. Note that you can also do this from within an interactive job. More instructions are at the above link.

test.pbs is a minimal example for the case where you need to run the same command 1000 times, incrementing one value each time (i.e., 1000 different tasks).

effvol.pbs is more involved, and shows some important steps for jobs that produce a lot of output, using $TMPDIR (the node-local PBS work directory; if you don't know what that is, you probably don't need it). Each command in this file writes an output file to $TMPDIR. That directory is accessed faster than the directories where you normally store your files, so your jobs run faster. At the end of the script, the output files from all of the tasks are copied to my home directory, because $TMPDIR is deleted after each job. This file also shows how to source a particular bash profile for submitted jobs, in case you need that (some programs behave differently in submitted jobs than they do on the Ruby login nodes).

I recommend reading the above link for more information. The pcp is very useful on Ruby!

Attachment 1: test.pbs
#!/bin/bash

#PBS -A PCON0003
#PBS -l walltime=01:00:00
#PBS -l nodes=1:ppn=20


> commandfile   # start with an empty command file
for value in {1..1000}
do
    line="/path/to/your_command_to_run $value (arg1) (arg2)..(argn)"
    echo "${line}" >> commandfile
done


module load pcp
mpiexec parallel-command-processor commandfile




Attachment 2: effvol.pbs
#!/bin/bash

#PBS -A PCON0003
#PBS -N effvol
#PBS -l walltime=01:00:00
#PBS -l nodes=1:ppn=20
#PBS -o ./log/out
#PBS -e ./log/err

source /users/PCON0003/osu10643/.bash_batch

cd $TMPDIR
touch effvolconf
for value in {1..1000}
do
    line="/users/PCON0003/osu10643/app/geant/app/nrt -m /users/PCON0003/osu10643/app/geant/app/nrt/effvol.mac -f \"$TMPDIR/effvol$value.root\""
    echo "${line}" >> effvolconf
    
done


module load pcp
mpiexec parallel-command-processor effvolconf


cp $TMPDIR/*summary.root /users/PCON0003/osu10643/doc/root/summary/

  35   Tue Feb 26 19:07:40 2019   Lauren Ennesser   Valgrind command to suppress ROOT warnings

valgrind --suppressions=$ROOTSYS/etc/valgrind-root.supp ./myCode

If you use valgrind to identify potential memory leaks in your code, but use a lot of ROOT objects and functions, you'll notice that ROOT's TObjects trigger a lot of "potential memory leak" warnings. This option will suppress many of those. More info at https://root-forum.cern.ch/t/valgrind-and-root/2ss8506
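
A fuller invocation (the program name is hypothetical) that also asks valgrind for a detailed leak report would be:

    valgrind --leak-check=full --suppressions=$ROOTSYS/etc/valgrind-root.supp ./myCode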

  Draft   Thu Sep 21 14:30:18 2017   Julie Rolla   Using/Running XF

Below I've attached a video with some information regarding running XF. Before you start, here's some important information you need to know. 

 

In order to run XF:

XF can now only run on a Windows OS. If you are on a Mac, you can dual boot with a Windows 10 installation that is not activated; this still works, as the lack of activation imposes only minimal constraints and will not hinder your ability to run XF. Otherwise, you can use a queen bee machine. Note that there are two ways to run XF itself (once you are on a Windows machine): you can run via OSC on Oakley, which has a floating license so you will not need the USB key, or you can use the USB key. Unfortunately, at the moment we only have one USB key, so it may be best to get an account through OSC. 

 

2 methods: 

 

1.) If you are not using Oakley: you need the USB key to run

 

2.) Log in to Oakley: you do not need the USB key to run

 

------

For method 1:

To start XFdtd, just click on the icon.

It will pop up with “need to find a valid license”.

Click “OK”, insert the USB key, and hit “Retry”.

 

For method 2:

Log in to Oakley. To log into Oakley, follow the steps between the ** markers.

** ssh in

Put “module load XFdtd” in the command line; this gets you access to the libraries.

Then put “XFdtd” to load it. **
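
Put together, the login sequence looks like this (the host name is my guess at the standard OSC login form; substitute your own username):

    ssh username@oakley.osc.edu
    module load XFdtd
    XFdtd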

 

————

 

Note: you must have drivers installed to use the USB key. Once you plug it in the first time, it will tell you what to install.

Note: The genetic algorithm will change the geometry of the detector, and XF will check the gain values for those geometries. 

 

After XF is loaded:

Components are listed on the left. Each is different for different simulations. 

To add geometry to the antenna, click on “parts”. 

Click “create new”.

Choose a type of geometry. 

Note that you can also go to “file” and import; you can import CAD files this way. 

 

“Create simulation”: save the project and give it a name. Then click “create simulation”. This stores all of the geometry and settings of the simulation. Now you could, if you wanted to, browse all of the different types of simulations. 

 

How to actually run the simulations:

In this example, Carl set up a planar sensor with a point sensor inside the sphere, and two more sensors on either side of the sphere. Now, you load the simulation and hit the play button at the bottom. Note that it should take 20 or 30 minutes to actually simulate. When it is done, you can re-click the play button and it will show you the visual simulation. XF automatically writes this output while running the simulation. You now need either to parse that output or to view the data in XF itself. 

 

You can click “file” and export data. Additionally, you can export the image. Note that this can give you a “smith chart”; this is the gain measurement you’re looking for. If you had a far field/zone sensor, then you can get a far-field gain, which is this smith chart. To get the smith chart, hover over the sensor and right click. This gives you the option of a smith chart if you had the correct sensor. Note that all of this data and information is on the right-hand side under “results”; this pulls up all of the sensors, which you can right click to get the actual individual data.  

 

Note: the far zone sensor places sensors completely symmetrically around the object, i.e. if we have a sphere, we will have a larger sphere of sensors outside of our conducting sphere/antenna.

  29   Fri Nov 9 00:44:09 2018   Brian Clark   Transfer files from IceCube Data Warehouse to OSC

Brian had to move ~7 TB of data from the IceCube data warehouse to OSC.

To do this, he used the gridftp software. The advantage is that gridftp is optimized for large file transfers, and will manage the transfer better than something like scp or rsync.

Note that this utilizes the gridftp software installed on OSC, but doesn't formally use the globus endpoint described here: https://www.osc.edu/resources/getting_started/howto/howto_transfer_files_using_globus_connect. This is because IceCube doesn't have a formal globus endpoint to connect to. The formal globus endpoint would have been even easier if it were available, but oh well...

Setup goes as follows:

  1. Follow the IceCube instructions for getting an OSG certificate
    1. Go through CILogon (https://cilogon.org/) and generate and download a certificate
  2. Install your certificate on the IceCube machines
    1. Move your certificate to the following place on IceCube: ~/.globus/usercred.p12
    2. Change the permissions on this certificate: chmod 600 ~/.globus/usercred.p12
    3. Get the "subject" of this key: openssl pkcs12 -in ~/.globus/usercred.p12 -nokeys | grep subject
    4. Copy the subject line into your IceCube LDAP account
      1. Select "Edit your profile"
      2. Enter IceCube credentials
      3. Paste the subject into the "x509 Subject DN" box
  3. Install your certificates on the OSC machines
    1. Follow the same instructions as for IceCube to install the globus credentials, but you don't need to do the IceCube LDAP part

How to actually make a transfer:

  1. Initialize a proxy certificate on OSC: grid-proxy-init -bits 1024
  2. Use globus-url-copy to move a file, for example: globus-url-copy -r gsiftp://gridftp.icecube.wisc.edu/data/wipac/ARA/2016/unblinded/L1/ARA03/ 2016/ &
    1. I'm using the command "globus-url-copy"
    2. "-r" says to transfer recursively
    3. "gsiftp://gridftp.icecube.wisc.edu/data/wipac/ARA/2016/ublinded/L1/ARA03/" is the entire directory I'm trying to copy
    4. "2016/" is the directory I'm copying them to
    5. "&" says do this in the background once launched
  3. Note that it's kind of syntactically picky (a complete session is sketched below):
    1. To copy a directory, the source path name must end in "/"
    2. To copy a directory, the destination path name must also end in "/"
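
For reference, a complete session might look like the following (the OSC destination directory here is hypothetical; any path you can write to works):

    grid-proxy-init -bits 1024
    globus-url-copy -r gsiftp://gridftp.icecube.wisc.edu/data/wipac/ARA/2016/unblinded/L1/ARA03/ /fs/scratch/PAS0654/ara/2016/ &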

 

 

  36   Mon Mar 4 14:11:07 2019   Jorge Torres   Submitting arrays of jobs on OSC   Analysis

PBS has the option of job arrays, in case you want to easily submit multiple similar jobs. The only difference between the jobs is the array index, which you can use in your PBS script to run each task with a different set of input arguments, or for any other operation that requires a unique index.

You need to add the following lines to your submitter file:

#PBS -t array_min-array_max%increment

where "array_min/array_max" are integers that set lower and upper limit, respectively, of your "job loop" and increment lets you set the number of jobs that you want to submit simultaneously. For example:

#PBS -t 1-100%5

submits an array with 100 jobs in it, but the system will only ever have 5 running at one time.

Here's an example of a script that submits to Pitzer a job array (indices 2011-3000, in batches of 40) of a script named "make_fits_noise" that uses one core only. Make sure you use "#PBS -m n", otherwise you'll get tons of emails notifying you about your jobs.
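
A minimal sketch of such a submitter (the account, walltime, and the way make_fits_noise consumes the array index are my assumptions):

    #!/bin/bash
    #PBS -N make_fits_noise
    #PBS -l nodes=1:ppn=1
    #PBS -l walltime=01:00:00
    #PBS -A PCON0003
    #PBS -t 2011-3000%40
    #PBS -m n

    cd $PBS_O_WORKDIR
    # $PBS_ARRAYID holds this task's index (2011..3000)
    ./make_fits_noise $PBS_ARRAYID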

To delete the whole array, use "qdel JOB_ID[ ]". To delete one single instance, use "qdel JOB_ID[ARRAY_ID]".
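
For example, for a (hypothetical) job id of 1234567:

    qdel 1234567[]        # delete the entire array
    qdel 1234567[2042]    # delete only array task 2042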

More info: https://arc-ts.umich.edu/software/torque/job-arrays/

 

  Draft   Sun Sep 17 20:05:29 2017   Spoorthi Nagasamudram   Some basic
  7   Tue Apr 25 10:35:43 2017   Jude Rajasekera   ShelfMC Parameter Space Scan   Software

These scripts allow you to do thousands of ShelfMC runs while varying certain parameters of your choice. As is, the attenuation length, reflection rate, ice thickness, firn depth, and station depth are varied over certain ranges; in total, the whole parameter space scan does 5250 runs on a cluster like Ruby or KingBee. The scripts and instructions are attached below. 

Attachment 1: ParameterSpaceScan_instructions.txt
This document will explain how to download, configure, and run a parameter space search for ShelfMC on a computing cluster. 
These scripts explore the ShelfMC parameter space by varying ATTEN_UP, REFLECT_RATE, ICETHICK, FIRNDEPTH, and STATION_DEPTH over certain ranges. 
The ranges and increments can be found in setup.sh. 

In order to vary STATION_DEPTH, some changes were made to the ShelfMC code. Follow these steps to allow STATION_DEPTH to be an input parameter.
1.cd to ShelfMC directory
2.Do $sed -i -e 's/ATDepth/STATION_DEPTH/g' *.cc
3.Open declaration.hh. Replace line 87 "const double ATDepth = 0.;" with "double STATION_DEPTH;"
4.In functions.cc go to line 1829. This is the ReadInput() method. Add the lines below to the end of this method. 
   GetNextNumber(inputfile, number); // new line for station Depth
   STATION_DEPTH  = (double) atof(number.c_str()); //new line
5.Do $make clean all

#######Script Descriptions########
setup.sh -> This script sets up the necessary directories and setup files for all the runs
scheduler.sh -> This script submits and monitors all jobs. 


#######DOWNLOAD########
1.Download setup.sh and scheduler.sh
2.Move both files into your ShelfMC directory
3.Do $chmod u+x setup.sh and $chmod u+x scheduler.sh

######CONFIGURE#######
1.Open setup.sh
2.On line 4, modify the job name
3.On line 6, modify group name
4.On line 10, specify your ShelfMC directory
5.On line 13, modify your run name
6.On line 14, specify the NNU per run
7.On line 15, specify the starting seed
8.On line 17, specify the number of processors per node on your cluster
9.On lines 19-56, edit the input.txt parameters that you want to keep constant for every run
10.On line 57, specify the location of the LP_gain_manual.txt
11.On line 126, change walltime depending on total NNU. Remember this wall time will be 20x shorter than a single processor run.
12.On line 127, change job prefix
13.On line 129, change the group name if needed 
14.Save file
15.Open scheduler.sh
16.On line 4, specify your ShelfMC directory
17.On line 5, modify run name. Make sure it is the same runName as you have in setup.sh
18.On lines 35 and 39, replace cond0091 with your username for the cluster
19.On line 42, you can pick how many nodes you want to use at any given time. It is set to 6 initially. 
20.Save file 

#######RUN#######
1.Do $qsub setup.sh
2.Wait for setup.sh to finish. This script is creating the setup files for all runs. This may take about an hour.
3.When setup.sh is done, there should be a new directory in your home directory. Move this directory to your ShelfMC directory.
4.Do $screen to start a new screen that the scheduler can run on. This is in case you lose connection to the cluster mid-run. 
5.Do $./scheduler.sh to start the script. This script automatically submits jobs and lets you see the status of the runs. This will run for several hours.
6.The scheduler makes a text file of all jobs called jobList.txt in the ShelfMC dir. Make sure to delete jobList.txt before starting a whole new run.


######RESULT#######
1.When completed, there will be a great amount of data in the run files, about 460GB.  
2.The run directory is organized as a tree; results for particular runs can be found by cd'ing deeper into the tree.
3.In each run directory, there will be a resulting root file, all the setup files, and a log file for the run.
 
Attachment 2: setup.sh
#!/bin/bash
#PBS -l walltime=04:00:00
#PBS -l nodes=1:ppn=1,mem=4000mb
#PBS -N jude_SetupJob
#PBS -j oe
#PBS -A PCON0003
#Jude Rajasekera 3/20/17
#directories
WorkDir=$TMPDIR   
tmpShelfmc=$HOME/shelfmc/ShelfMC #set your ShelfMC directory here

#controlled variables for run
runName='ParamSpaceScanDir' #name of run
NNU=500000 #NNU per run
seed=42 #starting seed for every run, each processor will receive a different seed (42,43,44,45...)
NNU="$(($NNU / 20))" #calculating NNU per processor, change 20 to however many processors your cluster has per node
ppn=5 #number of processors per node on cluster
########################### input.txt file ####################################################
input1="#inputs for ARIANNA simulation, do not change order unless you change ReadInput()"
input2="$NNU #NNU, setting to 1 for unique neutrino"
input3="$seed      #seed Seed for Rand3"
input4="18.0    #EXPONENT, !should be exclusive with SPECTRUM"
input5="1000    #ATGap, m, distance between stations"
input6="4       #ST_TYPE, !restrict to 4 now!"
input7="4 #N_Ant_perST, not to be confused with ST_TYPE above"
input8="2 #N_Ant_Trigger, this is the minimum number of AT to trigger"
input9="30      #Z for ST_TYPE=2"
input10="$T   #ICETHICK, thickness of ice including firn, 575m at Moore's Bay"
input11="1       #FIRN, KD: ensure DEPTH_DEPENDENT is off if FIRN is 0"
input12="1.30    #NFIRN 1.30"
input13="$FT      #FIRNDEPTH in meters"
input14="1 #NROWS 12 initially, set to 3 for HEXAGONAL"
input15="1 #NCOLS 12 initially, set to 5 for HEXAGONAL"
input16="0       #SCATTER"
input17="1       #SCATTER_WIDTH,how many times wider after scattering"
input18="0       #SPECTRUM, use spectrum, ! was 1 initially!"
input19="0       #DIPOLE,  add a dipole to the station, useful for st_type=0 and 2"
input20="0       #CONST_ATTENLENGTH, use constant attenuation length if ==1"
input21="$L     #ATTEN_UP, this is the conjuction of the plot attenlength_up and attlength_down when setting REFLECT_RATE=0.5(3dB)"
input22="250     #ATTEN_DOWN, this is the average attenlength_down before Minna Bluff measurement(not used anymore except for CONST_ATTENLENGTH)"
input23="4 #NSIGMA, threshold of trigger"
input24="1      #ATTEN_FACTOR, change of the attenuation length"
input25="$Rval    #REFLECT_RATE,power reflection rate at the ice bottom"
input26="0       #GZK, 1 means using GZK flux, 0 means E-2 flux"
input27="0       #FANFLUX, use fenfang's flux which only covers from 10^17 eV to 10^20 eV"
input28="0       #WIDESPECTRUM, use 10^16 eV to 10^21.5 eV as the energy spectrum, otherwise use 17-20"
input29="1       #SHADOWING"
input30="1       #DEPTH_DEPENDENT_N;0 means uniform firn, 1 means n_firn is a function of depth"
input31="0 #HEXAGONAL"
input32="1       #SIGNAL_FLUCT 1=add noise fluctuation to signal or 0=do not"
input33="4.0     #GAINV  gain dependency"
input34="1       #TAUREGENERATION if 1=tau regeneration effect, if 0=original"
input35="3.0     #ST4_R radius in meters between center of station and antenna"
input36="350     #TNOISE noise temperature in Kelvin"
input37="80      #FREQ_LOW low frequency of LPDA Response MHz #was 100"
input38="1000    #FREQ_HIGH high frequency of LPDA Response MHz"
input39="/home/rajasekera.3/shelfmc/ShelfMC/temp/LP_gain_manual.txt     #GAINFILENAME"
input40="$SD     #STATION_DEPTH"
#######################################################################################################

cd $TMPDIR   
mkdir $runName
cd $runName

initSeed=$seed
counter=0
for L in {500..1000..100} #attenuation length 500-1000
do
    mkdir Atten_Up$L
    cd Atten_Up$L

    for R in {0..100..25} #Reflection Rate 0-1
    do
        mkdir ReflectionRate$R
        cd ReflectionRate$R
        if [ "$R" = "100" ]; then #fixing reflection rate value
            Rval="1.0"
        else
            Rval="0.$R"
        fi

        for T in {500..2900..400} #Thickness of Ice 500-2900
        do
            mkdir IceThick$T
            cd IceThick$T
            for FT in {60..140..20} #Firn Thickness 60-140
            do
                mkdir FirnThick$FT
                cd FirnThick$FT
                for SD in {0..200..50} #Station Depth
                do
                    mkdir StationDepth$SD
                    cd StationDepth$SD
                    #####Do file operations###########################################
                    counter=$((counter+1))
                    echo "Counter = $counter ; L = $L ; R = $Rval ; T = $T ; FT = $FT ; SD = $SD " #print variables

                    #define changing lines
                    input21="$L     #ATTEN_UP, this is the conjuction of the plot attenlength_up and attlength_down when setting REFLECT_RATE=0.5(3dB)"
                    input25="$Rval    #REFLECT_RATE,power reflection rate at the ice bottom"
                    input10="$T   #ICETHICK, thickness of ice including firn, 575m at Moore's Bay"
                    input13="$FT      #FIRNDEPTH in meters"
                    input40="$SD       #STATION_DEPTH"
		    
		    for (( i=1; i<=$ppn;i++)) #make 20 setup files for 20 processors
                    do

                        mkdir Setup$i #make setup folder
                        cd Setup$i #go into setup folder
                        seed="$(($initSeed + $i -1))" #calculate seed for this iteration
                        input3="$seed      #seed Seed for Rand3"

                        for j in {1..40} #print all input.txt lines
                        do
                            lineName=input$j
                            echo "${!lineName}" >> input.txt
                        done
			
                        cd ..
                    done
		    
		    pwd=`pwd`
                    #create job file
		    echo '#!/bin/bash' >> run_shelfmc_multithread.sh
		    echo '#PBS -l nodes=1:ppn='$ppn >> run_shelfmc_multithread.sh
		    echo '#PBS -l walltime=00:05:00' >> run_shelfmc_multithread.sh #change walltime as necessary
		    echo '#PBS -N jude_'$runName'_job' >> run_shelfmc_multithread.sh #change job name as necessary
		    echo '#PBS -j oe'  >> run_shelfmc_multithread.sh
		    echo '#PBS -A PCON0003' >> run_shelfmc_multithread.sh #change group if necessary
		    echo 'cd ' $tmpShelfmc >> run_shelfmc_multithread.sh
		    echo 'runName='$runName  >> run_shelfmc_multithread.sh
		    for (( i=1; i<=$ppn;i++))
		    do
			echo './shelfmc_stripped.exe $runName/'Atten_Up$L'/'ReflectionRate$R'/'IceThick$T'/'FirnThick$FT'/'StationDepth$SD'/Setup'$i' _'$i'$runName &' >> run_shelfmc_multithread.sh
		    done
		   # echo './shelfmc_stripped.exe $runName/'Atten_Up$L'/'ReflectionRate$R'/'IceThick$T'/'FirnThick$FT'/'StationDepth$SD'/Setup1 _01$runName &' >> run_shelfmc_multithread.sh
		    echo 'wait' >> run_shelfmc_multithread.sh
		    echo 'cd $runName/'Atten_Up$L'/'ReflectionRate$R'/'IceThick$T'/'FirnThick$FT'/'StationDepth$SD >> run_shelfmc_multithread.sh
		    echo 'for (( i=1; i<='$ppn';i++)) #20 iterations' >> run_shelfmc_multithread.sh
		    echo 'do' >> run_shelfmc_multithread.sh
		    echo '  cd Setup$i #cd into setup dir' >> run_shelfmc_multithread.sh
		    echo '  mv *.root ..' >> run_shelfmc_multithread.sh
		    echo '  cd ..' >> run_shelfmc_multithread.sh
		    echo 'done' >> run_shelfmc_multithread.sh
		    echo 'hadd Result_'$runName'.root *.root' >> run_shelfmc_multithread.sh
		    echo 'rm *ShelfMCTrees*' >> run_shelfmc_multithread.sh

		    chmod u+x run_shelfmc_multithread.sh # make executable

                    ##################################################################
                    cd ..
                done
                cd ..
            done
            cd ..
        done
        cd ..
    done
    cd ..
done
cd 

mv $WorkDir/$runName $HOME
Attachment 3: scheduler.sh
#!/bin/bash
#Jude Rajasekera 3/20/17

tmpShelfmc=$HOME/shelfmc/ShelfMC #location of Shelfmc
runName=ParamSpaceScanDir #name of run

cd $tmpShelfmc #move to home directory

if [ ! -f ./jobList.txt ]; then #see if there is an existing job file
    echo "Creating new job List"
    for L in {500..1000..100} #attenuation length 500-1000
    do
	for R in {0..100..25} #Reflection Rate 0-1
	do
            for T in {500..2900..400} #Thickness of Ice 500-2900
            do
		for FT in {60..140..20} #Firn Thickness 60-140
		do
                    for SD in {0..200..50} #Station Depth
                    do
		    echo "cd $runName/Atten_Up$L/ReflectionRate$R/IceThick$T/FirnThick$FT/StationDepth$SD" >> jobList.txt
                    done
		done
            done
	done
    done
else 
    echo "Picking up from last job"
fi


numbLeft=$(wc -l < ./jobList.txt)
while [ $numbLeft -gt 0 ];
do
    jobs=$(showq | grep "rajasekera.3") #change username here
    echo '__________Current Running Jobs__________'
    echo "$jobs"
    echo ''
    runningJobs=$(showq | grep "rajasekera.3" | wc -l) #change username here
    echo Number of Running Jobs = $runningJobs 
    echo Number of jobs left = $numbLeft
    if [ $runningJobs -le 6 ];then
	line=$(head -n 1 jobList.txt)
	$line
	echo Submit Job && pwd
	qsub run_shelfmc_multithread.sh
	cd $tmpShelfmc
	sed -i 1d jobList.txt
    else
	echo "Full Capacity"
    fi
    sleep 1
    numbLeft=$(wc -l < ./jobList.txt)
done
  6   Tue Apr 25 10:22:50 2017   Jude Rajasekera   ShelfMC Cluster Runs   Software

Doing large runs of ShelfMC can be time intensive. However, if you have access to a computing cluster like Ruby or KingBee, where you are given a node with multiple processors, ShelfMC runs can be optimized by utilizing all available processors on a node. The multithread_shelfmc.sh script automates these runs for you. The script and instructions are attached below.

Attachment 1: multithread_shelfmc.sh
#!/bin/bash
#Jude Rajasekera 3/20/2017
shelfmcDir=/users/PCON0003/cond0091/ShelfMC #put your shelfmc directory address here 
 

runName='TestRun' #name of run
NNU=500000 #total NNU per run
seed=42 #initial seed for every run, each processor will receive a different seed (42,43,44,45...)
NNU="$(($NNU / 20))" #calculating NNU per processor, change 20 to however many processors your cluster has per node
ppn=20 #processors per node
########################### make changes for input.txt file here #####################################################
input1="#inputs for ARIANNA simulation, do not change order unless you change ReadInput()"
input2="$NNU #NNU, setting to 1 for unique neutrino"
input3="$seed   #seed Seed for Rand3"
input4="18.0    #EXPONENT, !should be exclusive with SPECTRUM"
input5="1000    #ATGap, m, distance between stations"
input6="4       #ST_TYPE, !restrict to 4 now!"
input7="4       #N_Ant_perST, not to be confused with ST_TYPE above"
input8="2       #N_Ant_Trigger, this is the minimum number of AT to trigger"
input9="30      #Z for ST_TYPE=2"
input10="575     #ICETHICK, thickness of ice including firn, 575m at Moore's Bay"
input11="1      #FIRN, KD: ensure DEPTH_DEPENDENT is off if FIRN is 0"
input12="1.30   #NFIRN 1.30"
input13="$122    #FIRNDEPTH in meters"
input14="1      #NROWS 12 initially, set to 3 for HEXAGONAL"
input15="1      #NCOLS 12 initially, set to 5 for HEXAGONAL"
input16="0      #SCATTER"
input17="1      #SCATTER_WIDTH,how many times wider after scattering"
input18="0      #SPECTRUM, use spectrum, ! was 1 initially!"
input19="0      #DIPOLE,  add a dipole to the station, useful for st_type=0 and 2"
input20="0      #CONST_ATTENLENGTH, use constant attenuation length if ==1"
input21="1000   #ATTEN_UP, this is the conjuction of the plot attenlength_up and attlength_down when setting REFLECT_RATE=0.5(3dB)"
input22="250    #ATTEN_DOWN, this is the average attenlength_down before Minna Bluff measurement(not used anymore except for CONST_ATTENLENGTH)"
input23="4      #NSIGMA, threshold of trigger"
input24="1      #ATTEN_FACTOR, change of the attenuation length"
input25="1      #REFLECT_RATE,power reflection rate at the ice bottom"
input26="0      #GZK, 1 means using GZK flux, 0 means E-2 flux"
input27="0      #FANFLUX, use fenfang's flux which only covers from 10^17 eV to 10^20 eV"
input28="0      #WIDESPECTRUM, use 10^16 eV to 10^21.5 eV as the energy spectrum, otherwise use 17-20"
input29="1      #SHADOWING"
input30="1      #DEPTH_DEPENDENT_N;0 means uniform firn, 1 means n_firn is a function of depth"
input31="0      #HEXAGONAL"
input32="1      #SIGNAL_FLUCT 1=add noise fluctuation to signal or 0=do not"
input33="4.0    #GAINV  gain dependency"
input34="1      #TAUREGENERATION if 1=tau regeneration effect, if 0=original"
input35="3.0    #ST4_R radius in meters between center of station and antenna"
input36="350    #TNOISE noise temperature in Kelvin"
input37="80     #FREQ_LOW low frequency of LPDA Response MHz #was 100"
input38="1000   #FREQ_HIGH high frequency of LPDA Response MHz"
input39="/users/PCON0003/cond0091/ShelfMC/temp/LP_gain_manual.txt     #GAINFILENAME"
###########################################################################################

cd $shelfmcDir #cd to dir containing shelfmc
mkdir $runName #make a folder for run
cd $runName    #cd into run folder
initSeed=$seed 

for (( i=1; i<=$ppn;i++)) #make 20 setup files for 20 processors
do
    mkdir Setup$i #make setup folder i
    cd Setup$i #go into setup folder i
    
    seed="$(($initSeed+$i-1))" #calculate seed for this iteration
    input3="$seed      #seed Seed for Rand3" #save new input line    
    for j in {1..40} #print all input.txt lines 
    do
	lineName=input$j
        echo "${!lineName}" >> input.txt #print line to input.txt file
    done
    cd ..
done



pwd=`pwd`

#create job file
echo '#!/bin/bash' >> run_shelfmc_multithread.sh
echo '#PBS -l nodes=1:ppn='$ppn >> run_shelfmc_multithread.sh #change depending on processors per node
echo '#PBS -l walltime=00:05:00' >> run_shelfmc_multithread.sh #change walltime depending on run size, will be 20x shorter than single processor run time
echo '#PBS -N shelfmc_'$runName'_job' >> run_shelfmc_multithread.sh
echo '#PBS -j oe'  >> run_shelfmc_multithread.sh
echo '#PBS -A PCON0003' >> run_shelfmc_multithread.sh #change to specify group
echo 'cd ' $shelfmcDir >> run_shelfmc_multithread.sh 
echo 'runName='$runName  >> run_shelfmc_multithread.sh
for (( k=1; k<=$ppn;k++))
do
    echo './shelfmc_stripped.exe $runName/Setup'$k' _'$k'$runName &' >> run_shelfmc_multithread.sh #execute commands for 20 setup files
done
echo 'wait' >> run_shelfmc_multithread.sh #wait until all runs are finished
echo 'cd $runName' >> run_shelfmc_multithread.sh #go into run folder 
echo 'for (( i=1; i<='$ppn';i++)) #20 iterations' >> run_shelfmc_multithread.sh 
echo 'do' >> run_shelfmc_multithread.sh
echo '  cd Setup$i #cd into setup dir' >> run_shelfmc_multithread.sh
echo '  mv *.root ..' >> run_shelfmc_multithread.sh #move root files to runDir
echo '  cd ..' >> run_shelfmc_multithread.sh
echo 'done' >> run_shelfmc_multithread.sh
echo 'hadd Result_'$runName'.root *.root' >> run_shelfmc_multithread.sh #add all root files
echo 'rm *ShelfMCTrees*' >> run_shelfmc_multithread.sh #delete all partial root files

chmod u+x run_shelfmc_multithread.sh

echo "Run files created"
echo "cd into run folder and do $ qsub run_shelfmc_multithread.sh"
Attachment 2: multithread_shelfmc_walkthrough.txt
This document will explain how to download, configure, and run multithread_shelfmc.sh in order to do large runs on computing clusters.

####DOWNLOAD####
1.Download multithread_shelfmc.sh
2.Move multithread_shelfmc.sh into ShelfMC directory
3.Do $chmod u+x multithread_shelfmc.sh

####CONFIGURE###
1.Open multithread_shelfmc.sh
2.On line 3, modify shelfmcDir to your ShelfMC dir
3.On line 6, add your run name
4.On line 7, add the total NNU
5.On line 8, add an initial seed
6.On line 10, specify number of processors per node for your cluster
7.On lines 12-49, edit the input.txt parameters
8.On line 50, add the location of your LP_gain_manual.txt
9.On line 80, specify a wall time for each run, remember this will be about 20x shorter than ShelfMC on a single processor
10.On line 83, Specify the group name for your cluster if needed
11.Save file

####RUN####
1.Do $./multithread_shelfmc.sh 
2.There should now be a new directory in the ShelfMC dir with 20 setup files and a run_shelfmc_multithread.sh script
3.Do $qsub run_shelfmc_multithread.sh

###RESULT####
1.After the run has completed, there will be a result .root file in the run directory
 
  10   Thu May 11 14:38:10 2017   Sam Stafford   Sample OSC batch job setup   Software

Batch jobs on OSC are initiated through the Portable Batch System (PBS).  This is the recommended way to run stuff on OSC clusters.
Attached is a sample PBS script that copies files to temporary storage on the OSC cluster (also recommended) and runs an analysis program.
Info on batch processing is at https://www.osc.edu/supercomputing/batch-processing-at-osc. This will tell you how to submit and manage batch jobs.
More resources are available at www.osc.edu.

PBS web site: www.pbsworks.com

The PBS user manual is at www.pbsworks.com/documentation/support/PBSProUserGuide10.4.pdf.

Attachment 1: osc_batch_jobs.txt
## annotated sample PBS batch job specification for OSC
## Sam Stafford 05/11/2017

#PBS -N j_ai06_${RUN_NUMBER}

##PBS -m abe       ##  request an email on job completion
#PBS -l mem=16GB   ##  request 16GB memory
##PBS -l walltime=06:00:00  ## set this in qsub
#PBS -j oe         ## merge stdout and stderr into a single output log file
#PBS -A PAS0174

echo "run number " $RUN_NUMBER
echo "cal pulser " $CAL_PULSER
echo "baseline file " $BASELINE_FILE
echo "temp dir is " $TMPDIR
echo "ANITA_DATA_REMOTE_DIR="$ANITA_DATA_REMOTE_DIR
set -x


## copy the files from kingbee to the temporary workspace  
##    (if you set up public key authentication between kingbee and OSC, you won't need a password; just google "public key authentication")
mkdir $TMPDIR/run${RUN_NUMBER}   ## make a directory for this run number
scp stafford.16@kingbee.mps.ohio-state.edu:/data/anita/anita3/flightData/copiedBySam/run${RUN_NUMBER}/calEventFile${RUN_NUMBER}.root $TMPDIR/run${RUN_NUMBER}/calEventFile${RUN_NUMBER}.root
scp stafford.16@kingbee.mps.ohio-state.edu:/data/anita/anita3/flightData/copiedBySam/newerData/run${RUN_NUMBER}/gpsEvent${RUN_NUMBER}.root $TMPDIR/run${RUN_NUMBER}/gpsEvent${RUN_NUMBER}.root
scp stafford.16@kingbee.mps.ohio-state.edu:/data/anita/anita3/flightData/copiedBySam/newerData/run${RUN_NUMBER}/timedHeadFile${RUN_NUMBER}.root $TMPDIR/run${RUN_NUMBER}/timedHeadFile${RUN_NUMBER}.root
scp stafford.16@kingbee.mps.ohio-state.edu:/data/anita/anita3/flightData/copiedBySam/newerData/run${RUN_NUMBER}/decBlindHeadFileV1_${RUN_NUMBER}.root $TMPDIR/run${RUN_NUMBER}/decBlindHeadFileV1_${RUN_NUMBER}.root

## set up the environment variables to point to the temporary work space
export ANITA_DATA_REMOTE_DIR=$TMPDIR
export ANITA_DATA_LOCAL_DIR=$TMPDIR
echo "ANITA_DATA_REMOTE_DIR="$ANITA_DATA_REMOTE_DIR

## run the analysis program
cd analysisSoftware
./analyzerIterator06 ${CAL_PULSER} -S1 -Noverlap --FILTER_OPTION=4 ${BASELINE_FILE} ${RUN_NUMBER} -O
echo "batch job ending"

  32   Mon Dec 17 21:16:31 2018   Brian Clark   Run over many data files in parallel

To analyze data, we sometimes need to run over many thousands of runs at once. To do this in parallel, we can submit a job for every run we want to do. This will proceed in several steps:

  1. We need to prepare an analysis program.
    1. This is demo.cxx.
    2. The program will take an input data file and an output location.
    3. The program will do some analysis on each event, and then write the result of that analysis to an output file labeled with the same number as the input file.
  2. We need to prepare a job script for PBS.
    1. This is "run.sh"; this is the set of instructions to be submitted to the cluster.
    2. The instructions say to:
      1. Source a shell environment
      2. Run the executable
      3. Move the output root file to the output location.
    3. Note that we're telling the program we wrote in step 1 to write to the node-local $TMPDIR, and then moving the result to our final output directory at the end. This is better for cluster performance.
  3. We need to make a list of data files to run over
    1. We can do this on OSC by running ls -d -1 /fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event*.root > run_list.txt
    2. This places the full path to the ROOT files in that folder into a list called run_list.txt that we can loop over.
  4. Finally, we need a script that will submit all of the jobs to the cluster.
    1. This is "submit_jobs.sh".
    2. This loops over all the files in our run_list.txt and submits a run.sh job for each of them.
    3. This is also where we define the $RUNDIR (where the code is to be executed) and the $OUTPUTDIR (where the output products are to be stored)

Once you've generated all of these output files, you can run over the output files only to make plots and such.
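
For example, once all the jobs finish, the per-run output files can be merged into one file for plotting with ROOT's hadd (the output directory here matches submit_jobs.sh below):

    hadd combined.root /fs/scratch/PAS0654/shell_demo/outputs/outputs_run*.root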

 

Attachment 1: demo.cxx
////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////
////  demo.cxx 
////  demo
////
////  Nov 2018
////////////////////////////////////////////////////////////////////////////////

//Includes
#include <iostream>
#include <string>
#include <sstream>

//AraRoot Includes
#include "RawAtriStationEvent.h"
#include "UsefulAtriStationEvent.h"

//ROOT Includes
#include "TTree.h"
#include "TFile.h"
#include "TGraph.h"

using namespace std;

RawAtriStationEvent *rawAtriEvPtr;

int main(int argc, char **argv)
{

	if(argc<3) {
		std::cout << "Usage\n" << argv[0] << " <input_file> <output_location> "<<endl;
		return -1;
	}

	/*
	arguments
	0: exec
	1: input data file
	2: output location
	*/
	
	TFile *fpIn = TFile::Open(argv[1]);
	if(!fpIn) {
		std::cout << "Can't open file\n";
		return -1;
	}
	TTree *eventTree = (TTree*) fpIn->Get("eventTree");
	if(!eventTree) {
		std::cout << "Can't find eventTree\n";
		return -1;
	}
	eventTree->SetBranchAddress("event",&rawAtriEvPtr);
	int run;
	eventTree->SetBranchAddress("run",&run);
	eventTree->GetEntry(0);
	printf("Filter Run Number %d \n", run);

	char outfile_name[400];
	sprintf(outfile_name,"%s/outputs_run%d.root",argv[2],run);

	TFile *fpOut = TFile::Open(outfile_name, "RECREATE");
	TTree* outTree = new TTree("outTree", "outTree");
	int WaveformLength[16];
	outTree->Branch("WaveformLength", &WaveformLength, "WaveformLength[16]/D");
	
	Long64_t numEntries=eventTree->GetEntries();

	for(Long64_t event=0;event<numEntries;event++) {
		eventTree->GetEntry(event);
		UsefulAtriStationEvent *realAtriEvPtr = new UsefulAtriStationEvent(rawAtriEvPtr, AraCalType::kLatestCalib);
		for(int i=0; i<16; i++){
			TGraph *gr = realAtriEvPtr->getGraphFromRFChan(i);
			WaveformLength[i] = gr->GetN();
			delete gr;
		}		
		outTree->Fill();
		delete realAtriEvPtr;
	} //loop over events
	
	fpOut->Write();
	fpOut->Close();
	
	fpIn->Close();
	delete fpIn;
}
Attachment 2: run.sh
#!/bin/bash
#PBS -l nodes=1:ppn=1
#PBS -l mem=4GB
#PBS -l walltime=00:05:00
#PBS -A PAS0654
#PBS -e /fs/scratch/PAS0654/shell_demo/err_out_logs
#PBS -o /fs/scratch/PAS0654/shell_demo/err_out_logs

# you should change the -e and -o to write your 
# log files to a location of your preference

# source your own shell script here
eval 'source /users/PAS0654/osu0673/A23_analysis/env.sh'

# $RUNDIR was defined in the submission script 
# along with $FILE and $OUTPUTDIR

cd $RUNDIR

# $TMPDIR is the node-local scratch storage of this specific node
# it's the only variable we didn't have to define

./bin/demo $FILE $TMPDIR 

# after we're done
# we copy the results to the $OUTPUTDIR

pbsdcp $TMPDIR/'*.root' $OUTPUTDIR
Attachment 3: run_list.txt
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1000.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1001.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1002.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1004.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1005.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1006.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1007.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1009.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1010.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1011.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1012.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1014.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1015.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1016.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1017.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1019.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1020.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1021.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1022.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1024.root
Attachment 4: submit_jobs.sh
#!/bin/bash

#where should the outputs be stored?
OutputDir="/fs/scratch/PAS0654/shell_demo/outputs"
echo '[ Processed file output directory: ' $OutputDir ' ]'
export OutputDir

#where is your executable compiled?
RunDir="/users/PAS0654/osu0673/A23_analysis/araROOT"
export RunDir

#define the list of runs to execute on
readfile=run_list.txt

counter=0
while read line1
do
	qsub -v RUNDIR=$RunDir,OUTPUTDIR=$OutputDir,FILE=$line1 -N 'job_'$counter run.sh
	counter=$((counter+1))
done < $readfile
  30   Tue Nov 27 09:43:37 2018   Brian Clark   Plot Two TH2Ds with Different Color Palettes   Analysis

Say you want to plot two TH2Ds on the same pad, but with different color palettes?

This is possible, but requires a touch of fancy ROOT-ing. Actually, it's not that much; it just takes a lot of time to figure out. So here it is.

Fair warning: the order of all the gPad->Modified() calls etc. seems very important for avoiding a seg fault.

I include the main (demo.cc) and the Makefile.

Attachment 1: demo.cc
///////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////
///////		demo.cc
//////		Nov 2018, Brian Clark 
//////		Demonstrate how to draw 2 TH2D's with two different color palettes (needs ROOT6)
////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////

//C++
#include <iostream>
#include <ctime> // for time(0), used to seed the TRandom3 generators

//ROOT Includes
#include "TH2D.h"
#include "TCanvas.h"
#include "TStyle.h"
#include "TExec.h"
#include "TColor.h"
#include "TPaletteAxis.h"
#include "TRandom3.h"

using namespace std;

int main(int argc, char **argv)
{

	gStyle->SetOptStat(0);

	TH2D *h1 = new TH2D("h1","h1",100,-10,10,100,-10,10);
	TH2D *h2 = new TH2D("h2","h2",100,-10,10,100,-10,10);

	TRandom3 *R = new TRandom3(time(0));
	int count=0;
	while(count<10000000){
		h1->Fill(R->Gaus(-4.,1.),R->Gaus(-4.,1.));
		count++;
	}

	TRandom3 *R2 = new TRandom3(time(0)+100);
	count=0;
	while(count<10000000){
		h2->Fill(R2->Gaus(4.,1.),R2->Gaus(4.,1.));
		count++;
	}

	TCanvas *c = new TCanvas("c","c",1100,850);

	h1->Draw("colz");
		TExec *ex1 = new TExec("ex1","gStyle->SetPalette(kValentine);");
		ex1->Draw();
		h1->Draw("colz same");
	gPad->SetLogz();
	gPad->Modified();
	gPad->Update();

	TPaletteAxis *palette1 = (TPaletteAxis*)h1->GetListOfFunctions()->FindObject("palette");
	palette1->SetX1NDC(0.7);
	palette1->SetX2NDC(0.75);
	palette1->SetY1NDC(0.1);
	palette1->SetY2NDC(0.95);

	gPad->SetRightMargin(0.3);
	gPad->SetTopMargin(0.05);

	h2->Draw("same colz");
		TExec *ex2 = new TExec("ex2","gStyle->SetPalette(kCopper);"); //TColor::InvertPalette();
		ex2->Draw();
		h2->Draw("same colz");
	gPad->SetLogz();
	gPad->Modified();
	gPad->Update();

	TPaletteAxis *palette2 = (TPaletteAxis*)h2->GetListOfFunctions()->FindObject("palette");
	palette2->SetX1NDC(0.85);
	palette2->SetX2NDC(0.90);
	palette2->SetY1NDC(0.1);
	palette2->SetY2NDC(0.95);

	h1->SetTitle("");
	h1->GetXaxis()->SetTitle("X-Value");
	h1->GetYaxis()->SetTitle("Y-Value");

	h1->GetZaxis()->SetTitle("First Histogram Events");
	h2->GetZaxis()->SetTitle("Second Histogram Events");

	h1->GetYaxis()->SetTitleSize(0.05);
	h1->GetXaxis()->SetTitleSize(0.05);
	h1->GetYaxis()->SetLabelSize(0.045);
	h1->GetXaxis()->SetLabelSize(0.045);

	h1->GetZaxis()->SetTitleSize(0.045);
	h2->GetZaxis()->SetTitleSize(0.045);
	h1->GetZaxis()->SetLabelSize(0.04);
	h2->GetZaxis()->SetLabelSize(0.04);
	
	h1->GetYaxis()->SetTitleOffset(1);
	h1->GetZaxis()->SetTitleOffset(1);
	h2->GetZaxis()->SetTitleOffset(1);

	c->SetLogz();
	c->SaveAs("test.png");

}
Attachment 2: Makefile
LDFLAGS=-L${ARA_UTIL_INSTALL_DIR}/lib -L${shell root-config --libdir}
CXXFLAGS=-I${ARA_UTIL_INSTALL_DIR}/include -I${shell root-config --incdir}
LDLIBS += $(shell $(ROOTSYS)/bin/root-config --libs)
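
Note that the Makefile has no explicit rule; it relies on make's built-in rule for C++ sources (which uses the CXXFLAGS, LDFLAGS, and LDLIBS above; the ARA_UTIL paths aren't actually needed by this demo). So, with ROOT set up in your environment, building and running should just be:

    make demo
    ./demo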
Attachment 3: test.png
  47   Mon Apr 24 11:51:42 2023   William Luszczak   PUEO simulation stack installation instructions   Software

These are instructions I put together as I was first figuring out how to compile PueoSim/NiceMC. This was originally done on machines running CentOS 7, but it has since been replicated on the OSC machines (running RedHat 7.9, I think?). I generally try to avoid any `module load` type prerequisites, instead opting to compile any dependencies from source. You _might_ be able to get this to work by `module load`ing e.g. fftw, but try this at your own peril.

#pueoBuilder Installation Tutorial

This tutorial will guide you through the process of building the tools included in pueoBuilder from scratch, including the prerequisites and any environment variables that you will need to set. This sort of thing is always a bit of a nightmare process for me, so hopefully this guide can help you skip some of the frustration that I ran into. I did not have root access on the system I was building on, so the instructions below are what I had to do to get things working with local installations. If you have root access, then things might be a bit easier. For reference, I'm working on CentOS 7; other operating systems might have different problems that arise. 

##Prerequisites
As far as I can tell, the prerequisites that need to be built first are:

-Python 3.9.18 (Apr. 6 2024 edit by Jason Yao, needed for ROOT 6.26-14)
-cmake 3.21.2 (I had problems with 3.11.4)
-gcc 11.1.0 (9.X will not work) (update 4/23/24: If you are trying to compile ROOT 6.30, you might need to downgrade to gcc 10.X, see note about TBB in "Issues I ran into" at the end)
-fftw 3.3.9
-gsl 2.7.1 (for ROOT)
-ROOT 6.24.00
-OneTBB 2021.12.0 (if trying to compile ROOT 6.30)

###CMake
You can download the source files for CMake here: https://cmake.org/download/. Untar the source files with:

    tar -xzvf cmake-3.22.1.tar.gz

Compiling CMake is as easy as following the directions on the website: https://cmake.org/install/, but since we're doing a local build, we'll use the `configure` script instead of the listed `bootstrap` script. As an example, suppose that I downloaded the above tar file to `/scratch/wluszczak/cmake`: 

    mkdir install
    cd cmake-3.22.1
    ./configure --prefix=/scratch/wluszczak/cmake/install
    make
    make install

You should additionally add this directory to your `$PATH` variable:

    export PATH=/scratch/wluszczak/cmake/install/bin:$PATH
    

To check to make sure that you are using the correct version of CMake, run:

    cmake --version

and you should get:

    cmake version 3.22.1

    CMake suite maintained and supported by Kitware (kitware.com/cmake).

### gcc 11.1.0

Download the gcc source from github here: https://github.com/gcc-mirror/gcc/tags. I used the 11.1.0 release, though there is a more recent 11.2.0 release that I have not tried. Once you have downloaded the source files, untar the directory:

    tar -xzvf gcc-releases-gcc-11.1.0.tar.gz

Then install the prerequisites for gcc:
    
    cd gcc-releases-gcc-11.1.0
    contrib/download_prerequisites

One of the guides I looked at also recommended installing flex separately, but I didn't seem to need to do this, and I'm not sure how you would go about it without root privileges, though I imagine it's similar to the process for all the other packages here (download the source and then build by providing an installation prefix somewhere).

After you have installed the prerequisites, create a build directory:

    cd ../
    mkdir build
    cd build

Then configure GCC for compilation like so:

    ../gcc-releases-gcc-11.1.0/configure -v --prefix=/home/wluszczak/gcc-11.1.0 --enable-checking=release --enable-languages=c,c++,fortran --disable-multilib --program-suffix=-11.1

I don't remember why I installed to my home directory instead of the /scratch/ directories used above. In principle the installation prefix can go wherever you have write access. Once things have configured, compile gcc with:

    make -j $(nproc)
    make install

Here `$(nproc)` expands to the number of available cores; substitute however many processing threads you want to devote to compilation. More threads will run faster, but be more taxing on your computer. For reference, I used 8 threads and it took ~15 min to finish. 


Once gcc is built, we need to set a few environment variables:

    export PATH=/home/wluszczak/gcc-11.1.0/bin:$PATH
    export LD_LIBRARY_PATH=/home/wluszczak/gcc-11.1.0/lib64:$LD_LIBRARY_PATH

We also need to make sure cmake uses this compiler:

    export CC=/home/wluszczak/gcc-11.1.0/bin/gcc-11.1
    export CXX=/home/wluszczak/gcc-11.1.0/bin/g++-11.1
    export FC=/home/wluszczak/gcc-11.1.0/bin/gfortran-11.1

If your installation prefix in the configure command above was different, substitute that directory in place of `/home/wluszczak/gcc-11.1.0` for all the above export commands. To easily set these variables whenever you want to use gcc-11.1.0, you can stick these commands into a single shell script:

    #load_gcc11.1.sh
    export PATH=/home/wluszczak/gcc-11.1.0/bin:$PATH
    export LD_LIBRARY_PATH=/home/wluszczak/gcc-11.1.0/lib64:$LD_LIBRARY_PATH

    export CC=/home/wluszczak/gcc-11.1.0/bin/gcc-11.1
    export CXX=/home/wluszczak/gcc-11.1.0/bin/g++-11.1
    export FC=/home/wluszczak/gcc-11.1.0/bin/gfortran-11.1

(again substituting your installation prefix in place of mine). You can then set all these environment variables by simply running:
    
    source load_gcc11.1.sh

Once this is done, you can check that gcc-11.1.0 is properly installed by running:

    gcc-11.1 --version

Note that plain old

    gcc --version

might still point to an older version of gcc. This is fine though. 

###FFTW 3.3.9
Grab the source code for the appropriate version of FFTW from here: http://www.fftw.org/download.html

However, do NOT follow the installation instructions on the webpage. Those instructions might work if you have root privileges, but I personally couldn't seem to get things to work that way. Instead, we're going to build fftw with cmake. Untar the fftw source files:

    tar -xzvf fftw-3.3.9.tar.gz

Make a build directory and cd into it:
    
    mkdir build
    cd build

Now build using cmake, using the flags shown below. For reference, I downloaded and untarred the source file in `/scratch/wluszczak/fftw/build`, so adjust your install prefix accordingly to point to your own build directory that you created in the previous step.

    cmake -DCMAKE_INSTALL_PREFIX=/scratch/wluszczak/fftw/build/ -DBUILD_SHARED_LIBS=ON -DENABLE_OPENMP=ON -DENABLE_THREADS=ON ../fftw-3.3.9
    make install -j $(nproc)

Now comes the weird part. Remove everything except the `include` and `lib64` directories in your build directory (if you installed to a different `CMAKE_INSTALL_PREFIX`, the include and lib64 directories might be located there instead. The important thing is that you want to remove everything, but leave the `include` and `lib64` directories untouched):

    rm *
    rm -r CMakeFiles

Now rebuild fftw, but with an additional flag:

    cmake -DCMAKE_INSTALL_PREFIX=/scratch/wluszczak/fftw/build/ -DBUILD_SHARED_LIBS=ON -DENABLE_OPENMP=ON -DENABLE_THREADS=ON -DENABLE_FLOAT=ON ../fftw-3.3.9
    make install -j $(nproc)

At the end of the day, your fftw install directory should have the following files:

    include/fftw3.f  
    include/fftw3.f03
    include/fftw3.h  
    include/fftw3l.f03  
    include/fftw3q.f03 
    lib64/libfftw3f.so          
    lib64/libfftw3f_threads.so.3      
    lib64/libfftw3_omp.so.3.6.9  
    lib64/libfftw3_threads.so
    lib64/libfftw3f_omp.so        
    lib64/libfftw3f.so.3        
    lib64/libfftw3f_threads.so.3.6.9  
    lib64/libfftw3.so            
    lib64/libfftw3_threads.so.3
    lib64/libfftw3f_omp.so.3      
    lib64/libfftw3f.so.3.6.9    
    lib64/libfftw3_omp.so             
    lib64/libfftw3.so.3          
    lib64/libfftw3_threads.so.3.6.9
    lib64/libfftw3f_omp.so.3.6.9  
    lib64/libfftw3f_threads.so  
    lib64/libfftw3_omp.so.3           
    lib64/libfftw3.so.3.6.9

Why do we have to do things this way? I don't know, I'm bad at computers. Maybe someone more knowledgeable knows. I found that when I didn't do this step, I'd run into errors that pueoBuilder could not find some subset of the required files (either the ones added by building with `-DENABLE_FLOAT`, or the ones added by building without `-DENABLE_FLOAT`). 

Once fftw has been installed, export your install directory (the one with the include and lib64 folders) to the following environment variable:

    export FFTWDIR=/scratch/wluszczak/fftw/build

Again, substituting your own fftw install prefix that you used above in place of `/scratch/wluszczak/fftw/build`

###gsl 2.7.1
gsl 2.7.1 is needed for the `mathmore` option in ROOT. If you have an outdated version of gsl, ROOT will still compile, but it will skip installing `mathmore` and `root-config --has-mathmore` will return `no`. To fix this, grab the latest source code for gsl from here: https://www.gnu.org/software/gsl/. Untar the files to a directory of your choosing:

    tar -xzvf gsl-latest.tar.gz

For some reason I also installed gsl to my home directory, but in principle you can put it wherever you want. 

    mkdir /home/wluszczak/gsl
    ./configure --prefix=/home/wluszczak/gsl
    make
    make check
    make install

To make sure ROOT can find this installation of gsl, you'll again need to set an environment variable prior to building ROOT:

    export GSL_ROOT_DIR=/home/wluszczak/gsl/
    
I also added this to my $PATH variable, though I don't remember if that was required to get things working or not:

    export PATH=/home/wluszczak/gsl/bin/:$PATH 
    export LD_LIBRARY_PATH=/home/wluszczak/gsl/lib:$LD_LIBRARY_PATH

 

###Python 3.9.18
Apr. 6, 2024 edit by Jason Yao:
I was able to follow this entire ELOG to install root 6.24.00, but I also was getting warnings/errors that seem to be related to Python,
so I went ahead and installed Python 3.9.18 (and then a newer version of ROOT just for fun).
Note that even though we can `module load python/3.9-2022.05` on OSC, I am 90% sure that this provided Python instance is no good as far as ROOT is concerned.

Head over to https://www.python.org/downloads/release/python-3918/ to check out the source code.
You can run
    wget https://www.python.org/ftp/python/3.9.18/Python-3.9.18.tgz
to download it on OSC; then, run
    tar -xzvf Python-3.9.18.tgz
    cd Python-3.9.18

Next we will compile Python from source.
I wanted to install to `${HOME}/usr/Python-3.9.18/install`, so
    ./configure --prefix=${HOME}/usr/Python-3.9.18/install --enable-shared
Note that we must have the flag `--enable-shared` "to ensure that shared libraries are built for Python. By not doing this you are preventing any application which wants to use Python as an embedded environment from working", according to one guide I found.
(The corresponding error when you try to compile ROOT later on would look like "...can not be used when making a shared object; recompile with -fPIC...")

After configuration,
    make -j8 && make install
(using 8 threads)

After installation, head over to the install directory, and then
    cd bin
You should see `pip3` and `python3.9`. If you run
    ./python3
you should see the Python interactive terminal
    Python 3.9.18 (main, Apr  5 2024, 22:49:51) 
    [GCC 11.1.0] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>>
Add the python3 in this folder to your PATH variable. For example, 
    export PATH=${HOME}/usr/Python-3.9.18/install/bin:${PATH}
 

While we are in the python install directory, we might as well also use the `pip3` there to install numpy:
    ./pip3 install numpy
(I am not sure if this is absolutely needed by ROOT, but probably)

Next comes the important bit. You need to add the `lib/` directory inside you python installation to the environment variable $LD_LIBRARY_PATH
    export LD_LIBRARY_PATH=${HOME}/usr/Python-3.9.18/install/lib:$LD_LIBRARY_PATH
according to stackoverflow. Without this step I ran into errors when compiling ROOT.

 

###ROOT 6.24.00
Download the specific version of ROOT that you need from here: https://root.cern/install/all_releases/

You might need to additionally install some of the dependencies (https://root.cern/install/dependencies/), but it seems like everything I needed was already installed on my system. 

Untar the source you downloaded:
    
    tar -xzvf root_v6.24.00.source.tar.gz

Make some build and install directories:

    mkdir build install
    cd build

Run CMake, but be sure to enable the fortran, mathmore, and minuit2 options. For reference, I had downloaded and untarred the source files to `/scratch/wluszczak/root`. Your installation and source paths will be different.

    cmake -DCMAKE_INSTALL_PREFIX=/scratch/wluszczak/root/install/ /scratch/wluszczak/root/root-6.24.00/ -Dfortran=ON -Dminuit2=ON -Dmathmore=ON

Note: if you end up with an error related to compiling XROOTD, then add -Dxrootd=OFF to the original cmake command above.

Then proceed to start the build:

    cmake --build . --target install -j $(nproc)
    

If everything has worked, then after the above command finishes running, you should be able to source the following file to finish setting up ROOT:

    source ../install/bin/thisroot.sh
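
A few quick checks that the build is healthy (suggested by me, not part of the original instructions):

    # report the installed version
    root-config --version
    # start ROOT and quit immediately; a clean exit means the core install works
    root -l -q
    # if ROOT was built against your own Python, the PyROOT bindings should import
    python3 -c "import ROOT; print(ROOT.gROOT.GetVersion())"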

##pueoBuilder
By this point, you should have working installations of CMake 3.21.2, gcc-11.1.0, fftw 3.3.9, and ROOT 6.24.00. Additionally, the following environment variables should have been set:

    export PATH=/scratch/wluszczak/cmake/install/bin:$PATH

    export PATH=/home/wluszczak/gcc-11.1.0/bin:$PATH
    export LD_LIBRARY_PATH=/home/wluszczak/gcc-11.1.0/lib64:$LD_LIBRARY_PATH

    export CC=/home/wluszczak/gcc-11.1.0/bin/gcc-11.1
    export CXX=/home/wluszczak/gcc-11.1.0/bin/g++-11.1
    export FC=/home/wluszczak/gcc-11.1.0/bin/gfortran-11.1

    export FFTWDIR=/scratch/wluszczak/fftw/build

At this point, the hard work is mostly done. Check out pueoBuilder with:

    git clone git@github.com:PUEOCollaboration/pueoBuilder 

set the following environment variables:

    export PUEO_BUILD_DIR=/scratch/wluszczak/PUEO/pueoBuilder
    export PUEO_UTIL_INSTALL_DIR=/scratch/wluszczak/PUEO/pueoBuilder
    export NICEMC_SRC=${PUEO_BUILD_DIR}/components/nicemc
    export NICEMC_BUILD=${PUEO_BUILD_DIR}/build/components/nicemc
    export PUEOSIM_SRC=${PUEO_BUILD_DIR}/components/pueoSim
    export LD_LIBRARY_PATH=${PUEO_UTIL_INSTALL_DIR}/lib:$LD_LIBRARY_PATH

Where $PUEO_BUILD_DIR and $PUEO_UTIL_INSTALL_DIR point to where you cloned pueoBuilder (in my case, `/scratch/wluszczak/PUEO/pueoBuilder`). Now you should be able to just run:

    ./pueoBuilder.sh

Perform a prayer to the C++ gods while you're waiting for it to compile, and hopefully at the end of the day you'll have a working set of PUEO software. 
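
Since all of the exports above need to be set in every new shell, one option (my own habit, not something from the original notes) is to collect them into a small script that you source before building or running anything:

    # setup_pueo.sh -- example only; substitute your own paths
    export PATH=/scratch/wluszczak/cmake/install/bin:$PATH
    export PATH=/home/wluszczak/gcc-11.1.0/bin:$PATH
    export LD_LIBRARY_PATH=/home/wluszczak/gcc-11.1.0/lib64:$LD_LIBRARY_PATH
    export CC=/home/wluszczak/gcc-11.1.0/bin/gcc-11.1
    export CXX=/home/wluszczak/gcc-11.1.0/bin/g++-11.1
    export FFTWDIR=/scratch/wluszczak/fftw/build
    source /scratch/wluszczak/root/install/bin/thisroot.sh
    export PUEO_BUILD_DIR=/scratch/wluszczak/PUEO/pueoBuilder
    export PUEO_UTIL_INSTALL_DIR=/scratch/wluszczak/PUEO/pueoBuilder
    export NICEMC_SRC=${PUEO_BUILD_DIR}/components/nicemc
    export NICEMC_BUILD=${PUEO_BUILD_DIR}/build/components/nicemc
    export PUEOSIM_SRC=${PUEO_BUILD_DIR}/components/pueoSim
    export LD_LIBRARY_PATH=${PUEO_UTIL_INSTALL_DIR}/lib:$LD_LIBRARY_PATH

Then a plain `source setup_pueo.sh` at the start of each session restores the full environment.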

##Issues I Ran Into
If you already have an existing installation of ROOT, you may still need to recompile to make sure you're using the same c++ standard that the PUEO software is using. I believe the pre-compiled ROOT binaries available through their website are insufficient, though maybe someone else has been able to get those working. 

If you're running into errors about c++ standard or compiler version even after you have installed gcc-11.1.0, then for some reason your system isn't recognizing your local installation of gcc-11.1.0. Check the path variables ($PATH and $LD_LIBRARY_PATH) to make sure the gcc-11.1.0 `bin` directory is being searched.
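
A couple of quick checks (again my suggestion, not from the original notes) to see which compiler the system is actually picking up:

    # both should point into your gcc-11.1.0 install, not /usr/bin
    which gcc-11.1 g++-11.1
    $CXX --version
    # confirm the gcc lib64 directory is on the library search path
    echo $LD_LIBRARY_PATH | tr ':' '\n' | grep gcc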

If you're running into an error that looks like:
        
    CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
    Please set them or make sure they are set and tested correctly in the CMake files:
    FFTWF_LIB (ADVANCED)

then pueoBuilder can't find your fftw installation (or files that are supposed to be included in that installation). FFTWF_LIB refers to the single-precision fftw library (libfftw3f), so if that is the missing piece you likely need to rebuild fftw with single-precision support enabled (the --enable-float configure flag); more generally, try rebuilding with different flags according to which files it thinks are missing.

If it seems like pueoBuilder can't seem to find your fftw installation at all (i.e. you're getting some error that looks like `missing: FFTW_LIBRARIES` or `missing: FFTW_INCLUDES`), check the environment variables that are supposed to point to your fftw installation (`$FFTWDIR`) and make sure there are the correct files in the `lib` and `include` subdirectories. 
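
For example, a quick look at what's actually in those directories (a suggested check; the library names are the usual fftw ones):

    # expect libfftw3* (and libfftw3f* for single precision) plus fftw3.h
    ls $FFTWDIR/lib | grep fftw
    ls $FFTWDIR/include | grep fftw3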

Update 6/23/24: The latest version of ROOT (6.30) will fail to compile on OSC unless you manually compile TBB as well. An easy workaround is to simply downgrade to ROOT 6.24; however, if you really need ROOT 6.30, you can follow the instructions below to install TBB and compile ROOT:

You will first need to downgrade to GCC 10.X, as TBB will not compile with GCC 11. This can be done by following the GCC installation instructions above, except starting with GCC 10 source code instead of GCC 11.

To install TBB yourself, download the source code (preferably the .tar.gz file) from here: https://github.com/oneapi-src/oneTBB/releases/tag/v2021.12.0. Move the file to the directory where you want to install TBB and untar it with:

    tar -xzvf oneTBB-2021.12.0.tar.gz

Make some build and install directories:

    mkdir build install
    cd build

Then configure with CMake from inside the build directory, pointing it at the untarred source (if you want to skip building oneTBB's self-tests, you can probably also add -DTBB_TEST=OFF):

    cmake -DCMAKE_INSTALL_PREFIX=/path/to/tbb/install ../oneTBB-2021.12.0

Then compile and install with:

    cmake --build . --target install

Once this has finished running, you can add the installation to your $PATH and $LD_LIBRARY_PATH variables:

    export PATH=/path/to/tbb/install/bin:$PATH
    export LD_LIBRARY_PATH=/path/to/tbb/install/lib64:$LD_LIBRARY_PATH
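
To confirm the install landed where you expect (a suggested check; on 64-bit Linux, CMake typically installs the libraries under lib64):

    # expect libtbb.so and friends
    ls /path/to/tbb/install/lib64 | grep tbb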

You can then proceed as normal, except when compiling ROOT you will need one additional cmake flag (-Dbuiltin_tbb=ON):

    cmake -DCMAKE_INSTALL_PREFIX=/scratch/wluszczak/root/install/ /scratch/wluszczak/root/root-6.24.00/ -Dfortran=ON -Dminuit2=ON -Dmathmore=ON -Dbuiltin_tbb=ON

And hopefully this should work. This process is a little bit more involved than just downgrading ROOT, so try to avoid going down this route unless absolutely necessary.

 

  46   Tue Aug 2 14:34:15 2022 Alex MOSC License Request 

Some programs on OSC require authorized access in the form of a license. The license will be read automatically whenever you open such a program on OSC, provided you have access to the license. In order to gain access to a license, you need to fill out the attached form and send it to Amy to forward to OSC. At least some of the available programs which require a license can be found here: https://www.osc.edu/resources/available_software/software_list . Replace the name of the program at the top of the form with the desired software.

Attachment 1: User_Software_Agreement.pdf
  9   Thu May 11 13:43:46 2017 Sam StaffordNotes on installing icemc on OSCSoftware
Attachment 1: icemc_setup_osc.txt
A few notes about installing icemc on OSC

Dependencies
  ROOT - download from CERN and install according to instructions
  FFTW - do "module load gnu/4.8.5"   (or put it in your .bash_profile)

The environment variable FFTWDIR must contain the directory where FFTW resides
  in my case this was /usr/local/fftw3/3.3.4-gnu 
  set this up in your .bash_profile (not .bashrc)

I copied my working instance of icemc from my laptop to a folder in my osc space
  Copy the whole icemc directory (maybe it's icemc/trunk, depending on how you installed), EXCEPT for the "output" subdir because it's big and unnecessary
  in your icemc directory on OSC, do "mkdir output"

In icemc/Makefile
  find a statement like this:
    LIBS += -lMathMore $(FFTLIBS) -lAnitaEvent
  and modify it to include the directory where the FFTW library is:
    LIBS += -L$(FFTWDIR)/lib -lMathMore $(FFTLIBS) -lAnitaEvent

  note: FFTLIBS contains the list of libraries (e.g., -lfftw3), NOT the library search paths
  
Compile by doing "make"

Remember you should set up a batch job on OSC using PBS.
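
A minimal sketch of such a PBS script (the account string, resource requests, and icemc invocation are all placeholders to adapt):

    #!/bin/bash
    #PBS -A <your_project_account>
    #PBS -l walltime=04:00:00
    #PBS -l nodes=1:ppn=1

    # placeholder path: point this at your own icemc directory
    cd /path/to/icemc
    ./icemc

Submit it with `qsub yourscript.pbs`.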

  42   Tue Nov 5 16:22:16 2019 Keith McBrideNASA Proposal fellowshipsOther

Here is a useful link for astroparticle grad students regarding proposal opportunities from NASA:

 

https://nspires.nasaprs.com/external/solicitations/summary.do?solId=%7BE16CD59F-29DD-06C0-8971-CE1A9C252FD4%7D&path=&method=init

Full email I received regarding this information was:

_______________________________________________________________________________________________________________________________________________________________________________________________________________________________________________

ROSES-19 Amendment adds a new program element to ROSES-2019: Future Investigators in NASA Earth and Space Science and Technology (FINESST), the ROSES Graduate Student Research Program Element, E.6.

 

Through FINESST, the Science Mission Directorate (SMD) solicits proposals from accredited U.S. universities and other eligible organizations for graduate student-designed and performed research projects that contribute to SMD's science, technology and exploration goals.

 

A Notice of Intent is not requested for E.6 FINESST. Proposals to FINESST are due by February 4, 2020.

 

Potential proposers who wish to participate in the optional pre-proposal teleconference December 2, 2019 from 1:00-2:30 p.m. Eastern Time may (no earlier than 30 minutes prior to the start time) call 1-888-324-3185 (U.S.-only Toll Free) or 1-630-395-0272 (U.S. Toll) and use Participant Passcode: 8018549. Restrictions may prevent the use of a toll-free number from a mobile or free-phone or from telephones outside the U.S. For U.S. TTY-equipped callers or other types of relay service no earlier than 30 minutes before the start of the teleconference, call 711 and provide the same conference call number/passcode. Email HQ-FINESST@mail.nasa.gov any teleconference agenda suggestions and questions by November 25, 2019. Afterwards, questions and responses, with identifying information removed, will be posted on the NSPIRES page for FINESST under "other documents".

 

FINESST awards research grants with a research mentor as the principal investigator and the graduate student listed as the "student participant". Unlike the extinct NASA Earth and Space Science Fellowships (NESSF), the Future Investigators (FIs) are not trainees or fellows. Students with existing NESSF awards seeking a third year of funding may not submit proposals to FINESST. Instead, they may propose to NESSF20R. Subject to a period of performance restriction, some former NESSFs may be eligible to submit proposals to FINESST.

 

On or about November 1, 2019, this Amendment to the NASA Research Announcement "Research Opportunities in Space and Earth Sciences (ROSES) 2019" (NNH19ZDA001N) will be posted on the NASA research opportunity homepage at http://solicitation.nasaprs.com/ROSES2019 and will appear on the RSS feed at: https://science.nasa.gov/researchers/sara/grant-solicitations/roses-2019/

 

Questions concerning this program element may be directed to HQ-FINESST@mail.nasa.gov.

  Draft   Fri Jan 31 10:43:52 2020 Jorge TorresMounting ARA software on OSC through CernVM File SystemSoftware

OSC added ARA's CVMFS repository on Pitzer and Owens. This had already been done on UW's cluster thanks to Ben Hokanson-Fasig and Brian Clark. With CVMFS, all the dependencies are compiled and stored in a single folder (container), meaning that the user can just source a script that sets the relevant environment variables and not worry about installing anything at all. This is very useful, since it usually takes a considerable amount of time to get those dependencies and properly install/debug the ARA software. To use the software, all you have to do is:

source /cvmfs/ara.opensciencegrid.org/trunk/centos7/setup.sh

To verify that the container was correctly loaded, type

root

and see if the root display pops up. You can also go to /cvmfs/ara.opensciencegrid.org/trunk/centos7/source/AraSim and execute ./AraSim

Because it is a container, the permissions are read-only. This means that if you want to make any modifications to the existing code, you'll have to copy the piece of code that you want and change the environment variables of that package, in this case $ARA_UTIL_INSTALL_DIR, which is the destination where you want your executables, libraries, and such installed.

Libraries and executables are stored here, in case you want to point your environment variables at those dependencies: /cvmfs/ara.opensciencegrid.org/trunk/centos7/

Even if you're not in the ARA collaboration, you can benefit from this, since ROOT 6 is installed and compiled in the container. To use it, just run the same source command above, and ROOT 6 will be available for you.

Feel free to email any questions to Brian Clark or myself.

--------

Technical notes: 

The ARA software was compiled with gcc version 4.8.5. On OSC, that compiler can be loaded with `module load gnu/4.8.5`. If you're using any other compiler, you'll get a warning telling you that if you do any compilation against the ARA software, you may need to add the -D_GLIBCXX_USE_CXX11_ABI=0 flag to your Makefile.
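
For instance, a compile line for your own code against the container's libraries might look something like this (a sketch on my part; the source file and library names are illustrative, so substitute whichever ARA libraries your code actually uses):

    # hypothetical example: myAnalysis.cc is your own code
    g++ -D_GLIBCXX_USE_CXX11_ABI=0 myAnalysis.cc -o myAnalysis \
        `root-config --cflags --libs` \
        -I$ARA_UTIL_INSTALL_DIR/include -L$ARA_UTIL_INSTALL_DIR/lib -lAraEvent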

  Draft   Fri Feb 28 13:09:53 2020 Justin FlahertyInstalling anitaBuildTools on OSC-Owens (Revised 2/28/2020) 
  39   Thu Jul 11 10:05:37 2019 Justin FlahertyInstalling PyROOT for Python 3 on OwensSoftware

In order to get PyROOT working for Python 3, you must build ROOT with a flag that specifies Python 3 in the installation.  This method will create a folder titled root-6.16.00 in your current directory, so organize things how you see fit. Then the steps are relatively simple:

wget https://root.cern/download/root_v6.16.00.source.tar.gz
tar -zxf root_v6.16.00.source.tar.gz
cd root-6.16.00
mkdir obj
cd obj
cmake .. -Dminuit2=On -Dpython3=On
make -j8

If you wish to do a different version of ROOT, the steps should be the same:

wget https://root.cern/download/root_v<version>.source.tar.gz
tar -zxf root_v<version>.source.tar.gz
cd root-<version>
mkdir obj
cd obj
cmake .. -Dminuit2=On -Dpython3=On
make -j8
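
Once the build finishes, a quick way to test the bindings (a suggested check, not part of the original entry) is to set up the environment from the build directory and try the import:

source bin/thisroot.sh   # run from inside the obj directory
python3 -c "import ROOT; print(ROOT.gROOT.GetVersion())"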

  Draft   Thu Apr 27 18:28:22 2017 Sam Stafford (Also Slightly Jacob)Installing AnitaTools on OSCSoftware

Jacob here; just want to add how I got AnitaTools to see FFTW:

1) echo $FFTW3_HOME to find where the lib and include dirs are.

2) Next add the following line to the start of cmake/modules/FindFFTW.cmake

'set ( FFTW_ROOT full/path/you/got/from/step/1)'

 

Brief, experience-based instructions on installing the AnitaTools package on the Oakley OSC cluster.

Attachment 1: OSC_build.txt
Installing AnitaTools on OSC
Sam Stafford
04/27/2017

This document summarizes the issues I encountered installing AnitaTools on the OSC Oakley cluster.
I have indicated work-arounds I made for unexpected issues
  I do not know that this is the only valid process
  This process was developed by trial-and-error (mostly error) and may contain superfluous steps
    A person familiar with AnitaTools and cmake may be able to streamline it

Check out OSC's web site, particularly to find out about MODULES, which facilitate access to pre-installed software

export the following environment variables in your .bash_profile (not .bashrc); a consolidated example is sketched after these notes:

  ROOTSYS                             where you want ROOT to live
     install it somewhere in your user directory; at this time, ROOT is not pre-installed on Oakley as far as I can tell
  ANITA_UTIL_INSTALL_DIR              where you want anitaTools to live
  FFTWDIR                             where fftw is 
    look on OSC's website to find out where it is; you shouldn't have to install it locally

  PATH   should contain $FFTWDIR/bin  and $ROOTSYS/bin
  LD_LIBRARY_PATH should contain     $FFTWDIR/lib    $ROOTSYS/lib     $ANITA_UTIL_INSTALL_DIR/lib
  LD_INCLUDE_PATH should contain     $FFTWDIR/include    $ROOTSYS/include     $ANITA_UTIL_INSTALL_DIR/include

also put in your .bash_profile: (I put these after the exports)
  
    module load gnu/4.8.5     // loads g++ compiler
           (this should automatically load module fftw/3.3.4 also)
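
A consolidated sketch of that .bash_profile section (all paths below are examples or placeholders; substitute your own):

    # example .bash_profile section -- every path here is a placeholder
    export ROOTSYS=$HOME/root                       # where you want ROOT to live
    export ANITA_UTIL_INSTALL_DIR=$HOME/anitaTools  # where you want anitaTools to live
    export FFTWDIR=/usr/local/fftw3/3.3.4-gnu       # where fftw is (check OSC's website)
    export PATH=$FFTWDIR/bin:$ROOTSYS/bin:$PATH
    export LD_LIBRARY_PATH=$FFTWDIR/lib:$ROOTSYS/lib:$ANITA_UTIL_INSTALL_DIR/lib:$LD_LIBRARY_PATH
    export LD_INCLUDE_PATH=$FFTWDIR/include:$ROOTSYS/include:$ANITA_UTIL_INSTALL_DIR/include:$LD_INCLUDE_PATH
    module load gnu/4.8.5    # loads g++ compiler (should also pull in fftw/3.3.4)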

install ROOT  - follow ROOT's instructions to build from source.  It's a typical (configure / make / make install) sequence
  you probably need  ./configure --enable-Minuit2

get AnitaTools from github/anitaNeutrino "anitaBuildTool" (see Anita ELOG 672, by Cosmin Deaconu)

Change entry in which_event_reader to ANITA3, if you want to analyze ANITA-3 data
  (at least for now; I think they are developing smarts to make the SW adapt automatically to the anita data "version")

Do ./buildAnita.sh       //downloads the software and attempts a full build/install

    it may fail with a can't-find-fftw error during configure:
      system fails to populate environment variable FFTW_ROOT, not sure why
      add the following line at beginning of anitaBuildTool/cmake/modules/FindFFTW.cmake:
        set( FFTW_ROOT /usr/local/fftw3/3.3.4-gnu)
          (this apparently tricks cmake into finding fftw)
  NOTE: ./buildAnita.sh always downloads the software from github.  IT WILL WIPE OUT ANY CHANGES YOU MADE TO AnitaTools!
      
Do "make" 
    May fail with   /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.14' not found and/or a few other similar messages 
                   or may say C++11 is not supported
      need to change compiler setting for cmake:
        make the following change in anitaBuildTool/build/CMakeCache.txt   (points cmake to the g++ compiler instead of default intel/c++)
          #CMAKE_CXX_COMPILER:FILEPATH=/usr/bin/c++                  (comment this out)
           CMAKE_CXX_COMPILER:FILEPATH=/usr/local/gcc/4.8.5/bin/g++  (add this)  
       (you don't necessarily have to use gcc/4.8.5, but it worked for me)

Then retry by doing "make"

Once make is completed, do "make install"

A couple of notes:
  Once AnitaTools is built, if you change source, just do make, then make install (from anitaBuildTool) (don't ./buildAnita.sh; see above)
      (actually make install will do the make step if source changes are detected)
  To start AnitaTools over from cmake, delete the anitaBuildTool/build directory and run make (not cmake: make will drive cmake for you)
      (don't do cmake directly unless you know what you're doing; it'll mess things up)
  

  41   Fri Oct 11 13:43:35 2019 Amy How to start a new undergrad hireOther

If an undergrad has been working with the group on a volunteer basis, they will need that clarified, so that it is on record that they were not, up to then, working for pay without training. There is a form that they sign saying they have been working as a volunteer.

Below is an email that Pam sent in July 2019 outlining what they need.  Other things to remember:

Undergrads receive emails from the department to remind them about orientation and scheduling prior to first day of hire, and other emails from ASC. They receive more information at orientation.

They will need to show ID.  

If an undergrad (or any hire) does not waive retirement/OPERs within 30 days, they will have to have this deducted from their paycheck. This is another reason that orientation is important.
 
For the person hiring them:
 
1.      Tell your admin (Lisa) asap who, when, etc. you want to hire. ASAP is important because (Pam) will process over 75 undergraduate hires in the Fall and over 100 in the summer. Each takes 20 days on average (see below)
2.      On the first day of employment – ask your new hire if they have completed their orientation with the ASC.
 

From: Hood, Pam 
Sent: Tuesday, July 23, 2019 4:49 PM
To: 'physics-all@lists.osu.edu' <physics-all@lists.osu.edu>
Subject: Undergraduate Fall Hires
 
Hello all,
 
Fall is almost upon us and I am working on having positions ready to fill as well as posting on the OSU student job site and Federal Work study job boards. In order to plan my workflow, if you would let me know:
 
1.      Approximately how many undergraduate student assistants you plan to hire and at what rate of pay. Our department’s current average rate of pay is $10.00 - $10.50/hour, however it does depend on the position. Range is $8.55 - $14.00/hour.
 
2.      A brief summary of job duties and responsibilities (i.e. assisting in research for ______)
 
3.      Request for posting OR
 
4.      If you have specific student that has been a non-paid volunteer that you would like to hire, they need to sign a waiver prior to orientation.  Please see attached for volunteer waiver.
 
5.      Please indicate if your start date varies from August 20. And it may vary based on multiple factors, i.e. signatures, workflow, etc.
 
6.      All terminations for summer undergraduate student workers OR
 
7.      If you are intending to continue employment but require a reduction in hours worked in order to comply with policy Please see attached policy for min/max hours.
 
 
I have attempted to answer all the FAQs that I noted in the “summer undergraduate hiring wave” in italics, however please let me know if you have questions. As always, it is ASC policy to not start any employee prior to orientation and it will take a minimum of 10 days from the time the contract is signed by both the student and the supervisor– please encourage your students to sign asap or it lengthens the process significantly.
 
I will do my best to facilitate this process for you. Students Buck ID# and email is extremely helpful. If you need the hire to start August 20, please send me the information via email by July 31, 2019. If you respond later or request a hire after July 31, I will estimate the hire date at the time of submittal.
 
Thanks,
 
Pam
