Place to document instructions for how to do things
ID | Date | Author | Subject | Project
49 | Thu Sep 14 22:30:06 2023 | Jason Yao | How to profile a C++ program | Software

This guide is modified from section (d) of the worksheet inside Module 10 of Phys 6810 Computational Physics (Spring 2023).

NOTE: gprof does not work on macOS. Please use a Linux machine (such as OSC).

To use gprof, compile and link the relevant code with the -pg option:
Take a look at the makefile make_hello_world and modify both the CFLAGS and LDFLAGS lines to include -pg (i.e., CFLAGS= -g -pg and LDFLAGS= -pg).

Compile and link the program by typing
    make -f make_hello_world

Execute the program
    ./hello_world.x

With the -pg flags, the execution will generate a file called gmon.out that gprof reads.
The program has to exit normally (e.g., it cannot be stopped with Ctrl-C).
Warning: Any existing gmon.out file will be overwritten.

Run gprof and save the output to a file (e.g., gprof.out) by
    gprof hello_world.x > gprof.out

At this point we should see a text file called gprof.out, which contains the profile of hello_world.cpp:
    vim gprof.out

Attachment 1: hello_world.cpp
#include <iostream>
#include <thread>
#include <chrono>
#include <cmath>


using namespace std;

void nap(){
  // usleep(3000);
  // this_thread::sleep_for(30000ms);

  // busy loop so the profiler has something to measure
  // (note: in C++, i^2 is bitwise XOR, not i squared)
  for (int i=0;i<1000000000;i++){
    volatile double j = sqrt((double)i*(double)i); // volatile keeps the loop from being optimized away
  }
}

int main(){

  cout << "taking a nap" << endl;

  nap();

  cout << "hello world" << endl;

}
Attachment 2: make_hello_world
SHELL=/bin/sh

# Note: Comments start with #.  $(FOOBAR) means: evaluate the variable 
#        defined by FOOBAR= (something).

# This file contains a set of rules used by the "make" command.
#   This makefile $(MAKEFILE) tells "make" how the executable $(COMMAND) 
#   should be create from the source files $(SRCS) and the header files 
#   $(HDRS) via the object files $(OBJS); type the command:
#        "make -f make_program"
#   where make_program should be replaced by the name of the makefile.
# 
# Programmer:  Dick Furnstahl (furnstahl.1@osu.edu)
# Latest revision: 12-Jan-2016 
# 
# Notes:
#  * If you are ok with the default options for compiling and linking, you
#     only need to change the entries in section 1.
#
#  * Defining BASE determines the name for the makefile (prepend "make_"), 
#     executable (append ".x"), zip archive (append ".zip") and gzipped 
#     tar file (append ".tar.gz"). 
#
#  * To remove the executable and object files, type the command:
#          "make -f $(MAKEFILE) clean"
#
#  * To create a zip archive with name $(BASE).zip containing this 
#     makefile and the SRCS and HDRS files, type the command:
#        "make -f $(MAKEFILE) zip"
#
#  * To create a gzipped tar file with name $(BASE).tar.gz containing this 
#     makefile and the source and header files, type the command:
#          "make -f $(MAKEFILE) tarz"
#
#  * Continuation lines are indicated by \ with no space after it.  
#     If you get a "missing separator" error, it is probably because there
#     is a space after a \ somewhere.
#

###########################################################################
# 1. Specify base name, source files, header files, input files
########################################################################### 

# The base for the names of the makefile, executable command, etc.
BASE= hello_world

# Put all C++ (or other) source files here.  NO SPACES after continuation \'s.
SRCS= \
hello_world.cpp

# Put all header files here.  NO SPACES after continuation \'s.
HDRS= \

# Put any input files you want to be saved in tarballs (e.g., sample files).
INPFILE= \

###########################################################################
# 2. Generate names for object files, makefile, command to execute, tar file
########################################################################### 

# *** YOU should not edit these lines unless to change naming conventions ***

OBJS= $(addsuffix .o, $(basename $(SRCS)))
MAKEFILE= make_$(BASE)
COMMAND=  $(BASE).x
TARFILE= $(BASE).tar.gz
ZIPFILE= $(BASE).zip

###########################################################################
# 3. Commands and options for different compilers
########################################################################### 

#
# Compiler parameters
#
# CXX           Name of the C++ compiler to use
# CFLAGS        Flags to the C++ compiler
# CWARNS        Warning options for C++ compiler
# F90           Name of the fortran compiler to use (if relevant) 
# FFLAGS        Flags to the fortran compiler 
# LDFLAGS       Flags to the loader
# LIBS          A list of libraries 
#

CXX= g++
CFLAGS=  -g
CWARNS= -Wall -W -Wshadow -fno-common 
MOREFLAGS= -Wpedantic -Wpointer-arith -Wcast-qual -Wcast-align \
           -Wwrite-strings -fshort-enums 

# add relevant libraries and link options
LIBS=           
# LDFLAGS= -lgsl -lgslcblas 
LDFLAGS=

###########################################################################
# 4. Instructions to compile and link, with dependencies
########################################################################### 
all:    $(COMMAND) 

.SUFFIXES:
.SUFFIXES: .o .mod .f90 .f .cpp

#%.o:   %.mod 

# This is the command to link all of the object files together. 
#  For fortran, replace CXX by F90.
$(COMMAND): $(OBJS) $(MAKEFILE) 
	$(CXX) -o $(COMMAND) $(OBJS) $(LDFLAGS) $(LIBS)

# Command to make object (.o) files from C++ source files (assumed to be .cpp).
#  Add $(MOREFLAGS) if you want additional warning options.
%.o: %.cpp $(HDRS) $(MAKEFILE)
	$(CXX) -c $(CFLAGS) $(CWARNS) -o $@ $<

# Commands to make object (.o) files from Fortran-90 (or beyond) and
#  Fortran-77 source files (.f90 and .f, respectively).
.f90.mod:
	$(F90) -c $(F90FLAGS) -o $@ $< 
 
.f90.o: 
	$(F90) -c $(F90FLAGS) -o $@ $<
 
.f.o:   
	$(F90) -c $(FFLAGS) -o $@ $<
      
##########################################################################
# 5. Additional tasks      
##########################################################################
      
# Delete the program and the object files (and any module files)
clean:
	/bin/rm -f $(COMMAND) $(OBJS)
	/bin/rm -f $(MODIR)/*.mod
 
# Pack up the code in a compressed gnu tar file 
tarz:
	tar cfvz $(TARFILE) $(MAKEFILE) $(SRCS) $(HDRS) $(MODIR) $(INPFILE) 

# Pack up the code in a zip archive
zip:
	zip -r $(ZIPFILE) $(MAKEFILE) $(SRCS) $(HDRS) $(MODIR) $(INPFILE) 

##########################################################################
# That's all, folks!     
##########################################################################
50 | Wed Jun 12 12:10:05 2024 | Jacob Weiler | How to install AraSim on OSC | Software

# Installing AraSim on OSC

Re-adding this because I realized it had been deleted when I went looking for it.

Quick Links:
- https://github.com/ara-software/AraSim # AraSim github repo (the installation instructions at the bottom are mostly right)
- Once AraSim is downloaded: AraSim/UserGuideTex/AraSimGuide.pdf (the AraSim manual); you might have to download it locally if you can't view PDFs where you write code

Step 1:
We need to add the dependencies. AraSim needs several packages to run correctly. The easiest way to get these on OSC without a headache is to add the following to the .bashrc for your user.

cvmfs () {
    module load gnu/4.8.5
    export CC=`which gcc`
    export CXX=`which g++`
    if [ $# -eq 0 ]; then
        local version="trunk"
    elif [ $# -eq 1 ]; then
        local version=$1
    else
        echo "cvmfs: takes up to 1 argument, the version to use"
        return 1
    fi
    echo "Loading cvmfs for AraSim"
    echo "Using /cvmfs/ara.opensciencegrid.org/${version}/centos7/setup.sh"
    source "/cvmfs/ara.opensciencegrid.org/${version}/centos7/setup.sh"
    #export JUPYTER_CONFIG_DIR=$HOME/.jupyter
    #export JUPYTER_PATH=$HOME/.local/share/jupyter
    #export PYTHONPATH=/users/PAS0654/alansalgo1/.local/bin:/users/PAS0654/alansalgo1/.local/bin/pyrex:$PYTHONPATH
}
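If the argument handling in that function looks opaque, here is the same version-defaulting logic isolated into a runnable sketch (no CVMFS access needed; pick_version is just a demo name, and v1.0 is a hypothetical version tag):

```shell
# Same default-to-"trunk" behavior as the cvmfs() function above,
# using the ${1:-default} idiom instead of the if/elif chain
pick_version () {
    local version="${1:-trunk}"
    echo "/cvmfs/ara.opensciencegrid.org/${version}/centos7/setup.sh"
}
pick_version          # prints the trunk path
pick_version v1.0     # prints the path for a hypothetical tagged version
```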


If you want to view my bashrc
- /users/PAS1977/jacobweiler/.bashrc

Reload .bashrc
- source ~/.bashrc

Step 2:
Go to directory that you want to put AraSim and type: 
- git clone https://github.com/ara-software/AraSim.git
This will download the github repo

Step 3:
We need to source the environment and build with make:
- cd AraSim
- cvmfs
- make
Wait, and it should compile the code.

Step 4:
We want to do a test run with 100 neutrinos to make sure that it does *actually* run.
Try: ./AraSim SETUP/setup.txt
This errored for me (and probably will for you as well).
Switch from frequency domain to time domain in the setup.txt
- cd SETUP
- open setup.txt 
- scroll to bottom
- Change SIMULATION_MODE = 1
- save
- cd .. 
- ./AraSim SETUP/setup.txt 
This should run quickly, and now you have AraSim set up!
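The setup.txt edit in Step 4 can also be done with a sed one-liner. A sketch, demonstrated on a scratch file because I'm assuming the KEY=value line format; check your actual SETUP/setup.txt first:

```shell
# Demo on a scratch file; on OSC, point the sed line at SETUP/setup.txt
printf 'NNU=100\nSIMULATION_MODE=0\n' > setup_demo.txt
sed -i 's/^SIMULATION_MODE=.*/SIMULATION_MODE=1/' setup_demo.txt
grep SIMULATION_MODE setup_demo.txt    # prints SIMULATION_MODE=1
```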

40 | Thu Jul 25 16:50:43 2019 | Dustin Nguyen | Advice (not mine) on writing HEP stuff | Other

PDF of advice by Andy Buckley (U Glasgow) on writing a HEP thesis (and presumably HEP papers too) that was forwarded by John Beacom to the CCAPP mailing list a few months back. 

Attachment 1: thesis-writing-gotchas.pdf
17 | Mon Nov 20 08:31:48 2017 | Brian Clark and Oindree Banerjee | Fit a Function in ROOT | Analysis

Sometimes you need to fit a function to a histogram in ROOT. Attached is code showing how to do that in the simple case of a power-law fit.

To run the example, type "root fitSnr.C" on the command line. The code reads the source histogram (hstripe1snr.root, which is actually ANITA-2 satellite data). The result is stripe1snrfit.png.

Attachment 1: fitSnr.C
#include "TF1.h"
#include "TFile.h"
#include "TH1D.h"
#include "TCanvas.h"
#include "TStyle.h"
#include <iostream>

using namespace std;

void fitSnr();

void fitSnr()

{
	gStyle->SetLineWidth(4); //set some style parameters
	TFile *stripe1file = new TFile("hstripe1snr.root"); //import the file containing the histogram to be fit
	TH1D *hstripe1 = (TH1D*)stripe1file->Get("stripe1snr"); //get the histogram to be fit
	TCanvas c1("c1","c1",1000,800); //make a canvas
	hstripe1->Draw(""); //draw it
	c1.SetLogy(); //set a log axis

	//need to declare an equation
	//I want to fit for two parameters; in the equation these are [0] and [1]
	//so you'll need to re-write the equation to whatever you're trying to fit
	//ROOT wants the parameters to fit to be given as [0], [1], [2], etc.
	
	char equation1[150]; //declare a container for the equation
	sprintf(equation1,"([0]*(x^[1]))"); //declare the equation
	TF1 *fit1 = new TF1("PowerFit",equation1,20,50); //create a function to fit with, with the range being 20 to 50

	//now, we need to set the initial parameters of the fit
	//fit->SetParameter(0,H->GetRMS()); //this should be a good starting place for a standard deviation like variable
	//fit->SetParameter(1,H->GetMaximum()); //this should be a good starting place for amplitude like variable
	fit1->SetParameter(0,60000.); //for our example, we will manually choose this
	fit1->SetParameter(1,-3.);

	hstripe1->Fit("PowerFit","R"); //actually do the fit;
	fit1->Draw("same"); //draw the fit

	//now, we want to print out some parameters to see how good the fit was
	cout << "par0 " << fit1->GetParameter(0) << " par1 " << fit1->GetParameter(1) << endl;
	cout<<"chisquare "<<fit1->GetChisquare()<<endl;
	cout<<"Ndf "<<fit1->GetNDF()<<endl;
	cout<<"reduced chisquare "<<double(fit1->GetChisquare())/double(fit1->GetNDF())<<endl;
	cout<<"   "<<endl;
	c1.SaveAs("stripe1snrfit.png");

}
Attachment 2: hstripe1snr.root
Attachment 3: stripe1snrfit.png
stripe1snrfit.png
23 | Wed Jun 6 08:54:44 2018 | Brian Clark and Oindree Banerjee | How to Access Jacob's ROOT6 on Oakley

Source the attached env.sh file. Good to go!

Attachment 1: env.sh
export ROOTSYS=/users/PAS0174/osu8620/root-6.08.06
eval 'source /users/PAS0174/osu8620/root-6.08.06/builddir/bin/thisroot.sh'
export LD_INCLUDE_PATH=/users/PAS0174/osu8620/cint/libcint/build/include:$LD_INCLUDE_PATH
module load fftw3/3.3.5
module load gnu/6.3.0
module load python/3.4.2
module load cmake/3.7.2

#might need this, but probably not
#export CC=/usr/local/gcc/6.3.0/bin/gcc
Draft | Fri Jul 28 17:57:06 2017 | Brian Clark and Ian Best
3 | Wed Mar 22 18:01:23 2017 | Brian Clark | Advice for Using the Ray Trace Correlator | Analysis

If you are trying to use the Ray Trace Correlator with AraRoot, you will probably encounter some issues as you go. Here is some advice that Carl Pfendner found and Brian Clark compiled.

Please note that it is extremely important that your AntennaInfo.sqlite table in AraRoot contain the ICRR versions of both Testbed and Station1. Testbed seems to have fallen out of practice of being included in the SQL table. Also, Station1 is the ICRR (earliest) version of A1, unlike the ATRI version, which is logged as ARA01. Getting this wrong will cause seg faults in the initial setup of the timing and geometry arrays that seem unrelated to pure geometry files. If you get a seg fault in the "setupSizes" function or the Detector call of the "setupPairs" function, checking your SQL file is a good idea. AraRoot branch 3.13 has such a source table with Testbed and Station1 included.
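One way to poke at an sqlite file from the shell is the sqlite3 command-line tool. The demo below builds a throwaway database on the spot, since the exact table layout of AntennaInfo.sqlite isn't shown here; run ".schema" against the real file to see its tables:

```shell
# Throwaway demo database; substitute your real AntennaInfo.sqlite
# (its actual table names are not shown here -- use ".schema" to find them)
sqlite3 demo.sqlite "CREATE TABLE stations(name TEXT);
  INSERT INTO stations VALUES ('Testbed'),('Station1');
  SELECT name FROM stations;"
sqlite3 demo.sqlite ".tables"    # lists the tables in the file
```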

Which combination of Makefile/Makefile.arch/StandardDefinitions.mk works can be machine-specific (frustratingly). Sometimes the best StandardDefinitions.mk is found in the make_timing_arrays example.

Common Things to Check

1: Did you "make install" the Ray Trace Correlator after you made it?

2: Do you have the setup.txt file?

3: Do you have the "data" directory?

Common Errors

1: If the Ray Trace Correlator compiles, and you execute a binary, and get the following:

     ******** Begin Correlator ********, this one!
     Pre-icemodel test
     terminate called after throwing an instance of 'std::out_of_range'
     what():  basic_string::substr
     Aborted

Check to make sure you have the "data" directory.
 

16 | Thu Oct 26 08:44:58 2017 | Brian Clark | Find Other Users on OSC | Other

Okay, so our group has two "project" spaces on OSC (the Ohio Supercomputer Center). The first is for Amy's group and is a project workspace called "PAS0654". The second is the CCAPP Condo (literally, CCAPP has some pre-specified rental time, hence "condo") on OSC, and this is project PCON0003.

When you are signed up for the supercomputer, one of two things happens:

  1. You will be given a username under the PAS0654 group, in which case your username will be something like osu****. The Connolly home drive is /users/PAS0654/osu****. The Beatty home drive is /users/PAS0174/osu****. The CCAPP home drive is /users/PCON0003/pcon****.
  2. You will be given a username under the PCON0003 group, in which case your username will be something like cond****.

If you are given an osu**** username, you must make sure to be added to the CCAPP Condo so that you can use Ruby compute resources. It will not be automatic.
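To check which project groups your account actually belongs to, the standard Linux groups command works on any OSC login node:

```shell
# Print the unix groups (OSC project spaces) your account belongs to;
# on OSC these include names like PAS0654 or PCON0003
groups
id -Gn    # same information, handy in scripts
```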

Some current group members, and their OSC usernames. In parenthesis is the project space they are found in.

Current Users

osu0673: Brian Clark (PAS0654)

cond0068: Jorge Torres-Espinosa (PCON0003)

osu8619: Keith McBride (PAS0174)

osu9348: Julie Rolla (PAS0654)

osu9979: Lauren Ennesser (PAS0654)

osu6665: Amy Connolly (PAS0654)

Past Users

osu0426: Oindree Banerjee (PAS0654)

osu0668: Brian Dailey (PAS0654)

osu8620: Jacob Gordon (PAS0174)

osu8386: Sam Stafford (ANITA analysis in /fs/scratch/osu8386) (PAS0174)

cond0091: Judge Rajasekera (PCON0003)

18 | Tue Dec 12 17:38:36 2017 | Brian Clark | Data Analysis in R from day w/ Brian Connolly | Analysis

On Nov 28 2017, Brian Connolly came and visited and taught us how to do basic data analysis in R.

He in particular showed us how to do a Linear Discriminant Analysis (LDA) and Principal Component Analysis (PCA).

Attached are three of the data files Carl Pfendner prepared for us to analyze (ARA data, including simulation, force triggers, and RF triggers).

Also attached is some R code that shows how to set up the LDA and the PCA and how to plot their results. You are meant to run each line of the code in R by hand (this is not a self-contained R script).

Go here (https://www.r-project.org/) to learn how to download R. You will also probably have to install ggfortify. To do that, open an R session and type "install.packages('ggfortify')".

Attachment 1: pca_and_lda.R

#First we need to read in the data
dataSim <- read.table("output.txt.Simulation",sep=" ",header=TRUE)
dataRF <- read.table("output.txt.RFTriggers",sep=" ",header=TRUE)
dataPulser <- read.table("output.txt.CalPulsers",sep=" ",header=TRUE)

#Now we combine them into one data object
data <- rbind(dataRF,dataSim)

#Now we actually have to specify that we
#want to use the first 10 columns
data <- data[,1:10]

#make a principal component analysis object
pca <- prcomp(data,center=TRUE,scale=TRUE,retx=TRUE)

#Load a plotting library
library(ggfortify)

#then can plot
autoplot(prcomp(data))

#or, we can plot like this
#to get some color data points
labels<-c(rep(0,nrow(dataSim)),rep(1,nrow(dataRF)))
plot(pca$x[,1],pca$x[,2],col=labels+1)

#we can also do an LDA analysis

#we need to import the MASS library
library(MASS)
#and now we make the lda object
lda(data,grouping=labels)

#didn't write down any plotting unfortunately.
Attachment 2: data.zip
19 | Mon Mar 19 12:27:59 2018 | Brian Clark | How To Do an ARA Monitoring Report | Other

So, ARA has five stations down in the ice that are taking data. Weekly, a member of the collaboration checks on the detectors to make sure that they are healthy.

This means things like making sure they are triggering at approximately the right rates, seeing cal pulser events, that the box isn't too hot, etc.

Here are some resources to get you started. Usual ara username and password apply in all cases.

Also, the page where all of the plots live is here: http://aware.wipac.wisc.edu/

Thanks, and good luck monitoring! Ask someone who's done it before when in doubt.

Brian

20 | Tue Mar 20 09:24:37 2018 | Brian Clark | Get Started with Making Plots for IceMC | Software
First, anyone not familiar with the command line should familiarize themselves with it. It is the way we interact with computers through an interface called the terminal: https://www.codecademy.com/learn/learn-the-command-line
 
Second, here is the page for the software IceMC, which is the Monte Carlo software for simulating neutrinos for ANITA.
 
On that page are good instructions for downloading the software and how to run it. You will have the choice of running it on (1) a personal machine (if you want to use your personal Mac or Linux machine), (2) a queenbee laptop in the lab, or (3) a kingbee account, which I will send an email about shortly. Running IceMC requires a piece of statistics software called ROOT that can be somewhat challenging to install; it is already installed on kingbee and OSC, so it is easier to get started there. If you want to use kingbee, just try downloading and running. If you want to use OSC, you will first need to follow instructions to access a version installed there. Still getting that together.
 
After you have IceMC installed and running, you should start by replicating a set of important figures. There is lots of physics in them, so hopefully you will learn a lot by doing so. The figures we want to replicate are stored here: http://radiorm.physics.ohio-state.edu/elog/Updates+and+Results/29
 
So, familiarize yourself with the command line, and then see if you can get ROOT and IceMC installed and running. Then plots.
21 | Fri Mar 30 12:06:11 2018 | Brian Clark | Get icemc running on Kingbee and Unity | Software

So, icemc has some needs (like MathMore, and preferably ROOT 6) that aren't installed on kingbee and unity.

Here's what I did to get icemc running on kingbee.

Throughout, $HOME=/home/clark.2668

  • Tried to install a new version of ROOT (6.08.06, the version Jacob uses on OSC) with CMake. Failed because the kingbee version of cmake is too old.
  • Downloaded a new version of CMake (3.11.0); failed because kingbee doesn't have C++11 support.
  • Downloaded a new version of gcc (7.3) and installed it in $HOME/QCtools/source/gcc-7.3. So I installed it "in place".
  • Then compiled the new version of CMake, also in place, so it's in $HOME/QCtools/source/cmake-3.11.0.
  • Then tried to compile ROOT, but it got upset because it couldn't find CXX11; so I added "export CC=$HOME/QCtools/source/gcc-7.3/bin/gcc" and then it could find it.
  • Then tried to compile ROOT, but couldn't because ROOT needs python > 2.7, and kingbee has python 2.6.
  • So, downloaded the latest bleeding-edge version of python 3 (python 3.6.5) and installed it with optimization flags. It's installed in $HOME/QCtools/tools/python-3.6.5-build.
  • Tried to compile ROOT and realized that I also need the shared library files for python. So went back and compiled with --enable-shared as an argument to ./configure.
  • Had to set the python binary, include, and library files by hand in the CMakeCache.txt file.
22 | Sun Apr 29 21:44:15 2018 | Brian Clark | Access Deep ARA Station Data | Analysis

Quick C++ program for pulling waveforms out of deep ARA station data. If you are using AraRoot, you would put this inside your "analysis" directory and add it to your CMakeLists.txt.

Attachment 1: plot_deep_station_event.cxx
////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////
////		plot_deep_station_event.cxx 
////		plot deep station event
////
////		Apr 2018,  clark.2668@osu.edu
////////////////////////////////////////////////////////////////////////////////

//Includes
#include <iostream>

//AraRoot Includes
#include "RawAtriStationEvent.h"
#include "UsefulAtriStationEvent.h"
#include "AraEventCalibrator.h"

//ROOT Includes
#include "TTree.h"
#include "TFile.h"
#include "TGraph.h"
#include "TCanvas.h"

using namespace std;

int main(int argc, char **argv)
{
	
	//check that they've given me a run file
	if(argc<2) {
		std::cout << "Usage\n" << argv[0] << " <run file>\n";
		std::cout << "e.g.\n" << argv[0] << " event1841.root\n";
		return 0;
	}
  
	printf("------------------------------------------------------------------------\n");
	printf("%s\n", argv[0]);
	printf("runFileName %s\n", argv[1]);
	printf("------------------------------------------------------------------------\n");
	
	
	TFile *fp = TFile::Open(argv[1]); //the run file is argv[1], matching the usage message above
	if(!fp) {
		std::cerr << "Can't open file\n";
		return -1;
	}
	TTree *eventTree = (TTree*) fp->Get("eventTree");
	if(!eventTree) {
		std::cerr << "Can't find eventTree\n";
		return -1;
	}
		
	RawAtriStationEvent *rawAtriEvPtr = 0; //empty pointer	   
	eventTree->SetBranchAddress("event",&rawAtriEvPtr); //set the branch address
	Int_t run_num; //run number of event
	eventTree->SetBranchAddress("run", &run_num); //set the branch address
	
	int numEntries=eventTree->GetEntries(); //get the number of events
	int stationId=0;
	eventTree->GetEntry(0);
	stationId = rawAtriEvPtr->stationId; //assign the station id number
		
	AraEventCalibrator *calib = AraEventCalibrator::Instance(); //make a calibrator

	for(int event=0;event<numEntries;event++) {
	//for(int event=0;event<700;event++) {
		
		eventTree->GetEntry(event); //get the event
		int evt_num = rawAtriEvPtr->eventNumber; //check the event number
		if(rawAtriEvPtr->isCalpulserEvent()==0) continue; //bounce out if it's not a cal pulser
		UsefulAtriStationEvent *realAtriEvPtr_fullcalib = new UsefulAtriStationEvent(rawAtriEvPtr, AraCalType::kLatestCalib); //make the event

		TGraph *waveforms[16]={0};
		for(int i=0; i<16; i++){
			waveforms[i]=realAtriEvPtr_fullcalib->getGraphFromRFChan(i);
		}
		TCanvas *canvas = new TCanvas("","",1000,1000);
		canvas->Divide(4,4);
		for(int i=0; i<16; i++){
			canvas->cd(i+1);
			waveforms[i]->Draw("alp");
		}
		char title[200];
		sprintf(title,"waveforms_station%d_run%d_event%d.pdf",stationId,run_num,evt_num); //format needs one %d per argument
		canvas->SaveAs(title);
		for(int i=0; i<16; i++) delete waveforms[i]; //clean up the graphs
		delete realAtriEvPtr_fullcalib; //and the calibrated event, so we don't leak per event
		delete canvas;		
	}	
}
26 | Sun Aug 26 19:23:57 2018 | Brian Clark | Get a quick start with AraSim on OSC Oakley | Software

These are instructions I wrote for Rishabh Khandelwal to facilitate a "fast" start on Oakley at OSC. It was to help him run AraSim in batch jobs on Oakley.

It basically has you use software dependencies that I pre-installed on my OSC account at /users/PAS0654/osu0673/PhasedArraySimulation.

It also gives a "batch_processing" folder with examples for how to successfully run AraSim batch jobs (with correct output file management) on Oakley.

Sourcing these exact dependencies will not work on Owens or Ruby, sorry.

Attachment 1: forRishabh.tar.gz
27 | Mon Oct 1 19:06:59 2018 | Brian Clark | Code to Compute Effective Volumes in AraSim | Analysis

Here is some C++ code and an associated makefile to find effective volumes from AraSim output files.

It computes error bars on the effective volumes using the relevant AraSim function.

Compile like "make -f veff.mk"

Run like "./veff thrown_radius thrown_depth AraOut.1.root AraOut.2.root...."
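The core arithmetic the code performs (before the error bars) is simple enough to sanity-check by hand. With made-up numbers for the thrown geometry and weight sum:

```shell
# Veff = [ pi * r^2 * d / 1e9 ] * (sum of weights / N thrown) * 4*pi,
# giving km^3 sr when r and d are in meters; all inputs below are examples
awk 'BEGIN {
  pi = 3.141592653589793;
  r = 3000; d = 3000;            # thrown radius and depth [m] (example values)
  totweight = 12.5; N = 10000;   # weight sum and events thrown (example values)
  vtot = pi*r*r*d/1e9;           # geometric cylinder volume [km^3]
  veff = vtot * totweight/N * 4*pi;
  printf "vtot=%.3f km^3  veff=%.4f km^3 sr\n", vtot, veff;
}'
# prints vtot=84.823 km^3  veff=1.3324 km^3 sr
```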

Attachment 1: veff.cc
#include <iostream>
#include <fstream>
#include <sstream>
#include <math.h>
#include <string>
#include <stdio.h>
#include <stdlib.h>
#include <vector>
#include <time.h>
#include <cstdio>
#include <cstdlib>
#include <unistd.h>
#include <cstring>
#include <unistd.h>

#include "TTreeIndex.h"
#include "TChain.h"
#include "TFile.h"
#include "TTree.h"
#include "TMath.h"
#include "TStyle.h" //needed for gStyle below

#include "Event.h"
#include "Detector.h"
#include "Report.h"
#include "Vector.h"
#include "counting.hh"

using namespace std;

//Get effective volumes with error bars

class EarthModel;

int main(int argc, char **argv)
{
	gStyle->SetOptStat(111111);
	gStyle->SetOptDate(1);
	
	if(argc<4){
		cout<<"Not enough arguments! Abort run."<<endl;
		cout<<"Run like: ./veff volradius voldepth AraOut1.root AraOut2.root ..."<<endl;
		return -1;
	}

	double volradius = atof(argv[1]);
	double voldepth = atof(argv[2]);

	TChain *AraTree = new TChain("AraTree");
	TChain *AraTree2 = new TChain("AraTree2");

	for(int i=3; i<argc;i++){
		AraTree->Add(string(argv[i]).c_str());
		AraTree2->Add(string(argv[i]).c_str());
	}

	Report *report = 0;
	Event *event=0;
	AraTree2->SetBranchAddress("report",&report);
	AraTree2->SetBranchAddress("event",&event);

	int totnthrown = AraTree2->GetEntries();
	cout << "Total number of events: " << totnthrown << endl;

	int NBINS=10;
	double eventsfound_binned[NBINS];
	for(int i=0; i<NBINS; i++) eventsfound_binned[i]=0.;

	double totweight=0;
	for(int iEvt2=0; iEvt2<totnthrown; iEvt2++){
		AraTree2->GetEntry(iEvt2);
		if(report->stations[0].Global_Pass<=0) continue;
		double weight = event->Nu_Interaction[0].weight;
		if(weight > 1.0) continue;
		totweight += weight;

		int index_weights = Counting::findWeightBin(log10(weight));
		if(index_weights<NBINS) eventsfound_binned[index_weights]++;
	}

	double error_minus=0.;
	double error_plus=0.;
	Counting::findErrorOnSumWeights(eventsfound_binned,error_plus,error_minus);

	double vtot = TMath::Pi() * double(volradius) * double(volradius) * double(voldepth) / 1.e9; //answer in km^3
	double veff = vtot * totweight / double(totnthrown) * 4. * TMath::Pi(); //answer in km^3 sr
	double veff_p  = vtot * (error_plus) / double(totnthrown) * 4. * TMath::Pi(); //answer in km^3 sr
	double veff_m  = vtot * (error_minus) / double(totnthrown) * 4. * TMath::Pi(); //answer in km^3 sr
                                                                                                                          
	printf("volthrown: %.6f \n  totweight: %.6f + %.6f - %.6f \n  Veff: %.6f + %.6f - %.6f \n",
			vtot,
			totweight, error_plus, error_minus,
			veff, veff_p, veff_m
			);
	return 0;
}
Attachment 2: veff.mk
#############################################################################
##
##Changes:
##line 54 - OBJS = .... add filename.o      .... del oldfilename.o
##line 55 - CCFILE = .... add filename.cc     .... del oldfilename.cc
##line 58 - PROGRAMS = filename
##line 62 - filename : $(OBJS)
##
##############################################################################
include StandardDefinitions.mk

#Site Specific  Flags
ifeq ($(strip $(BOOST_ROOT)),)
	BOOST_ROOT = /usr/local/include
endif
SYSINCLUDES	= -I/usr/include -I$(BOOST_ROOT)
SYSLIBS         = -L/usr/lib
DLLSUF = ${DllSuf}
OBJSUF = ${ObjSuf}
SRCSUF = ${SrcSuf}

CXX = g++

#Generic and Site Specific Flags
CXXFLAGS     += $(INC_ARA_UTIL) $(SYSINCLUDES) 
LDFLAGS      += -g $(LD_ARA_UTIL) -I$(BOOST_ROOT) $(ROOTLDFLAGS) -L. 

# copy from ray_solver_makefile (removed -lAra part)

# added for Fortran to C++

LIBS	= $(ROOTLIBS) -lMinuit $(SYSLIBS) 
GLIBS	= $(ROOTGLIBS) $(SYSLIBS)


LIB_DIR = ./lib
INC_DIR = ./include

#ROOT_LIBRARY = libAra.${DLLSUF}

OBJS = Vector.o EarthModel.o IceModel.o Trigger.o Ray.o Tools.o Efficiencies.o Event.o Detector.o Position.o Spectra.o RayTrace.o RayTrace_IceModels.o signal.o secondaries.o Settings.o Primaries.o counting.o RaySolver.o Report.o eventSimDict.o veff.o
CCFILE = Vector.cc EarthModel.cc IceModel.cc Trigger.cc Ray.cc Tools.cc Efficiencies.cc Event.cc Detector.cc Spectra.cc Position.cc RayTrace.cc signal.cc secondaries.cc RayTrace_IceModels.cc Settings.cc Primaries.cc counting.cc RaySolver.cc Report.cc veff.cc
CLASS_HEADERS = Trigger.h Detector.h Settings.h Spectra.h IceModel.h Primaries.h Report.h Event.h secondaries.hh #need to add headers which added to Tree Branch

PROGRAMS = veff

all : $(PROGRAMS) 
	
veff : $(OBJS)
	$(LD) $(OBJS) $(LDFLAGS)  $(LIBS) -o $(PROGRAMS) 
	@echo "done."

#The library
$(ROOT_LIBRARY) : $(LIB_OBJS) 
	@echo "Linking $@ ..."
ifeq ($(PLATFORM),macosx)
# We need to make both the .dylib and the .so
		$(LD) $(SOFLAGS)$@ $(LDFLAGS) $(G77LDFLAGS) $^ $(OutPutOpt) $@
ifneq ($(subst $(MACOSX_MINOR),,1234),1234)
ifeq ($(MACOSX_MINOR),4)
		ln -sf $@ $(subst .$(DllSuf),.so,$@)
else
		$(LD) -dynamiclib -undefined $(UNDEFOPT) $(LDFLAGS) $(G77LDFLAGS) $^ \
		   $(OutPutOpt) $(subst .$(DllSuf),.so,$@)
endif
endif
else
	$(LD) $(SOFLAGS) $(LDFLAGS) $(G77LDFLAGS) $(LIBS) $(LIB_OBJS) -o $@
endif

##-bundle

#%.$(OBJSUF) : %.$(SRCSUF)
#	@echo "<**Compiling**> "$<
#	$(CXX) $(CXXFLAGS) -c $< -o  $@

%.$(OBJSUF) : %.C
	@echo "<**Compiling**> "$<
	$(CXX) $(CXXFLAGS) -c $< -o  $@

%.$(OBJSUF) : %.cc
	@echo "<**Compiling**> "$<
	$(CXX) $(CXXFLAGS) -c $< -o  $@

# added for fortran code compiling
%.$(OBJSUF) : %.f
	@echo "<**Compiling**> "$<
	$(G77) -c $<


eventSimDict.C: $(CLASS_HEADERS)
	@echo "Generating dictionary ..."
	@ rm -f *Dict* 
	rootcint $@ -c ${INC_ARA_UTIL} $(CLASS_HEADERS) ${ARA_ROOT_HEADERS} LinkDef.h

clean:
	@rm -f *Dict*
	@rm -f *.${OBJSUF}
	@rm -f $(LIBRARY)
	@rm -f $(ROOT_LIBRARY)
	@rm -f $(subst .$(DLLSUF),.so,$(ROOT_LIBRARY))	
	@rm -f $(TEST)
#############################################################################
29 | Fri Nov 9 00:44:09 2018 | Brian Clark | Transfer files from IceCube Data Warehouse to OSC

Brian had to move ~7 TB of data from the IceCube data warehouse to OSC.

To do this, he used the gridftp software. The advantage is that gridftp is optimized for large file transfers, and will manage the data transfer better than something like scp or rsync.

Note that this utilizes the gridftp software installed on OSC, but doesn't formally use the globus endpoint described here: https://www.osc.edu/resources/getting_started/howto/howto_transfer_files_using_globus_connect. This is because IceCube doesn't have a formal globus endpoint to connect to. The formal globus endpoint would have been even easier if it were available, but oh well...

Setup goes as follows:

  1. Follow the IceCube instructions for getting an OSG certificate
    1. Go through CILogon (https://cilogon.org/) and generate and download a certificate
  2. Install your certificate on the IceCube machines
    1. Move your certificate to the following place on IceCube: ~/.globus/usercred.p12
    2. Change the permissions on this certificate: chmod 600 ~/.globus/usercred.p12
    3. Get the "subject" of this key: openssl pkcs12 -in ~/.globus/usercred.p12 -nokeys | grep subject
    4. Copy the subject line into your IceCube LDAP account
      1. Select "Edit your profile"
      2. Enter IceCube credentials
      3. Paste the subject into the "x509 Subject DN" box
  3. Install your certificates on the OSC machines
    1. Follow the same instructions as for IceCube to install the globus credentials, but you don't need to do the IceCube LDAP part

How to actually make a transfer:

  1. Initialize a proxy certificate on OSC: grid-proxy-init -bits 1024
  2. Use globus-url-copy to move a file, for example: globus-url-copy -r gsiftp://gridftp.icecube.wisc.edu/data/wipac/ARA/2016/unblinded/L1/ARA03/ 2016/ &
    1. I'm using the command "globus-url-copy"
    2. "-r" says to transfer recursively
    3. "gsiftp://gridftp.icecube.wisc.edu/data/wipac/ARA/2016/unblinded/L1/ARA03/" is the entire directory I'm trying to copy
    4. "2016/" is the directory I'm copying them to
    5. "&" says do this in the background once launched
  3. Note that it's kind of syntactically picky:
    1. To copy a directory, the source path name must end in "/"
    2. To copy a directory, the destination path name must also end in "/"

  30   Tue Nov 27 09:43:37 2018 Brian ClarkPlot Two TH2Ds with Different Color PalettesAnalysis

Say you want to plot two TH2Ds on the same pad but with different color palettes?

This is possible, but requires a touch of fancy ROOT-ing. It's actually not much code; it just takes a lot of time to figure out. So here it is.

Fair warning: the order of all the gPad->Modified() calls etc. seems very important for avoiding a seg fault.

I include the main program (demo.cc) and the Makefile.

Attachment 1: demo.cc
///////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////
///////		demo.cc
//////		Nov 2018, Brian Clark 
//////		Demonstrate how to draw 2 TH2D's with two different color palettes (needs ROOT6)
////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////

//C++
#include <iostream>
#include <ctime> // for time(0), used to seed TRandom3

//ROOT Includes
#include "TH2D.h"
#include "TCanvas.h"
#include "TStyle.h"
#include "TExec.h"
#include "TColor.h"
#include "TPaletteAxis.h"
#include "TRandom3.h"

using namespace std;

int main(int argc, char **argv)
{

	gStyle->SetOptStat(0);

	TH2D *h1 = new TH2D("h1","h1",100,-10,10,100,-10,10);
	TH2D *h2 = new TH2D("h2","h2",100,-10,10,100,-10,10);

	TRandom3 *R = new TRandom3(time(0));
	int count=0;
	while(count<10000000){
		h1->Fill(R->Gaus(-4.,1.),R->Gaus(-4.,1.));
		count++;
	}

	TRandom3 *R2 = new TRandom3(time(0)+100);
	count=0;
	while(count<10000000){
		h2->Fill(R2->Gaus(4.,1.),R2->Gaus(4.,1.));
		count++;
	}

	TCanvas *c = new TCanvas("c","c",1100,850);

	h1->Draw("colz");
		TExec *ex1 = new TExec("ex1","gStyle->SetPalette(kValentine);");
		ex1->Draw();
		h1->Draw("colz same");
	gPad->SetLogz();
	gPad->Modified();
	gPad->Update();

	TPaletteAxis *palette1 = (TPaletteAxis*)h1->GetListOfFunctions()->FindObject("palette");
	palette1->SetX1NDC(0.7);
	palette1->SetX2NDC(0.75);
	palette1->SetY1NDC(0.1);
	palette1->SetY2NDC(0.95);

	gPad->SetRightMargin(0.3);
	gPad->SetTopMargin(0.05);

	h2->Draw("same colz");
		TExec *ex2 = new TExec("ex2","gStyle->SetPalette(kCopper);"); //TColor::InvertPalette();
		ex2->Draw();
		h2->Draw("same colz");
	gPad->SetLogz();
	gPad->Modified();
	gPad->Update();

	TPaletteAxis *palette2 = (TPaletteAxis*)h2->GetListOfFunctions()->FindObject("palette");
	palette2->SetX1NDC(0.85);
	palette2->SetX2NDC(0.90);
	palette2->SetY1NDC(0.1);
	palette2->SetY2NDC(0.95);

	h1->SetTitle("");
	h1->GetXaxis()->SetTitle("X-Value");
	h1->GetYaxis()->SetTitle("Y-Value");

	h1->GetZaxis()->SetTitle("First Histogram Events");
	h2->GetZaxis()->SetTitle("Second Histogram Events");

	h1->GetYaxis()->SetTitleSize(0.05);
	h1->GetXaxis()->SetTitleSize(0.05);
	h1->GetYaxis()->SetLabelSize(0.045);
	h1->GetXaxis()->SetLabelSize(0.045);

	h1->GetZaxis()->SetTitleSize(0.045);
	h2->GetZaxis()->SetTitleSize(0.045);
	h1->GetZaxis()->SetLabelSize(0.04);
	h2->GetZaxis()->SetLabelSize(0.04);
	
	h1->GetYaxis()->SetTitleOffset(1);
	h1->GetZaxis()->SetTitleOffset(1);
	h2->GetZaxis()->SetTitleOffset(1);

	c->SetLogz();
	c->SaveAs("test.png");

}
Attachment 2: Makefile
LDFLAGS=-L${ARA_UTIL_INSTALL_DIR}/lib -L${shell root-config --libdir}
CXXFLAGS=-I${ARA_UTIL_INSTALL_DIR}/include -I${shell root-config --incdir}
LDLIBS += $(shell $(ROOTSYS)/bin/root-config --libs)
Attachment 3: test.png
  32   Mon Dec 17 21:16:31 2018 Brian ClarkRun over many data files in parallel 

To analyze data, we sometimes need to run over many thousands of runs at once. To do this in parallel, we can submit a job for every run we want to do. This will proceed in several steps:

  1. We need to prepare an analysis program.
    1. This is demo.cxx.
    2. The program will take an input data file and an output location.
    3. The program will do some analysis on each event, and then write the result of that analysis to an output file labeled with the same number as the input file.
  2. We need to prepare a job script for PBS.
    1. This is "run.sh"; this is the set of instructions to be submitted to the cluster.
    2. The instructions say to:
      1. Source a shell environment
      2. Run the executable
      3. Move the output ROOT file to the final output location.
    3. Note that we're telling the program we wrote in step 1 to write to the node-local $TMPDIR, and then moving the result to our final output directory at the end. This is better for cluster performance.
  3. We need to make a list of data files to run over
    1. We can do this on OSC by running ls -d -1 /fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event*.root > run_list.txt
    2. This places the full path to the ROOT files in that folder into a list called run_list.txt that we can loop over.
  4. Finally, we need a script that will submit all of the jobs to the cluster.
    1. This is "submit_jobs.sh".
    2. This loops over all the files in our run_list.txt and submits a run.sh job for each of them.
    3. This is also where we define the $RUNDIR (where the code is to be executed) and the $OUTPUTDIR (where the output products are to be stored)

Once you've generated all of these output files, you can run over the output files only to make plots and such.
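As a side note on the bookkeeping, the number embedded in each input path can be recovered directly in the shell, matching the `outputs_run%d.root` naming that demo.cxx uses. A small sketch, assuming the number in the file name is the run number (as the `eventNNNN.root` sym_links naming above suggests):

```shell
#!/bin/bash
# Extract the run number from a raw-data path like .../event1000.root
path="/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1000.root"
base=$(basename "$path" .root)   # strip directory and extension -> event1000
run=${base#event}                # strip the "event" prefix -> 1000
echo "outputs_run${run}.root"    # the name demo.cxx would write for this run
```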
Attachment 1: demo.cxx
////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////
////  demo.cxx 
////  demo
////
////  Nov 2018
////////////////////////////////////////////////////////////////////////////////

//Includes
#include <iostream>
#include <string>
#include <sstream>

//AraRoot Includes
#include "RawAtriStationEvent.h"
#include "UsefulAtriStationEvent.h"

//ROOT Includes
#include "TTree.h"
#include "TFile.h"
#include "TGraph.h"

using namespace std;

RawAtriStationEvent *rawAtriEvPtr;

int main(int argc, char **argv)
{

	if(argc<3) {
		std::cout << "Usage\n" << argv[0] << " <input_file> <output_location> "<<endl;
		return -1;
	}

	/*
	arguments
	0: exec
	1: input data file
	2: output location
	*/
	
	TFile *fpIn = TFile::Open(argv[1]);
	if(!fpIn) {
		std::cout << "Can't open file\n";
		return -1;
	}
	TTree *eventTree = (TTree*) fpIn->Get("eventTree");
	if(!eventTree) {
		std::cout << "Can't find eventTree\n";
		return -1;
	}
	eventTree->SetBranchAddress("event",&rawAtriEvPtr);
	int run;
	eventTree->SetBranchAddress("run",&run);
	eventTree->GetEntry(0);
	printf("Filter Run Number %d \n", run);

	char outfile_name[400];
	sprintf(outfile_name,"%s/outputs_run%d.root",argv[2],run);

	TFile *fpOut = TFile::Open(outfile_name, "RECREATE");
	TTree* outTree = new TTree("outTree", "outTree");
	int WaveformLength[16];
	outTree->Branch("WaveformLength", &WaveformLength, "WaveformLength[16]/I"); // /I to match the int array
	
	Long64_t numEntries=eventTree->GetEntries();

	for(Long64_t event=0;event<numEntries;event++) {
		eventTree->GetEntry(event);
		UsefulAtriStationEvent *realAtriEvPtr = new UsefulAtriStationEvent(rawAtriEvPtr, AraCalType::kLatestCalib);
		for(int i=0; i<16; i++){
			TGraph *gr = realAtriEvPtr->getGraphFromRFChan(i);
			WaveformLength[i] = gr->GetN();
			delete gr;
		}		
		outTree->Fill();
		delete realAtriEvPtr;
	} //loop over events
	
	fpOut->Write();
	fpOut->Close();
	
	fpIn->Close();
	delete fpIn;
}
Attachment 2: run.sh
#!/bin/bash
#PBS -l nodes=1:ppn=1
#PBS -l mem=4GB
#PBS -l walltime=00:05:00
#PBS -A PAS0654
#PBS -e /fs/scratch/PAS0654/shell_demo/err_out_logs
#PBS -o /fs/scratch/PAS0654/shell_demo/err_out_logs

# you should change the -e and -o to write your 
# log files to a location of your preference

# source your own shell script here
eval 'source /users/PAS0654/osu0673/A23_analysis/env.sh'

# $RUNDIR was defined in the submission script 
# along with $FILE and $OUTPUTDIR

cd $RUNDIR

# $TMPDIR is the node-local scratch disk of this specific node
# it's the only variable we didn't have to define

./bin/demo $FILE $TMPDIR 

# after we're done
# we copy the results to the $OUTPUTDIR

pbsdcp $TMPDIR/'*.root' $OUTPUTDIR
Attachment 3: run_list.txt
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1000.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1001.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1002.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1004.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1005.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1006.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1007.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1009.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1010.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1011.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1012.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1014.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1015.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1016.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1017.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1019.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1020.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1021.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1022.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1024.root
Attachment 4: submit_jobs.sh
#!/bin/bash

#where should the outputs be stored?
OutputDir="/fs/scratch/PAS0654/shell_demo/outputs"
echo '[ Processed file output directory: ' $OutputDir ' ]'
export OutputDir

#where is your executable compiled?
RunDir="/users/PAS0654/osu0673/A23_analysis/araROOT"
export RunDir

#define the list of runs to execute on
readfile=run_list.txt

counter=0
while read line1
do
	qsub -v RUNDIR=$RunDir,OUTPUTDIR=$OutputDir,FILE=$line1 -N 'job_'$counter run.sh
	counter=$((counter+1))
done < $readfile
  33   Mon Feb 11 21:58:26 2019 Brian ClarkGet a quick start with icemc on OSCSoftware

Follow the instructions in the attached "getting_running_with_anita_stuff.pdf" file to download icemc, compile it, generate results, and plot those results.

Attachment 1: sample_bash_profile.sh
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/.local/bin:$HOME/bin

export PATH
Attachment 2: sample_bashrc.sh
# .bashrc

source bashrc_anita.sh

# we also want to set two more environment variables that ANITA needs
# you should update ICEMC_SRC_DIR and ICEMC_BUILD_DIR to wherever you
# downloaded icemc to

export ICEMC_SRC_DIR=/path/to/icemc #change this line!
export ICEMC_BUILD_DIR=/path/to/icemc #change this line!
export DYLD_LIBRARY_PATH=${ICEMC_SRC_DIR}:${ICEMC_BUILD_DIR}:${DYLD_LIBRARY_PATH}
Attachment 3: test_plot.cc
//C++ includes
#include <iostream>

//ROOT includes
#include "TCanvas.h"
#include "TStyle.h"
#include "TH1D.h"
#include "TFile.h"
#include "TTree.h"

using namespace std;

int main(int argc, char *argv[])
{

	if(argc<2)
	{
		cout << "Not enough arguments! Stop run. " << endl;
		return -1;
	}

	/*
	we're going to make a histogram, and set some parameters for its X and Y axes
	*/
	TH1D *nuflavorint_hist = new TH1D("nuflavorint", "",3,1,4);  
	nuflavorint_hist->SetTitle("Neutrino Flavors");
	nuflavorint_hist->GetXaxis()->SetTitle("Neutrino Flavors (1=e, 2=muon, 3=tau)");
	nuflavorint_hist->GetYaxis()->SetTitle("Weighted Fraction of Total Detected Events");
	nuflavorint_hist->GetXaxis()->SetTitleOffset(1.2);
	nuflavorint_hist->GetYaxis()->SetTitleOffset(1.2);
	nuflavorint_hist->GetXaxis()->CenterTitle();
	nuflavorint_hist->GetYaxis()->CenterTitle();

	for(int i=1; i < argc; i++)
	{ // loop over the input files

		//now we are going to load the icefinal.root file and draw in the "passing_events" tree, which stores info
		
		string readfile = string(argv[i]);
		TFile *AnitaFile = new TFile(( readfile ).c_str());
		cout << "AnitaFile" << endl;
		TTree *passing_events = (TTree*)AnitaFile->Get("passing_events");
		cout << "Reading AnitaFile..." << endl;

		//declare three variables we are going to use later
		
		int num_pass;               // number of entries (ultra-neutrinos);
		double weight;              // weight of neutrino counts;
		int nuflavorint;              // neutrino flavors;
		
		num_pass = passing_events->GetEntries();
		cout << "num_pass is " << num_pass << endl;
		
		/*PRIMARIES VARIABLES*/

		//set the "branch" of the tree which stores specific pieces of information
		
		passing_events->SetBranchAddress("weight", &weight);
		passing_events->SetBranchAddress("nuflavor", &nuflavorint);

		//loop over all the events in the tree
	
		for (int k=0; k < num_pass; k++)
		{
			passing_events->GetEvent(k);
			nuflavorint_hist->Fill(nuflavorint, weight); //fill the histogram with this value and this weight
		
		} // CLOSE FOR LOOP OVER NUMBER OF EVENTS
		
	} // CLOSE FOR LOOP OVER NUMBER OF INPUT FILES

	//set up some parameters to make things look pretty
	gStyle->SetHistFillColor(0);
	gStyle->SetHistFillStyle(1);
	gStyle->SetHistLineColor(1);
	gStyle->SetHistLineStyle(0);
	gStyle->SetHistLineWidth(2.5); //Setup plot Style

	//make a "canvas" to draw on

	TCanvas *c4 = new TCanvas("c4", "nuflavorint", 1100,850);
	gStyle->SetOptTitle(1);
	gStyle->SetStatX(0.33);
	gStyle->SetStatY(0.87);
	nuflavorint_hist->Draw("HIST"); //draw on it
			
	//Save Plots
	
	//make the line thicker and then save the result
	gStyle->SetHistLineWidth(9);
	c4->SaveAs("nuflavorint.png");
	gStyle->SetHistLineWidth(2);
	c4->SaveAs("nuflavorint.pdf");

	delete c4; //clean up
	return 0; //return successfully
	
}
Attachment 4: bashrc_anita.sh
# .bashrc

# Source global definitions
if [ -f /etc/bashrc ]; then
	. /etc/bashrc
fi

module load cmake/3.11.4
module load gnu/7.3.0
export CC=`which gcc`
export CXX=`which g++`

export BOOST_ROOT=/fs/project/PAS0654/shared_software/anita/owens_pitzer/build/boost_build
export LD_LIBRARY_PATH=${BOOST_ROOT}/stage/lib:$LD_LIBRARY_PATH
export BOOST_LIB=$BOOST_ROOT/stage/lib
export LD_LIBRARY_PATH=$BOOST_LIB:$LD_LIBRARY_PATH

export ROOTSYS=/fs/project/PAS0654/shared_software/anita/owens_pitzer/build/root
eval 'source /fs/project/PAS0654/shared_software/anita/owens_pitzer/build/root/bin/thisroot.sh'
Attachment 5: getting_running_with_anita_stuff.pdf
Attachment 6: getting_running_with_anita_stuff.pptx
Attachment 7: test_plot.mk
# Makefile for the ROOT test programs.  # This Makefile shows nicely how to compile and link applications
# using the ROOT libraries on all supported platforms.
#
# Copyright (c) 2000 Rene Brun and Fons Rademakers
#
# Author: Fons Rademakers, 29/2/2000

include Makefile.arch

################################################################################
# Site specific flags
################################################################################
# Toggle these as needed to get things to install

#BOOSTFLAGS = -I boost_1_48_0
# commented out for kingbee and older versions of gcc
ANITA3_EVENTREADER=1

# Uncomment to enable healpix 
#USE_HEALPIX=1

# Uncomment to disable explicit vectorization (but will do nothing if ANITA_UTIL is not available) 
#VECTORIZE=1


# The ROOT flags are added to the CXXFLAGS in the .arch file
# so this should be simpler...
ifeq (,$(findstring -std=, $(CXXFLAGS)))
	ifeq ($(shell test $(GCC_MAJOR) -lt 5; echo $$?),0)
		ifeq ($(shell test $(GCC_MINOR) -lt 5; echo $$?),0)
			CXXFLAGS += -std=c++0x
		else
			CXXFLAGS += -std=c++11
		endif
	endif
endif

################################################################################

# If not compiling with C++11 (or later) support, all occurrences of "constexpr"
# must be replaced with "const", because "constexpr" is a keyword
# which pre-C++11 compilers do not support.
# ("constexpr" is needed in the code to perform in-class initialization
# of static non-integral member objects, i.e.:
#		static const double c_light = 2.99e8;
# which works in C++03 compilers, must be modified to:
#		static constexpr double c_light = 2.99e8;
# to work in C++11, but adding "constexpr" breaks C++03 compatibility.
# The following compiler flag defines a preprocessor macro which is
# simply:
#		#define constexpr const
# which replaces all instances of the text "constexpr" and replaces it
# with "const".
# This preserves functionality while only affecting very specific semantics.

ifeq (,$(findstring -std=c++1, $(CXXFLAGS)))
	CPPSTD_FLAGS = -Dconstexpr=const
endif



# Uses the standard ANITA environment variable to figure
# out if ANITA libs are installed
ifdef ANITA_UTIL_INSTALL_DIR
	ANITA_UTIL_EXISTS=1
	ANITA_UTIL_LIB_DIR=${ANITA_UTIL_INSTALL_DIR}/lib
	ANITA_UTIL_INC_DIR=${ANITA_UTIL_INSTALL_DIR}/include
	LD_ANITA_UTIL=-L$(ANITA_UTIL_LIB_DIR)
	LIBS_ANITA_UTIL=-lAnitaEvent -lRootFftwWrapper
	INC_ANITA_UTIL=-I$(ANITA_UTIL_INC_DIR)
	ANITA_UTIL_ETC_DIR=$(ANITA_UTIL_INSTALL_DIR)/etc
endif

ifdef ANITA_UTIL_EXISTS
	CXXFLAGS += -DANITA_UTIL_EXISTS
endif

ifdef VECTORIZE
	CXXFLAGS += -DVECTORIZE -march=native -fabi-version=0
endif

ifdef ANITA3_EVENTREADER
	CXXFLAGS += -DANITA3_EVENTREADER
endif

ifdef USE_HEALPIX
	CXXFLAGS += -DUSE_HEALPIX `pkg-config --cflags healpix_cxx`
	LIBS  += `pkg-config --libs healpix_cxx` 
endif


################################################################################

GENERAL_FLAGS = -g -O2 -pipe -m64 -pthread
WARN_FLAGS = -W -Wall -Wextra -Woverloaded-virtual
# -Wno-unused-variable -Wno-unused-parameter -Wno-unused-but-set-variable

CXXFLAGS += $(GENERAL_FLAGS) $(CPPSTD_FLAGS) $(WARN_FLAGS) $(ROOTCFLAGS) $(INC_ANITA_UTIL)

DBGFLAGS  = -pipe -Wall -W -Woverloaded-virtual -g -ggdb -O0 -fno-inline

DBGCXXFLAGS = $(DBGFLAGS) $(ROOTCFLAGS) $(BOOSTFLAGS)
LDFLAGS  += $(CPPSTD_FLAGS) $(LD_ANITA_UTIL) -I$(BOOST_ROOT) -L.
LIBS += $(LIBS_ANITA_UTIL)

# Mathmore not included in the standard ROOT libs
LIBS += -lMathMore

DICT = classdict

OBJS = vector.o position.o earthmodel.o balloon.o icemodel.o signal.o ray.o Spectra.o anita.o roughness.o secondaries.o Primaries.o Tools.o counting.o $(DICT).o Settings.o Taumodel.o screen.o GlobalTrigger.o ChanTrigger.o SimulatedSignal.o EnvironmentVariable.o source.o  random.o

BINARIES = test_plot$(ExeSuf)

################################################################################

.SUFFIXES: .$(SrcSuf) .$(ObjSuf) .$(DllSuf)

all:            $(BINARIES)

$(BINARIES): %: %.$(SrcSuf) $(OBJS)
		$(LD) $(CXXFLAGS) $(LDFLAGS) $(OBJS) $< $(LIBS) $(OutPutOpt) $@
		@echo "$@ done"


.PHONY: clean
clean:
		@rm -f $(BINARIES)


%.$(ObjSuf) : %.$(SrcSuf) %.h
	@echo "<**Compiling**> "$<
	$(LD) $(CXXFLAGS) -c $< -o $@

  38   Wed May 15 00:38:54 2019 Brian ClarkGet a quick start with AraSim on oscSoftware

Follow the instructions in the attached "getting_started_with_ara.pdf" file to download AraSim, compile it, generate results, and plot those results.

Attachment 1: plotting_example.cc
#include <iostream>
#include <fstream>
#include <sstream>
#include <math.h>
#include <string>
#include <stdio.h>
#include <stdlib.h>
#include <vector>
#include <time.h>
#include <cstdio>
#include <cstdlib>
#include <unistd.h>
#include <cstring>
#include <unistd.h>

#include "TTreeIndex.h"
#include "TChain.h"
#include "TH1.h"
#include "TF1.h"
#include "TF2.h"
#include "TFile.h"
#include "TRandom.h"
#include "TRandom2.h"
#include "TRandom3.h"
#include "TTree.h"
#include "TLegend.h"
#include "TLine.h"
#include "TROOT.h"
#include "TPostScript.h"
#include "TCanvas.h"
#include "TH2F.h"
#include "TText.h"
#include "TProfile.h"
#include "TGraphErrors.h"
#include "TStyle.h"
#include "TMath.h"
#include "TVector3.h"
#include "TRotation.h"
#include "TSpline.h"
#include "Math/InterpolationTypes.h"
#include "Math/Interpolator.h"
#include "Math/Integrator.h"
#include "TGaxis.h"
#include "TPaveStats.h"
#include "TLatex.h"

#include "Constants.h"
#include "Settings.h"
#include "Position.h"
#include "EarthModel.h"
#include "Tools.h"
#include "Vector.h"
#include "IceModel.h"
#include "Trigger.h"
#include "Spectra.h"
#include "signal.hh"
#include "secondaries.hh"
#include "Ray.h"
#include "counting.hh"
#include "Primaries.h"
#include "Efficiencies.h"
#include "Event.h"
#include "Detector.h"
#include "Report.h"

using namespace std;

/////////////////Plotting Script for nuflavorint for AraSIMQC
/////////////////Created by Kaeli Hughes, modified by Brian Clark
/////////////////Prepared on April 28 2016 as an "introductory script" to plotting with AraSim; for new users of queenbee

class EarthModel;

int main(int argc, char **argv)
{
	gStyle->SetOptStat(111111); //this is a selection of statistics settings; you should do some googling and figure out exactly what this particular combination does
	gStyle->SetOptDate(1); //this tells root to put a date and timestamp on whatever plot we output
	
	if(argc<2){ //if you don't have at least 1 file to run over, then you haven't given it a file to analyze; this checks this
		cout<<"Not enough arguments! Abort run."<<endl;
		return -1; //bail out rather than continuing with no input
	}
	
	//Create the histogram
		
	TCanvas *c2 = new TCanvas("c2", "nuflavorint", 1100,850); //make a canvas on which to plot the data
	TH1F *nuflavorint_hist = new TH1F("nuflavorint_hist", "nuflavorint histogram", 3, 0.5, 3.5); //create a histogram with three bins
	nuflavorint_hist->GetXaxis()->SetNdivisions(3); //set the number of divisions

	
	int total_thrown=0; // a variable to hold the grand total number of events thrown for all input files

	
	for(int i=1; i<argc;i++){ //loop over the input files
		
		string readfile; //create a variable called readfile that will hold the title of the simulation file
		readfile = string(argv[i]); //set the readfile variable equal to the filename
		
		Event *event = 0; //create an Event class pointer called event; note that it is set to zero to avoid an uninitialized pointer
		Report *report=0; //create a Report class pointer called report; note that it is set to zero to avoid an uninitialized pointer
		Settings *settings = 0; //create a Settings class pointer called settings; note that it is set to zero to avoid an uninitialized pointer

		TFile *AraFile=new TFile(( readfile ).c_str()); //make a new file called "AraFile" that will be simulation file we are reading in

		if(!(AraFile->IsOpen())) return 0; //checking to see if the file we're trying to read in opened correctly; if not, bounce out of the program
			
		int num_pass;//number of passing events 
		
		TTree *AraTree=(TTree*)AraFile->Get("AraTree"); //get the AraTree
		TTree *AraTree2=(TTree*)AraFile->Get("AraTree2"); //get the AraTree2
		AraTree2->SetBranchAddress("event",&event); //get the event branch
		AraTree2->SetBranchAddress("report",&report); //get the report branch
		AraTree->SetBranchAddress("settings",&settings); //get the settings branch
		
		num_pass=AraTree2->GetEntries(); //get the number of passed events in the data file
		
		AraTree->GetEvent(0); //get the first event; sometimes the tree does not instantiate properly if you don't explicitly 'activate' the first event
		total_thrown+=(settings->NNU); //add the number of events from this file (The NNU variable) to the grand total; NNU is the number of THROWN neutrinos
		
		
		for (int k=0; k<num_pass; k++){ //going to fill the histograms for as many events as were in this input file
			
			AraTree2->GetEvent(k); //get the event from the tree

			int nuflavorint; //make the container variable
			double weight; //the weight of the event
			int trigger; //the global trigger value for the event

			
			nuflavorint=event->nuflavorint; //draw the event out; one of the objects in the event class is the nuflavorint, and this is the syntax for accessing it
			weight=event->Nu_Interaction[0].weight; //draw out the weight of the event
			
			trigger=report->stations[0].Global_Pass; //find out if the event was a triggered event or not
			
			/*				
			if(trigger!=0){ //check if the event triggered
				nuflavorint_hist->Fill(nuflavorint,weight); //fill the event into the histogram; the first argument of the fill (which is mandatory) is the value you're putting into the histogram; the second value is optional, and is the weight of the event in the histogram
				//in this particular version of the code then, we are only plotting the TRIGGERED events; if you wanted to plot all of the events, you could instead remove this "if" condition and just Fill everything
			}
			*/
			nuflavorint_hist->Fill(nuflavorint,weight); //fill the event into the histogram; the first argument of the fill (which is mandatory)	  
		} // close loop over events in this input file

	} // close loop over input files
	//After looping over all the files, make the plots and save them
	

	//do some stuff to get the "total events thrown" text box ready; this is useful because on the plot itself you can then see how many events you THREW into the simulation; this is especially useful if you're only plotting passed events, but want to know what fraction of your total thrown that is
		char *buffer= (char*) malloc (250); //declare a buffer pointer, and allocate it some chunk of memory
		int a = snprintf(buffer, 250,"Total Events Thrown: %d",total_thrown); //print the words "Total Events Thrown: %d" to the variable "buffer" and tell the system how long that phrase is; the %d sign tells C++ to replace that "%d" with the next argument, or in this case, the number "total_thrown"
			if(a>=250){ //if the phrase is longer than the pre-allocated space, grow the buffer until it's big enough
				buffer=(char*) realloc(buffer, a+1);
				snprintf(buffer, a+1,"Total Events Thrown: %d",total_thrown);
			}
		TLatex *u = new TLatex(.3,.01,buffer); //create a latex text object which we can draw on the canvas
		u->SetNDC(kTRUE); //changes the coordinate system for the tex object plotting
		u->SetIndiceSize(.1); //set the size of the latex index
		u->SetTextSize(.025); //set the size of the latex text

	nuflavorint_hist->Draw(); //draw the histogram
	nuflavorint_hist->GetXaxis()->SetTitle("Neutrino Flavor"); //set the x-axis label
	nuflavorint_hist->GetYaxis()->SetTitle("Number of Events (weighted)"); //set the y-axis label
	nuflavorint_hist->GetYaxis()->SetTitleOffset(1.5); //set the separation between the y-axis and its label; root natively makes this smaller than is ideal
	nuflavorint_hist->SetLineColor(kBlack); //set the color of the histogram to black, instead of root's default navy blue
	u->Draw(); //draw the statistics box information

	c2->SaveAs("outputs/plotting_example.png"); //save the canvas as a PNG file for viewing
	c2->SaveAs("outputs/plotting_example.pdf"); //save the canvas as a PDF file for viewing
	c2->SaveAs("outputs/plotting_example.root"); //save the canvas as a ROOT file for viewing or editing later


}	//end main; this is the end of the script


Attachment 2: plotting_example.mk
#############################################################################
## Makefile -- New Version of my Makefile that works on both linux
##              and mac os x
## Ryan Nichol <rjn@hep.ucl.ac.uk>
##############################################################################
##############################################################################
##############################################################################
##
##This file was copied from M.readGeom and altered for my use 14 May 2014
##Khalida Hendricks.
##
##Modified by Brian Clark for use on plotting_example on 28 April 2016
##
##Changes:
##line 54 - OBJS = .... add filename.o      .... del oldfilename.o
##line 55 - CCFILE = .... add filename.cc     .... del oldfilename.cc
##line 58 - PROGRAMS = filename
##line 62 - filename : $(OBJS)
##
##
##############################################################################
##############################################################################
##############################################################################
include StandardDefinitions.mk

#Site Specific  Flags
ifeq ($(strip $(BOOST_ROOT)),)
	BOOST_ROOT = /usr/local/include
endif
SYSINCLUDES	= -I/usr/include -I$(BOOST_ROOT)
SYSLIBS         = -L/usr/lib
DLLSUF = ${DllSuf}
OBJSUF = ${ObjSuf}
SRCSUF = ${SrcSuf}

CXX = g++

#Generic and Site Specific Flags
CXXFLAGS     += $(INC_ARA_UTIL) $(SYSINCLUDES) 
LDFLAGS      += -g $(LD_ARA_UTIL) -I$(BOOST_ROOT) $(ROOTLDFLAGS) -L. 

# copy from ray_solver_makefile (removed -lAra part)

# added for Fortran to C++


LIBS	= $(ROOTLIBS) -lMinuit $(SYSLIBS) 
GLIBS	= $(ROOTGLIBS) $(SYSLIBS)


LIB_DIR = ./lib
INC_DIR = ./include

#ROOT_LIBRARY = libAra.${DLLSUF}

OBJS = Vector.o EarthModel.o IceModel.o Trigger.o Ray.o Tools.o Efficiencies.o Event.o Detector.o Position.o Spectra.o RayTrace.o RayTrace_IceModels.o signal.o secondaries.o Settings.o Primaries.o counting.o RaySolver.o Report.o eventSimDict.o plotting_example.o
CCFILE = Vector.cc EarthModel.cc IceModel.cc Trigger.cc Ray.cc Tools.cc Efficiencies.cc Event.cc Detector.cc Spectra.cc Position.cc RayTrace.cc signal.cc secondaries.cc RayTrace_IceModels.cc Settings.cc Primaries.cc counting.cc RaySolver.cc Report.cc plotting_example.cc
CLASS_HEADERS = Trigger.h Detector.h Settings.h Spectra.h IceModel.h Primaries.h Report.h Event.h secondaries.hh #need to add headers which added to Tree Branch

PROGRAMS = plotting_example

all : $(PROGRAMS) 
	
plotting_example : $(OBJS)
	$(LD) $(OBJS) $(LDFLAGS)  $(LIBS) -o $(PROGRAMS) 
	@echo "done."

#The library
$(ROOT_LIBRARY) : $(LIB_OBJS) 
	@echo "Linking $@ ..."
ifeq ($(PLATFORM),macosx)
# We need to make both the .dylib and the .so
		$(LD) $(SOFLAGS)$@ $(LDFLAGS) $(G77LDFLAGS) $^ $(OutPutOpt) $@
ifneq ($(subst $(MACOSX_MINOR),,1234),1234)
ifeq ($(MACOSX_MINOR),4)
		ln -sf $@ $(subst .$(DllSuf),.so,$@)
else
		$(LD) -dynamiclib -undefined $(UNDEFOPT) $(LDFLAGS) $(G77LDFLAGS) $^ \
		   $(OutPutOpt) $(subst .$(DllSuf),.so,$@)
endif
endif
else
	$(LD) $(SOFLAGS) $(LDFLAGS) $(G77LDFLAGS) $(LIBS) $(LIB_OBJS) -o $@
endif

##-bundle

#%.$(OBJSUF) : %.$(SRCSUF)
#	@echo "<**Compiling**> "$<
#	$(CXX) $(CXXFLAGS) -c $< -o  $@

%.$(OBJSUF) : %.C
	@echo "<**Compiling**> "$<
	$(CXX) $(CXXFLAGS) -c $< -o  $@

%.$(OBJSUF) : %.cc
	@echo "<**Compiling**> "$<
	$(CXX) $(CXXFLAGS) -c $< -o  $@

# added for fortran code compiling
%.$(OBJSUF) : %.f
	@echo "<**Compiling**> "$<
	$(G77) -c $<


eventSimDict.C: $(CLASS_HEADERS)
	@echo "Generating dictionary ..."
	@ rm -f *Dict* 
	rootcint $@ -c ${INC_ARA_UTIL} $(CLASS_HEADERS) ${ARA_ROOT_HEADERS} LinkDef.h

clean:
	@rm -f *Dict*
	@rm -f *.${OBJSUF}
	@rm -f $(LIBRARY)
	@rm -f $(ROOT_LIBRARY)
	@rm -f $(subst .$(DLLSUF),.so,$(ROOT_LIBRARY))	
	@rm -f $(TEST)
#############################################################################
Attachment 3: test_setup.txt
NFOUR=1024

EXPONENT=21
NNU=300 // number of neutrino events
NNU_PASSED=10 // number of neutrino events that are allowed to pass the trigger
ONLY_PASSED_EVENTS=0 // 0 (default): AraSim throws NNU events whether or not they pass; 1: AraSim throws events until the number of events that pass the trigger is equal to NNU_PASSED (WARNING: may cause long run times if reasonable values are not chosen)
NOISE_WAVEFORM_GENERATE_MODE=0 // generate new noise waveforms for each event
NOISE_EVENTS=16 // number of pure noise waveforms
TRIG_ANALYSIS_MODE=0 // 0 = signal + noise, 1 = signal only, 2 = noise only
DETECTOR=1 // ARA stations 1 to 7
NOFZ=1
core_x=10000
core_y=10000

TIMESTEP=5.E-10 // 2 GHz sampling; the actual station value
TRIG_WINDOW=1.E-7 // 100ns which is actual testbed trig window
POWERTHRESHOLD=-6.06 // 100Hz global trig rate for 3 out of 16 ARA stations

POSNU_RADIUS=3000
V_MIMIC_MODE=0 // 0 : global trig is located at the center of the readout window
DATA_SAVE_MODE=0 // 2 : don't save any waveform information at all
DATA_LIKE_OUTPUT=0 // 0 : don't save any waveform information to eventTree
BORE_HOLE_ANTENNA_LAYOUT=0
SECONDARIES=0

TRIG_ONLY_BH_ON=0
CALPULSER_ON=0
USE_MANUAL_GAINOFFSET=0
USE_TESTBED_RFCM_ON=0
NOISE_TEMP_MODE=0
TRIG_THRES_MODE=0
READGEOM=0 // whether to read geometry information from the sqlite file (0 : don't read)

TRIG_MODE=0 // use separated vpol/hpol trigger mode. By default N_TRIG_V=3, N_TRIG_H=3; these values can be changed

number_of_stations=1
core_x=10000
core_y=10000

DETECTOR=1
DETECTOR_STATION=2
DATA_LIKE_OUTPUT=0

NOISE_WAVEFORM_GENERATE_MODE=0 // generates new noise waveforms for every event
NOISE=0 // flat thermal noise
NOISE_CHANNEL_MODE=0 // using a different noise temperature for each channel
NOISE_EVENTS=16 // number of noise events which will be stored in the trigger class for later use

ANTENNA_MODE=1
APPLY_NOISE_FIGURE=0
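The setup file above follows a simple `KEY=value // comment` format, and several keys (e.g. `DETECTOR`, `core_x`) appear more than once. A hedged shell sketch of a reader for that format — the helper name and the assumption that later assignments override earlier ones are mine, not AraSim's actual parser:

```shell
# get_setting KEY FILE: print the last value assigned to KEY in an
# AraSim-style setup file, stripping "//" comments. Hypothetical helper
# for inspection only; assumes later assignments override earlier ones.
get_setting () {
  sed 's|//.*||' "$2" \
    | awk -F= -v key="$1" '$1 == key { gsub(/[[:space:]]/, "", $2); print $2 }' \
    | tail -n 1
}

# demo against a small file in the same format
tmp=$(mktemp)
printf 'NNU=300 // number of neutrino events\nDETECTOR=1\nDETECTOR=1\n' > "$tmp"
get_setting NNU "$tmp"   # prints 300
```

This is handy for sanity-checking a setup file before a long run, e.g. confirming which of the duplicated values is in effect.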
Attachment 4: bashrc_anita.sh
# .bashrc

# Source global definitions
if [ -f /etc/bashrc ]; then
	. /etc/bashrc
fi

module load cmake/3.11.4
module load gnu/7.3.0
export CC=`which gcc`
export CXX=`which g++`

export BOOST_ROOT=/fs/project/PAS0654/shared_software/anita/owens_pitzer/build/boost_build
export LD_LIBRARY_PATH=${BOOST_ROOT}/stage/lib:$LD_LIBRARY_PATH
export BOOST_LIB=$BOOST_ROOT/stage/lib
export LD_LIBRARY_PATH=$BOOST_LIB:$LD_LIBRARY_PATH

export ROOTSYS=/fs/project/PAS0654/shared_software/anita/owens_pitzer/build/root
source /fs/project/PAS0654/shared_software/anita/owens_pitzer/build/root/bin/thisroot.sh
Attachment 5: sample_bash_profile.sh
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/.local/bin:$HOME/bin

export PATH
Attachment 6: sample_bashrc.sh
# .bashrc

source bashrc_anita.sh

# we also want to set two more environment variables that ANITA needs
# you should update ICEMC_SRC_DIR and ICEMC_BUILD_DIR to point to wherever
# you downloaded icemc

export ICEMC_SRC_DIR=/path/to/icemc #change this line!
export ICEMC_BUILD_DIR=/path/to/icemc #change this line!
export DYLD_LIBRARY_PATH=${ICEMC_SRC_DIR}:${ICEMC_BUILD_DIR}:${DYLD_LIBRARY_PATH}
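After sourcing the files above, it is worth confirming that the exports actually took effect before attempting a build. A small sanity-check helper — the function and the variable list are illustrative additions, not part of the original setup:

```shell
# check_env VAR...: report whether each named environment variable is set
# and non-empty. Illustrative helper; the variables passed below are the
# ones exported by the bashrc files above.
check_env () {
  for name in "$@"; do
    eval "value=\${$name:-}"          # indirect lookup, safe if unset
    if [ -n "$value" ]; then
      echo "OK: $name=$value"
    else
      echo "MISSING: $name"
    fi
  done
}

check_env ROOTSYS ICEMC_SRC_DIR ICEMC_BUILD_DIR
```

Any `MISSING:` line usually means the corresponding bashrc was not sourced in the current shell (e.g. a login shell that reads only `.bash_profile`).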
Attachment 7: getting_running_with_ara_stuff.pdf
Attachment 8: getting_running_with_ara_stuff.pptx