Place to document instructions for how to do things
  Draft   Thu Sep 21 14:30:18 2017  Julie Rolla  Using/Running XF

Below I've attached a video with some information regarding running XF. Before you start, here's some important information you need to know. 

 

In order to run XF:

XF currently runs only on a Windows OS. If you are on a Mac, you can dual boot with a copy of Windows 10 that is not activated; this still works. The lack of activation imposes only minimal constraints and will not hinder your ability to run XF. Otherwise, you can use a queen bee machine. Note that there are two ways to run XF itself once you are on a Windows machine: you can run it via OSC on Oakley (which has a floating license, so you will not need the USB key), or you can use the USB key. Unfortunately, at the moment we only have one USB key, so it may be best to get an account through OSC.

 

2 methods: 

 

1.) If you are not using Oakley, you need the USB key to run.

 

2.) Log in to Oakley; you do not need the USB key to run.

 

------

For method 1:

To launch XFdtd, just click on the icon.

It will pop up with "need to find a valid license".

Click "OK", insert the USB key, and hit "Retry".

 

For method 2:

Log in to Oakley. To log into Oakley, follow the steps between **:

** ssh in

put "module load XFdtd" in the command line; this gets you access to the libraries

then put "XFdtd" to load it **
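
Put together, the Oakley login looks roughly like the following sketch (the hostname and the -Y X11-forwarding flag are assumptions based on standard OSC usage; substitute your own username and check the current login host):

ssh -Y username@oakley.osc.edu    # -Y forwards X11 so the XF GUI can display
module load XFdtd                 # gets you access to the libraries
XFdtd                             # launches XF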

 

————

 

Note: you must have drivers installed to use the USB key. The first time you plug it in, it will tell you what to install.

Note: The genetic algorithm will change the geometry of the detector, and XF will check the gain values for those geometries.

 

After XF is loaded:

Components are listed on the left. Each is different for different simulations.

To put geometry on the antenna, click on "Parts".

Click "Create New".

Choose a type of geometry.

Note that you can also go to "File" and import; CAD files can be imported this way.

 

"Create Simulation": save the project and give it a name, then click "Create Simulation". This stores all of the geometry and settings of the simulation. Now you can, if you want, browse all of the different types of simulations.

 

How to actually run the simulations:

In this example, Carl set up a planar sensor with a point sensor inside the sphere, and two more sensors on either side of the sphere. Now, load the simulation and hit the play button at the bottom. Note that it should take 20 or 30 minutes to actually simulate. When it is done, you can re-click the play button and it will show you the visual simulation. The output is written out automatically while the simulation runs. You now either need to parse that output, or view the data in XF itself.

 

You can click "File" and export data. Additionally, you can export the image. Note that this can give you a "Smith chart"; this is the gain measurement you're looking for. If you had a far field/zone sensor, then you could get a far-field gain, which is this Smith chart. To get the Smith chart, hover over the sensor and right click; this should give you the option of a Smith chart if you had the correct sensor. Note that all of this data and information is on the right-hand side under "Results". This will pull up all of the sensors, which you can right click to get the individual data.

 

Note: the far zone sensor places sensors symmetrically around the object, i.e. if we have a sphere, we will have a larger sphere outside of our conducting sphere/antenna.

  23   Wed Jun 6 08:54:44 2018  Brian Clark and Oindree Banerjee  How to Access Jacob's ROOT6 on Oakley

Source the attached env.sh file. Good to go!
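
A minimal usage sketch, assuming you saved the attachment as env.sh in your current directory:

source env.sh    # sets ROOTSYS, loads the modules, and sources thisroot.sh
root             # ROOT 6.08.06 should now start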

Attachment 1: env.sh
export ROOTSYS=/users/PAS0174/osu8620/root-6.08.06
eval 'source /users/PAS0174/osu8620/root-6.08.06/builddir/bin/thisroot.sh'
export LD_INCLUDE_PATH=/users/PAS0174/osu8620/cint/libcint/build/include:$LD_INCLUDE_PATH
module load fftw3/3.3.5
module load gnu/6.3.0
module load python/3.4.2
module load cmake/3.7.2

#might need this, but probably not
#export CC=/usr/local/gcc/6.3.0/bin/gcc
  24   Wed Jun 6 17:48:47 2018  Jorge Torres  How to build ROOT 6 on an OSC cluster

Disclaimer: I wrote this for Owens, which I think will also work on Pitzer. I recommend following Steven's instructions, and using mine if those fail to build.

1. Submit a batch job so the processing resources are not limited (change the project ID if needed):

qsub -A PAS0654 -I -l nodes=1:ppn=4,walltime=2:00:00

2. Reset and load the following modules (copy and paste as is):

module reset
module load cmake/3.7.2
module load python/2.7.latest
module load fftw3/3.3.5

3. Do echo $FFTW3_HOME and make sure it spits out "/usr/local/fftw3/intel/16.0/mvapich2/2.2/3.3.5". If it doesn't, first do

export FFTW3_HOME=/usr/local/fftw3/intel/16.0/mvapich2/2.2/3.3.5

Then, in either case, do

export FFTW_DIR=$FFTW3_HOME
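
The whole of step 3 can be done in one shot with a small shell snippet (a sketch of the logic above, not an official OSC recipe):

# set FFTW3_HOME if the module didn't, then point FFTW_DIR at it
EXPECTED=/usr/local/fftw3/intel/16.0/mvapich2/2.2/3.3.5
if [ "$FFTW3_HOME" != "$EXPECTED" ]; then
    export FFTW3_HOME=$EXPECTED
fi
export FFTW_DIR=$FFTW3_HOME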

4. Do the following (change -DCMAKE_INSTALL_PREFIX to the directory where you want ROOT installed; the final argument points to the ROOT source directory):

cmake -DCMAKE_C_COMPILER=`which gcc` \
-DCMAKE_CXX_COMPILER=`which g++` \
-DCMAKE_INSTALL_PREFIX=${HOME}/local/oakley/ROOT-6.12.06 \
-DBLAS_mkl_intel_LIBRARY=${MKLROOT}/lib/intel64 \
../root-6.12.06 2>&1 | tee cmake.log

This configures ROOT so it can be installed on the machine (takes about 5 minutes).

5. Once it is configured, do the following to build ROOT (takes about 45 min):

make -j4 2>&1 | tee make.log

6. Once it's done, do 

make install

To run it, go into the install directory, then cd bin. Once you're in there you should see a script called 'thisroot.sh'. Type 'source thisroot.sh'. You should now be able to type 'root' and it will run. Note that you must source this EVERY time you log into OSC, so the smart thing to do is to put it into your .bashrc.
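
For example, with the install prefix used in step 4 above, the line to add would be (adjust the path to your own install):

source ${HOME}/local/oakley/ROOT-6.12.06/bin/thisroot.sh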

(Second procedure from S. Prohira)

1. download ROOT: https://root.cern.ch/downloading-root (whatever the latest pro release is)

2. put the source tarball somewhere in your directory on Ruby and expand it into the "source" folder

3. on Ruby, open your ~/.bashrc file and add the following lines:

export CC="/usr/local/gnu/7.3.0/bin/gcc"
export CXX="/usr/local/gnu/7.3.0/bin/g++"
module load cmake
module load python
module load gnu/7.3.0

4. then run: source ~/.bashrc

5. make a "build" directory somewhere else on ruby called 'root' or 'root_build' and cd into that directory.

6. do: cmake /path/to/source/folder (i.e. the folder you expanded from the .tar file above; it should finish with no errors). Here you can also include the -D flags that you want (such as minuit2 for the ANITA tools).
   -for example, the ANITA tools need you to do: cmake -Dminuit2:bool=true /path/to/source/folder.

7. do: make -j4 (or the way Jorge did it above, if you want to submit it as a batch job and not be a jerk running a job on the login nodes like I did)

8. add the following line to your .bashrc file (or .profile, whatever startup file you prefer):

source /path/to/root/build/directory/bin/thisroot.sh

9. enjoy root!
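
To sanity-check the build, a quick one-liner works (here -l skips the splash screen and -q quits after executing):

root -l -q -e 'printf("ROOT %s\n", gROOT->GetVersion());'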

 

  29   Fri Nov 9 00:44:09 2018  Brian Clark  Transfer files from IceCube Data Warehouse to OSC

Brian had to move ~7 TB of data from the IceCube data warehouse to OSC.

To do this, he used the gridftp software. The advantage is that gridftp is optimized for large file transfers and will manage the transfer better than something like scp or rsync.

Note that this utilizes the gridftp software installed on OSC, but doesn't formally use the globus endpoint described here: https://www.osc.edu/resources/getting_started/howto/howto_transfer_files_using_globus_connect. This is because IceCube doesn't have a formal globus endpoint to connect to. The formal globus endpoint would have been even easier if it were available, but oh well...

Setup goes as follows:

  1. Follow the IceCube instructions for getting an OSG certificate
    1. Go through CILogon (https://cilogon.org/) and generate and download a certificate
  2. Install your certificate on the IceCube machines (sketched in shell form after this list)
    1. Move your certificate to the following place on IceCube: ~/.globus/usercred.p12
    2. Change the permissions on this certificate: chmod 600 ~/.globus/usercred.p12
    3. Get the "subject" of this key: openssl pkcs12 -in ~/.globus/usercred.p12 -nokeys | grep subject
    4. Copy the subject line into your IceCube LDAP account
      1. Select "Edit your profile"
      2. Enter IceCube credentials
      3. Paste the subject into the "x509 Subject DN" box
  3. Install your certificates on the OSC machines
    1. Follow the same instructions as for IceCube to install the globus credentials, but you don't need to do the IceCube LDAP part
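
Steps 2.1 through 2.3 in shell form (a sketch; it assumes the certificate was downloaded to your home directory as usercred.p12):

mkdir -p ~/.globus                                # create the credentials directory if needed
mv ~/usercred.p12 ~/.globus/usercred.p12          # install the certificate
chmod 600 ~/.globus/usercred.p12                  # restrict permissions
openssl pkcs12 -in ~/.globus/usercred.p12 -nokeys | grep subject   # print the subject line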

How to actually make a transfer:

  1. Initialize a proxy certificate on OSC: grid-proxy-init -bits 1024
  2. Use globus-url-copy to move a file, for example: globus-url-copy -r gsiftp://gridftp.icecube.wisc.edu/data/wipac/ARA/2016/unblinded/L1/ARA03/ 2016/ &
    1. I'm using the command "globus-url-copy"
    2. "-r" says to transfer recursively
    3. "gsiftp://gridftp.icecube.wisc.edu/data/wipac/ARA/2016/ublinded/L1/ARA03/" is the entire directory I'm trying to copy
    4. "2016/" is the directory I'm copying them to
    5. "&" says do this in the background once launched
  3. Note that it's kind of syntactically picky (see the combined example below):
    1. To copy a directory, the source path name must end in "/"
    2. To copy a directory, the destination path name must also end in "/"
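
Putting it all together, a transfer session looks roughly like this (the destination path here is hypothetical; note the trailing slashes on both directory paths):

grid-proxy-init -bits 1024
globus-url-copy -r gsiftp://gridftp.icecube.wisc.edu/data/wipac/ARA/2016/unblinded/L1/ARA03/ /fs/scratch/PAS0654/2016/ &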

 

 

  32   Mon Dec 17 21:16:31 2018  Brian Clark  Run over many data files in parallel

To analyze data, we sometimes need to run over many thousands of runs at once. To do this in parallel, we can submit a job for every run we want to do. This will proceed in several steps:

  1. We need to prepare an analysis program.
    1. This is demo.cxx.
    2. The program will take an input data file and an output location.
    3. The program will do some analysis on each event, and then write the result of that analysis to an output file labeled by the same number as the input file.
  2. We need to prepare a job script for PBS.
    1. This is "run.sh"; this is the set of instructions to be submitted to the cluster.
    2. The instructions say to:
      1. Source a shell environment
      2. Run the executable
      3. Move the output ROOT file to the output location.
    3. Note that we're telling the program we wrote in step 1 to write to the node-local $TMPDIR, and then moving the result to our final output directory at the end. This is better for cluster performance.
  3. We need to make a list of data files to run over
    1. We can do this on OSC by running ls -d -1 /fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event*.root > run_list.txt
    2. This places the full path to the ROOT files in that folder into a list called run_list.txt that we can loop over.
  4. Finally, we need a script that will submit all of the jobs to the cluster.
    1. This is "submit_jobs.sh".
    2. This loops over all the files in our run_list.txt and submits a run.sh job for each of them.
    3. This is also where we define the $RUNDIR (where the code is to be executed) and the $OUTPUTDIR (where the output products are to be stored)

Once you've generated all of these output files, you can run over the output files only to make plots and such.
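
To build the bin/demo executable from step 1 before submitting anything, something like the following should work (a sketch; the AraRoot install paths and the -lAraEvent library name are assumptions, so adjust them to your own setup):

g++ demo.cxx -o bin/demo `root-config --cflags --libs` \
    -I${ARA_UTIL_INSTALL_DIR}/include -L${ARA_UTIL_INSTALL_DIR}/lib -lAraEvent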

 

Attachment 1: demo.cxx
////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////
////  demo.cxx 
////  demo
////
////  Nov 2018
////////////////////////////////////////////////////////////////////////////////

//Includes
#include <iostream>
#include <string>
#include <sstream>

//AraRoot Includes
#include "RawAtriStationEvent.h"
#include "UsefulAtriStationEvent.h"

//ROOT Includes
#include "TTree.h"
#include "TFile.h"
#include "TGraph.h"

using namespace std;

RawAtriStationEvent *rawAtriEvPtr;

int main(int argc, char **argv)
{

	if(argc<3) {
		std::cout << "Usage\n" << argv[0] << " <input_file> <output_location> "<<endl;
		return -1;
	}

	/*
	arguments
	0: exec
	1: input data file
	2: output location
	*/
	
	TFile *fpIn = TFile::Open(argv[1]);
	if(!fpIn) {
		std::cout << "Can't open file\n";
		return -1;
	}
	TTree *eventTree = (TTree*) fpIn->Get("eventTree");
	if(!eventTree) {
		std::cout << "Can't find eventTree\n";
		return -1;
	}
	eventTree->SetBranchAddress("event",&rawAtriEvPtr);
	int run;
	eventTree->SetBranchAddress("run",&run);
	eventTree->GetEntry(0);
	printf("Filter Run Number %d \n", run);

	char outfile_name[400];
	sprintf(outfile_name,"%s/outputs_run%d.root",argv[2],run);

	TFile *fpOut = TFile::Open(outfile_name, "RECREATE");
	TTree* outTree = new TTree("outTree", "outTree");
	int WaveformLength[16];
	outTree->Branch("WaveformLength", &WaveformLength, "WaveformLength[16]/D");
	
	Long64_t numEntries=eventTree->GetEntries();

	for(Long64_t event=0;event<numEntries;event++) {
		eventTree->GetEntry(event);
		UsefulAtriStationEvent *realAtriEvPtr = new UsefulAtriStationEvent(rawAtriEvPtr, AraCalType::kLatestCalib);
		for(int i=0; i<16; i++){
			TGraph *gr = realAtriEvPtr->getGraphFromRFChan(i);
			WaveformLength[i] = gr->GetN();
			delete gr;
		}		
		outTree->Fill();
		delete realAtriEvPtr;
	} //loop over events
	
	fpOut->Write();
	fpOut->Close();
	
	fpIn->Close();
	delete fpIn;
}
Attachment 2: run.sh
#!/bin/bash
#PBS -l nodes=1:ppn=1
#PBS -l mem=4GB
#PBS -l walltime=00:05:00
#PBS -A PAS0654
#PBS -e /fs/scratch/PAS0654/shell_demo/err_out_logs
#PBS -o /fs/scratch/PAS0654/shell_demo/err_out_logs

# you should change the -e and -o to write your 
# log files to a location of your preference

# source your own shell script here
eval 'source /users/PAS0654/osu0673/A23_analysis/env.sh'

# $RUNDIR was defined in the submission script 
# along with $FILE and $OUTPUTDIR

cd $RUNDIR

# $TMPDIR is the local memory of this specific node
# it's the only variable we didn't have to define

./bin/demo $FILE $TMPDIR 

# after we're done
# we copy the results to the $OUTPUTDIR

pbsdcp $TMPDIR/'*.root' $OUTPUTDIR
Attachment 3: run_list.txt
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1000.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1001.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1002.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1004.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1005.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1006.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1007.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1009.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1010.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1011.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1012.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1014.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1015.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1016.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1017.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1019.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1020.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1021.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1022.root
/fs/scratch/PAS0654/ara/10pct/RawData/A3/2013/sym_links/event1024.root
Attachment 4: submit_jobs.sh
#!/bin/bash

#where should the outputs be stored?
OutputDir="/fs/scratch/PAS0654/shell_demo/outputs"
echo '[ Processed file output directory: ' $OutputDir ' ]'
export OutputDir

#where is your executable compiled?
RunDir="/users/PAS0654/osu0673/A23_analysis/araROOT"
export RunDir

#define the list of runs to execute on
readfile=run_list.txt

counter=0
while read line1
do
	qsub -v RUNDIR=$RunDir,OUTPUTDIR=$OutputDir,FILE=$line1 -N 'job_'$counter run.sh
	counter=$((counter+1))
done < $readfile
  35   Tue Feb 26 19:07:40 2019  Lauren Ennesser  Valgrind command to suppress ROOT warnings

valgrind --suppressions=$ROOTSYS/etc/valgrind-root.supp ./myCode

If you use valgrind to identify potential memory leaks in your code but use a lot of ROOT objects and functions, you'll notice that ROOT's TObjects trigger a lot of "potential memory leak" warnings. This option will suppress many of those. More info at https://root-forum.cern.ch/t/valgrind-and-root/2ss8506

  Draft   Fri Feb 28 13:09:53 2020  Justin Flaherty  Installing anitaBuildTools on OSC-Owens (Revised 2/28/2020)
  46   Tue Aug 2 14:34:15 2022  Alex M  OSC License Request

Some programs on OSC require authorized access in the form of a license. The license will be read automatically whenever you open a program on OSC, provided you have access to the license. In order to have access to a license, you need to fill out the attached form and send it to Amy to forward to OSC. At least some of the available programs that require a license can be found here: https://www.osc.edu/resources/available_software/software_list . Replace the name of the program at the top of the form with the desired software.

Attachment 1: User_Software_Agreement.pdf
  48   Thu Jun 8 16:29:45 2023  Alan Salcedo  Doing IceCube/ARA coincidence analysis

These documents contain information on how to run IceCube/ARA coincidence simulations and analysis. All technical information on where code is stored and how to use it is detailed in the technical note. Other supporting information for physics understanding is in the PowerPoint slides. The technical note will direct you to other documents in this elog where you may need supplemental information.

Attachment 1: IceCube_ARA_Coincidence_Analysis___Technical_Note.pdf
Attachment 2: ICARA_Coincident_Events_Introduction.pptx
Attachment 3: ICARA_Analysis_Template.ipynb
{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "bcdfb138",
   "metadata": {},
   "source": [
    "# IC/ARA Coincident Simulation Events Analysis"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c7ad7c80",
   "metadata": {},
   "source": [
    "### Settings and imports"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "b6915a86",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<style>.container { width:75% !important; }</style>"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "## Makes this notebook maximally wide\n",
    "from IPython.display import display, HTML\n",
    "display(HTML(\"<style>.container { width:75% !important; }</style>\"))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "6ef9e20b",
   "metadata": {},
   "outputs": [],
   "source": [
    "## Author: Alex Machtay (machtay.1@osu.edu)\n",
    "## Modified by: Alan Salcedo (salcedogomez.1@osu.edu)\n",
    "## Date: 4/26/23\n",
    "\n",
    "## Purpose:\n",
    "### This script will read the data files produced by AraRun_corrected_MultStat.py to make histograms relevant plots of\n",
    "### neutrino events passing through icecube and detected by ARA station (in AraSim)\n",
    "\n",
    "\n",
    "## Imports\n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "import sys\n",
    "sys.path.append(\"/users/PAS0654/osu8354/root6_18_build/lib\") # go to parent dir\n",
    "sys.path.append(\"/users/PCON0003/cond0068/.local/lib/python3.6/site-packages\")\n",
    "import math\n",
    "import argparse\n",
    "import glob\n",
    "import pandas as pd\n",
    "pd.options.mode.chained_assignment = None  # default='warn'\n",
    "from mpl_toolkits.mplot3d import Axes3D\n",
    "import jupyterthemes as jt"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "f24b8292",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "/bin/bash: jt: command not found\r\n"
     ]
    }
   ],
   "source": [
    "## Set style for the jupyter notebook\n",
    "!jt -t grade3 -T -N -kl -lineh 160 -f code -fs 14 -ofs 14 -cursc o\n",
    "jt.jtplot.style('grade3', gridlines='')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a5a812d6",
   "metadata": {},
   "source": [
    "### Set constants\n",
    "#### These are things like the position of ARA station holes, the South Pole, IceCube's position in ARA station's coordinates, and IceCube's radius"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "cdb851de",
   "metadata": {},
   "outputs": [],
   "source": [
    "## What IceCube (IC) station are you analyzing\n",
    "station = 1\n",
    "\n",
    "## What's the radius around ARA where neutrinos were injected\n",
    "inj_rad = 5 #in km\n",
    "\n",
    "## IceCube's center relative to each ARA station\n",
    "IceCube = [[-1128.08, -2089.42, -1942.39], [-335.812, -3929.26, -1938.23],\n",
    "          [-2320.67, -3695.78, -1937.35], [-3153.04, -1856.05, -1942.81], [472.49, -5732.98, -1922.06]] #IceCube's position relative to A1, A2, or A3\n",
    "\n",
    "#To calculate this, we need to do some coordinate transformations. Refer to this notebook to see the calculations: \n",
    "# IceCube_Relative_to_ARA_Stations.ipynb (found here - http://radiorm.physics.ohio-state.edu/elog/How-To/48) \n",
    "\n",
    "## IceCube's radius\n",
    "IceCube_radius = 564.189583548 #Modelling IceCube as a cylinder, we find the radius with V = h * pi*r^2 with V = 1x10^9 m^3 and h = 1x10^3 m "
   ]
  },
  {
   "cell_type": "markdown",
   "id": "af5c6fc2",
   "metadata": {},
   "source": [
    "### Read the data\n",
    "\n",
    "#### Once we import the data, we'll make dataframes to concatenate it and make some calculations"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "921f61f8",
   "metadata": {},
   "outputs": [],
   "source": [
    "## Import data files\n",
    "\n",
    "#Here, it's from OSC Connolly's group project space\n",
    "source = '/fs/project/PAS0654/IceCube_ARA_Coincident_Search/AraSim/outputs/Coincident_Search_Runs/20M_GZK_5km_S1_correct' \n",
    "num_files = 200  # Number of files to read in from the source directory\n",
    "\n",
    "## Make a list of all of the paths to check \n",
    "file_list = []\n",
    "for i in range(1, num_files + 1):\n",
    "        for name in glob.glob(source + \"/\" + str(i) + \"/*.csv\"):\n",
    "                file_list.append(str(name))\n",
    "                #file_list gets paths to .csv files\n",
    "                \n",
    "## Now read the csv files into a pandas dataframe\n",
    "dfs = []\n",
    "for filename in file_list:\n",
    "        df = pd.read_csv(filename, index_col=None, header=0) #Store each csv file into a pandas data frame\n",
    "        dfs.append(df) #Append the csv file to store all of them in one\n",
    "frame = pd.concat(dfs, axis=0, ignore_index = True) #Concatenate pandas dataframes "
   ]
  },
  {
   "cell_type": "markdown",
   "id": "813fb3ee",
   "metadata": {},
   "source": [
    "### Work with the data\n",
    "\n",
    "#### All the data from our coincidence simulations (made by AraRun_MultStat.sh) is now stored in a pandas data frame that we can work with"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "8b449583",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "19800000\n",
      "19799802\n"
     ]
    }
   ],
   "source": [
    "## Now let's clean up our data and calculate other relevant things\n",
    "print(len(frame))\n",
    "frame = frame[frame['weight'].between(0,1)] #Filter out events with ill-defined weight (should be between 0 and 1)\n",
    "print(len(frame))\n",
    "\n",
    "frame['x-positions'] = (frame['Radius (m)'] * np.cos(frame['Azimuth (rad)']) * np.sin(frame['Zenith (rad)']))\n",
    "frame['y-positions'] = (frame['Radius (m)'] * np.sin(frame['Azimuth (rad)']) * np.sin(frame['Zenith (rad)']))\n",
    "frame['z-positions'] = (frame['Radius (m)'] * np.cos(frame['Zenith (rad)']))\n",
    "\n",
    "## The energy in eV will be 10 raised to the number in the file, multiplied by 1-y (y is inelasticity)\n",
    "frame['Nu Energies (eV)'] = np.power(10, (frame['Energy (log10) (eV)']))\n",
    "frame['Mu Energies (eV)'] = ((1-frame['Inelasticity']) * frame['Nu Energies (eV)']) #Energy of the produced lepton\n",
    "#Here the lepton is not a muon necesarily, hence the label 'Mu Energies (eV)' may be misleading\n",
    "\n",
    "## Get a frame with only coincident events\n",
    "coincident_frame = frame[frame['Coincident'] == 1] \n",
    "\n",
    "## And a frame for strictly events *detected* by ARA\n",
    "detected_frame = frame[frame['Detected'] == 1]\n",
    "\n",
    "\n",
    "## Now let's calculate the energy of the lepton when reaching IceCube (IC)\n",
    "\n",
    "# To do this correctly, I need to find exactly the distance traveled by the muon and apply the equation\n",
    "# I need the trajectory of the muon to find the time it takes to reach IceCube, then I can find the distance it travels in that time\n",
    "# I should allow events that occur inside the icecube volume to have their full energy (but pretty much will happen anyway)\n",
    "## a = sin(Theta)*cos(Phi)\n",
    "## b = sin(Theta)*sin(Phi)\n",
    "## c = cos(Theta)\n",
    "## a_0 = x-position\n",
    "## b_0 = y-position\n",
    "## c_0 = z-position\n",
    "## x_0 = IceCube[0]\n",
    "## y_0 = IceCube[1]\n",
    "## z_0 = IceCube[2]\n",
    "## t = (-(a*(a_0-x_0) + b*(b_0-y_0))+D**0.5)/(a**2+b**2)\n",
    "## D = (a**2+b**2)*R_IC**2 - (a*(b_0-y_0)+b*(a_0-x_0))**2\n",
    "## d = ((a*t)**2 + (b*t)**2 + (c*t)**2)**0.5\n",
    "\n",
    "## Trajectories\n",
    "coincident_frame['a'] = (np.sin(coincident_frame['Theta (rad)'])*np.cos(coincident_frame['Phi (rad)']))\n",
    "coincident_frame['b'] = (np.sin(coincident_frame['Theta (rad)'])*np.sin(coincident_frame['Phi (rad)']))\n",
    "coincident_frame['c'] = (np.cos(coincident_frame['Theta (rad)']))\n",
    "\n",
    "## Discriminant\n",
    "coincident_frame['D'] = ((coincident_frame['a']**2 + coincident_frame['b']**2)*IceCube_radius**2 - \n",
    "                         (coincident_frame['a']*(coincident_frame['y-position (m)']-IceCube[station-1][1])- ## I think this might need to be a minus sign!\n",
    "                          coincident_frame['b']*(coincident_frame['x-position (m)']-IceCube[station-1][0]))**2)\n",
    "\n",
    "## Interaction time (this is actually the same as the distance traveled, at least for a straight line)\n",
    "coincident_frame['t_1'] = (-(coincident_frame['a']*(coincident_frame['x-position (m)']-IceCube[station-1][0])+\n",
    "                            coincident_frame['b']*(coincident_frame['y-position (m)']-IceCube[station-1][1]))+\n",
    "                          np.sqrt(coincident_frame['D']))/(coincident_frame['a']**2+coincident_frame['b']**2)\n",
    "coincident_frame['t_2'] = (-(coincident_frame['a']*(coincident_frame['x-position (m)']-IceCube[station-1][0])+\n",
    "                            coincident_frame['b']*(coincident_frame['y-position (m)']-IceCube[station-1][1]))-\n",
    "                          np.sqrt(coincident_frame['D']))/(coincident_frame['a']**2+coincident_frame['b']**2)\n",
    "\n",
    "## Intersection coordinates\n",
    "coincident_frame['x-intersect_1'] = (coincident_frame['a'] * coincident_frame['t_1'] + coincident_frame['x-position (m)'])\n",
    "coincident_frame['y-intersect_1'] = (coincident_frame['b'] * coincident_frame['t_1'] + coincident_frame['y-position (m)'])\n",
    "coincident_frame['z-intersect_1'] = (coincident_frame['c'] * coincident_frame['t_1'] + coincident_frame['z-position (m)'])\n",
    "\n",
    "coincident_frame['x-intersect_2'] = (coincident_frame['a'] * coincident_frame['t_2'] + coincident_frame['x-position (m)'])\n",
    "coincident_frame['y-intersect_2'] = (coincident_frame['b'] * coincident_frame['t_2'] + coincident_frame['y-position (m)'])\n",
    "coincident_frame['z-intersect_2'] = (coincident_frame['c'] * coincident_frame['t_2'] + coincident_frame['z-position (m)'])\n",
    "\n",
    "## Distance traveled (same as the parametric time, at least for a straight line)\n",
    "coincident_frame['d_1'] = (np.sqrt((coincident_frame['a']*coincident_frame['t_1'])**2+\n",
    "                          (coincident_frame['b']*coincident_frame['t_1'])**2+\n",
    "                          (coincident_frame['c']*coincident_frame['t_1'])**2))\n",
    "coincident_frame['d_2'] = (np.sqrt((coincident_frame['a']*coincident_frame['t_2'])**2+\n",
    "                          (coincident_frame['b']*coincident_frame['t_2'])**2+\n",
    "                          (coincident_frame['c']*coincident_frame['t_2'])**2))\n",
    "\n",
    "## Check if it started inside and set the distance based on if it needs to travel to reach icecube or not\n",
    "coincident_frame['Inside'] = (np.where((coincident_frame['t_1']/coincident_frame['t_2'] < 0) & (coincident_frame['z-position (m)'].between(-2450, -1450)), 1, 0))\n",
    "coincident_frame['preliminary d'] = (np.where(coincident_frame['d_1'] <= coincident_frame['d_2'], coincident_frame['d_1'], coincident_frame['d_2']))\n",
    "coincident_frame['d'] = (np.where(coincident_frame['Inside'] == 1, 0, coincident_frame['preliminary d']))\n",
    "\n",
    "## Check if the event lies in the cylinder\n",
    "coincident_frame['In IC'] = (np.where((np.sqrt((coincident_frame['x-position (m)']-IceCube[station-1][0])**2 + (coincident_frame['y-position (m)']-IceCube[station-1][1])**2) < IceCube_radius) &\n",
    "                                     ((coincident_frame['z-position (m)']).between(-2450, -1450)) , 1, 0))\n",
    "\n",
    "#Correct coincident_frame to only have electron neutrinos inside IC\n",
    "coincident_frame = coincident_frame[(((coincident_frame['In IC'] == 1) & (coincident_frame['flavor'] == 1)) | (coincident_frame['flavor'] == 2) | (coincident_frame['flavor'] == 3)) ]\n",
    "\n",
    "#Now calculate the lepton energies when they reach IC\n",
    "coincident_frame['IC Mu Energies (eV)'] = (coincident_frame['Mu Energies (eV)'] * np.exp(-10**-5 * coincident_frame['d']*100)) # convert d from meters to cm\n",
    "coincident_frame['weighted energies'] = (coincident_frame['weight'] * coincident_frame['Nu Energies (eV)'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "f1757103",
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "## Add possible Tau decay to the frame\n",
    "coincident_frame['Tau decay'] = ''\n",
    "# Again, the label 'Tau Decay' may be misleading because not all leptons may be taus\n",
    "\n",
    "## Calculate distance from the interaction point to its walls and keep the shortest (the first interaction with the volume)\n",
    "\n",
    "coincident_frame['distance-to-IC_1'] = np.sqrt((coincident_frame['x-positions'] - coincident_frame['x-intersect_1'])**2 + \n",
    "                                        (coincident_frame['y-positions'] - coincident_frame['y-intersect_1'])**2)\n",
    "coincident_frame['distance-to-IC_2'] = np.sqrt((coincident_frame['x-positions'] - coincident_frame['x-intersect_2'])**2 + \n",
... 19001 more lines ...
Attachment 4: IceCube_Relative_to_ARA_Stations.ipynb
{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "a950b9af",
   "metadata": {},
   "source": [
    "**This script is simply for me to calculate the location of IceCube relative to the origin of any ARA station**\n",
    "\n",
    "The relevant documentation to understand the definitions after the imports can be found in https://elog.phys.hawaii.edu/elog/ARA/130712_170712/doc.pdf"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "b926e2e3",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "901b442b",
   "metadata": {},
   "outputs": [],
   "source": [
    "#Definitions of translations in surveyor's coordinates:\n",
    "\n",
    "t_IC_to_ARAg = np.array([-24100, 1700, 6400])\n",
    "t_ARAg_to_A1 = np.array([16401.71, -2835.37, -25.67])\n",
    "t_ARAg_to_A2 = np.array([13126.7, -8519.62, -18.72])\n",
    "t_ARAg_to_A3 = np.array([9848.35, -2835.19, -12.7])\n",
    "\n",
    "#Definitions of rotations from surveyor's axes to the ARA Station's coordinate systems\n",
    "\n",
    "R1 = np.array([[-0.598647, 0.801013, -0.000332979], [-0.801013, -0.598647, -0.000401329], \\\n",
    "               [-0.000520806, 0.0000264661, 1]])\n",
    "R2 = np.array([[-0.598647, 0.801013, -0.000970507], [-0.801007, -0.598646,-0.00316072 ], \\\n",
    "               [-0.00311277, -0.00111477, 0.999995]])\n",
    "R3 = np.array([[-0.598646, 0.801011, -0.00198193],[-0.801008, -0.598649,-0.00247504], \\\n",
    "               [-0.00316902, 0.000105871, 0.999995]])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ab2d3206",
   "metadata": {},
   "source": [
    "**Using these definitions, I should be able to calculate the location of IceCube relative to each ARA station by:**\n",
    "\n",
    "$$\n",
    "\\vec{r}_{A 1}^{I C}=-R_1\\left(\\vec{t}_{I C}^{A R A}+\\vec{t}_{A R A}^{A 1}\\right)\n",
    "$$\n",
    "\n",
    "We have a write-up of how to get this. Contact salcedogomez.1@osu.edu if you need that.\n",
    "\n",
    "Alex had done this already, he got that \n",
    "\n",
    "$$\n",
    "\\vec{r}_{A 1}^{I C}=-3696.99^{\\prime} \\hat{x}-6843.56^{\\prime} \\hat{y}-6378.31^{\\prime} \\hat{z}\n",
    "$$\n",
    "\n",
    "Let me verify that I get the same"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "912163d2",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "IC coordinates relative to A1 (in):  [-3696.98956579 -6843.55800868 -6378.30926681]\n",
      "IC coordinates relative to A1 (m):  [-1127.13096518 -2086.4506124  -1944.60648378]\n",
      "Distance of IC from A1 (m):  3066.788996234438\n"
     ]
    }
   ],
   "source": [
    "IC_A1 = -R1 @ np.add(t_ARAg_to_A1, t_IC_to_ARAg).T\n",
    "print(\"IC coordinates relative to A1 (in): \", IC_A1)\n",
    "print(\"IC coordinates relative to A1 (m): \", IC_A1/3.28)\n",
    "print(\"Distance of IC from A1 (m): \", np.sqrt((IC_A1[0]/3.28)**2 + (IC_A1[1]/3.28)**2 + (IC_A1[2]/3.28)**2))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f9c9f252",
   "metadata": {},
   "source": [
    "Looks good!\n",
    "\n",
    "Now, I just get the other ones:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "8afa27c6",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "IC coordinates relative to A2 (in):  [ -1100.33577313 -12852.0589083   -6423.00776043]\n",
      "IC coordinates relative to A2 (m):  [ -335.46822352 -3918.31064277 -1958.2340733 ]\n",
      "Distance of IC from A2 (m):  4393.219537890439\n"
     ]
    }
   ],
   "source": [
    "IC_A2 = -R2 @ np.add(t_ARAg_to_A2, t_IC_to_ARAg).T\n",
    "print(\"IC coordinates relative to A2 (in): \", IC_A2)\n",
    "print(\"IC coordinates relative to A2 (m): \", IC_A2/3.28)\n",
    "print(\"Distance of IC from A2 (m): \", np.sqrt((IC_A2[0]/3.28)**2 + (IC_A2[1]/3.28)**2 + (IC_A2[2]/3.28)**2))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "9959d0a4",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "IC coordinates relative to A3 (in):  [ -7609.73440732 -12079.45719852  -6432.31164368]\n",
      "IC coordinates relative to A3 (m):  [-2320.04097784 -3682.76134101 -1961.07062307]\n",
      "Distance of IC from A3 (m):  4774.00452685144\n"
     ]
    }
   ],
   "source": [
    "IC_A3 = -R3 @ np.add(t_ARAg_to_A3, t_IC_to_ARAg).T\n",
    "print(\"IC coordinates relative to A3 (in): \", IC_A3)\n",
    "print(\"IC coordinates relative to A3 (m): \", IC_A3/3.28)\n",
    "print(\"Distance of IC from A3 (m): \", np.sqrt((IC_A3[0]/3.28)**2 + (IC_A3[1]/3.28)**2 + (IC_A3[2]/3.28)**2))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "093dff67",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}