| ID | Date | Author | Subject | Project |
|
Draft
|
Thu Mar 19 15:36:48 2026 |
Brian Clark | Advice for Using the Ray Trace Correlator | Analysis | If you are trying to use the Ray Trace Correlator with AraRoot, you will probably encounter some issues as you go. Here is some advice that Carl Pfendner found, and Brian Clark compiled.
Please note: it is extremely important that the AntennaInfo.sqlite table in araROOT contain the ICRR versions of both Testbed and Station1. Testbed has fallen out of the habit of being included in the SQL table, and Station1 is the ICRR (earliest) version of A1, distinct from the ATRI version, which is logged as ARA01. Missing entries cause seg faults in the initial setup of the timing and geometry arrays that appear unrelated to the geometry files themselves. If you get a seg fault in the "setupSizes" function or in the Detector call of the "setupPairs" function, checking your SQL file is a good idea. araROOT branch 3.13 has a source table with Testbed and Station1 included.
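A quick way to sanity-check the SQL file: station names are stored as plain text inside an sqlite file, so a binary-tolerant grep against your actual AntennaInfo.sqlite (its path depends on your araROOT install) will show whether the ICRR entries are present. The sketch below runs on a stand-in file so it is self-contained:

```shell
# Stand-in file for the demo; with araROOT, grep its real AntennaInfo.sqlite.
db=AntennaInfo.demo
printf 'Testbed\nStation1\nARA01\n' > "$db"
# Both ICRR-era names should appear; a count of 0 means the table lacks them.
grep -ac 'Testbed'  "$db"
grep -ac 'Station1' "$db"
rm -f "$db"
```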
Which combination of Makefile/Makefile.arch/StandardDefinitions.mk works can, frustratingly, be machine specific. Sometimes the best StandardDefinitions.mk is the one found in the make_timing_arrays example.
Common Things to Check
1: Did you "make install" the Ray Trace Correlator after you made it?
2: Do you have the setup.txt file?
3: Do you have the "data" directory?
Common Errors
1: If the Ray Trace Correlator compiles, and you execute a binary, and get the following:
******** Begin Correlator ********, this one!
Pre-icemodel test
terminate called after throwing an instance of 'std::out_of_range'
what(): basic_string::substr
Aborted
Check to make sure you have the "data" directory.
|
|
|
50
|
Wed Jun 12 12:10:05 2024 |
Jacob Weiler | How to install AraSim on OSC | Software | # Installing AraSim on OSC
Re-adding this because I realized it was deleted when I went looking for it
Quick Links:
- https://github.com/ara-software/AraSim # AraSim github repo (bottom has installation instructions that are sort of right)
- Once AraSim is downloaded: AraSim/UserGuideTex/AraSimGuide.pdf (the manual for AraSim); it might have to be downloaded if you can't view PDFs where you write code
Step 1:
We need to add the dependencies. AraSim needs multiple different packages to run correctly. The easiest way to get these on OSC without a headache is to add the following to the .bashrc for your user:
cvmfs () {
    module load gnu/4.8.5
    export CC=`which gcc`
    export CXX=`which g++`
    if [ $# -eq 0 ]; then
        local version="trunk"
    elif [ $# -eq 1 ]; then
        local version=$1
    else
        echo "cvmfs: takes up to 1 argument, the version to use"
        return 1
    fi
    echo "Loading cvmfs for AraSim"
    echo "Using /cvmfs/ara.opensciencegrid.org/${version}/centos7/setup.sh"
    source "/cvmfs/ara.opensciencegrid.org/${version}/centos7/setup.sh"
    #export JUPYTER_CONFIG_DIR=$HOME/.jupyter
    #export JUPYTER_PATH=$HOME/.local/share/jupyter
    #export PYTHONPATH=/users/PAS0654/alansalgo1/.local/bin:/users/PAS0654/alansalgo1/.local/bin/pyrex:$PYTHONPATH
}
If you want to view my bashrc
- /users/PAS1977/jacobweiler/.bashrc
Reload .bashrc
- source ~/.bashrc
Step 2:
Go to directory that you want to put AraSim and type:
- git clone https://github.com/ara-software/AraSim.git
This will download the github repo
Step 3:
We need to use make and load sourcing
- cd AraSim
- cvmfs
- make
wait and it should compile the code
Step 4:
We want to do a test run with 100 neutrinos to make sure that it does *actually* run.
Try: - ./AraSim SETUP/setup.txt
This errored for me (and probably will for you as well).
Switch from the frequency domain to the time domain in setup.txt:
- cd SETUP
- open setup.txt
- scroll to bottom
- Change SIMULATION_MODE = 1
- save
- cd ..
- ./AraSim SETUP/setup.txt
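The manual edit above can also be scripted. This is a sketch on a stand-in file so it runs anywhere; in a real checkout you would run the sed line on SETUP/setup.txt (the KEY=value line format is what AraSim setup files use, but check the file afterwards):

```shell
# Stand-in setup file for the demo; use SETUP/setup.txt in a real checkout.
printf 'SIMULATION_MODE=0\nNNU=100\n' > setup.demo.txt
# Flip to time domain (SIMULATION_MODE=1); tolerate optional spaces around "=".
sed -i -E 's/^SIMULATION_MODE[[:space:]]*=[[:space:]]*[0-9]+/SIMULATION_MODE=1/' setup.demo.txt
grep SIMULATION_MODE setup.demo.txt   # prints: SIMULATION_MODE=1
rm -f setup.demo.txt
```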
This should run quickly and now you have AraSim setup! |
|
|
49
|
Thu Sep 14 22:30:06 2023 |
Jason Yao | How to profile a C++ program | Software | This guide is modified from section (d) of the worksheet inside Module 10 of Phys 6810 Computational Physics (Spring 2023).
NOTE: gprof does not work on macOS. Please use a linux machine (such as OSC)
To use gprof, compile and link the relevant codes with the -pg option:
Take a look at the Makefile make_hello_world and modify both the CFLAGS and LDFLAGS lines to include -pg
Compile and link the script by typing
make -f make_hello_world
Execute the program
./hello_world.x
With the -pg flags, the execution will generate a file called gmon.out that is used by gprof.
The program has to exit normally (e.g. we can't stop with a ctrl-C).
Warning: Any existing gmon.out file will be overwritten.
Run gprof and save the output to a file (e.g., gprof.out) by
gprof hello_world.x > gprof.out
We should at this point see a text file called gprof.out which contains the profile of hello_world.cpp
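Putting the steps together, here is a self-contained sketch of the whole session. It writes its own tiny source file so it can be run anywhere; with the class makefile you would instead run "make -f make_hello_world" and profile hello_world.x:

```shell
# Write a deliberately slow program so something shows up in the profile.
cat > demo_prof.cpp <<'EOF'
#include <cmath>
double work() {
    double s = 0;
    for (int i = 0; i < 2000000; ++i) s += std::sqrt(static_cast<double>(i));
    return s;
}
int main() { return work() < 0; }
EOF
g++ -pg -O0 -o demo_prof.x demo_prof.cpp   # -pg at compile AND link time
./demo_prof.x                              # a normal exit writes gmon.out
gprof demo_prof.x gmon.out > gprof.out     # text profile of that run
head -n 5 gprof.out
rm -f demo_prof.cpp demo_prof.x gmon.out gprof.out
```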
vim gprof.out |
| Attachment 1: hello_world.cpp
|
#include <iostream>
#include <thread>
#include <chrono>
#include <cmath>
using namespace std;
void nap(){
// usleep(3000);
// this_thread::sleep_for(30000ms);
for (int i=0;i<1000000000;i++){
// note: i^2 would be bitwise XOR in C++, not i squared
double j = sqrt(double(i)*double(i));
(void)j; // avoid an unused-variable warning under -Wall
}
}
int main(){
cout << "taking a nap" << endl;
nap();
cout << "hello world" << endl;
}
|
| Attachment 2: make_hello_world
|
SHELL=/bin/sh
# Note: Comments start with #. $(FOOBAR) means: evaluate the variable
# defined by FOOBAR= (something).
# This file contains a set of rules used by the "make" command.
# This makefile $(MAKEFILE) tells "make" how the executable $(COMMAND)
# should be created from the source files $(SRCS) and the header files
# $(HDRS) via the object files $(OBJS); type the command:
# "make -f make_program"
# where make_program should be replaced by the name of the makefile.
#
# Programmer: Dick Furnstahl (furnstahl.1@osu.edu)
# Latest revision: 12-Jan-2016
#
# Notes:
# * If you are ok with the default options for compiling and linking, you
# only need to change the entries in section 1.
#
# * Defining BASE determines the name for the makefile (prepend "make_"),
# executable (append ".x"), zip archive (append ".zip") and gzipped
# tar file (append ".tar.gz").
#
# * To remove the executable and object files, type the command:
# "make -f $(MAKEFILE) clean"
#
# * To create a zip archive with name $(BASE).zip containing this
# makefile and the SRCS and HDRS files, type the command:
# "make -f $(MAKEFILE) zip"
#
# * To create a gzipped tar file with name $(BASE).tar.gz containing this
# makefile and the source and header files, type the command:
# "make -f $(MAKEFILE) tarz"
#
# * Continuation lines are indicated by \ with no space after it.
# If you get a "missing separator" error, it is probably because there
# is a space after a \ somewhere.
#
###########################################################################
# 1. Specify base name, source files, header files, input files
###########################################################################
# The base for the names of the makefile, executable command, etc.
BASE= hello_world
# Put all C++ (or other) source files here. NO SPACES after continuation \'s.
SRCS= \
hello_world.cpp
# Put all header files here. NO SPACES after continuation \'s.
HDRS= \
# Put any input files you want to be saved in tarballs (e.g., sample files).
INPFILE= \
###########################################################################
# 2. Generate names for object files, makefile, command to execute, tar file
###########################################################################
# *** YOU should not edit these lines unless to change naming conventions ***
OBJS= $(addsuffix .o, $(basename $(SRCS)))
MAKEFILE= make_$(BASE)
COMMAND= $(BASE).x
TARFILE= $(BASE).tar.gz
ZIPFILE= $(BASE).zip
###########################################################################
# 3. Commands and options for different compilers
###########################################################################
#
# Compiler parameters
#
# CXX Name of the C++ compiler to use
# CFLAGS Flags to the C++ compiler
# CWARNS Warning options for C++ compiler
# F90 Name of the fortran compiler to use (if relevant)
# FFLAGS Flags to the fortran compiler
# LDFLAGS Flags to the loader
# LIBS A list of libraries
#
CXX= g++
CFLAGS= -g
CWARNS= -Wall -W -Wshadow -fno-common
MOREFLAGS= -Wpedantic -Wpointer-arith -Wcast-qual -Wcast-align \
-Wwrite-strings -fshort-enums
# add relevant libraries and link options
LIBS=
# LDFLAGS= -lgsl -lgslcblas
LDFLAGS=
###########################################################################
# 4. Instructions to compile and link, with dependencies
###########################################################################
all: $(COMMAND)
.SUFFIXES:
.SUFFIXES: .o .mod .f90 .f .cpp
#%.o: %.mod
# This is the command to link all of the object files together.
# For fortran, replace CXX by F90.
$(COMMAND): $(OBJS) $(MAKEFILE)
	$(CXX) -o $(COMMAND) $(OBJS) $(LDFLAGS) $(LIBS)
# Command to make object (.o) files from C++ source files (assumed to be .cpp).
# Add $(MOREFLAGS) if you want additional warning options.
%.o: %.cpp $(HDRS) $(MAKEFILE)
	$(CXX) -c $(CFLAGS) $(CWARNS) -o $@ $<
# Commands to make object (.o) files from Fortran-90 (or beyond) and
# Fortran-77 source files (.f90 and .f, respectively).
.f90.mod:
	$(F90) -c $(F90FLAGS) -o $@ $<
.f90.o:
	$(F90) -c $(F90FLAGS) -o $@ $<
.f.o:
	$(F90) -c $(FFLAGS) -o $@ $<
##########################################################################
# 5. Additional tasks
##########################################################################
# Delete the program and the object files (and any module files)
clean:
	/bin/rm -f $(COMMAND) $(OBJS)
	/bin/rm -f $(MODIR)/*.mod
# Pack up the code in a compressed gnu tar file
tarz:
	tar cfvz $(TARFILE) $(MAKEFILE) $(SRCS) $(HDRS) $(MODIR) $(INPFILE)
# Pack up the code in a zip archive
zip:
	zip -r $(ZIPFILE) $(MAKEFILE) $(SRCS) $(HDRS) $(MODIR) $(INPFILE)
##########################################################################
# That's all, folks!
##########################################################################
|
|
|
48
|
Thu Jun 8 16:29:45 2023 |
Alan Salcedo | Doing IceCube/ARA coincidence analysis | | These documents contain information on how to run IceCube/ARA coincidence simulations and analysis. All technical information on where the codes are stored and how to use them is detailed in the technical note. Other supporting information for physics understanding is in the PowerPoint slides. The technical note will direct you to other documents in this elog where you may need supplemental information. |
| Attachment 1: IceCube_ARA_Coincidence_Analysis___Technical_Note.pdf
|
| Attachment 2: ICARA_Coincident_Events_Introduction.pptx
|
| Attachment 3: ICARA_Analysis_Template.ipynb
|
{
"cells": [
{
"cell_type": "markdown",
"id": "bcdfb138",
"metadata": {},
"source": [
"# IC/ARA Coincident Simulation Events Analysis"
]
},
{
"cell_type": "markdown",
"id": "c7ad7c80",
"metadata": {},
"source": [
"### Settings and imports"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "b6915a86",
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<style>.container { width:75% !important; }</style>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"## Makes this notebook maximally wide\n",
"from IPython.display import display, HTML\n",
"display(HTML(\"<style>.container { width:75% !important; }</style>\"))"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "6ef9e20b",
"metadata": {},
"outputs": [],
"source": [
"## Author: Alex Machtay (machtay.1@osu.edu)\n",
"## Modified by: Alan Salcedo (salcedogomez.1@osu.edu)\n",
"## Date: 4/26/23\n",
"\n",
"## Purpose:\n",
"### This script will read the data files produced by AraRun_corrected_MultStat.py to make histograms and relevant plots of\n",
"### neutrino events passing through icecube and detected by ARA station (in AraSim)\n",
"\n",
"\n",
"## Imports\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"import sys\n",
"sys.path.append(\"/users/PAS0654/osu8354/root6_18_build/lib\") # go to parent dir\n",
"sys.path.append(\"/users/PCON0003/cond0068/.local/lib/python3.6/site-packages\")\n",
"import math\n",
"import argparse\n",
"import glob\n",
"import pandas as pd\n",
"pd.options.mode.chained_assignment = None # default='warn'\n",
"from mpl_toolkits.mplot3d import Axes3D\n",
"import jupyterthemes as jt"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "f24b8292",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"/bin/bash: jt: command not found\r\n"
]
}
],
"source": [
"## Set style for the jupyter notebook\n",
"!jt -t grade3 -T -N -kl -lineh 160 -f code -fs 14 -ofs 14 -cursc o\n",
"jt.jtplot.style('grade3', gridlines='')"
]
},
{
"cell_type": "markdown",
"id": "a5a812d6",
"metadata": {},
"source": [
"### Set constants\n",
"#### These are things like the position of ARA station holes, the South Pole, IceCube's position in ARA station's coordinates, and IceCube's radius"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "cdb851de",
"metadata": {},
"outputs": [],
"source": [
"## What IceCube (IC) station are you analyzing\n",
"station = 1\n",
"\n",
"## What's the radius around ARA where neutrinos were injected\n",
"inj_rad = 5 #in km\n",
"\n",
"## IceCube's center relative to each ARA station\n",
"IceCube = [[-1128.08, -2089.42, -1942.39], [-335.812, -3929.26, -1938.23],\n",
" [-2320.67, -3695.78, -1937.35], [-3153.04, -1856.05, -1942.81], [472.49, -5732.98, -1922.06]] #IceCube's position relative to A1, A2, or A3\n",
"\n",
"#To calculate this, we need to do some coordinate transformations. Refer to this notebook to see the calculations: \n",
"# IceCube_Relative_to_ARA_Stations.ipynb (found here - http://radiorm.physics.ohio-state.edu/elog/How-To/48) \n",
"\n",
"## IceCube's radius\n",
"IceCube_radius = 564.189583548 #Modelling IceCube as a cylinder, we find the radius with V = h * pi*r^2 with V = 1x10^9 m^3 and h = 1x10^3 m "
]
},
{
"cell_type": "markdown",
"id": "af5c6fc2",
"metadata": {},
"source": [
"### Read the data\n",
"\n",
"#### Once we import the data, we'll make dataframes to concatenate it and make some calculations"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "921f61f8",
"metadata": {},
"outputs": [],
"source": [
"## Import data files\n",
"\n",
"#Here, it's from OSC Connolly's group project space\n",
"source = '/fs/project/PAS0654/IceCube_ARA_Coincident_Search/AraSim/outputs/Coincident_Search_Runs/20M_GZK_5km_S1_correct' \n",
"num_files = 200 # Number of files to read in from the source directory\n",
"\n",
"## Make a list of all of the paths to check \n",
"file_list = []\n",
"for i in range(1, num_files + 1):\n",
" for name in glob.glob(source + \"/\" + str(i) + \"/*.csv\"):\n",
" file_list.append(str(name))\n",
" #file_list gets paths to .csv files\n",
" \n",
"## Now read the csv files into a pandas dataframe\n",
"dfs = []\n",
"for filename in file_list:\n",
" df = pd.read_csv(filename, index_col=None, header=0) #Store each csv file into a pandas data frame\n",
" dfs.append(df) #Append the csv file to store all of them in one\n",
"frame = pd.concat(dfs, axis=0, ignore_index = True) #Concatenate pandas dataframes "
]
},
{
"cell_type": "markdown",
"id": "813fb3ee",
"metadata": {},
"source": [
"### Work with the data\n",
"\n",
"#### All the data from our coincidence simulations (made by AraRun_MultStat.sh) is now stored in a pandas data frame that we can work with"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "8b449583",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"19800000\n",
"19799802\n"
]
}
],
"source": [
"## Now let's clean up our data and calculate other relevant things\n",
"print(len(frame))\n",
"frame = frame[frame['weight'].between(0,1)] #Filter out events with ill-defined weight (should be between 0 and 1)\n",
"print(len(frame))\n",
"\n",
"frame['x-positions'] = (frame['Radius (m)'] * np.cos(frame['Azimuth (rad)']) * np.sin(frame['Zenith (rad)']))\n",
"frame['y-positions'] = (frame['Radius (m)'] * np.sin(frame['Azimuth (rad)']) * np.sin(frame['Zenith (rad)']))\n",
"frame['z-positions'] = (frame['Radius (m)'] * np.cos(frame['Zenith (rad)']))\n",
"\n",
"## The energy in eV will be 10 raised to the number in the file, multiplied by 1-y (y is inelasticity)\n",
"frame['Nu Energies (eV)'] = np.power(10, (frame['Energy (log10) (eV)']))\n",
"frame['Mu Energies (eV)'] = ((1-frame['Inelasticity']) * frame['Nu Energies (eV)']) #Energy of the produced lepton\n",
"#Here the lepton is not necessarily a muon, hence the label 'Mu Energies (eV)' may be misleading\n",
"\n",
"## Get a frame with only coincident events\n",
"coincident_frame = frame[frame['Coincident'] == 1] \n",
"\n",
"## And a frame for strictly events *detected* by ARA\n",
"detected_frame = frame[frame['Detected'] == 1]\n",
"\n",
"\n",
"## Now let's calculate the energy of the lepton when reaching IceCube (IC)\n",
"\n",
"# To do this correctly, I need to find exactly the distance traveled by the muon and apply the equation\n",
"# I need the trajectory of the muon to find the time it takes to reach IceCube, then I can find the distance it travels in that time\n",
"# I should allow events that occur inside the icecube volume to have their full energy (but pretty much will happen anyway)\n",
"## a = sin(Theta)*cos(Phi)\n",
"## b = sin(Theta)*sin(Phi)\n",
"## c = cos(Theta)\n",
"## a_0 = x-position\n",
"## b_0 = y-position\n",
"## c_0 = z-position\n",
"## x_0 = IceCube[0]\n",
"## y_0 = IceCube[1]\n",
"## z_0 = IceCube[2]\n",
"## t = (-(a*(a_0-x_0) + b*(b_0-y_0))+D**0.5)/(a**2+b**2)\n",
"## D = (a**2+b**2)*R_IC**2 - (a*(b_0-y_0)+b*(a_0-x_0))**2\n",
"## d = ((a*t)**2 + (b*t)**2 + (c*t)**2)**0.5\n",
"\n",
"## Trajectories\n",
"coincident_frame['a'] = (np.sin(coincident_frame['Theta (rad)'])*np.cos(coincident_frame['Phi (rad)']))\n",
"coincident_frame['b'] = (np.sin(coincident_frame['Theta (rad)'])*np.sin(coincident_frame['Phi (rad)']))\n",
"coincident_frame['c'] = (np.cos(coincident_frame['Theta (rad)']))\n",
"\n",
"## Discriminant\n",
"coincident_frame['D'] = ((coincident_frame['a']**2 + coincident_frame['b']**2)*IceCube_radius**2 - \n",
" (coincident_frame['a']*(coincident_frame['y-position (m)']-IceCube[station-1][1])- ## I think this might need to be a minus sign!\n",
" coincident_frame['b']*(coincident_frame['x-position (m)']-IceCube[station-1][0]))**2)\n",
"\n",
"## Interaction time (this is actually the same as the distance traveled, at least for a straight line)\n",
"coincident_frame['t_1'] = (-(coincident_frame['a']*(coincident_frame['x-position (m)']-IceCube[station-1][0])+\n",
" coincident_frame['b']*(coincident_frame['y-position (m)']-IceCube[station-1][1]))+\n",
" np.sqrt(coincident_frame['D']))/(coincident_frame['a']**2+coincident_frame['b']**2)\n",
"coincident_frame['t_2'] = (-(coincident_frame['a']*(coincident_frame['x-position (m)']-IceCube[station-1][0])+\n",
" coincident_frame['b']*(coincident_frame['y-position (m)']-IceCube[station-1][1]))-\n",
" np.sqrt(coincident_frame['D']))/(coincident_frame['a']**2+coincident_frame['b']**2)\n",
"\n",
"## Intersection coordinates\n",
"coincident_frame['x-intersect_1'] = (coincident_frame['a'] * coincident_frame['t_1'] + coincident_frame['x-position (m)'])\n",
"coincident_frame['y-intersect_1'] = (coincident_frame['b'] * coincident_frame['t_1'] + coincident_frame['y-position (m)'])\n",
"coincident_frame['z-intersect_1'] = (coincident_frame['c'] * coincident_frame['t_1'] + coincident_frame['z-position (m)'])\n",
"\n",
"coincident_frame['x-intersect_2'] = (coincident_frame['a'] * coincident_frame['t_2'] + coincident_frame['x-position (m)'])\n",
"coincident_frame['y-intersect_2'] = (coincident_frame['b'] * coincident_frame['t_2'] + coincident_frame['y-position (m)'])\n",
"coincident_frame['z-intersect_2'] = (coincident_frame['c'] * coincident_frame['t_2'] + coincident_frame['z-position (m)'])\n",
"\n",
"## Distance traveled (same as the parametric time, at least for a straight line)\n",
"coincident_frame['d_1'] = (np.sqrt((coincident_frame['a']*coincident_frame['t_1'])**2+\n",
" (coincident_frame['b']*coincident_frame['t_1'])**2+\n",
" (coincident_frame['c']*coincident_frame['t_1'])**2))\n",
"coincident_frame['d_2'] = (np.sqrt((coincident_frame['a']*coincident_frame['t_2'])**2+\n",
" (coincident_frame['b']*coincident_frame['t_2'])**2+\n",
" (coincident_frame['c']*coincident_frame['t_2'])**2))\n",
"\n",
"## Check if it started inside and set the distance based on if it needs to travel to reach icecube or not\n",
"coincident_frame['Inside'] = (np.where((coincident_frame['t_1']/coincident_frame['t_2'] < 0) & (coincident_frame['z-position (m)'].between(-2450, -1450)), 1, 0))\n",
"coincident_frame['preliminary d'] = (np.where(coincident_frame['d_1'] <= coincident_frame['d_2'], coincident_frame['d_1'], coincident_frame['d_2']))\n",
"coincident_frame['d'] = (np.where(coincident_frame['Inside'] == 1, 0, coincident_frame['preliminary d']))\n",
"\n",
"## Check if the event lies in the cylinder\n",
"coincident_frame['In IC'] = (np.where((np.sqrt((coincident_frame['x-position (m)']-IceCube[station-1][0])**2 + (coincident_frame['y-position (m)']-IceCube[station-1][1])**2) < IceCube_radius) &\n",
" ((coincident_frame['z-position (m)']).between(-2450, -1450)) , 1, 0))\n",
"\n",
"#Correct coincident_frame to only have electron neutrinos inside IC\n",
"coincident_frame = coincident_frame[(((coincident_frame['In IC'] == 1) & (coincident_frame['flavor'] == 1)) | (coincident_frame['flavor'] == 2) | (coincident_frame['flavor'] == 3)) ]\n",
"\n",
"#Now calculate the lepton energies when they reach IC\n",
"coincident_frame['IC Mu Energies (eV)'] = (coincident_frame['Mu Energies (eV)'] * np.exp(-10**-5 * coincident_frame['d']*100)) # convert d from meters to cm\n",
"coincident_frame['weighted energies'] = (coincident_frame['weight'] * coincident_frame['Nu Energies (eV)'])"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "f1757103",
"metadata": {
"scrolled": false
},
"outputs": [],
"source": [
"## Add possible Tau decay to the frame\n",
"coincident_frame['Tau decay'] = ''\n",
"# Again, the label 'Tau Decay' may be misleading because not all leptons may be taus\n",
"\n",
"## Calculate distance from the interaction point to its walls and keep the shortest (the first interaction with the volume)\n",
"\n",
"coincident_frame['distance-to-IC_1'] = np.sqrt((coincident_frame['x-positions'] - coincident_frame['x-intersect_1'])**2 + \n",
" (coincident_frame['y-positions'] - coincident_frame['y-intersect_1'])**2)\n",
"coincident_frame['distance-to-IC_2'] = np.sqrt((coincident_frame['x-positions'] - coincident_frame['x-intersect_2'])**2 + \n",
... 19001 more lines ...
|
| Attachment 4: IceCube_Relative_to_ARA_Stations.ipynb
|
{
"cells": [
{
"cell_type": "markdown",
"id": "a950b9af",
"metadata": {},
"source": [
"**This script is simply for me to calculate the location of IceCube relative to the origin of any ARA station**\n",
"\n",
"The relevant documentation to understand the definitions after the imports can be found in https://elog.phys.hawaii.edu/elog/ARA/130712_170712/doc.pdf"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "b926e2e3",
"metadata": {},
"outputs": [],
"source": [
"import numpy as np"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "901b442b",
"metadata": {},
"outputs": [],
"source": [
"#Definitions of translations in surveyor's coordinates:\n",
"\n",
"t_IC_to_ARAg = np.array([-24100, 1700, 6400])\n",
"t_ARAg_to_A1 = np.array([16401.71, -2835.37, -25.67])\n",
"t_ARAg_to_A2 = np.array([13126.7, -8519.62, -18.72])\n",
"t_ARAg_to_A3 = np.array([9848.35, -2835.19, -12.7])\n",
"\n",
"#Definitions of rotations from surveyor's axes to the ARA Station's coordinate systems\n",
"\n",
"R1 = np.array([[-0.598647, 0.801013, -0.000332979], [-0.801013, -0.598647, -0.000401329], \\\n",
" [-0.000520806, 0.0000264661, 1]])\n",
"R2 = np.array([[-0.598647, 0.801013, -0.000970507], [-0.801007, -0.598646,-0.00316072 ], \\\n",
" [-0.00311277, -0.00111477, 0.999995]])\n",
"R3 = np.array([[-0.598646, 0.801011, -0.00198193],[-0.801008, -0.598649,-0.00247504], \\\n",
" [-0.00316902, 0.000105871, 0.999995]])"
]
},
{
"cell_type": "markdown",
"id": "ab2d3206",
"metadata": {},
"source": [
"**Using these definitions, I should be able to calculate the location of IceCube relative to each ARA station by:**\n",
"\n",
"$$\n",
"\\vec{r}_{A 1}^{I C}=-R_1\\left(\\vec{t}_{I C}^{A R A}+\\vec{t}_{A R A}^{A 1}\\right)\n",
"$$\n",
"\n",
"We have a write-up of how to get this. Contact salcedogomez.1@osu.edu if you need that.\n",
"\n",
"Alex had done this already, he got that \n",
"\n",
"$$\n",
"\\vec{r}_{A 1}^{I C}=-3696.99^{\\prime} \\hat{x}-6843.56^{\\prime} \\hat{y}-6378.31^{\\prime} \\hat{z}\n",
"$$\n",
"\n",
"Let me verify that I get the same"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "912163d2",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"IC coordinates relative to A1 (in): [-3696.98956579 -6843.55800868 -6378.30926681]\n",
"IC coordinates relative to A1 (m): [-1127.13096518 -2086.4506124 -1944.60648378]\n",
"Distance of IC from A1 (m): 3066.788996234438\n"
]
}
],
"source": [
"IC_A1 = -R1 @ np.add(t_ARAg_to_A1, t_IC_to_ARAg).T\n",
"print(\"IC coordinates relative to A1 (in): \", IC_A1)\n",
"print(\"IC coordinates relative to A1 (m): \", IC_A1/3.28)\n",
"print(\"Distance of IC from A1 (m): \", np.sqrt((IC_A1[0]/3.28)**2 + (IC_A1[1]/3.28)**2 + (IC_A1[2]/3.28)**2))"
]
},
{
"cell_type": "markdown",
"id": "f9c9f252",
"metadata": {},
"source": [
"Looks good!\n",
"\n",
"Now, I just get the other ones:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "8afa27c6",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"IC coordinates relative to A2 (in): [ -1100.33577313 -12852.0589083 -6423.00776043]\n",
"IC coordinates relative to A2 (m): [ -335.46822352 -3918.31064277 -1958.2340733 ]\n",
"Distance of IC from A2 (m): 4393.219537890439\n"
]
}
],
"source": [
"IC_A2 = -R2 @ np.add(t_ARAg_to_A2, t_IC_to_ARAg).T\n",
"print(\"IC coordinates relative to A2 (in): \", IC_A2)\n",
"print(\"IC coordinates relative to A2 (m): \", IC_A2/3.28)\n",
"print(\"Distance of IC from A2 (m): \", np.sqrt((IC_A2[0]/3.28)**2 + (IC_A2[1]/3.28)**2 + (IC_A2[2]/3.28)**2))"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "9959d0a4",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"IC coordinates relative to A3 (in): [ -7609.73440732 -12079.45719852 -6432.31164368]\n",
"IC coordinates relative to A3 (m): [-2320.04097784 -3682.76134101 -1961.07062307]\n",
"Distance of IC from A3 (m): 4774.00452685144\n"
]
}
],
"source": [
"IC_A3 = -R3 @ np.add(t_ARAg_to_A3, t_IC_to_ARAg).T\n",
"print(\"IC coordinates relative to A3 (in): \", IC_A3)\n",
"print(\"IC coordinates relative to A3 (m): \", IC_A3/3.28)\n",
"print(\"Distance of IC from A3 (m): \", np.sqrt((IC_A3[0]/3.28)**2 + (IC_A3[1]/3.28)**2 + (IC_A3[2]/3.28)**2))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "093dff67",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.12"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
|
|