  Important Plots, Tables, and Measurements, all entries
  ID | Date | Author | Type | Category | Subject | Project
  29 | Tue Feb 27 23:13:15 2018 | Amy Connolly | Other | Simulation | Plots for QC sites

Hi All,

We need to revive the icemcQC and AraSimQC pages, and I thought that in the meantime we could keep a list and/or associated code for plots we'd like to see go in there.

Here are a couple of plots I just made; the attached code is forstevenanddave.cc:

event weight (attenuation factor) vs. the angle the neutrino makes wrt up, for an observer standing under the ANITA balloon

event weight (attenuation factor) vs. the angle the RF makes wrt up, for an observer on the ANITA balloon

BC: Adding old Stephen Hoover plots from ANITA elog: https://www.phys.hawaii.edu/elog/anita_notes/32

 

  28 | Tue Oct 10 11:04:05 2017 | Brian Clark | Analysis | Analysis | Testbed Channel Mapping and Antenna Information | ARA

This is the Testbed polarization channel mapping, i.e., the polarization result you get if you use the getGraphfromRFChan function:

/* Channel mappings for the testbed
Channel 0: H Pol
Channel 1: H Pol
Channel 2: V Pol
Channel 3: V Pol
Channel 4: V Pol
Channel 5: H Pol
Channel 6: V Pol
Channel 7: H Pol
Channel 8: V Pol
Channel 9: H Pol
Channel 10: V Pol
Channel 11: H Pol
Channel 12: H Pol
Channel 13: H Pol
Channel 14: Surface
Channel 15: Surface
*/

Also, the Testbed has a somewhat bizarre menagerie of antennas. Here's how to understand it:

Check out the table of antennas for the testbed (table 1 in both papers): https://arxiv.org/pdf/1105.2854.pdf , https://arxiv.org/pdf/1404.5285.pdf
 
Basically the testbed was weird. There are four bowtie slotted cylinders deployed at ~30 m (these are the "deep hpol") and four bicones deployed at ~30 m (these are the "deep vpol"). So that's 8 total: four V, four H.
 
Then, there are two quad slotted cylinders deployed at ~30 m; because they are different from the bowties, they are technically hpol but aren't counted as deep hpol. That brings us to 10: four V, six H.
 
Then, there are two discones at ~2 m, which count as vpol, but because they are different from the bicones and deployed shallow, they aren't "deep." That brings us to 12 total: six vpol, six hpol.
 
Then, there are two batwings at ~2 m, which count as hpol, but because they are different from the bowties and the quad slots and deployed shallow, they're in a class of their own. That brings us to 14: six vpol, eight hpol.
 
Finally, there are two fat dipoles right on the surface, which count as neither polarization, bringing us up to 16 total.
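The running tally above can be checked with a quick sketch (the antenna names follow the papers cited above):

```python
# Testbed antenna census as described above: (type, depth, count, pol class).
antennas = [
    ("bowtie slotted cylinder", "deep (~30 m)",    4, "H"),
    ("bicone",                  "deep (~30 m)",    4, "V"),
    ("quad slotted cylinder",   "deep (~30 m)",    2, "H"),
    ("discone",                 "shallow (~2 m)",  2, "V"),
    ("batwing",                 "shallow (~2 m)",  2, "H"),
    ("fat dipole",              "surface",         2, "surface"),
]

# Tally channels per polarization class.
totals = {}
for name, depth, count, pol in antennas:
    totals[pol] = totals.get(pol, 0) + count

print(totals)                # {'H': 8, 'V': 6, 'surface': 2}
print(sum(totals.values()))  # 16
```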
  27 | Fri Oct 6 15:15:53 2017 | Hannah Hasan | Other | Simulation | Plotting ShelfMC Parameter Space | Other

Attached are instructions and scripts for carrying out a parameter space scan with ShelfMC, the simulation package for the ARIANNA detector.

Because some of the plotted outputs looked like colored stripes and did not offer any insight into how effective volume changed with some variables, I made some changes to the simulation and plotting scripts so that different maximum, minimum, and increment values can be chosen for each variable. Now, rather than having fixed, hard-coded values for all variables, the parameter space scan and plotting are more flexible for use with variable inputs.
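A minimal sketch of the kind of per-variable range specification described here (the parameter names are placeholders, not ShelfMC's actual variables):

```python
import numpy as np

# Hypothetical per-variable scan ranges: (min, max, increment).
# The names "param_a"/"param_b" are placeholders for illustration only.
scan_ranges = {
    "param_a": (0.0, 1.0, 0.25),
    "param_b": (100.0, 500.0, 100.0),
}

def scan_values(vmin, vmax, step):
    """Inclusive range of scan points from vmin to vmax in steps of `step`.
    The half-step pad makes the endpoint inclusive despite float rounding."""
    return np.arange(vmin, vmax + 0.5 * step, step)

for name, (vmin, vmax, step) in scan_ranges.items():
    print(name, scan_values(vmin, vmax, step))
```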

Quote:

I am trying to write a script that will plot a 2d histogram of effective volume versus two of ShelfMC's parameters.

The script prompts the user for which two parameters (out of five that we vary in our parameter space scan) to plot along the x- and y-axes, as well as what values to hold the other 3 parameters constant at. It then collects the necessary root files from simulation results, generates a plotting script, and runs the plotting script to produce a plot in pdf form.

After many struggles I have the script written to the point where it functions, but the plots don't look right. Some plots look like they could be actual data (like Veff_A_I), and others just look flat-out wrong (like Veff_R_S).

I have yet to pin down the cause of this, but hopefully will be able to sometime in the near future.

 

  24 | Sun Sep 17 20:10:21 2017 | Spoorthi Nagasamudram | Modeling | Simulation | A different way to implement ray tracing in AraSim (possibly even other simulations)

Hi,

I've attached a copy of some of the work I did over the summer on ray tracing and how I did it. Please let me know if you have any questions.

PS The plots are sideways for some reason. Sorry about that.

 

  23 | Fri Sep 15 23:25:07 2017 | Amy Connolly (for Suren) | Modeling | General | Using spherical harmonics to search for ideal antenna beam patterns | Other

Suren put a bunch of info about the work he did this summer on github:

https://github.com/osu-particle-astrophysics/Spherical-Harmonics

He added this in an email:

I added more spherical harmonics into the Chi^2 code, so now we can test up to L=12. Adding the additional harmonics brought the chi^2 for the first frequency gain fit down about 30%. However, I also fit the first phase, and it seems to be much harder to fit, especially at the poles. The Chi^2 is in the 200s. Therefore, I am unsure how you wish to proceed with the phases. Perhaps just avoiding the 0 degree and 5 degree theta cones is the way to go. This would involve modifying the chi^2 code and the data to not have those 0 and 5 degree cones.
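A minimal numpy sketch of the masking idea Suren suggests, i.e., dropping the 0 and 5 degree theta cones from the chi^2 sum (the arrays here are made-up placeholders, not the actual fit data):

```python
import numpy as np

# Placeholder data: measured values, spherical-harmonic model prediction,
# and the theta (degrees) of each point. Not the actual fit arrays.
theta = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
data  = np.array([1.0, 0.8, 0.5, 0.4, 0.3])
model = np.array([0.2, 0.3, 0.45, 0.42, 0.28])
sigma = 0.05  # assumed uniform uncertainty

# Exclude the poorly fit 0 and 5 degree cones before computing chi^2.
mask = ~np.isin(theta, [0.0, 5.0])
chi2 = np.sum(((data[mask] - model[mask]) / sigma) ** 2)
print(chi2)  # ~1.32 for these made-up numbers
```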

 

  22 | Wed Sep 13 09:28:15 2017 | Amy Connolly | Analysis | Analysis | Info on generating pseudoexperiments, calculating likelihoods from them and finding p-values | Other

Will point to a bunch of papers and stuff here.

  21 | Fri Sep 8 16:51:28 2017 | Julie Rolla | Modeling | Other | Jordan's Code Antenna Optimization/Evolution | Other

Below is the explanation of this code from Jordan. Attached is the code in its last form.

 


Hi Lucas,

 

I'll try to fit a bunch of stuff into this email, so hopefully some of it is useful/interesting.

 

As far as the evolutionary algorithm goes, I'll send you the code and let you look at it yourself. The paper I am developing it from is https://www.researchgate.net/publication/282857432_INTEGRATION_OF_REMCOM_XFDTD_AND_OCTAVE_FOR_THE_AUTOMATED_OPTIMAL_DESIGN_OF_MICROWAVE_ANTENNAS. My code has a couple of differences: I didn't look at the past k iterations like the paper does; instead I have 5 parallel designs being run and look at how many of those improve the output. This is subtly different and might not be good, because it considers each design at the same time, so the mean doesn't have time to adjust before it is reevaluated, if that makes sense. Anyway, just something to think about.

 

The basic structure of the code is that it is run with command line arguments; you compile it with whatever name (I usually call it evolved_dipole, but it doesn't matter). So it is run as

 

$./evolved_dipole --start

 

for the first time in a run and then every subsequent time in a run

 

$ ./evolved_dipole --cont

 

The --start will create the initial population, and record the parameters in a .csv file called handshake.csv. The --cont will read in a file called handshook.csv that theoretically will have the output of the antenna simulations for each antenna.

 

The first obvious thing I can think of that is missing in this script is that it doesn't write a number to a txt file in the watch folder, but I'll explain that later. The second obvious thing that I didn't add to this is the checking of the exit condition d/do < some value (see paper if this is confusing). The third thing I can think of is that I don't have any constraints on the values that the script produces. It will probably be valuable to include some constraints on these values at the very least so that you don't get negative distances. In addition, this script should be easily generalizable by increasing NVAR and changing the mean and deviation vectors. The code is also not particularly pretty so I apologize for that. I tried to comment a decent amount.

 

Then, there is the XF script. The XF script should read in the antenna parameters, create the simulations, start them, and then output the data. One thing I never ended up handling here is that the data output can only be done after the simulations have finished running, so you'll need to figure out how to make that work if you use XF; I'll include the scripts separately. For the output.xmacro script you will need to specify the simulation and run id each time it is used. It might be possible to just have a while loop that waits x seconds and then reevaluates, with the condition being whether the data is available, but that might not be the best way.

 

Then, we get to the controlling portion of the code. I have a batch script which controls a bash script (batch submission for clusters is weird). The bash script theoretically should load the xf module, run evolved_dipole --start, watch a folder to see if a certain number is written to a file, run the XF simulation, watch again for a different number, run evolved_dipole --cont (or AraSim first, eventually), and then loop, running until the exit condition is reached, at which point --cont will need to write a different value than before and everything should end (I don't know that I've included this part yet).
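A minimal bash sketch of the watch-loop Jordan describes (the watch-file name and signal values here are assumptions for illustration, not the actual ones used):

```shell
#!/bin/bash
# Poll a watch file until it contains the expected signal value.
# The file name and signal values are assumptions for illustration.
wait_for_signal() {
    local file="$1" expected="$2"
    while [ "$(cat "$file" 2>/dev/null)" != "$expected" ]; do
        sleep 1
    done
}

# Sketch of the driver loop described above (commented out, since the
# evolver binary and XF invocation are site-specific):
# ./evolved_dipole --start
# while true; do
#     wait_for_signal watch/status.txt 1   # designs ready for XF
#     # ...run the XF simulation here...
#     wait_for_signal watch/status.txt 2   # simulation output ready
#     ./evolved_dipole --cont
#     [ "$(cat watch/status.txt)" = "done" ] && break
# done
```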

 

The big problem here is that calling the XF script from the command line is more difficult than it originally appeared. According to Remcom (the company that makes XF), you should be able to run an XF script from the command line with the option --execute-macro-script=/path/to/macro. However, this isn't supported (we think) by the version of XF that we have on OSC, so they are looking into (talk to Carl about this) updating XF and how much that would cost. I'm not entirely sure this is a solution either, because it requires calling the GUI for XF, and I'm not sure that can be done in a batch submission (clusters don't like running GUIs). Thus, it might be worthwhile to look into using NEC or HFSS, which I don't know anything about.

 

Have fun and let me know if you need any clarification or help,

 

Jordan

  20 | Fri Jul 28 17:43:20 2017 | Abdullah Alhag | Analysis | Analysis | GP algorithms

In this post, I will point out the advantages and disadvantages of the GP algorithms I came across, particularly Eureqa and HeuristicLab.

Eureqa is by far the fastest genetic programming software I came across. It is very simple and easy to use. It has some built-in fitness functions, and with some playing with the function being solved for and some other features, it is possible to write your own fitness function. Moreover, the software is available for free for academic use and on most platforms. Other features that come with the software are the ability to normalize the data in different ways and even handle outliers and missing data. The program supports a large collection of functions, including trig and more complex ones.

On the other hand, HeuristicLab is much slower than Eureqa but still far faster than Karoo-GP. The latest version of the software was released a year ago, and support for the software is fairly slow. It is only supported on Windows; however, there are plans to adapt it to Linux systems. The software supports many more features than Eureqa or Karoo, and even different regression and classification algorithms. You can also export the function ready to use in many programs such as MATLAB, Excel, Mathematica, and more. Another cool feature is that it shows you a tree of the function and the weight of each node (operation or operand); greener means the node has more weight, see attached. It should be noted that the software has a tendency to grow large trees, which can be fixed by changing the default max tree length and max tree depth. The software has a problem with the latest update of Windows 10: you will get a blue screen if you open too many windows, so be careful.

In both programs you can change how much of the data set goes to training and how much goes to testing; be sure to shuffle the data in HeuristicLab, as it will otherwise split the data into training and testing non-randomly. Both programs by default show a plot of the current function, with the x axis being the data entries (row numbers) and the y axis being the target values, along with a curve of the function's estimated values.
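The shuffle-before-split point can be illustrated with a small sketch (plain Python, not either tool's actual interface; the rows are made up):

```python
import random

# Toy dataset of (x, y) rows; the values are made up for illustration.
rows = [(i, 2 * i + 1) for i in range(10)]

# Shuffle before splitting so training and testing sets are drawn randomly,
# rather than taking the first 70% of rows as training data.
random.seed(0)
shuffled = rows[:]
random.shuffle(shuffled)

split = int(0.7 * len(shuffled))
train, test = shuffled[:split], shuffled[split:]
print(len(train), len(test))  # 7 3
```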

Below is a simple run of both programs on fake data that Prof. Amy gave me; see out.txt for the data.

To start, using Eureqa I got a few functions with an R^2 of 0.98 or more, which is very good.

First function: frequency = (571953.335547372*y + 15786079*x^2*y^4 + 297065746*x*y^2*asinh(x))/factorial(7.44918899410674 + x + y)

The first has an R^2 of 0.99 (1 means perfect fit) and a mean absolute error of 2.98 (0 means perfect fit; data dependent, not normalized); see plot1 for a 1D plot of the function's estimated values vs. target values.

 

Second function: frequency = (3569.91823898791*x*y - 149.144996501988 - 100.216589664235*x^2)/(5.26462242203202^x + 7.09216771444399^y*x^(1.3193170267439*x))

The second has an R^2 of 0.994 (1 means perfect fit) and a mean absolute error of 3.2 (0 means perfect fit; data dependent, not normalized); see plot2 for a 1D plot of the function's estimated values vs. target values.

Also attached is a 2D plot of the function against the data; the function plotted is the second one, but all are very similar, see Eq23.

 

Using HeuristicLab: the function below has an R^2 of 0.987, a mean absolute error of 3.6, and a normalized mean squared error of 0.012.

The function is: (((EXP((-1.3681014170483*'y')) * ((((-1.06504220396658) * (2.16142798579652*'x')) * (3.44831687407881*'y')) / ((((1.57186418519305*'y') + (2.15361749794796*'y')) / ((1.6912208581006*'y') / (EXP((1.80824695345446*'x')) * 16.3366330774664))) / ((((2.11818004168659*'x') * (1.10362178478116*'y')) - ((((-1.06504220396658) * (2.16142798579652*'x')) * (3.44831687407881*'y')) / ((((2.11818004168659*'x') + (2.15361749794796*'y')) / ((10.9740866421104 + (1.8106235953875*'y')) - (2.15361749794796*'y'))) / (((-7.8798958167) + (-6.76475761634751)) + ((2.87007061579651*'x') + (2.15361749794796*'y')))))) - (((((-8.85637334631747) * ((1.9238243855142*'y') - (1.01219957177297*'y'))) + (((-6.37085286789103) * 5.99856391145622) - ((-12.9565240969832) - 2.84841224458228))) - ((2.11818004168659*'x') * (1.10362178478116*'y'))) - (((2.11818004168659*'x') * ((((0.197306324191089*'y') + (0.255996267596584*'y')) - (2.16142798579652*'x')) - (((-1.06504220396658) * (2.16142798579652*'x')) / (12.2597910897177 / (1.25729305246107*'y'))))) - (EXP((-1.3681014170483*'y')) - ((((-6.29806655709512) * 6.39744830364858) / (12.2597910897177 / (0.728256926023423*'x'))) - (1.10362178478116*'y'))))))))) * 179.42788632856) + 2.24688690162535)

 

As you can see, HeuristicLab tends to generate functions which are extremely large; this one has a depth of 15 and a length of 150.

 

See plot3 for a 1D plot of the function's estimated values vs. target values (note that it is different from before because the data is shuffled), and three.jpg for the tree representation of the function showing the weight of each node; greener means more weight.
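The R^2 and mean absolute error figures quoted in this post can be computed generically as follows (a plain-Python sketch, not either tool's internal implementation; the toy values are made up):

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 means a perfect fit."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

def mean_abs_error(y_true, y_pred):
    """Mean absolute error: 0 means a perfect fit (not normalized)."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy target/estimate values for illustration only.
y_true = [1.0, 2.0, 3.0, 4.0]
y_pred = [1.1, 1.9, 3.2, 3.8]
print(r_squared(y_true, y_pred), mean_abs_error(y_true, y_pred))
```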

  18 | Wed Jul 5 19:22:52 2017 | Brian Clark and Ian Best | Lab Measurement | Hardware | Mac Addresses

This is a "bank" of mac addresses that we obtained for the lab. They were taken by Ian Best (one of Jim's students) in summer 2017. We purchased 25 AT24MAC402 EEPROMs (https://www.arrow.com/en/products/at24mac402-stum-t/microchip-technology) and used a SOT-23 breakout board and a bus pirate to retrieve their internal mac addresses.

If you take one, please note where you used it so no one tries to take the same one twice:

Serial Number EUI (Mac) Address Used?
0x0A70080064100461105CA000A0000000 FC:C2:3D:0D:A6:71 Spare ADAQ (ADAQF002)
0x0A70080064100460FC6DA000A0000000 FC:C2:3D:0D:A7:37 Spare ADAQ (ADAQF003)
0x0A700800641004611C8DA000A0000000 FC:C2:3D:0D:A8:7C Spare ADAQ (ADAQF004)
0x0A70080064100461E47FA000A0000000 FC:C2:3D:0D:BC:BF -- reserved for testing
0x0A700800641004611C9FA000A0000000 FC:C2:3D:0D:BE:02  
0x0A70080064100461D8D9A000A0000000 FC:C2:3D:0D:C0:2D  
0x0A700800641004611C6EA000A0000000 FC:C2:3D:0D:C6:62  
0x0A70080064100460F0A0A000A0000000 FC:C2:3D:0D:C8:AF  
0x0A70080064100461F870A000A0000000 FC:C2:3D:0D:DC:22  
0x0A700800641004612CC7A000A0000000 FC:C2:3D:0D:E9:F4 ARA3 ADAQ (ADAQG003)
0x0A70080064100461EC43A000A0000000 FC:C2:3D:0D:EF:78  
0x0A7008006410046228B7A000A0000000 FC:C2:3D:0D:FE:3D  
0x0A70080064100461F065A000A0000000 FC:C2:3D:0E:05:6F  
0x0A70080064100461E8CDA000A0000000 FC:C2:3D:0E:14:4F  
0x0A70080064100461E07AA000A0000000 FC:C2:3D:0E:3A:AB ARA6 ADAQ (ADAQG004)
0x0A700800641004611813A000A0000000 FC:C2:3D:0E:6B:08  
0x0A700800641004612014A000A0000000 FC:C2:3D:0E:6B:59  
0x0A70080064100461E415A000A0000000 FC:C2:3D:0E:75:AE  
0x0A70080064100461F034A000A0000000 FC:C2:3D:0E:CB:03 PUEO TURF 2 (rsvd)
0x0A700800641004623434A000A0000000 FC:C2:3D:0E:CB:23 PUEO TURF 2 (main)
0x0A70080064100462A037A000A0000000 FC:C2:3D:0E:D5:55 PUEO TURF 1 (rsvd)
0x0A700800641004610C0DA000A0000000 FC:C2:3D:0E:E9:4A PUEO TURF 1 (main)
0x0A70080064100461E825A000A0000000 FC:C2:3D:0E:E9:DC PUEO TURF 0 (rsvd)
0x0A700800641004617438A000A0000000 FC:C2:3D:0E:EA:9E PUEO TURF 0 (main)
  17 | Mon Apr 24 22:27:29 2017 | Brian Clark | Analysis | Analysis | Estimate of ARA Station-Year / Livetime | ARA

In response to a request by Amy, I made an estimate of the number of deep "station-years" of data obtained by ARA so far. This means roughly (# deep stations) * (# months livetime). This is very approximate, and only counts days where ARA has data in the storage vault on cobalt. It doesn't verify that cal pulsers are running, or that we actually have data for every hour of every day, etc.

Not accounting for 2013 A1 data, I get the following estimate. All I did was run "ls | wc -l" on all of the data directories to count the number of days.
ARA1: 285 days 2012 + 124 days 2014 + 29 days 2015 + 117 days 2016 + 0 days 2017 = 555 days total
ARA2: 211 days 2013 + 310 days 2014 + 345 days 2015 + 314 days 2016 + 109 days 2017 = 1289 days total
ARA3: 214 days 2013 + 303 days 2014 + 251 days 2015 + 292 days 2016 + 0 days 2017 = 1060 days total
So for the three stations that is 2904 days total, or ~8 station-years of data.
 
The spreadsheet with the calculation is attached, including which directories I searched over to make the count.
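The arithmetic above can be checked with a quick sketch:

```python
# Day counts per station per year, as tallied above with "ls | wc -l".
days = {
    "ARA1": [285, 124, 29, 117, 0],
    "ARA2": [211, 310, 345, 314, 109],
    "ARA3": [214, 303, 251, 292, 0],
}

totals = {station: sum(counts) for station, counts in days.items()}
grand_total = sum(totals.values())
print(totals)                          # {'ARA1': 555, 'ARA2': 1289, 'ARA3': 1060}
print(grand_total, grand_total / 365)  # 2904 station-days, ~8 station-years
```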
  16 | Sun Apr 23 14:54:50 2017 | Hannah Hasan | Other | Simulation | Plotting ShelfMC Parameter Space | Other

I am trying to write a script that will plot a 2d histogram of effective volume versus two of ShelfMC's parameters.

The script prompts the user for which two parameters (out of five that we vary in our parameter space scan) to plot along the x- and y-axes, as well as what values to hold the other 3 parameters constant at. It then collects the necessary root files from simulation results, generates a plotting script, and runs the plotting script to produce a plot in pdf form.

After many struggles I have the script written to the point where it functions, but the plots don't look right. Some plots look like they could be actual data (like Veff_A_I), and others just look flat-out wrong (like Veff_R_S).

I have yet to pin down the cause of this, but hopefully will be able to sometime in the near future.

  15 | Fri Apr 14 12:51:53 2017 | Brian Clark and Patrick Allison | Other | Hardware | ARAFE Master Documentation | ARA

Here is documentation on the ARAFE Master firmware and software design for the next generation of ARA stations. I include both the pdf and tex source code.

The firmware is located at: https://github.com/ara-daq-hw/ArafeMasterSoftware

The software is located at: https://github.com/ara-daq-hw/ArafeMasterSoftware

Revision History

2017.04.26: Typo fixes, fault curve addition, and python hex preparation instructions.

  14 | Fri Apr 14 12:51:27 2017 | Suren Gourapura and Brian Clark | Other | Hardware | ARAFE master Python communication | ARA

With Brian's help, I am writing the python serial commander code used to control and troubleshoot the ARAFE master board. I have worked on it for about 2 weeks so far, and progress is slow but measurable!

Recently, we powered on a channel and were able to measure the clock on an ARAFE board attached to it.

You can check out our progress here: https://github.com/ara-daq-hw/arafe-master/blob/master/python_serial_commander.py

  13 | Thu Mar 23 21:43:12 2017 | Abdullah Alhag | Analysis | Analysis | The results from running Karoo on the inelasticity data

See attached file.

  12 | Thu Mar 23 20:08:52 2017 | J. C. Hanson | Analysis | Analysis | Latest near-surface ice report

Hello!  See the attached report relating the compressibility of firn, the density profile, and the resulting index of refraction profile.  The gradient of the index of refraction profile determines the curvature of classically refracted rays.

  11 | Sun Mar 19 14:14:10 2017 | Amy Connolly | Analysis | Analysis | Search for interstellar/interplanetary travel by alien civilizations | Other

See the attached papers.  I wonder if we could look for these with ANITA and distinguish between natural and artificial origin.

Here are some estimates that I did last night.

The paper quotes that for an FRB at a distance of ~10-20 kpc, we would see S_nu = 10^10-10^11 Jy, where 1 Jy = 10^-26 W/m^2/Hz.

For BW = 1 GHz and taking 1 m^2 effective area antennas, we get 10^-7-10^-6 W = V^2/R where R = 50 Ohms; taking the lower end of the range of S_nu gives V = 2.2E-3 V.

For ANITA, the thermal noise is roughly V_rms = sqrt(k_B*T*BW*R) = sqrt(1.38E-23 * ~340 K * 10^9 Hz * 50 Ohms) = 1.5E-5 V.

So let's say roughly we can hope to see signals down to the thermal noise level.  Then as expected, an FRB in our own galaxy should be easily observable.

The paper estimates that in a galaxy there are ~10^-5 FRBs/day. Wikipedia tells me that within 3.59 Mpc there are 127 known galaxies. Given that the observed voltage goes like 1/r, the furthest ones in that group of 127 would be seen at voltages a factor of ~4 Mpc / 20 kpc = 200 lower, so the furthest ones would be right at the thermal noise level.
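A short sketch reproducing the two voltage estimates above:

```python
from math import sqrt

# FRB signal voltage: S_nu * BW * A_eff = V^2 / R, taking the lower end
# of the quoted flux range.
S_nu = 1e10 * 1e-26  # 10^10 Jy in W/m^2/Hz
BW = 1e9             # 1 GHz bandwidth
A = 1.0              # 1 m^2 effective area
R = 50.0             # ohms
V_signal = sqrt(S_nu * BW * A * R)
print(V_signal)      # ~2.2e-3 V

# Thermal noise voltage for ANITA: V_rms = sqrt(k_B * T * BW * R).
k_B = 1.38e-23
T = 340.0            # K
V_rms = sqrt(k_B * T * BW * R)
print(V_rms)         # ~1.5e-5 V
```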

When ANITA is at altitude, we perform our searches below horizontal, and can hope to see radio signals from horizontal down to the horizon at about -6 deg. So we view a sort of disk from 0 to 6 deg below horizontal in all directions in azimuth in payload coordinates. This intersects the galactic plane at some angle, but we can look not just for FRBs in our own galaxy but also in other galaxies, which we pretend are uniformly distributed in a spherical volume. If we could view from +90 deg to -90 deg we would be able to see in all directions, so the fraction of the sky we can see is about 6/180 = 1/30. So the probability that, over roughly 100 total days of livetime over all flights, ANITA sees an FRB can be estimated as:

Prob(seeing a FRB from within 3.59 Mpc)=10^-5 FRBs/day * 100 days * 127 galaxies * 1/30 = 0.004.

If the intensity received is on the upper end of the range given (S_nu=10^11 Jy), then maybe we can see out to 40 Mpc.  Assuming the same density of galaxies as within 4 Mpc, since we're looking in a disk the # of galaxies goes like r^2, so we get a factor of 100 increase in prob-> prob=0.4, not bad.
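The probability estimates can be sketched with the numbers quoted above:

```python
rate_per_galaxy = 1e-5  # FRBs per galaxy per day
livetime_days = 100     # total ANITA livetime over all flights
n_galaxies = 127        # known galaxies within 3.59 Mpc
sky_fraction = 6 / 180  # ~6 deg viewing band out of 180 deg

p_near = rate_per_galaxy * livetime_days * n_galaxies * sky_fraction
print(p_near)           # ~0.004

# At the upper flux end we might see out to ~40 Mpc instead of ~4 Mpc;
# for a disk geometry the galaxy count scales like r^2, i.e. a factor of 100.
p_far = p_near * (40 / 4) ** 2
print(p_far)            # ~0.4
```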

But we haven't accounted for the beaming of the FRBs. It appears at first glance from their paper that both the power and the beam width increase at lower frequencies (I'm not sure), and those two things would both be good for us, since FRBs have been observed above a GHz and so we could see them at lower frequencies.

Some things to look into:

Understanding their calculations of the beam width and power, and the frequency dependence

03/20:  It seems the beamwidth is based on some multiple of the diffraction limit lambda/D where D is the size of the source.

How they are proposing one would be able to distinguish between natural and artificially produced FRBs

03/20:  The beam would be shadowed by the sail, and maybe you would see fringes.  Also a repeater could be a signature of the relatively short acceleration and deceleration phase.

Is ANITA unique in how much sky we can see at these frequencies with this sensitivity?

Would the fact that we digitize so quickly help in distinguishing natural from artificial?

Is there any time when ANITA is oriented in such a way that we look along the plane of the MW and thus see the whole galaxy?

Can we extend analyses above horizontal and increase sensitivity that way?

Can we look for reflections from the surface?

Note that the signals would extend ~1 ms in time.

  10 | Wed Mar 15 17:18:51 2017 | J.C. Hanson | Analysis | Analysis | ARA2/3 Analysis: timing offsets for 12 faces, 4 pairs per face (square faces) | ARA

Hello!  Back to ARA analysis.

Whenever I attempt to decide if a signal was an incoming plane wave, I compute the planarity of the event by cyclically summing the time-differences in adjacent channels that form a polygon.  For square polygons, this looks like summing the time-difference between channels A and B, B and C, C and D, and D and A.  This sum should be zero for a plane wave, and a normal distribution for thermal noise.  I identify 12 faces within the cubical ARA detectors.  Using the Miller cubic crystal notation, the planes I use are the following: (001) (010) (100) plus opposites, (110) (101) (011) plus opposites.  For the first set, opposite means the other side of the cube, and for the second set, opposite means (T10) (T01) (0T1).
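A sketch of the cyclic planarity sum for one square face (the time-differences here are made-up numbers; in the real analysis each pair's time-difference would be measured independently, presumably by cross-correlation):

```python
def planarity(pair_dts):
    """Cyclic sum of measured time-differences between adjacent channel
    pairs around a polygon (A-B, B-C, C-D, D-A for a square face).
    Consistent plane-wave measurements sum to ~0; for thermal noise each
    pair's time-difference is independent, so the sum is normally
    distributed around zero instead."""
    return sum(pair_dts)

# Made-up, mutually consistent time-differences (ns) for one square face.
plane_wave_like = [3.5, -6.3, -5.1, 7.9]
print(planarity(plane_wave_like))  # ~0 (up to floating-point error)
```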

When a calibration pulser hits these surfaces, the wave should create a pulse waveform in each channel.  Computing the cyclic sum (planarity) for each of the twelve polygons, I usually get a number close to zero.  This must be a precision measurement, however.  We know where the calibration pulsers are, and we know where the channels are.  Thus, we can make a prediction for the timing corrections to each channel pair.  Each offset to the time-difference in a channel pair may be introduced by my analysis techniques, or some unknown systematic error in the detector.

My analysis code has a mode in which I can run over just tagged calibration pulses, in runs where there are a minimum number of tagged calibration pulse events.  I first check that there are at least 100 events in a run, and then I compute the mean and rms of 100 timing offsets for every channel pair.  The graphs below show the timing offsets versus time.  By applying these corrections to the data, calibration pulse events have planarities centered on zero, and this match improves with increasing amplitude.

Notice two things about these graphs: 1) Sometimes the data go haywire, because either thermal events were tagged as calibration pulses, or a channel died. 2) For good data, with small errors and small values (<10 ns), there seem to be linear trends that show drift in the station timing. This drift cannot be introduced by my analysis code.

The graphs are for ARA2 data, and ARA3 plots are coming.

  9 | Fri Mar 3 15:07:12 2017 | Spoorthi Nagasamudram | Analysis | Analysis | Attempted modeling of inelasticity distribution with karoo gp

This document summarizes my project with Karoo machine learning, where I used it to model the inelasticity distribution from the Connolly et al. cross-section paper. I plotted the four distinct functions that Karoo gave me (out of 100) against the data. The parameters I used were: tree depth = 5, number of generations = 20, and coefficients ranging from 0.1 to 0.5. I had issues getting each of the functions to fit the low y values, so in the future I think it would help to isolate just the low y values and see if Karoo can fit them properly. I also included plots for different energies, which showed that the functions did not fit a different energy very well. In the future, I'd like to see if I can improve Karoo's fitness functions to make the best-fit functions resemble the data more.


  7 | Fri Feb 17 19:48:28 2017 | J. C. Hanson | Modeling | Theory | Comparison figure for latest Askaryan Module paper | ARA

See attached.  I welcome any suggestions on the style.  This would be the caption:

 

"(a) The spectrum from ZHS (dark gray), {symbol representing mine} (black), and Eq. 16 of ARVZ (light gray), scaled by R/E_C.  The cascade width for {symbol representing mine} is a=1.5 m, with R=1000 m, F(omega) {not equal} 1 and LPM elongation (E_C = 100 TeV).  The thin black box at upper right encompasses cases for which theta=theta_C, and the thin black box at lower left encompasses cases for which theta {not equal} theta_C.  The cases are (from right to left) theta_C - 2.5 deg, theta_C - 5.0 deg, theta_C - 7.5 deg, and theta_C - 10.0 deg. (b) ..."

  6 | Fri Jan 20 15:50:50 2017 | J.C. Hanson | Modeling | General | Near-Surface Ice Modeling, Data and Ray-Tracing

See attached report.

  5 | Tue Jan 17 09:38:52 2017 | J.C. Hanson | Analysis | Analysis | n(z) from other section of ELOG

See attached.

  4 | Wed Jan 4 17:34:51 2017 | J. C. Hanson | Modeling | Analysis | Extending the AskaryanModule analytical formulae for arbitrary Moliere radii (improved form factors)

I've been working on a calculation to generalize the form factor (Eq. 26 of the attached paper) to include particles at wider lateral distances from the cascade axis.  Formulae that were single terms now become sums, as I choose to model the contribution from wide-ranging particles as a sum of exponentials rather than a single exponential distribution.  See the attached version of the paper (you can clone it using git: kingbee.mps.ohio-state.edu:/home/hanson.369/AskaryanPaper).  As an example of what I'm describing, see also the two attached plots.  The first is an original figure from the paper, where I model the lateral charge in the cascade with a single exponential distribution in rho'.  The other figure is a sum of exponentials.
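Schematically, the generalization replaces the single-exponential lateral profile with a weighted sum of exponentials (the notation here is illustrative, not necessarily the paper's):

```latex
% Original model: single exponential in the lateral coordinate rho'
f(\rho') \propto e^{-\rho'/\rho_0}
% Generalization: a weighted sum of exponentials for wide-ranging particles
f(\rho') \propto \sum_{i=1}^{N} w_i \, e^{-\rho'/\rho_i},
  \qquad \sum_{i=1}^{N} w_i = 1
```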

Finally, I've copied the latest reviewer comments below in bold font.

Reviewer #3: Reviewer Comments for Complex Analysis of Askaryan Radiation: A Fully Analytic Treatment including the LPM effect and Cascade Form Factor.

This study presents a code based on analytical calculations of Askaryan pulses. The authors have provided new form factors, essential for these calculations, derived from first principles. The release of a publicly available code is a welcome addition and the reviewer applauds the authors for doing this. The work done for this paper merits publication.
 
However, there is one major issue that needs to be addressed before publication. Given that the authors are presenting an open source code it seems that another section comparing their results to previous results in the existing literature is necessary. For example, Equation 39 is an improved and updated version of equation 34. The reader is left wondering what the impact of using Equation 34 vs. 39 is in their simulations. They should be compared explicitly in this paper. Some comparisons to prior results are done in Section 7.2 but it should be brought out of the appendices into a main section. Someone looking to apply this code will want to know in what cases it gives the same results as previously published results and in which cases it deviates significantly, if at all.

There are also some issues with the summary section. The first paragraph of Section 5 states "The fully analytic calculations and associated code require no a priori MC analysis, making them computationally efficient and accurate." This is simply not true. Using a parameterization based on MC analysis is just as efficient once it has been obtained. In any case, the analytic calculations have to be compared to MC analysis for validation. It is worth mentioning that the treatment of the LPM effect will be less accurate, since it is treated analytically as an elongation. Simulations presented in Alvarez-Muniz et al., PRD, 84, 103003 (2011) show that the LPM effect for ultra-high-energy showers tends to produce longitudinal profiles with random clumps of particles rather than a smooth elongation. Without an analytical way to accurately model the stochastic behavior of these clumps, the authors cannot claim this approach is more computationally efficient and accurate. The accuracy and efficiency of the calculations, particularly in comparison to previous work, have not been treated in the main body of the text. Therefore this conclusion is not supported by the body of the paper.

The last paragraph of this section states "Rejecting the thermal noise in favour of neutrino signals is an exercise in the mathematical analysis of thermal fluctuations [51]. Armed with a firm theoretical understanding of the Askaryan effect, this challenge is made easier." The use of a parametric approach is just as valid and leads to the same conclusion. The authors should focus on whether their treatment, whether it be analytical, parametric or MC based, provides a more accurate and efficient model rather than lauding the fact that it was derived analytically.

The following issue is mainly about style and presentation. I will not require the authors do this, as it is not the reviewer's job to rewrite the paper for the authors, but I strongly recommend it.  It is very difficult to follow what the original contributions to the calculations of Askaryan signals are. The paper spends far too much space presenting known results that can simply be referenced. Sections 3.1, 3.2, 3.3, and 3.4.1 can almost be cut altogether. The contents of these sections should be summarized into one short section referencing the material as appropriate, rather than reproducing the previously published results. The results and treatment in Section 3.4.2 seem out of place and should be the starting point of Section 4. This is apparent since Equations 27 and 34 are the same, and the result really only needs to be presented once. As far as the reviewer can tell, Equation 39 is the new result for the field. The paper should focus on presenting this derivation as succinctly as possible, with enough commentary for an expert to reproduce it, and move on to discussing the implications for simulations.

Reviewer #1:

I cannot accept the paper. The implementation of the LPM effect (one of the two new additions in the paper w.r.t. older literature, as stated in its title) is NOT physical.

I repeat my argument:

At low frequency the field is proportional to the total tracklength, i.e. proportional to the total area = integral[N(X) dX] under the longitudinal development, with N(X) the number of particles at depth X. The tracklength depends linearly on energy; in other words, it is practically constant at a fixed shower energy (shower-to-shower fluctuations of the tracklength are small). This can even be seen in the right panel of Fig. 9 of Cillis et al., to use the same reference as the authors used in their arguments. The shower tracklength at a fixed energy is due to low-energy physics at the few-MeV scale, and it is unaffected by the LPM effect.

As a consequence:

The conclusion: "The LPM effect is found to modify the low-frequency emission" is NOT correct.

Showing an enhancement by a factor larger than 5, at frequencies between 1 and 100 MHz, in the field at the Cherenkov angle of a 10 PeV shower with the LPM effect relative to a 10 PeV shower without the LPM effect (left panel of Fig. 9 of the current version of the paper) is NOT correct.
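A quick numerical sketch of Reviewer #1's tracklength argument (the toy Gaussian profiles and the elongation factor below are illustrative only, not taken from the paper or from Cillis et al.): two longitudinal profiles N(X) with equal area, one compact and one LPM-style elongated, give identical emission in the low-frequency limit, and the elongated profile is actually suppressed at higher frequencies:

```python
import numpy as np

# Depth grid X (arbitrary radiation lengths)
X = np.linspace(0.0, 100.0, 20001)
dX = X[1] - X[0]

def profile(Xmax, width):
    """Toy Gaussian longitudinal profile N(X) centered at Xmax."""
    return np.exp(-0.5 * ((X - Xmax) / width) ** 2)

N_compact = profile(20.0, 3.0)    # shower without LPM elongation
N_lpm = profile(40.0, 12.0)       # LPM-style elongated shower
# Rescale to equal area: same total tracklength = integral[N(X) dX]
N_lpm *= N_compact.sum() / N_lpm.sum()

def spectrum(N, f):
    """|Fourier component| of N(X) at spatial frequency f."""
    return abs(np.sum(N * np.exp(-2j * np.pi * f * X)) * dX)

# f -> 0: both reduce to the area under N(X), so the emission is identical
print(np.isclose(spectrum(N_compact, 0.0), spectrum(N_lpm, 0.0)))  # True
# Higher f: the elongated profile is suppressed, not enhanced
print(spectrum(N_lpm, 0.05) < spectrum(N_compact, 0.05))           # True
```

This is exactly why a low-frequency enhancement from a smooth LPM elongation alone would be surprising: the zero-frequency Fourier component sees only the area under N(X), which the elongation does not change.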

 

  1   Fri Dec 16 11:11:55 2016 J.C. HansonProblem FixedGeneralWelcome to ELOG :) 

Note: Please use a consistent name when you write your name as author.

J.C.Hanson
