Week #174

26 – 29 May

The weekend was okay-ish productive in word count, but considering the topic, which needs a lot of reading, I'm satisfied. On Tuesday I work on some scripts and classes I prepared, making them easier for others to use. The reason is the analysis meeting on Wednesday, where I volunteered to share some findings and scripts of my own in a 'short' presentation¹. Because people seem interested, I take the time to explain things properly.

After Wednesday morning, I can continue with my own work. I'm refining some old plots and creating new ones, working through points on my to-do list. Eventually, I hit another showstopper: the back propagation seems off.

As a filter criterion, I wanted to propagate a combined particle candidate backwards and see where it ends up. Ideally, that should be close to the production point (which I know roughly), and I can throw away candidates that land too far off. For some reason though, almost every candidate seems off. Moreover, the distribution doesn't look right: I would expect a peak around the expected value plus some noise from wrong combinations further away.
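The idea behind the cut is simple; a minimal sketch (with hypothetical helper names, not the actual framework calls) could look like this:

    // Sketch of the distance-based filter (hypothetical names).
    // 'poca' is the candidate's back-propagated point of closest
    // approach, however the propagator in use delivers it.
    #include <cmath>

    struct Vec3 { double x, y, z; };

    double distance(const Vec3& a, const Vec3& b) {
        return std::sqrt((a.x - b.x) * (a.x - b.x) +
                         (a.y - b.y) * (a.y - b.y) +
                         (a.z - b.z) * (a.z - b.z));
    }

    // Keep a candidate only if its back-propagated point lies within
    // 'cut' of the (roughly known) production point.
    bool passesOriginCut(const Vec3& poca, const Vec3& production, double cut) {
        return distance(poca, production) < cut;
    }

With a correct propagation, the distances entering such a cut should show exactly the structure described above: a peak near the production point and a tail of wrong combinations.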

To narrow the problem down, I produce more plots. Unfortunately, everything looks fine up to the point of propagation. Something to discuss at next week's meeting with the professor.

  1. The password is the usual one.

Week #173

18 – 22 May

For the first half of the week I'm still fiddling with software tools, looking for a combination that I'm satisfied with. Basically the same as last week, but with a result on Wednesday: a new track finder and a new propagator, both with useful options that actually produce proper output.

After that, I can finally take on my original task and dig into plotting data. I update some functions that handle recurring tasks, as well as the scripts submitting jobs to the cluster. There, I find out that my jobs are taking too long and getting killed repeatedly. Thus, I shrink the job size and extend my resubmission script, making it easier to find missing pieces. With the new setup, I produce a bunch of data and can redo some older plots that I intend to use in my thesis.

Other than that, we have a first analysis meeting/workshop/discussion on tips and tricks Andi found worth sharing. Sharing knowledge like this to make everyone's workflow more productive is a good idea, even though I already knew most of it. Thursday is meeting day, as always. Ah, and then we made a doctoral hat for a colleague from China who graduates on Sunday. Strange rhythm, but okay. As for me, I focused on writing my thesis over the weekend.


Week #172

11 – 13 May

Huh, a short week. Holiday on Thursday and a bridge day on Friday. I start where I finished last week: trying to find out why the track filter task messes things up, at least sometimes. I present the status on Wednesday in the hyperon online meeting, even though it is not the proper place (the computing meeting was cancelled). After that I also prepare a bug report for the developers.

In the meantime I try varying some software versions, mainly of the main framework and a propagator. Ultimately I find that the new propagator works better than the old one, but it causes other parts of the simulation chain to fail. Argh: one problem solved, the next one appears.

On Wednesday I have the next meeting with my professor. Basically I present my findings on the framework's behavior, but I also want to have at least some plots to show. So the evening before, I program until after midnight to get something ready on the question that sparked the discussion at the last meeting.

For the long weekend I intend to work on my first introductory chapter for my thesis. This involves a lot of reading, so progress is slow. But steady, at least.


Week #171

4 – 7 May

With the help of the event display I'm finally able to get more tangible insight into the data. It soon becomes obvious that something strange is going on with the track finding. I use ideal track finding, which takes its information from the simulated Monte Carlo data and should therefore give the exact result. Instead, the tracks deviate quite significantly from the original input.

After digging into it with my supervisor, we find out that spurious hits from a tracking detector (GEM) foul the tracking. They seem to be ghost hits that are created when two one-dimensional detector layers are combined to give two-dimensional information. For just one hit this is unambiguous, but with two hits one gets four combinations in total: two real hits and two ghosts.
For future tracking algorithms, this will be solved by using only meaningful hit information and discarding the rest. Ideal tracking, on the other hand, is supposed to just assign hits without any further algorithms. Thus we decide to simply ignore hits that have two simulated Monte Carlo points.
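To illustrate the combinatorics (a toy sketch with made-up coordinates, not the actual detector code): building 2D points pairs every hit from the x-measuring layer with every hit from the y-measuring layer, so two particles yield four candidates.

    #include <cstdio>
    #include <vector>

    // Toy illustration of ghost hits: two 1D strip layers measure x and y
    // separately, so 2D points can only be formed by pairing every x with
    // every y. Two particles then give four combinations: two real, two ghosts.
    int main() {
        std::vector<double> xStrips = {1.0, 4.0}; // hits in the x layer
        std::vector<double> yStrips = {2.0, 5.0}; // hits in the y layer
        for (double x : xStrips)
            for (double y : yStrips)
                std::printf("candidate 2D hit: (%.1f, %.1f)\n", x, y);
        // Perhaps only (1.0, 2.0) and (4.0, 5.0) are real; the detector
        // alone cannot tell, which is why the ghosts pollute the tracking.
        return 0;
    }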

This solves one problem; another is created by the track filter, which builds a proper track from a collection of hits. Without the track filter the parameters look legitimate, but afterwards they are sometimes off. Which, in my understanding, shouldn't happen, since I'm running with ideal recognition.
I present my findings on both at the Thursday meeting. And after that, the weekend starts for me, because I'm getting married on Saturday. A busy weekend ahead.


Week #170

27 – 30 April

On Thursday the next meeting with the professor is scheduled, so I'm trying to get some requested plots ready by then. One plot I'm particularly interested in is the probability ratio of the best to the second-best fit. The data-producing classes have been ready for two weeks; now I'm starting to analyze the data coming out of them.

With a bunch of plots and diagrams ready, I enter the discussion on Thursday. During it, we realize that something seems off: the fitting produces strange results and needs to be investigated further. Since the bare numbers are not helpful here, I give the event display another try.

This is software that displays (simulated) data in a three-dimensional viewer, giving a tangible view of the abstract data. The viewer didn't work on my MacBook before, so I poke around a bit until I update the software to the most recent version. Then, for some reason, it finally works, and I'm good to go for some more detailed investigations.

Friday is off because of a holiday. I use the day to finish the second chapter of my thesis, about the readout system.


Week #169

20 – 24 April

The weekend was productive and I got some parts of my readout-system chapter written. On Monday I discuss my findings on the closest distance from a helix to a point. Implementing this seems like a good idea, but I'm not going to; instead, I should use real propagators to get the information.

With help from some colleagues, I get the propagators running and store three different distances: the distance at the point of closest approach from a real propagator, the distance at the point where the propagator crosses the xy plane at z = 0 (which should be similar to the first), and the distance to a point on a helix at z = 0, parameterized by the position and momentum information. The first should be the most faithful to reality, while the second should still be accurate but is no longer the point of closest approach. The last is expected to be the fastest because it is purely analytic, but its accuracy is not guaranteed. Hence, I plan to compare the three approaches.
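For the third, analytic variant, the computation boils down to evaluating the helix at the z = 0 plane. A toy sketch (made-up numbers, and the sign conventions for charge and field are my assumption, not the framework's):

    #include <cmath>
    #include <cstdio>

    // Toy sketch: analytic helix from a state vector, evaluated where it
    // crosses z = 0, and the transverse distance from that point to the
    // nominal interaction point at the origin. Assumes a homogeneous
    // solenoid field along z; units: GeV/c, Tesla, metres, charge in e.
    int main() {
        double x0 = 0.02, y0 = -0.01, z0 = 0.15; // position [m]
        double px = 0.4, py = 0.1, pz = 0.6;     // momentum [GeV/c]
        double q = 1.0, Bz = 2.0;                // charge [e], field [T]

        double pT    = std::hypot(px, py);
        double phi0  = std::atan2(py, px);
        double omega = -0.3 * q * Bz / pT;       // signed curvature [1/m]

        // Transverse arc length such that z(s) = 0; s < 0 means
        // propagating backwards, towards the production point.
        double s = -z0 * pT / pz;

        double x = x0 + (std::sin(phi0 + omega * s) - std::sin(phi0)) / omega;
        double y = y0 - (std::cos(phi0 + omega * s) - std::cos(phi0)) / omega;

        std::printf("distance at z = 0: %f m\n", std::hypot(x, y));
        return 0;
    }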

On Thursday, I finish all of this and have a running version. I send some jobs to the cluster to produce a large amount of data, only to find out that there is a bug with certain combinations of numbers. After the bugfix on Friday, I run the jobs again so I can start the analysis on Monday.


Week #168

13 – 17 April

I continue with the project started last week: a class to calculate the probability ratio. While at it, I port into the class some other functions that were previously used with the ROOT interpreter. Progress is quite good, so on Wednesday I'm able to finish and have a ratio value for each particle combination.

On Thursday, I stay at home to write and stand by in case I need to drive to Bochum. The next day I'm back in Jülich to start on the next big task on the list: the distance to a propagated track.

The idea here is that a combined candidate has to originate from the interaction region. Using the momentum and position information from the reconstruction, one can propagate the particle back towards the presumed origin at (0, 0, 0), the interaction point. The propagation should continue until the track is closest to the interaction point, and that distance is stored. It shouldn't be large; otherwise the track probably comes from a wrong combination.

My first idea is to use the momentum and position information to define a helix, which is approximately the path a charged particle takes in a solenoidal magnetic field like the one in PANDA. From this helix it is possible to calculate the distance to a point, and this should be minimizable, at least numerically. I find a paper describing exactly this, so I read it to gauge the effort needed to implement it.
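The numeric minimization itself is not much code. A bare-bones sketch (hypothetical helix parameterization and numbers; the paper's treatment is more refined):

    #include <cmath>
    #include <cstdio>

    // Helix in a homogeneous Bz field: position after transverse arc length s.
    struct Helix {
        double x0, y0, z0;   // starting position [m]
        double phi0, tanDip; // initial azimuth, pz/pT
        double omega;        // signed curvature [1/m]

        void at(double s, double& x, double& y, double& z) const {
            x = x0 + (std::sin(phi0 + omega * s) - std::sin(phi0)) / omega;
            y = y0 - (std::cos(phi0 + omega * s) - std::cos(phi0)) / omega;
            z = z0 + tanDip * s;
        }
    };

    // Squared distance from the helix point at arc length s to a target point.
    double dist2(const Helix& h, double s, double tx, double ty, double tz) {
        double x, y, z;
        h.at(s, x, y, z);
        return (x - tx) * (x - tx) + (y - ty) * (y - ty) + (z - tz) * (z - tz);
    }

    int main() {
        Helix h{0.02, -0.01, 0.15, std::atan2(0.1, 0.4), 1.456, -1.456};
        // Ternary search for the arc length minimizing the distance to the
        // interaction point (0,0,0); valid while dist2 is unimodal in [a,b].
        double a = -1.0, b = 1.0;
        for (int i = 0; i < 100; ++i) {
            double m1 = a + (b - a) / 3, m2 = b - (b - a) / 3;
            if (dist2(h, m1, 0, 0, 0) < dist2(h, m2, 0, 0, 0)) b = m2; else a = m1;
        }
        std::printf("closest distance: %f m\n", std::sqrt(dist2(h, a, 0, 0, 0)));
        return 0;
    }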


Week #167

7 – 9 April

A short week in terms of being at the institute. Up to and including Monday, I wrote quite a bit and finished my first chapter. I also started an outline for the next chapter.

Back in Jülich, I’m preparing some plots I can show my professor on Wednesday for our bi-weekly meeting. This is a productive one with lots of ideas on how to proceed. Two main tasks I’ll look into in the next weeks: a probability ratio and a distance from a propagated track to a point.

First, I start with the probability ratio. When reconstructing, one often requires particles to originate from the same point in space, a so-called vertex. One possibility in the analysis is to slightly alter the tracks of the final-state particles so that they share a common vertex. This process is called vertex fitting, and it yields a probability for how likely the altered track is, given the measurement uncertainties.

If more than one particle combination exists, one has to weigh the likelihood of one combination against the others. My approach is to calculate the ratio of the current fit's probability to the best one's and check whether this ratio is far off. If it is, the current candidate has a much lower probability than the most likely one and is thus probably a wrong combination.
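As a sketch of the criterion (hypothetical names and cut value; the real code works on the framework's candidate objects):

    #include <algorithm>
    #include <vector>

    // Sketch of the probability-ratio cut. 'probs' holds the vertex-fit
    // probability of every candidate combination in one event (non-empty).
    std::vector<bool> passRatioCut(const std::vector<double>& probs,
                                   double minRatio) {
        double best = *std::max_element(probs.begin(), probs.end());
        std::vector<bool> pass;
        for (double p : probs)
            // Keep a candidate only if it is not much less probable
            // than the best combination in the event.
            pass.push_back(p / best >= minRatio);
        return pass;
    }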

I start implementing this by trying to get an ordered list of probabilities. It turns out to be more complicated than originally thought because the interpreter I'm using is not very capable when it comes to more specific C++ features. Therefore, I'm putting this into a class of its own, but it doesn't get finished this week.
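The ordered list itself is straightforward in standard C++; templated STL code like the following is exactly what the old ROOT interpreter (CINT) tends to struggle with, hence the move to a class. A minimal sketch:

    #include <algorithm>
    #include <cstddef>
    #include <utility>
    #include <vector>

    // Sort candidate indices by vertex-fit probability, highest first.
    std::vector<std::pair<std::size_t, double>>
    orderedProbabilities(const std::vector<double>& probs) {
        std::vector<std::pair<std::size_t, double>> ordered;
        for (std::size_t i = 0; i < probs.size(); ++i)
            ordered.emplace_back(i, probs[i]);
        std::sort(ordered.begin(), ordered.end(),
                  [](const std::pair<std::size_t, double>& a,
                     const std::pair<std::size_t, double>& b) {
                      return a.second > b.second;
                  });
        return ordered;
    }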

Friday is off, and the weekend is full of big celebrations with friends and family, so not much writing :(


Week #166

30 March – 2 April

On Monday I head to the institute to get some organizational stuff done, but also to prepare my office for the work coming on Wednesday. I use some spare time to gather more papers for the chapter I'm currently writing.

Tuesday and Wednesday I stay at home to greet some workmen who have to work in my flat, and because my office in Jülich is getting a new ceiling. I use the time to write, and I'm quite pleased with the result.

Thursday I’m back in Jülich again, preparing a talk that I give after lunch. In some free minutes I squeeze in some words for my thesis. But that’s basically it for the Thursday, but Thursdays are always bad in terms of productivity.

From Friday onwards it's the Easter weekend, which for me means a lot of time to write. Very good!