16 – 20 March
Three months are over, it is PANDA meeting time again. So on Monday morning I drive to Gießen for three packed days of talks, discussions, and social evenings. It is good and it feels efficient. On Tuesday afternoon I give my talk in the hyperon session (password is the title of this blog), the first time I'm talking not about electronics but about physics.
On Wednesday I drive back home so that I can use at least the two remaining days for my work and writing. On one of the days I write a script that automatically generates a composed screenshot of the digital design. The problem is that exporting is not well supported, so one can only take screenshots of the current display. This works fine if the design is small, but at larger scales it just gives you a colored brick (the resolution is not high enough).
My idea is to move the current display of the zoomed design, take a screenshot, move it again, take another screenshot, and so on until I have a bunch of tiles covering the whole design. These then get merged into one big image. At the end of the day, I have it running. Unfortunately, the design itself was an old version, so I first have to generate the proper one.
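The core of the move-screenshot-repeat idea is just computing where to place the viewport for each tile. Here is a minimal sketch of that logic, assuming a fixed-size viewport that can be moved across the zoomed design; the names, sizes, and the actual screenshot/move calls are placeholders, since those depend on the layout tool.

```python
def tile_offsets(design_w, design_h, view_w, view_h):
    """Return the top-left (x, y) positions to move the viewport to so
    that the screenshots tile the whole design, clamped at the edges."""
    xs = list(range(0, max(design_w - view_w, 0) + 1, view_w))
    ys = list(range(0, max(design_h - view_h, 0) + 1, view_h))
    # make sure the last row/column of tiles reaches the right/bottom edge
    if xs[-1] + view_w < design_w:
        xs.append(design_w - view_w)
    if ys[-1] + view_h < design_h:
        ys.append(design_h - view_h)
    return [(x, y) for y in ys for x in xs]
```

For each returned offset one would move the display there, grab a screenshot, and finally paste the tiles at the same offsets into one large canvas (e.g. with an image library).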
9 – 13 March
On the weekend I started preparing the talk for the upcoming DPG meeting. I give a rehearsal talk on Wednesday, which goes quite well. Only small modifications are needed.
Otherwise, I started the week by continuing the analysis of my track filter. Its effects turn out to be slight but significant, at least for most particles in my decay. So it seems to be a good idea to keep it in, since it is (probably) reducing the number of unreasonably reconstructed tracks.
On Thursday we have our biweekly supervisor meeting. The others go in first, so I have a quick look at what the particle identification does in my case. By the time of the meeting I have a first plot ready, good. After the meeting we have the majority of the DPG test talks.
On Friday I continue my analysis macro, which takes care of getting data from the previous steps, combining decayed particles into their mother, and filtering what is reasonable. The last part is the most interesting, but for that I need to figure out what "reasonable" means in my case.
2 – 6 March
I spend the first days trying to install our experiment's software framework (PandaRoot) on my MacBook. It should work, at least in the most recent version. Strangely though, it does not. I get some errors related to graphics output, specifically a libpng version conflict. This was a known bug in an older ROOT version, but the given workaround didn't work for me (and the bug shouldn't appear in the first place).
After two days of downloading, recompiling, changing parameters, and more recompiling, I give up. In the end I have an installation that basically works, but one particular program of the framework doesn't. It is not needed to produce data, but it is a very helpful tool to investigate what is going on.
The rest of the week I spend finishing my track filter and starting to look at what comes out of it. Or, to be more specific, what the filter is throwing away. After struggling a bit with how exactly to access the data properly, I get a skeleton of the script ready by the end of the week.
PASTA: The submission deadline was Monday, which we couldn't make, though more for technical reasons (the wafer was full already). Nonetheless, the design is ready, so finally I can fully concentrate my work on the analysis.
23 – 27 February
The week starts with a meeting with the professor. I don't have many new plots because I'm still working on the basic classes to create the data. So I explain what I'm doing and the progress with PASTA. For the first time since I started giving these reports, I have a good feeling that the ASIC is realistically close to submission.
After the meeting I continue with the task I started last week. Eventually I get it done during the week, though working on it is interrupted by a meeting on Wednesday. We have an online meeting for the branch of hadron physics my channel falls into. To inform people about what I'm working on right now, I combine a few plots and give a first presentation about my cascade simulation (password is the title of this blog).
PASTA: Just when I thought I was out… they pull me back in. On Thursday afternoon I get the info that the other designers found another check they can run on the design, concerning antenna violations. The test checks for long neighboring lines which act as antennas and might pick up signals from the other line. This leads to noise and in the worst case to a defective design. The interesting thing is that my design tool checks for these violations and corrects them automatically. Apparently it is not thorough enough, because the tool the other designers used found some violations. So on Friday we try to find a solution and decide in the end that I cannot do anything useful here. This has to be corrected by hand 🙁 Luckily, it seems that this can be done and shouldn't prohibit us from submitting the design.
17 – 20 February
Monday is Rose Monday and my workplace is heavily influenced by carnival. Therefore, we get the day off. I use the day (and the weekend before) to get some writing done.
Tuesday onwards I work on another extension for our simulation framework. I'm using an ideal track finding algorithm which basically finds tracks from charged particles by taking a hit in a detector and looking up the simulated truth information. Of course, this only works in simulation and won't in the real detector, but for now this is fine. I have to do it this way because the track finding algorithms capable of handling my channel are not ready yet.
Ok, ideal track finding. It works quite well, too well actually. Even if there is just one point, it will find the appropriate track, which in real life wouldn't be possible. You need at least three points to fit a circle and two for a straight line. To make this a bit more realistic, I want to filter out those tracks which left too few hit points in the detector to realistically reconstruct a track.
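The filter idea boils down to one cut. A minimal sketch, with a made-up track representation (PandaRoot's real track classes are C++ and look different):

```python
# A circle fit needs at least three points, so tracks with fewer hits
# are not realistically reconstructable and get dropped.
MIN_HITS = 3

def filter_tracks(tracks):
    """Keep only tracks that left enough hit points in the detector."""
    return [t for t in tracks if len(t["hits"]) >= MIN_HITS]

tracks = [{"id": 1, "hits": [(0, 0), (1, 1), (2, 3)]},
          {"id": 2, "hits": [(5, 5)]}]  # a single hit: ideal finder still "finds" it
assert [t["id"] for t in filter_tracks(tracks)] == [1]
```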
I want to do it properly, so it takes me some time to figure out where exactly I should insert this task and how it should be written. I make good progress, but can't finish because of …
We finally got an LVS clean design.
PASTA: After trying to solve the assign issue from last week and not finding a suitable solution in time, I decide to remove the related pins. They weren't doing anything anyway, so logically there is no change.
After solving this, we finally have an LVS (layout vs. schematic) clean design, meaning that the design on the metal level is the same as the logical input. Phew, that was quite a ride.
9 – 13 February
This week I finish the event filter I started working on last week. I do a quick test to see if it produces what I think it does and am quite satisfied with the result. The next thing to do would be to produce a lot of data on the computing cluster, but before doing that, I want to include something else.
So far, I had the hit counting done separately, producing a file which just gave me the number of hits associated with a particle and detector. To do some more elaborate analysis, I want to cross-check the hit count of each event and apply some cuts on it. The aforementioned approach doesn't allow that, since the assignment to events is quite difficult. My plan is to write a task, called during event simulation, which collects the hit count information and stores it along with the other information.
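The plan above can be sketched in a few lines. This is only an illustration of the per-event bookkeeping, loosely modeled on the task pattern (init once, execute per event) that PandaRoot uses; all names here are made up, not the framework's real API.

```python
from collections import Counter

class HitCountTask:
    """Toy task: called once per simulated event, it counts hits per
    (detector, particle) pair and keeps the counts event by event,
    so cuts can later be applied on a per-event basis."""

    def __init__(self):
        self.per_event = []  # one Counter per processed event

    def exec_event(self, hits):
        # hits: list of dicts with 'detector' and 'pid' keys (illustrative)
        counts = Counter((h["detector"], h["pid"]) for h in hits)
        self.per_event.append(counts)
        return counts
```

The point of doing this inside the simulation, rather than in a separate pass, is that the event association comes for free.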
I start with that, but it is a bit difficult to get into. On Thursday I have a task running which is initialized and stores data, but doesn't do anything useful yet. I'll continue with it next week. Friday and the weekend I stay at home to write.
PASTA: During the week, a quite severe issue appeared. Due to some assign statements in the schematics, the checks detect a short circuit and consequently fail. This is bad for submission, so we (me and the guys in Torino) try to figure out how to solve it. Along the way we notice some other issues, part of which we are able to solve, part of which need more time. Let's see what next week brings.
2 – 6 February
The week begins with a meeting with the professor: we discuss the latest results and next steps. I have quite a few plots to show since I found the bug with the wrong particle. The main thing we decide on is related to the event filtering method: keep it! So I start adapting the concept to the full simulation chain.
Here it is crucial to do the filtering early, because full event processing takes a lot of time. Luckily, a colleague already extended our simulation framework to enable such things, though of course not with the filter I have in mind. So I start reading into the filter mechanism and how to include my own.
PASTA: In the meantime, some connection problems occurred with the GCTRL. I try to fix them, compile a new version, see other problems occurring, fix them, compile again, … Argh. Someday, this will be finished.
26 – 30 January
I generate more hit count plots and in the end I have a display that I like. In one plot I show the number of hits for different particle species, and I do that for all four particle detectors.
After finishing this, I remember another thing I wanted to do: use another event generation model. That basically determines how the particles are produced in the initial reaction. By default, the generator assumes a homogeneous distribution, also called a phase space distribution. But in reality there is an asymmetry, because some quarks are not converted and thus carry their initial momentum.
Anyhow, after discussing with people more experienced in this field than me, we figure out that implementing this realistically would be too much. So we opt for another method: generate a homogeneous distribution and randomly exclude events, with a probability depending on a given value. This is relatively cheap when done in the event generator itself, because that step is quite fast. So we don't have the huge overhead of producing, let's say, twice the amount of data.
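This "generate flat, then randomly exclude" method is plain rejection sampling. A minimal sketch, where the weight function standing in for the real asymmetry is entirely made up:

```python
import random

def reshape(events, weight, seed=42):
    """Keep each event x of a phase-space sample with probability
    weight(x) in [0, 1], so the surviving sample follows the desired
    (asymmetric) distribution instead of the flat one."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    return [x for x in events if rng.random() < weight(x)]
```

The surviving fraction equals the average weight, so a weight that stays reasonably close to 1 keeps the overhead of over-generating events small.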
While discussing, we notice something: I used a wrong particle in my initial state, which led to strange results. Although I first have to define the particle, the fix is relatively easy.
On Thursday I take a day off to deal with some organizational stuff at home.
19 – 23 January
I finish the program counting the hits per tracking detector and send it to the computing cluster to simulate 100 000 events. On a single machine this would take a week or so; on the cluster, only half a day. Quite a nice speed-up.
While this is running, I make some detail corrections to the lifetime plots, mainly to turn them into something that can be properly used and is not just for a quick discussion.
On Wednesday my simulation is finished and I start analyzing the data. Over the next days I think of several ways to display the number of hits well.
PASTA: The idea we had last week (leave the top area completely empty) wasn't such a good one after all, because it violated some design rules. Therefore, I went back to the concept I had before.
12 – 16 January
The week starts with me preparing a talk for our PhD seminar on Tuesday morning, this time focusing on neutrino sources. Surprisingly, I'm quite efficient with it, considering that I put a ~50-minute seminar talk together in only 1.5 days.
On Wednesday we have the bi-weekly meeting with our professor. The main tasks from last time are already finished, and the next step of using the full simulation is still ongoing (mainly because I have to understand how the concept of the framework is set up). Towards the end of the week I figure out where I have to insert my part of the code, and also how to use the computing cluster to generate a significant amount of data.
By Friday I've prepared scripts to generate data and analyze the number of hits per sub detector; they just need some final touches. But this is work for next week.
PASTA stuff is also going on. The developers in Torino had the idea to leave the space on top of the GCTRL really empty (not just without active cells), so I figure out how to do that. Unfortunately, all of this is trial and error based on a compilation that takes 4-5 hours, so progress is really slow 🙁