This is a follow-up to the discussion on the slack development channel.
I've been simulating electromagnetic (1TeV initial electron) showers with the examples/corsika.cpp script.
The only option I have changed is emcut, which I set to 20 MeV.
If I look at the longitudinal profile (here for the charged particles), they look like this:
This means that I should expect zero particles on my observation plane.
But if I look at the output from the ObservationPlane, I have 8931 entries in the parquet file!
The energy distribution of these particles looks like this:
Note that a lot of these particles have an energy very close to the cut energy, but not all of them.
For a 10 TeV shower, we would expect some charged particles at the observation level according to the longitudinal profile:
But the output of the observation plane shows 91676 particles!
This might, but doesn't have to be, related to issue #572.
I just checked: 7202 out of the 8931 charged particles have energies below the 20 MeV cut (0.0194890010539 GeV in kinetic energy).
They don't have all identical energies, but there are some accumulations.
For example, there are 5924 particles which all have the (numerically) identical energy of 0.019486999139190 GeV. Note that they seem to have very different x and y coordinates on the observation plane:
Good point. All these 5924 particles don't have the exact cut value, but they are the particles with the energies closest to the cut value! Interesting.
I just re-checked: I don't see this unreasonable amount of photons! Only for charged particles!
I have now tried turning off the magnetic field and multiple scattering (which would be an obvious candidate for causing issues since these processes only affect charged particles), but this didn't solve the issue...
That's very interesting...
Does this change when one uses a different order in the process sequence? I thought the results would be independent of the order.
And so far I don't understand why SO many particles reach the observation plane if they are not correctly cut. I could understand this for particles close to the plane, but not for particles that are just anywhere in the shower.
We mucked up the calculation. Here is the code from getMaxStepLength in proposal::ContinuousProcess. We are mixing kinetic and total energies which leads to the particles being below the cut by their mass times 0.99999 (or something).
```cpp
// Limit the step size of a conitnuous loss. The maximal continuous loss seems to be
// a hyper parameter which must be adjusted.
//
auto const energy = vP.getEnergy();
auto const energy_lim = std::max(
    energy * 0.9,                                  // either 10% relative loss max., or
    get_kinetic_energy_propagation_threshold(code) // energy thresholds globally defined for individual particles
        * 0.9999                                   // need to go slightly below global e-cut to assure removal
                                                   // in ParticleCut. This does not matter since at cut-time
                                                   // the entire energy is removed.
);
```
Fixing that, the electrons end up at 0.019998 GeV as expected.
I don't think so. I think the problem is that the cut and the observation plane are both continuous processes, but the actual deleting of particles happens at the higher level (in Cascade). So if the particle absorption happens during the final step to the observation plane (it may be a big step!), it also gets written to file, because writing to file happens during that last step as well. The deleting happens AFTER the step.
Nope, that can't be it either. What I described above is only responsible for the peak at the cut value... If we fix it, we'd still get thousands of particles arriving at the observation level, at least according to the energy spectrum shown above.
The result you see above. By the way, this is a 100 GeV shower (for performance reasons). There are no photons arriving at the ground and also no electrons! BUT somehow many (all?) electrons (massive particles?) end up in the output.
I was expecting this to be the case; otherwise I wouldn't have an explanation for why there are more particles on the observation plane than in the highest bin of the longitudinal profile.
Do we know when this started to be wrong? I would guess with the implementation of the new Step object?
This means ObservationLevel needs to figure out whether it is the limiting process, i.e. whether the step ends at the ObservationLevel. Only in this case does it write out the particles.
Which process limits the step is determined in Cascade by calling getMaxStepLength.
The id of the limiting process is then passed to all continuous processes in doContinuous.
Inside ProcessSequence, the limiting id is compared with the current continuous process' id: `doContinuous(step, limitId == ContinuousProcessIndex(IndexProcess2))`.
In the case of multiple nested process sequences, e.g. SwitchProcess in emContinuous, these ids seem to get confused or be ill-defined.
In any case, this makes ObservationLevel think it is the limiting process when the step is actually limited by Proposal::ContinuousProcess.
I imagine the ids look something like this: (p1, p2, (p3, p4), p4).
A workaround is changing the order of LongitudinalProfile and ObservationLevel in the process sequence, or rather not making ObservationLevel follow the continuous energy loss (Proposal and Bethe).
With a local fix, showers at 1 TeV with emcuts of 20, 10 and 5 MeV run through without any problem. There are tons of warnings from the proposal interface about negative step lengths, though. I think we have to remove the warning. Not sure if setting the step length to zero itself causes any problems.
Testing 500 keV now... there are lots of warnings from proposal about a negative MeanFreePath:
```
[proposal.interaction] [warning] Negative MeanFreePath detected at energy 1.2408481412992356 MeV. Returning INF instead.
```