Tuesday, May 08, 2012

Possibility of LHC Data Pile-Up

Too much of a good thing is not necessarily a good thing.

They wanted a lot of collisions, and now they may be getting more than they can handle. The detectors at the LHC are preparing for the possibility of data pile-up, now that the machine is delivering more collisions per bunch crossing than the detectors were designed to handle.

Every time two tightly packed bunches of protons cross, they generate not one collision, but on average 27, Lamont says. But within a few weeks, that number is expected to rise into the mid-30s, peaking at around 40 collisions per crossing. The two main detectors at the LHC were designed to handle only around two dozen collisions at once. But they have managed to cope so far.
While this is a good problem to have, it is still a problem: you don't want to throw away good data simply because you can't handle the rate. So far the detectors appear to be coping, but I can easily see this number continuing to climb. Considering its daunting size and what it is trying to do, the LHC continues to be an astounding machine, performing far better than expected.
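
For a rough sense of where those per-crossing numbers come from, here is a back-of-the-envelope sketch in Python: the mean pile-up is roughly the inelastic proton-proton cross-section times the instantaneous luminosity, divided by the bunch-crossing rate. The input values below are approximate, illustrative figures for 2012-era running, not official machine parameters.

# Rough estimate of mean pile-up (collisions per bunch crossing).
# All numbers are approximate, illustrative values, not official LHC parameters.
sigma_inel = 73e-27   # inelastic pp cross-section, ~73 mb, in cm^2
lumi       = 7e33     # instantaneous luminosity, cm^-2 s^-1
n_bunches  = 1380     # colliding bunch pairs
f_rev      = 11245    # LHC revolution frequency, Hz

collisions_per_sec = sigma_inel * lumi        # ~5e8 inelastic collisions per second
crossings_per_sec  = n_bunches * f_rev        # ~1.6e7 bunch crossings per second
mu = collisions_per_sec / crossings_per_sec   # mean pile-up per crossing

print(f"mean pile-up = {mu:.0f} collisions per crossing")   # prints ~33

With these rough inputs the estimate lands in the low-to-mid 30s, consistent with the numbers quoted above, and it scales linearly with luminosity.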

Zz.

1 comment:

Unknown said...

The term 'data pileup' is a tad odd. Are they referring to data piling up in the trigger stage, i.e., where the decision is made whether an event is worth further evaluation?
Or do they mean that the data being dumped into storage keeps piling up?
The latter doesn't seem to be much of a problem; heck, even the Tevatron guys still have decades of data for further analysis.