Data processing plays a crucial role at all levels of the experiment: computing systems capture the data produced directly in the detector. In addition, the ability to prepare simulated events and results for scientific publications is critical for the researchers.
Therefore the design of the ATLAS detector, with all its constituent subdetectors, is geared toward the digital usability of the data: all signals that particles leave behind, for example in tracking detectors or muon chambers, are electronically recorded and digitized. Digitization means that a measurement, such as the size of a signal in one cell of the calorimeter, is converted into as compact a binary number as possible. In this way, all measurements can be quickly transported out of the detector into nearby computer systems and processed further there.
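A minimal sketch of what digitization means in practice: an analog signal amplitude is mapped onto a compact fixed-width integer, as an analog-to-digital converter (ADC) would do. The 12-bit resolution and 2.0-volt full scale below are illustrative assumptions, not the actual ATLAS readout parameters.

```python
def digitize(amplitude_volts, full_scale_volts=2.0, n_bits=12):
    """Convert an analog amplitude into an n-bit integer ADC count.

    The resolution (12 bits) and voltage range (2.0 V) are assumed
    values for illustration only.
    """
    max_count = (1 << n_bits) - 1                       # 4095 for 12 bits
    clipped = min(max(amplitude_volts, 0.0), full_scale_volts)
    return round(clipped / full_scale_volts * max_count)

count = digitize(0.5)                     # 0.5 V on a 2.0 V scale
print(count, format(count, "012b"))       # → 1024 010000000000
```

The result is a 12-bit number that can be shipped out of the detector far more compactly than the original analog waveform.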
In the ATLAS detector, proton bunches collide with each other 40 million times per second. The resulting flood of data is unimaginably large – larger than the total of all telecommunications traffic on Earth. Only about one part in a hundred thousand is interesting for the investigation of matter, but even this selection produces huge data volumes of more than a gigabyte per second. For the discovery of the Higgs particle, the physicists of the ATLAS experiment analyzed 10 million gigabytes of data – a herculean effort that could only be accomplished with a unique IT infrastructure.
When the ATLAS experiment is taking data, several megabytes of data from selected events are collected and stored up to 1000 times per second. Thus a data stream of around one gigabyte per second must be processed. This volume of data corresponds to the simultaneous streaming of 50 high-definition videos.
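The rate quoted above follows from simple arithmetic. Taking one megabyte per stored event and 1000 stored events per second as round illustrative values (the text says "several megabytes" and "up to 1000 times per second"), the sustained stream and the resulting daily volume work out as:

```python
# Back-of-the-envelope check of the data rates quoted in the text.
# Both input values are rounded assumptions, not official ATLAS figures.
event_size_bytes = 1_000_000        # ~1 MB per selected event (assumed)
events_per_second = 1000            # up to 1000 stored events per second

rate_bytes_per_s = event_size_bytes * events_per_second
print(rate_bytes_per_s / 1e9, "GB/s")                        # → 1.0 GB/s

seconds_per_day = 86_400
print(rate_bytes_per_s * seconds_per_day / 1e12, "TB/day")   # → 86.4 TB/day
```

At around a gigabyte per second, a single day of data taking already fills tens of terabytes, which is why the data must be distributed to computing centers worldwide.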
The data of the ATLAS experiment are stored at CERN and distributed worldwide to 11 large computing centers (Tier 1). One of these large computing centers is located at the Karlsruhe Institute of Technology and, besides ATLAS, also serves the other big experiments at the Large Hadron Collider. Data processing and the production of simulated data take place in more than 100 smaller computing centers (Tier 2) in collaboration with the 11 large centers.
The ATLAS group at the MPP operates one such Tier 2 computing center at the Max Planck Computing and Data Facility (MPCDF). The group provides the funding and the staff who maintain the so-called grid middleware and storage systems and handle technical support.
The MPP's computing facility at the MPCDF, which is used mostly for the ATLAS group's Tier 2 computing center, currently has more than two petabytes of storage capacity (one petabyte corresponds to a million gigabytes) and more than 100 high-performance servers. A typical server has two CPUs with 12 cores each, 128 gigabytes of RAM, a fast hard drive, and a 10-gigabit-per-second network connection.
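The capacity figures above can be tallied up quickly. Using the stated lower bounds (the facility has "more than" 100 servers and two petabytes), a rough sketch of the total core count and storage in gigabytes:

```python
# Quick tally of the capacity figures quoted in the text.
# Lower bounds from the text are used as illustrative inputs.
servers = 100                 # "more than 100" servers
cores_per_server = 2 * 12     # two CPUs with 12 cores each
total_cores = servers * cores_per_server
print(total_cores, "cores")   # → 2400 cores

storage_pb = 2                        # "more than two petabytes"
gb_per_pb = 1_000_000                 # one petabyte = a million gigabytes
print(storage_pb * gb_per_pb, "GB")   # → 2000000 GB
```

Even as lower bounds, that is several thousand cores and millions of gigabytes of storage in a single Tier 2 site.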