%0 Conference Paper
%F Poster
%T The Fermi-LAT Dataprocessing Pipeline
%+ Laboratoire Univers et Particules de Montpellier (LUPM)
%+ Centre de Physique des Particules de Marseille (CPPM)
%A Zimmer, S.
%A Johnsson, T.
%A Glanzmann, T.
%A Lavalley, C.
%A Arrabito, L.
%A Tsaregorodtsev, A.
%< with peer review
%B Computing in High Energy and Nuclear Physics (CHEP2012)
%C New York, United States
%8 2012-05-21
%D 2012
%Z Sciences of the Universe [physics]/Astrophysics [astro-ph]/Instrumentation and Methods for Astrophysics [astro-ph.IM]
%Z Physics [physics]/Astrophysics [astro-ph]/Instrumentation and Methods for Astrophysics [astro-ph.IM]
%Z Computer Science [cs]/Distributed, Parallel, and Cluster Computing [cs.DC]
%Z Poster communications
%X The Data Handling Pipeline ("Pipeline") has been developed for the Fermi Gamma-Ray Space Telescope (Fermi) Large Area Telescope (LAT), which launched in June 2008. Since then it has been used to fully automate the production of data quality monitoring quantities, the reconstruction and routine analysis of all data received from the satellite, and the delivery of science products to the collaboration and the Fermi Science Support Center. It is also heavily used for production Monte Carlo tasks. In daily operation it receives a new data download every 3 hours and launches about 2000 jobs to process each download, typically completing the processing before the next download arrives. The need for manual intervention has been reduced to less than 0.01% of submitted jobs. The Pipeline software is written almost entirely in Java and comprises several modules, including web services that allow online monitoring and provide AIDA charts summarizing workflow aspects and performance information. The server supports communication with several batch systems such as LSF and BQS and, more recently, Sun Grid Engine and Condor.
This is accomplished through dedicated JobControlDaemons, which for Fermi run at SLAC and at the other computing site involved in this large-scale framework, the IN2P3 computing center in Lyon. Although the task logic differs, we are evaluating a separate interface to the DIRAC system for communicating with EGI sites in order to use Grid resources, relying on dedicated Grid-optimized systems rather than developing our own. More recently, the Pipeline and its associated data catalog have been generalized for use by other experiments: they are currently used by the Enriched Xenon Observatory (EXO) and Cryogenic Dark Matter Search (CDMS) experiments, as well as for Monte Carlo simulations for the future Cherenkov Telescope Array (CTA).
%G English
%L in2p3-00703727
%U https://hal.in2p3.fr/in2p3-00703727
%~ IN2P3
%~ CPPM
%~ CNRS
%~ UNIV-AMU
%~ UNIV-MONTP2
%~ LUPM
%~ FRANCE-GRILLES
%~ MIPS
%~ UNIV-MONTPELLIER
%~ UM1-UM2