The Project

Due to the increasing amount of data produced by HEP (High Energy Physics) experiments, the need for computing has grown by orders of magnitude over the last ten years. Data production has reached the scale of petabytes to exabytes per year, and the storage and analysis of these data at a single centre has become unfeasible. Only the integration of many computing centres worldwide through a grid architecture has made it possible to meet this demand.

The first efforts towards a HEPGRID (High Energy Physics GRID) were motivated by the new experiments at the LHC (Large Hadron Collider), the particle accelerator at CERN where the four main detectors operate: ATLAS, CMS (Compact Muon Solenoid), LHCb and ALICE.

One of the main goals of the UERJ-Tier2 group is to establish a Brazilian HEPGRID for physics and other sciences, supporting Brazilian activities in these areas while also contributing to groups around the world through the integration of the Brazilian grid with preexisting ones.

The computing sites that support the CMS experiment at CERN are classified by a given nomenclature according to the magnitude of their resources (total number of CPU cores, storage capacity and additional services supported). Each site receives an identification of Tier-n, where n ranges from 0 to 4.

The CMS offline computing system is arranged in four tiers. The system is geographically distributed, consistent with the nature of the CMS collaboration itself. By following such an approach, CMS not only gains access to the valuable resources and expertise which exist at collaborating institutes, but also benefits from improvements in robustness and data security, through redundancy amongst multiple centres. The resulting data flow is sketched after the following list.

  • A single Tier-0 centre at CERN accepts data from the CMS Online Data Acquisition System, archives the data and performs prompt first-pass reconstruction.

  • The Tier-0 distributes raw and processed data to a set of large Tier-1 centres in CMS collaborating countries. These centres provide services for data archiving, reconstruction, calibration, skimming and other data-intensive analysis tasks.

  • A more numerous set of Tier-2 centres, smaller but with substantial CPU resources, provide capacity for analysis, calibration activities and Monte Carlo simulation. Tier-2 centres rely upon Tier-1s for access to large datasets and secure storage of the new data they produce.
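
The sketch below is a minimal, hypothetical illustration in Python of the tiered data flow just described: the Tier-0 fans raw and processed data out to Tier-1 centres, which in turn feed Tier-2 centres such as UERJ-T2. All site names and role labels here are assumptions made for the example; real CMS data placement is handled by dedicated middleware, not by code like this.

    from dataclasses import dataclass, field

    @dataclass
    class Site:
        name: str
        tier: int                        # 0, 1 or 2
        roles: list[str]                 # services this tier level provides
        downstream: list["Site"] = field(default_factory=list)

        def distribute(self, dataset: str) -> None:
            """Fan a dataset out to the sites fed by this one."""
            for site in self.downstream:
                print(f"{self.name} (Tier-{self.tier}) -> "
                      f"{site.name} (Tier-{site.tier}): {dataset}")
                site.distribute(dataset)

    # Tier-0 at CERN: archiving and prompt first-pass reconstruction.
    tier0 = Site("CERN", 0, ["archiving", "prompt reconstruction"])

    # A Tier-1 centre: archiving, reconstruction, calibration, skimming.
    tier1 = Site("Tier-1-example", 1, ["archiving", "reconstruction", "skimming"])

    # A Tier-2 centre such as UERJ-T2: analysis and Monte Carlo production.
    tier2 = Site("UERJ-T2", 2, ["analysis", "Monte Carlo simulation"])

    tier0.downstream.append(tier1)
    tier1.downstream.append(tier2)

    tier0.distribute("RAW+RECO dataset")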

The Current Scenario

UERJ-T2 operates as a site member of OSG [1] (Open Science Grid). The OSG model of operation is that of a distributed facility which provides access to computing and storage resources. From the resource owner's perspective, resources must be registered with the OSG in order to be made available. Researchers are able to use the resources provided by OSG once they are registered with one or more "Virtual Organizations" (VOs) associated with OSG; a VO can encompass individuals from multiple institutions.
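
As a toy illustration of this registration model, the sketch below encodes the two relationships involved: resource owners register the VOs they agree to serve, and researchers register with VOs; a job is authorized only when the two sets intersect. The names and the authorization check are hypothetical simplifications; real OSG sites enforce this with X.509/VOMS credentials and dedicated middleware.

    # Hypothetical registration tables, for illustration only.
    registered_resources = {
        # resource -> VOs the resource owner has agreed to serve
        "UERJ-T2-CE": {"cms", "osg"},
        "SomeOtherSite-CE": {"atlas"},
    }

    vo_membership = {
        # researcher -> VOs they are registered with
        "alice@uerj.br": {"cms"},
        "bob@example.org": {"atlas", "osg"},
    }

    def can_run(user: str, resource: str) -> bool:
        """A job is authorized iff the user belongs to a VO served by the resource."""
        return bool(vo_membership.get(user, set())
                    & registered_resources.get(resource, set()))

    print(can_run("alice@uerj.br", "UERJ-T2-CE"))        # True: cms is served
    print(can_run("alice@uerj.br", "SomeOtherSite-CE"))  # False: no common VO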

The requirement that resources, member sites/institutions and individual users all be registered before work can start on the OSG grid reflects the stringent demands for security and accounting that differentiate grid computing from other distributed computing models.

Orthogonal to OSG, but still faithful to the grid architecture, one can also point to the LCG (LHC Computing Grid). Since 2005, UERJ-T2 has been affiliated with OSG. The site is capable of handling many tasks that require an agreed quality of service across the interconnecting networks; to that end, a well-planned infrastructure has been built at the site.

The logical and physical structure of UERJ T2 comprises:

  • 704 high performance CPU cores;

  • 1 PB of distributed storage;

  • External link via Redecomep / RNP of 1 Gbps.
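
As a back-of-the-envelope illustration (not a figure from the source), the sketch below computes the theoretical ceiling that the 1 Gbps external link places on daily data transfers, ignoring protocol overhead and link contention:

    # Theoretical transfer ceiling of the 1 Gbps external link,
    # ignoring protocol overhead and contention.
    link_gbps = 1.0                                  # link capacity in Gbps
    bytes_per_day = link_gbps * 1e9 / 8 * 86_400     # bits/s -> bytes/day

    print(f"Ceiling: {bytes_per_day / 1e12:.1f} TB/day")  # ~10.8 TB/day

At roughly 10.8 TB per day, transferring a 100 TB dataset would occupy the link for on the order of ten days, which puts the reliance of Tier-2 sites on Tier-1s for access to large datasets into perspective.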