
Sunday, 22 November 2015

Pleiades Supercomputer

Pleiades, one of the world's most powerful supercomputers, represents NASA's state-of-the-art technology for meeting the agency's supercomputing requirements, enabling its scientists and engineers to run modeling and simulation for NASA missions. This distributed-memory SGI ICE cluster is connected with InfiniBand® in a dual-plane hypercube topology.
[Image: Pleiades supercomputer at the NASA Advanced Supercomputing facility]
The system contains the following types of Intel Xeon processors: E5-2680v3 (Haswell), E5-2680v2 (Ivy Bridge), E5-2670 (Sandy Bridge), and X5670 (Westmere). Pleiades is named after the astronomical open star cluster of the same name.

System Architecture

  • Manufacturer: SGI
  • 163 racks (11,312 nodes)
  • 5.34 Pflop/s theoretical peak performance
  • 4.09 Pflop/s LINPACK rating (#11 on July 2015 TOP500 list)
  • 132 Tflop/s HPCG rating (#5 on July 2015 HPCG list)
  • Total CPU cores: 211,872
  • Total memory: 724 TB
  • 4 racks (128 nodes total) enhanced with NVIDIA graphics processing units (GPUs)
    • 217,088 CUDA cores
    • 0.317 Pflop/s total
  • 1 rack (32 nodes total) enhanced with Intel Xeon Phi co-processors (MICs)
    • 3,840 MIC cores
    • 0.064 Pflop/s total
Pleiades Node Detail

Haswell Nodes
  • Number of nodes: 2,088
  • Processors per node: 2 twelve-core Intel Xeon E5-2680v3 processors
  • Processor speed: 2.5 GHz
  • Cache: 30 MB per 12-core processor
  • Memory type: DDR4 FB-DIMMs
  • Memory size: 5.3 GB per core, 128 GB per node
  • Host channel adapter: InfiniBand FDR host channel adapter and switches

Ivy Bridge Nodes
  • Number of nodes: 5,400
  • Processors per node: 2 ten-core Intel Xeon E5-2680v2 processors
  • Processor speed: 2.8 GHz
  • Cache: 25 MB per 10-core processor
  • Memory type: DDR3 FB-DIMMs
  • Memory size: 3.2 GB per core, 64 GB per node (plus 3 bigmem nodes with 128 GB per node)
  • Host channel adapter: InfiniBand FDR host channel adapter and switches

Sandy Bridge Nodes
  • Number of nodes: 1,936
  • Processors per node: 2 eight-core Intel Xeon E5-2670 processors
  • Processor speed: 2.6 GHz
  • Cache: 20 MB per 8-core processor
  • Memory type: DDR3 FB-DIMMs
  • Memory size: 2 GB per core, 32 GB per node
  • Host channel adapter: InfiniBand FDR host channel adapter and switches

Westmere Nodes
  • Number of nodes: 1,856
  • Processors per node: 2 six-core Intel Xeon X5670 processors
  • Processor speed: 2.93 GHz (X5670) or 3.06 GHz (X5675)
  • Cache: 12 MB Intel Smart Cache per 6-core processor
  • Memory type: DDR3 FB-DIMMs
  • Memory size: 2 GB per core, 24 GB per node (plus 17 bigmem nodes with 48 GB and 4 with 96 GB per node)
  • Host channel adapter: InfiniBand QDR host channel adapter and switches
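
The headline totals in the System Architecture list can be reproduced from the per-node figures above: the four main node types account for 11,280 nodes, and adding the 32 MIC-enhanced nodes gives the quoted 11,312. The sketch below is a minimal C program that does the same arithmetic for cores and peak performance; the FLOPs-per-cycle values are assumed here (the usual double-precision figures of 16 for Haswell with AVX2 FMA, 8 for Sandy Bridge and Ivy Bridge with AVX, and 4 for Westmere with SSE), not something stated in the tables.

    #include <stdio.h>

    /* Per-node-type figures from the tables above; flops_per_cycle is the usual
       double-precision value for each microarchitecture (an assumption, not
       part of the original tables). */
    struct node_type {
        const char *name;
        int         nodes;
        int         cores_per_node;
        double      ghz;
        int         flops_per_cycle;
    };

    int main(void) {
        struct node_type types[] = {
            { "Haswell (E5-2680v3)",    2088, 24, 2.50, 16 },
            { "Ivy Bridge (E5-2680v2)", 5400, 20, 2.80,  8 },
            { "Sandy Bridge (E5-2670)", 1936, 16, 2.60,  8 },
            { "Westmere (X5670)",       1856, 12, 2.93,  4 },
            /* Sandy Bridge hosts of the 32 MIC-enhanced nodes, counted on top
               of the four main node types */
            { "MIC hosts (E5-2670)",      32, 16, 2.60,  8 },
        };
        long total_cores = 0;
        double total_tflops = 0.0;

        for (size_t i = 0; i < sizeof types / sizeof types[0]; i++) {
            long cores = (long)types[i].nodes * types[i].cores_per_node;
            double tflops = cores * types[i].ghz * types[i].flops_per_cycle / 1000.0;
            printf("%-24s %7ld cores  %7.1f Tflop/s\n", types[i].name, cores, tflops);
            total_cores += cores;
            total_tflops += tflops;
        }
        printf("Total: %ld CPU cores, %.2f Pflop/s peak\n",
               total_cores, total_tflops / 1000.0);
        return 0;
    }

Built with any C compiler, this prints 211,872 CPU cores and about 5.34 Pflop/s, matching the totals above; the GPU (0.317 Pflop/s) and MIC (0.064 Pflop/s) coprocessors are accounted for separately in that list.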

GPU- and MIC-Enhanced Nodes

Sandy Bridge + GPU Nodes
  • Number of nodes: 64
  • Processors per node: two 8-core host processors and one GPU coprocessor (2,880 CUDA cores)
  • Node types: Intel Xeon E5-2670 (host); NVIDIA Tesla K40 (GPU)
  • Processor speed: 2.6 GHz (host); 0.745 GHz (GPU)
  • Cache: 20 MB for 8 cores (host)
  • Memory type: DDR3 FB-DIMMs (host); GDDR5 (GPU)
  • Memory size: 4 GB per core, 64 GB per node (host); 12 GB per node (GPU)
  • Host channel adapter: InfiniBand FDR host channel adapter and switches (host)

Westmere + GPU Nodes
  • Number of nodes: 64
  • Processors per node: two 6-core host processors and one GPU coprocessor (512 CUDA cores)
  • Node types: Intel Xeon X5670 (host); NVIDIA Tesla M2090 (GPU)
  • Processor speed: 2.93 GHz (host); 1.3 GHz (GPU)
  • Cache: 12 MB Intel Smart Cache for 6 cores (host)
  • Memory type: DDR3 FB-DIMMs (host); GDDR5 (GPU)
  • Memory size: 4 GB per core, 48 GB per node (host); 6 GB per node (GPU)
  • Host channel adapter: InfiniBand QDR host channel adapter and switches (host)

Sandy Bridge + MIC Nodes
  • Number of nodes: 32
  • Processors per node: two 8-core host processors and two 60-core coprocessors
  • Node types: Intel Xeon E5-2670 (host); Intel Xeon Phi 5110P (MIC)
  • Processor speed: 2.6 GHz (host); 1.05 GHz (MIC)
  • Cache: 20 MB for 8 cores (host)
  • Memory type: DDR3 FB-DIMMs (host); GDDR5 (MIC)
  • Memory size: 2 GB per core, 32 GB per node (host); 8 GB shared among the 60 cores in each coprocessor (MIC)
  • Host channel adapter: InfiniBand FDR host channel adapter and switches (host)

Subsystems

10 Front-End Nodes
  • Processors: 2 eight-core Intel Xeon E5-2670 (Sandy Bridge) processors per node
  • Processor speed: 2.6 GHz
  • Memory: 64 GB per node
  • Connection: 10 Gigabit and 1 Gigabit Ethernet

Bridge Nodes 1 & 2
  • Processors: 2 quad-core Intel Xeon E5472 (Harpertown) processors per node
  • Processor speed: 3 GHz
  • Memory: 64 GB per node
  • Connection: 10 Gigabit Ethernet

Bridge Nodes 3 & 4
  • Processors: 8 quad-core Intel Xeon X7560 (Nehalem-EX) processors per node
  • Processor speed: 2.27 GHz
  • Memory: 256 GB per node
  • Connection: 10 Gigabit Ethernet

PBS server pbspl1
  • Processors: 2 six-core Intel Xeon X5670 (Westmere) processors per node
  • Processor speed: 2.93 GHz
  • Memory: 72 GB per node

PBS server pbspl3
  • Processors: 2 quad-core Intel Xeon X5355 (Clovertown) processors per node
  • Processor speed: 2.66 GHz
  • Memory: 16 GB per node

Interconnects

  • Internode: InfiniBand®, with all nodes connected in a partial hypercube topology (see the sketch after this list)
  • Two independent InfiniBand® fabrics
  • InfiniBand® DDR, QDR and FDR
  • Gigabit Ethernet management network
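
The hypercube wiring mentioned above follows a simple addressing rule: in a d-dimensional hypercube every vertex carries a d-bit label and is cabled directly to the d vertices whose labels differ from its own in exactly one bit, so any two vertices are at most d hops apart. The sketch below illustrates that rule in C for a generic hypercube; it is not a map of Pleiades' actual partial, dual-plane layout, where the vertices are InfiniBand switches and the whole fabric is duplicated to give the two independent fabrics noted above.

    #include <stdio.h>

    /* Print the direct neighbors of a vertex in a d-dimensional hypercube:
       neighbor k is reached by flipping bit k of the vertex label. */
    static void hypercube_neighbors(unsigned vertex, unsigned dims) {
        printf("vertex %u links to:", vertex);
        for (unsigned bit = 0; bit < dims; bit++)
            printf(" %u", vertex ^ (1u << bit));
        printf("\n");
    }

    int main(void) {
        /* Example: an 11-dimensional hypercube has 2^11 = 2048 vertices,
           each with exactly 11 links. */
        hypercube_neighbors(0, 11);
        hypercube_neighbors(1337, 11);
        return 0;
    }

In a partial hypercube some vertex positions are simply left unpopulated, which, roughly speaking, is what allows racks to be added incrementally without rewiring the whole fabric.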

Storage

  • SGI® InfiniteStorage NEXIS 9000 home filesystem
  • 15 PB of RAID disk storage configured over several cluster-wide Lustre filesystems

Operating Environment

  • Operating system: SUSE® Linux®
  • Job scheduler: Altair PBS Professional®
  • Compilers: Intel and GNU C, C++ and Fortran
  • MPI: SGI MPT, MVAPICH2, Intel MPI
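
As a concrete example of how the pieces above fit together, here is a minimal MPI program in C; on a system like this it would typically be compiled with one of the listed Intel or GNU compilers against one of the listed MPI libraries and launched through a PBS Professional batch job. The exact module names, compiler wrappers, and queue parameters are site-specific and are not given here.

    #include <stdio.h>
    #include <mpi.h>

    /* Minimal MPI "hello" that reports each rank and the node it runs on. */
    int main(int argc, char **argv) {
        int rank, size, len;
        char host[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Get_processor_name(host, &len);

        printf("rank %d of %d on %s\n", rank, size, host);

        MPI_Finalize();
        return 0;
    }

Under PBS Professional the compiled binary is submitted as a batch job that requests nodes and cores and then starts the ranks with the chosen MPI library's launcher (mpiexec_mpt for SGI MPT, mpirun or mpiexec for MVAPICH2 and Intel MPI); treat this as a sketch and follow the facility's own documentation for the actual submission syntax.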
