Pleiades Supercomputer
Pleiades, one of the world's most powerful supercomputers, represents NASA's state-of-the-art technology for meeting the agency's supercomputing requirements, enabling NASA scientists and engineers to conduct modeling and simulation for agency missions. This distributed-memory SGI ICE cluster is connected with InfiniBand® in a dual-plane hypercube topology.
The system contains the following types of Intel Xeon processors: E5-2680v3 (Haswell), E5-2680v2 (Ivy Bridge), E5-2670 (Sandy Bridge), and X5670 (Westmere). Pleiades is named after the astronomical open star cluster of the same name.
System Architecture
- Manufacturer: SGI
- 163 racks (11,312 nodes)
- 5.34 Pflop/s peak cluster (see the worked check after the node table below)
- 4.09 Pflop/s LINPACK rating (#11 on the July 2015 TOP500 list)
- 132 Tflop/s HPCG rating (#5 on the July 2015 HPCG list)
- Total CPU cores: 211,872
- Total memory: 724 TB
- 4 racks (128 nodes total) enhanced with NVIDIA graphics processing units (GPUs)
  - 217,088 CUDA cores
  - 0.317 Pflop/s total
- 1 rack (32 nodes total) enhanced with Intel Xeon Phi coprocessors (MICs)
  - 3,840 MIC cores
  - 0.064 Pflop/s total
Pleiades Node Detail
| | Haswell Nodes | Ivy Bridge Nodes | Sandy Bridge Nodes | Westmere Nodes |
| --- | --- | --- | --- | --- |
| Number of Nodes | 2,088 | 5,400 | 1,936 | 1,856 |
| Processors per Node | 2 twelve-core processors | 2 ten-core processors | 2 eight-core processors | 2 six-core processors |
| Node Types | Intel Xeon E5-2680v3 processors | Intel Xeon E5-2680v2 processors | Intel Xeon E5-2670 processors | Intel Xeon X5670 processors |
| Processor Speed | 2.5 GHz | 2.8 GHz | 2.6 GHz | 2.93 GHz (X5670) or 3.06 GHz (X5675) |
| Cache | 30 MB for 12 cores | 25 MB for 10 cores | 20 MB for 8 cores | 12 MB Intel Smart Cache for 6 cores |
| Memory Type | DDR4 FB-DIMMs | DDR3 FB-DIMMs | DDR3 FB-DIMMs | DDR3 FB-DIMMs |
| Memory Size | 5.3 GB per core, 128 GB per node | 3.2 GB per core, 64 GB per node (plus 3 bigmem nodes with 128 GB per node) | 2 GB per core, 32 GB per node | 2 GB per core, 24 GB per node (plus 17 bigmem nodes with 48 GB and 4 with 96 GB per node) |
| Host Channel Adapter | InfiniBand FDR host channel adapter and switches | InfiniBand FDR host channel adapter and switches | InfiniBand FDR host channel adapter and switches | InfiniBand QDR host channel adapter and switches |
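The node counts and clock speeds in this table are enough to reproduce the 5.34 Pflop/s peak quoted above. The sketch below is a back-of-envelope check, assuming the standard double-precision flops per cycle for each microarchitecture (16 for Haswell with AVX2 FMA, 8 for Ivy Bridge and Sandy Bridge with AVX, 4 for Westmere with SSE) and pricing all Westmere nodes at the 2.93 GHz X5670 clock:

```c
#include <stdio.h>

/* Back-of-envelope check of the quoted 5.34 Pflop/s peak.
 * Peak = nodes * cores/node * clock (GHz) * DP flops/cycle.
 * Flops/cycle assumed per microarchitecture: 16 (Haswell, AVX2+FMA),
 * 8 (Ivy Bridge and Sandy Bridge, AVX), 4 (Westmere, SSE). */
int main(void) {
    struct { const char *name; double nodes, cores, ghz, flops_per_cycle; } t[] = {
        { "Haswell",      2088, 24, 2.50, 16 },
        { "Ivy Bridge",   5400, 20, 2.80,  8 },
        { "Sandy Bridge", 1936, 16, 2.60,  8 },
        { "Westmere",     1856, 12, 2.93,  4 },  /* X5670 clock assumed */
    };
    double total = 0.0;
    for (int i = 0; i < 4; i++) {
        /* nodes * cores * GHz * flops/cycle is in Gflop/s; /1e6 -> Pflop/s */
        double pflops = t[i].nodes * t[i].cores * t[i].ghz * t[i].flops_per_cycle / 1e6;
        printf("%-12s %6.3f Pflop/s\n", t[i].name, pflops);
        total += pflops;
    }
    printf("Total        %6.3f Pflop/s (quoted peak: 5.34)\n", total);
    return 0;
}
```

Summed, this gives roughly 5.33 Pflop/s; the small shortfall against the official figure is consistent with some Westmere nodes running the faster 3.06 GHz X5675 parts.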
GPU- and MIC-Enhanced Nodes
| | Sandy Bridge + GPU Nodes | Westmere + GPU Nodes | Sandy Bridge + MIC Nodes |
| --- | --- | --- | --- |
| Number of Nodes | 64 | 64 | 32 |
| Processors per Node | Two 8-core host processors and one GPU coprocessor (2,880 CUDA cores) | Two 6-core host processors and one GPU coprocessor (512 CUDA cores) | Two 8-core host processors and two 60-core coprocessors |
| Node Types | Intel Xeon E5-2670 (host); NVIDIA Tesla K40 (GPU) | Intel Xeon X5670 (host); NVIDIA Tesla M2090 (GPU) | Intel Xeon E5-2670 (host); Intel Xeon Phi 5110P (MIC) |
| Processor Speed | 2.6 GHz (host); 3.0 GHz (GPU) | 2.93 GHz (host); 1.3 GHz (GPU) | 2.6 GHz (host); 1.05 GHz (MIC) |
| Cache | 20 MB for 8 cores (host) | 12 MB Intel Smart Cache for 6 cores (host) | 20 MB for 8 cores (host) |
| Memory Type | DDR3 FB-DIMMs (host); GDDR5 (GPU) | DDR3 FB-DIMMs (host); GDDR5 (GPU) | DDR3 FB-DIMMs (host); GDDR5 (MIC) |
| Memory Size | 4 GB per core, 64 GB per node (host); 12 GB per node (GPU) | 4 GB per core, 48 GB per node (host); 6 GB per node (GPU) | 2 GB per core, 32 GB per node (host); 8 GB shared among the 60 cores in each coprocessor (MIC) |
| Host Channel Adapter | InfiniBand FDR host channel adapter and switches (host) | InfiniBand QDR host channel adapter and switches (host) | InfiniBand FDR host channel adapter and switches (host) |
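The accelerator totals quoted under System Architecture follow directly from this table; a quick cross-check:

```c
#include <stdio.h>

/* Cross-check the accelerator totals quoted under System Architecture:
 * 217,088 CUDA cores and 3,840 MIC cores. */
int main(void) {
    int cuda = 64 * 2880    /* 64 nodes, one Tesla K40 (2,880 CUDA cores) each */
             + 64 * 512;    /* 64 nodes, one Tesla M2090 (512 CUDA cores) each */
    int mic  = 32 * 2 * 60; /* 32 nodes, two 60-core Xeon Phi 5110P each       */
    printf("CUDA cores: %d\nMIC cores:  %d\n", cuda, mic);  /* 217088, 3840 */
    return 0;
}
```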
Subsystems
| | 10 Front-End Nodes | Bridge Nodes 1 & 2 | Bridge Nodes 3 & 4 | PBS Server pbspl1 | PBS Server pbspl3 |
| --- | --- | --- | --- | --- | --- |
| Number of Processors | 2 eight-core processors per node | 2 quad-core processors per node | 8 quad-core processors per node | 2 six-core processors per node | 2 quad-core processors per node |
| Processor Types | Xeon E5-2670 (Sandy Bridge) processors | Xeon E5472 (Harpertown) processors | Xeon X7560 (Nehalem-EX) processors | Xeon X5670 (Westmere) processors | Xeon X5355 (Clovertown) processors |
| Processor Speed | 2.6 GHz | 3 GHz | 2.27 GHz | 2.93 GHz | 2.66 GHz |
| Memory | 64 GB per node | 64 GB per node | 256 GB per node | 72 GB per node | 16 GB per node |
| Connection | 10 Gigabit and 1 Gigabit Ethernet connections | 10 Gigabit Ethernet connection | 10 Gigabit Ethernet connection | N/A | N/A |
Interconnects
- Internode: InfiniBand®, with all nodes connected in a partial hypercube topology (illustrated below)
  - Two independent InfiniBand® fabrics
  - InfiniBand® DDR, QDR, and FDR
- Gigabit Ethernet management network
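For readers unfamiliar with hypercube interconnects: in a d-dimensional hypercube, each node links to the d nodes whose binary addresses differ from its own in exactly one bit, so any two of the 2^d nodes are at most d hops apart. The toy sketch below illustrates only this addressing rule; it is not NAS routing code, and Pleiades uses a partial (incomplete) hypercube of this kind.

```c
#include <stdio.h>

/* Toy illustration of hypercube addressing: node i's neighbors are the
 * nodes whose IDs differ from i in exactly one bit. */
int main(void) {
    int d = 4;    /* example dimension: a 16-node hypercube */
    int node = 5; /* binary 0101 */
    printf("Neighbors of node %d in a %d-D hypercube:", node, d);
    for (int bit = 0; bit < d; bit++)
        printf(" %d", node ^ (1 << bit));  /* flip one address bit per link */
    printf("\n");
    return 0;
}
```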
Storage
- SGI® InfiniteStorage NEXIS 9000 home filesystem
- 15 PB of RAID disk storage configured over several cluster-wide Lustre filesystems
Operating Environment
- Operating system: SUSE® Linux®
- Job scheduler: Altair PBS Professional®
- Compilers: Intel and GNU C, C++ and Fortran
- MPI: SGI MPT, MVAPICH2, Intel MPI
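To make the software stack concrete, here is a minimal MPI program in C of the kind this environment is built to run. The build and launch commands in the comment are generic MPI conventions, not Pleiades-specific instructions; the actual module names and PBS submission details on the system are omitted here.

```c
/* Minimal MPI "hello" in C, as a sketch of the toolchain listed above.
 * On a system like Pleiades it would be built against one of the listed
 * MPI stacks (SGI MPT, MVAPICH2, or Intel MPI) and submitted through PBS
 * Professional. A generic build/run might look like:
 *   mpicc hello.c -o hello && mpiexec -n 4 ./hello
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's ID          */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total processes in the job */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
```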