sgi_carlsbad_nda

Contents

Slide 2

This presentation contains forward-looking statements regarding the SGI Altix® XE server family
and roadmap, other SGI® technologies, and third-party technologies that are subject to risks and uncertainties. These risks and uncertainties could cause actual results to differ materially from those described in such statements. The viewer is cautioned not to rely unduly on these forward-looking statements, which are not a guarantee of future or current performance. Such risks and uncertainties include long-term program commitments, the performance of third parties, the sustained performance of current and future products, financing risks, the impact of competitive markets, the ability to integrate and support a complex technology solution involving multiple providers and users, the acceptance of applicable technologies by markets and customers, and other risks. These forward-looking statements are subject to risks and uncertainties as set forth in the company's Forms 8K dated September 8, 2006, and most recent SEC reports on Form 10-Q and Form 10-K. Silicon Graphics is under no obligation to publicly update or revise any forward-looking statements, whether to reflect new information, future events or otherwise.
©2006 Silicon Graphics, Inc. All rights reserved. Silicon Graphics, SGI, SGI Altix, the SGI logo and the SGI cube are registered trademarks. SGI ProPack, Performance Co-Pilot, and Innovation for Results are trademarks of Silicon Graphics, Inc., in the United States and/or other countries worldwide. Linux is a registered trademark of Linus Torvalds in several countries. Linux penguin logo created by Larry Ewing. Itanium and VTune are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. Red Hat and all Red Hat-based trademarks are trademarks or registered trademarks of Red Hat, Inc., in the United States and other countries. Windows is a registered trademark or trademark of Microsoft Corporation in the United States and/or other countries. All other trademarks mentioned herein are the property of their respective owners. (11/06). Intel and the Intel logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. Images courtesy of Stony Brook University, NASA Ames, gsiCom, Accelrys, Landmark, and Leonard Wikberg III.
Product plans, descriptions, and dates are estimates only and subject to change without notice. SGI may choose not to announce or make generally available any products or programs discussed in this presentation. Therefore, you should not make changes in your business operations on the basis of the information presented here.
Carlsbad, Dixon, Ultraviolet, Santa Fe, Oro Valley, Chama and Taos are internal project code names.

Slide 3

SGI Today

Industry Leading Innovation
More than 1600 employees
800+ Customer-facing employees
300+ Engineers to continue innovation
More than $500M in annual revenues
6000+ Customers in over 50 countries around the world
Core technology leadership in
Advancement of Linux® OS into HPC market
Scalable system architecture
Global shared memory
File systems and shared storage
Consulting and services

Slide 4

SGI Unique Capabilities

20+ Years of expertise in solving the most demanding compute and data-intensive problems
Unified server, cluster and storage architecture
Wide use of Open Standards, including Linux® OS
Largest and fastest storage systems
Global memory addressing to over 100TB
Filesystems over 100TB and 12GB/s disk-to-SAN
Renowned for deep vertical expertise of employees
More than 200 employees with security clearances
World class customer service organization

Slide 5

Project Carlsbad

Next-generation integrated blade platform, with breakthrough performance density and reliability.

DENSITY

POWER

RELIABILITY


Slide 6

Project Carlsbad: Technology for a New Era in Computing

Next-generation blade platform for breakthrough scalability and price/performance.
Modularity to add & update resources independently for perfectly right-sized systems: memory, storage, processors
Packaged for best overall price/performance – 512 Intel Xeon Processor cores per rack, easily scales to thousands of processors (see the quick check after this list).
Integrated blade platform reduces complexity, simplifies management, and lowers total cost of ownership.
50% less space (based on TFLOPS/rack versus ‘rack’ or ‘box’ competitors)
Fewer blade components reduce potential points of failure.
Leading energy efficiency: average $100K in annual savings for 10 TFLOPS of compute power.
Enhanced serviceability: blade-based platform that is monitored and managed at the blade, chassis, and rack levels.
Fully redundant system components, hot-swappable blades.
SGI Platform Manager (name TBD) provides multi-level management across complete Carlsbad system.
SGI “out of the box” deployment, backed by SGI world-class support and service, for immediate productivity.
10 TFLOPS of compute power “up and running” user apps in a day
Standards-based – Intel Xeon Processor technology, certified Linux, Microsoft Windows CCS
Fully integrated, includes SGI Platform Solution (name TBD), a complete software solution stack.
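A quick density check of the 512-cores-per-rack figure above, as a minimal Python sketch; the per-rack and per-blade counts are taken from the hardware slides later in this deck, and quad-core parts are assumed:

# Density cross-check for the "512 cores per rack" claim, assuming the
# rack configuration described on the hardware slides that follow.
irus_per_rack = 4
blades_per_iru = 16
sockets_per_blade = 2
cores_per_socket = 4            # quad-core Clovertown; 2 for dual-core Woodcrest

print(irus_per_rack * blades_per_iru * sockets_per_blade * cores_per_socket)   # -> 512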

Slide 7

System Hardware Overview


Slide 8

Intel® 5000X Chipset (Greencreek)
(2) Intel® Xeon® DP SKU Processors
Dual-core Woodcrest
Quad-core Clovertown
(8) Fully buffered memory DIMM slots per blade
1GB, 2GB, 4GB DIMMs
32GB Memory Support
(2) x4 DDR IB ports on embedded HCAs
No on-board storage
Power: 487W at 12VDC (high-bin processor SKU & (8) 2GB FB-DIMMs (2GB/core))

Project Carlsbad Compute Blade

Slide 9

10U 16-Node Individual Rack Unit (IRU)

(16) 2-Socket Nodes (Supports (8) 4-Socket Nodes)


(4) 4x DDR IB Switch Blades Shown
(2) 24-Port IB switch ASICs per blade
(6) 4X IB + (1) 4X IB external cable connections per blade

10U 24-inch EIA Form Factor (17.50-in H x 22.5-in W x 32-in D)

(1) Chassis Management Controller

(7+1) 1625W 12VDC Output Front-End Power Supplies

Front View

Top View

Slide 10

IRU Backplane Topology

Sphere = IRU 24-Port Switch. Blue Links = 4x DDR IB (Atoka to IRU Switch Cards); Gray Links = 4x DDR IB (IRU to external admin nodes); Black Links = 4x DDR IB (H-Dimension Torus); Red Links = 4x DDR IB (W-Dimension Torus); Green Links = 4x DDR IB (D-Dimension Torus)

(1) 24-Port 4x IB Switch per Blade

(6) 4x IB Cables External (Connects to Torus)

(2) 4x IB Cables External (Connects admin nodes )

Plane-2

Plane-1

Project Carlsbad 4x DDR IB Backplane Topology

Slide 11

(16) Carlsbad Blades

Blade Interface: (2) 4x DDR IB per 2-socket node (2 x 4GB/s = 8GB/s total)

Backplane Interface (16) 4x DDR IB per switch blade (16 x 4 GB/s) = 64 GB/s Total

Project Carlsbad 4x DDR IB Backplane Topology

Cabled Interface: (8) 4x DDR IB per switch blade; (6 x 4 GB/s) = 24 GB/s to torus, (2 x 4 GB/s) = 8 GB/s available
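The three interface figures above follow from multiplying link counts by the 4 GB/s of a 4x DDR IB link; a small sketch of that arithmetic using only values quoted on this slide:

# Per-IRU bandwidth arithmetic for one switch blade / one plane.
link_gb_s = 4                                        # GB/s per 4x DDR IB link
print("per 2-socket node:", 2 * link_gb_s)           # -> 8 GB/s
print("backplane per switch blade:", 16 * link_gb_s) # -> 64 GB/s
print("cabled, to torus:", 6 * link_gb_s)            # -> 24 GB/s
print("cabled, available:", 2 * link_gb_s)           # -> 8 GB/s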

Slide 12

Project Carlsbad Blade

Block diagram: GBX; (2) 10/100 Ethernet; (2) GigE SerDes; (2) unused; management connections run through the backplane to the chassis manager

Project Carlsbad Node

Slide 13

Chassis Manager

Front panel: GBX, 9-pin serial console, stack-up and stack-down ports
Backplane interface: (16) node GigE SerDes inputs, (16) node 10/100 Enet inputs

Slide 14

Rack Chassis Manager Cabling Topology

Diagram: leader node local connection, 1588 VLAN connection, CM/leader VLAN connection, IRU ring connection; chassis managers are daisy-chained through the rack

Slide 15

(7+1) 1625W 12VDC Output Front-End Power Supplies

IB Backplane

(4) IB Switch Blades

10U 16-Node Individual Rack Unit (IRU)

Slide 16

10U (17.50-in H x 12-in D) 24-inch EIA Form Factor

IRU Rear Blower Assembly


Rear View

(7+1) 175mm Blowers (Reused Altix 4700)

Slide 17

Project Carlsbad IRU Assembly Exploded View

Carlsbad Blades

Switch Blades

Blade enclosure

Blower enclosure

175 mm Blowers


Slide 18

Single Project Carlsbad Rack

Each 42U rack (30-in W x 40-in D) has:
(4) IRUs with (16) 2-Socket Carlsbad Nodes each
(128) DP Xeon sockets
(48) 4x DDR IB ports on (4) backplanes for torus
2U Space at Top of Rack Contains 1U SGI Altix XE210 Leader Node (1 per Rack)
SGI offers optional chilled water-cooled units for use in large system configurations
39.5kW (high-bin SKUs + (4) FB-DIMMs/socket)
31.6kW (assuming 80% system-level derate; see the quick check below)
Rack weight ~2050 lb (246 lb/ft² footprint)
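A quick check of the rack power figures above, assuming the 487 W compute-blade figure from the earlier blade slide:

# Rack power check: blade load vs. the quoted nameplate and derated figures.
blades_per_rack = 4 * 16                       # 4 IRUs x 16 blades
print(round(blades_per_rack * 487 / 1000, 1))  # -> 31.2 kW of blade load alone
print(round(39.5 * 0.80, 1))                   # -> 31.6 kW (80% system-level derate)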

Slide 19

19” Standard Rack Also Supported…


Slide 20

SGI® Altix® XE240 (Default)
Used to provision and manage the cluster using cluster management SW
Network connections: GigE to leader nodes, communications to/from CMC & compute nodes administratively restricted.
Quantity: one per cluster

Project Carlsbad Administrative Support Node

Slide 21

SGI® Altix® XE240 Administrative Node

2U Server Board
Dual Intel® Xeon® Processors (Woodcrest or Clovertown)
Intel 5000P chipset (Blackford)
8 fully buffered DIMMs
Quad Channel DDR-2
Memory Sparing, Mirroring
Optional expansion modules (SAS or Dual GigE)
Dual Gigabit Ethernet ports
Integrated graphics (ATI ES1000 w/ 16MB)
5 hot-swap drive slots (SAS/SATA) with HW RAID 0, 1, 5, 10
Up to 3 PCI-X & 4 PCIe
Optional redundant power

Slide 22

Leader Support Node
Provisioned & functioned by the administrative support node
Runs fabric management software
Monitors, manages & pulls data from IRUs and compute nodes within the rack
Consolidates and forwards upon request data from IRUs & compute nodes to the administrative node
Provides shared read-only kernel/initrd (~40MB) & root fs (~1.6GB) images for rack's compute nodes
Provides non-shared read-write system storage (~64MB /var, /etc) & minimal swap space (256MB) for rack's compute nodes
Can be combined with fabric management support node
Quantity: 1 per rack
Network connections: GigE to other leader nodes & to first IRU within the rack, IB to whole cluster

Slide 23

Additional Nodes

Login service node
Users log in here to create/compile programs, etc.
Quantity: 1 or more per cluster, commonly combined with batch and gateway service nodes
Batch service node
Runs batch scheduler (PBS/LSF). Users log in or connect here to submit jobs to the compute nodes.
Quantity: 1 or more per cluster, commonly combined with login and gateway service nodes
Gateway service node
Acts as a gateway from IB to various kinds of services such as storage (direct attached, fiber channel, etc.)
Quantity: 1 or more per cluster, commonly combined with login and batch service nodes
Storage service node
A NAS appliance bundle that provides shared, IB attached, filesystems for the cluster
Quantity: 1 or more per cluster
A storage appliance that provides node private, IB-connected, scratch storage for the cluster
Quantity: 1 or more per cluster
Fabric management support node
Provisioned & functioned by the admin node
Runs fabric management software, monitors & manages the IB fabric
Forwards upon request fabric status to the admin node
Quantity: 1 or more per system, commonly combined with one or more leader nodes in the cluster

Slide 24

42U Project Carlsbad 24-inch EIA Rack

30-inch W x 40-inch D Footprint, 24-inch EIA Configurable Space

Rear View

(2) 60A 200-240VAC 3-Phase IEC 60309 Plugs

(4) Hinged Water-Cooled Coils

Rack Chilled-Water Supply: 45°F to 60°F (7.2°C to 15.6°C); 14.4 gpm (3.3 m³/hr) max; 15 psi (103.4 kPa) max

(2) 18-Receptacle Power-Strips

Slide 25

42U Project Carlsbad 24-inch EIA Rack (Empty)


Slide 26

Concerns about Facility (Space, Weight, Power)


Slide 27

Facility (Power) : Energy Efficiency of Altix 4K & Carlsbad

AC → DC conversion counted at every stage. Diagram: AC input feeding 48V and 12V distribution stages, then point-of-load conversion to the 1.85V DIMM rail, the 1.2V SHub2 rail, and the Intel socket power pod; per-stage efficiencies shown range from roughly 85.7% to 92%.

Slide 28

SGI Energy Efficiency

SGI® Altix® 4700 server delivers a world-class power solution
High efficiency, high reliability, high density, remotely manageable
Standards-based
Over 90% efficiency on 12VDC front-end power supply
Up to 87% efficiency on compute blades
Up to 76% efficiency at rack level (see the chained-efficiency sketch after this list)
Project Carlsbad design leverages the SGI Altix 4700 power architecture
3rd-generation water-cooled solution
For systems above 15.0kW per rack, SGI strongly recommends the customer use a water-cooled solution
SGI remains committed to evolving high-efficiency power architectures for current and future products
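The rack figure follows roughly from chaining the stage efficiencies listed above; treating it as the product of the front-end and blade numbers is an illustration, not an SGI specification, since cabling and distribution losses also enter:

# Chained-efficiency sketch (illustrative only).
psu_efficiency = 0.90      # "over 90%" 12VDC front-end power supply
blade_efficiency = 0.87    # "up to 87%" on compute blades
rack_estimate = psu_efficiency * blade_efficiency
print(round(rack_estimate, 3))   # -> 0.783; consistent with "up to 76%" at rack level
                                 #    once distribution losses are included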

Slide 29

Topology Overview


Slide 30

Example Project Carlsbad Configuration Single rack topology (64 blades, 128 soc.)


Slide 31

Example Project Carlsbad Configuration 128 blade – Dual-plane Torus Topology (2 Racks)


Slide 32

Example Project Carlsbad Configuration 4-rack topology (256 blades)


Slide 33

8192-Socket Project Carlsbad 4x DDR IB 4H x 8D x 8W Torus

(1-rack Group x 8 x 8 = 64 Rack)

Only Torus Connections Shown (Node Fan-In / Fan-Out are Additional Hops)

1-Rack Group (Contains H-Dimension)

Red Links (Interleaved down the ranks)

Green Links (Interleaved across the aisles)

(4) IRUs per Rack

4H x 8D x 8W = 256 switches/plane, 512 switches total. Cables: (256) 1.0m H, (256) 2.0m H, (128) 5.0m D, (384) 8.0m D, (64) 2.0m W, (256) 3.0m W, (192) 4.0m W; 1,536 cables total

Bisection: 256 links H-dim, 128 links W-dim, 128 links D-dim

(128 links) x (4 GB/s) = 512 GB/s bisection / 8192 sockets = 0.0625 GB/s/socket
12-hop longest path = (2) x (160 ns node fan-in/out + 4.95 ns for 30-in PCB) + (1+2+3+4+4+4+5+8+8+8 m network cables) x (4.3 ns/m) + (10-hop torus network) x (160 ns + 3.3 ns for 20-in PCB) = 2,165.0 ns 1-way longest-path latency

(256 IRUs)
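The bisection and longest-path numbers above can be reproduced directly; a short Python check using only the values quoted on this slide:

# Reproduce the bisection and one-way longest-path arithmetic quoted above.
print(128 * 4 / 8192)                          # -> 0.0625 GB/s per socket (62.5 MB/s)

node_fan_ns = 2 * (160 + 4.95)                 # node fan-in/out + 30-in PCB, both ends
cable_ns = sum([1, 2, 3, 4, 4, 4, 5, 8, 8, 8]) * 4.3   # cable lengths (m) at 4.3 ns/m
torus_ns = 10 * (160 + 3.3)                    # 10 torus hops, 20-in PCB each
print(round(node_fan_ns + cable_ns + torus_ns, 1))     # -> 2165.0 ns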

Slide 34

Project Carlsbad Topology Summary (4X DDR IB)

8192-Socket 4x DDR IB (4H x 8D x 8W Torus)
62.5 MB/s/socket bisection
5,105 ns MPI latency (2,165 ns 1-way longest-path latency)

Slide 35

Software Overview


Slide 36

Complete, Factory Integrated Solution Stack

Linux® Operating System

Complete cluster solution stack
Cost-effective, standards-based
Optimized for ease of use
Factory Integrated and Tested

Slide 37

SGI and Linux® Open Standards Industry Leadership

SGI Linux leadership:
Unmatched in the industry, major contributor to the Linux standard
Expertise to resolve kernel-level issues quickly, efficiently
100% Linux - scalable, robust, standards-based
Industry standard SUSE® Linux® Enterprise Server 10
Red Hat® Enterprise Linux® 5 (avail. Q4 CY07)
SGI® ProPack™ Toolkit combines essential tools for workflow optimization
SGI® InfiniteStorage delivers complete data lifecycle management solution.
Superior reliability, availability, serviceability:
Comprehensive RAS roadmap, ease of service with blades

Slide 38

SGI® ProPack™ Benefits for Project Carlsbad

Dramatically enhanced performance:
FFIO: Accelerated I/O bandwidth
CPUSETS, NUMATOOLS: fine tuning for processors and memory (placement illustration after this slide)
Simplified system administration:
Performance Co-Pilot™, ESP, Cluster Manager
Storage administration tools

SGI kernel-level Linux® expertise unmatched in the industry – to resolve customer issues in-house, fast, effectively.
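CPUSETS and NUMATOOLS come down to controlling which processors (and which memory) a job may use; a generic illustration of the underlying placement idea using plain Linux affinity calls, not the SGI tools themselves:

# Generic processor-placement illustration (plain Linux affinity syscalls,
# not CPUSETS/NUMATOOLS themselves): restrict this process to chosen cores.
import os

print("allowed cores before:", sorted(os.sched_getaffinity(0)))
os.sched_setaffinity(0, {0, 1})     # pin the current process to cores 0 and 1
print("allowed cores after: ", sorted(os.sched_getaffinity(0)))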

Slide 39

Boot Services provided by rack’s leader node
(1) Leader node services all (64) diskless AtokaP nodes in a given rack
Linux® OS images received over administrative GigE network
Enables scalability (leader nodes are single point of control in each rack)
No BIOS modifications necessary
File System Services
Root images mounted via InfiniBand (using NFS from rack’s leader node)
Root images can be shared by all blades in a rack
Use InfiniBand native storage (otherwise NFS)

Booting and Configuring OS

Slide 40

Use a standard Linux® OS distribution
Use a standard kernel and remove all unnecessary RPMs
Preserve 3rd party application certification
OS and boot support will be based on industry standards to assure compliance with standard data center operations
Synchronization of OS overhead (OS jitter, OS noise)
SGI value added hardware and software will reduce OS overhead effects
Enables greater performance on parallel workloads
Detailed slide in Back-Up
SGI® ProPack™ for Linux® OS
Combines essential tools for workflow optimization

Booting and Configuring OS

Slide 41

Carlsbad SW: OS Noise (overhead) Synchronization Significant Speedups for Parallel Workloads

Diagram: per-process timelines over time, showing when each process is interrupted by OS overhead and when the barrier completes
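The speedup comes from how unsynchronized OS noise interacts with barriers: every barrier waits for the slowest rank, so independent random interruptions almost always delay someone. A toy simulation sketch (not SGI code; the noise magnitudes are invented for illustration):

# Toy model: each of N ranks does 1 ms of work plus, with some probability,
# a 200 us OS interruption; the barrier completes when the slowest rank does.
# Synchronizing the noise removes the straggler effect. Numbers are illustrative.
import random

def total_time_ms(ranks=512, iterations=1000, p=0.05, noise_us=200.0,
                  synchronized=False, seed=0):
    rng = random.Random(seed)
    total_us = 0.0
    for _ in range(iterations):
        if synchronized:
            # all ranks take the interruption (or not) in the same slot
            total_us += 1000.0 + (noise_us if rng.random() < p else 0.0)
        else:
            # independent noise: the barrier waits for the unluckiest rank
            total_us += 1000.0 + max(
                noise_us if rng.random() < p else 0.0 for _ in range(ranks))
    return total_us / 1000.0

print("unsynchronized:", total_time_ms(synchronized=False), "ms")
print("synchronized:  ", total_time_ms(synchronized=True), "ms")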

Slide 42

Node-level
Baseboard Management Controller (BMC) and onboard NICs
Utilize industry-standard IPMI 2.0-compliant protocols (generic usage example below)
Chassis management controller (CMC) in IRU
SGI developed CMC
Hierarchical design for scalability enabling larger systems
Provides dedicated GigE network for all management functions, remote console access, and cluster management
Provides dedicated GigE network for synchronization of OS overhead
System management and monitoring
Performed via a common cluster management software tool

System Management
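A generic example of what IPMI 2.0 access to a node's BMC looks like, using the stock ipmitool CLI; the BMC address and credentials are placeholders, and this is standard IPMI usage rather than the SGI management stack itself:

# Generic IPMI 2.0 queries via ipmitool (placeholder BMC address/credentials).
import subprocess

BMC = ["-I", "lanplus", "-H", "bmc-node001.example", "-U", "admin", "-P", "secret"]

def ipmi(*args):
    return subprocess.run(["ipmitool", *BMC, *args],
                          capture_output=True, text=True, check=True).stdout

print(ipmi("chassis", "power", "status"))   # node power state
print(ipmi("sdr", "list"))                  # sensor readings (temperatures, fans, voltages)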

Slide 43

SGI Developed Solution
Based on Open Source Cluster Application Resources (OSCAR) from OpenClusterGroup.org
Provides centralized SW and system provisioning, monitoring, and cluster-specific management
Hierarchical design for scalability enabling larger systems
Cluster management features supported include:
Software installation (admin, leader, compute, and non-storage service nodes)
Software configuration and customization (admin, leader, and compute nodes)
Establish, expand and contract the Project Carlsbad cluster
Power control
Booting/shutdown
Console management
Monitoring, logging, alarms
Project Carlsbad Interconnect Verification Tool (diagnostic tool)
Scalable cluster-wide commands (C3) (see the fan-out sketch below)

Cluster Management
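The hierarchical admin-to-leader-to-compute design is what keeps cluster-wide commands scalable: the admin node only contacts one leader per rack, and each leader fans the command out to its own rack. A rough sketch of that pattern (host names are hypothetical; the production path is the OSCAR/C3-based tooling described above):

# Sketch of hierarchical command fan-out: admin node -> rack leaders -> compute
# nodes. Host names are hypothetical; C3's 'cexec' runs a command on a rack's nodes.
from concurrent.futures import ThreadPoolExecutor
import subprocess

def on_leader(leader, command):
    # the admin node talks only to rack leaders; each leader fans out in-rack
    return subprocess.run(["ssh", leader, command],
                          capture_output=True, text=True).stdout

leaders = [f"r{i}lead" for i in range(1, 5)]   # one leader per rack (hypothetical names)
rack_cmd = "cexec uptime"

with ThreadPoolExecutor(max_workers=len(leaders)) as pool:
    for leader, out in zip(leaders, pool.map(lambda l: on_leader(l, rack_cmd), leaders)):
        print(leader, out.splitlines()[:2])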

Slide 44

SGI developed solution
Based on OpenFabrics Enterprise Distribution (OFED) from the OpenFabrics Alliance (OpenFabrics.org)
Subnet management (SM) based on OpenSM
Runs on a leader node
Features supported:
Automatic fabric configuration
Administrative fabric re-configuration (zoning/partitioning)
Management of virtual lanes (MPI traffic, Storage traffic)
Monitoring, diagnostic testing, SM software updating (read-only fabric checks sketched below)
Redundant SM with fail-over

InfiniBand Fabric Configuration & Management
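Fabric state can be inspected with the standard OFED diagnostics that ship alongside OpenSM; a read-only sketch (generic OFED usage, not the SGI management layer):

# Read-only InfiniBand fabric checks using standard OFED diagnostics.
import subprocess

def run(cmd):
    return subprocess.run(cmd, capture_output=True, text=True).stdout

print(run(["ibstat"]))                 # local HCA port state and link rate
print(run(["sminfo"]))                 # which subnet manager is master, priority, state
print(run(["ibnetdiscover"])[:2000])   # walk the fabric topology (switches, HCAs, links)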

Slide 45

Storage Integration


Slide 46

Storage – Typical needs

Key types of IO (each with different IO usage patterns)
Shared Systems Data “installed application” (mostly read-only, low reliability, low performance)
Persistent User Data “home directories” (read-write, high reliability, low performance)
High Performance “Scratch” storage, non-shared (read-write, low reliability, high performance)
High Performance specialized application IO, “shared common data” (read-write, high reliability, high performance)

Slide 47

Storage – provisioning methods

Needs:
Shared Systems Data
Persistent User Data (home directories, job input, job output)
High Performance “Scratch” storage, non-shared
High Performance Application-Specific IO, shared (common data)

Provisioning methods (table cells): NFS-RDMA, CXFS, Lustre*, NFS, XFS over iSER, Panasas*, GPFS*, and local disks; served from core file servers or a file server sized to need, over InfiniBand or GigE, with an IB<->Enet router connecting the Carlsbad cluster to facility services. (* PS offering)

Slide 48

Storage – Carlsbad options

Shared Systems Data
Place on Leader Nodes – one to serve each rack
Persistent User Data
Use IB to Enet router, for Carlsbad access to facility data
Can use multiple routers for bandwidth
Scratch Storage
Configure a fileserver to need; can be 0.
Disk/node eliminated, saving power, weight, and cost.
High Performance Shared/Common Data
Configure specialized fileserving as needed.

Slide 50

Roadmap table (2H’06 and 1H’07 columns) by platform segment:
RISC/Mainframe Replacement, Enterprise DP/MP: Intel® E8870 Chipset (400 FSB) / enabled chipsets; Dual-Core Intel® Itanium® 2 Processor 9000 sequence (24MB L3); Intel® Itanium® 2 Processor 9M / 1.66 GHz / 667 FSB
RISC Replacement, HPC DP/MP: Intel® E8870 Chipset (400 FSB) / enabled chipsets; Dual-Core Intel® Itanium® 2 Processor 9000 sequence; Intel® Itanium® 2 Processor (DP only) 3M / 1.60 GHz, 400/533 FSB
Enterprise MP: Intel® E8501 Chipset / enabled chipsets; Dual-Core Intel® Xeon® Processor 7000 Series (667/800 FSB); Dual-Core Intel® Xeon® Processor 7100 Series (667/800 FSB, 16MB L3)
Intel® Server & Workstation Platform Roadmap

Slide 51

Roadmap table (2H’06 and 1H’07 columns) by platform segment:
Performance & Volume DP: Intel® 5000P Chipset; Quad-Core Intel® Xeon® Processor 5300 Series; Dual-Core Intel® Xeon® Processor 5100 Series; Dual-Core Intel® Xeon® Processor 5000 Series
Value DP: Intel® 5000V Chipset; Dual-Core Xeon® Processor 5100 Sequence; Dual-Core Xeon® Processor 5000 Sequence
Entry UP: Intel® 3100 Series Chipsets; Quad-Core Intel® Xeon® Processor 3200 Series; Dual-Core Intel® Xeon® Processor 3000 Series; Intel® Pentium® D Processor 900 Sequence
Intel® Server & Workstation Platform Roadmap

Slide 52

Solve PFLOPS/PByte Problems
Maximize MPI Job Throughput
Mainframe-class RAS Capabilities
Ease of Use Co-Processors
Maximize Compute Density
Minimize Power/Heating

Customer Value Roadmap

CY 2006

2007

Nov’06

FUTURE

Ease of Program
Development
Ease of Administration
Fastest Time to Solution
Solve the Biggest Problems
Buy a System to Solve Your Problem
“Out of Box Experience”
Interoperability

Maximize Compute Density
Minimize Power/Heating
Use Compute/OS Standards
Maximize Compute/$
“Out of Box Experience”
Interoperability

Cheapest Initial Investment
Use Compute/OS Standards
Interoperability

Single-System Image
Cluster

Rack mount Blade

Slide 53

SGI Technologies Roadmap

CY 2006

2007

Big Nodes: to 512S nodes, TB-PB GAM
Cluster: 2-4S nodes, 10s of GB GAM

Nov’06

Ultraviolet
UVH, NL5
Intel Itanium2®
2nd Generation RASC™
Intel Xeon®
SGI Enhanced RAS
ProPack™
Linux

Industry Leading Interconnect (NumaLink™ 4)
Intel Itanium2®
RASC™ Technology
Multi-paradigm Computing
Memory-only Blades
IRU Technology
Maximum Power Efficiency
ProPack™
“Out of Box Experience”
Linux

Intel Xeon®
IRU Technology
IB4x 3D Torus
Maximum Power Efficiency
ProPack™
“Out of Box Experience”
Linux

FUTURE

Rack mount Blade

Standard Motherboards
Intel Xeon®
GigE or IB4x
ProPack™
SGI Cluster Solution Stack
Linux or Windows

Motherboards Maximized
For HPC

Slide 54

Server Roadmap

CY 2006

2007

Altix 4700
SHub2, NL4
Montvale

Big Nodes: to 512S nodes, TB-PB GAM
Cluster: 2-4S nodes, 10s of GB GAM

Nov’06

Ultraviolet
UVH, NL5
Tukwila / Beckton

Altix 4700
SHub2, NL4
Montecito
Santa Fe/Dixon
Clovertown

Tigerton → Dunnington → Beckton

Altix XE 210/240
Woodcrest

FUTURE

Oro Valley
DPR* Optimized
4S Tigerton

Rack mount Blade

Ongoing advances in: Linux function, RAS, Density, Power/cooling, Easy Deployment
Gallup – 2x2 Clovertown
Taos – 4S Tigerton

*DPR = Density, Power, Reliability

Slide 56

Project Carlsbad and Altix® XE1300 Trounce the Competition!


Slide 57

Product Comparison


Slide 58

Project Carlsbad Customer Value


Slide 60

Project Carlsbad Water-Cooled Coils

Target heat rejection: 95% water / 5% air
(4) Individual Coils

Chilled-Water Supply: 45°F to 60°F (7.2°C to 15.6°C); 15 psi (103.4 kPa) max; 14.4 gpm (3.3 m³/hr) max (heat-rejection bound sketched below)

Swivel Coupling to Supply Hose

Branch Feed to Individual Coil

Condensate Drain Pan
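As a sanity check, the quoted coil flow bounds how much heat one rack can reject to water; the 10°C supply-to-return rise below is an assumed illustrative value, not a figure from this slide:

# Heat-rejection bound for the quoted coil flow (assumed 10 C water rise).
flow_l_per_s = 3.3 * 1000 / 3600       # 3.3 m3/hr, about 14.4 gpm
cp_j_per_kg_k = 4186.0                 # water, ~1 kg per litre
delta_t_c = 10.0                       # assumed supply-to-return temperature rise
print(round(flow_l_per_s * cp_j_per_kg_k * delta_t_c / 1000, 1))   # -> ~38 kW, about one loaded rack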

Slide 61

Project Carlsbad Water-Cooled Coils


Slide 62


Environmental Operating Windows

ASHRAE Class 1 Allowable Operating Window*: 59°F to 90°F (15°C to 32°C), 20% RH to 80% RH (62.5°F (17°C) dew point max)

Present SGI Operating Window: 41°F to 95°F (5°C to 35°C), 10% RH to 90% RH (non-condensing)

SGI Recommended Operating Window for Water-Cooled Coil: 68°F to 77°F (20°C to 25°C), 40% RH to 50% RH (non-condensing); matches ASHRAE Class 1 Recommended*

* American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc. (ASHRAE), 2004, “Thermal Guidelines for Data Processing Environments”, Atlanta, GA

Slide 63

Representative breakdown*
59% Computer Loads (33% to 73%)
25% HVAC Pumps & Chiller
10% HVAC Air-Movement
5% UPS Losses
1% Lighting
1 kW datacom load ~ 1.7 kW load at facility mains transformer* (1.4 kW to 3.0 kW range; worked example below)

Data Center Energy Use

* Tschudi, W., et al, 2003 “Data Centers and Energy Use - Let’s Look at the Data”, American Council for an Energy-Efficient Economy (ACEEE) Paper No. 162
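Worked version of the overhead figures above (the IT load in the example is chosen arbitrarily):

# 1 kW of datacom load shows up as ~1.7 kW at the facility mains (range 1.4-3.0).
it_load_kw = 10.0                           # example IT load
print(it_load_kw * 1.7)                     # -> 17.0 kW typical at the mains transformer
print(it_load_kw * 1.4, it_load_kw * 3.0)   # -> 14.0 to 30.0 kW reported range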

Slide 64

Intel® Xeon® 5100 Series Platform

Die photos are not to scale; 1 Based on SPECint*_rate_base2000 vs. Intel Xeon Single-Core; 2 Vs. DDR2-400 Memory; 3 Vs. Standard Gigabit Ethernet

Diagram callouts:
Dual Independent High-Speed Buses: up to 1333 MHz
Leading Memory Technology: FB-DIMM, up to 3X faster & 4x capacity (2)
High Performance I/O: Intel® I/O Acceleration Technology, greater than 2X throughput (3)
Intel® Smart Cache Technology: 4MB shared L2
Dual-Core Processors: up to 3X performance (1)
MCH and I/O blocks; Platform Innovation

Slide 65

Intel® Xeon® 5300 Series Platform

Die photos are not to scale; 1 Based on SPECint*_rate_base2000 vs. Intel Xeon Single-Core; 2 Vs. DDR2-400 Memory; 3 Vs. Standard Gigabit Ethernet

Slide 66

Project Carlsbad Blade

Block diagram: Greencreek MCH with dual front-side buses at 1066/1333 MT/s (8.5/10.6 GB/s per FSB); (4) FBD 533/667 memory channels (533 MHz: 17 GB/s read BW; 667 MHz: 21 GB/s read BW); SIO3 attached via DMI x4, with serial interface, flash, and GbE; PCIe x8 (4 GB/s) links to GbE and to the embedded HCAs providing (2) x4 DDR IB ports (4 GB/s each); PCIe x8 connector

Slide 67

Chassis Manager Front Panel

9 Pin Serial
Console port

Stack up

Stack Dn

5 GEnet Ports

Leader Local
Leader Left
Leader Right
1588 Left
1588 Right

GEnet ports are equivalent; labels indicate typical POR connectivity

Slide 68

SGI® ProPack™


Slide 69

SGI® ProPack 5 SP1 Features* and Benefits

Table: Feature and Benefit columns, with availability marks for IPF and x86 under SLES and for IPF and x86 under RHEL5