Transcript of Module 1: NetApp and the Storage Industry – Upgreat
© 2009 NetApp. All rights reserved.

NetApp – Fundamentals

Bartłomiej Matysiak

FAE @ Arrow ECS

Portfolio

A broad range of NetApp arrays

FAS series (maximum capacity / drive count / Flash Pool):

FAS2520 – 336 TB / 84 drives / 4 TB Flash Pool
FAS2550 – 576 TB / 144 drives / 4 TB Flash Pool
FAS8020 – 1920 TB / 480 drives / 6 TB Flash Pool
FAS8040 – 2880 TB / 720 drives / 12 TB Flash Pool
FAS8060 – 4800 TB / 1,200 drives / 18 TB Flash Pool
FAS8080 – 5760 TB / 1,440 drives / 36 TB Flash Pool

E-series: E2700 (192 drives), E5500 (384 drives), E5600 (384 drives), EF540/EF560 (120 drives)

A simple upgrade path

FAS2500 series models

FAS22XX Series

Software included in the array price (Data ONTAP Essentials):

FC, iSCSI, NFS, CIFS
Management software
Deduplication
Thin provisioning
Snapshot copies
Compression

Additional software options:

SnapRestore®
SnapMirror®
SnapVault®
MultiStore®
FlexClone®
SnapManager® Suite
SnapLock® C/E
Complete Bundle

FAS2520 replaces FAS2220

5x more hybrid flash (4 TB)
2x cluster size (4 nodes)
3x more memory (18 GB)
40% more storage (84 drives)
4 x 10GBase-T and 2 x 1GbE ports
1 x 1GbE management port
2 x 6Gb SAS ports

FAS2552 replaces FAS2240-2

5x more hybrid flash (4 TB)
2x cluster size (8 nodes)
3x more memory (18 GB)
4 x UTA2 and 2 x 1GbE ports
1 x 1GbE management port
2 x 6Gb SAS ports

FAS2554 replaces FAS2240-4

5x more hybrid flash (4 TB)
2x cluster size (8 nodes)
3x more memory (18 GB)
4 x UTA2 and 2 x 1GbE ports
1 x 1GbE management port
2 x 6Gb SAS ports

Setting a New Standard for Value with Entry Hybrid Arrays

Enhanced system architecture increases useful system life, minimizing acquisition costs

3x more memory, new I/O profile (10GBase-T on the 2520, UTA2 on the 255x)

Extended hybrid capabilities increase capacity and performance, delivering improved ROI

5x more hybrid flash; already industry-leading performance (4x faster than competitors)

Increased scale-out support simplifies growth and eliminates the risk of downtime via nondisruptive operations (NDO)

2x more nodes; sets groundwork for clustered Data ONTAP® moving forward

NetApp Confidential – Limited Use Only

Introducing the FAS2500: Next-Generation Entry Platform

Entry Platform Positioning

FAS2520 (SATA/SSD) replaces FAS2220 (SATA/SSD)
FAS2552 (SAS/SSD) replaces FAS2240-2 (SAS/SSD)
FAS2554 (SATA/SAS/SSD) replaces FAS2240-4 (SATA/SAS/SSD)

36 GB memory vs. 12 GB: 3x increase
4 GB NVMEM vs. 2 GB: 2x increase
4 TB vs. 800 GB Flash Pool™ limit: 5x increase


FAS2552 and FAS2554 Controller I/O


4 x Unified Target Adapter

(UTA2) ports

– 10Gb Ethernet

– 16Gb/s FC or 8Gb/s FC

2 x GbE ports

2 x SAS ports

1 x GbE management port

1 x private management port

1 x USB port (disabled)

1 x console port


4 x 10GBaseT

2 x GbE ports

2 x SAS ports

1 x GbE management port

1 x private management port

1 x USB port (disabled)

1 x console port

FAS2520 Controller I/O


Introducing FAS8000: Three New Models

FAS8020 replaces FAS3220

FAS8040 replaces FAS3250

FAS8060 replaces FAS6220

No IOXM option

Single chassis configurations

– Standalone with controller / blank

– HA with two controllers

Dual chassis configurations

– Dual-chassis HA requires

MetroCluster™

Ships on Data ONTAP® 8.2.1 RC2

1.5U

FAS8020 Controller Close-Up: Taking a Closer Look

2 x SAS ports

2 x 10GbE ports

2 x Unified Target Adapter (UTA2)

ports

– 10Gb Ethernet

– 16Gb/s FC

2 x GbE ports


1 x Management port

– e0M runs at GbE, SP at 10/100

1 x private management port

1 x USB port (disabled)

1 x console port

2 x PCIe Gen 3 adapter slots

NVRAM and attention LEDs

FAS8020 Controller I/O: Taking a Closer Look

3U

FAS8040/FAS8060 Controller Close-Up: Taking a Closer Look

FAS8040/FAS8060 Controller I/O

4 x SAS ports

4 x 10GbE ports

4 x UTA2 ports

– 16Gb FC or 10Gb Ethernet

4 x GbE ports

1 x Management port

– e0M runs at GbE, SP at 10/100

1 x private management port

1 x USB port (disabled)

1 x console port

4 x PCIe Gen 3 adapter slots

New Unified Target Adapter 2 (UTA2): NetApp First to Market

Extreme performance

Industry-leading flexibility

Increased network efficiency

Field configurable for:

− 16Gb FC (adapts to 4Gb, 8Gb)

− 10GbE

Supported protocols:

− FC, iSCSI, FCoE, and NAS

Supported platforms:

− FAS/V3220, FAS/V3250

− FAS/V6200

− FAS8000

Data ONTAP 8.2.1 RC2 and later

Disk shelves

DS4246

Mixed SSD–HDD: SSD + NL-SAS HDD

DS4486

Ideal for archives and backups

Capacity: 48 x 3.5" 3 TB SATA drives

Supported platforms:

– FAS/V3240 and FAS/V3270

– FAS/V6000 series and SA600

– FAS/V6200 series and SA620

Versatility

Unified Storage, i.e. file and block access in a single device

Versatility????

TRUE Versatility

Data ONTAP – Supported Protocols

NFS (v2, v3, v4) [mostly UNIX]
CIFS [Windows/Samba/Mac OS X]
FCP
iSCSI [SCSI over TCP/IP]
HTTP, HTTPS, FTP
NDMP
SNMP
SMTP
Telnet, RSH, SSH, RPC

Use cases

• Virtualization (MS Hyper-V, VMware, Xen, KVM)

• ERP and other business applications (Oracle, MS SQL, Exchange, SAP)

Virtualization

Application integration: SnapManager for …

Use cases

• Virtualization (MS Hyper-V, VMware, Xen, KVM)

• ERP and other business applications (Oracle, MS SQL, Exchange, SAP)

• Consolidation: a file server (Windows, Linux, or UNIX)

• Consolidation of heterogeneous environments

Diagram: enterprise and departmental SAN (ERP, databases, mail) and NAS (file servers) consolidated on one system. Disks form a RAID group, RAID groups form an aggregate, and volumes live in the aggregate; block access is over Fibre Channel and iSCSI, file access over CIFS and NFS on the LAN.

Use cases

• Virtualization (MS Hyper-V, VMware, Xen, KVM)

• ERP and other business applications (Oracle, MS SQL, Exchange, SAP)

• Consolidation: a file server (Windows, Linux, or UNIX)

• Consolidation of heterogeneous environments

• Backup and HA-DR

Broad virtualization support

– VMware®, Microsoft® Hyper-V™, and Citrix XenServer

– Tight integration

– SnapManager for most platforms

– Integration with VMware SRM for rapid site failover after a disaster

Security

– Each customer's data lives on separated parts of the array (MultiStore technology)

Diagram: a primary data center and a DR site, each running VMs (VM1–VM3) on virtual storage partitions; SnapMirror® replicates the partitions so that the DR site can take over on site failure.

Security – DR: Thin Replication, SnapMirror

Backup: SnapProtect, SnapMirror, SnapVault

Diagram: an OnCommand® server with SnapProtect coordinates application-consistent Snapshot® copies of server, virtualization, and NAS data; the copies are replicated to secondary storage with SnapMirror® and SnapVault®, with content indexing, secondary storage provisioning, and tape copy management (e.g. to a Quantum i40 library).

Why back up to NetApp?

• Data straight from the hosts

• Licensing

• Expansion/scaling

• Tape library support

• Reliability

• Consolidation (production and backup)

Data ONTAP Basics

Data ONTAP components

Diagram: the Data ONTAP components, including a FreeBSD kernel module.

WAFL – no dedicated regions are allocated for data and metadata; writes go to the nearest free block.

Berkeley Fast File System/Veritas File System/NTFS/etc. – data and metadata are written to dedicated regions.

A hybrid of a file system and a volume manager, aka WAFL

WAFL Architecture

Write Request Data Flow: Write Buffer

Diagram: writes from a SAN host, a UNIX client (NFS), and a Windows client (CIFS) enter through the network stack (NIC, HBA, RS-232) and the protocol services (SAN, NFS, CIFS). The data lands in the memory buffer and is journaled as NVLOG entries in NVRAM, until NVRAM is full.

Write Request Data Flow: WAFL to RAID

Diagram: when NVRAM is full, WAFL writes the data buffered in the memory buffer down to the RAID layer.

Write Request Data Flow: RAID to Storage

Diagram: the RAID layer computes a block or zone checksum for each 4k block and passes the blocks to the storage layer.

Write Request Data Flow: Storage Writes

Diagram: the storage layer commits the blocks to the disks.
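The buffered write path above can be sketched as a toy model. All names and the four-entry NVRAM limit are illustrative, not Data ONTAP internals:

```python
# Toy model of the write path: incoming writes are acknowledged once
# journaled to NVRAM (NVLOG); when NVRAM fills, a consistency point
# flushes the memory buffer down through WAFL/RAID to "disk".

NVLOG_LIMIT = 4  # pretend NVRAM holds 4 log entries

class Filer:
    def __init__(self):
        self.memory_buffer = []   # buffered writes (RAM)
        self.nvlog = []           # journal entries (battery-backed NVRAM)
        self.disk = []            # committed blocks

    def write(self, block):
        self.memory_buffer.append(block)
        self.nvlog.append(("NVLOG", block))  # client can be acknowledged now
        if len(self.nvlog) >= NVLOG_LIMIT:   # "NVRAM full"
            self.consistency_point()

    def consistency_point(self):
        # flush buffered data to disk, then clear the journal
        self.disk.extend(self.memory_buffer)
        self.memory_buffer.clear()
        self.nvlog.clear()

filer = Filer()
for i in range(10):
    filer.write(f"block{i}")
print(len(filer.disk))           # 8 blocks committed at two consistency points
print(len(filer.memory_buffer))  # 2 blocks still buffered and journaled
```

The key property the diagram illustrates survives in the sketch: a write is safe (journaled) long before it reaches disk, so acknowledgment latency is decoupled from disk latency.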

Physical disks: RAID group

Diagram: the controller writes data blocks A, B, C, … across a RAID group of data disks plus parity disks, with one parity block per stripe (P(A–D), P(E–H), …).

RAID 4: parity is kept on a dedicated parity disk.

RAID 5: parity is distributed across all disks in the group.

Physical disks: double-parity RAID group

RAID 6: two parity blocks per stripe (P(A–C), P(D–F), P(G–I), …), distributed across the disks in the group.

RAID DP: two dedicated parity disks per RAID group (row parity plus a second, diagonal parity).

Aggregate

Diagram: physical disks form RAID groups (RG 0, RG 1, RG 2), and the RAID groups together form an aggregate.

Default RAID type = raid_dp
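Single-parity reconstruction, as used by RAID 4's dedicated parity disk (and by the row-parity disk in RAID-DP), can be illustrated with XOR. This is a generic sketch, not NetApp's implementation, and RAID-DP's diagonal parity is omitted:

```python
# Row parity: the parity block is the XOR of the data blocks in the
# stripe, so any single lost block can be rebuilt by XOR-ing the
# survivors. (RAID-DP adds a second, diagonal parity disk so that two
# simultaneous failures survive; that math is omitted here.)

from functools import reduce

def xor_blocks(blocks):
    # XOR corresponding bytes of equal-length blocks
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

stripe = [b"AAAA", b"BBBB", b"CCCC"]  # data blocks on three data disks
parity = xor_blocks(stripe)           # stored on the dedicated parity disk

# Disk holding "BBBB" fails: rebuild its block from the rest + parity.
rebuilt = xor_blocks([stripe[0], stripe[2], parity])
assert rebuilt == b"BBBB"
print("rebuilt:", rebuilt)
```

Because XOR is its own inverse, the same operation both computes parity and reconstructs a missing block.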

Logical Storage Management

FlexVols: volumes are logically spread across all of the disks in the aggregate.

"Traditional" volumes: each volume is physically tied to its RAID group(s).

Diagram: tradvol 1 and tradvol 2 sit directly on their own RAID groups, while flexvol 1, flexvol 2, and a flexclone share a single aggregate built from several RAID groups.

© 2009 NetApp. All rights reserved. 46

Traditional

volumeNiski pozpiom

utylizacji

~20-30%

Wysoki poziom

utylizacji

~60-80%

Flexible

volume Dowolnie dostępna

App 2App 1

Zmarnowana

App 1 App 3

App 3

App 2

Zmarnowana

Woluminy tradycyjne vs FlexVOL

Logical Storage Management

Aggregate = a collection of disks, protected by RAID 4 or RAID-DP. Aggregates can grow but cannot shrink. Create aggregates as large as possible.

Limits:
– just over 600 TB maximum
– do not mix disks of different types
– do not mix disks of different sizes

Flexible volume (FlexVol1, FlexVol2) = a logical space inside an aggregate that holds data, e.g. files (File1, File2, …). FlexVols can grow and shrink.

LUN = Logical Unit Number = a logical space inside a volume, presented to a server as a logical disk. A LUN can grow but cannot shrink.
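The containment and resize rules above can be sketched in a few lines of Python. This is a hypothetical model; the class and method names are invented for illustration:

```python
# Sketch of the rules: aggregates grow but never shrink; FlexVols may
# grow or shrink within the free space of their aggregate. Sizes in TB.

class Aggregate:
    def __init__(self, size_tb):
        self.size_tb = size_tb
        self.volumes = {}  # volume name -> size in TB

    def resize(self, new_size_tb):
        if new_size_tb < self.size_tb:
            raise ValueError("aggregates can grow but not shrink")
        self.size_tb = new_size_tb

    def used(self):
        return sum(self.volumes.values())

    def resize_volume(self, name, new_size_tb):
        others = self.used() - self.volumes.get(name, 0)
        if others + new_size_tb > self.size_tb:
            raise ValueError("aggregate is full")
        self.volumes[name] = new_size_tb  # FlexVols may grow OR shrink

aggr = Aggregate(100)
aggr.resize_volume("FlexVol1", 40)
aggr.resize_volume("FlexVol2", 30)
aggr.resize_volume("FlexVol1", 20)   # shrinking a FlexVol is fine
try:
    aggr.resize(50)                  # shrinking the aggregate is not
except ValueError as e:
    print(e)
```

The asymmetry is the point: flexibility lives at the volume layer, while the aggregate is the fixed (grow-only) pool underneath.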

Snapshot

How Snapshot Technology Works

Diagram: File X in the active file system points to disk blocks A, B, and C.

How Snapshot Technology Works (Cont.)

Diagram: a Snapshot copy of File X points to the same disk blocks A, B, and C as the active file.

Blocks are "frozen" on disk
Consistent point-in-time copy
Ready to use (read-only)
Consumes no space*

* With the exception of the 4-KB replicated root inode block that defines the Snapshot copy

How Snapshot Technology Works (Cont.)

Diagram: a client sends new data for block C of File/LUN X; the new data is written to a free block C', while the Snapshot copy continues to point at the original block C.

How Snapshot Technology Works (Cont.)

The active version of X is now composed of blocks A, B, and C'.
The Snapshot version of X remains composed of blocks A, B, and C.
Active data is moved atomically to the new consistent state.

Diagram: File X (active) and File X (Snapshot) pointing at disk blocks A, B, C, C'.
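The copy-on-write behavior above can be modeled with plain block-pointer lists. This is a conceptual sketch, not WAFL code:

```python
# A snapshot is just a frozen copy of the file's block-pointer list,
# so it costs (almost) nothing until blocks are overwritten.

disk = {1: "A", 2: "B", 3: "C"}        # block number -> contents
active = {"fileX": [1, 2, 3]}          # active file system pointers

snapshot = {"fileX": list(active["fileX"])}  # freeze the pointers only

# Client overwrites block C: write C' to a NEW block and repoint the
# active file; the snapshot still references the original block 3.
disk[4] = "C'"
active["fileX"][2] = 4

print([disk[b] for b in active["fileX"]])    # ['A', 'B', "C'"]
print([disk[b] for b in snapshot["fileX"]])  # ['A', 'B', 'C']
```

Taking the snapshot copied three integers, not three blocks, which is why the slide can say it "consumes no space" apart from the copied root metadata.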

FlexClone

Snapshot – FlexClone

1. We start with Volume 1, its data written on disk.
2. We make a copy via a Snapshot™ copy of Volume 1.
3. We create a clone: a new volume (Volume 2) based on that Snapshot copy.
4. We modify Volume 1 and Volume 2; each volume stores only its own changed blocks.

Result: independent copies of the volumes.

Exact copies take space and time.

Diagram: with mirrored copies, Production must first be mirrored in full before Test 1, Test 2, QA, Dev 1, and Dev 2 can work on it; with FlexClone, Test 1, Test 2, QA, Dev 1, and Dev 2 are thin clones created directly from Production.

Snapshot – FlexClone
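The clone workflow above can be sketched with the same block-pointer idea used for snapshots. This is conceptual only, not NetApp's implementation:

```python
# A FlexClone-style writable clone: the clone starts as a copy of the
# snapshot's pointers, and each volume stores only its own changed
# blocks afterward; unchanged blocks stay shared on disk.

disk = {1: "A", 2: "B", 3: "C"}
vol1 = [1, 2, 3]                 # Volume 1 (parent)
snap = list(vol1)                # Snapshot copy of Volume 1
vol2 = list(snap)                # Volume 2: clone based on the snapshot

disk[4] = "C1"; vol1[2] = 4      # modify vol1 -> new block for vol1 only
disk[5] = "B2"; vol2[1] = 5      # modify vol2 -> new block for vol2 only

print([disk[b] for b in vol1])   # ['A', 'B', 'C1']
print([disk[b] for b in vol2])   # ['A', 'B2', 'C']
print(len(disk))                 # 5 blocks total, not 6: block 'A' stays shared
```

This is why five test/dev clones of a production volume cost only the changed blocks, not five full copies.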

Storage Efficiency

DEDUPLICATION

Diagram: a volume holds standard data plus metadata; the deduplication process turns it into deduplicated (single-instance) storage plus metadata.

Deduplication operates on 4K blocks in the active file system of a FlexVol volume.

DEDUPLICATION – all of the steps

Scanning the volume:

netapp1> sis status
Path       State    Status  Progress
/vol/vol5  Enabled  Active  25 MB Scanned

Searching for duplicate data:

Path       State    Status  Progress
/vol/vol5  Enabled  Active  25 MB Searched

Deduplicating:

Path       State    Status  Progress
/vol/vol5  Enabled  Active  40MB (20%) done

Removing unused data:

Path       State    Status  Progress
/vol/vol5  Enabled  Active  30MB Verified
OR
/vol/vol5  Enabled  Active  10% Merged

The result:

netapp1> df –s /vol/vol5
Filesystem  used      saved    %saved
/vol/vol5/  24072140  9316052  28%
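Block-level deduplication in miniature. This is a generic sketch: 4-byte blocks stand in for the 4K blocks, and SHA-256 stands in for the fingerprinting:

```python
# Split data into fixed-size blocks, fingerprint each block, and store
# each unique block once; the file keeps only a list of fingerprints.

import hashlib

BLOCK = 4  # stand-in for the 4K block size

def dedupe(data: bytes):
    store = {}     # fingerprint -> block (single-instance storage)
    pointers = []  # file layout: list of fingerprints
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK]
        fp = hashlib.sha256(block).hexdigest()
        store.setdefault(fp, block)   # keep only the first copy
        pointers.append(fp)
    return store, pointers

data = b"AAAABBBBAAAACCCCAAAA"        # 5 blocks; 'AAAA' appears 3 times
store, pointers = dedupe(data)
print(len(pointers), "logical blocks,", len(store), "stored blocks")
saved = 1 - len(store) / len(pointers)
print(f"saved: {saved:.0%}")          # 40% of the blocks deduplicated
```

The "%saved" column of `df –s` reports the same ratio: logical blocks referenced minus unique blocks actually stored.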

DEDUPLICATION – ways to start a run

Schedule: specify the days and times
Manually, from the command line
Automatically, when 20% new data has been written to the volume

Only one deduplication process can run on a given volume at a time.

Up to 8 deduplication processes can run concurrently on the same NetApp storage system:
a) if an additional run is triggered by the schedule, it is queued
b) if an additional run is started manually, it does not execute
c) a round-robin mechanism is used, so the first volume is not always the one that runs

DEDUPLICATION – best practices

Limit the number of active Snapshot copies on the volume.

Run deduplication only when you need it (with little new data, it adds CPU load for little gain).

Manual runs are a popular option.

Compression

Deduplication and SnapMirror

Diagram: Site A (e.g. a branch office) holds Vol1 on dedupe-optimized storage; volume SnapMirror (VSM) replicates it to Vol1' at Site B (e.g. the central site), which is also deduped.

Network efficiency: less data is sent across the network.

SSD and Flash

Performance Acceleration Module: FLASH CACHE

PAM II

A PCIe card, the same form factor as the PAM I card
Available in a 1 TB version
Up to 4 TB of additional cache is supported per HA storage system

Flash Cache

Diagram: on a cache miss, the data block is read from disk into Flash Cache.

Boot time reduced by 47%
Disk operations reduced by 50%
Performance increased by 71%
Instant start from cache

Take a Walk Down "A Path of Latency"

Diagram: the path of latency from client to NetApp storage array.

AutoSupport

Data ONTAP AutoSupport

HA

High Availability Spectrum

High availability is the practice of providing solutions that increase storage resiliency:

Loss of cable
Loss of shelf
Loss of controller
Loss of building
Loss of site
Loss of region

NetApp® provides solutions to overcome all of these business continuity problems.

Shelf Multipathing

Diagram: a storage controller cabled to its disk shelves through two independent cable paths.

Adding a second cable provides availability even if a single cable goes bad.

Multipathing provides:
1. Increased availability
2. Increased throughput

SyncMirror

SyncMirror may be configured:
– in a standalone storage system
– or, most commonly, in a high-availability pair

SyncMirror divides the disks into two pools: pool0 becomes plex0, and pool1 becomes plex1.

Diagram: aggr1 is made of plex0 (the data within the aggregate) and plex1 (the mirrored data within the aggregate), each holding the same contents (/vol, /vol0, /etc).

High-Availability Controller Configuration

Diagram: two storage controllers; each is connected to its own disk shelves, to the other controller's disk shelves, and to its partner controller.

If a storage controller fails, the surviving partner serves the data of the failed controller.

Takeover Operation

The surviving partner has two identities, with each identity able to access only the appropriate volumes and networks.

You can access the failed node by using console commands.

Diagram: controller "system" takes over its partner "system2" after the command:

system> cf takeover

Stretch MetroCluster

Stretch MetroCluster expands high availability to distances of up to 300 m (Building 1 to Building 2).

See the High Availability Web-based course for more information.

Fabric-Attached MetroCluster

Fabric-attached MetroCluster expands high availability to distances of up to 100 km (Site 1 to Site 2 over an ISL trunk).

See the High Availability Web-based course for more information.

SnapMirror

SnapMirror allows mirroring volumes or qtrees between regions (Region 1 to Region 2).

See the NetApp Protection Software Administration ILT course for more information.

Support for VMware Site Recovery Manager

Diagram: a production VMware ESX/vSphere site replicated with NetApp SnapMirror to a recovery site that also serves test/dev.

Thanks to FlexClone, we can test the recovery site.

Freedom in designing the recovery data center

Diagram: the protected site runs on FC disk with FC access, while the recovery site can run on SATA disk with iSCSI access; NetApp SnapMirror replicates between them.

Optimized data replication

Diagram: new data written at the protected site is deduplicated, so SnapMirror replicates the reduced (post-deduplication) data set to the recovery site.

Optimized data replication

Compression is used to optimize WAN link utilization: SnapMirror compresses the already-deduplicated data at the protected site and decompresses it at the recovery site.

Scale-Out, aka Clustered Data ONTAP

Clustered ONTAP 8.2: Architectural Benefits

Diagram: HA pairs hosting virtual servers (VS1, VS2), each with logical interfaces (LIF1–LIF4).

Consolidated management
Access to any data from anywhere
Nondisruptive operations
Dynamic flexibility
Virtualized "tiered services"
Integrated data protection and efficiency

Clustered ONTAP 8.2: NAS Benefits

A single NFS mount / CIFS share:

Outstanding performance that scales linearly with node count
No client-side code to manage
Each Vserver provides a separate, consistent namespace
Nondisruptively move, load-balance, and change the storage without changing the namespace
Seamlessly scale to many petabytes across 24 nodes

Clustered ONTAP 8.2 SAN Access: Optimized Path to Each LUN

Diagram: with MPIO and ALUA, a host has one active/optimized path to each LUN, through a LIF on the node that owns the LUN, plus active/nonoptimized paths through LIFs on the other nodes (LIF1–LIF4, LUNs A and B).

Storage System Architecture

1. Controller or node
2. HA interconnect
3. MPHA storage connections
4. Disk shelf
5. System memory
6. NVRAM
7. Flash Cache
8. SSD aggregate
9. HDD aggregate
10. Flash Pool
11. Cluster network (clustered Data ONTAP)
12. Additional cluster nodes (clustered Data ONTAP)

Diagram: HA pairs joined by the cluster network form a cluster in clustered Data ONTAP.

Clustered Data ONTAP Data Access

Direct data access: the target volume is owned by the node that is hosting the LIF.

Indirect data access: the target volume is not owned by the node that is hosting the LIF.

Nondisruptive Operations

Move flexible volumes, LUNs, and LIFs while data is being accessed.
The client or host view remains the same as data and network access change.

Diagram: Storage Virtual Machines (SVM) 1, 2, and 3, each with its own NAS and SAN LIFs spread across the cluster's nodes.

Nondisruptive Operations

Online hardware upgrade of an HA pair using aggregate relocate (ARL)
Online shelf removal and replacement with nondisruptive shelf removal (NDSR)

(Diagram: an HA pair with LIFs and a LUN; the HA interconnect and cluster network are omitted for clarity.)

Storage Virtual Machines (SVMs)

The host and client view of a cluster:

Virtual storage systems
Define the storage services available to tenants or applications
Required for data access
Serve SAN, NAS, or both
Include FlexVol® volumes and LUNs
Can use physical storage on any cluster node
Include logical interfaces (LIFs); a LIF may have an IP address or a WWPN
Can have LIFs on any cluster node

Diagram: a cluster hosting SVM 1 (block and file storage services for Tenant 1), SVM 2 (block storage services for Tenant 2), and SVM 3 (file storage services for Tenant 3).