Jak skutecznie wykorzystać zBX (How to effectively use zBX)


Transcript of Jak skutecznie wykorzystać zBX

Page 1: Jak skutecznie wykorzystać zBX

© 2011 IBM Corporation

IBM zEnterprise - Freedom through Design

IBM BladeCenter Extension – How to effectively use it?

Mike Storzer, Certified Senior IT Specialist, TMCC R&D Client Centers, Boeblingen Lab

Page 2: Jak skutecznie wykorzystać zBX

2 © 2011 IBM Corporation

IBM zEnterprise zBX & Unified Resource Manager

Agenda

zEnterprise/zBX Technical Overview – Update
– Unified Resource Manager
– zBX Blades
– zBX High Availability

Unified Resource Manager – live demo

Page 3: Jak skutecznie wykorzystać zBX

3 © 2011 IBM Corporation

IBM zEnterprise zBX & Unified Resource Manager

What is zEnterprise all about? … it's ALL about the workload…

[Slide diagram: a typical multi-tier workload – web servers, application servers, a database server, routers, a firewall, and storage – highlighted as a subset representing a specific workload.]

Connected. Integrated. Flexible, Dynamic, and Responsive. Aligned with Business Objectives.

Page 4: Jak skutecznie wykorzystać zBX

4 © 2011 IBM Corporation

IBM zEnterprise zBX & Unified Resource Manager

It's ALL about the workload…

[Slide diagram: the same workload mapped onto zEnterprise – private, secure networks; web servers; application servers on z/OS, Linux on System z, AIX, distributed Linux and Windows; a z/OS database server; the zEnterprise BladeCenter Extension; firewall and storage – all under Unified Resource and Workload Management.]

Connected. Integrated. Flexible, Dynamic, and Responsive. Aligned with Business Objectives.

Page 5: Jak skutecznie wykorzystać zBX

5 © 2011 IBM Corporation

IBM zEnterprise zBX & Unified Resource Manager

A “System of Systems” for Predictable Service Delivery

zEnterprise Unified Resource Manager
– Unifies management of resources, extending IBM System z® qualities of service end-to-end across workloads
– Provides platform, hardware and workload management

IBM zEnterprise™ 196 (z196) or IBM zEnterprise 114 (z114)
– Optimized to host transaction and mission-critical applications
– The most efficient platform for large-scale Linux® consolidation
– Massive scale-up – 26 MIPS to over 50K MIPS

zEnterprise BladeCenter® Extension (zBX)
– Selected IBM POWER7® blades and IBM System x® blades for deploying applications in a multi-tier architecture
– High-performance optimizers and appliances to accelerate time to insight and reduce cost
– Dedicated high-performance private network


Page 6: Jak skutecznie wykorzystać zBX

6 © 2011 IBM Corporation

IBM zEnterprise zBX & Unified Resource Manager

[Slide diagram: a zEnterprise Ensemble made up of several zEnterprise nodes, all managed from the HMC with Unified Resource Manager. Each node shows a z196/z114 CPC – z CPU, memory and I/O, PR/SM, Support Elements (SE), z/VM with Linux guests, and z/OS virtual machines – attached to a z Blade Extension (zBX) containing Power blades (pHyp hypervisor running AIX), x86 blades (xHyp hypervisor running Linux), optimizer slots (ISS, DataPower, Cell/HPC, DWA) and the Advanced Management Module (AMM).]

What is an Ensemble?

An ensemble allows you to have a single pool of resources – integrating system and workload management across the multi-system, multi-tier, multi-architecture environment.

– An ensemble is a collection of up to eight zEnterprise nodes that are managed collectively by the Unified Resource Manager as a single logical virtualized system.
– A zEnterprise node is a z196/z114 with 0 or 1 zBX. The zBX may contain from 1 to 4 racks, each containing up to two BladeCenters. At least one node must have a zBX installed.
– zEnterprise nodes are deployed within a single site.
– Automated failover to the ensemble backup HMC.

Page 7: Jak skutecznie wykorzystać zBX

7 © 2011 IBM Corporation

IBM zEnterprise zBX & Unified Resource Manager

zBX – A Uniquely Configured Extension of the zEnterprise
Looks like a rack with BladeCenters in it … but so much more
… managed by the zEnterprise Unified Resource Manager

Creating an integrated solution experience … blades are easier to deploy and manage
– Increased optics for fibre channel connections
– Infrastructure built and tested at the factory
– zBX hardware redundancy provides improved availability
– IBM System z engineer for installation, service and upgrade process

Improving the connectivity between blades and IBM System z
– Isolated, secure, redundant network dynamically configured
– High-speed 10 GbE dedicated network for data
– New 1 GbE optics for access from the zBX to the customer network (routers only)
– Lower latency due to fewer devices

Preserving the customer application architecture
– No modifications required for operating systems or applications
– No System z software running in the IBM zEnterprise BladeCenter Extension (zBX)
– Customer network and storage architectures unchanged

Page 8: Jak skutecznie wykorzystać zBX

8 © 2011 IBM Corporation

IBM zEnterprise zBX & Unified Resource Manager

New Blades Provide Added Flexibility for Workload Deployment and Integration

IBM zEnterprise BladeCenter Extension (zBX)
Machine Type: 2458 Mod 002

One to four 42u racks – capacity for up to 112 blades (see the capacity sketch below)
• Up to 112 PS701 Power blades
• Up to 28* HX5 System x blades
• Up to 28 DataPower XI50z blades (double-wide)

Optimizers
• IBM WebSphere DataPower Integration Appliance XI50z for zEnterprise

Select IBM Blades
• IBM BladeCenter PS701 Express
• IBM BladeCenter HX5 (7873)

IBM System x Blades
– IBM BladeCenter HX5 7873 dual-socket 16-core blades
– Four supported memory configurations for zBX – 64 GB, 128 GB, 192 GB, 256 GB

IBM POWER7 Blades
– IBM BladeCenter PS701, 8-core processor, 3.0 GHz
– Three configurations supported in zBX – 32 GB, 64 GB, 128 GB

Flexibility in ordering – acquired through existing channels, including IBM

Unified Resource Manager will install the hypervisor on blades in the zBX
– Integrated hypervisor (KVM-based) for System x blades
– PowerVM Enterprise Edition for POWER7 blades

Up to 112 blades supported on zBX
– Ability to mix and match blades in the same chassis
– Number of blades supported varies by type

Blades assume System z warranty and maintenance when installed in the zBX

*Support for 56 System x blades (March 30, 2012)
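The capacity figures above follow from the rack arithmetic. A minimal Python sketch of that arithmetic, for planning only (the helper function and the example mix are illustrative, not from the slide):

    # Planning arithmetic only; per-type limits and slot counts are taken from the slide above.
    BLADES_PER_BLADECENTER = 14
    BLADECENTERS_PER_RACK = 2
    MAX_RACKS = 4

    max_blade_slots = BLADES_PER_BLADECENTER * BLADECENTERS_PER_RACK * MAX_RACKS
    print(max_blade_slots)  # 14 * 2 * 4 = 112 slots in a fully built zBX Model 002

    # Per-type limits from the slide; a DataPower XI50z is double-wide, i.e. occupies 2 slots.
    limits = {"PS701": 112, "HX5": 28, "DataPower XI50z": 28}
    slots_used = {"PS701": 1, "HX5": 1, "DataPower XI50z": 2}

    def fits(plan):
        """Check a proposed blade mix against slot capacity and per-type limits."""
        slots = sum(slots_used[t] * n for t, n in plan.items())
        return slots <= max_blade_slots and all(n <= limits[t] for t, n in plan.items())

    print(fits({"PS701": 56, "HX5": 28, "DataPower XI50z": 14}))  # True: 56 + 28 + 28 = 112 slots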

Page 9: Jak skutecznie wykorzystać zBX

9 © 2011 IBM Corporation

IBM zEnterprise zBX & Unified Resource Manager

Putting zEnterprise System to the Task
Use the smarter solution to improve your application design

[Slide diagram: a zEnterprise node managed by the System z Hardware Management Console (HMC) with Unified Resource Manager. The System z host (z HW resources, System z PR/SM™, Support Element) runs z/OS®, z/TPF, z/VSE®, Linux on System z and z/VM; the zBX holds select IBM blades – AIX on POWER7, Linux on System x and Windows on System x, each under blade virtualization – plus optimizers such as the DataPower XI50z, with their blade HW resources. The private high-speed data network (IEDN) and the private management network (INMN) connect everything, and the customer network attaches at either end.]

Page 10: Jak skutecznie wykorzystać zBX

10 © 2011 IBM Corporation

IBM zEnterprise zBX & Unified Resource Manager

zEnterprise Unified Resource Manager – Hardware Management

[Slide diagram: HMC managing six functional areas – Hypervisors, Energy, Networks, Performance, Virtual Servers, Operations.]

Operational Controls
– Auto-discovery and configuration support for new resources (including storage)
– Cross-platform hardware problem detection, reporting and call home
– Physical hardware configuration, backup and restore
– Delivery of system activity using new user interface

Hypervisor Management
– Integrated deployment and configuration of hypervisors
– Hypervisors (except z/VM) shipped and serviced as firmware
– Management of ISO images
– Creation of virtual networks

Network Management
– Monitoring and collecting metrics of networking resources
– Management of virtual networks including access control

Energy Management
– Monitoring and trend reporting of CPU energy efficiency

Key: Manage suite / Advanced Management suite / Automate suite

Page 11: Jak skutecznie wykorzystać zBX

11 © 2011 IBM Corporation

IBM zEnterprise zBX & Unified Resource Manager

zEnterprise Unified Resource Manager – Platform Management

[Slide diagram: HMC managing Hypervisors, Energy, Networks, Performance, Virtual Servers, Operations.]

Hypervisor Management
– Manage and control communication between virtual server operating systems and the hypervisor

Virtual Server Lifecycle Management
– Single view of virtualization across platforms
– Ability to deploy multiple, cross-platform virtual servers within minutes
– Management of virtual networks including access control
– Integration of HiperSockets network with IEDN

Resource Workload Awareness and Platform Performance Management
– HMC provides a single consolidated and consistent view of resources
– Wizard-driven set-up of resources in accordance with specified business process
– Ability to monitor and report performance
– Load balance recommendations
– Manage to a performance policy

Energy Management
– Static power savings
– Ability to query maximum potential power

Key: Manage suite / Advanced Management suite / Automate suite

Page 12: Jak skutecznie wykorzystać zBX

12 © 2011 IBM Corporation

IBM zEnterprise zBX & Unified Resource Manager

Heterogeneous Virtual Infrastructure Management

[Slide diagram: a layered stack – applications (APP) on middleware, on multiple operating systems (e.g., z/OS, z/TPF, z/VSE, z/VM, Linux on System z, AIX, Linux on System x¹), on virtualization (PR/SM, z/VM, PowerVM, System x hypervisor), on firmware, on System z, Power, System x¹ and IBM Optimizer hardware – with the Unified Resource Manager providing Platform Management, Service Management and Hardware Management across the stack.]

¹ All statements regarding IBM future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.

Page 13: Jak skutecznie wykorzystać zBX

13 © 2011 IBM Corporation

IBM zEnterprise zBX & Unified Resource Manager

Unified Resource Manager APIs – Enabling External Management Tools

New API support allows programmatic access to the same underlying functions exploited by the HMC user interface (UI)

– Same resource types, instances and policies

– HMC UI steps are accomplished using panels in a wizard-style task while API steps are accomplished by calling API management primitives

– Therefore the API functions correspond to views and tasks in the UI such as:

Listing resource instances

Creating, changing, deleting resource instances

Operational control of resource instances

Access to these functions will enable tools external to the HMC to manage the Unified Resource Manager. Initially the priority scenarios will be the discovery, monitoring, and provisioning use cases.

[Slide diagram: both the UI and the API front the HMC, which manages the zEnterprise System.]
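As a sketch of what "listing resource instances" looks like from an external tool, here is a minimal Python example. The host name, port, session header and the /api/virtual-servers path are assumptions made for illustration, not the documented Unified Resource Manager API syntax:

    import json
    import ssl
    import urllib.request

    HMC_URL = "https://hmc.example.com:6794"            # hypothetical HMC address and port
    SESSION_TOKEN = "obtained-from-a-prior-logon-call"  # see the logon sketch on the next slide

    def list_virtual_servers():
        """Fetch the virtual server instances visible to this session (illustrative URI)."""
        req = urllib.request.Request(
            HMC_URL + "/api/virtual-servers",            # assumed resource path
            headers={"X-API-Session": SESSION_TOKEN,     # assumed session header
                     "Accept": "application/json"},
        )
        ctx = ssl.create_default_context()               # HTTPS/SSL protects the connection
        with urllib.request.urlopen(req, context=ctx) as resp:
            return json.loads(resp.read())

    for vs in list_virtual_servers().get("virtual-servers", []):
        print(vs.get("name"), vs.get("status"))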

Page 14: Jak skutecznie wykorzystać zBX

14 © 2011 IBM Corporation

IBM zEnterprise zBX & Unified Resource Manager

zEnterprise Unified Resource Manager – Management of zEnterprise from External Tools

[Slide diagram: HMC managing Hypervisors, Energy, Networks, Performance, Virtual Servers, Operations.]

The Application Programming Interface (API) is a new implementation in the HMC
– Built on existing SNMP/CIM function plus new Unified Resource Manager capabilities
– TCP/IP sockets/HTTP is the underlying network support, with SSL for connection security
– Supports modern scripting languages (e.g., Perl, Python) that have HTTP supporting libraries
– Fully documented and supported for customer and third-party use
– The HMC UI remains in place, is supported, and will continue to be extended as Unified Resource Manager evolves
– APIs are governed by the functions they involve, such as 'Manage' or 'Automate'

The API allows programmatic access to the same functions exploited by the HMC UI, corresponding to views and tasks in the UI such as:
– List and get properties for core (traditional) entities, ensemble, workloads, virtual networks, virtual hosts, virtual servers, storage and zBX infrastructure (as well as start/stop/restart for many of these)
– Service-oriented functions like metrics retrieval and inventory
– Manage energy management modes
– Help on recovery actions of virtual servers
– And more …
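Because the transport is plain HTTPS, any scripting language with an HTTP library can drive the API, as the slide notes for Perl and Python. A hedged Python sketch of the logon-then-request pattern; the port, URI path and field names are assumptions for illustration only:

    import http.client
    import json
    import ssl

    HMC_HOST, HMC_PORT = "hmc.example.com", 6794   # assumed values for illustration only

    def api_logon(userid, password):
        """POST credentials over SSL and return a session token (illustrative flow)."""
        ctx = ssl.create_default_context()          # SSL secures the socket, as the slide notes
        conn = http.client.HTTPSConnection(HMC_HOST, HMC_PORT, context=ctx)
        body = json.dumps({"userid": userid, "password": password})
        conn.request("POST", "/api/session", body,
                     headers={"Content-Type": "application/json"})
        resp = conn.getresponse()
        token = json.loads(resp.read()).get("api-session", "")
        conn.close()
        return token

    token = api_logon("ensadmin", "********")
    print("session established" if token else "logon failed")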

Page 15: Jak skutecznie wykorzystać zBX

15 © 2011 IBM Corporation

IBM zEnterprise zBX & Unified Resource Manager

HMC API and UI Provide Same Level of Function

• Example: Creating a Virtual Server on an IBM Blade

• Regardless of the interface used, this is accomplished through a series of steps:

– Select hypervisor for new virtual server
– Create virtual server
– Define virtual server characteristics: name, description; virtual or dedicated virtual processors; number of virtual processors; amount of memory
– Define virtual server network connectivity: select from among defined virtual networks
– Add storage to virtual server: select from storage resources previously defined to the hypervisor
– Specify virtual server options: for example, specify the boot device type and instance (Disk, Network, or ISO)
– Activate virtual server
– Assign virtual server to workload
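The inputs gathered by these steps map naturally onto a small data structure. A sketch in Python; the field names are chosen for illustration and are not taken from the HMC:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class VirtualServerSpec:
        """Inputs collected by the 'create virtual server' steps on this slide."""
        name: str
        description: str = ""
        processors: int = 1                   # number of virtual processors
        dedicated: bool = False               # virtual vs. dedicated processors
        memory_gb: int = 4                    # amount of memory
        networks: List[str] = field(default_factory=list)  # defined virtual networks
        storage: List[str] = field(default_factory=list)   # resources previously defined to the hypervisor
        boot_device: str = "disk"             # disk, network, or iso

    spec = VirtualServerSpec(name="web01", processors=2, memory_gb=8,
                             networks=["IEDN-VLAN10"], storage=["LUN-0A23"])
    print(spec)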

Page 16: Jak skutecznie wykorzystać zBX

16 © 2011 IBM Corporation

IBM zEnterprise zBX & Unified Resource Manager

HMC API and UI Provide Same Level of Function (cont'd)

• HMC UI: Steps are accomplished using panels in a wizard-style task

[Slide shows the same step flow as above, with each step performed through a panel of the wizard-style task in the HMC UI: select hypervisor, create virtual server, assign to workload, define characteristics, define network connectivity, add storage, specify options, activate.]

Page 17: Jak skutecznie wykorzystać zBX

17 © 2011 IBM Corporation

IBM zEnterprise zBX & Unified Resource Manager

HMC API and UI Provide Same Level of Function (cont'd)

• zManager API: Steps are accomplished by calling management primitives of the API

Note: Function names listed below are conceptual, not the actual API syntax

Select hypervisor for new virtual server
• Call List-Hypervisors function to obtain a list of hypervisors
• <Invoking application selects desired hypervisor>

Create virtual server / define virtual server characteristics
• Call Create-VS function, specifying the selected hypervisor as target and basic VS parameters, to get the base VS created

Define virtual server network connectivity
• Call List-VNetworks function to obtain current virtual networks
• <Select network>
• Call Add-VNIC function, specifying the new VS as target and the virtual network parameters

Add storage to virtual server
• Call List-Stg-Resources function to obtain the list of available volumes
• <Select volume>
• Call Add-VDisk function, specifying the new VS as target and the selected storage resource

Specify virtual server options
• <Select boot device>
• Call Update-VS function to set the boot device

Activate virtual server and assign it to a workload
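Strung together, the primitives above form a short provisioning script. The sketch below mirrors the slide's flow; as the slide itself says, the function names are conceptual, so the api object and its methods are placeholders rather than the real API syntax:

    def provision_virtual_server(api, vs_name, network_name, volume_name):
        """Conceptual flow only: method names mirror the slide's primitives, not real calls."""
        hypervisors = api.list_hypervisors()                 # 'List-Hypervisors'
        target = hypervisors[0]                              # invoking application selects one

        vs = api.create_vs(hypervisor=target, name=vs_name,  # 'Create-VS' with the selected
                           processors=2, memory_gb=8)        #  hypervisor and basic VS parameters

        networks = api.list_vnetworks()                      # 'List-VNetworks'
        chosen_net = next(n for n in networks if n["name"] == network_name)
        api.add_vnic(vs, network=chosen_net)                 # 'Add-VNIC'

        volumes = api.list_stg_resources(hypervisor=target)  # 'List-Stg-Resources'
        chosen_vol = next(v for v in volumes if v["name"] == volume_name)
        api.add_vdisk(vs, storage=chosen_vol)                # 'Add-VDisk'

        api.update_vs(vs, boot_device="disk")                # 'Update-VS' sets the boot device
        api.activate_vs(vs)                                  # activate the new virtual server
        return vs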

Page 18: Jak skutecznie wykorzystać zBX

18 © 2011 IBM Corporation

IBM zEnterprise zBX & Unified Resource Manager

Service Management Extends zEnterprise Firmware Functionality for Heterogeneous Environments

zEnterprise Unified Resource Manager
– Operational controls for hardware/firmware
– Service & support for hardware/firmware
– Hardware configuration management
– Workload-based resource allocation & provisioning for zEnterprise
– Physical & virtual resource management
– Goal-oriented management of zEnterprise resources (availability, performance, energy, security)
– Faster transaction processing with reduced network latency

zEnterprise extended by IBM Service Management Center
– Visibility, control and automation for applications, transactions, databases, and all datacenter resources
– End-to-end workload management and service level objectives – align IT management with business goals
– Common usage and accounting for business accounting
– Dynamic/centralized management of application workloads based on policies
– End-to-end enterprise security
– Business resiliency for multi-site recovery
– Multi-site storage management and disaster recovery
– High availability and disaster recovery for the cloud
– Cloud provisioning and management
– Asset and change management for physical and virtual resources

Page 19: Jak skutecznie wykorzystać zBX

19 © 2011 IBM Corporation

IBM zEnterprise zBX & Unified Resource Manager

IBM WebSphere DataPower Integration Appliance XI50 for zEnterprise (DataPower XI50z) helps extend the value of zEnterprise
Purpose-built hardware for simplified deployment and hardened security helps businesses quickly react to change and reduce time to market

What is it?
The IBM WebSphere DataPower Integration Appliance XI50 for zEnterprise can help simplify, govern, secure and integrate XML and IT services by providing connectivity, gateway functions, data transformation, protocol bridging, and intelligent load distribution (e.g., HTTP, MQ, JMS, FTP, IMS; SOAP/XML; COBOL/CSV).

How is it different?
– Security: VLAN support provides enforced isolation of network traffic with secure private networks.
– Improved support: monitoring of hardware with "call home" for current/expected problems, and support by a System z Service Support Representative.
– System z packaging: increased quality with pre-testing of blade and zBX. Upgrade history available to ease growth.
– Operational controls: monitoring rolled into the System z environment from a single console. Consistent change management with Unified Resource Manager.

Page 20: Jak skutecznie wykorzystać zBX

20 © 2011 IBM Corporation

IBM zEnterprise zBX & Unified Resource Manager

IBM WebSphere DataPower Integration Appliance XI50 for zEnterprise

DataPower XI50z (2462-4BX)
– Same hardware as DataPower XI50B (4195-4BX)
– "Double-wide" blade: 2 BladeCenter slots (IBM HS22 blade + DataPower expansion unit)
– Current firmware based on DataPower firmware v3.8.1; new firmware based on DataPower firmware v4.0.1
– Same acceleration, security, and integration capabilities

Can coexist with POWER7 and IBM System x blades in the same zBX BladeCenter

Leverages advanced zBX BladeCenter networking infrastructure
– 2 x 1 GbE interfaces to zBX 1 GbE top-of-rack switches (INMN)
– 2 x 10 GbE interfaces to zBX 10 GbE top-of-rack switches (IEDN)

Ordering, configuration and installation
– Hardware and firmware are configured and ordered by eConfig as zBX features
– Ships installed in a new-build zBX or field installed by IBM service as an MES

Tightly integrated with zEnterprise
– Hardware and firmware management by Unified Resource Manager
– Inherits zEnterprise Ensemble serviceability, monitoring and reporting capabilities

Page 21: Jak skutecznie wykorzystać zBX

21 © 2011 IBM Corporation

IBM zEnterprise zBX & Unified Resource Manager

IBM POWER7 PS701 Express Blade
8 cores, single-wide, 3.0 GHz POWER7, up to 128 GB of memory

General-purpose computing platform
– Housed in a standard BladeCenter H chassis inside the IBM zEnterprise BladeCenter Extension enclosure
– Up to 112 blades
  • 14 blades per BladeCenter
  • 2 BladeCenters per rack
  • 4 racks per zBX Model 2
– Managed by the IBM zEnterprise Unified Resource Manager
– Virtualized with a firmware-supplied hypervisor
– Entitled through System z firmware

Performance and energy efficiency
• Single-wide 8-core with three configurations
• POWER7 processor-based PS blades automatically optimize performance
• Ideal for highly virtualized environments with demanding commercial workload performance
• Virtualization performance and scalability superior to IBM System x blades
• Pioneering EnergyScale technology and IBM Systems Director Active Energy Manager™ software
• Take advantage of the power of IBM's industry-leading UNIX operating system, AIX

Page 22: Jak skutecznie wykorzystać zBX

22 © 2011 IBM Corporation

IBM zEnterprise zBX & Unified Resource Manager

IBM BladeCenter PS701 (8406-71Y) Configurations for zBX

IBM BladeCenter PS701 (8406-71Y)
– POWER7 8-core processor
– 8 processor cores activated
– 1 processor socket
– Single-wide blade only
– 3.0 GHz
– 16 DIMM slots (4 or 8 GB DIMMs)
– 300 GB HDD internal disk

The 3 configurations shown are supported. POWER7 blades may be acquired by the customer through existing channels or through IBM.

A PowerVM Enterprise Edition licence and Software Maintenance Agreement is required for all 8 cores, and must be maintained for the duration of use.

PowerVM Enterprise Edition is controlled as zEnterprise Licensed Internal Code (LIC)
– pHyp 2.1, VIOS 2.1.3
– Extensions for configuration and systems management: hardware setup, FFDC, heartbeat, PPM daemon

PS701 (8406-71Y) blade configuration matrix (FC = feature code):
– Processor, 3.0 GHz @ 150 W: qty 1 in each of Config 1, 2 and 3
– Processor activations (FC 8411 / FC 8412): 4 / 4 in each configuration
– Memory kits – 8 GB (2 x 4 GB, FC 8208) and 16 GB (2 x 8 GB, FC 8209):
  Config 1 (32 GB): 4 x 8 GB kits
  Config 2 (64 GB): 8 x 8 GB kits
  Config 3 (128 GB): 8 x 16 GB kits
– HDD 300 GB (FC 8274): qty 1 in each configuration
– QLogic 2-port 10 Gb Converged Network Adapter, CFFh (8406-8275, FC 8275): qty 1 in each configuration
– QLogic 8 Gb Fibre Channel Expansion Card, CIOv (8406-8242, FC 8242): qty 1 in each configuration
– PowerVM EE (FC 5228): qty 8 in each configuration
– Required software PIDs:
  PowerVM EE SW License PID 5765-PVE (0001): qty 8 in each configuration
  PowerVM EE SWMA PID – 1-year 5771-PVE (1991) or 3-year 5773-PVE (0999): choose qty 8 of 1-year or 3-year

Reference – ITSO Redpaper REDP-4655, IBM BladeCenter PS700, PS701, and PS702 Technical Overview and Introduction

Warranty and maintenance: a separate blade warranty is NOT required if the blade is in a zBX under IBM maintenance. zBX maintenance includes 24x7 on-site support for parts (including blades) and service during the 1-year System z warranty and subsequent post-warranty maintenance terms.

Page 23: Jak skutecznie wykorzystać zBX

23 © 2011 IBM Corporation

IBM zEnterprise zBX & Unified Resource Manager

IBM PowerVM

Page 24: Jak skutecznie wykorzystać zBX

24 © 2011 IBM Corporation

IBM zEnterprise zBX & Unified Resource Manager

IBM System x blades: IBM BladeCenter HX5 7873-A4x/A5x/A6x/A7x

Four supported configurations: client acquired, not configured or shipped by System z manufacturing

Processor chips
– Intel® Xeon® E7-2830 processor, Nehalem microarchitecture, Westmere-EX core (32 nm)

Memory DIMMs
– DDR3, 1333 MHz capable
– Operating frequency 1066 MHz
– 6.4 GT per second

Speed Burst Card, SSD Expansion Card, SSD internal disks (two 50 GB)
– Hypervisor storage controlled by Unified Resource Manager

10 GbE 2-port Expansion Card
– CFFh PCIe 2.0 x16 slot

QLogic 8 Gb FC Expansion Card
– CIOv PCIe 2.0 x4 slot, 2 ports

* The last character in the SSCT model is determined by geography. "U" is correct in the USA. It is not necessary to order extended warranty or enhanced service response. When a supported System x blade is installed in the zBX under IBM warranty or service, a System z SSR services the blade as part of the zBX. Typically 24 x 7, 2-hour response.

Configuration matrix (quantities per blade; FC = feature code; option/SBB part numbers as listed for SSCT):
– Blade base – HX5 (7873): qty 1 in every configuration
– Initial processor, 2.13 GHz 105 W (E7-2830 8C) – FC A16S, P/Ns 69Y3074 / 69Y3071: qty 1
– Additional processor, 2.13 GHz 105 W (E7-2830 8C) – FC A179, P/Ns 69Y3074 / 69Y3072: qty 1
– 2 Intel processors (sockets), 16 total cores, single-wide blade in every configuration
– Memory DIMMs, 8 GB and 16 GB 1333 MHz (FCs 2422 / A17Q; P/Ns 46C0599, 49Y1527, 46C0570, 46C0558):
  Config 0 (64 GB): 8 x 8 GB
  Config 1 (128 GB): 16 x 8 GB
  Config 2 (192 GB): 8 x 8 GB + 8 x 16 GB
  Config 3 (256 GB): 16 x 16 GB
  (4 / 8 / 12 / 16 GB per core for Config 0 / 1 / 2 / 3)
– Speed Burst Card – FC 1741, P/Ns 59Y5889 / 46M6843: qty 1
– SSD Expansion Card – FC 5765, P/Ns 46M6908 / 46M6906: qty 1
– 50 GB MLC SSD – FC 5428, P/Ns 43W7726 / 43W7727: qty 2
– No internal RAID – FC 9012: qty 1
– Broadcom 10 Gb Virtual Fabric adapter (CFFh) – FC 0099, P/Ns 46M6168 / 46M6170: qty 1
– QLogic 8 Gb Fibre Channel Expansion Card (CIOv) – FC 1462, P/Ns 44X1945 / 44X1946: qty 1

Config 0 = 7873-A4x*, Config 1 = 7873-A5x*, Config 2 = 7873-A6x*, Config 3 = 7873-A7x*

Link to the IBM Standalone Solutions Configuration Tool (SSCT):http://www-947.ibm.com/support/entry/portal/docdisplay?brand=5000008&lndocid=MIGR-62168

Page 25: Jak skutecznie wykorzystać zBX

25 © 2011 IBM Corporation

IBM zEnterprise zBX & Unified Resource Manager

IBM System x Blade Hypervisor and Operating Systems

zBX Integrated Hypervisor for IBM System x blades
– Linux Kernel-based Virtual Machine (KVM) technology
– License included in z196 or z114 FC 0042 ("Manage Firmware for a System x blade")
– Supports both Linux on System x and Windows on the same blade
– Includes certified Windows¹ VirtIO drivers with installation support using the Unified Resource Manager

Operating systems
– Separately acquired and licensed by the customer
  For licensing, System x blades are "2-socket servers". Consider the number of guests required.
– 64-bit Linux on System x
  Red Hat® Enterprise Linux (RHEL) 5.5, 5.6 and 6.0 – http://www.redhat.com/rhel/versions/rhel5/
  Novell® SUSE® Linux Enterprise Server (SLES) 10 (SP4) and SLES 11 (SP1) – http://www.suse.com/products/server/
– 64-bit Windows Server
  Windows Server 2008 R2 – Datacenter Edition, 64-bit version
  Windows Server 2008 (SP2) – Datacenter Edition, 64-bit version

Page 26: Jak skutecznie wykorzystać zBX

26 © 2011 IBM Corporation

IBM zEnterprise zBX & Unified Resource Manager

KVM-Based Virtualization

Page 27: Jak skutecznie wykorzystać zBX

27 © 2011 IBM Corporation

Discovery of Storage Resources

Discovery of storage resources
– Instructs hypervisors to scan the SAN for available storage resources (and paths)
– Lists available storage resources and paths
– Allows direct import, or creation of a Storage Access List for later import
– Benefit: avoids issues associated with manually adding storage resource information
– Supports all hypervisor types (x86 hypervisor, PowerVM, z/VM)
  • Note: for z/VM, discovery only works if the host WWPN is free
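In script form, the discovery flow above might look like the following sketch. The discover/import calls are placeholders standing in for whatever interface drives the Unified Resource Manager; only the "scan, review, then import now or save an access list" logic comes from the slide:

    def discover_and_import(api, hypervisor, import_now=True):
        """Scan the SAN via the hypervisor, then import resources or save an access list."""
        found = api.discover_storage(hypervisor)          # hypervisor scans the SAN for resources and paths
        for res in found:
            print(res["wwpn"], res["lun"], res["paths"])  # list what was discovered

        if import_now:
            api.import_storage(hypervisor, found)         # direct import
        else:
            api.create_storage_access_list(found)         # keep for later import
        return found

    # Reminder from the slide: with z/VM, discovery only works if the host WWPN is free.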

Page 28: Jak skutecznie wykorzystać zBX

28 © 2011 IBM Corporation

IBM zEnterprise zBX & Unified Resource Manager

Recommended external network access to the zBX

[Slide diagram: two entry points into the IEDN – (1) a router connection into the top-of-rack switch (switch connections are not permitted), with MAC filtering and VLAN & VMAC enforcement at the TOR; (2) an OSD connection attached to an ensemble member, with IP filtering in the z/OS (or Linux on System z) stack, which routes into the IEDN via an OSX CHPID.]

There are two ways to get to the IEDN from an outside network. They are not mutually exclusive:
1. Connect your external network router directly into an IEDN TOR switch on the zBX, using the ports designated for external connectivity.
2. Connect your external network into a zEnterprise OSD OSA port. Traffic would flow through this port to a zCPC operating system (z/OS® or Linux® on System z), which would then route the traffic into the IEDN via an OSX CHPID.

One is not necessarily better than the other in terms of availability. Both approaches must implement redundancy to provide high availability. Option (1) requires router redundancy; option (2) requires operating system and CPC redundancy.

Page 29: Jak skutecznie wykorzystać zBX

29 © 2011 IBM Corporation

IBM zEnterprise zBX & Unified Resource Manager

zEnterprise Ensemble with zBX: Availability Overview

To ensure maximum availability for workloads in an ensemble with zBX, configure multiple nodes with sufficient zBX capacity to accommodate the work of a failing zBX. This can be accomplished in two phases: develop a single node with a zBX, then deploy additional node(s).

1. zBX and CPC are designed and delivered with high levels of hardware and firmware redundancy, so losing either one is unlikely. Nevertheless, both are a single point of failure and must be redundantly configured on another ensemble node for high availability.

2. BladeCenters and blades also contain single points of failure, so virtual servers and disks needing high availability must be redundantly configured on failure-isolated components (e.g., backup blades in a different BladeCenter in a different frame or zBX). Because of eConfig tool constraints, extra blade entitlements may need to be purchased to protect from a BladeCenter failure. (Multiple sparsely populated BladeCenter chassis are not supported.)

3. High availability of virtual servers running in the zBX can be achieved through automated failover and recovery to/from redundant blades in the same or a different zBX.

4. High availability of external data can be achieved by means of synchronous mirroring, using capabilities like Metro Mirror. If the primary disk storage system fails, a copy of the virtual server containing the secondary storage definitions can be started automatically, using scripts, to access the mirrored disks. (A minimal scripting sketch follows.)
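Point 4 mentions starting the secondary-storage copy of a virtual server "using scripts" when the primary disk system fails. A minimal sketch of such an automation hook, with entirely hypothetical object and method names (the disk mirroring itself is handled by Metro Mirror, not by this script):

    def fail_over_to_secondary(api, primary_vs, secondary_vs):
        """Stop the VS defined on primary disks and start its copy defined on the mirrored disks."""
        if api.get_status(primary_vs) != "operating":   # primary storage (or server) is gone
            api.stop_vs(primary_vs, force=True)         # make sure the old definition is down
            api.start_vs(secondary_vs)                  # this copy points at the Metro Mirror targets
            print(secondary_vs, "started against the secondary disk system")
        else:
            print("primary virtual server still operating; no action taken")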

Page 30: Jak skutecznie wykorzystać zBX

30 © 2011 IBM Corporation

IBM zEnterprise zBX & Unified Resource Manager

High Availability Architecture: Phase 1, Start with a Single zBX

[Slide diagram: a z196 with a zBX (example: single rack with 2 BladeCenter chassis), redundant TOR switches, application server blades and DataPower blades, primary and alternate HMCs, z/OS and Linux on System z images, IP routers/firewalls/load balancers on the external network, and OSD/OSX connections into the IEDN.]

Blade and BladeCenter chassis configuration
– Blades are a SPoF. Install redundant capacity across multiple BladeCenter chassis.

Network configuration
– Deploy redundant load-balancing solutions in the external network for requests into the ensemble. Connect to redundant OSD OSA ports on the zEnterprise or to redundant ports in the TOR switches in the zBX, or both. Load-balancing solutions can receive workload insight from Unified Resource Manager APIs.

Storage and data configuration
– Follow normal best practices for HA SAN configuration, such as cabling the dual SAN switches in the zBX to two SAN fabrics with disk storage systems that have dual storage controllers.
– Provide application server blades with access to primary and secondary disk systems.
– Establish synchronous data replication with LVM, Metro Mirror and TPC-R management.

Virtual server configuration
– Follow normal best practices for clustering virtual servers across blades and chassis using standard clustering technologies.
– Configure shared access to the same primary disks and networks for each member of the cluster.
– Define a copy of the virtual server configuration with access to the disks on the secondary storage system, instead of the primary storage system. This virtual server configuration is to be used when the primary storage system fails.
– Plan to configure each blade with a redundant "migration partner" blade to which its virtual servers can be migrated if/when needed.

Page 31: Jak skutecznie wykorzystać zBX

31 © 2011 IBM Corporation

IBM zEnterprise zBX & Unified Resource Manager

Phase 2: Deploy at Least a Two-Node Ensemble, Each with Its Own zBX

[Slide diagram: an ensemble of two nodes – a z196 with zBX and a z196 or z114 with zBX – each with redundant TOR switches, application server blades, DataPower blades, z/OS and Linux on System z images, Sysplex Distributor, primary and alternate HMCs, and external load balancers/firewalls, all connected over the IEDN.]

Redundant hardware
– Removes the CPC and zBX as potential single points of failure
– Extend clustering approaches to span the multiple ensemble nodes

Page 32: Jak skutecznie wykorzystać zBX

32 © 2011 IBM Corporation

IBM zEnterprise zBX & Unified Resource Manager

Overview of Performance Management Architecture


Page 33: Jak skutecznie wykorzystać zBX

33 © 2011 IBM Corporation

IBM zEnterprise zBX & Unified Resource Manager

How does Ensemble SASP Load Balancing Work?

[Slide diagram: incoming requests arrive from the Internet at an external load balancer, which forwards them to HTTP servers running on virtual servers in the ensemble. Each virtual server runs a Guest Platform Management Provider (GPMP) that reports to the Unified Resource Manager on the HMC; the HMC communicates load-balancing recommendations to the load balancer over SASP.]


Page 34: Jak skutecznie wykorzystać zBX

34 © 2011 IBM Corporation

IBM zEnterprise zBX & Unified Resource Manager

Sample Conventional Design for Router Attachment / Sample IEDN Design for External Router Attachment to a zEnterprise Ensemble Node with a zBX
Each router connects to both TORs

[Slide diagram: on the left, a conventional (non-zBX) BladeCenter with switches #A and #B attached to routers #A and #B running VRRP or HSRP; on the right, the equivalent IEDN design, where routers #A and #B each connect to both TOR switch #A and TOR switch #B of a single zBX.]

Example: external router attachment to a single zBX

Page 35: Jak skutecznie wykorzystać zBX

35 © 2011 IBM Corporation

IBM zEnterprise zBX & Unified Resource Manager

Sample IEDN Design for External Router Attachment to Ensemble:

Each Router Connects to TOR A in one zBX and TOR B in a 2nd zBX

[Slide diagram: two zBXs, each with TOR switch #A and TOR switch #B; router #A connects to TOR #A of the first zBX and TOR #B of the second, while router #B connects to the complementary pair, with VRRP or HSRP between the routers and HTTP servers reachable over the IEDN.]

Example: external router attachment to multiple zBXs

Page 36: Jak skutecznie wykorzystać zBX

36 © 2011 IBM Corporation

IBM zEnterprise zBX & Unified Resource Manager

Redundant Fibre Channel Topology for IBM Blades in zBX

[Slide diagram: each blade's FC HBA ports connect through the two internal QLogic switches of the BladeCenter, out over the external ports to two separate SAN fabrics, and on to primary and secondary disk storage systems holding the data and FP LUNs, kept in sync with Metro Mirror or LVM mirroring.]

Blade expansion cards
– One 2-port FC HBA adapter per blade, 8 Gb QLogic

BladeCenter switches
– Two 8 Gb QLogic SAN switch modules per chassis
– Run in 'transparent' mode
– Connection to the external switch through up to 6 ports

SAN switches
– Two separate SANs
– The SAN switch connected to the zBX must be an NPIV-capable switch

Primary and secondary disk storage systems
– Each with dual storage controllers

Page 37: Jak skutecznie wykorzystać zBX

37 © 2011 IBM Corporation

IBM zEnterprise zBX & Unified Resource Manager

zBX Change Scenarios

Configuration Management / Serviceability Management / Upgrade Management
– Firmware update: BladeCenters, switches (TORs, ESMs), blades – applying firmware for zBX components
– CPC Enhanced Driver Maintenance

IBM zBX change policy: minimize disruption, maximize concurrency
1. zBX change scenarios are concurrent with all CPC operations.
2. zBX redundant hardware components are key to the zBX concurrency approach.
3. zBX change scenario algorithms prioritize concurrency over performance, e.g. updating one member of a pair of switches at a time, but in parallel with the others.
4. HMC/SE automated controls, including toleration (hardening/retry) support, orchestrate the concurrency operations.

Page 38: Jak skutecznie wykorzystać zBX

38 © 2011 IBM Corporation

IBM zEnterprise zBX & Unified Resource Manager

Configuration Scenarios:

1. Ensemble node components can be added concurrently
– Expand the ensemble node whenever the business requires it.
– Adding a zBX to a CPC is concurrent to CPC operations and running workloads.
– Adding a loose-piece MES to an existing zBX (racks, BladeCenter chassis, blades) is concurrent to CPC and existing zBX operations and running workloads.
– Updating blade-type high water marks (LICCC):
  Concurrent for increasing values.
  No formal support for decreasing values (RPQ only); any active blades exceeding the new high water mark total must be powered off.

2. Ensemble node components can be removed concurrently
– Be sure the component to be removed is not accessing ensemble resources.
– If the workload in the host CPC uses the IEDN in another node in the ensemble, the CPC must be cabled for IEDN access through the IEDN switch in a second zBX.

Page 39: Jak skutecznie wykorzystać zBX

39 © 2011 IBM Corporation

IBM zEnterprise zBX & Unified Resource Manager

Serviceability Scenarios

All but 2 zBX FRU replacements and repair actions are concurrent. For high availability, use clustering and automation to move all work to an alternate.*

1. BladeCenter chassis mid-plane
– Replacement of a BladeCenter midplane impacts workloads for all blades in that BladeCenter.
– All blades in other BladeCenters in the zBX are not impacted.

2. Blades
– Any blade FRU replacement will impact the zBX workload for that blade, but only that blade.
– FRU replacement containing key persistence data restores that data upon completion: firmware, serial number, any firmware customization data; networking information and other customization data for DataPower XI50z; virtualization blades' firmware customization data stored on the HMC/SE using the existing backup/restore infrastructure.
– No additional actions are needed to bring zBX workloads back online after repair.

* Alternatively, planned blade outages can be addressed through the use of static virtual server migration to move virtual servers across blade boundaries. This requires the virtual servers to be shut down prior to the move, and rebooted after the move. (IBM will never take a blade offline until work is quiesced.)

Page 40: Jak skutecznie wykorzystać zBX

40 © 2011 IBM Corporation

IBM zEnterprise zBX & Unified Resource Manager

Upgrade Scenarios: Applying Firmware for the zBX Components

1. All TOR and BladeCenter Switches firmware updates are concurrent to zBX workloads

– Switches updated in parallel, but only one of the redundant pair is updated at a time

– Approach allows no impact to the IEDN or INMN

2. All BladeCenter firmware updates are concurrent to zBX workloads

– Primary AMM is updated without impact to blade workloads

– Standby AMM is updated in background after Primary AMM update complete

Page 41: Jak skutecznie wykorzystać zBX

41 © 2011 IBM Corporation

IBM zEnterprise zBX & Unified Resource Manager

Upgrade Scenarios: Applying Firmware for the zBX Components

3. Impact of blade firmware updates on zBX workloads varies by blade type

– Each blade type has at least one firmware component update that disrupts workloads running on the blade. See Appendix

– It is possible to know in advance what is disruptive. See Screen shots in Appendix

– For non-disruptive updates (the Support Element (SE) determines whether updates are disruptive):
  If the blade is up, the updates are installed and activated.
  If the blade is down, the updates are queued. When the blade is powered back up, those updates are automatically installed during boot of the hypervisor.

– Disruptive updates are deferred to be installed until the service user logs on to the SE and runs the Manage zBX Internal Code Dialog and directs updates to specific blades.

Apply non-concurrent blade firmware updates in the least disruptive manner possible, e.g. one at a time, or a subset of all blades at a time, but not all blades at once. Group all non-concurrent blade firmware upgrades within a blade into one blade outage event. Quiesce or move work to another blade.

Page 42: Jak skutecznie wykorzystać zBX

42 © 2011 IBM Corporation

IBM zEnterprise zBX & Unified Resource Manager

Upgrade Scenarios: Applying CPC Driver Upgrades
Enhanced Driver Maintenance (concurrent driver upgrade) requires a zBX blade outage*

All CPC firmware upgrades are concurrent. All zBX firmware components change:
– zBX TOR switch firmware upgrades are concurrent
– BladeCenter firmware upgrade is concurrent
– Blade firmware upgrade is not concurrent and impacts running workloads (each blade type has at least one non-concurrent firmware component)

* Typically CPC and zBX firmware are shipped together in a single CPC driver.

Apply non-concurrent blade firmware updates in the least disruptive manner possible, e.g. one at a time, or a subset of all blades at a time, but not all blades at once. Group all non-concurrent blade firmware upgrades within a blade into one blade outage event. Quiesce or move work to another blade. (A rolling-update sketch follows.)
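The "one at a time, or a subset at a time" guidance amounts to a rolling update over small batches. A sketch of that scheduling idea in Python; the quiesce/update/resume hooks are placeholders (the actual updates are driven from the SE dialog described on the previous slide):

    def quiesce_or_move_work(blade):
        print("quiesce or migrate work off", blade)            # placeholder hook

    def apply_disruptive_updates(group):
        print("apply all non-concurrent firmware to", group)   # placeholder: one outage event per blade

    def resume_work(blade):
        print("resume work on", blade)                         # placeholder hook

    def rolling_firmware_update(blades, batch_size=1):
        """Update a subset of blades at a time, never all blades at once."""
        for i in range(0, len(blades), batch_size):
            group = blades[i:i + batch_size]
            for blade in group:
                quiesce_or_move_work(blade)
            apply_disruptive_updates(group)    # group every non-concurrent update into this one outage
            for blade in group:
                resume_work(blade)

    rolling_firmware_update(["B1.01", "B1.02", "B1.03", "B1.04"], batch_size=2)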

Page 43: Jak skutecznie wykorzystać zBX

43 © 2011 IBM Corporation

IBM zEnterprise zBX & Unified Resource Manager

Upgrade Scenarios: zBX Firmware Update Times

All blades are done in parallel, so there is little to no time added for multiples. Mixing blade types results in a total time equal to that of the blade type with the longest update time.

AMM: 20 – 30 min
– Primary AMM completed in 20 – 30 min
– Standby AMM done in the background, taking 20 – 30 min (standby AMM update time doesn't affect the update time total)

zBX switches: 1.5 hours
– Parallel update to all switches of each redundant pair

zBX blades: 1 hour 45 min
– DataPower: 6 min, or 30 min if the licensing file level changes (DataPower update times will increase when platform FW update support is added)
– System x blade: 1 hour
– Power blade: 1 hour 45 min
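Because blades update in parallel, the blade portion of a mixed update takes roughly as long as the slowest blade type present. A small worked example using the times quoted above; treating the AMM, switch and blade phases as back-to-back is an assumption made for this planning sketch:

    # Times from the slide, in minutes (planning arithmetic only)
    amm_primary = 30     # standby AMM runs in the background, so it is not added
    switches = 90        # 1.5 hours; each redundant pair is updated one member at a time
    blade_times = {"DataPower XI50z": 30, "System x": 60, "POWER7": 105}

    def estimated_window(blade_types_present):
        """Blades update in parallel, so the blade phase equals the slowest type present."""
        blade_phase = max(blade_times[t] for t in blade_types_present)
        return amm_primary + switches + blade_phase

    print(estimated_window(["System x"]))                                # 30 + 90 + 60  = 180 min
    print(estimated_window(["POWER7", "System x", "DataPower XI50z"]))   # 30 + 90 + 105 = 225 min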

Page 44: Jak skutecznie wykorzystać zBX

44 © 2011 IBM Corporation

IBM zEnterprise zBX & Unified Resource Manager

Questions?

No? Then let's start the live demo.

Page 45: Jak skutecznie wykorzystać zBX

45 © 2011 IBM Corporation

IBM zEnterprise zBX & Unified Resource Manager