QuickSpecs
Overview
HP TruCluster Server V5.1B-1
HP TruCluster Server Version 5.1B-1 for HP Tru64 UNIX Version 5.1B-1 provides highly available and scalable solutions for mission-critical computing
environments. TruCluster Server delivers powerful but easy-to-use UNIX clustering capabilities, allowing AlphaServer systems and storage devices to operate
as a single virtual system.
By combining the advantages of symmetric multiprocessing (SMP), distributed computing, and fault resilience, a cluster running TruCluster Server offers high
availability while providing scalability beyond the limits of a single system. On a single-system server, a hardware or software failure can severely disrupt a
client's access to critical services. In a TruCluster Server cluster, a hardware or software failure on one member system results in the other members providing
these services to clients.
TruCluster Server reduces the effort and complexity of cluster administration by extending single-system management capabilities to clusters. It provides a
clusterwide namespace for files and directories, including a single root file system that all cluster members share. A common cluster address (cluster alias) for
the Internet protocol suite (TCP/IP) makes the cluster appear as a single system to its network clients while load balancing client connections across member
systems.
A single system image allows a cluster to be managed more easily than distributed systems. TruCluster Server cluster members share a single root file system
and common system configuration files. Therefore, most management tasks need to be done only once for the entire cluster rather than repeatedly for each
cluster member. The cluster can be managed either locally from any of its members or remotely using Tru64 UNIX Web-based management tools. Tru64
UNIX and TruCluster Server software, and applications, are installed only once. Most network applications, such as the Apache Web server, need to be
configured only once in the cluster and can be managed more easily in a cluster than on distributed systems.
A choice of graphical, Web-based, or command-line user interfaces makes management tasks easier for the administrator, flexible for those with large
configurations, and streamlined for expert users.
TruCluster Server facilitates deployment of services that remain highly available even though they have no embedded knowledge that they are running in a
cluster. Applications can access their disk data from any cluster member. TruCluster Server also provides the support for components of distributed
applications to run in parallel, providing high availability while taking advantage of cluster-specific synchronization mechanisms and performance
optimizations.
TruCluster Server allows the processing components of an application to concurrently access raw devices or files, regardless of where the storage is located in
the cluster. Member-private storage and clusterwide shared storage are equally accessible to all cluster members. Using either standard UNIX file locks or the
distributed lock manager (DLM), an application can synchronize clusterwide access to shared resources, maintaining data integrity.
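As an illustration, because all cluster members share a single file namespace, an application can serialize clusterwide access to shared data with nothing more than standard fcntl() file locking. The following minimal C sketch shows the pattern; the lock-file path is illustrative only.

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        struct flock lk;
        int fd;

        /* The lock file lives in clusterwide shared storage; the
         * path is illustrative. */
        fd = open("/shared/app.lock", O_RDWR | O_CREAT, 0644);
        if (fd < 0) {
            perror("open");
            return EXIT_FAILURE;
        }

        lk.l_type = F_WRLCK;     /* exclusive (write) lock */
        lk.l_whence = SEEK_SET;
        lk.l_start = 0;
        lk.l_len = 0;            /* length 0 locks the whole file */

        /* Blocks until no process on any cluster member holds a
         * conflicting lock. */
        if (fcntl(fd, F_SETLKW, &lk) < 0) {
            perror("fcntl");
            return EXIT_FAILURE;
        }

        /* ... update the shared data ... */

        lk.l_type = F_UNLCK;     /* release the lock */
        (void)fcntl(fd, F_SETLK, &lk);
        (void)close(fd);
        return EXIT_SUCCESS;
    }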
TruCluster Server is an efficient and reliable platform for providing services to networked clients. To a client, the cluster appears to be a powerful single-server
system; a client is impacted minimally, if at all, by hardware and software failures in the cluster.
TruCluster Server simplifies the mechanisms of making applications highly available. A cluster application availability (CAA) facility records the
dependencies of, and transparently monitors the state of, registered applications. If a hardware or software failure prevents a system from running a service,
the failover mechanism automatically relocates the service to a viable system in the cluster, which maintains the availability of applications and data.
Administrators can manually relocate applications for load balancing or hardware maintenance.
TCP-based and UDP-based applications can also take advantage of the cluster alias subsystem. These applications, depending on their specific
characteristics, can run on a single cluster member or simultaneously on multiple members. The cluster alias subsystem routes client requests to any member
participating in that cluster alias. During normal operations, client connections are dynamically distributed among multiple service instances according to
administrator-provided metrics.
TruCluster Server supports a variety of hardware configurations that are cost-effective and meet performance needs and availability requirements. Hardware
configurations can include different types of systems and storage units, and can be set up to allow easy maintenance. In addition, administrators can set up
hardware configurations that allow the addition of a system or storage unit without shutting down the cluster.
For the fastest communication with the lowest latency, use the PCI-based Memory Channel cluster interconnect for communication between cluster members.
TruCluster Server Version 5.1B-1 also supports the use of 100 Mbps or 1000 Mbps Ethernet hardware as a private LAN cluster interconnect. The LAN
interconnect is suitable for clusters running failover-style, highly available applications with low-demand workloads, in which limited application data
is shared between the nodes over the cluster interconnect. Refer to the Cluster Technical Overview manual for a
discussion of the merits of each cluster interconnect.
Features - TruCluster Server V5.1B-1
Flexible Network Configuration
TruCluster Server offers flexible network configuration options. Cluster members do not need to have identical routing
configurations. An administrator can enable IP forwarding and configure a cluster member as a full-fledged router.
Administrators can use routing daemons such as gated or routed, or they can configure a cluster member to use only static
routing. When static routing is used, administrators can configure load balancing between multiple network interface cards
(NICs) on the same member. Whether gated, routed, or static routing is used, in the event of a NIC failure the cluster alias reroutes network traffic to
another member of the cluster. As long as the cluster interconnect is working, cluster alias traffic can get in or out of the cluster.
Cluster Application Availability Facility
The cluster application availability (CAA) facility delivers the ability to deploy highly available single instance applications
in a cluster by providing resource monitoring and application relocation, failover, and restart capabilities. CAA is used to
define which members can run a service, the criteria under which to relocate a service, and the location of an application-
specific action script. Monitored resources include network adapters, tape devices, media changers, and applications. CAA
allows services to manage and monitor resources by using entry points within their action scripts. Applications do not need
to be modified in any way to utilize CAA.
Administrators can request that CAA reevaluate the placement of registered applications within the cluster, either at a regularly scheduled time or on
demand with the caa_balance command. Balancing decisions are based on the standard CAA placement mechanisms. Similarly, administrators can
request that CAA schedule an automatic failback of a resource for a specific time. This allows an administrator to benefit from CAA automatically
moving a resource to the most-favored cluster member without the worry of the relocation occurring at a critical time. The caa_report utility can
provide a report of availability statistics for application resources. Administrators can redirect the output of CAA resource action scripts so that it is
visible during execution. Lastly, user-defined attributes can be added to a resource profile, and they will be available to the action script upon its
execution.
Rolling Upgrade
TruCluster Server allows a rolling upgrade from the previous version of the base operating system and TruCluster software to the next release of both,
and also allows patches to be rolled into the cluster. Updating the operating system and cluster software does not require a shutdown of the entire
cluster. A utility is provided to roll the cluster in a controlled and orderly fashion, and the upgrade procedure allows the status of the upgrade to be
monitored while it is in progress. Clients accessing services are not aware that a rolling upgrade is in progress.
To speed the process of upgrading the cluster, the administrator can use the parallel rolling upgrade procedure that
upgrades more than one cluster member at a time in qualifying configurations.
Administrators looking for a quicker alternative to a rolling upgrade when installing patches have the option of a patch
procedure that favors upgrade speed over cluster high availability. After the first member receives the patch, all remaining
members of the cluster receive the patch at the same time followed by rebooting the entire cluster as a single operation.
See the Cluster Installation manual for recommended and supported paths to upgrade or roll to the latest version of
TruCluster Server.
Cluster Management
The SysMan system management utilities provide a graphical view of the cluster configuration, and can be used to
determine the current state of availability and connectivity in the cluster. The administrator can invoke management tools
from SysMan, allowing the cluster to be managed locally or remotely.
Clusterwide signaling allows applications to send UNIX signals to processes operating on other members.
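Because the single-system image gives processes clusterwide-unique IDs, the standard kill() interface is sufficient to signal a process running on another member. A minimal C sketch (the target process ID is supplied on the command line):

    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>

    int main(int argc, char *argv[])
    {
        pid_t target;

        if (argc != 2) {
            fprintf(stderr, "usage: %s pid\n", argv[0]);
            return EXIT_FAILURE;
        }
        target = (pid_t)atol(argv[1]);

        /* The same call works whether the target process runs on
         * this member or on another one. */
        if (kill(target, SIGUSR1) < 0) {
            perror("kill");
            return EXIT_FAILURE;
        }
        return EXIT_SUCCESS;
    }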
Performance Management
The performance management capability of Tru64 UNIX has been modified from one large performance management tool (pmgr) to several smaller
and more versatile tools. The performance management tool suite consists of collect, collgui, and two Simple Network Management Protocol (SNMP)
agents (pmgrd and clu_mibs).
The collect tool gathers operating system and process data under Tru64 UNIX Versions 4.x and 5.x. Data collection can be limited to any subset of the
subsystems (Process, Memory, Disk, LSM Volumes, Network, CPU, File Systems) and the Header. Collect is designed for high reliability and low
system-resource overhead. Accompanying collect are two highly integrated tools: collgui, a graphical front end, and cfilt, which allows arbitrary
extraction of data from the output of collect to standard output. Collgui is a laborsaving tool that allows a user to quickly analyze collect data.
The Performance Manager metrics server (pmgrd) is a UNIX daemon process that provides general UNIX performance
metrics on request. The pmgrd metrics server supports the extensible SNMP agent mechanism (eSNMP).
Cluster MIB
TruCluster Server supports the HP Common Cluster MIB. HP Insight Manager uses this Cluster MIB to discover cluster
member relationships, and to provide a coherent view of clustered systems across supported platforms.
Highly Available NFS Server
When configured as an NFS server, a TruCluster Server cluster can provide highly available access to the file systems it
exports. No special cluster management operations are required to configure the cluster as a highly available NFS server. In
the event of a system failure, another cluster member will become the NFS server for the file system, transparent to external
NFS clients. NFS file locking is supported, as are both NFS V2 and V3 with UDP and TCP.
TruCluster Server allows NFS file systems to be served from the cluster through both the default cluster alias and alternate
aliases. Alternate cluster aliases can be defined to limit NFS server activity to those members that are actually connected to
the storage that contains the exported file systems. NFS clients can use this alternate alias when they mount the file systems
served by the cluster.
Fast File System Recovery
The Advanced File System (AdvFS) log-based file system provides higher availability and greater flexibility than traditional UNIX file systems. AdvFS
journaling protects file system integrity. TruCluster Server supports AdvFS for both read and write access.
An optional, separately licensed product, the Advanced File System Utilities, performs online file system management
functions. See the OPTIONAL SOFTWARE section of this document for more information on the AdvFS utilities.
Increased Data Integrity
Tru64 UNIX Logical Storage Manager (LSM) is a cluster-integrated, host-based solution to data storage management. In a
TruCluster Server cluster, LSM operations continue despite the loss of cluster members, as long as the cluster itself continues
operation and a physical path to the storage is available. LSM disk groups can be used simultaneously by all cluster
members and the LSM configuration can be managed from any cluster member.
Basic LSM functionality, including disk spanning and concatenation, is provided with the Tru64 UNIX operating system.
Extended functions, such as striping (RAID 0), mirroring (RAID 1), and online management, are available with a separate
license. Mirroring of LSM is RAID Advisory Board (RAB) certified for RAID Levels 0 and 1.
LSM is supported for use in a TruCluster Server cluster and supports any volume in a cluster, including swap and the cluster root, but excluding the
quorum disk and member boot disks. Hardware mirroring is supported for all volumes in a cluster without exception.
LSM RAID 5 volumes are not supported in clusters. See the OPTIONAL SOFTWARE section of this document for more
information on LSM.
Global Error Logger & Event Manager
TruCluster Server can log messages about events that occur in the TruCluster environment to one or more systems. Cluster
administrators can also receive notification through electronic mail when critical problems occur.
Cluster Storage I/O Failover
TruCluster Server provides two levels of protection in the event of storage interconnect failure. When configured with
redundant storage adapters, the storage interconnect will be highly available. Should one interconnect fail, traffic will
transparently fail over to the surviving adapter. When a member system is connected to shared storage through a single storage interconnect and that
interconnect fails, transactions are transparently performed via the cluster interconnect through another cluster member with a working storage
interconnect.
Cluster Client Network Failover
TruCluster Server supports highly available client network interfaces via the Tru64 UNIX redundant array of independent
network adapters (NetRAIN) feature.
Cluster Interconnect Failover
TruCluster Server allows the elimination of the cluster interconnect as a single point of failure by supporting redundant
cluster interconnect hardware. You can configure dual-rail Memory Channel, allowing the cluster to survive the failure of a
single rail. For LAN interconnect, two or more network adapters on each member are configured as a NetRAIN virtual
interface. When properly configured across two or more switches, the cluster will survive any LAN component failure. This
not only guards against rare network hardware failures, but also facilitates the upgrade and maintenance of the network
without disrupting the cluster.
Support for Parallelized Database Applications
TruCluster Server provides the software infrastructure to support parallelized database applications, such as Oracle 9i Real
Application Clusters (RAC) and Informix Extended Parallel Server (XPS) to achieve high performance and high availability.
9i RAC and XPS are offered and supported separately by Oracle Corporation and Informix Software, Inc., respectively.
Distributed Lock Manager
The distributed lock manager (DLM) synchronizes access to resources that are shared among cooperating processes
throughout the cluster. DLM provides a software library with an expansive set of lock modes that applications use to
implement complex resource-sharing policies. DLM provides services to notify a process owning a resource that it is
blocking another process requesting the resource. An application can also use DLM routines to efficiently coordinate the
application's activities within the cluster.
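The following C sketch outlines the typical acquire-update-release pattern. The calls are modeled on the TruCluster DLM library (dlm_nsjoin, dlm_lock, dlm_unlock), but the argument lists shown here are simplified assumptions; consult the DLM reference pages for the actual interface.

    #include <dlm.h>          /* DLM library header */
    #include <string.h>
    #include <sys/types.h>
    #include <unistd.h>

    int update_shared_resource(void)
    {
        dlm_nsp_t  ns;        /* namespace handle */
        dlm_lkid_t lkid;      /* lock identifier */

        /* Join a lock namespace shared by the cooperating processes.
         * Signature simplified; see the dlm_nsjoin reference page. */
        if (dlm_nsjoin(getuid(), &ns, DLM_USER) != DLM_SUCCESS)
            return -1;

        /* Request an exclusive-mode lock on a named resource; the call
         * blocks until no process on any member holds a conflicting
         * mode. Signature simplified; see the dlm_lock reference page. */
        if (dlm_lock(ns, (uchar_t *)"my_resource",
                     strlen("my_resource"), DLM_EXMODE, &lkid)
                != DLM_SUCCESS)
            return -1;

        /* ... modify the shared resource ... */

        /* Release the lock so blocked processes can proceed. */
        (void)dlm_unlock(lkid);
        return 0;
    }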
Support for Memory Channel API
TruCluster Server provides a special application programming interface (API) library for high-performance data delivery over
Memory Channel by giving access to Memory Channel data transfer and locking functions. This Memory Channel API
library enables highly optimized applications that require high-performance data delivery over the Memory Channel
interconnect. This library is supported solely for use with Memory Channel.
High performance within the cluster is achieved by providing user applications with direct access to the capabilities of the
Memory Channel. For example, a single store instruction on the sending host is sufficient for the data to become available
for reading in the memory of another host.
The Memory Channel API library allows a programmer to create and control access to regions of the clusterwide address
space by specifying UNIX style protections. Access to shared data can be synchronized using Memory Channel spin locks for
clusterwide locking.
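The following C sketch shows the characteristic allocate-attach-update pattern. The calls are modeled on the Memory Channel API library (imc_asalloc, imc_asattach, imc_lkacquire), but the argument lists and flag names shown here are simplified assumptions; see the Memory Channel API reference pages for the actual interface.

    #include <sys/imc.h>      /* Memory Channel API header */
    #include <sys/types.h>

    #define REGION_KEY  42        /* illustrative clusterwide key */
    #define REGION_SIZE 8192

    int publish_value(int value)
    {
        imc_asid_t region;        /* clusterwide region id */
        imc_lkid_t lock;          /* clusterwide spin lock id */
        volatile int *tx;         /* transmit mapping: stores go out */
        volatile int *rx;         /* receive mapping: reads see cluster data */

        /* Allocate (or look up) a region of clusterwide address space.
         * Signature simplified; see the imc_asalloc reference page. */
        if (imc_asalloc(REGION_KEY, REGION_SIZE, IMC_URW, 0, &region)
                != IMC_SUCCESS)
            return -1;

        /* Map the region twice: once for transmit, once for receive. */
        imc_asattach(region, IMC_TRANSMIT, IMC_SHARED, 0, (caddr_t *)&tx);
        imc_asattach(region, IMC_RECEIVE, IMC_SHARED, 0, (caddr_t *)&rx);

        /* Serialize writers with a clusterwide spin lock. */
        imc_lkalloc(REGION_KEY, 1, IMC_URW, IMC_CREATOR, &lock);
        imc_lkacquire(lock, 0, 0, IMC_LOCKWAIT);

        /* A single store through the transmit mapping becomes readable
         * in the memory of the other hosts. */
        tx[0] = value;

        imc_lkrelease(lock, 0);
        return 0;
    }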
The Memory Channel API library facilitates highly optimized implementations of Parallel Virtual Machine (PVM), Message
Passing Interface (MPI), and High Performance Fortran (HPF), providing seamless scalability from SMP systems to clusters of
SMP machines. This provides the programmer with comprehensive access to the current and emerging de facto standard
software development tools for parallel applications while supporting portability of existing applications without source
code changes.
NOTE: To users of the Memory Channel API V1.6 product on Tru64 UNIX Version 4.0*:
On Tru64 UNIX V5.*, the TruCluster Memory Channel Software V1.6 product is bundled as a feature of TruCluster Server
V5.*. To run the Memory Channel API library on Tru64 UNIX Version 5.1B-1, you must install a TruCluster Server license to
configure a valid TruCluster Server cluster.
NOTE: To users of the Memory Channel application programming interface with Memory Channel virtual hub (vhub)
configuration:
The Memory Channel API is not supported for data transfers larger than 8K bytes when loopback mode is enabled in two
member clusters configured with MC virtual hub. For more information on loopback mode go to
http://www.tru64unix.hp.com/docs/updates/TCR51A/TITLE.HTM and refer to the 21.Mar.2002 issue titled "MC API
Applications May Not Use Transfers Larger Than 8 KB with Loopback Mode Enabled on Clusters Utilizing Virtual Hubs."
Connection Manager
The connection manager is a distributed kernel component that ensures that cluster members communicate with each other
and enforces the rules of cluster membership. The connection manager forms a cluster, and adds and removes cluster
members. It tracks whether members in a cluster are active and maintains a cluster membership list that is consistent on all
cluster members.
Support for Fibre Channel Solutions
TruCluster Server supports the use of switched Fibre Channel storage and Fibre Channel arbitrated loop. Compared to
parallel SCSI storage, Fibre Channel provides superior performance, greater scalability, higher reliability and availability,
and better serviceability. Compared to parallel SCSI storage, Fibre Channel is easier to configure and its long distance
permits greater flexibility in configurations. Fibre Channel can be used for clusterwide shared storage, cluster file systems,
swap partitions, and boot disks.
Compared with a switched Fibre Channel topology, arbitrated loop offers a lower cost solution by trading off bandwidth,
and therefore some performance. Arbitrated loop is supported for two-member configurations only.
For more information on supported TruCluster Server configurations and specific cabling restrictions using Fibre Channel,
see the Cluster Hardware Configuration manual at the following URL:
Enhanced Security with Distributed Authentication
TruCluster Server supports the Enhanced Security option on all cluster members. This includes support for features for
enhanced login checks and password management. Audit and access control list (ACL) support can also be enabled
independently of the Enhanced Security option on cluster members.
Configuration
Software Requirements
TruCluster Server Version 5.1B-1 requires the Tru64 UNIX Version 5.1B-1 operating system. The Tru64 UNIX operating system
is a separately licensed product. See the Tru64 UNIX operating system QuickSpecs for more information.
TruCluster Server requires that additional software subsets be installed. See the TruCluster Server Cluster Installation manual
for more information.
Software Configuration Requirements
When configuring TruCluster Server software, an additional 64 MB of memory is required on each member system.
Each system requires a disk for a member boot disk and the cluster requires a minimum of one disk for clusterwide
root, /usr, and /var file systems. A quorum disk is optional. See the TruCluster Server Cluster Hardware
Configuration manual for details. Free disk space required for use (permanent): 62 MB to load TruCluster Server
software onto a Tru64 UNIX system disk.
These requirements refer to the disk space required on the system disk. The sizes are approximate; actual sizes may vary
depending on the system environment, configuration, and software options.
Growth Considerations
The minimum hardware and software requirements for any future version of this product may be different from the
requirements for the current version.
A rolling upgrade to the next version of the cluster software requires the following:
At least 50 percent free space in root (/), cluster_root#root
At least 50 percent free space in /usr, cluster_usr#usr
At least 50 percent free space in /var, cluster_var#var, plus an additional 425 MB to hold the subsets for the new
version of the Tru64 UNIX operating system
At least 50 percent free space in /usr/i18n, cluster_i18n#i18n when used.
Ordering Information
TruCluster Server V5.1B-1 licenses include:

Systems: AlphaServer 800, 1000A, 1200, DS10, DS10L, DS20, DS20E, DS25, ES47 server tower
  TruCluster Plus package*: QP-6R9AC-AA
  TruCluster Server License: QL-6BRAC-AA
  TruCluster Server Migration License**: QL-6J1AC-AA

Systems: AlphaServer ES40, ES45, ES47, TS20
  TruCluster Plus package*: QP-6R9AE-AA
  TruCluster Server License: QL-6BRAE-AA
  TruCluster Server Migration License**: QL-6J1AE-AA

Systems: AlphaServer 2000, 2100, 2100A, 4000, 4100, GS60E, GS80, ES80
  TruCluster Plus package*: QP-6R9AG-AA
  TruCluster Server License: QL-6BRAG-AA
  TruCluster Server Migration License**: QL-6J1AG-AA

Systems: AlphaServer 8200, 8400, GS60, GS140, GS160, GS320, GS1280
  TruCluster Plus package*: QP-6R9AQ-AA
  TruCluster Server License: QL-6BRAQ-AA
  TruCluster Server Migration License**: QL-6J1AQ-AA
Software Documentation: QA-6BRAA-GZ
*TruCluster Plus Software packages include licenses for TruCluster Server, Logical Storage Manager, and AdvFS Utilities.
**If you currently have TruCluster Available Server or TruCluster Production Server and want to convert to TruCluster Server, use the QL-6J1A*-AA migration
license.
Software Licensing
The HP TruCluster Server license provides the right to use the software as described in this QuickSpecs and is furnished
under HP's Standard Terms and Conditions. The version of HP TruCluster Server
described in this QuickSpecs qualifies as a minor version release. Licenses for prior versions must be updated to this version
either through the purchase of a Service Agreement that includes the rights-to-use new versions, or through the purchase of
Update Licenses. Each system in the TruCluster Server environment requires separate Tru64 UNIX and TruCluster Server
licenses. Hard partitions of ES80, GS80, GS160, GS320, and GS1280 AlphaServers can be clustered together either across
separate systems or within systems, and only one TruCluster license is required per system. For more information about the
HP licensing terms and policies, contact your local HP representative or reseller.
This product supports the Tru64 UNIX License Management Facility (LMF). License units for the TruCluster Server product are
allocated on an unlimited-system-use basis.
For more information on the License Management Facility, see the Tru64 UNIX Operating System QuickSpecs or the Tru64
UNIX operating system documentation.
Distribution Media
TruCluster Server is a separately licensed product and is distributed on the Tru64 UNIX Associated Products Volume 2
CD-ROM. The TruCluster Server documentation is available online at
Software Product Services
A variety of service options are available from HP. For more information, contact your local HP service representative.
Optional Software
HP Advanced Server for UNIX (ASU)
HP Advanced Server for UNIX (ASU) provides Windows networking services, such as file sharing, print sharing, and security
for Tru64 UNIX. In addition to basic file and print services, ASU provides full Windows domain controller support, support
for enterprise-wide trust relationships, and support for Windows security - including file permissions and Windows local and
global groups.
Additionally, you can manage users, file shares, and printers using native Windows administrative tools. When combined
with TruCluster Server software, ASU provides highly available and highly scalable file shares, print shares, and even
Primary Domain Controller resources to Windows clients. For more information on ASU, visit the HP Advanced Server for UNIX Web site.
Legato NetWorker
See the NetWorker Read This First letter for information on evaluating or purchasing a version of NetWorker that supports
TruCluster Server.
SANworks Data Replication Manager by HP
SANworks Data Replication Manager (DRM) is controller-based data replication software for disaster tolerance and data
movement solutions. DRM works with the new StorageWorks Fibre Channel MA8000/EMA12000 Storage Solutions by HP.
The RAID Array 8000 (RA8000) and Enterprise Storage Array 12000 (ESA12000) are also supported.
Multiple clusters or standalone systems can be connected using DRM to replicate application data. DRM within a single
cluster is supported only through Custom Special Systems' "Campus-Wide Disaster Tolerant Cluster" product offering.
For more information about DRM, see the SANworks Data Replication Manager QuickSpecs. For more information about
the Campus-Wide Disaster Tolerant Product, see
Advanced File System (AdvFS) Utilities
The Advanced File System (AdvFS) log-based file system provides flexibility, compatibility, high availability, and high
performance for files and filesets, up to 16 terabytes (TB). Administrators can add, remove, reconfigure, tune, and
defragment files - and back up storage - without unmounting the file system or halting the operating system. By supporting
multivolume file systems, AdvFS enables file-level striping to improve file transfer rates, and integrates with the functionality
provided by the Logical Storage Manager (LSM).
A graphical user interface simplifies management tasks, and utilities allow administrators to dynamically resize file
systems, balance loads, undelete files, and clone files for hot backup.
The AdvFS Utilities is a separately licensed software product for Tru64 UNIX. See the AdvFS Utilities Software Product
Description (SPD) 44.52 for more information.
Logical Storage Manager (LSM)
The Tru64 UNIX Logical Storage Manager (LSM) is an integrated, host-based solution to data storage management.
Concatenation, striping, mirroring, hot-sparing, and a graphical user interface allow data storage management functions
to be done online, without disrupting users or applications. LSM manages storage as a single entity in both cluster and
single node environments. LSM is a separately licensed software product for Tru64 UNIX. For more information, see the
Logical Storage Manager QuickSpecs.
StorageWorks Software
The StorageWorks Software package includes the licenses for Tru64 UNIX Logical Storage Manager and the Advanced File
System Utilities. The part number for the StorageWorks software package is QB-5RXA*-AA.
Service Tools - WEBES
Web-Based Enterprise Service (WEBES) tools integrate a high-availability system fault management architecture, the Distributed
Enterprise Service Tools Architecture (DESTA), with HP's architecture for distributed, Web-Based System Management. The
tool functionality contained in the WEBES kit includes the following: HP Analyze (symptom-directed hardware diagnosis
tool), HP Crash Analysis Tool [CCAT] (symptom-directed operating system software diagnosis tool), and Revision and
Configuration Management [RCM] tool (system configuration and revision data collection tool).
HP Analyze is a hardware diagnosis software tool that provides analysis for single errors or fault events at a rudimentary
level, as well as multiple event and complex analysis. HP Analyze provides automatic notification and isolation of
hardware components to quickly identify areas of the system that may be having problems. HP Analyze is the successor to
DECevent and supports the newer EV6-based systems. Refer to the release notes for the products that are supported.
CCAT is a software application tool that helps service engineers and system managers to analyze operating system crashes.
This tool collects data that describes system crashes and matches that data against a set of operating-system-specific rules.
The RCM tool collects system configuration and revision data information. The data is stored in the RCM Server at HP
Services and the server is then used to create detailed revision and configuration reports.
Oracle 9i RAC
Oracle 9i RAC technology is a relational database management system that capitalizes on the benefits of high availability,
performance, and expandability made possible by Tru64 UNIX clusters. Oracle 9i RAC must be ordered separately through
Oracle Corporation.
Informix Dynamic Server
Informix Dynamic Server with the Extended Parallel Option delivers high performance while supporting commercially
available cluster systems used for data warehousing and decision support applications. Informix Dynamic Server with the
Extended Parallel Option must be ordered through Informix Software, Inc.
Supported Systems and Cluster Interconnect Hardware Requirements
TruCluster Server Version 5.1B-1 supports the systems listed in the following table with up to eight systems in a configuration.
TruCluster Server Version 5.1B-1 supports the KZPSA-BB, KZPBA-CB, 3X-KZPBA-CC, 3X-KZPEA-DB, KGPSA-BC, KGPSA-CA, KGPSA-DA, and KGPSA-EA as
shared storage bus adapters, subject to the current maximum number of adapters and any other restrictions for a given system. TruCluster Server supports a
maximum of 62 shared buses per system in any combination. Information on firmware releases can be found at http://www.hp.com/support/ or on the
current Alpha systems firmware update CD-ROM.
Supported Systems
System | Shared Storage I/O Adapter | Cluster Interconnect(3)
AlphaServer 800 | KZPSA, KZPBA-CB/CC, KGPSA-BC/CA | MC1, 1.5 & 2; 100 Mbps & 1000 Mbps LAN
AlphaServer 1000A | KZPSA | MC1.5; 100 Mbps & 1000 Mbps LAN
AlphaServer 1200 | KZPSA, KZPBA-CB/CC, KGPSA-BC/CA | MC1, 1.5 & 2; 100 Mbps & 1000 Mbps LAN
AlphaServer 2000 | KZPSA | MC1, 1.5; 100 Mbps & 1000 Mbps LAN
AlphaServer 2100 | KZPSA | MC1, 1.5; 100 Mbps & 1000 Mbps LAN
AlphaServer 2100A | KZPSA | MC1, 1.5; 100 Mbps & 1000 Mbps LAN
AlphaServer 4000, 4100 | KZPSA, KZPBA-CB/CC, KGPSA-BC/CA | MC1, 1.5 & 2; 100 Mbps & 1000 Mbps LAN
AlphaServer 8200 | KZPSA, KZPBA-CB/CC, KGPSA-BC/CA | MC1, 1.5 & 2; 100 Mbps & 1000 Mbps LAN
AlphaServer 8400 | KZPSA, KZPBA-CB/CC, KGPSA-BC/CA | MC1, 1.5 & 2; 100 Mbps & 1000 Mbps LAN
AlphaServer DS10 | KZPBA-CB/CC, KGPSA-BC/CA/DA, KZPEA-DB(1) | MC2; 100 Mbps & 1000 Mbps LAN; DEGXA-SA/TA
AlphaServer DS10L | KZPBA-CB/CC, KGPSA-DA | 100 Mbps LAN; DEGXA-SA/TA
AlphaServer DS20 | KZPBA-CB/CC, KGPSA-BC/CA | MC2; 100 Mbps & 1000 Mbps LAN
AlphaServer DS20E | KZPBA-CB/CC, KGPSA-BC/CA/DA, KZPEA-DB(1) | MC2; 100 Mbps & 1000 Mbps LAN; DEGXA-SA/TA
AlphaServer DS20L | KZPBA-CB/CC, KGPSA-CA/DA/EA | 100 Mbps & 1000 Mbps LAN; DEGXA-SA/TA
AlphaServer DS25 | 3X-KZPBA-CC, KGPSA-CA/DA, KZPEA-DB(1) | MC2; 100 Mbps & 1000 Mbps LAN(2); DEGXA-SA/TA
AlphaServer TS20(4) | KZPBA-CB/CC, KGPSA-CA/DA | 100 Mbps & 1000 Mbps LAN
AlphaServer ES40 | KZPBA-CB/CC, KGPSA-BC/CA/DA, KZPEA-DB(1) | MC2; 100 Mbps & 1000 Mbps LAN; DEGXA-SA/TA
AlphaServer ES45 | KZPBA-CB/CC, KGPSA-CA/DA, KZPEA-DB(1) | MC2 (see notes); 100 Mbps & 1000 Mbps LAN; DEGXA-SA/TA
AlphaServer ES47 server tower | KZPBA-CC(5), KGPSA-DA, KZPEA-DB(1) | MC2; 100 Mbps LAN; DEGXA-SA/TA
AlphaServer ES47 | KZPBA-CC(5), KGPSA-DA, KZPEA-DB(1) | MC2; 100 Mbps LAN; DEGXA-SA/TA
AlphaServer ES80 | KZPBA-CC(5), KGPSA-DA/EA, KZPEA-DB(1) | MC2; 100 Mbps LAN; DEGXA-SA/TA
AlphaServer GS60, GS60E | KZPSA, KZPBA-CB/CC, KGPSA-BC/CA | MC1, 1.5 & 2; 100 Mbps & 1000 Mbps LAN
AlphaServer GS80 | KZPBA-CB/CC, KGPSA-CA/DA | MC2; 100 Mbps & 1000 Mbps LAN; DEGXA-SA/TA
AlphaServer GS140 | KZPSA, KZPBA-CB/CC, KGPSA-BC/CA | MC1, 1.5 & 2; 100 Mbps & 1000 Mbps LAN
AlphaServer GS160 | KZPBA-CB/CC, KGPSA-CA/DA | MC2; 100 Mbps & 1000 Mbps LAN; DEGXA-SA/TA
AlphaServer GS320 | KZPBA-CB/CC, KGPSA-CA/DA | MC2; 100 Mbps & 1000 Mbps LAN; DEGXA-SA/TA
AlphaServer GS1280 | KZPBA-CC(5), KGPSA-DA/EA, KZPEA-DB(1) | MC2; 100 Mbps LAN; DEGXA-SA/TA

Parenthesized numbers refer to the numbered notes below.
NOTES:
Hard partitions of GS80, GS160, and GS320 AlphaServers can be clustered together either across separate systems or within systems, but each hard
partition must have at least one cluster interconnect connection.
TruCluster Server Version 5.1B-1 requires all members to be connected with either all members using Memory Channel hardware or all members
using a private 100 Mbps or 1000 Mbps full-duplex LAN. Note that there are two variants of Memory Channel (MC).
ES45 models 1, 1B, 2, and 2B support single rail Memory Channel (MC) configured for either 512 MB or 128 MB. Dual rail MC is supported with
both Memory Channel adapters placed on the same PCI bus and jumpered to run at 128 MB.
ES45 models 3 and 3B can support dual rail Memory Channel (MC) with both rails configured either for 512 MB or 128 MB. Memory Channel
adapters must be placed on separate PCI busses when jumpered to run at 512 MB.
The use of the MC API is not supported in a two-member cluster containing an AlphaServer ES45 and configured with Memory Channel in virtual
hub mode. To use the MC API in a two-member cluster containing an AlphaServer ES45, a Memory Channel hub must be configured for each rail.
For further information on deploying an ES45 in a TruCluster, refer to the Cluster Hardware Configuration manual at
1. KZPEA-DB is supported for configuring a shared bus in a TruCluster with a maximum of two members per shared bus. Patch Kit 1 is required. Please
check the specific platform Supported Options Lists to see if a particular platform is supported.
2. The embedded Gigabit Ethernet adapter (10/100/1000Mbps) on the DS25 is supported by TruCluster Server as a LAN cluster interconnect if patch kit 2 is
installed.
3. 3X-DEGXA-SA/TA Gigabit Ethernet adapter is supported by TruCluster Server as a LAN cluster interconnect if patch kit 2 is installed. Please check the
specific platform Supported Options Lists to see if a particular platform is supported. http://h18002.www1.hp.com/alphaserver/products/options.html
4. TS20 is supported for two member LAN interconnected cluster only.
5. KZPBA-CC is supported on the ES47, ES80, and GS1280 for shared bus only with HSZ80 storage.
Note that there are two variants of Memory Channel (MC):
Supported Memory Channel Hardware

MC 1 & 1.5:
  CCMAA-AA or CCMAA-BA - PCI adapter
  CCMHA-AA - Hub
  CCMLA-AA - Line card
  BC12N-10 - 10-foot copper cable
  CCMFB-AA - Fiber optics converter

MC2:
  CCMAB-AA - PCI adapter
  CCMAB-BA - PCI adapter, 5.0 Volt/3.3 Volt compatible
  CCMHB-AA - Hub
  CCMLB-AA - Line card
  BN39B-04 - 4-meter cable
  BN39B-10 - 10-meter cable
  BN39B-01 (one meter) - Connects an MC adapter to a CCMFB optical converter
  CCMFB-BA - 5.0 Volt/3.3 Volt compatible fiber optics converter

Fiber-optic cables (connect one optical converter to another):
  BN34R-10 (10 meter)
  BN34R-31 (31 meter)
Memory Channel configuration notes:
At least one Memory Channel adapter must be installed in a PCI slot in each member system. One or more link
cables are required to connect systems to each other or to a hub. A cluster environment with two nodes does not
require a hub. A configuration of more than two members requires a Memory Channel hub.
MC1 and MC1.5 cannot be mixed on the same rail with MC2.
There are special rules about circumstances where Memory Channel 1 and Memory Channel 2 can be used together in the
same cluster. The TruCluster Server Hardware Configuration manual provides information regarding supported Memory
Channel configurations.
Supported LAN Interconnect Hardware
A dedicated LAN is supported for use as a cluster interconnect. A LAN interconnect must be private to cluster members.
As long as any packet that is transmitted by a cluster member's interconnect adapter can only be received by interconnect
adapters of other members of the same cluster, the interconnect meets the privacy requirement.
A LAN interconnect can be a direct connection between two cluster members or can employ hubs or switches. In general,
any Ethernet adapter, switch, or hub that works in a standard LAN at 100 Mbps or 1000 Mbps should work within a LAN
interconnect. (Adapters on combo cards such as the KZPCM, DEPVD, and the DEPVZ are not supported.) Check the
supported options list (at http://h18002.www1.hp.com/alphaserver/products/options.html) for the hardware platform in
question to verify if the DEGXA-SA/TA is supported for LAN interconnect. Fiber Distributed Data Interface (FDDI), ATM LAN
Emulation (LANE), and 10 Mbps Ethernet are not supported.
Although hubs and switches are interchangeable in most LAN interconnect configurations, switches are recommended for
performance and scalability. Most hubs run in half-duplex mode and do not detect network collisions, so their use in a LAN
interconnect may limit cluster performance. Overall, using a switch, rather than a hub, provides greater scalability for
clusters with three or more members.
Adapters and switch ports must be configured compatibly with respect to speed (100 Mbps or 1000 Mbps) and operational
mode (full-duplex). A maximum of three hops is allowed between cluster members, where a hop means passing from a
system, switch, hub, or router, to another system, switch, hub, or router. That is, any combination of up to two hubs,
switches, or routers is supported between two cluster members. You must not introduce unacceptable latencies by using, for
example, a satellite uplink or a wide area network (WAN) in the path between two components of a LAN interconnect.
A fully redundant LAN interconnect configuration employs two or more Ethernet adapters in a NetRAIN set on each
member, with redundant wiring to two or more switches interlinked by two crossover cables. These Ethernet switches must be
capable of one of the following mechanisms for managing traffic across parallel inter-switch links: link aggregation (also
known as port trunking), resilient links, or per-port-enabled spanning tree algorithm.
Storage Hardware Requirements
Supported Fibre Channel Hardware
TruCluster Server supports the following Fibre Channel hardware. For more information on the supported Fibre Channel
solutions, see the TruCluster Server Release Notes and the Cluster Hardware Configuration manual. For a list of supported
Fibre Channel switches, see the SAN Product Support Tables in the SAN Design Reference Guide at the following URL:
Device | Description
KGPSA-BC | PCI-to-Fibre Channel host adapter
KGPSA-CA | PCI-to-Fibre Channel host adapter
KGPSA-DA | PCI-to-Fibre Channel host adapter
KGPSA-EA (302784-B21, FCA2384) | 2 Gb PCI or PCI-X to Fibre Channel host bus adapter
HSG60 | Array controller
HSG80 | Array controller
HSV110 | Array controller
XP128/1024 | Array controller
XP512/48 | Array controller
MSA1000 (3R-A4328-AA, 201723-B22) | StorageWorks modular SAN array
HSG60 and HSG80 controllers may be contained in many cabinet configurations including MA6000, RA8000, MA8000, ESA12000, EMA12000, and
EMA16000. HSV110 controllers may be contained in many cabinet configurations. Any model of the Enterprise Virtual Array (EVA) product set based on the
HSV110 controller is supported.
For more information on the supported XP array hardware, see the XP/Tru64 Connectivity Streams at the following URLs:
Fibre Channel Arbitrated Loop Support
TruCluster Server supports Fibre Channel arbitrated loop for clusters with a maximum of two members only. The KGPSA-CA
is the only Fibre Channel adapter supported for arbitrated loop configurations. The DS-SWXHB-07 seven-port Fibre
Channel hub is required to build an arbitrated loop configuration and is restricted to use with DS10, DS20, DS20E, and
ES40 systems. The other AlphaServers including the DS10L and ES45 systems are not supported for arbitrated loop.
Supported SCSI Controllers
The systems are connected to shared SCSI buses using an adapter from the following list. The TruCluster Server Cluster
Hardware Configuration manual and the Release Notes provide more information regarding SCSI controller configuration:
KZPSA - PCI to Fast Wide Differential SCSI-2 adapter
KZPBA-CB/CC - PCI to UltraSCSI wide differential adapter
KZPEA-DB - LVD Multimode U3 PCI SCSI Adapter
NOTE: The KZPEA is supported only for a shared bus with one or two hosts. The TruCluster V5.1B-1 Initial Patch Kit (IPK) is required.
Supported SCSI Signal Converters
TruCluster Server supports the following SCSI signal converters. The Cluster Hardware Configuration manual provides
information regarding supported SCSI signal converter configurations.
Signal Converter | Description
DWZZA-AA | Standalone unit, single-ended/narrow to differential/narrow
DWZZA-VA | SBB, single-ended/narrow to differential/narrow
DWZZB-AA | Standalone unit, single-ended/wide to differential/wide
DWZZB-VW | SBB, single-ended/wide to differential/wide
DS-DWZZH-03 | UltraSCSI hub
DS-DWZZH-05 | UltraSCSI hub
3X-DWZCV-BA | HVD to LVD converter
Supported SCSI Cables
TruCluster Server supports the following SCSI cables:
Device | Description
BN21W-0B | SCSI-2 Cable "Y"
BN21R or BN23G | SCSI-2 Cable "A"
BN21K, BN21L, or 328215-00X | SCSI-3 Cable "P"
BN21M | 50-pin LD to 68-pin HD
BC06P or BC19J | 50-pin LD cable
BN38C, BN38D, or BN38E | VHDCI to HD68 cable
BN37A | Ultra VHDCI cable
BN37B | VHDCI to HD68 cable
BN21M | 50-pin LD to HD68 cable
199629-002 or 189636-002 | 50-pin HD to 68-pin HD
146745-003 or 146776-003 | 50-pin HD to 50-pin HD
189646-001 or 189646-002 | 68-pin HD
BN38E-0B | HD68 to VHDCI technology adapter cable
3X-BC56J-O2 | 6 FT VHDCI to VHDCI, U160
3X-BC56J-O3 | 12 FT VHDCI to VHDCI, U160
3X-BC56J-O4 | 24 FT VHDCI to VHDCI, U160
3X-BN55A-01* | U160 SCSI Cable "Y"
*NOTE: Part 3X-BN55A-01 consists of the VHDCI "Y" cable P/N 17-05144-01 and connector plug P/N 12-10015-01
NOTE: See the 3X-KZPEA Release Notes for a listing and description of the cables and terminators supported for the KZPEA in a shared bus configuration.
Hardware Requirements
Supported Terminators and Connectors
TruCluster Server supports the following terminators and connectors:
Device | Description
H879-AA or 330563-001 | HD68 terminator
H885-AA | HD68 tri-link connector
H8861-AA | VHDCI tri-link connector
H8863-AA | VHDCI terminator
H8574-A | 50-pin LD terminator
H8860-AA | 50-pin LD terminator
341102-001 | 50-pin HD terminator
152732-001 | VHDCI 68-pin LVD terminator
3X-H32CT-AA | LVD U160 terminator
Supported Disk Devices
Every SCSI and Fibre Channel storage disk currently sold by HP that appears in the supported options list for a supported
AlphaServer is supported for use in a cluster on a shared bus.
Some legacy disk devices are not supported for use on a shared bus. TruCluster Server supports the following list of disk
devices on shared storage. Any SCSI or Fibre Channel storage disk manufactured by HP later than the ones in the following
list, and appearing in the supported options list for a supported AlphaServer, is supported.
RZ26-VA Narrow
RZ26L-VA Narrow
RZ26L-VW Wide
RZ26N-VA Narrow
RZ26N-VW Wide
RZ28-VA Narrow
RZ28-VW Wide
RZ28D-VA Narrow
RZ28D-VW Wide
RZ28L-VA Narrow
RZ28L-VW Wide
RZ28M-VA Narrow
RZ28M-VW Wide
RZ29-VA Narrow
RZ29-VW Wide
RZ29B-VA Narrow
RZ29B-VW Wide
RZ29L-VA Narrow
RZ29L-VW Wide
RZ40-VA Narrow
RZ40-VW Wide
RZ40L-VA Narrow
RZ40L-VW Wide
RZ28B-VA Narrow
Data Routers and Network Storage Routers
The HP Fibre Channel Tape Controller, Modular Data Router (MDR), and Network Storage Router (NSR) are Fibre Channel-
to-SCSI bridges that allow a SCSI tape device to communicate with other devices on a Fibre Channel. Tapes and tape
libraries supported by the MDR or NSR are supported for use in a TruCluster when deployed on the MDR or NSR.
The following tape controllers and modular data routers are supported on shared storage:
340654-001 Fibre Channel Tape Controller
152975-001 Fibre Channel Tape Controller II
163082-B21 (3R-A2673-AA) Fibre Channel Modular Data Router (MDR)
163083-B21 (3R-A2774-AA) Fibre Channel Modular Data Router (MDR)
218240-B21 (3R-A3292-AA) Fibre Channel Modular Data Router (MDR)
218241-B21 (3R-A3312-AA) Fibre Channel Modular Data Router (MDR)
280823-B21 (3R-A3747-AA) Network Storage Router (N1200)(1)
262672-B21 (3R-A3746-AA) Network Storage Router (E1200)(1)
286694-B21 (3R-A3871-AA) Network Storage Router (E1200)(1)
262665-B21 (3R-A3935-AA) Network Storage Router (E2400)(1)
262664-B21 (3R-A3745-AA) Network Storage Router (E2400)(1)
262653-B21 (3R-A3740-AA) Network Storage Router (M2402)(1)
262654-B21 (3R-A3741-AA) Network Storage Router (M2402)(1)
(1) Note that TruCluster support for tape libraries with the N1200, E1200, E2400, and M2402 is restricted to DLT- and
SDLT-based libraries only, i.e., MSL5026SL/DLX, ESL9198SL/DLX, and ESL9326SL/DL/DLX.
For a list of tape automation devices supported by HP Nearline Storage, go to
http://h18006.www1.hp.com/products/storageworks/ebs/index.html to see the EBS Compatibility Matrix.
DA - 11444
Worldwide — Version 4 — December 8, 2003
Page 16
QuickSpecs
Hardware Requirements
HP TruCluster Server V5.1B-1
Supported Tape Devices and Media Changers
TruCluster Server supports the configuration of specific tape devices on a shared SCSI bus and on Fibre Channel. These
devices function properly in a multi-initiator environment; on a SCSI bus, however, they will be disrupted by bus resets that
occur during cluster membership change events. Backup software must be explicitly capable of handling and recovering
from such events, and must utilize the cluster application availability (CAA) facility to provide highly available backup.
The following tape devices are supported on shared storage:
TZ88 Tape Drive: Tabletop, TZ88N-TA; SBB, TZ88N-VA
TZ89 Tape Drive: Tabletop, DS-TZ89N-TA; SBB, DS-TZ89N-VW
TZ885 DLT MiniLibrary: Tabletop, TZ885-NT; Rackmount, TZ885-NE
TZ887 DLT MiniLibrary: Tabletop, TZ887-NT; Rackmount, TZ887-NE
HP 20/40-GB DLT Tape Drive: Tabletop, 340744-B21
HP 40/80-GB DLT Tape Drive: Tabletop, 146197-B22
TL891 DLT MiniLibrary (2-5-2 Part Numbers): Tabletop, DS-TL891-NT(1); Rackmount, DS-TL891-NE/NG(1,2)
TL891 DLT MiniLibrary (6-3 Part Numbers):
  Tabletop: TL891 with 1 DLT 35/70 drive, 120875-B21(3); TL891 with 2 DLT 35/70 drives, 120875-B22
  Rackmount: TL891 with 1 DLT 35/70 drive, 120876-B21(3); TL891 with 2 DLT 35/70 drives, 120876-B22;
    MiniLibrary Expansion Unit, 120877-B21(4); MiniLibrary Data Unit, 128670-B21
TL881 DLT MiniLibrary:
  Tabletop: TL881 with 1 DLT 20/40 drive, 128667-B21(5); TL881 with 2 DLT 20/40 drives, 128667-B22(5)
  Rackmount: TL881 with 1 DLT 20/40 drive, 128669-B21; TL881 with 2 DLT 20/40 drives, 128669-B22(4);
    MiniLibrary Expansion Unit, 120877-B21; MiniLibrary Data Unit, 128670-B21
TL893 Automated Tape Library: DS-TL893-BA
TL894 Automated Tape Library: DS-TL894-BA
TL895 DLT Automated Tape Library:
  Number of Drives | 2-5-2 Part Number | 6-3 Part Number
  2 | DS-TL895-H2 | 349350-B22
  3 | N/A | 349350-B23
  4 | N/A | 349350-B24
  5 | N/A | 349350-B25
  6 | N/A | 349350-B26
  7 | DS-TL895-BA | 349350-B27
  1 | DS-TL89X-UA(6) | 349351-B21(7)
TL896 DLT Automated Tape Library: DS-TL896-BA
ESL9326D Enterprise Library:
  Number of Drives | 6-3 Part Number
  0 | 146205-B21(8)
  6 | 146205-B23(8)
  8 | 146205-B24(8)
  10 | 146205-B25(8)
  12 | 146205-B26(8)
  14 | 146205-B27(8)
  16 | 146205-B28
1. Can be expanded to two TZ89N-VA drives with part number DS-TL892-UA.
2. A DS-TL890-NE/NG MiniLibrary Expansion Unit can be connected to up to three DS-TL891-NE/NG drive units to manage the drives and cartridges in all
connected units. A DS-TL800-AA pass through mechanism is required for the second and third DS-TL891-NE/NG.
3. Can be expanded to two DLT 35/70 drives with part number 120878-B21.
4. The MiniLibrary Expansion Unit can be used to control the drives and cartridges of up to five drive and data units. A MiniLibrary Pass-Through
Mechanism, part number 120880-B21, is needed for each additional unit beyond the first drive unit.
5. Can be expanded to two DLT 20/40 drives with part number 128671-B21.
6. An upgrade kit to add one DS-TZ89N-VA tape drive.
7. An upgrade kit to add one 35/70 DLT tape drive.
8. Can be upgraded by the addition of a single or multiple 35/70 DLT tape drives with part number 146209-B21.
NOTE: The TL881, TL891, TL893, TL894, TL895, TL896, HP 20/40-GB DLT Tape Drive, HP 40/80 GB DLT Tape Drive, and ESL9326D Enterprise Library are
supported with both the KZPSA and KZPBA-CB/CC adapters. All other SCSI tape device and media changer support is provided with KZPSA adapters only.
NOTE: The TL891 Mini Library, TL895 Automated Tape Library, and the ESL9326D Enterprise Library are also supported on a Fibre Channel storage bus
with the KGPSA-BC and KGPSA-CA adapter.
NOTE: The HP Fibre Channel Tape Controller and Modular Data Router are Fibre Channel-to-SCSI bridges that allow a SCSI tape device to communicate
with other devices on a Fibre Channel.
For a list of tape devices supported by Enterprise Backup Storage, see the Enterprise Backup Storage documentation.
Supported Storage Boxes
TruCluster Server supports the following storage boxes:
Storage Box | Description
BA350 | Single ended, narrow
BA356 | Single ended, wide
DS-BA356 | UltraSCSI, SBB shelf
DS-SSL14-RS | 4254 Storage Enclosure, dual bus, Ultra2
DS-SL13R-BA | 4354 Storage Enclosure, dual bus, Ultra3
3R-A4075-AA (302969-B21) | MSA30SB Single Bus (S/B) Rack Mountable
3R-A4076-AA (302970-B21) | MSA30DB Dual Bus (D/B) Rack Mountable
Supported Parallel SCSI Array (RAID) Controllers
TruCluster Server supports the following parallel SCSI RAID controllers on shared storage buses. The TruCluster Server
Release Notes provide information regarding supported array (RAID) controller firmware revisions.
RAID Controller
RAID Array 3000 (HSZ22)
HSZ80 Array Controller
NOTE: The following RAID controllers are no longer supported:
SWXRA-Z1 Array Controller (HSZ20)
HSZ40-Bx Array Controller
HSZ40-Cx Array Controller
HSZ50-Ax Array Controller
HSZ70 Array Controller
Network Adapters
For client networks, TruCluster Server supports any Ethernet, FDDI, ATM (LAN emulation mode only), or Gigabit Ethernet
adapter that is supported by the version of Tru64 UNIX on which the cluster is running.
Hardware Restrictions
TruCluster Server has the following hardware restrictions. The TruCluster Server Cluster Hardware Configuration manual
provides additional information regarding hardware restrictions.
Prestoserve NVRAM failover is not supported on shared disk devices.
TruCluster Server supports up to eight member cluster configurations as follows:
Switched Fibre Channel: Up to eight member systems may be connected to common storage over Fibre Channel in a
fabric (switch) configuration.
Parallel SCSI: Only four of the member systems may be connected to any one SCSI bus. Multiple SCSI buses may be
connected to different sets of members, and the sets of members may overlap. Use of a DS-DWZZH-05 UltraSCSI hub
with fair arbitration enabled is recommended when connecting four member systems to a common SCSI bus.
Hardware Configuration Examples
The TruCluster Server Hardware Configuration manual provides hardware configuration examples.
The TruCluster Server QuickSpecs is updated and corrected periodically to reflect new hardware options and platform
support. Please check online for the latest revision: go to http://www.hp.com/products/quickspecs/productbulletin.html,
select either "Worldwide QuickSpecs" or "U.S. QuickSpecs", and then navigate through "High Availability and
Clustering" to "Tru64 UNIX Clustering".
© Copyright 2003 Hewlett-Packard Development Company, L.P.
The information contained herein is subject to change without notice.
Windows is a trademark of Microsoft Corporation in the U.S. and/or other countries. UNIX and X/Open are trademarks of The Open Group in the U.S.
and/or other countries. Informix and Informix Extended Parallel Server are trademarks of Informix Software, Inc. NetWorker and Prestoserve are trademarks of
Legato Systems, Inc. Oracle is a trademark of the Oracle Corporation in the U.S. and/or other countries. NFS is a trademark of Sun Microsystems, Inc. All
other product names mentioned herein may be the trademarks of their respective companies.
The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein
should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.