Managing storage in the old mainframe days was a simple matter. Everything was stored in one place, under the watchful eyes of the operators.
But client/server computing and low-cost disks changed all that. PCs now come with more storage than most mainframes once had. Add to this all the capacity strewn across the enterprise's file and print servers, database servers, file storage systems and Web servers, and you end up with a vast jungle of capacity that is a nightmare to manage.
“Hard coupling of storage to the servers and applications results in low storage utilization, increases management complexity and continually drives up the cost for operational support,” says Robert Smalley, senior project specialist in the Bank of Montreal’s Mid-Range Services Department. “We needed to centralize and simplify the storage management.”
Planning for the Future
The Bank of Montreal, part of the BMO Financial Group, is Canada's oldest bank, with more than 33,000 employees and $247 billion in assets. Those employees, together with customers accessing account information over the Internet or through ATMs, are served by the company's 7,500 IT workers. BMO has three main data centers: two in Toronto, where the bank is headquartered, and a third in Chicago, home of its U.S. unit, Harris Bank.
Like most large enterprises, BMO runs several operating systems, and it historically used a different type of direct-attached disk for each. AIX servers used IBM's proprietary Serial Storage Architecture, which transfers data at up to 160 MBps. Sun servers used LVS RAID arrays. BMO's Windows NT/2000 servers used a variety of disks from IBM and Compaq.
Combined, the data centers held more than 43 terabytes of storage split among 300 servers. While that was adequate for current needs, the company anticipated capacity demand growing 20% to 30% per year. Providing for such expansion couldn't be done in the same haphazard fashion as in the past. Rather than expanding the existing mishmash of storage, BMO decided to take a strategic approach to the problem: a five-year, $20 million plan to consolidate and simplify storage.
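Compounding those growth figures over the plan's five-year horizon shows what the bank was planning for. A minimal sketch, using only the numbers from the article (43TB starting capacity, 20% to 30% annual growth):

```python
# Projecting BMO's storage needs from the figures in the article.
# The starting capacity (43 TB) and growth range (20-30% per year) come
# from the text; the five-year horizon matches the bank's five-year plan.

def project(capacity_tb: float, annual_growth: float, years: int) -> float:
    """Compound the starting capacity by the growth rate each year."""
    return capacity_tb * (1 + annual_growth) ** years

low = project(43, 0.20, 5)   # ~107 TB at 20% annual growth
high = project(43, 0.30, 5)  # ~160 TB at 30% annual growth
print(f"Projected five-year capacity: {low:.0f} to {high:.0f} TB")
```

Even at the low end, that is more than double the installed base, which is why piecemeal expansion of direct-attached disk was no longer viable.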
Dropping Direct Attached
That strategy consists of replacing the existing direct-attached storage with a series of Storage Area Networks (SANs). Each of the three major data centers receives its own SAN, with additional SAN islands deployed at the office towers and for different operating systems and business needs.
“Our strategy is to deploy solutions that support cross-platform resource sharing and measurable service and management improvements which provide cost-effective, highly available, ‘on demand’ networked-storage solutions, which facilitate automation of business continuity operations,” Smalley explains.
Rolling out the new equipment began in September, when the bank installed a 125TB STK 9840 tape storage system from Storage Technology Corp. of Louisville, Colo. To improve management of the data, it also deployed IBM's Tivoli Storage Manager on an IBM eServer pSeries 660, a rack-mounted midrange UNIX server.
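In a Tivoli Storage Manager setup like this, each backup client points at the central server through a stanza in its options file. The fragment below is a hypothetical illustration only; the server name, address, port, and node name are placeholders, not BMO's actual configuration:

```
* Hypothetical TSM client stanza (dsm.sys on a UNIX client).
* All names and addresses here are illustrative placeholders.
SErvername        TSM_PROD
   COMMMethod         TCPip
   TCPPort            1500
   TCPServeraddress   tsm.example.com
   NODename           aix-db01
   PASSWORDAccess     generate
```

Centralizing policy on the server this way, rather than configuring retention on each of 300 machines, is much of what "simplify the storage management" means in practice.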
But just as important as the storage equipment itself is the connection between the user and the files: any bottleneck along the line wastes even the highest-performance back-end. So BMO wanted to put in a 64-port director-class switch. It evaluated switches from several manufacturers, including the StorageTek SN6000 and McDATA Corp.'s (Broomfield, Colo.) ED-6064 Director, but wound up selecting INRANGE Technologies Corp.'s FC/9000 Fibre Channel Director. BMO was already using other INRANGE hardware, and the FC/9000 supports IBM's 17MBps fiber-optic channel, Enterprise Systems Connection (ESCON).
“We already had favorable past experience and familiarity with this technology since the bank was utilizing an INRANGE CD/9000 switch with the zOS mainframe,” says Smalley. “This implementation allowed BMO to utilize existing cabling and ESCON infrastructure.”
The main data center in Toronto received two of the FC/9000s so there wouldn't be a single point of failure. A third unit went to the other Toronto facility, which functions as a backup for the main data center. The backup data center will also add a second FC/9000 so both locations have full dual-fabric switching.
Once the tape drives and switches were in place, it was time to roll out the centralized storage equipment, starting with a 10TB IBM TotalStorage Enterprise Storage Server (ESS). The ESS scales up to 384 disks and a total capacity of 55.9TB. In addition to ESCON, it also connects via Fibre Channel, 2Gbit Fibre Channel/FICON and SCSI. It shares storage among devices running IBM's proprietary operating systems (OS/400, OS/390 and zOS) as well as Unix, Windows NT/2000 and Linux.
The Missing Piece
BMO will continue rolling out its centralized storage through 2006. Besides the new hardware, the bank is also reducing the number of UNIX flavors it supports, cutting down to Sun’s Solaris and IBM’s AIX. In addition, it is trimming the number of supported database environments to save on support and licensing costs.
The Fibre Channel infrastructure and disk sub-systems should all be in place by the end of this year. Connecting and migrating UNIX storage is in progress, scheduled for completion in 2005. Next year, BMO will start connecting and migrating the mainframe, midrange and Intel server storage.
Although project completion is still several years down the road, there are already noticeable results.
“We have increased connectivity to disk space so we almost have ‘storage on demand,’” Smalley relates, “and we have seen some performance throughput gains.”
There is one piece, however, that is missing from the equation — storage management. Hardware prices are down to a few pennies per megabyte, but managing and maintaining that storage runs an estimated six to 10 times the hardware costs. Smalley did use Tivoli Storage Manager for part of the project, but hasn’t found a product he feels would adequately manage the entire group of SANs.
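The article's cost ratio is worth working through. A minimal sketch, assuming an illustrative price of two cents per megabyte for "a few pennies" (the 6-to-10-times multiplier is from the text; the per-megabyte price is an assumption):

```python
# The management-cost math behind the article's claim: hardware at
# "a few pennies per megabyte," management at 6-10x the hardware cost.
# The 2-cents-per-MB price is an illustrative assumption, not a quote.

MB_PER_TB = 1_000_000  # decimal terabyte

def mgmt_cost_per_tb(price_per_mb: float, multiplier: float) -> float:
    """Estimated management cost per TB: multiplier times hardware cost."""
    hardware = price_per_mb * MB_PER_TB
    return hardware * multiplier

hw = 0.02 * MB_PER_TB  # $20,000 of raw disk per terabyte
print(f"Hardware:   ${hw:,.0f}/TB")
print(f"Management: ${mgmt_cost_per_tb(0.02, 6):,.0f} "
      f"to ${mgmt_cost_per_tb(0.02, 10):,.0f}/TB")
```

Under those assumptions, every terabyte of cheap disk carries six figures of management overhead, which explains why the missing storage-management layer matters more to BMO than the hardware price.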
“We need a ‘world class’ storage resource management tool for open systems disk storage,” he says, “but the software doesn’t yet have the necessary maturity to manage a structure such as ours.”