OH, YES, there are just loads of client/server systems out there running off the mainframe with all kinds of different stuff happening on separate hardware tiers, like, say, a bunch of client workstations hardwired into a database server, transaction-processing system, and a data-staging platform.
And as far as their physical structure is concerned–their locational architecture–you certainly could call that a C/S architecture.
But in such an architecture, it’s the arrangement of the hardware that has determined exactly where and what the software is running. Not a whole lot different from letting the mainframe architecture determine how and where you run your apps, eh?
Indeed, the logical structure of that two-tier C/S hardware arrangement is more likely than not still using some variation of the old mainframe “two-tier” data-passing scheme, says Marc Sokol, VP of product strategy at Computer Associates in Islandia, N.Y.
“In the olden days, you’d have a CICS/IDMS app running on the mainframe, and the client was the app running under CICS, and the server was the database system–IDMS–and that was in actual fact a client/server system,” says Sokol. “The only difference now is that we’ve broken those two out of the mainframe and placed them on separate hardware.”
About 90% of all the “client/server” applications in use today are of the two-tier, mainframe-based data-passing type, according to the Gartner Group of Stamford, Conn. That leaves only about 10% that are actually based on a three- or multitier logical model, says Roy Schulte, Gartner software management strategies VP. It may come as a surprise to the 90% of the nation’s IS shops that’ve been busting their humps over the past four years to move to C/S systems post haste that what they’ve actually been doing all this time is nothing more than replicating their old mainframe architecture–but so say the gurus at the Gartner Group.
All those old (and not so old) mainframe apps were developed just to maintain an airtight seam, a kind of “digital iron curtain,” between the user-access-driven business process logic and the database-management logic.
Says Schulte: “That seam between the data and the application has been in place for 30 years. And when everybody started setting up applications on the PC client, they left that seam in place.” Schulte describes that traditional two-tier data-passing architecture as one where two physically partitioned processes are running either in separate memory space on the mainframe or on physically separate servers and clients on the LAN. The mainframe-based logical structure is the same regardless of its physical location.
With a two-tier mainframe-type C/S system, how you partition the applications becomes a hardware/locational decision, not a business-function- or application-logic-driven decision. The problem with letting the hardware arrangement call the shots–that is, letting your physical architecture make your locational decision about the layers of an app even before the app has been written–is that you’re tying today’s, as well as any future, business processes to an admittedly short-term hardware arrangement.
Well, that’s been changing of late. Developers are starting to use new client/server development tools, distributed-object architectures, and new data and process-messaging and middleware technologies to shoot that digital curtain protecting the data from the app full of holes.
One big reason for this change comes from simple technical need–passing all that data or managing all those SQL calls across the network just ain’t too efficient after a certain long-past point. Those two-tier C/S apps have introduced not a little network latency into the world of client/server, as all those client apps sitting on all those desktops keep thumping on those database servers.
That’s why leading RDBMS vendors developed stored procedure technologies–to slim down the client applications, put more processing on the server, and reduce the amount of messaging and data moving around on the network.
The increasing use of stored procedures in two-tiered apps is actually a first step toward developing true three-tier, multitier, and even distributed-object C/S applications, says Schulte. It paves the way toward helping developers start to think about using the network more efficiently. The next step is to start building distributed applications that can reuse separate modules of an app. The stored procedures almost let you reuse code in your apps, but not across the board: You have to handcraft each one for each RDBMS or for each change in business logic.
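As a rough sketch of the traffic savings, the contrast looks something like this (using Python’s sqlite3, which has no stored procedures, so a single server-side aggregate query stands in for one; the table and figures are invented for illustration):

```python
# Sketch: why pushing work to the server cuts network chatter.
# sqlite3 has no stored procedures, so one aggregate query stands
# in for a server-side procedure; table and data are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, 10.0 * i) for i in range(1, 101)])

# Two-tier, "chatty" style: drag every row to the client, total it there.
total_chatty = sum(amount for (amount,) in
                   conn.execute("SELECT amount FROM orders"))

# Stored-procedure style: one round trip, the server does the work.
(total_server,) = conn.execute("SELECT SUM(amount) FROM orders").fetchone()

assert total_chatty == total_server
```

Same answer either way; the difference is a hundred rows on the wire versus one.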
The next step–the way to get to three-tier C/S–is to kick the stored procedures off the RDBMS and replace them with true cross-platform middleware tools like, for example, the remote procedure call (RPC) technologies embedded in DCE or Distributed OLE (recently dubbed Network OLE by Microsoft), or the objectware “stubs” found in the Object Request Broker (ORB) or NeXT’s Portable Distributed Objects (PDO).
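The stub idea those RPC and objectware technologies share fits in a few lines: the client calls an ordinary function, a stub marshals the call into a message, and a server-side skeleton unpacks and dispatches it. This is only a schematic sketch–the names are invented, and a plain function call stands in for the network transport that DCE or an ORB would actually supply:

```python
# Sketch of the RPC "stub" pattern middleware provides. All names
# are hypothetical; a local call stands in for the real transport.
import json

def server_skeleton(message, handlers):
    """Unpack a marshaled call and dispatch to the real procedure."""
    call = json.loads(message)
    return handlers[call["proc"]](*call["args"])

def make_stub(proc_name, transport):
    """Build a client-side stub that marshals calls into messages."""
    def stub(*args):
        return transport(json.dumps({"proc": proc_name, "args": list(args)}))
    return stub

# Server side: the actual business procedure.
handlers = {"get_balance": lambda account: {"1001": 250.0}.get(account, 0.0)}

# "Transport" is a local call here; middleware would make it remote.
transport = lambda msg: server_skeleton(msg, handlers)

get_balance = make_stub("get_balance", transport)
print(get_balance("1001"))  # 250.0
```

The client code never sees the message format, which is exactly the point: swap the transport and the same application logic runs across the network.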
Now developers have some real choices when it comes to creating multitier applications, thanks to these improved middleware tools and some new, easier-to-use development tools that can plug into all that middleware or that come with their own middleware solutions in the box.
And the middleware itself can now handle just about everything you might want to do across a network, be it LAN or Internet: everything from reading and writing disparate types or structures of data and RDBMSs, to coordinating transactions via net-aware TP monitors, to swimming through multiple network types and services, to linking together procedural and object-based applications, even to managing telephony and other communications links. All this stuff called “middleware” is getting pretty powerful and a lot easier to use.
IT MATTERS NOT
There are still a few gotchas, though. None of the middleware makers has agreed with any other on a single standard middleware API, for example. Even the Object Management Group–formed specifically to prevent the middleware madness that’s possible in a multiplatform client/server system–has yet to reconcile the differences among IBM’s and Apple’s OpenDoc and DSOM, Microsoft’s Network OLE and COM, NeXT’s PDO and Enterprise Objects, Sun’s ONC and DOE, and HP’s DOMF, to mention just a few.
But forget that for now. What matters to developers these days is that it soon will matter not what your physical architecture is. It will matter not where you locate the various components or partitions of your applications. And it will matter not how you break up and spread out your data. In fact, all these “it matters not” give developers one very big “it matters a lot.”
How much is a lot? In this emerging multitiered world, developers can build complex client/server apps based purely on business process, business needs, and end users’ requirements–as if the apps will be running on a single unified computer with a single memory space–and worry about how to partition the “single” application into its logical layers later. That puts the business process they are trying to build in the driver’s seat and puts the hardware architecture at the bottom of the list.
Says Sokol: “The developers’ view that a certain piece of the business logic should be built here and another piece built to run over there–that’s wrong. Placement should be done at deployment, not during development. Instead, you should think of the C/S system as just one big address space in one machine when you’re doing the development work.”
Then, when you finish building all the layers of the application, you can use location analysis to decide which layers should run on which hardware systems. (See sidebar, Federated’s Five Layers Make Client/Server Work.)
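A minimal sketch of that advice, with invented names: each layer is coded against a common interface, and only at deployment does anyone decide whether a given layer is reached in-process or through a proxy:

```python
# Sketch of Sokol's "one big address space" advice: code each layer
# against an interface; placement is a deployment-time decision.
# All class and method names here are hypothetical.

class PricingService:
    """Business logic, written with no idea where it will run."""
    def quote(self, qty):
        return round(qty * 9.99, 2)

class RemoteProxy:
    """Deployment-time stand-in with the same interface. Here it just
    delegates locally; middleware would carry the call over the wire."""
    def __init__(self, target):
        self._target = target
    def quote(self, qty):
        return self._target.quote(qty)

def checkout(pricing, qty):
    # The calling layer never knows which placement it got.
    return pricing.quote(qty)

local = PricingService()
remote = RemoteProxy(PricingService())
assert checkout(local, 3) == checkout(remote, 3) == 29.97
```

The `checkout` code is identical either way, which is what lets the placement decision wait until the location analysis is done.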
How does that work in practical terms? Take CA’s OpenRoad development system. Developers build applications in layers and use what CA calls “affinity logic” to help users make the final deployment decisions. “You’ve got to build the layers of the application independent of where they’re going to land,” says Sokol.
That means using a logical architecture and settling on its layered structure as late as possible in the process. And it means that the physical decisions about where to place the partitions shouldn’t even come into the picture until the very end of the development process–the reverse of the approach implied in the Gartner Group’s classical “Five Styles of Cooperative Computing” model.
It also means you’re counting a lot on the middleware to take the responsibility for making your partition-location decisions work. No wonder all of the big RDBMS and application development tool vendors have been working so hard to boost their middleware capabilities–from Sybase’s purchase of Micro Decisionware; to PowerBuilder’s strong links to DCE, ODBC, and other middleware; and even to NeXT’s decision to let OpenStep developers write applications that talk to a wide range of object-oriented middleware, including OLE 2.0, DOE, and ORB.
Says David Guzman of Federated Department Stores’ Federated Systems IS division:
“Since we’re trying to put as much on the network as possible, middleware is critical to what we’re building.”
Federated selected a soup-to-nuts multitier client/server development system, Seer Technologies’ High Performance System (HPS), which ships with its own middleware package, dubbed NetEssential. “By far the heart and soul of our multitier apps is the NetEssential middleware piece,” says Guzman. “It falls in line with what we’re trying to achieve with our new multitier architecture–to build business apps for Federated and not waste our time building the disparate technology needed to make it work together.”
So, when you see software vendors touting their “three-tier client/server” architectures, or if you read marketing material that cites Gartner’s five-part model as the basis for a particular enterprisewide C/S application, think again. Because you can get just as locked into a particular three-tier hardware structure as you might have been with the traditional slave-host model of the mainframe era.
But if you start right off with Gartner’s new model–an architecture based on the logical relationships among the modules of a client/server system, and not their physical relationships–that won’t happen.
In particular, if it’s C/S development tools you’re thinking about, find out if the tool actually does anything to help you decide how the software should talk to itself–what the best messaging and communication routines might be to implement a system, what middleware it uses, and what standards it meets, or even if it can help you figure out how best to partition the logical components of that system. ‘Cause it’s the tools you use that will determine what kind of architecture you’ll actually end up with when all the work is done. And if your tools aren’t flexible enough to support a wide range of C/S implementations, you may well live to regret it. //
Illustration by August Stein
Federated’s Five Layers Make Client/Server Work
When you start trying to develop complex C/S apps, you first have to develop a series of tactics to keep it all under control. David Guzman, information systems manager at the Federated Systems Group, a wholly owned subsidiary of Federated Department Stores in Atlanta, developed the following five layering standards for developers to follow as part of the multitier development process.
Says Guzman, “It’s really important that the application itself is designed from the start to take advantage of the multitier architecture. To do that, you’ve got to develop a set of application-layering standards. In our case, we have five layers in our application architecture.”
First, Federated separates out the application presentation layer–where they develop components such as smooth-scrolling logic and data-formatting logic.
Second is the event coordination layer–where they put the basic if-then-else conditional logic and case state logic–the event-driven logic that checks which buttons on the GUI a user selects and then decides what actions to take as the result of that selection.
Third is the application process layer. That’s where all the hard-core business logic takes place. When a certain event or case takes place in the event coordination layer, the process layer decides exactly what code to call from a business perspective.
Fourth comes the data abstraction layer. This is the logical view that the process logic gets of the data attached to the application.
And fifth is the physical data access layer–where the whole application is tied to a particular DBMS.
“During this whole process,” says Guzman, “you can’t concern yourself with the physical partitioning of the application or with its location, network-operating system, or even its overall application environment. You’ve just got to focus on the application logic itself.”
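A toy rendering of those five layers might look like the following–one class per layer, wired top to bottom. The class names follow Federated’s layering, but the wiring and the pricing example are illustrative, not Seer HPS code:

```python
# Illustrative five-layer stack, one class per Federated layer.
# Data, prices, and the 10% discount are invented for the example.

class PhysicalDataAccess:          # layer 5: tied to a specific DBMS
    def fetch(self, key):
        return {"sku-42": 20.00}.get(key)

class DataAbstraction:             # layer 4: logical view of the data
    def __init__(self, dbms):
        self.dbms = dbms
    def price_of(self, sku):
        return self.dbms.fetch(sku)

class ApplicationProcess:          # layer 3: hard-core business logic
    def __init__(self, data):
        self.data = data
    def discounted_price(self, sku):
        return round(self.data.price_of(sku) * 0.9, 2)

class EventCoordination:           # layer 2: event-driven if-then-else
    def __init__(self, process):
        self.process = process
    def on_button(self, button, sku):
        if button == "price":
            return self.process.discounted_price(sku)
        return None

class Presentation:                # layer 1: formatting for display
    def __init__(self, events):
        self.events = events
    def show_price(self, sku):
        return "$%.2f" % self.events.on_button("price", sku)

app = Presentation(EventCoordination(ApplicationProcess(
          DataAbstraction(PhysicalDataAccess()))))
print(app.show_price("sku-42"))  # $18.00
```

Because each layer talks only to the one below it, any seam in this chain is a candidate partition point when deployment time comes.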
Today, Multitier–Tomorrow, Distributed Objects
The coming wave of client/server technology will use objects distributed across multiple computers on the network. Conceptually, distributed-object C/S systems are nothing more than multitier systems broken down into many, many very small components, all of which can be mixed and matched (in the best systems) to allow the creation of custom applications from common modular parts.
In effect, distributed objects are a combination of several multitier-computing systems in which both the client and the server application logic consist of many finer-grained modules than today’s all-too-often monolithic business-logic layer.
Take a look at a typical “fat client” application. The code runs to a full megabyte or more on the client side alone, and you can expect applications of at least the same size running on the server side.
But with distributed-object computing, you’re talking about 10 or 15 modules of only 100KB each running on the desktop and eight or nine modules of another 100KB each running on the server. At that point, you’ll likely stop calling these mini-apps “tiers” and start calling them objects or components.
One major benefit of moving to a distributed-object architecture in large, dispersed C/S systems is that it makes using distributed database systems a whole lot easier, says Gartner Group software management strategies VP Roy Schulte.
“Five years ago, people thought two-phase commit would make it possible to have all distributed databases with a single logical database structure. Now we know that’s not going to happen. The issue with distributed data management is how to manage distributed databases–they’re just too hard to design, manage, and maintain.” Too hard, and mostly not happening.
Instead, newer technologies–replication, publish and subscribe, database middleware, remote procedure calls, and especially distributed objects–are making it easy to use dispersed databases in a logical structure that looks and acts a whole lot like a distributed database system.
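The publish-and-subscribe piece of that list can be sketched in a few lines: a master copy publishes each change, and subscribed replicas apply it, so dispersed copies stay in step without a true distributed database. The names here are illustrative:

```python
# Minimal publish-and-subscribe replication sketch: one "database"
# publishes its changes and replicas apply them. All names invented.

class Publisher:
    def __init__(self):
        self.subscribers = []
        self.data = {}
    def subscribe(self, replica):
        self.subscribers.append(replica)
    def write(self, key, value):
        self.data[key] = value
        for replica in self.subscribers:   # push each change out
            replica.apply(key, value)

class Replica:
    def __init__(self):
        self.data = {}
    def apply(self, key, value):
        self.data[key] = value

master = Publisher()
east, west = Replica(), Replica()
master.subscribe(east)
master.subscribe(west)

master.write("inventory:sku-42", 118)
assert east.data == west.data == {"inventory:sku-42": 118}
```

Readers anywhere see a local copy that looks and acts like one logical database, with none of the design and maintenance burden of the real thing.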
Once you get to multitier computing and to distributed objects, then you can get at distributed data much more easily.