


27

Windows NT Server as a Database Server

No one I know of has ever accused Microsoft of lacking ambition. The Windows NT Server product is certainly no exception to this rule. Not only did the design team intentionally take on industry-leader Novell in the file server and print server markets, but they targeted the client/server database server market as well. About the only computer market they aren't in with NT Server is the "dumb" terminal market of the mainframes (although you can get Telnet servers that enable NT Servers to do some of this type of work). This chapter addresses the use of NT Servers to operate database management systems that are accessed by remote clients.

This is somewhat of a challenge. There are entire books written about the various database management systems you can purchase for Windows NT. These books cover topics such as database administration, application development, and even specialized topics such as database tuning. Obviously, I cannot cover all these topics in one chapter. Instead, I focus on the interaction between the more common database management systems (Oracle, Microsoft SQL Server, and Microsoft Access's Jet) and the Windows NT system. The chapter is divided into the following parts: first impressions of databases under Windows NT; types of database management systems; interactions between the operating system and the database management system; Oracle under Windows NT; Microsoft SQL Server under Windows NT; the Microsoft Access Jet database engine under Windows NT; and a quick discussion of ODBC and OLE.

First Impressions of Databases Under Windows NT

My first exposure to Windows NT was when I was asked to serve as database administrator for an Oracle Workgroup Server database. The goal of this project was to store a fair amount of data within this database and to use some of the more advanced (and demanding) features of the database, such as storing audio in the database. I was more than a bit skeptical, coming from the large UNIX server environment. I was concerned that even if Microsoft had done a good job on the operating system, the limited processing and data transfer capabilities of a PC-based server would quickly overload the system and cause miserable response times.

The good news is that I was actually quite surprised. Even though the Windows NT product had not been out on the market very long, and its main focus at the time was combating NetWare as a file and print server, I was able to get more than adequate response times in my development environment. Don't get me wrong: I wouldn't try to put a multi-terabyte database on a Pentium PC server (although it might be interesting to see a DEC Alpha running NT trying this out). However, most databases are just not that big. Also, with distributed database and database replication technologies coming into their own, you can often design a series of smaller databases that accomplish the same functions as a single large database in traditional environments.

When you work on an NT Server for such functions as word processing, you will notice that it seems very slow when compared with the NT Workstation product on the same computer (or even a smaller computer with less memory). This is true even if you are the only one in the building and no other jobs are running on the computer. This is actually an intentional design feature that distinguishes the server version of NT from the workstation version. The server version is tuned to respond to background server processes (services) that are designed to service other users who might want some of your server's computing capacity. The person at the server's keyboard gets leftover CPU cycles. This is a key feature that enables NT Server to function as a database server. Almost all the big, multi-user database management systems use a series of background processes to store and retrieve information from the database and perform all the necessary housekeeping functions. Therefore, when running a database management system, you need to have a system that gives these background processes the time they need to keep information flowing between the database and the users. Keep this in mind if your boss ever proposes that you can use the server as your personal PC to save money.

Types of Database Management Systems

I bet that title scared some of you. Fear of a lengthy discussion about the relative merits of the hierarchical model versus the relational model flashed through your head. What this section is actually devoted to is a discussion of the alternative architectures from the operating system's point of view. Database management systems are among the most challenging applications I have run across when it comes to using operating system resources. They always place heavy demands on memory and, depending on the type of processing, they often stress the input/output subsystems.

All too often, when a database server is performing poorly, the system administrator blames the database, and the database administrator blames the server and operating system. Of course, everybody blames the programmers who wrote the applications. With a little knowledge in the hands of the system administrators, many of these problems could be avoided. This section starts this discussion with a presentation of the various architectural alternatives for database management systems on NT servers that you are likely to come across. These include the following:

The first industry buzzword I throw at you in this section is client/server computing. Quite simply, this term refers to applications that divide their processing between two or more computers. This is a simplified definition, but it is good enough for what I am working with here. The contrast to client/server computing is host/terminal computing. Figure 27.1 illustrates these two architectures.

Figure 27.1

Client/server versus host/terminal computing.

It is often easiest to start with the host/terminal computing architecture. It was the first computer user access architecture to evolve after the days of punched cards. It involves hooking a controller to the big central computer that can take input from a number of terminals. These terminals are usually referred to as "dumb" terminals because they have very limited memory and almost no capacity to process information. They merely display information in the format the central computer has given them.

The alternative, client/server computing, splits the work between several computers, each of which has some degree of "smarts." This can be thought of as a divide and conquer approach for computer systems. The user sits at a workstation that performs most of the display functions and possibly some of the data analysis. The server is assigned the duties of data access and transmission.

Once you have grasped the basic concepts of client/server computing, I have another wrinkle to throw at you. There are actually multiple flavors of client/server computing. The differences lie primarily in how the labor is divided between the client and the server. Although the client is typically assigned the display functions and the server the database access, the question comes up as to where the business processing functions are performed. Although there are many minor variations between different vendor products, the key concepts to grasp are the two-tier and three-tier client/server architectures. There are books devoted to the intricate details of client/server architectures if you are interested; however, this is a Windows NT book, so you can get away with just understanding the basics. The two-tier and three-tier architectures are illustrated in Figure 27.2.

Figure 27.2

Two-tier and three-tier client/server architectures.

The key point to pick up here is that the two-tier architecture splits business processing between the user's client workstations and the database server. The machines might become overloaded as you try to implement a large number of users running complex business processes. Remember, your poor NT Server might be asked to support dozens or hundreds of users at a given time. Also, many of your users might still have older client workstations (such as 80386-based PCs). You might also need a large degree of coordination at a central site between users of certain applications. This is where another server whose sole function is to implement business processing logic comes into play. It interfaces with clients and the database to coordinate the business activities and off-load the other processors.

Now to tie these concepts to the Windows NT environment (yes, this is still an NT book). Most of the Windows NT installations I have seen operate in a client/server mode. Although you can set up terminal functions under NT, it is really designed to use the built-in networking functions (NetBEUI, TCP/IP, and IPX/SPX, which are described in more detail in Chapter 7) to enable free communication between client processes and server processes. You might have an NT workstation running your front-end database applications (the ones that the users actually see) and an NT Server running the database management system. The business logic can be split between the client and server as the developers see fit or even off-loaded to an applications server (which could also be running NT). In a sense, Microsoft designed NT around the same framework that Sun Microsystems made so popular in the UNIX world: that of the computer actually being a collection of computing devices located across a network that come together to serve the user's needs. This is actually fairly easy once you have a network of devices that can communicate easily with one another, which I have found in the NT, Windows 95, and UNIX environments. I like to imagine this environment as something like that presented in Figure 27.3; a minimal sketch of a client process talking to a server process appears after the figure.

Figure 27.3

Networked computers forming a virtual computer.
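To make this a bit more concrete, here is a minimal sketch of the client half of such a conversation, written against the Winsock API that NT's TCP/IP stack exposes. The server address, port number, and request string are all placeholders I made up for the example; a real front-end application would normally go through a database driver rather than raw sockets, but the underlying flow (connect, send a request, wait for the reply) is the same.

```c
/* Minimal Winsock client sketch: a client process sends a request string to a
 * server process over TCP/IP and prints the reply. The host address (10.0.0.5),
 * the port (1521), and the request are placeholders. Link with wsock32.lib.
 */
#include <stdio.h>
#include <string.h>
#include <winsock.h>

int main(void)
{
    WSADATA wsa;
    SOCKET s;
    struct sockaddr_in server;
    char reply[512];
    int bytes;

    if (WSAStartup(MAKEWORD(1, 1), &wsa) != 0) {
        printf("WSAStartup failed\n");
        return 1;
    }

    s = socket(AF_INET, SOCK_STREAM, 0);
    if (s == INVALID_SOCKET) {
        printf("socket() failed\n");
        WSACleanup();
        return 1;
    }

    memset(&server, 0, sizeof(server));
    server.sin_family = AF_INET;
    server.sin_addr.s_addr = inet_addr("10.0.0.5");  /* hypothetical server address */
    server.sin_port = htons(1521);                   /* hypothetical listener port  */

    if (connect(s, (struct sockaddr *)&server, sizeof(server)) == SOCKET_ERROR) {
        printf("connect() failed\n");
        closesocket(s);
        WSACleanup();
        return 1;
    }

    send(s, "HELLO\r\n", 7, 0);                      /* client-side request  */
    bytes = recv(s, reply, sizeof(reply) - 1, 0);    /* server-side response */
    if (bytes > 0) {
        reply[bytes] = '\0';
        printf("Server said: %s\n", reply);
    }

    closesocket(s);
    WSACleanup();
    return 0;
}
```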

Now that you have the basics out of the way, I will introduce a few of the concepts that are creeping into real systems today. The basic theory presented earlier works well (divide and conquer to get the job done using client/server architectures). However, there are two circumstances that present problems when you actually try to implement these systems (see Figure 27.4):

Figure 27.4

Problems with client/server implementations.

If you have not done a lot of work with computer networks, the question of adequate communications for remote locations might seem strange. However, the reality of the current market is that it is relatively inexpensive to transmit data signals over a range of a few hundred yards, but it gets expensive when you have to transmit the same volume of data over hundreds or even thousands of miles. The transmission lines running to other cities, or even within your own city, are controlled by communications companies that are out to make a profit and have access to a limited commodity (the transmission system). It can cost hundreds or thousands of dollars per month to maintain communications circuits. You might have had communications circuits for years when you used mainframe links; however, the communications companies charge not only by the distance, but also by the transmission capacity. You might be impressed with the vast amount of data that is at your fingertips with your new client/server applications. However, this high volume of data, multiplied by the large number of users who now have access to your system, requires you to install a very robust communications system, and this can be expensive.

Now, permit me a slight diversion: the Internet. Almost everyone in the computer industry is talking about it (try to get Bill Gates or Larry Ellison to talk about anything else). The Internet can be thought of as a collection of networks someone else has already installed that you can connect to with just a local communications circuit (see Figure 27.5). The Internet could be the low-cost transmission system everyone has been dreaming about. Windows NT is very well integrated with TCP/IP (and therefore Internet) communications, and NT Server actually powers a fair number of the services you find on the Internet. However, a few issues need to be worked out before the Internet can be used as the solution to everyone's problems. First, many organizations are gun-shy about the idea of transmitting their critical business data over a public network. Anyone could intercept the packets, and your competitors might even serve as key links to various parts of the Internet. Many people are working on security for the Internet, and they will probably solve this problem soon. The second problem is the transmission capacity of the system. In the United States, much of the current Internet transmission capacity is paid for with government funds. The pressures of cost-cutting, desires to limit transmission of pornography and, of course, a few suggestions from telephone company lobbyists might limit expansion of the current Internet transmission capacity. If a large number of companies start doing business on the already busy Internet, it might become overloaded, and performance might degrade. Anyway, it is something to consider if you are really strapped for communications capabilities to remote sites.

Figure 27.5

An overly simplified picture of the Internet.

Now back to the main discussion. The second problem that creeps into many client/server systems is the increased demand to integrate data from various departments. When everyone had to live with nightly mainframe printouts, they had enough trouble wading through their existing printouts and relied on weekly reports from other departments to integrate business functions. Once users realize how easy it is to get at their business data using computer networks, they usually get demanding and want access to all the business data that affects them so they can make the business function better as a whole. This puts pressure on the information systems department to link together a large number of databases that, taken together, are too large to fit on any single computer.

Seeing this trend in the market, the database and computer systems vendors have started to produce products to support the concept of distributed databases. This works out well for Windows NT because most NT servers are relatively small (such as high-end PC servers) and can be linked together to form a powerful computing environment. The key components of a distributed database include the following:

Windows NT supports this type of environment in a variety of ways. First, it is very network-friendly. Because networking was part of its base design (as opposed to being a product added decades after the first release of the operating system, as is the case in many mainframes), it supports efficient links between applications (including database management systems) and its networking utilities. Also, with its multitasking features, it is capable of supporting numerous background processes working together behind the scenes to take care of the communications functions while the user interacts with the front-end application.

Before I leave this topic, I want to extend the concept of distributed databases just a bit. This is a direction that database vendors (Oracle, and Microsoft with its SQL Server product) seem to be heading. The basic concept is that of replicated databases. Imagine that you wish to have a single directory of employees and their associated personnel information synchronized throughout your entire multinational organization. You could have everybody log into the home office in Cleveland through an extensive corporate network to look up the phone number of the guy two floors up. However, there is another way. You keep a master database in Cleveland that contains the verified and updated information as entered by the central personnel department. The computer network to Cleveland is very busy during the day, but it is used lightly at night. Because there is no great penalty for having telephone numbers that are a day old, why not transmit the updated personnel databases to all regional offices every night? This gets them the information they need and saves your limited communications bandwidth for important business processing.

Companies have been implementing applications that transfer selected bits of information between computers for years. They write code that transfers the information to tape or even across the network, where another program loads the information into the local database. Now imagine that you could tell your database management system to keep multiple tables in multiple databases in sync and have it take care of all the details. This is what database replication is all about. The database management systems build the utilities for you to take care of all those annoying details. This is especially attractive to many Windows NT installations, where you are running near capacity during normal business hours and don't have a lot of legacy software written in older development environments to hold you back. It is often worth spending some time investigating these technologies to save yourself weeks (that you might not have) developing complex routines or trying to justify additional network bandwidth to the central office.

OK, so where am I now? The database administrator has laid a bit of database theory and product information on people who are interested in Windows NT, the operating system. This was not just an attempt to get sympathy from operating system proponents for the complexity of managing modern databases. Instead, it lays the foundation for the next section, which talks about the types of interactions between applications, database management systems, and the operating system, specifically Windows NT.

Interactions Between the Operating System and Database Management System

Perhaps you are in the position of having worked for a while using Windows NT as a file and print server. Someone tells you that they need to use your server to host a database for a critical corporate function (such as coordinating arrangements for use of the executive condominium in Barbados). It is important to understand the demands placed on your operating system and server computer. You can then compare these requirements with your current capabilities and load to ensure that you can support this new requirement.

For purposes of this discussion, I divide the interactions between the operating system and database management system into the following categories:

I start with a discussion of shared server memory areas. This is the key to understanding the differences between your common print and file sharing functions under NT and database management systems. Most server database management systems use large areas of memory to store database and processing information. Why? The answer is simple: speed. Memory is several orders of magnitude faster to access than disk drives (even the good ones). Although it is OK to scan a small file of a few hundred kilobytes to find the information you need, you don't want to scan large database files of megabytes or even gigabytes.

With everyone wanting to get more for less, database vendors are in a big competition to see who can get more performance out of less hardware. They want to process more transactions per second or handle larger data warehouses. To do this, they have gotten extremely creative about devising ways to keep frequently accessed information readily available in memory and to use memory to cache information retrieved through efficient transfers from the disk storage files, as shown in Figure 27.6. This figure shows how Oracle uses memory areas. User processes interact with various memory areas while background processes transfer information from the disk drives to the memory areas. When the database management system developers do their jobs well, you will find that the data you want is usually located in the rapid-access memory areas and you therefore do not have to wait for it to be transferred from the slower disk drives. The bottom line is that the big database management systems such as Oracle and SQL Server ask for a lot more memory than PC people are used to. You can easily see requirements for 8MB to 32MB just for the database itself (in addition to the operating system and all the other services running on the system). A toy illustration of this caching idea follows the figure.

Figure 27.6

Database shared memory areas.
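The following toy program illustrates the caching principle behind those shared memory areas under a big simplifying assumption: a tiny direct-mapped cache of fixed-size blocks. This is not how Oracle or SQL Server actually organize their memory (their algorithms are far more sophisticated), and the file name users.dbf is made up, but it shows why a second read of the same block costs no disk I/O.

```c
/* Toy direct-mapped block cache: repeated reads of the same block are served
 * from memory instead of going back to disk. The data file name is a
 * placeholder.
 */
#include <stdio.h>
#include <stdlib.h>

#define BLOCK_SIZE  4096
#define CACHE_SLOTS 8

typedef struct {
    long block_no;              /* which block is cached; -1 means empty */
    char data[BLOCK_SIZE];
} CacheSlot;

static CacheSlot cache[CACHE_SLOTS];

/* Return a pointer to the requested block, reading from disk only on a miss. */
static const char *read_block(FILE *datafile, long block_no)
{
    CacheSlot *slot = &cache[block_no % CACHE_SLOTS];   /* direct-mapped slot */

    if (slot->block_no != block_no) {                   /* cache miss: go to disk */
        fseek(datafile, block_no * (long)BLOCK_SIZE, SEEK_SET);
        fread(slot->data, 1, BLOCK_SIZE, datafile);
        slot->block_no = block_no;
    }
    return slot->data;                                  /* cache hit: no disk I/O */
}

int main(void)
{
    FILE *datafile = fopen("users.dbf", "rb");          /* hypothetical data file */
    int i;

    if (datafile == NULL) {
        perror("users.dbf");
        return 1;
    }
    for (i = 0; i < CACHE_SLOTS; i++)
        cache[i].block_no = -1;

    read_block(datafile, 42);   /* first access: physical read from disk */
    read_block(datafile, 42);   /* second access: served from memory     */

    fclose(datafile);
    return 0;
}
```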

This can be critical if you are running NT on Intel-based platforms. Although NT supports a large amount of memory (4GB), you rarely find an Intel-based computer that can actually hold that amount. This is especially true of computers that were designed several years ago, when very few applications in the PC world dreamed of needing this kind of memory. There are two types of limitation on memory. The first is the capacity of your motherboard to recognize the memory (many have limitations of 128MB or less). The other is the number of slots you have to hold memory (often 4 slots). This can be a problem, because RAM chips are among the most expensive components in a PC and you always get some on the motherboard when you buy the computer. If you want to upgrade a computer to 64MB of RAM and you have 32MB already, all you have to do is buy 32MB more, right? Maybe. You might also find that you have four 8MB memory chips in your machine that are filling all four of your available slots. Then your only solution is to remove your existing four memory chips and install four 16MB memory chips. Perhaps you can reallocate your other memory chips to users who have recently upgraded their workstations from Windows 3.1 to Windows NT Workstation or Windows 95 (they probably need as much memory as they can get), or sometimes you can get a trade-in deal from vendors who are hungry for your business.

One final note is on the banking of memory slots on the motherboard. It might seem like an annoying hardware detail, but it can come around to bite you. Imagine you have four memory slots, two of which are filled with 16MB memory chips. Your database administrator works with you to determine the memory requirements for a new database, and it turns out that you need an additional 16MB of memory. You just order another one of those 16MB chips and you'll still have room for future expansion, right? Once again, maybe. Computer motherboards often link two slots together in their logical designs and require that these two slots be filled with memory chips of the same size and type. Therefore, you might not be able to mix sizes or leave one slot in these banks empty.

The key to managing the database management system demands on your memory is to understand both the database's memory requirements and the capabilities of your system. The database administrator or vendor should be able to give you an idea of the required memory size. One thing I have always noted about databases is that they tend to grow rapidly, so it is good to allow yourself a little room to spare. You also need to understand the other needs and capabilities of your operating system. You might have to consult a hardware expert or crawl under the hood yourself to determine the amount of memory that is currently installed and the maximum amount of memory that can be installed in your computer. The Performance Monitor in the NT administrator utilities enables you to measure the usage of several resources, including memory, over a reasonable amount of time to understand the memory requirements of your operating system and other services. This tool is discussed in more detail in Chapter 19.
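If you want a quick answer from the operating system itself rather than opening the case, a few lines of Win32 code will report what NT thinks is installed and how much of it is currently free. This is only a sketch to complement Performance Monitor, which remains the right tool for watching memory usage over time.

```c
/* Report physical memory as seen by Windows, using the Win32
 * GlobalMemoryStatus() call, before committing a chunk of it to a database's
 * shared memory areas.
 */
#include <stdio.h>
#include <windows.h>

int main(void)
{
    MEMORYSTATUS ms;

    ms.dwLength = sizeof(ms);
    GlobalMemoryStatus(&ms);

    printf("Total physical memory: %lu MB\n", ms.dwTotalPhys / (1024UL * 1024UL));
    printf("Free physical memory:  %lu MB\n", ms.dwAvailPhys / (1024UL * 1024UL));
    printf("Memory load:           %lu%%\n",  ms.dwMemoryLoad);

    return 0;
}
```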


CAUTION:

When you run out of memory, your system starts swapping applications and data to the hard drive, and performance degrades rapidly. Avoid swapping whenever possible. Trust me, it hurts.


Now that I have solved all the memory problems you might run into, it's time to discuss the next load a database management system places on your system: server background processes. You might have wondered how those memory areas that give the database management system all that speed get maintained. What writes the changes from the memory areas to the data files on disk (which stick around after you turn the power off) and gets the anticipated data from disk to memory before the user needs it? The answer for most database management systems is a series of background processes (services and threads under NT) that perform specialized functions when needed (see Figure 27.7). Some of these processes also service user requests. One of the keys to modern database management systems is that the users tell the system what they want and the system figures out the most efficient way to get it. This works well because teams of people work on efficient search algorithms that you would never have time to implement if you had to do it yourself. The end result is a complex series of processes that need to have computer processing capacity to get their jobs done.

Figure 27.7

Background database services.
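Because these background processes show up under NT as services, you can also check on them programmatically through the Service Control Manager API, not just through the Control Panel. The sketch below assumes a service name of MSSQLServer, which is only an example; Oracle instances usually register under names of the form OracleService followed by the instance name, so confirm the exact name on your own server before relying on it.

```c
/* Check whether a database service is running, using the NT Service Control
 * Manager API. The service name is an example only. Link with advapi32.lib.
 */
#include <stdio.h>
#include <windows.h>

int main(void)
{
    SC_HANDLE scm, svc;
    SERVICE_STATUS status;
    const char *name = "MSSQLServer";   /* example service name */

    scm = OpenSCManager(NULL, NULL, SC_MANAGER_CONNECT);
    if (scm == NULL) {
        printf("Cannot connect to the Service Control Manager\n");
        return 1;
    }

    svc = OpenService(scm, name, SERVICE_QUERY_STATUS);
    if (svc == NULL) {
        printf("Service %s is not installed on this machine\n", name);
        CloseServiceHandle(scm);
        return 1;
    }

    if (QueryServiceStatus(svc, &status) &&
        status.dwCurrentState == SERVICE_RUNNING) {
        printf("Service %s is running\n", name);
    } else {
        printf("Service %s is installed but not running\n", name);
    }

    CloseServiceHandle(svc);
    CloseServiceHandle(scm);
    return 0;
}
```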

So how do these background processes impact the NT administrator? I would typically consider three key areas:

The next key resource to discuss is the disk storage system (see Figure 27.8). Databases with large volumes of information need to have a permanent home for the user data; this typically takes the form of an array of fixed disk drives. Databases often take up much more space than would ever be needed by print servers or file sharing. Therefore, it is important to get disk storage sizing estimates before starting a new database project.

Figure 27.8

A disk storage system.
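Once you have a sizing estimate from the database administrator, it helps to compare it against what the candidate drives actually have free. Here is a small sketch that asks Windows for the free space on a drive; the drive letter D: is just a placeholder for whichever volume will hold the data files.

```c
/* Report free space on a drive before allocating database files.
 * GetDiskFreeSpace() reports in clusters, so the numbers are multiplied out
 * to bytes and then reduced to megabytes.
 */
#include <stdio.h>
#include <windows.h>

int main(void)
{
    DWORD sectorsPerCluster, bytesPerSector, freeClusters, totalClusters;
    double freeMB, totalMB;

    if (!GetDiskFreeSpace("D:\\", &sectorsPerCluster, &bytesPerSector,
                          &freeClusters, &totalClusters)) {
        printf("GetDiskFreeSpace failed (error %lu)\n", GetLastError());
        return 1;
    }

    freeMB  = (double)freeClusters  * sectorsPerCluster * bytesPerSector / (1024.0 * 1024.0);
    totalMB = (double)totalClusters * sectorsPerCluster * bytesPerSector / (1024.0 * 1024.0);

    printf("Drive D: %.0f MB free of %.0f MB total\n", freeMB, totalMB);
    return 0;
}
```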

It is also important to lay out the physical configuration of the disk drives. Many PC servers have a limited number of internal disk drive bays, and some of these bays are larger than others. You also have to consider the limit on the number of drives on a given disk drive controller (typically seven for SCSI disk drives and two for IDE disk drives). It turns into a bit of a puzzle to ensure that you have enough bays, cables, and controllers. Due to physical space limitations or drive bay limitations, you might have to remove older, smaller disk drives and replace them with larger capacity drives, especially on Intel-based servers.

Following are a few things you might not think of that might require some disk space:

Another key system resource that you have to be concerned with, especially in some Intel-based NT systems, is the disk input/output capacity. Basically, this is how quickly you can transfer information from memory to the disk drives. Although most UNIX systems are built with high-speed SCSI disk drives and controllers, many Intel-based servers have relatively slower data transfer subsystems. In many database servers (even large UNIX boxes or mainframes), the data transfer speeds can be the limiting factor in the overall performance of the database. Therefore, you need to know a few tricks to ensure that you are doing the best you can to provide adequate data transfer capabilities (see Figure 27.9):

Figure 27.9

Keys to data transfer capacity.

Next on the list of interactions between the database management system and the operating system is the network communications processing load it places on your server. Typically, NT handles tuning the communications processes you use to connect your database servers, application servers, and workstations together. There are, however, two things for you to consider when implementing the database client/server networking on your computers:

Last on the list of resources is something that is not inside your NT computers and that the operating system has no control over: the transmission capacity of the network itself. Why mention it in a book on NT? Typically, when performance is poor, the users call the database administrator, system administrator, or developers. Eventually, the finger usually falls on the servers and the operating system. You should check out all the things mentioned earlier that you do have control over, but also check out the network as a possible problem. Remember, it takes all the components of a client/server system working efficiently together to achieve overall system performance.

To really see if the network itself is causing the problem, you have to be familiar with the overall network topology. I have included Figure 27.11 as an example. You might have to fight like heck to get one of these drawings out of a network guru, but it is usually worth your effort. There are a few key items to look for when a problem occurs:

Figure 27.11

Network transmission capacity limitations.

Did you ever feel like a database administrator has locked you in a room and hit you with more database information than you ever wanted to know? There has been a method to my madness. One of the biggest problems I have run across in computer environments is a lack of understanding of the needs of other specialty groups. On a database server, the database is the reason the server exists. If you know a bit about what the database is doing to your operating system and server, you are in a better position to react to those demands and keep things running smoothly. I hate sitting in a room where one techie says it's not his subsystem that's causing the problem, it's the other subsystem that's to blame, and vice versa. When asked why, they usually resort to "it's obvious" or "I've tested my system thoroughly." If nothing else, you are better armed to determine exactly where the problem is and explain why the other people have to take corrective actions and how you have already taken your corrective actions.

Oracle Under Windows NT

If every database management system were alike, I would be finished with this chapter. However, there are many vendors in the highly competitive computer world. Each of them tries to do some things just a little differently to achieve better performance or cost less or fill some special niche, such as decision support systems. Therefore, I have chosen to examine three of the most common types of database management systems you are likely to run across so that you can get a feel for these differences.

The first system I have chosen is the one with the largest share of the overall database market (depending on what you measure). Oracle runs on computers ranging from mainframes to personal computers running Windows 3.1. They also have a wide product family, ranging from the database management system itself to development tools to full-fledged applications such as financial and manufacturing systems (see Figure 27.12).

Figure 27.12

The Oracle product family.

What would the average NT system administrator need to know about Oracle database management systems running on their server? I concentrate on the database because that is probably the most demanding component for most administrators. The basic architecture of the database management system consists of several memory areas (designated for different purposes and tunable by the DBA to be larger or smaller), a series of background processes (combined into one service per database instance under NT), and the database data files (see Figure 27.13).

Figure 27.13

The basics of the Oracle architecture.

To facilitate communications with your Oracle database administrator, a few definitions are in order:

Now that I have the definitions out of the way, here are a few tips on running Oracle under Windows NT:

This section was actually quite a challenge. You could write an entire book on Oracle database administration (I know, I did it). It can be a complex beast: powerful and a challenge to discipline. However, it is popular and works fairly well in the NT environment. In fact, Oracle has recently announced that it will add NT to the list of operating systems included in the initial wave of releases when it upgrades its products. This list of operating systems includes the more popular UNIX systems and is a sign that Oracle sees NT as a viable (and hence profitable) platform for its database products. Anyway, this section should provide you with the basics you need to communicate with your database administrator and, combined with previous sections, help you understand what the Oracle database management system is doing to your operating system and server.

Microsoft SQL Server Under Windows NT

Microsoft has its own database management system that it would be glad to sell to you with your Windows NT operating system. It is called Microsoft SQL Server, to differentiate it from Sybase SQL Server, which runs under NT in addition to several flavors of UNIX. Microsoft originally teamed with Sybase to develop this product under Windows NT, but they have recently agreed to go their separate ways on development. Anyway, the Microsoft SQL Server product is something you might have to support on your NT servers and workstations for the following reasons:

So what is Microsoft SQL Server like? Although the salespeople might stress the differences between the various products and underlying technologies (such as scalability), I tend to see the similarities. The Microsoft SQL Server architecture is somewhat similar to the Oracle architecture, at least from the operating system administrator's point of view (see Figure 27.14). It has the three key components found in most server database management systems: shared memory areas, background processes, and disk storage files.

Figure 27.14

The basics of the Microsoft SQL Server architecture.

A few of the terms you might want to be familiar with for dealing with Microsoft SQL Server databases include the following:

Now for a few final notes on dealing with Microsoft SQL Server as your database management system:

Again, Microsoft SQL Server can fill an entire book by itself. Actually, it has an enormous documentation set stored online that comes with the product. (Some people are still not comfortable working with online documentation, but at least you never leave your copy at home.) The key components of SQL Server were discussed earlier, and you can apply the principles discussed earlier in the book for dealing with database interactions with the operating system to plan and manage your system more effectively. One side note: I have found that the system administrator is more likely to have to administer a SQL Server database than an Oracle database. That is because Microsoft often integrates its BackOffice system products (such as Systems Management Server) with SQL Server. It is not a formally maintained application such as general ledger, where you have a development group and a support group that contains a database administrator.

Microsoft Access's Jet Database Engine Under Windows NT

A final topic that I want to discuss briefly is the concept of using a simple database file stored on a server to meet modest database needs. Back in the old days (which were not always good), people often stored dBASE files on servers that could be accessed from several PCs on the network. The locking mechanisms were primitive (locking mechanisms prevent one user from overwriting the changes of another user who is accessing the row at the same time), but they met many needs. I wanted to mention that you might consider using something like the Microsoft Access Jet database engine (access to it comes bundled in Visual C++, Visual Basic, and Microsoft Query) to set up one of these simple databases.

If you use this approach, you have to provide a storage area on a shared directory that contains a database file (in Microsoft Access's case, a file with the .mdb extension). Users run applications or access tools (such as MS Query) that use the Open Database Connectivity (ODBC) tool set to access this data file. When you create the ODBC data source, you use a standard browse window to select the .mdb file you want to access. This approach does not provide complex transaction logging, recovery, or locking services, but it is relatively simple to implement and might meet many modest user requirements.
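To give you a feel for what the client side looks like, here is a hedged sketch in C that opens a shared .mdb file through ODBC and reads one column. The UNC path, the Employees table, and the LastName column are all invented for the example, and it assumes the Microsoft Access ODBC driver is installed on the client PC.

```c
/* Open a shared Access (.mdb) file through ODBC and run a simple query.
 * Link with odbc32.lib. Path, table, and column names are placeholders.
 */
#include <stdio.h>
#include <windows.h>
#include <sql.h>
#include <sqlext.h>

int main(void)
{
    SQLHENV  env;
    SQLHDBC  dbc;
    SQLHSTMT stmt;
    SQLRETURN rc;
    SQLCHAR outConn[512];
    SQLSMALLINT outLen;
    SQLCHAR name[64];
    SQLLEN nameLen;

    /* Connection string: driver name plus the path to the shared .mdb file. */
    SQLCHAR connStr[] =
        "DRIVER={Microsoft Access Driver (*.mdb)};"
        "DBQ=\\\\FILESRV\\public\\phones.mdb;";

    SQLAllocEnv(&env);
    SQLAllocConnect(env, &dbc);

    rc = SQLDriverConnect(dbc, NULL, connStr, SQL_NTS,
                          outConn, sizeof(outConn), &outLen,
                          SQL_DRIVER_NOPROMPT);
    if (rc != SQL_SUCCESS && rc != SQL_SUCCESS_WITH_INFO) {
        printf("Could not open the .mdb file\n");
        return 1;
    }

    SQLAllocStmt(dbc, &stmt);
    SQLExecDirect(stmt, (SQLCHAR *)"SELECT LastName FROM Employees", SQL_NTS);

    /* Walk the result set one row at a time. */
    while ((rc = SQLFetch(stmt)) == SQL_SUCCESS || rc == SQL_SUCCESS_WITH_INFO) {
        SQLGetData(stmt, 1, SQL_C_CHAR, name, sizeof(name), &nameLen);
        printf("%s\n", name);
    }

    SQLFreeStmt(stmt, SQL_DROP);
    SQLDisconnect(dbc);
    SQLFreeConnect(dbc);
    SQLFreeEnv(env);
    return 0;
}
```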

A Quick Discussion of ODBC and OLE

One of the concepts you might have to deal with when supporting client/server databases is middleware. Middleware can be defined as the software products that are needed to make the applications talk to the operating system networking utilities. The complicating factor is that each database can support several different sets of middleware, and these products can come from several vendors. The middleware stacks are generally as depicted in Figure 27.15.

Figure 27.15

Middleware and database access.

I typically divide middleware into two components that I refer to as upper middleware and lower middleware. The upper middleware is responsible for interfacing with the database or applications. The lower middleware component is responsible for interfacing with the network drivers. In Oracle's case, you have SQL*Net. For Oracle tools (such as SQL*Forms), SQL*Net provides both upper and lower middleware services. However, if you are writing Visual C++ applications, you need to use something like the Open Database Connectivity (ODBC) drivers to link your applications to SQL*Net. Confused? There are also ODBC drivers for Oracle (such as OpenLink's) that integrate the two layers of middleware into one package.
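The division of labor shows up clearly in application code: the sketch below speaks only ODBC (the upper middleware), and everything underneath it, the driver, SQL*Net, and the network transport, is the middleware's problem. The data source name OracleHR and the user name and password are placeholders you would replace with a data source defined in the ODBC Control Panel applet.

```c
/* Connect through a preconfigured ODBC data source. The application never
 * touches SQL*Net or the network drivers directly. Link with odbc32.lib.
 */
#include <stdio.h>
#include <windows.h>
#include <sql.h>
#include <sqlext.h>

int main(void)
{
    SQLHENV env;
    SQLHDBC dbc;
    SQLRETURN rc;

    SQLAllocEnv(&env);
    SQLAllocConnect(env, &dbc);

    rc = SQLConnect(dbc,
                    (SQLCHAR *)"OracleHR", SQL_NTS,   /* placeholder data source name */
                    (SQLCHAR *)"scott",    SQL_NTS,   /* placeholder user name        */
                    (SQLCHAR *)"tiger",    SQL_NTS);  /* placeholder password         */

    if (rc == SQL_SUCCESS || rc == SQL_SUCCESS_WITH_INFO) {
        printf("Connected; the driver and SQL*Net handled everything below ODBC\n");
        SQLDisconnect(dbc);
    } else {
        printf("Connection failed; check the data source definition in the ODBC applet\n");
    }

    SQLFreeConnect(dbc);
    SQLFreeEnv(env);
    return 0;
}
```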

Just when you get comfortable with the concepts behind ODBC, Microsoft comes out with the OLE 2 (Object Linking and Embedding) utilities that provide an alternative to ODBC. If you were one of the few people to get comfortable with developing for OLE, Microsoft has another database access technology (Data Access Objects, or DAO) that is optimized for certain types of transactions against the Jet database engine.

Why am I scaring you with all these terms, technologies, and options? What I wanted you to get from this brief discussion of access technologies is an understanding that client/server database environments consist of applications and databases that have been designed to work in client/server environments, along with a set of communications software (middleware) that links everything together. Vendors provide many options for middleware, and you have to take the time to figure out how to connect the various products that are standard in your organization before you field production client/server databases. This can take a fair amount of time and is always best done in a prototype environment before the pressure is on to deliver that system to the field. You might want to try several vendors' products to see which ones work best for the type of database you are deploying. Finally, be prepared for the fact that many middleware errors translate into "something, somewhere is causing some kind of a problem" (in other words, it can take you a bit of time to figure out and correct errors). However, once you get the hang of things, middleware works well and provides a level of service to database users that cannot be beat.

Summary

This chapter was devoted to the use of Windows NT Server to support databases. I covered the ways a database can impact the operating system and how to deal with these effects. Several of the more common database management systems were presented, not to make you an expert database administrator, but to give you a feel for a few of the implementations and their basic terminologies. I have found that database management systems can be a challenge even to the capacities of large UNIX, VMS, and mainframe computers. They use a lot of memory and try to use some complex tricks to increase their performance, and this can cause a few headaches for the system administrator. I hope you now understand the concepts behind these effects and know where to look to see what is happening and how to deal with it.
