Network revamp: Linux with Windows

After the system was locked down and hardened, it was brought up on the network. An IPRoute router had already been installed, so Internet connectivity was available over a 56Kbps dial-on-demand link. All current updates were downloaded from a Red Hat updates mirror and installed to bring the system up to current specification. Copies of all installed updates and non-standard packages were kept in /home/ftp/public/updates so they'd be available if any other Linux systems were ever brought online.
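
As a rough sketch (the mirror URL and release path below are placeholders, and wget is assumed to be available), keeping that local archive current amounted to something like this:

    # Pull the current errata RPMs into the local archive, then freshen
    # whatever is already installed (placeholder mirror URL).
    MIRROR=ftp://updates.example.com/pub/redhat/updates/i386
    cd /home/ftp/public/updates
    wget -nd -r -l1 "$MIRROR/"
    rpm -Fvh *.rpm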

Migrating data

Once the Linux server was ready to be deployed, it was decided that the data migration should take place over a weekend so the transition would be as transparent as possible. Samba was configured to act as a secondary domain controller so that it authenticated users against the NT server. This method was chosen for both simplicity and security: since the username and password were the same on both the NT server and the Samba file server, passwords were cached and users were logged in automatically, and it reduced the chances of /etc/passwd being compromised. Samba configuration is straightforward; every line of smb.conf is extensively documented, more thoroughly than the documentation that comes with most software.
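
A minimal smb.conf along these lines covers that setup; the workgroup and NetBIOS names below are assumptions, and server-level security is shown here as one common way for Samba 2.x to pass authentication through to an existing NT box:

    # Global section: pass authentication through to the NT server
    # (workgroup and server names are assumptions)
    [global]
        workgroup = COMPANYNAME
        security = server
        password server = NTSERVER
        encrypt passwords = yes

    # Per-user home directories, matching the NT layout
    [homes]
        comment = Home Directories
        browseable = no
        writable = yes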

To make sure all data was moved from the NT server to the Linux server reliably, I double-checked permissions on the NT server to confirm that users had read privileges to all files in their home directories. Once it was guaranteed that each user had the proper permissions, I ran smbmount as root and mounted each user's home directory from \\ntserver\username to /mnt/username. I then logged in as each user individually and ran "script backuplog.txt" to start a session that logged all keystrokes and output. After that, it was a simple matter to run "cp -avR /mnt/username/* ~" to copy all files into the user's new home directory.
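
The per-user steps looked roughly like the following; jsmith is a placeholder username, and smbmount's command-line syntax changed between Samba releases, so treat the invocation as indicative rather than exact:

    # Run as root for each user in turn.
    smbmount //ntserver/jsmith /mnt/jsmith -U jsmith   # mount the NT home share
    su - jsmith                                        # work as the user
    script backuplog.txt                               # log keystrokes and output
    cp -avR /mnt/jsmith/* ~                            # copy, preserving attributes
    exit                                               # end the script session
    exit                                               # drop back to root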

I then double-checked backuplog.txt for each user to make sure there were no errors. Once all of the copies had finished, I created manufact and humanrel accounts for the manufacturing and human relations shared directories. I followed the same process to copy over those shared directories, then set group read/write permissions and revoked all world permissions. After double-checking the group lists on the NT server, the appropriate user accounts were added to the manufact and humanrel groups so each user would have access to their shared files.
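
The shared areas come down to a few commands per group; a sketch, again with jsmith as a placeholder username:

    # Creating the manufact account also creates a matching group on Red Hat;
    # each name from the NT group list is then added to that group.
    # Note that usermod -G replaces the supplementary group list, so list
    # every group the user should belong to.
    useradd manufact
    usermod -G manufact jsmith
    # Group read/write, no world access, on the shared tree
    chmod -R u=rwX,g=rwX,o-rwx /home/manufact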

To test the installation, I used one of the Windows 95 systems to log into the network with every single username to make sure each account worked. Checking every account is unnecessary once three or four random accounts have been verified, but I took the extra step because it's safer to spend more time verifying than to somehow miss something. Once the accounts were verified to be working and the data was intact, a full backup of the NT server was made and the system was shut down and unplugged so it couldn't accidentally be used.

Internet configuration

The entire LAN was connected to the Internet via a standalone IPRoute router, and the Linux server acted as a DNS and proxy server using Apache's built-in proxy facility. A proxy was chosen to cache frequently accessed Web pages because the entire company would be sharing a 56Kbps dialup, and 30 people hitting their home pages at 9:00 AM sharp tends to slow a network down. An alternative to Apache proxying would be an optimized Squid2 proxy, and RPMs are now available for Red Hat Linux 6.0.
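
With Apache 1.3's mod_proxy compiled in, the caching proxy boils down to a handful of httpd.conf directives; the cache path and sizes below are assumptions, not the values actually used:

    # httpd.conf -- act as a forward proxy and cache pages on disk
    ProxyRequests On
    CacheRoot "/var/cache/httpd"
    # Disk cache size in KB (assumed value)
    CacheSize 50000
    # Keep cached documents no longer than 24 hours
    CacheMaxExpire 24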

Since none of the systems on the network had ever used TCP/IP before, it was necessary to come up with an IP addressing scheme and assign the NT server an IP address. For simplicity, and to make it possible to monitor the network using Shomiti Surveyor and NAI SnifferPro, the entire network would be converted from NetBEUI to TCP/IP. Because the IPRoute router would act as a Network Address Translation (NAT) firewall and the LAN wouldn't have direct access to the Internet, one of the address blocks reserved for private, unconnected networks (RFC 1918) was used.

Since the IPRoute router used NAT, all outbound packets were transparently proxied out to the Internet and all inbound packets were filtered. Only SMTP, HTTP, and SSH connections were accepted and forwarded to the Linux server. Regardless of how the network is connected to the Internet, it's a good idea to come up with a definite IP addressing scheme and use Visio (or equivalent) to put together a network diagram covering at least the following (a purely hypothetical example appears after the list):

  • Network:
  • Subnet Mask:
  • Broadcast:
  • Gateway:
  • Nameserver:
  • core (Linux):
  • base (NT):
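
As a purely hypothetical illustration (these are not the site's actual values), a scheme drawn from the RFC 1918 192.168.1.0/24 block might look like this:

    Network:       192.168.1.0
    Subnet Mask:   255.255.255.0
    Broadcast:     192.168.1.255
    Gateway:       192.168.1.1      (the IPRoute router)
    Nameserver:    192.168.1.2      (the Linux server)
    core (Linux):  192.168.1.2
    base (NT):     192.168.1.3
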
Once the Internet connection was configured and IP addresses were assigned, the systems needed to know how to reach each other. A caching nameserver (named, included with every Linux distribution) was set up to handle internal DNS and to make requests to the outside world. Since an unconnected address block was used internally, a bogus domain name was used to make sure no names would conflict with real systems out on the Internet: the Linux server became core.companyname.lab inside the firewall while using the company's real domain name outside the firewall. An alternate but less secure method would be to use the real domain for all systems and put DNS information for internal systems on the external name server (never give out more information about your internal LAN than is needed!).

Both forward and reverse DNS zonefiles were created, then added to the named.boot config file. The ISP's name servers were then added to /etc/resolv.conf as secondary and tertiary name servers for faster lookups on external domain names.
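
With the BIND 4-style named.boot mentioned above, the internal zones come down to a few lines; the zone file names, the reverse zone, and the ISP addresses below are assumptions built on the hypothetical 192.168.1.0/24 block used earlier:

    ; /etc/named.boot -- caching name server, authoritative for the internal zones
    directory                          /var/named
    primary  companyname.lab           db.companyname.lab
    primary  1.168.192.in-addr.arpa    db.192.168.1
    cache    .                         named.ca

    # /etc/resolv.conf -- local server first, then the ISP's servers
    # (the ISP addresses shown are placeholders)
    search companyname.lab
    nameserver 127.0.0.1
    nameserver 203.0.113.53
    nameserver 203.0.113.54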

The last step was replacing sendmail with qmail. Because sendmail has a reputation for miscellaneous bugs and security holes popping up a few times a year, I wanted the most secure mail daemon possible. Once the Linux server was set up and my contract there was finished, chances were the server would never see a systems administrator again, so security was a primary concern. The version of qmail I installed was built from the full source archive (see Related resources), but RPM packages are now available for full compatibility with Red Hat Linux 6.0.

Because qmail delivers to /home/username/Mailbox instead of /var/mail/username, and no simple patch was available for the then-current IMAP/POP3 server, symbolic links were created so /var/mail/username pointed at /home/username/Mailbox. Although I initially thought file locking would be a problem, the symbolic-link method worked out fine because no one ever read e-mail interactively using pine or mail. Even though the /var/mail entries were symbolic links, qmail still honored file locking, so if someone was downloading e-mail via IMAP or POP3, qmail would queue the message in /var/qmail/queue until the lock cleared. Full patches for the IMAP server are now available, so the symbolic links are no longer necessary.
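
The links themselves are one command per account; a sketch, with jsmith standing in for each username:

    # Point the traditional spool entry at qmail's per-user Mailbox file
    # (repeat for each account, or loop over the user list)
    ln -s /home/jsmith/Mailbox /var/mail/jsmith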

Inbound e-mail and outbound relaying then needed to be configured, which was extremely simple. Without relay authentication, it would be possible for external users to push e-mail through the open relay and send spam or fakemail. Both companyname.lab and the company's real domain were added to /var/qmail/control/rcpthosts and /var/qmail/control/locals so inbound e-mail would be recognized, then outbound relaying was restricted to the LAN using tcp_wrappers:
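
A sketch of that setup, with companyname.com standing in for the real domain:

    # Accept and deliver mail addressed to either name
    echo companyname.lab >> /var/qmail/control/rcpthosts
    echo companyname.com >> /var/qmail/control/rcpthosts
    echo companyname.lab >> /var/qmail/control/locals
    echo companyname.com >> /var/qmail/control/locals

Relaying is then opened up only to the LAN by having tcp_wrappers set RELAYCLIENT for internal addresses when qmail-smtpd is started from inetd via tcpd; the line below follows the form shown in qmail's documentation, and the exact setenv syntax is worth double-checking against hosts_options(5) on your system:

    # /etc/hosts.allow -- LAN hosts (hypothetical 192.168.1.x block) may relay;
    # everyone else can only deliver to the domains listed in rcpthosts
    tcp-env: 127.0.0.1 192.168.1. : setenv = RELAYCLIENT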

Reinstalling Windows 95

Now that all of the services had been set up, it was time to do a clean install of Windows 95 on all of the end users' systems. We chose OEM copies of Windows 95 OSR2 to keep the systems as stable as possible. A minimal operating system installation was performed along with Office 97 and the company's SMS database application, and copies of the Windows 95 CAB files and all installation programs were made available on the Samba server so the Windows 95 disc didn't need to be left out in the open whenever users installed drivers. Internet Explorer 4 with Outlook Express was then installed for Web surfing and IMAP e-mail.

IP addresses were then assigned, and manageable names were given to each system. For example, a PC in the accounting department would be named Acct-22 (acct-22.companyname.lab) with the description "James' PC - Room 12." Local printers were then shared so end users could send printouts to each other without going through e-mail first. The added bonus was that all systems then showed up in Network Neighborhood as a quick reference to who owned which PC, the room number, the IP address, and whether the system was offline. An HP4000N network laser printer was then configured and queued through the Linux server using lpd and Samba printer sharing.
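
The lpd side of that setup is a small /etc/printcap entry pointing at the printer's JetDirect interface, plus a matching Samba print share; the hostname, queue name, and spool paths below are assumptions:

    # /etc/printcap -- queue the HP4000N through the Linux server
    lp|hp4000n:\
            :sd=/var/spool/lpd/hp4000n:\
            :rm=hp4000n.companyname.lab:rp=raw:\
            :mx#0:sh:

    # smb.conf -- expose the same queue to the Windows 95 clients
    [hp4000n]
            printable = yes
            printer = lp
            path = /var/spool/samba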

Overall, integrating Linux is fairly simple and, with proper preparation, can be done over a weekend for a LAN of 20 or 30 end-user systems. It's been two years since the Linux server was installed, and it's still working just fine today without a systems administrator, using Red Hat's built-in crontabs to prune logfiles and keep everything clean. The record uptime so far is 270 days. Not bad for a salvaged 486 that was about to be thrown out.

Related resources

1. Samba: Integrating UNIX and Windows, by John D. Blair.
2. ZedZ Consultants: hub for secure Linux projects and related information.
3. qmail: preferred to sendmail because of its stronger security record.
4. Red Hat mirror sites index: where to download the latest updates and releases.
5. Squid mirror sites: where to download the proxy caching program (source code only).

Sean Sosik-Hamor is an Alpha Geek and systems administrator for Lucent Technologies and, in his free time, runs Sosik-Hamor Networks off a T1 out of his basement.
