Host management

The foregoing chapters have explored the basics of how hosts need to function
within a network community; we are now sufficiently prepared to turn our atten-
tion to the role of the individual host within such a network. It should be clear
from the previous chapter that it would be a mistake to think of the host as being
the fundamental object in the human–computer system. If we focus on too small a
part of the entire system initially, time and effort can be wasted configuring hosts
in a way that does not take into account the cooperative aspects of the network.
That would be a recipe for failure and only a prelude to later reinstallation.

4.1 Global view, local action
Life can be made easy or difficult by the decisions made at the outset of host
installation. Should we:
• Follow the OS designer’s recommended setup? (Often this is insufficient for
our purpose)
• Create our own setup?
• Make all machines alike?
• Make all machines different?
Most vendors will only provide immediate support for individual hosts or, at
best, for clusters of hosts that they manufacture themselves. They will almost never
address the issue of total network solutions without additional cost, so their
recommendations often fall notably short of what is required in a real network.
We have to be aware of the big picture when installing and configuring hosts.
4.2 Physical considerations of server room
Critical hardware needs to be protected from accidental and malicious damage.
An organization’s very livelihood could be at stake from a lack of protection of its
basic hardware. Not all organizations have the luxury of choosing ideal conditions
for their equipment, but all organizations could dedicate a room or two to server
equipment. Any server room should have, at the very least, a lockable door,
probably cooling or ventilation equipment to prevent the temperature from rising
above about 20 degrees Celsius and some kind of anti-theft protection.
Remember that backup tapes should never be stored in the same room as the
hosts they contain data from, and duplicate servers are best placed in different
physical locations so that natural disasters or physical attacks (fire, bombs etc.)
will not wipe out all equipment at the same time.
Internet Service Providers (ISPs) and Web hosting companies, which rely on 100
percent uptime for their customers, need quite a different level of security. Any
company with a significant amount of computing equipment should consider a
secure environment for their hardware, where the level of security is matched
with the expected threat. In some countries, bombs or armed robbery are not
uncommon, for instance. With high capital costs involved, physical security is
imperative.
An ISP should consider obscuring the nature of its business to avoid terrorist
attack, by placing it in an inconspicuous location without outer markings. Security
registration should be required for all workers and visitors, with camera recorded
registration and security guards. Visitors should present photo-ID and be pre-
vented from bringing anything into the building; they should be accompanied at
all times. Within the server area:
• A reliable (uninterruptable) power supply is needed for essential equipment.
• Single points of failure, e.g. network cables, should be avoided.
• Hot standby equipment should be available for minimal loss of uptime in
case of failure.
• Replaceable hard disks should be considered,1 with RAID protection for
continuity.
• Protection from natural disasters like fire and floods, and heating failure in
cold countries should be secured. Note that most countries have regulations
about fire control. A server room should be in its own ‘fire cell’, i.e. it should
be isolated by doorways and ventilation systems from neighboring areas to
prevent the spread of fire.
• Important computing equipment can be placed in a Faraday cage to prevent
the leakage of electromagnetic radiation, or to protect it from electromagnetic
pulses (EMP), e.g. from nuclear explosions or other weaponry.
• Access to cabling should be easy in case of error, and for extensibility.
• Humans should not be able to touch equipment. No carpeting or linoleum
that causes a build up of static electricity should be allowed near delicate
equipment. Antistatic carpet tiles can be purchased quite cheaply.
1. On a recent visit to an Internet search engine’s host site, I was told that vibration in large racks
of plug-in disks often causes disks to work loose from their sockets, so that the most common
repair was pushing a disk back in and rebooting the host.
• Humidity should also be kept at reasonable levels: too high and condensation
can form on components, causing short circuits and damage; too low and
static electricity can build up, causing sparks and spikes of current. Static
electricity is especially a problem around laser printers, which run hot and
expel moisture; it causes paper jams, as pages stick together in low-moisture
environments.
In a large server room, one can easily lose equipment, or lose one’s way!
Equipment should be marked, tagged and mapped out. It should be monitored
and kept secure. If several companies share the floor space of the server room, they
probably require lockable cabinets or partitioned areas to protect their interests
from the prying hands of competitors.
4.3 Computer startup and shutdown
The two most fundamental operations which one can perform on a host are to start
it up and to shut it down. With any kind of mechanical device with moving parts,
there has to be a procedure for shutting it down. One does not shut down any
machine in the middle of a crucial operation, whether it be a washing machine in
the middle of a program, an aircraft in mid-flight, or a computer writing to its disk.
With a multitasking operating system, the problem is that it is never possible to
predict when the system will be performing a crucial operation in the background.
For this simple reason, every multitasking operating system provides a procedure
for shutting down safely. A safe shutdown avoids damage to disks by mechanical
interruption, but it also synchronizes hardware and memory caches, making sure
that no operation is left incomplete.
4.3.1 Booting Unix
Normally it is sufficient to switch on the power to boot a Unix-like host. Sometimes
you might have to type ‘boot’ or ‘b’ to get it going. Unix systems can boot in several
different modes or run levels. The most common modes are called multi-user
mode and single-user mode. On different kinds of Unix, these might translate
into run-levels with different numbers, but there is no consensus. In single-user
mode no external logins are permitted. The purpose of single-user mode is to allow
the system administrator access to the system without fear of interference from
other users. It is used for installing disks or when repairing filesystems, where the
presence of other users on the system would cause problems.
The Unix boot procedure is controlled entirely by the init program; init
reads a configuration file called /etc/inittab. On older BSD Unices, a file called
/etc/rc, meaning ‘run commands’, together with subsidiary files like rc.local, was
called to start all services. These files were no more than shell scripts. In the
System V approach, a directory called (something like) /etc/rc.d is used to
keep one script per service. /etc/inittab defines a number of run-levels, and
starts scripts depending on what run-level you choose. The idea behind inittab
is to make Unix installable in packages, where each package can be started
or configured by a separate script. Which packages get started depends on the
run-level you choose.
The default form for booting is to boot in multi-user mode. We have to find out
how to boot in single-user mode on our system, in case we need to repair a disk
at some point. Here are some examples.
Under SunOS and Solaris, one interrupts the normal booting process by typing
stop a, where stop represents the ‘stop key’ on the left-hand side of the keyboard.
If you do this, you should always give the sync command to synchronize disk
caches and minimize filesystem damage.
Stop a
ok? sync
ok? boot -s
If the system does not boot right away, you might see the line
type b) boot, c) continue or n) new command
In this case, you should type
b -s
in order to boot in single-user mode. Under the GNU/Linux operating system,
using the LILO or GRUB boot loader, we interrupt the normal boot sequence by
pressing the SHIFT key when the LILO prompt appears. This should cause the
system to stop at the prompt:
Boot:
To boot, we must normally specify the name of a kernel file, normally linux. To
boot in single-user mode, we then type
Boot: linux single
Or, at the LILO prompt, it is possible to type ‘?’ in order to see a list of kernels.
There appears to be a bug in some versions of GNU/Linux so that this does not
have the desired effect. In some cases one is prompted for a run-level. The correct
run-level should be determined from the file /etc/inittab. It is normally called
S or 1 or even 1S.
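With GRUB, the same effect can be achieved by editing the boot entry at the menu
(press ‘e’) or by typing commands at the grub> prompt. A sketch follows, in which
the kernel path and root device are assumptions that vary between distributions:
grub> kernel /boot/vmlinuz root=/dev/hda1 single
grub> boot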
Once in single-user mode, we can always return to multi-user mode just by
exiting the single-user login.
4.3.2 Shutting down Unix
Anyone can start a Unix-like system, but we have to be an administrator or
‘superuser’ to shut one down correctly. Of course, one could just pull the plug, but
this can ruin the disk filesystem. Even when no users are touching a keyboard
anywhere, a Unix system can be writing something to the disk – if we pull the plug,
we might interrupt a crucial write-operation which destroys the disk contents. The
correct way to shut down a Unix system is to run one of the following programs.
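On most Unix-like systems the programs in question are shutdown, halt and
reboot; the exact options vary between BSD and System V derivatives. A sketch of
typical usage, in GNU/Linux syntax:
shutdown -h now      # halt the system immediately, syncing disks cleanly
shutdown -r +5       # warn logged-in users, then reboot in five minutes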
4.3.3 Booting and shutting down Windows
The boot block is located in the first sector of the bootable drive. It identifies which
partition is to be used to continue with the boot procedure. On each primary
partition of a bootable disk, there is a boot program which ‘knows’ how to load
the operating system it finds there. Windows has a menu-driven boot manager
program which makes it possible for several OSs to coexist on different partitions.
Once the disk partition containing Windows has been located, the program
NTLDR is called to load the kernel. The file BOOT.INI configures the defaults for
the boot manager. After the initial boot, a program is run which attempts to
automatically detect new hardware and verify old hardware. Finally the kernel is
loaded and Windows starts properly.
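As an illustration, a minimal BOOT.INI for a single Windows installation might look
like the following; the ARC path shown is an assumption and depends on the
controller and partition layout:
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows" /fastdetect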
4.4 Configuring and personalizing workstations
Permanent, read–write storage changed PCs from expensive ping-pong games into
tools for work as well as pleasure. Today, disk space is so cheap that it is not
uncommon for even personal workstations to have several hundreds of gigabytes
of local storage.
Flaunting wealth is the sport of the modern computer owner: more disk,
more memory, better graphics. Why? Because it’s there. This is the game of
free enterprise, encouraged by the availability of home computers and personal
workstations. Not so many years before such things existed, however, computers
only existed as large multiuser systems, where hundreds of users shared a few
kilobytes of memory and a processor no more powerful than a now arthritic PC.
Rational resource sharing was not just desirable, it was the only way to bring
computing to ordinary users. In a network, we have these two conflicting interests
in the balance.
4.4.1 Personal workstations or ‘networkstations’?
Today we are spoiled, often with more resources than we know what to do with.
Disk space is a valuable resource which can be used for many purposes. It would
be an ugly waste to allow huge areas of disk to go unused, simply because small
disks are no longer manufactured; but, at the same time, we should not simply
allow anyone to use disk space as they please, just because it is there.
Operating systems which have grown out of home computers (Windows and
Macintosh) take the view that whatever disk resources are left over are for the
local owner to do with as he or she pleases. This is symptomatic of the idea that
one computer belongs to one user. In the world of the network, this is an inflexible
model. Users move around organizations; they ought not to be forced to take their
hardware with them as they move. Allowing users to personalize workstations is
thus a questionable idea in a network environment.
Network sharing, e.g. with NFS, Netware or DFS, allows us to make disk space
available to all hosts on a network. There are positives and negatives with sharing, however. If
sharing was a universal panacea, we would not have local disks: everything would
be shared by the network. This approach has been tried: diskless workstations,
network computers and X-terminals have all flirted with the idea of keeping all
disk resources in one place and using the network for sharing. Such systems
have been a failure: they perform badly, are usually more expensive than an
off-the-shelf PC, and they simply waste a different resource: network bandwidth.
Some files are better placed on a local disk: namely the files which are needed
often, such as the operating system and temporary scratch files, created in the
processing of large amounts of data.
In organizing disk space, we can make the best use of resources, and separate:
• Space for the operating system.
• Space which can be shared and made available for all hosts.
• Space which can be used to optimize local work, e.g. temporary scratch
space, space which can be used to optimize local performance (avoid slow
networking).
• Space which can be used to make distributed backups, for multiple redun-
dancy.
These independent areas of use need to be separated from one another, by
partitioning disks.
4.4.2 Partitioning
Disks can be divided up into partitions. Partitions physically divide the disk
surface into separate areas which do not overlap. The main difference between
two partitions on one disk and two separate disks is that partitions can only be
accessed one at a time, whereas multiple disks can be accessed in parallel.
Disks are partitioned so that files with separate purposes cannot be allowed to
spill over into one another’s space. Partitioning a disk allows us to reserve a fixed
amount of space for a particular purpose, safe in the knowledge that nothing else
will encroach on that space. For example, it makes sense to place the operating
system on a separate partition, and user data on another partition. If these two
independent areas shared common space, the activities of users could quickly
choke the operating system by using up all of its workspace.
In partitioning a system, we have in mind the issues described in the previous
section and try to size partitions appropriately for the tasks they will fulfill. Here
are some practical points to consider when partitioning disks:
• Size partitions appropriately for the jobs they will perform. Bear in mind that
operating system upgrades are almost always bigger than previous versions,
and that there is a general tendency for everything to grow.
• Bear in mind that RISC (e.g. Sun Sparc) compiled code is much larger than
CISC compiled code (e.g. software on an Intel architecture), so software will
take up more space on a RISC system.
• Consider how backups of the partitions will be made. It might save many
complications if disk partitions are small enough to be backed up in one go
with a single tape, or other backup device.
Choosing partitions optimally requires both experience and forethought. Rules of
thumb for sizing partitions change constantly, in response to changing RAM
requirements and operating system sizes, disk prices etc. In the early 1990s
many sites adopted diskless or partially diskless solutions [11], thus centraliz-
ing disk resources. In today’s climate of ever cheaper disk space, there are few
limitations left.
Disk partitioning is performed with a special program. On PC hardware, this
is called fdisk or cfdisk. On Solaris systems the program is called, confusingly,
format. To repartition a disk, we first edit the partition tables. Then we have
to write the changes to the disk itself. This is called labelling the disk. Both of
these tasks are performed from the partitioning programs. It is important to make
sure manually that partitions do not overlap. The partitioning programs do not
normally help us here. If partitions overlap, data will be destroyed and the system
will sooner or later get into deep trouble, as it assumes that the overlapping area
can be used legitimately for two separate purposes.
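As a sketch of the procedure on PC hardware, assuming the first IDE disk /dev/hda
on a GNU/Linux host:
fdisk -l /dev/hda     # print the existing partition table
fdisk /dev/hda        # interactive editing: 'p' prints, 'n' adds a partition, 'w' writes the new label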
Partitions are labelled with logical device names in Unix. As one comes to
expect, these are different in every flavor of Unix. The general pattern is that of
a separate device node for each partition, in the /dev directory, e.g. /dev/sd1a,
/dev/sd1b, /dev/dsk/c0t0d0s0 etc. The meaning of these names is described in
section 4.5.
The introduction of meta-devices and logical volumes in many operating sys-
tems allows one to ignore disk partitions to a certain extent. Logical volumes
provide seamless integration of disks and partitions into a large virtual disk which
can be organized without worrying about partition boundaries. This is not always
desirable, however. Sometimes partitions exist for protection, rather than merely
for necessity.
4.4.3 Formatting and building filesystems
Disk formatting is a way of organizing and finding a way around the surface of a
disk. It is a little bit like painting parking spaces in a car park. We could make a
car park in a field of grass, but everything would get rapidly disorganized. If we
paint fixed spaces and number them, then it is much easier to organize and reuse
space, since people park in an orderly fashion and leave spaces of a standard,
reusable size. On a disk surface, it makes sense to divide up the available space
into sectors or blocks. The way in which different operating systems choose to do
this differs, and thus one kind of formatting is incompatible with another.
The nomenclature of formatting is confused by differing cultures and technolo-
gies. Modern hard disks have intelligent controllers which can map out the disk
surface independently of the operating system which is controlling them. This
means that there is a kind of factory formatting which is inherent to the type of
disk. For instance, a SCSI disk surface is divided up into sectors. An operating
system using a SCSI disk then groups these sectors into new units called blocks
which are a more convenient size to work with, for the operating system. With the
analogy above, it is a little like making a car park for trucks by grouping parking
spaces for cars. It also involves a new set of labels. This regrouping and labelling
procedure is called formatting in PC culture and is called making a filesystem
in Unix culture. Making a filesystem also involves setting up an infrastructure
for creating and naming files and directories. A filesystem is not just a labelling
scheme, it also provides functionality.
If a filesystem becomes damaged, it is possible to lose data. Usually filesystem
checking programs called disk doctors, e.g. the Unix program fsck (filesystem
check), can be used to repair the operating system’s map of a disk. In Unix
filesystems, data which lose their labelling get placed for human inspection in a
special directory which is found on every partition, called lost+found.
The filesystem creation programs for different operating systems go by various
names. For instance, on a Sun host running SunOS/Solaris, we would create a
filesystem on the zeroth partition of disk 0, controller zero with a command like
this to the raw device:
newfs -m 0 /dev/rdsk/c0t0d0s0
The newfs command is a friendly front-end to the mkfs program. The option -m
0, used here, tells the filesystem creation program to reserve zero bytes of special
space on the partition. The default behavior is to reserve ten percent of the total
partition size, which ordinary users cannot write to. This is an old mechanism
for preventing filesystems from becoming too full. On today’s disks, ten percent of
a partition size can be many files indeed, and if we partition our cheap, modern
disks correctly, there is no reason not to allow users to fill them up completely.
This partition is then made available to the system by mounting it. This can either
be performed manually:
mount /dev/dsk/c0t0d0s0 /mountpoint/directory
or by placing it in the filesystem table /etc/vfstab.
GNU/Linux systems have the mkfs command, e.g.
mkfs /dev/hda1
The filesystems are registered in the file /etc/fstab. Other Unix variants register
disks in equivalent files with different names, e.g. HPUX in /etc/checklist (prior
to 10.x) and AIX in /etc/filesystems.
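For illustration, two /etc/fstab entries on a GNU/Linux host might look like this;
the devices and mount points are assumptions:
# device     mount point  type  options   dump  pass
/dev/hda1    /            ext2  defaults  1     1
/dev/hda3    /home        ext2  defaults  1     2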
On Windows systems, disks are detected automatically and partitions are
assigned to different logical drive names. Drive letters C: to Z: are used for non-
floppy disk devices. Windows assigns drive letters based on what hardware it finds
at boot-time. Primary partitions are named first, then each secondary partition is
assigned a drive letter. The format program is used to generate a filesystem on a
drive. The command
format /fs:ntfs /v:spare F:
would create an NTFS filesystem on drive F: and give it a volume label ‘spare’.
The older, insecure FAT filesystem can also be chosen; however, this is not
recommended. The GUI can also be used to partition and format inactive disks.
2. Sometimes Unix administrators speak about reformatting a SCSI disk. This is misleading. There
is no reformatting at the SCSI level; the process referred to here amounts to an error-correcting scan,
in which the intelligent disk controller re-evaluates what parts of the disk surface are undamaged and
can be written to. All disks contain unusable areas which have to be avoided.

4.4.4 Swap space
In Windows operating systems, virtual memory uses filesystem space for saving
data to disk. In Unix-like operating systems, a preferred method is to use a whole,
unformatted partition for virtual memory storage.
A virtual memory partition is traditionally called the swap partition, though few
modern Unix-like systems ‘swap’ out whole processes, in the traditional sense.
The swap partition is now used for paging. It is virtual memory scratch space, and
uses direct disk access to address the partition. No filesystem is needed, because
no functionality in terms of files and directories is needed for the paging system.
The amount of available RAM in modern systems has grown enormously in
relation to the programs being run. Ten years ago, a good rule of thumb was to
allocate a partition twice the size of the total amount of RAM for paging. On heavily
used login servers, this would not be enough. Today, it is difficult to give any firm
guidelines, since paging is far less of a problem due to extra RAM, and there is
less uniformity in host usage.
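As a sketch, assuming a dedicated partition /dev/hda2, a swap area is prepared
and activated on GNU/Linux with:
mkswap /dev/hda2      # write a swap signature to the partition
swapon /dev/hda2      # start paging to it
It can then be registered in /etc/fstab with a line such as
/dev/hda2   swap   swap   defaults   0 0
so that it is activated automatically at boot time.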
4.4.5 Filesystem layout
We have no choice about the layout of the software and support files which are
installed on a host as part of ‘the operating system’. This is decided by the system
designers and cannot easily be changed. Software installation, user registration
and network integration all make changes to this initial state, however. Such
additions to the system are under the control of the system administrator and it is
important to structure these changes according to logical and practical principles
which we shall consider below.
A working computer system has several facets:
• The operating system software distribution,
• Third party software,
• Users’ files,
• Information databases,
• Temporary scratch space.
These are logically separate because:
• They have different functions,
• They are maintained by different sources,
• They change at different rates,
• A different policy of backup is required for each.
Most operating systems have hierarchical file systems with directories and
subdirectories. This is a powerful tool for organizing data. Disks can also be
divided up into partitions. Another issue in sizing partitions is how you plan to

make a backup of those partitions. To make a backup you need to copy all the
data to some other location, traditionally tape. The capacity of different kinds of
tape varies quite a bit, as does the software for performing backups.
The point of directories and partitions is to separate files so as not to mix
together things which are logically separate. There are many things which we
might wish to keep separate: for example,
• User home directories
• Development work
• Commercial software
• Free software
• Local scripts and databases.
One of the challenges of system design is in finding an appropriate directory
structure for all data which are not part of the operating system, i.e. all those files
which are locally maintained.
Principle 13 (Separation I). Data which are separate from the operating system
should be kept in a separate directory tree, preferably on a separate disk partition.
If they are mixed with the operating system file tree it makes reinstallation or
upgrade of the operating system unnecessarily difficult.
The essence of this is that it makes no sense to mix logically separate file trees.
For instance, users’ home directories should never be on a common partition
with the operating system. Indeed, filesystems which grow with a life of their own
should never be allowed to consume so much space as to throttle the normal
operation of the machine.
These days there are few reasons for dividing the files of the operating system
distribution into several partitions (e.g. /, /usr). Disks are large enough to install
the whole operating system distribution on a single independent disk or partition.
If you have done a good job of separating your own modifications from the system
distribution, then there is no sense in making a backup of the operating system
distribution itself, since it is trivial to reinstall from source (CD-ROM or ftp file
base). Some administrators like to keep /var on a separate partition, since it
contains files which vary with time, and should therefore be backed up.
Operating systems often have a special place for installed software. Regrettably
they often break the above rule and mix software with the operating system’s
file tree. Under Unix-like operating systems, the place for installed third party
software is traditionally /usr/local, or simply /opt. Fortunately under Unix,
separate disk partitions can be woven anywhere into the file tree on a directory
boundary, so this is not a practical problem as long as everything lies under a
common directory. In Windows, software is often installed in the same directory as
the operating system itself; also Windows does not support partition mixing in the
same way as Unix so the reinstallation of Windows usually means reinstallation
of all the software as well.

Data which are installed or created locally are not subject to any constraints,
however; they may be installed anywhere. One can therefore find a naming scheme
which gives the system logical clarity. This benefits users and simplifies management.
Again we may use directories for this purpose. Operating systems which descended
from DOS also have the concept of drive numbers like A:, B:, C: etc. These are
assigned to different disk partitions. Some Unix operating systems have virtual
file systems which allow one to add disks transparently without ever reaching a
practical limit. Users never see partition boundaries. This has both advantages
and disadvantages since small partitions are a cheap way to contain groups of
misbehaving users, without resorting to disk quotas.
4.4.6 Object orientation: separation of independent issues
The computing community is currently riding a wave of affection for object orien-
tation as a paradigm in computer languages and programming methods. Object
orientation in programming languages is usually presented as a fusion of two
independent ideas: classification of data types and access control based on scope.
The principle from which this model has emerged is simpler than this, however: it
is simply the observation that information can be understood and organized most
efficiently if logically independent items are kept separate.3 This simple idea is a
powerful discipline, but like most disciplines it requires a strong will on the part
of a system administrator in order to avoid a decline into chaos. We can restate
the earlier principle about operating system separation now more generally:
Principle 14 (Separation II). Data which are logically separate belong in
separate directory trees, perhaps on separate filesystems.
The basic filesystem objects, in order of global to increasingly local, are disk par-
tition, directory and file. As system administrators, we are not usually responsible
for the contents of files, but we do have some power to decide their organization by
placing them in carefully labelled directories, within partitions. Partitions are use-
ful because they can be dumped (backed-up to tape, for instance) as independent
units. Directories are good because they hide and group related files into units.
Many institutions make backups of the whole operating system partition
because they do not have a system for separating the files which they have
modified, or configured specially. The number of actual files one needs to keep is
usually small. For example
• The password and group databases
• Kernel configuration
• Files in /etc like services, default configurations files
• Special startup scripts.
3. It is sometimes claimed that object orientation mimics the way humans think. This, of course, has
no foundation in the cognitive sciences. A more careful formulation would be that object orientation
mimics the way in which humans organize and administrate. That has nothing to do with the
mechanisms by which thoughts emerge in the brain.

It is easy to make a copy of these few files in a location which is independent of
the locations where the files actually need to reside, according to the rules of the
operating system.
A good solution to this issue is to make master copies of files like /etc/group,
/etc/services, /etc/sendmail.cf etc., in a special directory which is separate
from the OS distribution. For example, you might choose to collect all of these in a
directory such as /local/custom and to use a script, or cfengine to make copies
of these master files in the actual locations required by the operating system. The
advantages to this approach are
• RCS version control of changes is easy to implement
• Automatic backup and separation
• Ease of distribution to other hosts.
The exception to this rule must be the password database /etc/passwd which
is actually altered by an operating system program /bin/passwd rather than the
system administrator. In that case the script would copy from the system partition
to the custom directory.
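A minimal sketch of such a copying script, assuming the masters are kept under
/local/custom/etc (the file list is purely illustrative):
#!/bin/sh
# Push master copies of configuration files into the locations the OS expects.
for f in group services sendmail.cf; do
    cp /local/custom/etc/$f /etc/$f
done
# The password file goes the other way: the live copy is maintained by /bin/passwd.
cp /etc/passwd /local/custom/etc/passwd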
Keeping a separate disk partition for software that you install from third parties
makes clear sense. It means that you will not have to reinstall that software later
when you upgrade your operating system. The question then arises as to how
such software should be organized within a separate partition.
Traditionally, third party software has been installed in a directory under
/usr/local or simply /local. Software packages are then dissected into libraries,
binaries and supporting files which are installed under /local/lib, /local/bin
and /local/etc, to mention just a few examples. This keeps third party software
separate from operating system software, but there is no separation of the third
party software. Another solution would be to install one software package per
directory under /local.
4.5 Installing a Unix disk
Adding a new disk or device to a Unix-like host involves some planning. The first
concern is what type of hard-disk. There are several types of disk interface used
for communicating with hard-disks.
• ATA/IDE disks: ATA devices have suffered from a number of limitations in
data capacity and number of disks per controller. However, most of these
barriers have been broken with new addressing systems and programming
techniques. Both parallel (old ribbon cables) and serial interfaces now exist.
• SCSI disks: The SCSI interface can be used for devices other than disks too.
It is better than IDE at multitasking. The original SCSI interface was limited
to 7 devices in total per interface. Wide SCSI can deal with 14 disks. See also
the notes in chapter 2.

• IEEE 1394 disks: Implementations include Sony’s iLink and Apple Com-
puter’s FireWire brandnames. These disks use a superior technology (some
claim) but have found limited acceptance due to their expense.
In order to connect a new disk to a Unix host, we have to power down the system.
Here is a typical checklist for adding a SCSI disk to a Unix system.
• Power down the computer.
• Connect disk and terminate SCSI chain with proper terminator.
• Set the SCSI id of the disk so that it does not coincide with any other disks.
On Solaris hosts, SCSI id 6 of controller zero is typically reserved for the
primary CD-ROM drive.
• On SUN machines one can use the ROM command probe-scsi from the
monitor (or probe-scsi-all, if there are several disk interfaces) to probe
the system for disks. This shows which disks are found on the bus. It can be
useful for trouble-shooting bad connections, or accidentally overlapping disk
IDs etc.
• Partition and label the disk. Update the defect list.
• Edit the /etc/fstab filesystem table or equivalent to mount the disk. See
also next section.
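Once the hardware is in place, the disk is brought into service from the command
line. A sketch on Solaris, in which the device name c0t2d0 and the mount point
are assumptions:
format                              # partition and label the new disk (interactive)
newfs /dev/rdsk/c0t2d0s7            # build a filesystem on the chosen slice
mkdir /export/data
mount /dev/dsk/c0t2d0s7 /export/data
Finally, a matching line is added to /etc/vfstab so that the mount survives a reboot.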
4.5.1 mount and umount
To make a disk partition appear as part of the file tree it has to be mounted.
We say that a particular filesystem is mounted on a directory or mountpoint. The
command mount mounts filesystems defined in the filesystem table file. This is a
file which holds data for mount to read.
The filesystem table has different names on different implementations of Unix.
Solaris 1 (SunOS) /etc/fstab
Solaris 2 /etc/vfstab
HPUX /etc/checklist or /etc/fstab
AIX /etc/filesystems
IRIX /etc/fstab
ULTRIX /etc/fstab
OSF1 /etc/fstab
GNU/Linux /etc/fstab
These files also have different syntax on different machines, which can be found
in the manual pages. The syntax of the command is
mount filesystem directory type (options)

There are two main types of filesystem – a disk filesystem (called ufs, hfs etc.,
meaning a physical disk) and the NFS network filesystem. If we mount a
4.2 filesystem it means that it is, by definition, a local disk on our system and is
described by some logical device name like /dev/something. If we mount an NFS
filesystem, we must specify the name of the filesystem and the name of the host
to which the physical disk is attached.
Here are some examples, using the SunOS filesystem list above:
mount -a # mount all in fstab
mount -at nfs # mount all in fstab which are type nfs
mount -at 4.2 # mount all in fstab which are type 4.2
mount /var/spool/mail # mount only this fs with options given in fstab
(The -t option does not work on all Unix implementations.) Of course, we can type
the commands manually too, if there is no entry in the filesystem table. For exam-
ple, to mount an nfs filesystem on machine ‘wigner’ called /site/wigner/local
so that it appears in our filesystem at /mounted/wigner, we would write
mount wigner:/site/wigner/local /mounted/wigner
The directory /mounted/wigner must exist for this to work. If it contains files,
then these files will no longer be visible when the filesystem is mounted on top of
it, but they are not destroyed. Indeed, if we then unmount using
umount /mounted/wigner
(the spelling umount is correct) then the files will reappear again. Some imple-
mentations of NFS allow filesystems to be merged at the same mount point,
so that the user sees a mixture of all the filesystems mounted at the same
point.
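If the mount is to be permanent, it can be registered in the filesystem table instead
of being typed by hand; on a GNU/Linux host the corresponding /etc/fstab line
might look like this, with illustrative options:
wigner:/site/wigner/local   /mounted/wigner   nfs   rw,hard,intr   0   0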
4.5.2 Disk partition device names
The convention for naming disk devices in BSD and System V Unix differs. Let us
take SCSI disks as an example. Under BSD, the SCSI disks have names according
to the following scheme:
/dev/sd0a First partition of disk 0 of the standard
disk controller. This is normally the root
file system /.
/dev/sd0b Second partition of disk 0 on the standard
disk controller. This is normally used for
the swap area.
/dev/sd1c Third partition of disk 1 on the standard
disk controller. This partition is usually
reserved to span the entire disk, as a
reminder of how large the disk is.

System V Unix employs a more complex, but also more general naming scheme.
Here is an example from Solaris 2:
/dev/dsk/c0t3d0s0 Disk controller 0, target (disk) 3,
device 0, segment (partition) 0
/dev/dsk/c1t1d0s4 Disk controller 1, target (disk) 1,
device 0, segment (partition) 4
Not all systems distinguish between target and device. On many systems you will
find only t or d but not both.
4.6 Installation of the operating system
The installation process is one of the most destructive things we can do to a
computer. Everything on the disk will disappear during the installation process.
One should therefore have a plan for restoring the information if it should turn
out that reinstallation was in error.
Today, installing a new machine is a simple affair. The operating system comes
on some removable medium (like a CD or DVD) that is inserted into the player
and booted. One then answers a few questions and the installation is done.
Operating systems are now large so they are split up into packages. One is
expected to choose whether to install everything that is available or just certain
packages. Most operating systems provide a package installation program which
helps this process.
In order to answer the questions about installing a new host, information must
be collected and some choices made:
• We must decide a name for each machine.
• We need an unused Internet address for each.
• We must decide how much virtual memory (swap) space to allocate.
• We need to know the local netmask and domain name.
• We need to know the local timezone.
We might need to know whether a Network Information Service (NIS) or Windows
domain controller is used on the local network; if so, how to attach the new host
to this service. When we have this information, we are ready to begin.
4.6.1 Solaris
Solaris can be installed in a number of ways. The simplest is from CD-ROM. At
the boot prompt, we simply type
? boot cdrom

This starts a graphical user interface which leads one through the steps of the
installation from disk partitioning to operating system installation. The procedure
is well described in the accompanying documentation, indeed it is quite intuitive,
so we needn’t belabor the point here. The installation procedure proceeds through
the standard list of questions, in this order:
• Preferred language and keyboard type.
• Name of host.
• Net interfaces and IP addresses.
• Subscribe to NIS or NIS plus domain, or not.
• Subnet mask.
• Timezone.
• Choose upgrade or install from scratch.
Solaris installation addresses an important issue, namely that of customization
and integration. As part of the installation procedure, Solaris provides a service
called Jumpstart, which allows hosts to execute specialized scripts which cus-
tomize the installation. In principle, the installation of hosts can be completely
automated using Jumpstart. Customization is extremely important for integrating
hosts into a local network. As we have seen, vendor standard models are almost
never adequate in real networks. By making it possible to adapt the installation
procedure to local requirements, Solaris makes a great contribution to automatic
network configuration.
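To give a flavor of such customization, a minimal Jumpstart profile might contain
entries like the following; the disk slices, sizes and software cluster are assumptions
for illustration only:
install_type    initial_install
system_type     standalone
partitioning    explicit
filesys         c0t0d0s0  2048  /
filesys         c0t0d0s1  512   swap
cluster         SUNWCuser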
Installation from CD-ROM assumes that every host has a CD-ROM from which
to install the operating system. This is not always the case, so operating systems
also enable hosts with CD-ROM players to act as network servers for their
CD-ROMs, thus allowing the operating system to be installed directly from the
network.
4.6.2 GNU/Linux
Installing GNU/Linux is simply a case of inserting a CD-ROM and booting from
it, then following the instructions. However, GNU/Linux is not one, but a family
of operating systems. There are many distributions, maintained by different orga-
nizations and they are installed in different ways. Usually one balances ease of
installation with flexibility of choice.
What makes GNU/Linux installation unique amongst operating system instal-
lations is the sheer size of the program base. Since every piece of free software is
bundled, there are literally hundreds of packages to choose from. This presents
GNU/Linux distributors with a dilemma. To make installation as simple as possi-
ble, package maintainers make software self-installing with some kind of default
configuration. This applies to user programs and to operating system services.
Here lies the problem: installing network services which we don’t intend to use
presents a security risk to a host. A service which is installed is a way into the

system. A service which we are not even aware of could be a huge risk. If we install
everything, then, we are faced with uncertainty in knowing what the operating
system actually consists of, i.e. what we are getting ourselves into.
As with most operating systems, GNU/Linux installations assume that you are
setting up a stand-alone PC which is yours to own and do with as you please.
Although GNU/Linux is a multiuser system, it is treated as a single-user system.
Little thought is given to the effect of installing services like news servers and web
servers. The scripts which are bundled for adding user accounts also treat the
host as a little microcosm, placing users in /home and software in /usr/local.
To make a network workstation out of GNU/Linux, we need to override many of
its idiosyncrasies.
4.6.3 Windows
The installation of Windows4 is similar to both of the above. One inserts a CD-ROM
and boots. Here it is preferable to begin with an already partitioned hard-drive
(the installation program is somewhat ambiguous with regard to partitions). On
rebooting, we are asked whether we wish to install Windows anew, or repair an
existing installation. This is rather like the GNU/Linux rescue disk. Next we choose
the filesystem type for Windows to be installed on, either DOS or NTFS. There is
clearly only one choice: installing on a DOS partition would be irresponsible with
regard to security. Choose NTFS.
Windows reboots several times during the installation procedure, though this
has improved somewhat in recent versions. The first time around, it converts
its default DOS partition into NTFS and reboots again. Then the remainder
of the installation proceeds with a graphical user interface. There are several
installation models for Windows workstations, including regular, laptop, minimum
and custom. Having chosen one of these, one is asked to enter a license key for
the operating system. The installation procedure asks us whether we wish to use
DHCP to configure the host with an IP address dynamically, or whether a static IP
address will be set. After various other questions, the host reboots and we can log
in as Administrator.
Windows service packs are patch releases which contain important upgrades.
These are refreshingly trivial to install on an already-running Windows system.
One simply inserts them into the CD-ROM drive and up pops the Explorer program
with instructions and descriptions of contents. Clicking on the install link starts
the upgrade. After a service pack upgrade, Windows reboots predictably and then
we are done. Changes in configuration require one to reinstall service packs,
however.
4.6.4 Dual boot
There are many advantages to having both Windows and GNU/Linux (plus any
other operating systems you might like) on the same PC. This is now easily
4. Since Windows 9x is largely history, and NT changes names (NT, 2000, XP, ...) faster than a
speeding bullet, I have chosen to refer to ‘Windows’, meaning modern NT-based Windows, and largely
ignore the older versions in this book.

achieved with the installation procedures provided by these two operating systems.
It means, however, that we need to be able to choose the operating system from
a menu at boot time. The boot-manager GRUB that is now part of GNU/Linux
distributions performs this task very well, so one scarcely needs to think about
this issue anymore. Note, however, that it is highly advisable to install Windows
before installing GNU/Linux, since the latter tends to have more respect for the
former than vice versa! GNU/Linux can preserve an existing Windows partition,
and even repartition the disk appropriately.
4.6.5 Configuring name service lookup
Name service lookup must be configured in order for a system to be able to look
up hostnames and Internet addresses. On Windows systems, one configures a
list of name servers by going to the menu for TCP/IP network configuration. On
Unix hosts there are often graphical tools for doing this too. However, automation
requires a non-interactive approach, for scalability, so we consider the low-level
approach to this. The most important file in this connection is /etc/resolv.conf.
Ancient IRIX systems seem to have placed this file in /usr/etc/resolv.conf.
This old location is obsolete. Without the resolver configuration file, a host will
often stop dead whilst trying, in vain, to look up Internet addresses. Hosts which
use NIS or NIS plus might be able to look up local names; names can also be
registered manually in /etc/hosts. The most important features of this file are
the definition of the domain-name and a list of nameservers which can perform
the address translation service. These nameservers must be listed as IP numerical
addresses. The format of the file is as shown.
domain domain.country
nameserver 192.0.2.10
nameserver 158.36.85.10
nameserver 129.241.1.99
Some prefer to use the search directive in place of the domain directive, since it is
more general and allows several domains to be searched in special circumstances:
search domain.country
nameserver 192.0.2.10
nameserver 192.0.2.85
nameserver 192.0.2.99
The default is to search the local domain, so these are equivalent unless several
domains are to be searched. On the host which is itself a nameserver, the first
nameserver should be listed as the loopback address, so as to avoid sending traffic
out onto the network when none is required:
search domain.country
nameserver 127.0.0.1
nameserver 192.0.2.10
nameserver 192.0.2.99

DNS has several competitor services. A trivial mapping of hostnames to IP
addresses is performed by the /etc/hosts database, and this file can be shared
using NIS or NIS plus. Windows had the WINS service, though this is now dep-
recated. Modern Unix-like systems allow us to choose the order in which these
competing services are given priority when looking up hostname data. Unfortu-
nately there is no standard way of configuring this. GNU/Linux and public domain
resolver packages for old SunOS (resolv+) use a file called /etc/host.conf.
The format of this file is
order hosts,bind,nis
multi on
This example tells the lookup routines to look in the /etc/hosts file first, then to
query DNS/BIND and then finally to look at NIS. The resolver routines quit after the
first match they find; they do not query all three databases every time. Solaris, and
now also some GNU/Linux distributions, use a file called /etc/nsswitch.conf
which is a general configuration for all database services, not just the hostname
service.
# files,nis,nisplus,dns
passwd: files
group: files
hosts: files dns
ipnodes: files dns
networks: files
protocols: files
rpc: files
ethers: files
netmasks: files
bootparams: files
Note that Solaris has ‘ipnodes’ which is used for name lookup in the new IPv6
compatible lookup routines. If DNS is not added here, Solaris does not find IPv6
addresses registered in DNS.
4.6.6 Diskless clients
Diskless workstations are, as per the name, workstations which have no disk at
all. They are now rare, but with the increase of network speeds, they are being
discussed again in new guises such as ‘thin clients’.
Diskless workstations know absolutely nothing other than the MAC address
of their network interface (Ethernet address). In earlier times, when disks were
expensive, diskless workstations were seen as a cheap option. Diskless clients
require disk space on a server-host in order to function, i.e. some other host
which does have a disk, needs to be a disk server for the diskless clients. Most
vendors supply a script for creating diskless workstations. This script is run on
the server-host.

When a diskless system is switched on for the first time, it has no files
and knows nothing about itself except the Ethernet address on its network
card. It proceeds by sending a RARP (reverse address resolution protocol) or
BOOTP or DHCP request out onto the local subnet in the hope that a server
(in.rarpd) will respond by telling it its Internet address. The server hosts must
be running two services: rpc.bootparamd and tftpd, the trivial file transfer
program. This is another reason for arguing against diskless clients: these services
are rather insecure and could be a security risk for the server host. A call to the
rpc.bootparamd daemon transfers data about where the diskless station can find
a server, and what its swap-area and root directory are called in the file tree of this
server. The root directory and swap file are mounted using the NFS. The diskless
client loads its kernel from its root directory and thereafter everything proceeds as
normal. Diskless workstations swap to files rather than partitions. The command
mkfile is used to create a fixed-size file for swapping.
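For example, on the server one might create a 256 MB swap file for a hypothetical
client called client1 with:
mkfile 256m /export/swap/client1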
4.6.7 Dual-homed host
A host with two network interfaces, both of which are coupled to a network, is
called a dual-homed host. Dual-homed hosts are important in building firewalls
for network security. A host with two network interfaces can be configured to
automatically forward packets between the networks (act as a bridge) or to block
such forwarding. The latter is normal in a firewall configuration, where it is
left to proxy software to forward packets only after some form of inspection
procedure. Most vendor operating systems will configure dual-network interfaces
automatically, with forwarding switched off. Briefly here is a GNU/Linux setup for
two network interfaces.
1. Compile a new kernel with support for both types of interface, unless both
are of the same type.
2. Change the lilo configuration to detect both interfaces, if necessary, by
adding:
append="ether=0,0,eth0 ether=0,0,eth1"
to /etc/lilo.conf.
3. The new interface can be assigned an IP address in the file /etc/init.d/
network.
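As an illustration, with an assumed interface name and address, the second
interface could be brought up and packet forwarding disabled as follows:
ifconfig eth1 192.0.2.65 netmask 255.255.255.0 up
echo 0 > /proc/sys/net/ipv4/ip_forward   # do not forward packets between the two networks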
One must then decide how the IP addresses are to be registered in the DNS service.
Will the host have the same name on both interfaces, or will it have a different
name? Packet routing on dual-homed hosts has been discussed in ref. [272].
4.6.8 Cloning systems
We are almost never interested in installing every machine separately. A system
administrator usually has to install ten, twenty or even a hundred machines at a

time. He or she would also like them to be as far as possible the same, so that
users will always know what to expect. This might sound like a straightforward
problem, but it is not. There are several approaches.
• A few Unix-like operating systems provide a solution to this using package
templates so that the installation procedure becomes standardized.
• The hard disks of one machine can be physically copied and then the
hostname and IP address can be edited afterwards.
• All software can be placed on one host and shared using NFS, or another
shared filesystem.
Each of these approaches has its attractions. The NFS/shared filesystem approach
is without doubt the least amount of work, since it involves installing the software
only once, but it is also the slowest in operation for users.
As an example of the first, here is how Debian GNU/Linux tackles this problem
using the Debian package system:
Install one system and record its package selections:
dpkg --get-selections > file
On the remaining machines, type:
dpkg --set-selections < file
and then run the package installation program.
Alternatively, one can install a single package with:
dpkg -i package.deb
This method has now been superseded by an extremely elegant package system
using the apt-get command. Installation of a package is completely transparent
as to source and dependencies:
host# apt-get install bison
Reading Package Lists... Done
Building Dependency Tree... Done
The following NEW packages will be installed:
bison
0 packages upgraded, 1 newly installed, 0 to remove and 110 not upgraded.
Need to get 387kB of archives. After unpacking 669kB will be used.
Get:1 http://sunsite.uio.no stable/main bison 1:1.35-3 [387kB]
Fetched 387kB in 0s (644kB/s)
Selecting previously deselected package bison.
(Reading database ... 10771 files and directories currently installed.)
Unpacking bison (from .../bison_1%3a1.35-3_i386.deb) ...
Setting up bison (1.35-3) ...

In RedHat Linux, a similar mechanism looks like this:
rpm -ivh package.rpm
Disks can be mirrored directly, using some kind of cloning program. For
instance, the Unix tape archive program (tar) can be used to copy the entire direc-
tory tree of one host. In order to make this work, we first have to perform a basic
installation of the OS, with zero packages, and then copy over all remaining files
which constitute the packages we require. In the case of the Debian system above,
there is no advantage to doing this, since the package installation mechanism can
do the same job more cleanly. For example, with a GNU/Linux distribution:
tar --exclude /proc --exclude /lib/libc.so.5.4.23 \
--exclude /etc/hostname --exclude /etc/hosts -c -v \
-f host-imprint.tar /
Note that several files must be excluded from the dump. The file /lib/libc.so.
5.4.23 is the C library; if we try to write this file back from backup, the destination
computer will crash immediately. /etc/hostname and /etc/hosts contain definitions
of the hostname of the destination computer, and must be left unchanged.
Once a minimal installation has been performed on the destination host, we can
access the tar file and unpack it to install the image:
(cd / ; tar xfp /mnt/dump/my-machine.tar; lilo)
Afterwards, we have to install the boot sector, with the lilo command. The cloning
of Unix systems has been discussed in refs. [297, 339].
Note that Windows systems cannot be cloned without special software (e.g.
Norton Ghost or PowerQuest Drive Image). There are fundamental technical rea-
sons for this. One is the fact that many host parameters are configured in the
impenetrable system registry. Unless all of the hardware and software details of
every host are the same, this will fail with an inconsistency. Another reason is
that users are registered in a binary database with security IDs which can have
different numerical values on each host. Finally, domain registration cannot be
cloned. A host must register manually with its domain server. Novell Zenworks
contains a cloning solution that ties NDS objects to disk images.
4.7 Software installation
Most standard operating system installations will not leave us in possession of
an immediately usable system. We also need to install third party software in
order to get useful work out of the host. Software installation is a similar problem
to that of operating system installation. However, third party software originates
from a different source than the operating system; it is often bound by license
agreements and it needs to be distributed around the network. Some software has
to be compiled from source. We therefore need a thoughtful strategy for dealing
with software. Specialized schemes for software installation were discussed in refs.
[85, 199] and a POSIX draft was discussed in ref. [18], though this idea has not
been developed into a true standard. Instead, de-facto and proprietary standards
have emerged.

4.7.1 Free and proprietary software
Unlike most other popular operating systems, Unix grew up around people who
wrote their own software rather than relying on off-the-shelf products. The Internet
now contains gigabytes of software for Unix systems which cost nothing. Tradi-
tionally, only large companies like the oil industry and newspapers could afford
off-the-shelf software for Unix.
There are therefore two kinds of software installation: the installation of soft-
ware from binaries and the installation of software from source. Commercial
software is usually installed from a CD by running an installation program and
following the instructions carefully; the only decision we need to make is where we
want to install the software. Free software and open source software usually come
in source form and must therefore be compiled. Unix programmers have gone to
great lengths to make this process as simple as possible for system administrators.
4.7.2 Structuring software
The first step in installing software is to decide where we want to keep it. We could,
naturally, locate software anywhere we like, but consider the following:
• Software should be separated from the operating system’s installed files,
so that the OS can be reinstalled or upgraded without ruining a software
installation.
• Unix-like operating systems have a naming convention. Compiled software
can be collected in a special area, with a bin directory and a lib directory
so that binaries and libraries conform to the usual Unix conventions. This
makes the system consistent and easy to understand. It also keeps the
program search PATH variable simple.
• Home-grown files and programs which are special to our own particular site
can be kept separate from files which could be used anywhere. That way,
we define clearly the validity of the files and we see who is responsible for
maintaining them.
The directory traditionally chosen for installed software is called /usr/local.
One then makes subdirectories /usr/local/bin and /usr/local/lib and so
on [147]. Unix has a de-facto naming standard for directories which we should
try to stick to as far as reason permits, so that others will understand how our
system is built up.
• bin Binaries or executables for normal user programs.
• sbin Binaries or executables for programs which only system administrators
require. Those files in /sbin are often statically linked to avoid problems
with libraries which lie on unmounted disks during system booting.
• lib Libraries and support files for special software.
• etc Configuration files.

• share Files which might be shared by several programs or hosts. For
instance, databases or help-information; other common resources.
[Figure omitted: a directory tree showing /usr/local containing bin/, lib/, etc/, sbin/ and share/ directories, together with gnu/ and site/ subtrees which repeat the same layout.]
Figure 4.1: One way of structuring local software. There are plenty of things to criticize
here. For instance, is it necessary to place this under the traditional /usr/local tree?
Should GNU software be underneath /usr/local? Is it even necessary or desirable to
formally distinguish GNU software from other software?
One suggestion for structuring installed software on a Unix-like host is shown
in figure 4.1. Another is shown in figure 4.2. Here we divide these into three
categories: regular installed software, GNU software (i.e. free software) and site-
software. The division is fairly arbitrary. The reason for this is as follows:
• /usr/local is the traditional place for software which does not belong to
the OS. We could keep everything here, but we will end up installing a lot of
software after a while, so it is useful to create two other sub-categories.
• GNU software, written by and for the Free Software Foundation, forms a
self-contained set of tools which replace many of the older Unix equivalents,
like ls and cp. GNU software has its own system of installation and set of
standards. GNU will also eventually become an operating system in its own
right. Since these files are maintained by one source it makes sense to keep
them separate. This also allows us to place GNU utilities ahead of others in
a user’s command PATH.
• Site-specific software includes programs and data which we build locally to
replace the software or data which follows with the operating system. It also
includes special data like the database of aliases for E-mail and the DNS
tables for our site. Since it is special to our site, created and maintained by
our site, we should keep it separate so that it can be backed up often and
separately.
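Creating the skeleton of such a layout is trivial; a sketch, using a shell with brace expansion such as bash, is:
mkdir -p /usr/local/{bin,lib,etc,sbin,share}
mkdir -p /usr/local/gnu/{bin,lib,etc,sbin,share}
mkdir -p /usr/local/site/{bin,lib,etc,sbin,share}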
A similar scheme to this was described in refs. [201, 70, 328, 260], in a system
called Depot. In the Depot system, software is installed under a file node called
/depot which replaces /usr/local. In the depot scheme, separate directories
are maintained for different machine architectures under a single file tree. This
has the advantage of allowing every host to mount the same filesystem, but the
disadvantage of making the single filesystem very large. Software is installed in

[Figure omitted: a directory tree in which local software lives under a separate top level such as /software or /local, with subtrees (e.g. admin/) each containing bin/, lib/, etc/, sbin/ and share/ directories.]
Figure 4.2: Another, more rational way of structuring local software. Here we drop the
affectation of placing local modifications under the operating system’s /usr tree and
separate it completely. Symbolic links can be used to alias /usr/local to one of these
directories for historical consistency.
a package-like format under the depot tree and is linked in to local hosts with
symbolic links. A variation on this idea from the University of Edinburgh was
described in ref. [10], and another from the University of Waterloo uses a file tree
/software to similar ends in ref. [273]. In the Soft environment [109], software
installation and user environment configuration are dealt with in a combined
abstraction.
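The symbolic-link idea behind Depot can be approximated with ordinary tools; GNU stow, for example, automates exactly this kind of link farm. A sketch (the package name and version are only placeholders):
# install the package into its own subtree under a depot-like area
./configure --prefix=/usr/local/stow/gnuplot-3.7
make
make install
# link its bin/, lib/ etc. into /usr/local with symbolic links
cd /usr/local/stow && stow gnuplot-3.7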
4.7.3 GNU software example
Let us now illustrate the GNU method of installing software which has become
widely accepted. This applies to any type of Unix, and to Windows if one has
a Unix compatibility kit, such as Cygwin or UWIN. To begin compiling soft-
ware, one should always start by looking for a file called README or INSTALL.
This tells us what we have to do to compile and install the software. In most
cases, it is only necessary to type a couple of commands, as in the following
example. When installing GNU software, we are expected to give the name of a
prefix for installing the package. The prefix in the above cases is /usr/local for
ordinary software, /usr/local/gnu for GNU software and /usr/local/site for
site-specific software. Most software installation scripts place files under bin and
lib automatically. The steps are as follows.
1. Make sure we are working as a regular, unprivileged user. The software
installation procedure might do something which we do not agree with. It is
best to work with as few privileges as possible until we are sure.
2. Collect the software package by ftp from a site like ftp.uu.net or
ftp.funet.fi etc. Use a program like ncftp for painless anonymous login.
3. Unpack the file using tar zxf software.tar.gz, if using GNU tar, or
gunzip software.tar.gz; tar xf software.tar if not.
4. Enter the directory which is unpacked, cd software.

5. Type: configure --prefix=/usr/local/gnu. This checks the state of our
local operating system and other installed software and configures the soft-
ware to work correctly there.
6. Type: make.
7. If all goes well, type make -n install. This indicates what the make program
will install and where. If we have any doubts, this allows us to make changes
or abort the procedure without causing any damage.
8. Finally, switch to privileged root/Administrator mode with the su command
and type make install. This should be enough to install the software. Note,
however, that this step is a security vulnerability. If one blindly executes
commands with privilege, one can be tricked into installing back-doors and
Trojan horses, see chapter 11.
9. Some installation scripts leave files with the wrong permissions so that
ordinary users cannot access the files. We might have to check that the
files have a mode like 555 so that normal users can access them. This is
in spite of the fact that installation programs attempt to set the correct
permissions [287].
Today this procedure should be more or less the same for just about any software
we pick up. Older software packages sometimes provide only Makefiles which you
must customize yourself. Some X11-based windowing software requires you to
use the xmkmf X-make-makefiles command instead of configure. You should
always look at the README file.
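Putting the steps together, a typical session for a hypothetical package fetched by ftp might look like this (the package name is a placeholder):
$ tar zxf package-1.0.tar.gz
$ cd package-1.0
$ less README
$ ./configure --prefix=/usr/local/gnu
$ make
$ make -n install        # check where files will be placed
$ su
# make install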
4.7.4 Proprietary software example
If we are installing proprietary software, we will have received a copy of the program
on a CD-ROM, together with licensing information, i.e. a code which activates the
program. The steps are somewhat different.
1. To install from CD-ROM we must start work with root/Administrator privi-
leges, so the authenticity of the CD-ROM should be certain.
2. Insert the CD-ROM into the drive. Depending on the operating system, the
CD-ROM might be mounted automatically or not. Check this using the
mount command with no arguments, on a Unix-like system. If the CD-ROM
has not been mounted, then, for standard CD-ROM formats, the following
will normally suffice:
mkdir /cdrom if necessary
mount /dev/cdrom /cdrom
For some manufacturers, or on older operating systems, we might have
to specify the type of filesystem on the CD-ROM. Check the installation
instructions.

3. On a Windows system a clickable icon appears to start the installation
program. On a Unix-like system we need to look for an installation script
cd /cdrom/ cd-name
less README
./install-script
4. Follow the instructions.
Some proprietary software requires the use of a license server, such as lmgrd.
This is installed automatically, and we are required only to edit a configuration
file with a license key which is provided, in order to complete the installation.
Note however, that if we are running multiple licensed products on a host, it is
not uncommon that these require different and partly incompatible license servers
which interfere with one another. If possible, one should keep to only one license
server per subnet.
4.7.5 Installing shared libraries
Systems which use shared libraries or shared objects sometimes need to be
reconfigured when new libraries are added to the system. This is because the
names of the libraries are cached to provide fast access. The system will not look
for a library if it is not in the cache file.
• SunOS (prior to Solaris 2): After adding a new library, one must run the com-
mand ldconfig lib-directory. The file /etc/ld.so.cache is updated.
• GNU/Linux: New library directories are added to the file /etc/ld.so.conf.
Then one runs the command ldconfig. The file /etc/ld.so.cache is
updated.
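For example, after installing a new library under /usr/local/gnu/lib on a GNU/Linux host, one might run the following (the library name at the end is only illustrative):
echo /usr/local/gnu/lib >> /etc/ld.so.conf    # register the new library directory
ldconfig                                      # rebuild /etc/ld.so.cache
ldconfig -p | grep libreadline                # check that the library is now in the cache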
4.7.6 Configuration security
In the preceding sections we have looked at some examples and suggestions for
dealing with software installation. Let us now take a step back from the details to
analyze the principles underlying these.
The first is a principle which we shall return to many times in this book. It is
one of the key principles in computer science, and we shall be repeating it with
slightly different words again and again.
Principle 15 (Separation III). Independent systems should not interfere with
one another, or be confused with one another. Keep them in separate storage
areas.
The reason is clear: if we mix up files which do not belong together, we lose track of
them. They become obscured by a lack of structure. They vanish into anonymity.
The reason why all modern computer systems have directories for grouping files,
is precisely so that we do not have to mix up all files in one place. This was

discussed in section 4.4.5. The application to software installation is clear: we
should not ever consider installing software in /usr/bin or /bin or /lib or /etc
or any directory which is controlled by the system designers. To do so is like
lying down in the middle of a freeway and waiting for a new operating system
or upgrade to roll over us. If we mix local modifications with operating system
files, we lose track of the differences in the system, others will not be able to see
what we have done. All our hard work will be for nothing when a new system is
installed.
Suggestion 1 (Vigilance). Be on the lookout for software which is configured,
by default, to install itself on top of the operating system. Always check the
destination using make -n install before actually committing to an installation.
Programs which are replacements for standard operating system components
often break the principle of separation.
(Footnote: Software originating in BSD Unix is often an offender, since it is designed to be a part of BSD
Unix, rather than an add-on, e.g. sendmail and BIND.)
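One crude way of acting on this suggestion is to let make reveal its plans and then search them for suspicious destinations; a sketch, assuming we have chosen /usr/local as our prefix, is:
# dry run: list the commands make would execute, and flag anything headed outside /usr/local
make -n install | egrep '(^| )(cp|mv|install) ' | grep -v /usr/local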
The second important point above is that we should never work with root priv-
ileges unless we have to. Even when we are compiling software from source,
we should not start the compilation with superuser privileges. The reason is
clear: why should we trust the source of the program? What if someone has
placed a command in the build instructions to destroy the system, plant a
virus or open a back-door to intrusion? As long as we work with low priv-
ilege then we are protected, to a degree, from problems like this. Programs
will not be able to do direct and pervasive damage, but they might still be
able to plant Trojan horses that will come into effect when privileged access is
acquired.
Principle 16 (Limited privilege). No process or file should be given more
privileges than it needs to do its job. To do so is a security hazard.
Another use for this principle arises when we come to configure certain types of
software. When a user executes a software package, it normally gets executed with
the user privileges of that user. There are two exceptions to this:
• Services which are run by the system: Daemons which carry out essential
services for users or for the system itself, run with a user ID which is
independent of who is logged on to the system. Often, such daemons are
started as root or the Administrator when the system boots. In many cases,
the daemons do not need these privileges and will function quite happily with
ordinary user privileges after changing the permissions of a few files. This is a
much safer strategy than allowing them to run with full access. For example,
the httpd daemon for the WWW service uses this approach. In recent years,
bugs in many programs which run with root privileges have been exploited to
give intruders access to the system. If software is run with a non-privileged
user ID, this is not possible.
• Unix setuid programs: Unix has a mechanism by which special privilege can
be given to a user for a short time, while a program is being executed. Software

which is installed with the Unix setuid bit set, and which is owned by root,
runs with root’s special privileges. Some software producers install software
with this bit set with no respect for the privilege it affords. Most programs
which are setuid root do not need to be. A good example of this is the
Common Desktop Environment (a multi-vendor desktop environment used
on Unix systems). In a recent release, almost every program was installed
setuid root. Within only a short time, a list of reports about users exploiting
bugs to gain control of these systems appeared. In the next release, none of
the programs were setuid root.
All software servers which are started by the system at boot time are started
with root/Administrator privileges, but daemons which do not require these
privileges can relinquish them by giving up their special privileges and run-
ning as a special user. This approach is used by the Apache WWW server and
by MySQL for instance. These are examples of software which encourage us
to create special user IDs for server processes. To do this, we create a spe-
cial user in the password database, with no login rights (this just reserves
a UID). In the above cases, these are usually called www and mysql. The
software allows us to specify these user IDs so that the process owner is
switched right after starting the program. If the software itself does not per-
mit this, we can always force a daemon to be started with lower privilege
using:
su -c 'command' user
The management tool cfengine can also be used to do this. Note however that
Unix server processes which run on reserved (privileged) ports 1–1023 have to be
started with root privileges in order to bind to their sockets.
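A minimal sketch of reserving such an identity and starting a daemon under it on a GNU/Linux host might look like this (the account name, paths and daemon command are illustrative assumptions):
# create a system account with no login shell, just to reserve a UID
useradd -r -s /sbin/nologin -d /var/empty www
# hand ownership of the daemon's working files to that account
chown -R www /usr/local/etc/httpd /var/log/httpd
# start the daemon with the reduced privileges
su -s /bin/sh -c '/usr/local/sbin/httpd' www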
On the topic of root privilege, a related security issue has to do with programs
which write to temporary files.
Principle 17 (Temporary files). Temporary files or sockets which are opened
by any program, should not be placed in any publicly writable directory like /tmp.
This opens the possibility of race conditions and symbolic link attacks. If possible,
configure them to write to a private directory.
Users are always more devious than software writers. A common mistake in pro-
gramming is to write to a file which ordinary users can create, using a privileged
process. If a user is allowed to create a file object with the same name, then
he or she can direct a privileged program to write to a different file instead,
simply by creating a symbolic or hard link to the other file. This could be used
to overwrite the password file or the kernel, or the files of another user. Soft-
ware writers can avoid this problem by simply unlinking the file they wish to
write to first, but that still leaves a window of opportunity after unlinking the
file and before opening the new file for writing, during which a malicious user
could replace the link (remember that the system time-shares). The lesson is to
avoid making privileged programs write to directories which are not private, if
possible.
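In a script, for example, one might arrange for scratch files to live in a private directory with unpredictable names; a sketch, with illustrative paths, is:
# a private scratch area owned by the service, not world-writable like /tmp
TMPDIR=/var/lib/myservice/tmp
mkdir -p "$TMPDIR" && chmod 700 "$TMPDIR"
# mktemp creates a uniquely, unpredictably named file in that directory
scratch=$(mktemp "$TMPDIR/myservice.XXXXXX")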

Before closing this section, a comment is in order. Throughout this chapter,
and others, we have been advocating a policy of building the best possible, most
logical system by tailoring software to our own environment. Altering absurd
software defaults, customizing names and locations of files and changing user
identities is no problem as long as everyone who uses and maintains the system
is aware of this. If a new administrator started work and, unwittingly, reverted to
those software defaults, then the system would break.
Principle 18 (Flagging customization). Customizations and deviations from
standards should be made conspicuous to users and administrators. This makes
the system easier to understand both for ourselves and our successors.
4.7.7 When compilation fails
Today, software producers who distribute their source code are able to configure it
automatically to work with most operating systems. Compilation usually proceeds
without incident. Occasionally though, an error will occur which causes the
compilation to halt. There are a few things we can try to remedy this:
• A previous configuration might have been left lying around, try
make clean
make distclean
and start again, from the beginning.
• Make sure that the software does not depend on the presence of another
package, or library. Install any dependencies, missing libraries and try again.
• Errors at the linking stage about missing functions are usually due to missing
or un-locatable libraries. Check that the LD_LIBRARY_PATH variable includes
all relevant library locations. Are any other environment variables required
to configure the software?
• Sometimes an extra library needs to be added to the Makefile. To find out
whether a library contains a function, we can use the following C-shell trick
(a Bourne-shell equivalent is sketched after this list):
host% cd /lib
host% foreach lib ( lib* )
> echo Checking $lib ----------------------
> nm $lib | grep function
> end
• Carefully try to patch the source code to make the code compile.
• Check in news groups whether others have experienced the same problem.
• Contact the author of the program.
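For reference, a Bourne-shell equivalent of the C-shell library search above might be (the function name is a placeholder):
for lib in /lib/lib* /usr/lib/lib*; do
    echo "Checking $lib ----------------------"
    nm "$lib" 2>/dev/null | grep some_function
done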

4.7.8 Upgrading software
Some software (especially free software) gets updated very often. We could easily
spend an entire life just chasing the latest versions of favorite software packages.
Avoid this.
• It is a waste of time.
• Sometimes new versions contain more bugs than the old one, and an even-
newer-version is just around the corner.
• Users will not thank us for changing things all the time. Stability is a virtue.
Everyone likes time to get used to the system before change strikes.
A plan is needed for testing new versions of software. Package systems for
software make this process easier, since one can allow several versions of software
to coexist, or roll back to earlier versions if problems are discovered with newer
versions.
4.8 Kernel customization
The operating system kernel is that most important part of the system which
drives the hardware of the machine and shares it between multiple processes.
If the kernel does not work well, the system as a whole will not work well. The
main reason for making changes to the kernel is to fix bugs and to upgrade
system software, such as support for new hardware; performance gains can
also be achieved however, if one is patient. We shall return to the issue of
performance again in section 8.11. Kernel configuration varies widely between
operating systems. Some systems require kernel modification for every minuscule
change, while others live quite happily with the same kernel unless major changes
are made to the hardware of the host.
Many operating system kernels are monolithic, statically compiled programs
which are specially built for each host, but static programs are inflexible and the
current trend is to replace them with software configurable systems which can
be manipulated without the need to recompile the kernel. System V Unix has
blazed the trail of adaptable, configurable kernels, in its quest to build an oper-
ating system which will scale from laptops to mainframes. It introduces kernel
modules which can be loaded on demand. By loading parts of the kernel only
when required, one reduces the size of the resident kernel memory image, which
can save memory. This policy also makes upgrades of the different modules inde-
pendent of the main kernel software, which makes patching and reconfiguration
simpler. SVR4 Unix and its derivatives, like Solaris and Unixware, are testimony
to the flexibility of this approach.
Windows has also taken a modular view to kernel design. Configuration of
the Windows kernel also does not require a recompilation, only the choice of a
number of parameters, accessed through the system editor in the Performance
Monitor, followed by a reboot. GNU/Linux switched from a static, monolithic
kernel to a modular design quite quickly. The Linux kernel strikes a balance

between static compilation and modular loading. This balances the convenience
of modules with the increased speed of having statically compiled code forever in
memory. Typically, heavily used kernel modules are compiled in statically, while
infrequently used modules are accessed on demand.
Solaris
Neither Solaris nor Windows require or permit kernel recompilation in order to
make changes. In Solaris, for instance, one edits configuration files and reboots
for an auto-reconfiguration. First we edit the file /etc/system to change kernel
parameters, then reboot with the command
reboot -- -r
which reconfigures the system automatically. There is also a large number of
system parameters which can be configured on the fly (at run time) using the ndd
command.
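For example, a static parameter change and an on-the-fly adjustment might look roughly like this (the particular parameters and values are illustrative, not recommendations):
# append a kernel parameter to /etc/system, then reconfigure at boot
echo 'set maxusers=256' >> /etc/system
reboot -- -r
# tune a TCP driver parameter at run time with ndd
ndd -set /dev/tcp tcp_time_wait_interval 60000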
GNU/Linux
The Linux kernel is subject to more frequent revision than many other systems,
owing to the pace of its development. It must be recompiled when new changes
are to be included, or when an optimized kernel is required. Many GNU/Linux
distributions are distributed with older kernels, while newer kernels offer signifi-
cant performance gains, particularly in kernel-intensive applications like NFS, so
there is a practical reason to upgrade the kernel.
The compilation of a new kernel is a straightforward but time-consuming
process. The standard published procedure for installing and configuring a new
kernel is as follows. New kernel distributions are obtained from any mirror of the
Linux kernel site [176]. First we back up the old kernel, unpack the kernel sources
into the operating system’s files (see the note below) and alias the kernel revision
to /usr/src/linux. Note that the bash shell is required for kernel compilation.
$ cp /boot/vmlinuz /boot/vmlinux.old
$ cd /usr/src
$ tar zxf /local/site/src/linux-2.2.9.tar.gz
$ ln -s linux-2.2.9 linux
There are often patches to be collected and applied to the sources. For each patch
file:
$ zcat /local/site/src/patchX.gz | patch -p0
Then we make sure that we are building for the correct architecture (Linux now
runs on several types of processor).
$ cd /usr/include
$ rm -rf asm linux scsi
$ ln -s /usr/src/linux/include/asm-i386 asm
$ ln -s /usr/src/linux/include/linux linux
$ ln -s /usr/src/linux/include/scsi scsi

Next we prepare the configuration:
$ cd /usr/src/linux
$ make mrproper
The command make config can now be used to set kernel parameters. More
user-friendly windows-based programs make xconfig or make menuconfig are
also available, though the former does require one to run X11 applications as root,
which is a potential security faux pas. The customization procedure has defaults
which one can fall back on. The choices are Y to include an option statically in the
kernel, N to not include it, and M to include it as module support. The capitalized
option indicates the default. Although there are defaults, it is important to think carefully
about the kind of hardware we are using. For instance, is SCSI support required?
One of the questions prompts us to specify the type of processor, for optimization:
Processor type (386, 486, Pentium, PPro) [386]
The default, in square brackets, is for generic 386, but Pentium machines will
benefit from optimizations if we choose correctly. If we are compiling on hosts
without CD-ROMs and tape drives, there is no need to include support for these,
unless we plan to copy this compiled kernel to other hosts which do have these.
After completing the long configuration sequence, we build the kernel:
# make dep
# make clean
# make bzImage
and move it into place:
# mv arch/i386/boot/bzImage /boot/vmlinuz-2.2.9
# ln -s /boot/vmlinuz-2.2.9 /boot/vmlinuz
# make modules
# make modules_install
The last step allows us to keep track of which version is running, while still having
the standard kernel name.
To alter kernel parameters on the fly, Linux uses a number of writable pseudo-files
under /proc/sys, e.g.
echo 1 >/proc/sys/vm/overcommit_memory
cat /proc/sys/vm/overcommit_memory
This can be used to tune values or switch features.
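On most GNU/Linux systems the sysctl command provides a thin wrapper over the same /proc/sys tree, e.g.:
sysctl vm.overcommit_memory        # read the current value
sysctl -w vm.overcommit_memory=1   # set it; equivalent to the echo above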
lilo and Grub
After copying a kernel loader into place, we have to update the boot blocks
on the system disk so that a boot program can be located before there is an
operating kernel which can interpret the filesystem. This applies to any operating
system, e.g. SunOS has the installboot program. After installing a new kernel
in GNU/Linux, we update the boot records on the system disk by running the

lilo program. The new loader program is called by simply typing lilo. This reads
a default configuration file /etc/lilo.conf and writes loader data to the Master
Boot Record (MBR). One can also write to the primary Linux partition, in case
something should go wrong:
lilo -b /dev/hda1
so that we can still boot, even if another operating system should destroy the boot
block. A new and superior boot loader called Grub is now gaining popularity in
commercial Linux distributions.
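To make this concrete, a minimal /etc/lilo.conf might look roughly as follows (the device names and kernel path vary from host to host and are only illustrative):
# write a minimal loader configuration, then install it
cat > /etc/lilo.conf <<'EOF'
boot=/dev/hda
image=/boot/vmlinuz
    label=linux
    root=/dev/hda1
    read-only
EOF
lilo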
Logistics of kernel customization
The standard procedure for installing a new kernel breaks a basic principle:
don’t mess with the operating system distribution, as this will just be overwritten
by later upgrades. It also potentially breaks the principle of reproducibility: the
choices and parameters which we choose for one host do not necessarily apply for
others. It seems as though kernel configuration is doomed to lead us down the
slippery path of making irreproducible, manual changes to every host.
We should always bear in mind that what we do for one host must usually be
repeated for many others. If it were necessary to recompile and configure a new
kernel on every host individually, it would simply never happen. It would be a
project for eternity.
The situation with a kernel is not as bad as it seems, however. Although, in
the case of GNU/Linux, we collect kernel upgrades from the net as though it were
third party software, it is rightfully a part of the operating system. The kernel is
maintained by the same source as the kernel in the distribution, i.e. we are not
in danger of losing anything more serious than a configuration file if we upgrade
later. However, reproducibility across hosts is a more serious concern. We do not
want to repeat the job of kernel compilation on every single host. Ideally, we would
like to compile once and then distribute to similar hosts. Kernels can be compiled,
cloned and distributed to different hosts provided they have a common hardware
base (this comes back to the principle of uniformity). Life is made easier if we can
standardize kernels; in order to do this we must first have standardized hardware.
The modular design of newer kernels means that we also need to copy the
modules in /lib/modules to the receiving hosts. This is a logistic problem which
requires some experimentation in order to find a viable solution for a local site.
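One workable sketch, assuming a common hardware base and that ssh, scp and rsync are available on the hosts involved (the hostnames are placeholders), is to push the kernel and its modules out and re-run the boot loader remotely:
for host in host2 host3; do
    scp /boot/vmlinuz-2.2.9 $host:/boot/
    rsync -a /lib/modules/2.2.9/ $host:/lib/modules/2.2.9/
    ssh $host 'ln -sf /boot/vmlinuz-2.2.9 /boot/vmlinuz && lilo'
done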
These days it is not usually necessary to build custom kernels. The default
kernels supplied with most OSs are good enough for most purposes. Performance
enhancements are obtainable, however, particularly on busy servers. See section
8.11 for more hints.
Exercises
Self-test objectives
1. List the considerations needed in creating a server room.
2. How can static electricity cause problems for computers and printers?

3. What are the procedures for shutting down computers safely at your site?
4. How do startup and shutdown procedures differ between Unix and Windows?
5. What is the point of partitioning disk drives?
6. Can a disk partition exceed the size of a hard-disk?
7. How do different Unix-like operating systems refer to disk partitions?
8. How does Windows refer to disk partitions?
9. What is meant by ‘creating a new filesystem’ on a disk partition in Unix?
10. What is meant by formatting a disk in Unix and Windows (hint: they do not
mean the same)?
11. What different filesystems are in use on Windows hosts? What are the pros
and cons of each?
12. What is the rationale behind the principle of (data) Separation I?
13. How does object orientation, as a strategy, apply to system administration?
14. How is a new disk attached to a Unix-like host?
15. List the different ways to install an operating system on a new computer from
a source.
16. What is meant by a thin client?
17. What is meant by a dual-homed host?
18. What is meant by host cloning? Explain how you would go about cloning a
Unix-like and Windows host.
19. What is meant by a software package?
20. What is meant by free, open source and proprietary software? List some pros
and cons of each of these.
21. Describe a checklist or strategy for familiarizing yourself with the layout of a
new operating system file hierarchy.
22. Describe how to install Unix software from source files.
23. Describe how you would go about installing software provided on a CD-ROM
or DVD.
24. What is meant by a shared library or DLL?
25. Explain the principle of limited privilege.
26. What is meant by kernel customization and when is it necessary?

Problems
1. If you have a PC to spare, install a GNU/Linux distribution, e.g. Debian, or a
commercial distribution. Consider carefully how you will partition the disk.
Can you imagine repeating this procedure for one hundred hosts?
2. Install Windows (NT, 2000, XP etc). You will probably want to repeat the
procedure several times to learn the pitfalls. Consider carefully how you will
partition the disk. Can you imagine repeating this procedure for 100 hosts?
3. If space permits, install GNU/Linux and Windows together on the same host.
Think carefully, once again, about partitioning.
4. For both of the above installations, design a directory layout for local files.
Discuss how you will separate operating system files from locally installed
files. What will be the effect of upgrading or reinstalling the operating system
at a later time? How does partitioning of the disk help here?
5. Imagine the situation in which you install every independent software pack-
age in a directory of its own. Write a script which builds and updates the
PATH variable for users automatically, so that the software will be accessible
from a command shell.
6. Describe what is meant by a URL or universal naming scheme for files.
Consider the location of software within a directory tree: some software
packages compile the names of important files into software binaries. Explain
why the use of a universal naming scheme guarantees that the software will
always be able to find the files even when mounted on a different host,
and conversely why cross mounting a directory under a different name on a
different host is doomed to break the software.
7. Upgrade the kernel on your GNU/Linux installation. Collect the kernel from
ref. [176].
8. Determine your Unix/Windows current patch level. Search the web for more
recent patches. Which do you need? Is it always right to patch a system?
9. Comment on how your installation procedure could be duplicated if you had
not one, but one hundred machines to install.
10. Make a checklist for standardizing hosts: what criteria should you use to
ensure standardization? Give some thought to the matter of quality assur-
ance. How can your checklist help here? We shall be returning to this issue
in chapter 8.
11. Make a scaling checklist for your system policy.
12. Suppose your installed host is a mission-critical system. Estimate the time
it would take you to get your host up and running again in case of complete
failure. What strategy could you use to reduce the time the service was out
of action?

13. Given the choice between compiling a critical piece of software yourself, or
installing it as a software package from your vendor or operating system
provider, which would you choose? Explain the issues surrounding this
choice and the criteria you would use to make the decision.



