Saturday, August 7, 2010

Windows XP Tip: Speed up boot time


If Windows XP seems to take forever to boot, try the following tip:
Open the Registry Editor and navigate to the
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Dfrg\BootOptimizeFunction key.
Next, double-click the Enable value, type Y, and click OK.
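If you prefer to make the change from a command prompt, the built-in reg.exe utility should set the same value; this is a sketch, assuming a default XP installation where reg.exe is available:
reg add "HKLM\SOFTWARE\Microsoft\Dfrg\BootOptimizeFunction" /v Enable /t REG_SZ /d Y /f
The /f switch simply suppresses the prompt to overwrite the existing value.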

How do I... Use BootVis to improve XP boot performance?


How the Windows XP boot process works

A main cause of slow boots with Windows NT/2000 was their method for loading drivers. Prior to XP, Windows versions loaded drivers sequentially. Windows XP, however, loads drivers concurrently. It also records which applications are launched during startup. This information is written to the C:\WINDOWS\Prefetch\Layout.ini file.
When the Layout.ini file is created, XP performs a partial defragmentation on the files listed in Layout.ini. This defrag process attempts to make the files listed in Layout.ini available in one contiguous area on the hard disk, allowing these files to be accessed, and the associated drivers to be loaded, more quickly. This process is run in the background approximately every three days.
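If you are curious what XP has recorded on a given machine, you can inspect the prefetch data from a command prompt; the path below assumes the default C:\WINDOWS system folder mentioned above:
dir C:\WINDOWS\Prefetch
more < C:\WINDOWS\Prefetch\Layout.ini
The .pf files are the per-application prefetch traces, and Layout.ini is the file list used by the partial defragmentation described below.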
There are four factors affecting the defrag process:
  • The system must be idle for XP to perform the defragmentation.
  • There must be enough free, contiguous disk space to contain all the files listed in the Layout.ini file.
  • The partial defrag performed by XP will not create the necessary contiguous disk space. That can be accomplished only by running a full defragmentation with the XP defragmentation tool or a third-party disk utility (see the command example after this list).
  • The XP defrag process will not use a third-party utility to perform the defragmentation. Any external tools must be run on their own.
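For the full defragmentation mentioned in the third item, XP's command-line defragmenter works as well as the graphical tool; a quick sketch:
defrag c: -a
defrag c:
The -a switch only analyzes the volume and reports whether a defrag is needed; running the command without it performs the actual defragmentation.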
BootVis, which Microsoft describes as a “performance trace visualization tool,” actually performs the same tasks as the XP boot process, except that BootVis allows the information obtained during a single boot to be used for optimization, rather than monitoring the system over a period of several days.
Download the BootVis archive and then extract the BootVis.exe utility by double-clicking the archive file, selecting a location for the BootVis.exe file, and clicking OK.

Opening BootVis and running a trace

To run BootVis, simply double-click the BootVis.exe file and the BootVis screen, shown in Figure A, should appear.

Figure A

Here is the BootVis main window.
The first step in tweaking or troubleshooting your boot process is to run a boot trace. Click File | New | Next Boot + Drivers Trace. The Trace Repetitions window, shown in Figure B, will prompt you for the number of repetitions (reboots and traces) to run. Go with the defaults and click OK.
BootVis will now provide you with a 10-second countdown before it reboots the system and performs the trace, giving you time to cancel the reboot and close any applications you might have left running. Click Reboot Now to bypass the countdown or Cancel to cancel the reboot.

Figure B

Select the number of reboots and driver traces for BootVis to run.
Once the system reboots, BootVis restarts automatically and provides individual graphs for the following system activity areas (This can take a few minutes, so be patient.):
  • Boot activity
  • CPU usage
  • Disk I/O
  • Disk utilization
  • Driver delay
  • Process creates

Reading the boot activity graph

The Boot Activity graph (shown in Figure C) breaks the boot process down into the following components:
  • Disk: The time required to detect all devices in the nonpageable device path. This entry can include any device from the CPU to the boot disk. This value should be around two seconds.
  • Driver: The time required to initialize devices.
  • Prefetching: The time required to read pages that are later used to initialize devices. This entry also includes Winlogon, services, the shell, and any applications loaded when the system boots.
  • Registry + Page File: The time required to read the registry and initialize the page file.
  • Video: The time spent setting the display mode and refresh rate. This time is affected by both the video BIOS and the video driver used.
  • Logon + Services and Shell: The time required to start Winlogon, any services, the shell, and any applications, such as firewall or antivirus software, that are run when XP starts.

Figure C

Here is the BootVis boot activity graph.
The components are displayed in the order in which XP calls them and are read from the bottom up. Each component’s bar begins at the point in the boot sequence when the component was called and the bar’s length reflects the time in seconds required to load the component. To determine the time required for any individual component activity, place the cursor over the title for the component.
To get the most important number, the time used to boot the system, place the cursor over the vertical line that crosses through all the components. This line represents the time the system took to boot. In the example in Figure C, the system required 33.84 seconds to complete the boot process.
One item of note: this boot time depends on how long it takes the user to enter the logon password, if one is required. Make sure to enter the password as quickly as possible when testing a system.

Optimizing the boot process

Now that you have an indication of how well the boot process is going, the next step is to optimize the system. To optimize your system boot, click Trace | Optimize System, and BootVis will present you with a 10-second countdown before rebooting. When the system reboots, the window shown in Figure D will appear, indicating that BootVis is using information gained from the previous boot and the current boot to optimize the system.

Figure D

BootVis is optimizing the system.
The next window, shown in Figure E, appears when BootVis actually begins to place the files specified in the Layout.ini file in the area of contiguous disk space created during the defragmentation process run prior to using BootVis.

Figure E

This shows BootVis organizing files on the hard disk.
When the window shown in Figure E closes, restart BootVis and run another boot trace by clicking File | New | Next Boot + Drivers Trace. This will allow you to see how much improvement was gained from the optimization process.
Figure F shows the results on my test machine. After running the optimization, the boot time was reduced to 30.85 seconds — a difference of almost three seconds. As I mentioned earlier, this value is affected by the time it takes to enter a logon password, so enter the password as quickly as possible. While three seconds may not seem like a lot, I have seen this value change by as much as 10 seconds. And in today’s world, where we expect instant-on computers, every second counts.

Figure F

BootVis reduced my test machine’s boot time by nearly three seconds.

Identifying driver problems

Now that you know how to optimize a machine’s boot process with BootVis, let’s look at how to troubleshoot boot issues involving problem drivers. BootVis can identify drivers that cause problems during the boot process and will indicate them on the Driver Delay graph, shown in Figure G, with a red bar.
Fortunately, my test machine does not have driver issues. If it did, I would check the manufacturer’s Web site for the latest drivers.

Figure G

BootVis reports no driver delays on my test machine. If it did, they would have appeared in red.

BootVis can only do so much

BootVis tries to optimize the XP boot process as much as possible, but it can’t work miracles. If a machine loads antivirus, firewall, and/or e-mail programs when booted, BootVis can only do so much. Remember the phrase “Your mileage may vary,” and use BootVis within the context of how you use your system. This will help you achieve a compromise between a fast boot and a system you can work with as soon as it boots to XP.

Flex your Linux muscles with partition administration


Takeaway: Linux partition management needlessly strikes fear in the hearts of many system admins. In this Daily Drill Down, Vincent Danen unveils the mysteries behind this highly flexible partition management system.

New army recruits have to conquer the obstacle course, new lawyers have to pass the bar exam, and new Linux system admins have to demystify partition management. Overcoming such hurdles isn’t only good for your psyche, but, in the case of Linux partitions, it’s a boon for your network as well. Linux newbies aren’t the only admins struggling with partitions, however; there are plenty of Linux pros that aren’t getting the most from their partitions either, which is unfortunate to say the least. When you have a system as flexible and efficient as Linux partition management is, it would be a crime not to fully utilize it.

With DOS-like operating systems, such as Windows and OS/2, partition management is relatively simple. For the more advanced file systems of this ilk, an entire disk is often dedicated to a single C: drive, with additional physical drives being assigned other drive letters as needed. By choosing not to use FAT partitions, administrators can create extremely large NTFS or HPFS partitions without wasting too much space.

With UNIX-based operating systems, however, the rules change. There are no drive letters; there are mount points instead. To put it into Windows-speak, you can look at the entire system as being one large C: drive that can span multiple disks. What little ground you give up in simplicity, you more than make up for in flexibility.

In this Daily Drill Down, I will explain how to establish a solid partition strategy for your UNIX-based system. In the process, I’ll also shed light on some of the more puzzling elements of Linux partitions, such as formats and partition names.

Partition strategies
In the “old” days, UNIX-based systems were usually given many smaller partitions, such as the root file system, /usr, /usr/local, /opt, /boot, /var, and even /tmp. Each of these used to be a separate mount point on separate file systems. The thinking behind this setup was that if a file system got corrupted, it would not affect the entire system. For instance, if /tmp got corrupted, it would be a relatively easy operation to reformat it without affecting the rest of the system. Another reason for this setup was that if an errant program attempted to fill up the file system, it would only fill up one small partition as opposed to a large partition that contained the entire system. Finally, this partition method could take advantage of older small hard drives by placing smaller file systems on them while saving more space on the larger drives for larger files.

While these concerns may still be valid for some, the preferred method for partition management these days seems to be to have as few partitions as possible. One reason for this larger partition scheme is that as a partition fills up, performance degradation becomes quite noticeable. For example, if you were to have a small /var partition (where log files are stored) and a particular application were to create a rather large log, the /var partition could fill up and your server might come to a grinding halt. To this end, people will likely create a maximum of three partitions: the root file system, the swap partition, and a separate partition for /home.

Of course, in order to choose the most effective partition management strategy, you need to take into account how the system will be used. If the system is a workstation or home computer, using the three-partition method might be appropriate. On a server system, however, three partitions would most certainly undermine your system’s potential. You must also consider certain mount options, such as mounting a partition as read-only, noexec, and so on.

Once you consider all the options, you might find, as I have, that while both methodologies have their advantages and drawbacks, a mix of the two can be very advantageous.

Types of installations
Let's look at two simple scenarios to get an idea of what types of things you need to consider when planning a partition management strategy. The first scenario will be a workstation installation, and the second will be a server installation.

Workstation
Let's assume for a moment you are installing Linux as a workstation or home computer. The system contains a single 20-GB hard drive, which is not at all unreasonable with hard-drive prices being so low. Let's also assume that the system has 128 MB of RAM, which is important to know when determining the size of your swap partition.

With a 20-GB drive, you would probably want to allocate about 3 GB or 4 GB for the root file system to contain your entire Linux installation. This, of course, depends on the number of applications you plan on installing, the base install size of your chosen distribution, and so on. Some distributions, such as Debian, have a much smaller install base size than distributions like Mandrake Linux. Regardless, 3 GB or a little more is a good size in terms of the function for this computer and the size of the disk.

With 128 MB of RAM, you will need some swap space. The swap partition is where the Linux kernel writes information that is not often used and would normally reside in RAM. Because RAM is much faster to access than a hard drive, swap partitions should not be considered a substitute for RAM. For example, a 1-GB swap partition on a system with 128 MB of physical RAM will slow your system down, because a lot of information will be swapped to disk and will be slower to retrieve. The general rule of thumb for swap partition sizes has been to create a swap partition equivalent to the amount of physical RAM installed on the system, so in this case you would want a 128-MB swap partition. For lower amounts of RAM, however, you could double the size. In this case you could use two 128-MB swap partitions or a single 256-MB swap partition. Going beyond twice the size of your physical RAM generally is not a good idea because it is usually a waste of space. If you have 256 MB of RAM plus a 256-MB swap partition, you’re looking at 512 MB of combined RAM and swap. With too much swap space, the kernel could begin writing far too much information into that swap space and start slowing down your system. As well, if you have a large amount of RAM, like 1 GB, having a 1-GB swap partition is equally unreasonable. A good rule of thumb is to have no more than 256 MB of swap on any system, regardless of the amount of physical RAM installed.
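For reference, once you have created a swap partition of the chosen size, it is initialized and enabled with mkswap and swapon; the device name here is only an example:
# mkswap /dev/hda2
# swapon /dev/hda2
An entry of type swap in /etc/fstab makes the kernel enable it automatically at boot.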

So now we've used up roughly 3.5 GB of your hard drive, leaving another 16.5 GB available. I would allocate this to the /home partition, which is where all user information, personal files, and so on are stored.
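Pulling the workstation plan together, and borrowing the device-naming conventions explained later in this Daily Drill Down, the resulting /etc/fstab might look roughly like this; the device names and the choice of ext3 are illustrative only:
/dev/hda1  /      ext3  defaults  1 1
/dev/hda2  swap   swap  defaults  0 0
/dev/hda3  /home  ext3  defaults  1 2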

Server
The partitioning scenario above would not be recommended for a server system. With a 20-GB hard drive, the partitions would be much different, and even the type of server would determine the size of the partitions. For instance, if you plan on running a Web server, you will have to decide where Web pages will be stored. If you intend to follow the Filesystem Hierarchy Standard (FHS), this will be in /var/www; if you choose to use the more familiar /home/httpd, you will need to adjust your partitions accordingly. In this case, you will need to decide whether /var needs to be the larger partition or /home should be the larger of the two.

Let's assume you have the same 20-GB drive mentioned above, but with 512 MB of physical RAM. Let's also assume the system will be a Web server and that Web pages will be in /home/httpd. In this situation, I would recommend a smaller root file system because servers generally do not run large programs such as GNOME or KDE, which take up large amounts of space. With this server, I would recommend a root file system of 2 GB, with a swap partition of 256 MB (half the size of the physical RAM; again to prevent the kernel from swapping too much to disk). /usr/local is a good mount point; it is a great place to store user-installed programs that do not come with the distribution. Giving /usr/local a size of about 2 GB is reasonable.

Putting /var on its own partition also makes sense here. If log files get too big, they won't fill up the root file system and prevent other parts of the system from working. To give the system plenty of room to store log files, spool files, and other variable data, a /var partition of about 1 GB is adequate. It may also be prudent to keep another minimal installation of your chosen Linux distribution on its own partition. This way, if something happens to your primary OS, you can boot into the emergency install to perform maintenance. A 500-MB partition, perhaps mounted as /mnt/rescue, would suffice. Finally, you may want to place /tmp on its own partition as well, to prevent temporary files from filling up your system. By having /tmp as its own partition, you can also mount it as noexec, which is an extra security precaution that doesn’t allow any programs to be executed from the /tmp file system. A size of 500 MB should be appropriate for /tmp.

So now you've got roughly 13.75 GB of free space. Because your Web pages will be served from /home, allocating the rest of the free space to the /home partition makes sense.
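As a sketch, the server layout described above could translate into /etc/fstab entries along these lines; the device names, the choice of ext3, and the noauto option on the rescue partition are assumptions, not requirements:
/dev/hda1  /            ext3  defaults         1 1
/dev/hda2  swap         swap  defaults         0 0
/dev/hda3  /var         ext3  defaults         1 2
/dev/hda5  /usr/local   ext3  defaults         1 2
/dev/hda6  /tmp         ext3  defaults,noexec  1 2
/dev/hda7  /mnt/rescue  ext3  defaults,noauto  0 0
/dev/hda8  /home        ext3  defaults         1 2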

Formats
Because a number of different file system formats are now available for Linux (such as the traditional ext2 and journaling file systems like ReiserFS, XFS, ext3, etc.), you may decide to mix and match file systems as your needs require. For instance, you may decide to use ReiserFS for every partition except /home, where you want to enable quotas for users, so you might choose XFS, ext3, or ext2 for that partition. Or you may choose to have /tmp as ext2 and everything else as XFS.

In some instances, you may require a particular file system format for a particular partition. For instance, the qmail MTA is typically installed to /var/qmail, and its queue has had problems on ReiserFS file systems. So if you want to use ReiserFS and still use qmail, you may decide to use ReiserFS everywhere except for the /var partition, which would be formatted as ext2. It is this kind of versatility that separates the Linux partition management system from the rest of the pack.
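Creating such a mix is simply a matter of formatting each partition with the matching tool; for example, formatting a /var partition as ReiserFS, /home as ext3, and /tmp as ext2 might look like this (hypothetical device names, run as root):
# mkreiserfs /dev/hda3
# mke2fs -j /dev/hda5
# mke2fs /dev/hda6
mkreiserfs creates a ReiserFS filesystem, mke2fs -j creates ext3 (ext2 with a journal), and plain mke2fs creates ext2.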

Understanding partition names
In addition to their confusion over partition strategies, Linux newbies often struggle to make sense of partition names. Using names such as /dev/hdf4 and /dev/hdb can take some getting used to. Here is a good overview to help clarify the naming procedure.

Hard drives are referenced by device entries. /dev/hdX (note the “h”) designates an IDE hard drive, whereas /dev/sdX (note the “s”) designates a SCSI hard drive. The last letter identifies the drive itself. /dev/hda is the first IDE drive on the first IDE channel, while /dev/hdb is the second drive on the first channel. hdc and hdd are the first and second IDE drives on the second IDE channel. With ATA100 and ATA133 drives, you may have devices named /dev/hde, /dev/hdf, /dev/hdg, and /dev/hdh. These represent the first and second drives on the first ATA100 controller channel and the first and second drives on the second ATA100 controller channel, respectively. For SCSI drives, the names run up the alphabet: /dev/sda is the first SCSI drive, /dev/sdb the second, and so forth.

Some partition names also feature a number after the device, which represents the partition number. /dev/hdb2 is the second partition on the second IDE drive of the first IDE channel. With Linux, you can have up to four primary partitions per disk; one of these can be an extended partition, which can hold as many logical partitions as you like.

When determining where to put your partitions, it is a good rule of thumb to make the root file system (/) the first partition (i.e., /dev/hda1) and the swap partition the second partition (i.e., /dev/hda2); from there you can put partitions wherever you like. You may decide to put /var on hda3, /home on hda5, /usr/local on hda6, and so on. Partition numbers 5 and higher (e.g., hda5) are logical partitions inside the extended partition, whereas partition numbers 4 and lower are reserved for primary partitions.
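Putting the naming rules and that ordering together, a hypothetical single-drive layout might map out as follows:
/dev/hda1   primary partition 1                                /
/dev/hda2   primary partition 2                                swap
/dev/hda3   primary partition 3                                /var
/dev/hda5   first logical partition (inside the extended)      /home
/dev/hda6   second logical partition                           /usr/local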

Conclusion
This basic introduction to partition management and strategies can't possibly cover every scenario. The size and disposition of partitions depends on what the computer is going to be used for, the size and number of drives available to Linux, and any number of other factors. Hopefully, by reading through these basic guidelines, however, you now have a clearer view of what the partition management structure should be on your particular network. Over time and with practice, you will be able to determine if a partition is too large and wasting space or too small and requiring more space. With every Linux installation, upgrade, and reinstall, you will continue to fine-tune your partition strategies, which will eventually provide you with the expertise to take full advantage of this incredibly efficient and flexible Linux system.

Encrypting and decrypting files with GnuPG


GPG can do much more than the basic key management and signing covered in the introductory article. Many e-mail programs provide GPG support so you can use GPG seamlessly with your e-mail client. This allows you to digitally sign e-mails to assure recipients that you did indeed write the message. It also allows you to encrypt messages to a recipient with their public key, meaning that only the individual with the passphrase to the equivalent private key can decode and read the e-mail.
Likewise, GPG can do the same for files. If you wish to encrypt a file for someone else, you would use his or her public key to encrypt the file. However, if you wished to keep your own files private and safe from theft or prying eyes, you would encrypt the file with your own public key, ensuring that only you would be able to decrypt it.
It makes no difference to GPG what type of file you are encrypting; it can be binary just as well as text, or an OpenOffice.org spreadsheet. For instance, to encrypt a Word document for yourself, you would execute the following:
$ file private.doc
private.doc: Microsoft Office Document
$ gpg -ea -r user@domain.org private.doc
The original file is untouched, but the document is now stored in an ASCII file called private.doc.asc:
$ file private.doc.asc
private.doc.asc: PGP armored data message
$ gpg -d private.doc.asc >new.doc
You need a passphrase to unlock the secret key for
user: "Real Name (Comment) "
2048-bit ELG-E key, ID 7F72A50F, created 2007-12-01 (main key ID 9B1386E2)
Enter passphrase:
gpg: encrypted with 2048-bit ELG-E key, ID 7F72A50F, created 2007-12-01
"Real Name (Comment) "
$ cmp new.doc private.doc
$ echo "" >>new.doc
$ cmp new.doc private.doc
cmp: EOF on private.doc
The cmp commands at the end demonstrate that the resulting decrypted file is exactly the same as the original: the first invocation reports no difference, while the second fails because of the slight modification made to new.doc before it was run.
The result of the above is an ASCII armored file, making it quite portable but at the expense of size. To create a binary file, omit the -a option:

$ gpg -e -r user@domain.org private.doc
$ file private.doc.gpg
private.doc.gpg: GPG encrypted data
$ ls -l private.doc*
-rw------- 1 user user 30720 Nov 29 15:36 private.doc
-rw-r--r-- 1 user user 7340 Dec 2 17:27 private.doc.asc
-rw-r--r-- 1 user user 5352 Dec 2 17:33 private.doc.gpg
As you can see, some compression can take place as well; a 30-KB Word document turns into a 7-KB ASCII-armored file or a 5-KB GPG encrypted file.
If you are only interested in integrity checking and validity of a file, you can create digital signatures for those files to ensure that they haven’t changed.
$ gpg -ba -u user@domain.org private.doc
You need a passphrase to unlock the secret key for
user: "Real Name (Comment) "
1024-bit DSA key, ID 9B1386E2, created 2004-09-09
Enter passphrase:
$ gpg --verify private.doc.asc
gpg: Signature made Sun Dec  2 17:37:02 2007 MST using DSA key ID 9B1386E2
gpg: Good signature from "Real Name (Comment) "
$ echo "" >>private.doc
$ gpg --verify private.doc.asc
gpg: Signature made Sun Dec  2 17:37:02 2007 MST using DSA key ID 9B1386E2
gpg: BAD signature from "Real Name (Comment) "
Again, the above creates an ASCII-armored version of the signature; to create a binary signature, change -ba to simply -b, dropping the switch that enables ASCII output. The second command verifies the file by checking the signature. Next, just for testing, we slightly modify the file, and you can see that on the next run the verification fails.
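For completeness, a binary detached signature would be produced and verified along these lines; by default GPG writes it next to the original file with a .sig extension:
$ gpg -b -u user@domain.org private.doc
$ gpg --verify private.doc.sig private.doc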
There are many places where GPG has practical application. This has touched only on a few of the very basic uses for GPG, but not only does it have more features to tap into, but the uses for it are many and varied.

Get started with GnuPG


GnuPG is an open replacement for PGP Corporation’s PGP (Pretty Good Privacy) encryption tool and is based on the OpenPGP standard. What GnuPG (or GPG for short) does is allow for the encryption and decryption of files using a public/private keypair. It can be used to encrypt regular files or e-mail, in either binary or ASCII format, and can also verify the integrity of files or e-mail via cryptographic signatures. GPG is a command-line tool and is available with every Linux distribution.
To begin using GPG, you must generate a public/private keypair. This keypair is generated with the --gen-key command:
$ gpg --gen-key
It will create the ~/.gnupg/ directory if it doesn’t already exist, where it will store its configuration file, gpg.conf, and the private and public keyrings where keys are stored, secring.gpg and pubring.gpg respectively, as well as the trust database.
When you generate the initial keypair, you will have to choose the key type. The default is “DSA and Elgamal,” which will allow you to sign and encrypt. You will then have to select a keysize for the key, anywhere between 1024 and 4096 bits; the default of 2048 bits is sufficient. Next, you will need to determine whether or not the key will expire, and if so, when. A non-expiring key is most convenient, as neither you nor anyone using your public key will have to worry about replacing keys; however, if the key is ever stolen or compromised, it can be abused indefinitely. Many individuals have keys that expire after one year and generate new keys at that time.
Finally, you will need to provide a user ID for the key, which consists of your real name, e-mail address, and an optional comment. The user ID will then end up being “Real Name (Comment) <email@address>”.
When the key generation is complete — which may be immediate or may take some time depending on the amount of entropy your system has collected in order to generate random bytes — you can list the keys by executing:
$ gpg --list-keys; gpg --list-secret-keys
You can also view the key’s fingerprint, a unique identifier to the key, with the command:
$ gpg --fingerprint user@domain.org
pub  1024D/9B1386E2 2007-12-01 Real Name (Comment) <user@domain.org>
Key fingerprint = 88A9 166B 13E6 516A 87C8  F127 5CA9 2D9E 9B13 86E2
sub  2048g/7F72A50F 2007-12-01
Be sure to keep your fingerprint handy. When people are attempting to use or import your key, they can ensure they have the right key if you provide them with the fingerprint.
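To hand your public key to someone in the first place, you can export it to a file and let them import it; mykey.asc here is just an arbitrary file name:
$ gpg -a --export user@domain.org > mykey.asc
$ gpg --import mykey.asc
The second command is what the recipient runs; after importing, they can compare the fingerprint of the imported key against the one you gave them.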
At this point, you can start using GPG to encrypt and decrypt files. For instance, if you have a text document, and you want to ensure that no one tampers with it, you can sign it with the --clearsign command. To keep the file readable, specify the ASCII armor format with -a. After providing your passphrase, the contents of the file will be wrapped in a digital signature and a new file will be created with the new contents. If even one space is added to the file, the signature verification will fail. For instance:
$ echo "Test file" >test.txt
$ gpg --clearsign -a test.txt
You need a passphrase to unlock the secret key for
user: "Real Name (Comment) "
1024-bit DSA key, ID 9B1386E2, created 2007-12-01
Enter passphrase:
$ cat test.txt.asc
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Test file
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.7 (GNU/Linux)
iD8DBQFHUh3VJnj1HnfyJpYRAjn7AKCI5DYTvvQ2J6pALyMYp26oGuZKaQCcCSZ7
O6dBveVjOgzC4HL5k8rFFHM=
=SxSW
-----END PGP SIGNATURE-----
$ gpg --verify test.txt.asc
gpg: Signature made Sat Dec  1 19:52:05 2007 MST using DSA key ID 9B1386E2
gpg: Good signature from "Real Name (Comment) "
gpg: WARNING: This key is not certified with a trusted signature!
gpg:          There is no indication that the signature belongs to the owner.
Primary key fingerprint: 88A9 166B 13E6 516A 87C8  F127 5CA9 2D9E 9B13 86E2
$ perl -pi -e 's|file|files|' test.txt.asc
$ gpg --verify test.txt.asc
gpg: Signature made Sat Dec  1 19:52:05 2007 MST using DSA key ID 9B1386E2
gpg: BAD signature from "Real Name (Comment) "
As you can see from the above, changing the word “file” to “files” causes the verification of the ASCII-armored text file to fail. You can also see that GPG created a new file called test.txt.asc; GPG will attach either an .asc extension to the original file name for an ASCII-armored text file, or a .gpg extension in the case of a GPG-encrypted file.
GnuPG is extremely useful and next week, we’ll see what else it can do.

Secure temporary files in Linux


On a typical Linux system there will be at least two, if not more, directories or partitions meant to hold temporary files. There is always the /tmp directory, and often a /var/tmp directory as well. With newer Linux kernels, there can also be /dev/shm, which is mounted using the tmpfs filesystem.
One problem with directories meant to store temporary files is that they are often targeted as places to store bots and rootkits that compromise the system. This is because in most cases, anyone (or any process) can write to these directories. Insecure permissions are problematic as well. Most Linux distributions set the sticky bit on directories meant to contain temporary files, which means that user A cannot remove a file belonging to user B, and vice versa. Depending on the permissions of the file itself, however, user A may still be able to view and/or modify its contents.
A typical Linux installation will set /tmp as mode 1777, meaning it has the sticky bit set and is readable, writable, and executable by all users. For many, that’s as secure as it gets, and this is mostly because the /tmp directory is just that: a directory, not its own filesystem. The /tmp directory lives on the / partition and, as such, must obey its mount options.
A more secure solution would be to set /tmp on its own partition, so that it can be mounted independent of the / partition and have more restrictive options set. An example /etc/fstab entry for a /tmp partition might look like:
/dev/sda7 /tmp ext3 nosuid,noexec,nodev,rw 0 0
This would set the nosuid, noexec, and nodev options, meaning that no suid programs are permitted, nothing can be executed from that partition, and no device files may exist.
You could then remove the /var/tmp directory and create a symlink pointing to /tmp so that the temporary files in /var/tmp also make use of these restrictive mount options.
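A sketch of that change, run as root and assuming nothing is actively using /var/tmp at the time:
# mv /var/tmp /var/tmp.old
# ln -s /tmp /var/tmp
Once you are satisfied that nothing still references the old directory, /var/tmp.old can be removed.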
The /dev/shm virtual filesystem needs to be secured as well, and this can be done by changing /etc/fstab. Typically, /dev/shm is simply mounted with the defaults option, which isn’t enough to properly secure it. Like the fstab entry shown for /tmp, it should have more restrictive mount options:
none /dev/shm tmpfs defaults,nosuid,noexec,rw 0 0
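To apply the tightened options without waiting for a reboot, the filesystem can usually be remounted in place:
# mount -o remount,nosuid,noexec /dev/shm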
Finally, if you don’t have the ability to create a fresh /tmp partition on existing drives, you can use the loopback capabilities of the Linux kernel by creating a loopback filesystem that will be mounted as /tmp and can use the same restrictive mount options. To create a 1GB loopback filesystem, execute:
# dd if=/dev/zero of=/.tmpfs bs=1024 count=1000000
# mke2fs -j /.tmpfs
# cp -av /tmp /tmp.old
# mount -o loop,noexec,nosuid,rw /.tmpfs /tmp
# chmod 1777 /tmp
# mv -f /tmp.old/* /tmp/
# rmdir /tmp.old
Once this is complete, edit /etc/fstab to have the loopback filesystem mounted automatically at boot:
/.tmpfs /tmp ext3 loop,nosuid,noexec,rw 0 0
Little things like ensuring proper permissions and using restrictive mount options will prevent a lot of harm coming to the system. If a bot lands on a filesystem that is unable to execute, that bot is essentially worthless.

How do I use Sysprep to create a Windows XP image?


There are many different methods an administrator can use to automate the installation of Microsoft Windows XP. One of the most popular and efficient is disk duplication, in which a preconfigured operating system is cloned and copied onto another computer. This method is an ideal choice when you need to install Windows XP on a number of systems that all require an identical configuration.
The System Preparation Tool (Sysprep), included with Windows XP, can be used to clone a computer and automate the deployment of the operating system. In this article, I will outline how you can use Sysprep to perform disk duplication.
Introduction to Sysprep
One of the benefits of using disk duplication is that it makes installing an operating system, such as Windows XP, on multiple computers more efficient. It is a welcome alternative to manually installing the operating system on multiple computers and configuring identical settings. Instead, the operating system, any service packs, configuration settings, and applications can be included in the image and copied to the target machines.
The System Preparation Tool (Sysprep) included with Windows XP can be used to create the initial disk image. What Sysprep does is prepare the system running Windows XP to be duplicated. Once the image is created, you must then use a third-party utility to deploy it.
Using a utility like Sysprep offers several advantages. Although some time must be spent preparing the image, it will obviously speed up future installations as well as reduce the amount of user interaction required. The main disadvantage is that the reference computer and the target computers must have compatible Hardware Abstraction Layers (HALs) and identical Advanced Configuration and Power Interface (ACPI) support. The hard disk on the destination computer must also be at least as large as the one on the reference computer. All plug-and-play devices are redetected after Sysprep has run.
The general steps that must be completed when using disk duplication to deploy an operating system include:
  1. Install the operating system on the reference computer.
  2. Configure the reference computer as required.
  3. Verify that the reference computer is properly configured.
  4. Prepare the computer for duplication using Sysprep and create an optional Sysprep.inf answer file.
  5. Duplicate the image.

Preparing the reference computer

The first step in using Sysprep to create a disk image is to set up the reference computer. This entails installing the operating system, any service packs, and software applications, and configuring the settings that you want applied to the target computers. Once you’ve tested the image and are confident that it’s configured the way you want it, you are ready to begin the cloning process.
At this point, you are ready to run Sysprep. In order for the utility to function correctly, the Setupcl.exe file, the Sysprep.exe file, and the Sysprep.inf file must all be in the same folder. So your first step will be to create a Sysprep directory in the root folder of drive C on the reference computer. You can create the folder using Windows Explorer or the command prompt. With the second method, open the command prompt and change to the root folder of drive C. Type md Sysprep, as shown in Figure A, to create the new directory.

Figure A

You can create the Sysprep directory in the root folder of drive C from the command prompt.
Your next step will be to copy the files required to run the utility from the Windows XP CD to the Sysprep directory you just created. Insert the Windows XP CD into the CD-ROM drive. Open the Deploy.cab file located in the Support\Tools directory and copy the Sysprep.exe file and the Setupcl.exe file into the Sysprep folder, as shown in Figure B.

Figure B

Copy the Sysprep.exe file and Setupcl.exe file into the Sysprep directory.
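If you prefer the command line, the expand utility included with Windows should extract the same two files from Deploy.cab; this assumes the CD is in drive D:, so adjust the path as needed:
expand D:\Support\Tools\Deploy.cab -F:sysprep.exe C:\Sysprep
expand D:\Support\Tools\Deploy.cab -F:setupcl.exe C:\Sysprep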

Running the Windows system preparation tools

After completing the steps outlined in the previous section, you are ready to launch the Sysprep utility to clone the reference computer. From the command prompt, change to the Sysprep directory and type in the following command:
sysprep [optional parameters]
Sysprep's optional parameters include the following (an example combining several of them appears after the list):
  • -quiet - Sysprep runs without displaying onscreen confirmation messages
  • -reboot - Forces the computer to automatically restart after Sysprep is complete.
  • -audit - Restarts the computer in Factory mode without having to generate new security IDs (SIDs).
  • -factory - Restarts the computer in a network-enabled state without displaying the Windows Welcome or mini-Setup. Use this parameter to perform configuration and installation tasks.
  • -nosidgen - The Sysprep.exe file is run without generating new SIDs. Use this parameter if you are not cloning the system.
  • -reseal - Prepares the destination computer after performing tasks in factory mode.
  • -forceshutdown - The computer is shut down after the Sysprep utility is finished.
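For example, to reseal the reference machine without the confirmation prompts and have it shut down when preparation finishes, the command would look something like this:
sysprep -quiet -reseal -forceshutdown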
Once you launch the utility, a warning message will appear. Click OK to acknowledge the warning, and the System Preparation Tool window appears, as shown in Figure C, allowing you to configure how the utility will run. The options available here can also be set using command-line switches when Sysprep is run from the command prompt as outlined above.

Figure C

The System Preparation Tool window allows you to configure how the utility will run.
Once Sysprep has successfully prepared the reference computer and shut it down (remember, the computer can be shut down automatically by using the -forceshutdown parameter), you can remove the hard disk and clone it using third-party disk-imaging software.
When you restart a computer from a cloned disk for the first time, two events will occur. First, the Setupcl.exe file will start and generate a new SID for the computer. Second, the Mini-Setup Wizard will start, allowing you to customize the computer. You can also automate this event by creating and using a Sysprep.inf answer file, which is discussed in the section below.

The Sysprep.inf answer file

The first time a computer reboots after being cloned by Sysprep, a Mini-Setup wizard starts. The Mini-Setup wizard prompts the user for information to customize the installation on the target computer. However, if you want to automate the Mini-Setup wizard, you can use a Sysprep.inf file.
The Sysprep.inf file is similar to an answer file in that it contains configuration information that would normally be supplied by a user during the Mini-Setup program. In order to use Sysprep.inf, the file must be placed in the Sysprep folder or on a floppy disk. The first time the computer is restarted, it will automatically look for the Sysprep.inf file.

Creating the answer file

Creating the Sysprep.inf answer file is not that difficult because a wizard will walk you through the entire process. The utility used to create the answer file is called Setup Manager. Alternatively, if you are skilled in the area of answer files, you can create one using a text editor such as Notepad.
Before you can use Setup Manager to create the answer file, it must first be installed on your computer. On the Windows XP CD, locate the Support\Tools directory. Open the Deploy.cab file and copy the entire contents to a folder on your computer. Once the files have been copied, you can follow the steps outlined below to create an answer file.
1. Open the folder on your computer that contains the contents of the deploy.cab file and double-click Setupmgr.exe. The Windows Setup Manager Wizard will appear. Click Next.
2. Specify whether to create a new answer file or modify an existing one. If you want to modify one, you must enter the path to the file. Click Next.
3. From the Product to Install dialog box, shown in Figure D, select Sysprep Install. Click Next.

Figure D

Select Sysprep Install to create a Sysprep.inf answer file.
4. Select the platform that you will be using the answer file to deploy. You can select from Windows XP Home Edition, Windows XP Professional, and Windows 2000 Server, Advanced Server, or Data Center. Click Next.
5. Select the level of automation you want to use and click Next.
6. The next dialog box allows you to customize General Settings, Network Settings, and Advanced Settings, as shown in Figure E.

Figure E

The Windows Setup Manager allows you to customize various settings.
7. Once you have configured all the settings, click Finish.
8. Setup Manager creates the answer file and prompts you to choose a location to save the file. The file can be placed on a floppy disk or in the %systemdrive%\Sysprep directory.
9. Exit the Setup Manager application.
Once the Sysprep.inf answer file is created, you can open it using a text editor such as Notepad. The file may look something like the one shown below.


[Unattended]
;Prompt the user to accept the EULA.
OemSkipEula = No
;Use Sysprep's default and regenerate the page file for the system
;to accommodate potential differences in available RAM.
KeepPageFile = 0
;Provide the location for additional language support files that
;might be required in a global organization.
InstallFilesPath = c:\Sysprep\i386
[GuiUnattended]
;Set the time zone.
TimeZone = 20
;Skip the Welcome screen when the system starts.
OemSkipWelcome = 1
;Do not skip the Regional and Language Options dialog box so that users can
;indicate which options apply to them.
OemSkipRegional = 0
[UserData]
ComputerName = XYZ_Computer1
[Display]
BitsPerPel = 16
XResolution = 800
YResolution = 600
VRefresh = 60
[GuiRunOnce]
;Commands to run the first time a user logs on.
"%systemdrive%\sysprep\file name.bat"
"path-1\Command-1.exe"
"path-n\Command-n.exe"
"%systemdrive%\sysprep\sysprep.exe -quiet"
[Identification]
;Join the computer to the domain ITDOMAIN.
JoinDomain = ITDOMAIN
[Networking]

When creating the Sysprep.inf file, there are a few things you need to keep in mind. After a Windows XP computer cloned using Sysprep restarts, the Mini-Setup program begins. It will automatically look for an answer file on a floppy disk or in the Sysprep directory.
The answer file must be named Sysprep.inf; otherwise, the Mini-Setup program will ignore the file. If an answer file is present, it is copied to the %windir%\System32 directory as $winnt$.inf. If no answer file is present, the Mini-Setup program will run interactively, prompting you for configuration information. Also, if any required sections are missing from the answer file, the program will switch to interactive mode and prompt you for the information.

That’s all there is to it!

Disk duplication is a great way to reduce the amount of time it takes to install an operating system on multiple computers. The System Preparation Tool included with Windows XP can be used to prepare a reference computer to be cloned. To further automate the installation of Windows XP, you can use Setup Manager to create an answer file to be used with Sysprep. The answer file, named Sysprep.inf, contains the configuration information that would normally require user input during the Mini-Setup program.