Tuesday, March 27, 2012

Network Block Devices: Using Hardware Over a Network


Most Linux administrators today are familiar with protocols such as the Network File System (NFS) and the Server Message Block/Common Internet File System (SMB/CIFS, the protocol used by Samba). These protocols enable you to share filesystems across a network — the Linux computer sitting on your desk can access files stored on another Linux system, a Windows computer, or whatnot across the office (or on the other side of the planet!).
While NFS, SMB/CIFS, and similar protocols are handy, they aren’t always ideally suited to what you need to do. One obscure Linux tool can sometimes help on this score: Network block devices (NBDs). This tool is a way to enable one computer to provide another computer with direct low-level access to block device hardware. (Block devices exchange data in fixed-size multi-byte blocks, whereas character devices communicate in byte-size chunks. Disks, including hard disks, floppy disks, and CD-ROM/DVD drives, are the most common block devices.)
Why Use NBDs?
Why would you want to use an NBD rather than a more common file-sharing protocol? Several scenarios come to mind:
  • If the client can provide better tools for low-level maintenance of disks (newer versions of fsck, for instance) than the server can, providing NBD access may make sense.
  • The client may have need of expanded network disk space, but a traditional network filesystem may not be adequate. For instance, you might be testing new filesystem drivers or need features of a particular filesystem and those features might not be properly handled by NFS or SMB/CIFS.
  • The server might not support the filesystem or data structures on the device you wish to export. Note that NBD server software is available for Windows and other non-Linux OSes, so in principle a Windows system could host a ReiserFS or XFS partition for a Linux client, or you could export CD-ROMs burned with Rock Ridge extensions from a Windows server.
  • Under some circumstances, using NBDs can provide superior performance compared to using traditional network filesystems. This can be true if many diskless clients need to boot simultaneously, for instance.
You may be able to find workarounds or superior alternatives to NBDs in many of these cases. For instance, you could upgrade your server’s low-level disk utilities or expand the disk space on the client. Sometimes, though, NBDs may make more sense. You’ll need to be the judge of which is true in your case.
You should be aware that NBDs have their limitations, too. The most important of these is that providing read/write access to NBDs to more than one client is extremely dangerous. Ordinarily, you’ll either limit the NBD access to a single client or use NBDs to provide read-only access to data (as is possible for a diskless workstation’s root filesystem).
NBDs can also encounter deadlock conditions, particularly if you try to access the NBD from the server itself. If you need to update the NBD’s partition from the server, do so directly rather than through the NBD, and only when no client is using it. This caution is particularly important if clients have read/write access.
Obtaining NBD Software
NBD software for Linux includes three components:
  • NBD user-space server software
  • NBD user-space client software
  • NBD kernel-space client software
The user-space software can be obtained from http://sourceforge.net/projects/nbd. The nbd package includes both client and server software. Some distributions provide NBD packages; for instance, Ubuntu calls them nbd-client and nbd-server. It’s usually easiest to install a ready-made package, but if you can’t find one for your distribution, the installation process is conventional:
  1. Download the source tarball, unpack it in a convenient location, and cd into the resulting directory.
  2. Type ./configure
  3. Type make
  4. As root, type make install
The result will be the installation of the nbd-server and nbd-client programs, along with associated man pages. As of version 2.99, the NBD source package includes a subdirectory called winnbd, which includes a Windows NBD server. I didn’t test it for this column, though.
Your NBD client requires kernel support to use NBDs, but the kernel support is not needed on the server. To add this support, you must compile it into your kernel (or as a module). The module is called nbd, so you can look for it on your system before you recompile your kernel. (On my systems, it’s stored in /lib/modules/version/kernel/drivers/block, where version is the kernel version number.)
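If you’d rather check before rebuilding anything, a quick look in the module tree will tell you whether the driver is already present. A sketch (the exact path varies by distribution):

```shell
# List any prebuilt nbd module under the running kernel's module tree.
# No output means the module is absent and a rebuild may be needed.
find /lib/modules/"$(uname -r)" -name 'nbd.ko*' 2>/dev/null || true
# If a file is listed, load the driver with:
#   modprobe nbd
```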
Completely describing the process of recompiling your kernel is beyond the scope of this column. Assuming you know how to do it, you can find the NBD option under Device Drivers -> Block Devices, as Network block device support. Activate this option (either compiling it directly into the kernel or building it as a module) and recompile your kernel. (In some cases, recompiling just your kernel modules will do.) You’ll also need to install your kernel modules, and probably the kernel. If you changed the kernel proper, you may need to modify your LILO or GRUB configuration and reboot.
Preparing an NBD Server
The first step in preparing an NBD server is to set aside a block device for use by the server. Typically, this will be a hard disk partition, although it could be a logical volume manager (LVM) volume, redundant array of independent disks (RAID) device, CD-ROM/DVD-ROM drive, or some other block device. You may also create a regular file and export it as a block device. This approach is similar to using a loopback device locally, but you give a remote system access to the file as if it were a block device. This can be handy if you want to provide clients with access to a variety of image files (CD-ROM images, say).
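As a sketch of the file-backed approach (the image path, size, and port number here are illustrative, and nbd-server must be installed for the final step):

```shell
# Create a 64 MB image file to stand in for a block device.
dd if=/dev/zero of=/tmp/nbd-export.img bs=1M count=64 2>/dev/null
ls -l /tmp/nbd-export.img
# Optionally put a filesystem on it before exporting:
#   mkfs.ext4 -F /tmp/nbd-export.img
# Then export it exactly as you would a real partition:
#   nbd-server 2000 /tmp/nbd-export.img
```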
Depending on how you want to use the NBD, you might want or need to create a filesystem on the device and populate it with files beforehand. This would be necessary, for instance, if you intend to use the NBD as a read-only root filesystem for diskless network clients. In other cases, you might leave the preparation of the storage space to the client. This would be necessary if the server doesn’t support the filesystem or other low-level format the client uses, as for instance when using a Windows NBD server.
Whether or not you prepare the NBD storage space using the server computer, you should be sure to never access the NBD space using the server when the clients do so! In the case of read/write client access, having two clients write to a filesystem simultaneously is almost certain to produce data corruption. In the case of read-only access, unexpected writes might confuse the client. You’ll need to be careful about how you prepare and use your disk space on the server side to avoid such problems.
With the disk space prepared, you can run the NBD server:
# nbd-server 2000 /dev/sdb1
This command exports /dev/sdb1 using port 2000. Note that the device filename must use an absolute path, not a relative path. The version of NBD I tested issues a warning message about being unable to open its config file. This message seems to be harmless, so you can ignore it. In principle, you can run nbd-server as a non-root user; however, the user must have read (and perhaps write) access to whatever file or device you export. Note that both the port number and block device are required parameters to nbd-server. You can read the nbd-server man page to learn about its options; some highlights include:
  • You can precede the port number with an IP address, as in 192.168.17.2:2000 to listen to port 2000 on IP address 192.168.17.2. This is most useful if your server has multiple network cards and you only want to export the device using one interface.
  • -r exports the device in read-only mode, which is a highly recommended precaution if clients should not be able to write to the device.
  • -c creates a copy-on-write export, meaning that client write operations are performed on a temporary file, rather than on the original file. The temporary file is discarded when the client disconnects.
  • -a timeout specifies a timeout period (in seconds), after which the NBD server terminates the connection.
  • -l host_list_file specifies a file that includes the IP addresses of hosts that may connect to the NBD server. The default value is nbd_server.allow. If the file is missing, any host may connect. This tool is obviously useful for controlling access to your NBD server.
  • -C config_file tells nbd-server where to find its configuration file (described shortly).
The -c (copy-on-write) option deserves a few more comments. Because most modern Linux distributions use udev to create a dynamic /dev directory tree on a virtual filesystem, -c can produce write errors after a while. To avoid this problem, try creating symbolic links on a conventional filesystem to point to the real device file and then export the symbolic links rather than the device files to which they point. The -c option also has the drawback of reducing performance.
On the plus side, it’s a way around the danger associated with providing write access to multiple clients; each client gets a unique diff file on the server, so each client can safely write to the “shared” file without damaging other clients’ files.
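A minimal sketch of the symlink workaround described above (the device and directory names are illustrative):

```shell
# Keep the link on a conventional filesystem and export the link,
# not the udev-managed device node it points to.
mkdir -p /tmp/nbd-links
ln -sf /dev/sdb1 /tmp/nbd-links/sdb1
readlink /tmp/nbd-links/sdb1
# Export with copy-on-write via the link:
#   nbd-server 2000 /tmp/nbd-links/sdb1 -c
```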
As an example of some of nbd-server’s options in action, consider the following extended command:
# nbd-server 2000 /dev/sdb1 -r -l /etc/nbd.allow
This command provides read-only access to /dev/sdb1 to those clients whose IP addresses appear in /etc/nbd.allow.
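The host list file is simply one permitted client IP address per line. A sketch (the addresses are illustrative, and /tmp is used here only to keep the example self-contained; the command above used /etc/nbd.allow):

```shell
# Build an allow list: one client IP address per line.
printf '%s\n' 192.168.1.10 192.168.1.11 > /tmp/nbd.allow
cat /tmp/nbd.allow
# Start the read-only, access-controlled export:
#   nbd-server 2000 /dev/sdb1 -r -l /tmp/nbd.allow
```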
Preparing a Server Configuration File
Instead of specifying options on the command line, you can create an NBD server configuration file. This file consists of named sections, each section name enclosed in square brackets ([]). Within each section, lines contain parameter/value pairs, with parameters and values separated by equal signs (=). The [generic] section is required and sets global options. Each subsequent named section sets options for particular devices. Listing One presents an example, which demonstrates two NBD exports, one for /dev/sdb1 and one for /dev/fd0.
Listing One: Sample NBD Configuration File
[generic]
  listenaddr = 0.0.0.0
  authfile = /etc/nbd-server/allow

[diskfile]
  exportname = /dev/sdb1
  copyonwrite = true
  port = 2000

[floppy]
  exportname = /dev/fd0
  port = 2001
The easiest way to create a configuration file is to specify the options you want and use the -o section_name option to nbd-server, where section_name is the name you want to give to the section. You can then cut-and-paste the output into your configuration file. Be sure each section specifies a unique port number! You may also want to peruse the README file that comes with the NBD source package; it provides examples of a few additional options that the configuration file supports.
With the configuration file created, you can pass it to nbd-server via its -C option. The server will then share all the listed block devices, each on its specified port and with its specified options, with just one call to nbd-server.
Preparing an NBD Client
With an NBD-enabled kernel running, you can use the nbd-client program on the client computer to map a remote NBD to a local device file:
# nbd-client nbdserver 2000 /dev/nbd0
In this example, nbdserver is the hostname or IP address of the server computer, 2000 is the port number associated with the device you want to use, and /dev/nbd0 is the local device filename you want to link to the remote server. A few caveats and tricks require attention:
  • You may need to explicitly load the nbd module before running nbd-client.
  • Once the nbd module is loaded, some systems (including Fedora and Ubuntu systems I’ve checked) automatically create /dev/nbd# devices, where # is a number from 0 up. Upon creation, these device files are not assigned to any server, but you should specify one of these devices in your nbd-client command line.
  • The nbd-client program fails silently if it can’t find an NBD server on the specified computer and port. If it does find an NBD server, it displays a couple of lines of text summarizing the size of the device it finds. If something goes wrong during this process, it may hang with the word Negotiation on the screen. This looks like a prompt, but it isn’t. If you see this, chances are there’s something wrong with your NBD server configuration.
  • The nbd-client program must be run as root.
Although the basic nbd-client command specified earlier works fine in many cases, the program supports some additional options, including:
  • -swap tells the system that the device should be used as swap space. This helps prevent deadlocks.
  • timeout=seconds specifies the timeout period for NBD operations.
Using NBDs
With your NBD client pointing to a network-accessible NBD server, you can begin using NBD. You can treat your /dev/nbd# device just as you would any local block device. Typically, you’ll mount it as a filesystem (perhaps first creating a filesystem on it) with mount:
# mount /dev/nbd0 /mnt/nbd
This command mounts the device at /mnt/nbd. If the device was exported for read/write operations, you’ll be able to write to it, with the caveat that two clients should not connect to the same read/write device simultaneously unless you use the -c option on the server!
Of course, you can also perform other operations on an NBD, such as use mkfs to create a filesystem, check a filesystem with fsck, and so on. Many of these actions require that you have read/write access to the device, though.
When you’re done using an NBD that you’ve mounted, you can unmount it using umount, just as you would a local filesystem. You can then type nbd-client -d /dev/nbd0 (substituting the correct device filename) to terminate the client/server NBD network link. Although this last step may not always be strictly necessary, it can help you avoid confusion over what devices are active, and it causes the server to delete diff files it created if you used the -c option on the server.
One of the problems with NBDs is that, if a network connection goes down or a server crashes, clients will have a hard time recovering. Data may be lost, much as in a local disk crash. To guard against such problems, I recommend using NBDs for brief periods, if possible; if you don’t need access to the NBD now, unmount it. Try to avoid using NBDs over anything but local networks. NBD traffic is unencrypted, so passing it over wireless networks or the Internet can be risky from a security point of view.
Overall, NBDs can fill some specialized needs in a Linux network. If NFS or SMB/CIFS just doesn’t seem to fill your needs, give NBDs a try.

Redhat Enterprise Linux 6 Stuff


Ten Essential Linux Admin Tools


System Administrators (SAs) need a set of tools with which to manage their often unmanageable systems and environments*. These ten essential Linux administration tools provide excellent support for the weary SA. Those listed aren’t your standard list of tools deemed essential by industry bystanders. These are tools that have proven track records and have stood the test of time in the data center.
  1. Webmin – Webmin is the ultimate web-based management platform for Linux and several other operating systems. Written in Perl, it simplifies and streamlines standard administrative tasks. Additionally, Webmin helps you configure very complex implementations of Apache, MySQL and SendMail. If you haven’t experienced Webmin, you should; it’s the essential administration tool.
  2. byobu – If you’re a screen user, byobu is the next step. If you haven’t used screen, you should try byobu. Byobu is a Japanese word for the decorative screens or room dividers that often adorn Japanese homes. Hence, the name for a more decorative form of the screen utility. Linux people are nothing if not clever in their naming of projects.
  3. tcpdump – It sounds crazy but you’d be surprised by how many times that System Administrators need to analyze network packets to help troubleshoot obscure problems that plague their systems. Tcpdump is the right tool for the job of analyzing network traffic. It isn’t beautiful or elaborate but it does exactly what its name advertises: It dumps IP-related traffic to the screen or to a file for analysis.
  4. Virtual Network Computing (VNC) – In its many incarnations (TightVNC, UltraVNC, RealVNC), VNC has become one of the most readily recognized and widely utilized remote access tools in the System Administrator’s toolbox. Its broad acceptance is due in part to its platform-independence. VNC is easy to install, simple to configure and available for almost every contemporary operating system.
  5. GNOME Partition Editor (GParted) – What’s better than fdisk? GParted. You have to love the power of this program, since you can boot to a Live CDROM and create, delete and resize your partitions without destroying any existing data. And, it works on almost every imaginable filesystem, even NTFS. For best results, download a Live CD/USB/PXE version and keep it handy.
  6. DenyHosts – DenyHosts is a Python script that allows you to actively monitor your systems for attempted unauthorized logins via SSH and subsequently deny access to the originating host system. DenyHosts records the denied entries in /etc/hosts.deny. No System Administrator should bring up a system without it.
  7. Nagios – Nagios is an extensive and somewhat complex network monitoring tool. It has the ability to monitor a variety of hosts, services and protocols. It is an enterprise class tool that is essential in every network regardless of size or complexity. With Nagios, you can monitor, alert, resolve and report on network problems. It also has trending and capacity planning capabilities. Nagios is an extremely extensible tool through its plugins, addons, extensions and modules.
  8. Linux Rescue CD – Numerous rescue CDs exist for every task or imaginable situation. There are three notable standouts in the crowd for those of you who don’t have one of these in your arsenal: The Ubuntu Rescue Remix, Parted Magic and GRML. Ubuntu Rescue Remix is a command line-based data recovery and forensics tools compilation (CD or USB). Parted Magic is a super diagnostic and rescue CD/USB/PXE that contains extensive documentation. GRML is a Debian-based live CD that contains a collection of System Administrator tools for system rescue, network analysis or as a working Linux distribution.
  9. Dropbox – Dropbox, as described in “Dropbox: Painless and Free Backup” is an essential backup and cross-platform file exchange tool. With Dropbox, you can leave home without your essential toolbox but still keep it with you wherever you go.
  10. Darik’s Boot and Nuke (DBAN) – Described by its developers as “a self-contained boot disk that securely wipes the hard disks of most computers”, DBAN is an essential decommissioning tool for those who have to dispose of systems that are no longer in service. DBAN also assures System Administrators that data from any previous operating system installations will be unrecoverable. DBAN isn’t the fastest tool on the planet but it is very thorough and wipes all detectable disks securely and completely.
* It’s unfortunate that no set of tools exist to manage the unmanageable users in our midst.

FBCMD: Command Line for Facebook


The Basics
There are a few prerequisites for installing the command line Facebook application, FBCMD. PHP 5.x is a requirement, since the application is a single PHP file. You can install php5 via any method you wish but, in my experience, I had to install php5-cli, php5-gd, and php5-mysql to use this application. Your experience and mileage may vary.
Connect to the FBCMD site and download the PHP file, or grab it easily via curl.
$ curl -O https://github.com/dtompkins/fbcmd/raw/master/fbcmd_update.php
And, run the following two commands to complete this very simple installation.
$ sudo php fbcmd_update.php sudo

fbcmd update utility [version 2.91]

http://fbcmd.dtompkins.com/update

$ php fbcmd_update.php install

fbcmd update utility [version 2.91]

http://fbcmd.dtompkins.com/update

.....................

Update: COMPLETE!

fbcmd version: [none] --> [1.0-beta5-dev1]

Note: fbcmd_update.php is now at [/usr/local/lib/fbcmd/fbcmd_update.php]
so you can remove the old one at [/home/khess/fbcmd_update.php]

type fbcmd to begin
As the prompt suggests, type fbcmd and see what happens.
$ fbcmd

Welcome to fbcmd! [version 1.0-beta5-dev1]

This application needs to be authorized to access your facebook account.

Step 1: Allow basic (initial) access to your acount via this url:

https://www.facebook.com/dialog/oauth?client_id=42463270450&redirect_uri=http://www.facebook.com/connect/login_success.html

to launch this page, execute: fbcmd go access

Step 2: Generate an offline authorization code at this url:

http://www.facebook.com/code_gen.php?v=1.0&api_key=42463270450

to launch this page, execute: fbcmd go auth

obtain your authorization code (XXXXXX) and then execute: fbcmd auth XXXXXX
These messages explain the steps you need to take next to grant FBCMD access to your Facebook information.
Making the Facebook Connection
Perform the following steps from a Linux desktop system, because some of these commands use Firefox (or your default browser) to initiate the connections and set up the application. Open a Terminal and type in the following commands.
fbcmd go access
Your Internet browser will open to Facebook and prompt you for login. If you see a link that reads, “Login with Command Line” or something similar, select that link and login to Facebook. If you don’t see that link, login to Facebook the way you normally do. Return to your Terminal window and issue the following command:
$ fbcmd go auth
This command prompts another connection to Facebook, where you should see a six character code that you’ll need for the next step.
fbcmd auth XXXXXX

fbcmd [v1.0-beta5-dev1] AUTH Code accepted.
Welcome to FBCMD, Kenneth Hess!

most FBCMD commands require additional permissions.
to grant default permissions, execute: fbcmd addperm
As instructed, issue the command in the message.
$ fbcmd addperm

launching: https://www.facebook.com/dialog/oauth?client_id=42463270450&redirect_uri=http://www.facebook.com/connect/login_success.html&scope=create_event,friends_about_me,friends_activities,friends_birthday,friends_checkins,friends_education_history,friends_events,friends_groups,friends_hometown,friends_interests,friends_likes,friends_location,friends_notes,friends_online_presence,friends_photo_video_tags,friends_photos,friends_relationship_details,friends_relationships,friends_religion_politics,friends_status,friends_videos,friends_website,friends_work_history,manage_friendlists,manage_pages,offline_access,publish_checkins,publish_stream,read_friendlists,read_mailbox,read_requests,read_stream,rsvp_event,user_about_me,user_activities,user_birthday,user_checkins,user_education_history,user_events,user_groups,user_hometown,user_interests,user_likes,user_location,user_notes,user_online_presence,user_photo_video_tags,user_photos,user_relationship_details,user_relationships,user_religion_politics,user_status,user_videos,user_website,user_work_history
Your FBCMD to Facebook connection is now complete and you’re ready to use FBCMD. To test that assertion, try the following command to see the permissions you granted the application.
$ fbcmd showperm
PERMISSION                    GRANTED?
ads_management                0
create_event                  1
email                         0
friends_about_me              1
friends_activities            1
friends_birthday              1
friends_checkins              1
friends_education_history     1
friends_events                1
friends_groups                1
...
user_videos                   1
user_website                  1
user_work_history             1
xmpp_login                    0
A ‘1’ means permission granted and a ‘0’ means permission denied. You can change permissions at any time by issuing the addperm keyword and a permission. See the FBCMD Command Documentation for a complete listing of command keywords and syntax.
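Because the output is plain whitespace-separated columns, it scripts easily. A sketch that extracts only the granted permissions (a fragment of the output is inlined here so the example is self-contained; in practice you would pipe fbcmd showperm straight into awk):

```shell
# Simulate a fragment of `fbcmd showperm` output.
cat <<'EOF' > /tmp/perms.sample
PERMISSION                    GRANTED?
create_event                  1
email                         0
user_status                   1
EOF
# Skip the header line; print names whose GRANTED? column is 1.
awk 'NR > 1 && $2 == 1 {print $1}' /tmp/perms.sample
```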
Using FBCMD
I can’t show you all of the FBCMD commands but I can show you a few of the fun ones. You can do almost anything with the command line interface that you can with the web interface. Your results may vary but generally speaking everything works pretty well. To see a list of your friends who are signed into Facebook, use fonline.
$ fbcmd fonline

NAME          ONLINE_PRESENCE
Friend One    idle
Friend Two    idle
Friend Three  idle
Friend Four   idle
Friend Five   active
Friend Six    active
Friend Seven  active
To see a list of messages that your friends have posted to your wall, use mywall.
$ fbcmd mywall
[#]  NAME          MESSAGE

[1]  Friend One  Hi , Hope you are good
You can read your Facebook messages with the inbox keyword.
$ fbcmd inbox
[#]   FIELD     VALUE

[1]   subject   [Hello]
      :to/from  Friend Four
      :snippet  Hi, what's up?
To check those annoying event invitations that people send you, use events.
$ fbcmd events
[#]  START_TIME        RSVP         EVENT
[1]  Wed May 25 02:00  not_replied  Towel Day - Celebrating Douglas Adams
[2]  Sat Jul 16 10:00  declined     William Bernhardt Small-Group Seminar
         (Level 3)
And, last but not least, you can update your status. You wouldn’t want anyone to miss any aspect of your fascinating existence or your latest video game scores.
$ fbcmd post "This is a test post from FBCMD"
POST_ID
1443542993_205008538849
If you’re a PHP programmer, I suggest that you expand and extend this application by contacting the primary developer. See the Contribute page for more information.
For those of you who love to use Facebook, you’re sure to love an easy to install, easy to use command line Facebook application like FBCMD. FBCMD has a lot of potential as an evolving command line application that I hope someone incorporates into a repository so that it’s even easier to install for those who don’t like to install applications. Those of us who like a challenge are in the minority. Most people just want something that works and works without hassle or strain. Make it so, Linux fans.

Wireshark: An Ethereal Experience


On a scale of one to ten, where one is dental surgery and ten is winning a $100 million Powerball lottery, network protocol analysis falls somewhere in the range of three or four. It isn’t exactly painful but it certainly doesn’t arouse any fireworks or thoughts of fireworks in your soul. Wireshark, however, makes network packet sniffing and analysis easy and almost fun.
Wireshark is a network protocol analyzer tool, which means that it captures and interprets live network traffic data for offline analysis. Sometimes referred to as packet sniffing, packet analysis helps you understand what’s going on network-wise so that you can assess and mitigate problems with bandwidth, security, malicious activity and normal network usage.

Wireshark is free software licensed under the GPL.
The Basics
To install Wireshark and its dependencies on Debian-based systems, enter the standard apt-get command.
$ sudo apt-get install wireshark
For rpm-based systems, enter the equivalent yum command.
$ sudo yum install wireshark
On some systems, you might be surprised when you look for Wireshark under Applications -> Internet and you don’t find it. Nor do you find it by entering wireshark & in a terminal window. These systems install the non-GUI applications such as tshark, editcap and rawshark, sometimes known as the wireshark-common components. To install the familiar Wireshark GUI, refer to wireshark-gnome or wireshark-gtk+ in your install command.

Download the source code from the Wireshark Download page and compile in the usual way, if you’re not satisfied with pre-built binaries. There are a few dependencies needed for a source code compilation but the configure script informs you of these as it proceeds and fails.

Using Wireshark
Once installed, you’ll want to jump right in and start sniffing away at your network traffic. You might run into a roadblock or two if you “jump this shark” too quickly. For one, you have to use a privileged account, such as root, that has the ability to place one or more of your network interfaces into promiscuous mode. Second, you must perform a bit of configuration prior to gathering your data. Let’s look at a simple session.

Open Wireshark by locating its icon under Applications->Internet (GNOME). As Figure 1 shows, Wireshark is a typical-looking GUI application.

Figure 1: Getting Started with Wireshark Capture Options

To configure a capture, click Capture from the menu and then select Options to launch the Capture Options entry screen. See Figure 2.

Figure 2: Configuring Wireshark for a Capture Session


Select the network interface that you want to use for packet capture (eth0, for example), the Link-layer header type (Ethernet), promiscuous mode, a capture filter, a capture file, display options and name resolution options. There’s a lot of information on this screen, so let’s take a minute to examine the options.

If you don’t select “promiscuous” mode, then your capture will only see packets addressed to your system. It will see broadcast and multicast packets but you won’t see the bulk of the network traffic as it passes by your system. Promiscuous mode is the default behavior for wire sniffing. Specify a file to collect your captured data for offline viewing and analysis. The display options are a matter of personal preference and you’ll have to find which options suit you. The name resolution options, when checked, instruct Wireshark to attempt name resolution from MAC addresses and from IP addresses. Name resolution makes reading logs easier for those not accustomed to looking at Hex codes and dot notation IP numbers.

Begin your capture by clicking the Start button at the bottom of the Capture Options page. Future captures will use these settings until you return to this page and make changes. Refer to Figure 3 for a sample capture in progress.

Figure 3: Capturing Packets in Wireshark

Stop the packet capture by clicking the Stop Capture menu icon or select Capture->Stop from the menu. This halts the packet capture and saves the information to the file specified on the Capture Options page. You can’t read this file in word processing or text processing programs as is. You also can’t read it at the command line with cat, more or less. To read your data in other programs, export the captured data to another format (Plain text, CSV, PostScript, XML).

Simple Wireshark Cases
You installed Wireshark to perhaps figure out where security breach attempts originate or to find some network bottlenecks that affect your systems. Let’s take the first situation, attempts on your system, as an example.

During the packet capture, you noticed some dark red colored entries flash by on the Wireshark screen. Scroll down in the list until you see the red entries. These red entries tell you that there is a serious or error condition in the capture that you need to investigate. Refer to Figure 4.

Figure 4: Wireshark Displaying Red (Error) Entries in a Packet Capture


As the packet info shows, there was an attempt made on the local system running Wireshark (192.168.1.77) from xenalive (192.168.1.72) in the form of a telnet connection. This is likely someone looking for an easy way into a system that has telnet enabled. You have enough information (system name, MAC address, IP address) to find the culprit and ask him what his purpose is in attempting a connection to your system.

What does a normal connection attempt look like in Wireshark? To answer that question, you have to capture data while such an attempt is in progress. See Figure 5 for an SSH attempt.

Figure 5: Investigating SSH Packets in a Wireshark Capture

You see that the xenalive system made an SSH connection to the local system. SSH is an allowed protocol and you’ll see hundreds of these in a log where you have users connecting to a system.
What about failed attempts on a legitimate protocol? Does Wireshark capture those? Yes and no. Yes, it captures the connection attempts but doesn’t alert or mark them in any special way other than what you saw in Figure 5. Wireshark is not an intrusion detection system. You’ll need to check your system logs for those entries.
# grep Failed auth.log
Oct 28 21:03:25 filer sshd[4740]: Failed none for invalid user fred from 192.168.1.72 port 14066 ssh2
Oct 28 21:03:28 filer sshd[4740]: Failed password for invalid user fred from 192.168.1.72 port 14066 ssh2
Oct 28 21:03:30 filer sshd[4740]: Failed password for invalid user fred from 192.168.1.72 port 14066 ssh2
Oct 28 21:03:33 filer sshd[4740]: Failed password for invalid user fred from 192.168.1.72 port 14066 ssh2
Oct 28 21:03:36 filer sshd[4740]: Failed password for invalid user fred from 192.168.1.72 port 14066 ssh2
Oct 28 21:03:39 filer sshd[4740]: Failed password for invalid user fred from 192.168.1.72 port 14066 ssh2
Oct 28 21:03:42 filer sshd[4740]: Failed password for invalid user fred from 192.168.1.72 port 14066 ssh2
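Once you have log lines like those, a short shell one-liner can tally failed attempts per source IP. This is only a sketch: the sample file below stands in for a real auth log (commonly /var/log/auth.log, though the path varies by distribution), and the second IP is added just to show grouping.

```shell
# Build a small sample log; on a real system, point awk at your auth log.
cat > /tmp/auth_sample.log <<'EOF'
Oct 28 21:03:28 filer sshd[4740]: Failed password for invalid user fred from 192.168.1.72 port 14066 ssh2
Oct 28 21:03:30 filer sshd[4740]: Failed password for invalid user fred from 192.168.1.72 port 14066 ssh2
Oct 28 21:03:33 filer sshd[4740]: Failed password for invalid user fred from 192.168.1.73 port 14102 ssh2
EOF
# For each "Failed password" line, grab the field after "from" (the source
# IP) and count occurrences per IP.
awk '/Failed password/ {
       for (i = 1; i <= NF; i++) if ($i == "from") ip = $(i+1)
       count[ip]++
     }
     END { for (ip in count) print ip, count[ip] }' /tmp/auth_sample.log
```

A burst of failures from a single IP is exactly the pattern you'd then go hunting for in your packet captures.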
A Word on Filtering
If you don't enjoy seeing a lot of ARP traffic in your captures, you can filter it out by adding a !arp in the Filter field. You don't want to delete this information, but it tends to clutter your view.
Wireshark isn’t the perfect network protocol capture and analysis tool but it comes close. And, you can’t beat the price. Next week, come back for more Wireshark, when we look at some advanced features and actual analysis.

Wireshark, by itself, is an effective analytical tool and it can point you in the right direction for some trouble spots. For example, if someone on your network has an email virus, you can see those packets, their source and their destination. Unfortunately, you’ll see them mixed in with all of the other packets that you’ve captured. The solution is selective filtering.

Casting a Smaller Net
Take one of your recent packet captures and count the number of "Who Has" broadcasts that you see. Chances are you have an abundance of them cluttering up your capture. These are ARP requests, and they tend to annoy rather than assist in your quest to find problems. Don't misunderstand that statement: ARP requests are important and can point to problems on your network, but unless an ARP "storm" is the root of your problem, there are too many of them and they distract your attention from the real issues at hand.

You can resolve this problem by using a filter when you perform a packet capture. Using that same recent packet capture, enter “!arp” into the Filter field (See Figure 1) and press the ENTER key to accept. All of the ARP entries should disappear. Now you can focus on potential problems without the extraneous matter fogging your vision.

Figure 1: Removing the ARP Entries from a Packet Capture


If you don’t know the correct filter syntax, you can click the Filter button, scroll through the list of common filter selections and choose the one you want to use. Try selecting No ARP and no DNS from the list to see how much your capture changes.
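A few display filter expressions come up constantly; all of these use standard Wireshark display-filter syntax (the host address is just an example):

```
!arp                       hide ARP traffic
!arp && !dns               hide ARP and DNS
ip.addr == 192.168.1.72    only traffic to or from one host
tcp.port == 23             only telnet traffic
```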
Alternatively, you can select a single packet type of interest and filter on that selection. Select a single packet, right click it, select Apply as Filter and click Selected to accept the change. See Figures 2 and 3 for reference. Note the change in your display. You can apply filters before or after a packet capture event. To return to your original capture, click the Clear button.

Figure 2: Applying a Packet Filter


Figure 3: Viewing the Filtered Results




Sometimes it's helpful to grab a quick capture while you're observing an event in progress, for example, when you see that a network attack is underway. The quickest way to bring up a Wireshark capture is with your command line skills. Rather than wrestling with a GUI, you can use a simple command to start Wireshark and begin a packet capture as soon as you notice something fishy happening with your system.
Enter the following in a terminal window.
# wireshark -i eth0 -k
Wireshark starts up and immediately (using the -k switch) begins capturing packets on eth0 with no interaction needed from you. Click the Stop Capture button when finished. You're correct if you noticed that this capture had no filters. And you're also correct if you wondered whether command line captures can include filters. Look at the following example, using the filter discussed earlier.
# wireshark -i eth0 -k -f "not arp"
This launches Wireshark on eth0 immediately (-k) with no ARP messages included in the capture. The command line alternative allows a rapid response when conditions change quickly and timing is important.
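If you're working on a server with no graphical display at all, Wireshark's terminal companion tshark (and the lighter dumpcap) accept similar options. A sketch, assuming the tools are installed and you have capture privileges; the output path is illustrative:

```
# capture on eth0, excluding ARP, writing to a file for later analysis
tshark -i eth0 -f "not arp" -w /tmp/quick.pcap
# dumpcap records only, with even less overhead
dumpcap -i eth0 -w /tmp/quick.pcap
```

Either file can be opened in the Wireshark GUI later for the full analysis treatment.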

Collaborative Analysis
What happens when you’ve captured thousands of packets and you still can’t figure out what’s going on? A second, third or fourth set of eyes on a problem couldn’t hurt. There is a collaborative method that allows you and your colleagues to ponder over Wireshark packet captures simultaneously and offline.
You can upload your packet capture to one of the free online services for that efficient and collective view. One such site is CloudShark. See Figure 4. CloudShark is a free service that allows you to upload your packet captures without the need for user registration. Connect, upload, distribute the URL for your capture and while away the hours on this worthy pursuit.

Figure 4: Using CloudShark to View a Packet Capture Online


One reader shared Network Timeout as an alternative capture upload and analysis site.
Wireshark offers you one method for packet capture and analysis for your networks. It is a powerful tool that can help you maintain a safe and well-running network. A word of caution for those of you who want to use Wireshark for unsavory purposes: Most corporate networks frown upon port scanning and packet sniffing unless you have a job title that includes such activities. Please don’t allow your use of Wireshark to take you down hook, line and sinker.


Intro to Linux Pluggable Authentication Modules


Every time you log into a Linux system, you’re using the Pluggable Authentication Modules (PAM) behind the scenes. PAM simplifies Linux authentication, and makes it possible for Linux systems to easily switch from local file authentication to directory based authentication in just a few steps. If you haven’t thought about PAM and the role it plays on the system, let’s take a look at what it is and what it does.

Actually, PAM is about more than logging into the system itself. Applications can use the PAM libraries to share authentication — so users can use a single username and password for many applications. The rationale behind PAM is to separate authentication from granting privileges. It should be up to the application how to handle granting an authenticated user privileges, but authentication can be handled separately.

Here's a simple way of looking at it: imagine going to an all-ages show at a local club. At the door, the bouncer checks ID and tickets. If you've got a valid ticket and an ID that shows you're over 21, you get a green wristband. If you've got a valid ticket and an ID that shows you're under 21, you get a red wristband. Once in the club, it's up to the bartender to grant privileges to buy alcohol (or not), and the club staff to grant seating privileges or direct you to the floor for general admission.
There’s no beer or music involved, but PAM is meant to work in a similar fashion.

Understanding PAM

Out of the box, most Linux installations are configured to use file-based authentication. Note that other systems also have PAM implementations, but for the purpose of this article we’ll stick to Linux.
For file-based authentication on modern Linux systems, users log in and their username and password combination is compared against /etc/shadow. Traditionally this was held in /etc/passwd, but the problem was that many programs needed to be able to read /etc/passwd. This meant that, in effect, anyone with local access could attempt to crack passwords, and without going into the details here, it was not beyond the realm of possibility that they'd be successful. This is doubly true when users are allowed to pick their own passwords with no form of password policy enforcement.

So now user passwords are held in /etc/shadow, while things like the user shell and group are stored in /etc/passwd.

For single-user systems or small shops, this sort of file-based authentication is manageable. If you’re working with a small number of users on a handful of machines, it’s not difficult at all to deal with user account creation and user management manually using the standard tools provided by the distros.
But imagine if you have a 50-server environment which requires user synchronization across all systems. Suddenly you start dealing with issues of scale. You want to be able to use a directory service like OpenLDAP, or Microsoft’s Active Directory. But how? By switching away from the standard *nix password file method, and switching to an authentication module that supports the method you want to use.

Writing a module for PAM is well beyond the scope of this article. You shouldn’t need to anyway — plenty of modules exist already for any solution you’d want to use.

Take a look under /etc on a Linux system. On most popular distributions like Ubuntu Linux or Red Hat Enterprise Linux you'll find a directory, pam.d, that contains several files. Sometimes the configuration is held in /etc/pam.conf, but on many systems it's broken out into several files by application. Remember, PAM is about more than just the initial login; it can also be used by other system applications that require authentication.

Let’s stick with login for now. Look at /etc/pam.d/login. This is the file used for the shadow login service. Here you’ll see quite a few directives for configuring the types of logins allowed, the type of authentication to be used, how long to delay another login if one fails, and much more. Here’s an example:
auth optional pam_faildelay.so delay=3000000
Basically, you're calling the pam_faildelay module on authentication. If the user fails the attempt, it imposes a delay so that any attacker trying to brute-force a way into the system spends more time trying user/password combinations. Other PAM modules exist, such as pam_succeed_if, which only allows authentication to proceed when an additional requirement is met, such as membership in a certain group or a UID within a certain range.
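A couple of illustrative lines using pam_succeed_if (the group name is hypothetical; which service file they belong in depends on what you're gating):

```
# Only regular users (UID 1000 or higher) may authenticate here
auth required pam_succeed_if.so uid >= 1000
# ...and only if they belong to the sshusers group
auth required pam_succeed_if.so user ingroup sshusers
```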

What if you want to change the type of authentication the system is using? Then you want to look at /etc/pam.d/common-auth, which defines the type of authentication being used to log into the system. It’s what points the system to /etc/shadow in the first place.

Here you can configure the system to use OpenLDAP, or other directory services. But there’s one more piece that needs to be changed, /etc/nsswitch.conf. This file tells the system what name services and directories to use for authentication, as well as where to look for protocol information (usually /etc/protocols, logically enough) and more. It’s sort of like your system’s Little Black Book, or the index to a Little Black Book.

Again, this goes back to the days when systems had One True Login and One True DNS, rather than a bunch of options. Now you can configure things so that the system uses OpenLDAP or Microsoft Active Directory (via Likewise or Centrify) for authentication rather than static files. Another benefit of PAM is that it logs both successful and failed attempts in common places, which allows you to use products specializing in reporting to track whether logins are succeeding or failing.
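An illustrative /etc/nsswitch.conf fragment for the directory-service case, consulting local files first and then LDAP:

```
passwd: files ldap
group:  files ldap
shadow: files ldap
```

With lines like these in place, lookups fall through to the directory only when the local files don't have an answer.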

As you can see, there’s a lot going on behind the scenes with PAM. You may have thought that Linux authentication was a simple affair, but there’s a lot of hidden (we hope) complexity and flexibility running the system when you provide your username and password. You’ll also find that Linux is very flexible, and can accommodate just about any authentication mechanism you’d like to use.

As Easy As Openfiler


Managing storage isn't easy, but Openfiler makes it less painful. You can create NFS and CIFS shares, iSCSI targets, web services, LDAP authentication, FTP services and rsync services with Openfiler. You can set up quotas to limit those annoying space hogs and limit renegade connections with network security settings. For universal access to network attached storage, there may be no easier answer than Openfiler.

Openfiler is an appliance, which means that it has a single, specific function. When the system boots the first time, you receive a text-based welcome screen that directs you to use the web interface for Openfiler management.

The Basics
The quick method for the impatient is to download an ISO image from the Openfiler website, burn it to a CD-R, boot from the CD and install. This demonstration uses the Openfiler x86 ISO image. Use at least 512MB of RAM and any standard disk (1GB or larger) for Openfiler. Note: For a very efficient system, you can install Openfiler to a USB pendrive.
You can install Openfiler without any knowledge of Linux or storage systems. You're only a few mouse clicks and a few minutes of patience away from a successful installation. Since Openfiler's management interface is web-based, it's conceivable that someone with no Linux skills could install and manage an Openfiler server. For example, Figure 1 shows the default primary disk setup provided by the Openfiler installation wizard.
Figure 1: Default Disk Layout for Openfiler

Using Openfiler

Figure 2 shows you the initial boot screen directing you to the web-based interface. It’s possible to manage the system from the command line but it’s not recommended for most users.
Figure 2: The Openfiler Console Screen

The first thing you need to do is point a browser to the IP address and port (446) displayed on the Openfiler screen. Next, select the System tab and select the Launch system update link. Figure 3 shows the System Update page that opens to list the updates needed to bring your Openfiler system up to date. Select Update All Packages and Background Update, then click the Install Updates button to update the system.
Figure 3: Openfiler's System Update Application

After your system update completes, it’s time to setup your storage volume(s). To begin this process, select the Volumes tab and click the create new physical volumes link provided. You’re directed to the Block Device Management screen as shown in Figure 4.
Figure 4: Block Device Management - Volume Setup Step One
Select a volume by device name, /dev/sdb1 for example. On the next screen, create any partitions that you want and return to the Block Device Management screen when finished. These screens are basically a web-based fdisk and have nothing to do with presenting storage yet.
Create a new Volume Group by clicking the Volume Groups link in the right-hand pane. Name your new Volume Group, select the physical volume(s) to add and click the Add volume group button as shown in Figure 5.
Figure 5: Creating the Volume Group - Volume Setup Step Two

Now you need to add a Volume to the Volume Group you just created. Select the Add Volume link, choose your Volume Group from the dropdown menu and scroll down to your selected Volume Group. Name the Volume, enter a Volume Description, use the slider (or manually enter a number, such as 1024) to select the amount of space you wish to allocate, select the filesystem type from the dropdown (XFS, ext3, iSCSI) and click the Create button. This example creates a new Volume, Files1, in the Files Volume Group. See Figure 6.
Figure 6: Creating the Volume - Volume Setup Step Three


Figure 7 shows you the results of the Files1 Volume creation and the current status of the Files Volume Group.
Figure 7: The Finished Volume Status
Your volumes aren't ready for use by remote systems quite yet. You need to set up the services that make them available to remote systems and users. To do so, select the Services tab and click the NFS server Enable link to start the NFS service.

Click the Shares tab, select the User Files link and create a new subfolder (Users1) that the system will share via NFS. Select the Users1 link created and click the Make Share button. When you’re redirected to the Users1 share page, scroll down to set access modes, user permissions, host access configurations and click the Update button.
You will now see an entry similar to the following in /etc/exports.
/mnt/files/files1/Users1 192.168.1.0/255.255.255.0(rw,anonid=96,anongid=96,secure,root_squash,wdelay,sync)
Your users may now connect via NFS to the Users1 share.
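From a client inside the allowed network, mounting the share looks something like this; the server address and client mount point are illustrative:

```
# one-off mount
mount -t nfs 192.168.1.50:/mnt/files/files1/Users1 /mnt/users1
# or persistently, via a line in the client's /etc/fstab
192.168.1.50:/mnt/files/files1/Users1  /mnt/users1  nfs  defaults  0 0
```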
Advanced Openfiler
To change the root password, you'll have to boot into single user mode, change the password and reboot. Boot the system and, when you see the boot menu, press the space bar to stop the countdown. Press the 'a' key on your keyboard to append a command to the boot parameters. The GRUB append prompt looks like the following.
grub append> ro root=LABEL=/  quiet
To enter the command, press your SPACE bar once and enter the word “single” without the quotes as shown below.
grub append> ro root=LABEL=/  quiet single
Press the ENTER key to accept and continue booting the system. After a minimal startup, you’ll drop to a single user root prompt.
sh-3.00#
Use the passwd command to change the root password to something you know. Type init 3 at the prompt to continue booting the system into multi-user mode. You can now log in at the login prompt as root or via the web console (System tab->Secure Console).
Your Volume Groups are actually directories under the /mnt directory and the Volumes you create exist under that directory. For example, for this demonstration, the Volume Group and Volume are: /mnt/files/files1. Any shares you create are under this directory tree. Keep this in mind when using Openfiler and creating new Volumes and shares.
This very abbreviated introduction to Openfiler will get you started but is by no means complete or exhaustive. There is a user manual available for a small fee. You can also purchase commercial support for Openfiler through the website.
Openfiler is a free solution for small to medium-sized businesses or for personal use. It provides higher-end storage features with good security and an easy-to-use web interface. I strongly recommend purchasing commercial support for business use. Anything this easy to use is just as easy to turn into an accidental disaster that might prove difficult to recover from.

Five Easy Ways to Secure Your Linux System


On the heels of last week's entry on using DenyHosts, and Nikto the week before that, I thought it appropriate to continue in the security vein with five more simple techniques that you can use to protect your systems: account locking, limiting cron use, denying access to services, refusing root SSH logins and changing SSHD's default port.
There’s no excuse to run insecure systems on your network. Your data’s integrity (and your job) depend on your ability to keep those systems running correctly and securely for your co-workers and customers. Shown here are five simple techniques to make your systems less vulnerable to compromise.

Account Locking
Account locking for multiple failed tries puts extra burden on the system administrators but it also puts some responsibility on the user to remember his passwords. Additionally, locking allows the administrator to track the accounts that have potential hack attempts against them and to notify those users to use very strong passwords.
Typically, a system drops your connection after three unsuccessful login attempts, but you may reconnect and try again. By allowing an infinite number of failed attempts, you're compromising your system's security. Smart system administrators stop this threat by locking the account after a set number of attempts. My preference is to set that limit to three.
Add the following lines to your system’s /etc/pam.d/system-auth file.
auth    required   /lib/security/$ISA/pam_tally.so onerr=fail no_magic_root
account required   /lib/security/$ISA/pam_tally.so per_user deny=3 no_magic_root reset
Your distribution might not include the system-auth file but instead use the /etc/pam.d/login file for these entries.

Cron Restriction
On multiuser systems, you should restrict cron and at to root only. If other users must have access to scheduling, add them individually to the /etc/cron.allow and /etc/at.allow files. If you create these files and add user accounts to them, you also need to create /etc/cron.deny and /etc/at.deny files. You can leave them empty, but they need to exist. Don't create an empty /etc/cron.deny without also adding entries to /etc/cron.allow, because doing so allows global access to cron. The same goes for at.
To use the allow files, create them in the /etc directory and add one user per line. The root user should have an entry in both allow files; with root as the only entry, cron is restricted to the root user only.
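A sketch of the file setup, staged in a scratch directory so you can inspect the results before copying them into /etc as root (the staging path is illustrative; the copy step is shown only as comments):

```shell
# Stage the cron/at allow and deny files before touching /etc.
staging=/tmp/etc_staging
mkdir -p "$staging"
printf 'root\n' > "$staging/cron.allow"   # one username per line
printf 'root\n' > "$staging/at.allow"
: > "$staging/cron.deny"                  # deny files exist but stay empty
: > "$staging/at.deny"
ls -l "$staging"
# as root, when satisfied:
#   cp "$staging"/cron.allow "$staging"/cron.deny /etc/
#   cp "$staging"/at.allow "$staging"/at.deny /etc/
```

To grant scheduling to another user later, append their username on its own line in cron.allow (and at.allow if appropriate).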
As the system administrator, you can allow or deny cron and at usage based upon the user’s knowledge and responsibility levels.

Deny, Deny, Deny
“Deny everything” sounds eerily Presidential, doesn’t it? But for system security and certain political indiscretions, it’s the right answer. System security experts recommend denying all services for all hosts using an all-encompassing deny rule in the /etc/hosts.deny file. The following simple entry (ALL: ALL) gives you the security blanket you need.
#
# hosts.deny    This file describes the names of the hosts which are
#               *not* allowed to use the local INET services, as decided
#               by the '/usr/sbin/tcpd' server.
#
# The portmap line is redundant, but it is left to remind you that
# the new secure portmap uses hosts.deny and hosts.allow.  In particular
# you should know that NFS uses portmap!

ALL: ALL
Edit the /etc/hosts.allow file and insert the network addresses (192.168.1., for example) that you and your users connect from before you log out, or you'll have to log in via the console to correct the problem. Insert entries similar to the following to allow access for an entire network, a single host or a domain. You can add as many exceptions as you need. The /etc/hosts.allow file takes precedence over /etc/hosts.deny to process your exceptions.
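For example, your /etc/hosts.allow might contain entries like these (the addresses and domain are illustrative, in standard tcp_wrappers syntax):

```
# allow SSH from the local network
sshd: 192.168.1.
# allow everything from one trusted host
ALL: 10.0.0.5
# allow everything from a trusted domain
ALL: .example.com
```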

Deny SSH by Root
Removing the root user's ability to SSH in provides indirect system security. Logging in to a system as root removes your ability to see who ran privileged commands on your systems. All users should SSH to a system using their standard user accounts and then issue su or sudo commands for proper tracking via system logs.
Open the /etc/ssh/sshd_config file with your favorite editor, change PermitRootLogin yes to PermitRootLogin no, and restart the ssh service to accept the change.

Change the Default Port
While changing the default SSH port (22) will have limited effectiveness in a full port sweep, it will thwart those who focus on specific or traditional service ports. Some sources suggest changing the default port to a number greater than 1024, for example: 2022, 9922 or something more random, such as 2345. If you’re going to use this method as one of your strategies, I suggest that you use a port that doesn’t include the number 22.
Edit your /etc/ssh/sshd_config and change the “Port” parameter to your preferred port number. Uncomment the Port line too. Restart the sshd service when you’re finished and inform your users of the change. Update any applicable firewall rules to reflect the change too.
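Taken together with the previous tip, the relevant sshd_config lines look like this (the port number is just an example):

```
# /etc/ssh/sshd_config
Port 2345
PermitRootLogin no
```

Restart sshd (service sshd restart, or your distro's equivalent) and confirm you can connect on the new port before closing your existing session.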
System security is important and is a constant battle. You have to maintain patch levels, updates and constantly plug newly discovered security holes in system services. As long as there are black hat wearing malcontents lurking the Net looking for victims, you’ll have a job keeping those wannabe perpetrators at bay.