This blog is intended to collect information on my various interests and to pen my opinions on the information gathered. It is not intended to educate anyone on the information posted, but you are most welcome to share your views on it.
Tuesday, March 27, 2012
Ten Essential Linux Admin Tools
System Administrators (SAs) need a set of tools with which to manage their often unmanageable systems and environments*. These ten essential Linux administration tools provide excellent support for the weary SA. Those listed aren’t your standard list of tools deemed essential by industry bystanders. These are tools that have proven track records and have stood the test of time in the data center.
- Webmin – Webmin is the ultimate web-based management platform for Linux and several other operating systems. Written in Perl, it simplifies and streamlines standard administrative tasks. Additionally, Webmin helps you configure very complex implementations of Apache, MySQL and Sendmail. If you haven’t experienced Webmin, you should; it’s the essential administration tool.
- byobu – If you’re a screen user, byobu is the next step. If you haven’t used screen, you should try byobu. Byobu is a Japanese word for the decorative screens or room dividers that often adorn Japanese homes. Hence, the name for a more decorative form of the screen utility. Linux people are nothing if not clever in their naming of projects.
- tcpdump – It sounds crazy, but you’d be surprised how many times System Administrators need to analyze network packets to help troubleshoot obscure problems that plague their systems. Tcpdump is the right tool for the job of analyzing network traffic. It isn’t beautiful or elaborate but it does exactly what its name advertises: it dumps IP-related traffic to the screen or to a file for analysis.
- Virtual Network Computing (VNC) – In its many incarnations (TightVNC, UltraVNC, RealVNC), VNC has become one of the most readily recognized and widely utilized remote access tools in the System Administrator’s toolbox. Its broad acceptance is due in part to its platform independence. VNC is easy to install, simple to configure and available for almost every contemporary operating system.
- GNOME Partition Editor (GParted) – What’s better than fdisk? GParted. You have to love the power of this program, since you can boot to a Live CDROM and create, delete and resize your partitions without destroying any existing data. And, it works on almost every imaginable filesystem, even NTFS. For best results, download a Live CD/USB/PXE version and keep it handy.
- DenyHosts – DenyHosts is a Python script that allows you to actively monitor your systems for attempted unauthorized logins via SSH and subsequently deny access to the originating host system. DenyHosts records the denied hosts in /etc/hosts.deny (its behavior is configured in /etc/denyhosts.conf). No System Administrator should bring up a system without it.
- Nagios – Nagios is an extensive and somewhat complex network monitoring tool. It has the ability to monitor a variety of hosts, services and protocols. It is an enterprise class tool that is essential in every network regardless of size or complexity. With Nagios, you can monitor, alert, resolve and report on network problems. It also has trending and capacity planning capabilities. Nagios is an extremely extensible tool through its plugins, addons, extensions and modules.
- Linux Rescue CD – Numerous rescue CDs exist for every task or imaginable situation. There are three notable standouts in the crowd for those of you who don’t have one of these in your arsenal: the Ubuntu Rescue Remix, Parted Magic and GRML. Ubuntu Rescue Remix is a command line-based data recovery and forensics tools compilation (CD or USB). Parted Magic is a super diagnostic and rescue CD/USB/PXE that contains extensive documentation. GRML is a Debian-based live CD that contains a collection of System Administrator tools for system rescue, network analysis or use as a working Linux distribution.
- Dropbox – Dropbox, as described in “Dropbox: Painless and Free Backup,” is an essential backup and cross-platform file exchange tool. With Dropbox, you can leave home without your essential toolbox but still keep it with you wherever you go.
- Darik’s Boot and Nuke (DBAN) – Described by its developers as “a self-contained boot disk that securely wipes the hard disks of most computers”, DBAN is an essential decommissioning tool for those who have to dispose of systems that are no longer in service. DBAN also assures System Administrators that data from any previous operating system installations will be unrecoverable. DBAN isn’t the fastest tool on the planet but it is very thorough and wipes all detectable disks securely and completely.
* It’s unfortunate that no set of tools exist to manage the unmanageable users in our midst.
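None of these tools does you any good if it isn’t actually on the box. Here’s a quick, hedged way to take inventory. This is a sketch only; several of these projects, such as Webmin and Nagios, install services rather than identically named binaries, so adjust the command names for your own system.

```shell
#!/bin/sh
# Report which of the tools above are present on this system.
# Command names are approximations; e.g., Webmin runs as a web
# service rather than a "webmin" binary on most installs.
for tool in webmin byobu tcpdump vncviewer gparted nagios; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: installed"
    else
        echo "$tool: not found"
    fi
done
```

Run it before you reach for a tool in an emergency, not after.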
FBCMD: Command Line for Facebook
The Basics
There are a few prerequisites for installing the command line Facebook application, FBCMD. PHP 5.x is a requirement, since the application is a single PHP file. You can install PHP 5 via any method you wish but, in my experience, I had to install php5-cli, php5-gd, and php5-mysql to use this application. Your experience and mileage may vary.
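Before going further, it’s worth confirming that a command line PHP interpreter is available at all. A small sketch (illustrative only; package names like php5-cli are Debian/Ubuntu-era names and vary by distribution):

```shell
#!/bin/sh
# Sanity check before installing FBCMD: is a CLI PHP on the PATH?
if command -v php >/dev/null 2>&1; then
    # Print just the version banner line
    php -v | head -n 1
else
    echo "php not found - install the CLI PHP package for your distro first"
fi
```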
Connect to the FBCMD repository and download the PHP file, or grab it easily via curl.
$ curl -O https://github.com/dtompkins/fbcmd/raw/master/fbcmd_update.php
And, run the following two commands to complete this very simple installation.
$ sudo php fbcmd_update.php
fbcmd update utility [version 2.91]
http://fbcmd.dtompkins.com/update

$ sudo php fbcmd_update.php install
fbcmd update utility [version 2.91]
http://fbcmd.dtompkins.com/update
.....................
Update: COMPLETE!
fbcmd version: [none] --> [1.0-beta5-dev1]
Note: fbcmd_update.php is now at [/usr/local/lib/fbcmd/fbcmd_update.php]
so you can remove the old one at [/home/khess/fbcmd_update.php]
type fbcmd to begin
As the prompt suggests, type fbcmd and see what happens.
$ fbcmd
Welcome to fbcmd! [version 1.0-beta5-dev1]

This application needs to be authorized to access your facebook account.

Step 1: Allow basic (initial) access to your account via this url:
https://www.facebook.com/dialog/oauth?client_id=42463270450&redirect_uri=http://www.facebook.com/connect/login_success.html
to launch this page, execute: fbcmd go access

Step 2: Generate an offline authorization code at this url:
http://www.facebook.com/code_gen.php?v=1.0&api_key=42463270450
to launch this page, execute: fbcmd go auth
obtain your authorization code (XXXXXX) and then execute: fbcmd auth XXXXXX
These messages explain the steps you need to take next to grant FBCMD access to your Facebook information.
Making the Facebook Connection
Perform the following connection steps from a Linux desktop system because some of these commands use Firefox (or your default browser) to initiate the connections and set up the application. Open a Terminal and type in the following commands.
$ fbcmd go access
Your Internet browser will open to Facebook and prompt you for login. If you see a link that reads, “Login with Command Line” or something similar, select that link and log in to Facebook. If you don’t see that link, log in to Facebook the way you normally do. Return to your Terminal window and issue the following command:
$ fbcmd go auth
This command prompts another connection to Facebook, where you should see a six character code that you’ll need for the next step.
$ fbcmd auth XXXXXX
fbcmd [v1.0-beta5-dev1]
AUTH Code accepted.
Welcome to FBCMD, Kenneth Hess!

most FBCMD commands require additional permissions.
to grant default permissions, execute: fbcmd addperm
As instructed, issue the command in the message.
$ fbcmd addperm
launching: https://www.facebook.com/dialog/oauth?client_id=42463270450&redirect_uri=http://www.facebook.com/connect/login_success.html&scope=create_event,friends_about_me,friends_activities,friends_birthday,friends_checkins,friends_education_history,friends_events,friends_groups,friends_hometown,friends_interests,friends_likes,friends_location,friends_notes,friends_online_presence,friends_photo_video_tags,friends_photos,friends_relationship_details,friends_relationships,friends_religion_politics,friends_status,friends_videos,friends_website,friends_work_history,manage_friendlists,manage_pages,offline_access,publish_checkins,publish_stream,read_friendlists,read_mailbox,read_requests,read_stream,rsvp_event,user_about_me,user_activities,user_birthday,user_checkins,user_education_history,user_events,user_groups,user_hometown,user_interests,user_likes,user_location,user_notes,user_online_presence,user_photo_video_tags,user_photos,user_relationship_details,user_relationships,user_religion_politics,user_status,user_videos,user_website,user_work_history
Your FBCMD to Facebook connection is now complete and you’re ready to use FBCMD. To test that assertion, try the following command to see the permissions you granted the application.
$ fbcmd showperm
PERMISSION                  GRANTED?
ads_management              0
create_event                1
email                       0
friends_about_me            1
friends_activities          1
friends_birthday            1
friends_checkins            1
friends_education_history   1
friends_events              1
friends_groups              1
...
user_videos                 1
user_website                1
user_work_history           1
xmpp_login                  0
A ‘1’ means permission granted and a ‘0’ means permission denied. You can change permissions at any time by issuing the addperm keyword and a permission. See the FBCMD Command Documentation for a complete listing of command keywords and syntax.
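If you only care about the permissions you’ve actually granted, you can filter the showperm output with standard tools. A hedged sketch, assuming the two-column name/0-or-1 layout shown above:

```shell
# List only the granted (1) permissions from fbcmd showperm output.
# Assumes the two-column "permission 0|1" layout shown above.
fbcmd showperm | awk 'NF == 2 && $2 == 1 { print $1 }'
```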
Using FBCMD
I can’t show you all of the FBCMD commands but I can show you a few of the fun ones. You can do almost anything with the command line interface that you can with the web interface. Your results may vary but generally speaking everything works pretty well. To see a list of your friends who are signed into Facebook, use fonline.
$ fbcmd fonline
NAME           ONLINE_PRESENCE
Friend One     idle
Friend Two     idle
Friend Three   idle
Friend Four    idle
Friend Five    active
Friend Six     active
Friend Seven   active
To see a list of messages that your friends have posted to your wall, use mywall.
$ fbcmd mywall
[#] NAME        MESSAGE
[1] Friend One  Hi , Hope you are good
You can read your Facebook messages with the inbox keyword.
$ fbcmd inbox
[#] FIELD     VALUE
[1] subject   [Hello]
    :to/from  Friend Four
    :snippet  Hi, what's up?
To check those annoying event invitations that people send you, use events.
$ fbcmd events
[#] START_TIME        RSVP         EVENT
[1] Wed May 25 02:00  not_replied  Towel Day - Celebrating Douglas Adams
[2] Sat Jul 16 10:00  declined     William Bernhardt Small-Group Seminar (Level 3)
And, last but not least, you can update your status. You wouldn’t want anyone to miss any aspect of your fascinating existence or your latest video game scores.
$ fbcmd post "This is a test post from FBCMD"
POST_ID
1443542993_205008538849
If you’re a PHP programmer, I suggest that you expand and extend this application by contacting the primary developer. See the Contribute page for more information.
For those of you who love to use Facebook, you’re sure to love an easy to install, easy to use command line Facebook application like FBCMD. FBCMD has a lot of potential as an evolving command line application that I hope someone incorporates into a repository so that it’s even easier to install for those who don’t like to install applications. Those of us who like a challenge are in the minority. Most people just want something that works and works without hassle or strain. Make it so, Linux fans.
Wireshark: An Ethereal Experience
On a scale of one to ten, where one is dental surgery and ten is winning a $100 million Powerball lottery, network protocol analysis falls somewhere in the range of three or four. It isn’t exactly painful but it certainly doesn’t arouse any fireworks or thoughts of fireworks in your soul. Wireshark, however, makes network packet sniffing and analysis easy and almost fun.
Wireshark is a network protocol analyzer tool, which means that it captures and interprets live network traffic data for offline analysis. Sometimes referred to as packet sniffing, packet analysis helps you understand what’s going on network-wise so that you can assess and mitigate problems with bandwidth, security, malicious activity and normal network usage.
Wireshark is free software licensed under the GPL.
The Basics
To install Wireshark and its dependencies on Debian-based systems, enter the standard apt-get command.
$ sudo apt-get install wireshark
For rpm-based systems, enter the equivalent yum command.
$ sudo yum install wireshark
On some systems, you might be surprised when you look for Wireshark under Applications -> Internet and you don’t find it. Nor do you find it by entering wireshark & in a terminal window. These systems install the non-GUI applications such as tshark, editcap and rawshark, sometimes known as the wireshark-common components. To install the familiar Wireshark GUI, refer to wireshark-gnome or wireshark-gtk+ in your install command.
Download the source code from the Wireshark Download page and compile in the usual way, if you’re not satisfied with pre-built binaries. There are a few dependencies needed for a source code compilation but the configure script informs you of these as it proceeds and fails.
Using Wireshark
Once installed, you’ll want to jump right in and start sniffing away at your network traffic. You might run into a roadblock or two if you “jump this shark” too quickly. For one, you have to use a privileged account, such as root, that has the ability to place one or more of your network interfaces into promiscuous mode. Second, you must perform a bit of configuration prior to gathering your data. Let’s look at a simple session.
Open Wireshark by locating its icon under Applications->Internet (GNOME). As Figure 1 shows, Wireshark is a typical-looking GUI application.
Figure 1: Getting Started with Wireshark Capture Options
To configure a capture, click Capture from the menu and then select Options to launch the Capture Options entry screen. See Figure 2.
Figure 2: Configuring Wireshark for a Capture Session
Select the network interface that you want to use for packet capture (eth0, for example), the Link-layer header type (Ethernet), promiscuous mode, a capture filter, a capture file, display options and name resolution options. There’s a lot of information on this screen, so let’s take a minute to examine the options.
If you don’t select “promiscuous” mode, then your capture will only see packets addressed to your system. It will see broadcast and multicast packets but you won’t see the bulk of the network traffic as it passes by your system. Promiscuous mode is the default behavior for wire sniffing. Specify a file to collect your captured data for offline viewing and analysis. The display options are a matter of personal preference and you’ll have to find which options suit you. The name resolution options, when checked, instruct Wireshark to attempt name resolution from MAC addresses and from IP addresses. Name resolution makes reading logs easier for those not accustomed to looking at Hex codes and dot notation IP numbers.
Begin your capture by clicking the Start button at the bottom of the Capture Options page. Future captures will use these settings until you return to this page and make changes. Refer to Figure 3 for a sample capture in progress.
Figure 3: Capturing Packets in Wireshark
Stop the packet capture by clicking the Stop Capture menu icon or select Capture->Stop from the menu. This halts the packet capture and saves the information to the file specified on the Capture Options page. You can’t read this file in word processing or text processing programs as is. You also can’t read it at the command line with cat, more or less. To read your data in other programs, export the captured data to another format (plain text, CSV, PostScript, XML).
Simple Wireshark Cases
You installed Wireshark to perhaps figure out where security breach attempts originate or to find some network bottlenecks that affect your systems. Let’s take the first situation, attempts on your system, as an example.
During the packet capture, you noticed some dark red colored entries flash by on the Wireshark screen. Scroll down in the list until you see the red entries. These red entries tell you that there is a serious or error condition in the capture that you need to investigate. Refer to Figure 4.
Figure 4: Wireshark Displaying Red (Error) Entries in a Packet Capture
As the packet info shows, there was an attempt made on the local system running Wireshark (192.168.1.77) from xenalive (192.168.1.72) in the form of a telnet connection. This is likely someone looking for an easy way into a system that has telnet enabled. You have enough information (system name, MAC address, IP address) to find the culprit and ask him what his purpose is in attempting a connection to your system.
What does a normal connection attempt look like in Wireshark? To answer that question, you have to capture data while such an attempt is in progress. See Figure 5 for an SSH attempt.
Figure 5: Investigating SSH Packets in a Wireshark Capture
You see that the xenalive system made an SSH connection to the local system. SSH is an allowed protocol and you’ll see hundreds of these in a log where you have users connecting to a system.
What about failed attempts on a legitimate protocol? Does Wireshark capture those? Yes and no. Yes, it captures the connection attempts but doesn’t alert or mark them in any special way other than what you saw in Figure 5. Wireshark is not an intrusion detection system. You’ll need to check your system logs for those entries.
# grep Failed auth.log
Oct 28 21:03:25 filer sshd[4740]: Failed none for invalid user fred from 192.168.1.72 port 14066 ssh2
Oct 28 21:03:28 filer sshd[4740]: Failed password for invalid user fred from 192.168.1.72 port 14066 ssh2
Oct 28 21:03:30 filer sshd[4740]: Failed password for invalid user fred from 192.168.1.72 port 14066 ssh2
Oct 28 21:03:33 filer sshd[4740]: Failed password for invalid user fred from 192.168.1.72 port 14066 ssh2
Oct 28 21:03:36 filer sshd[4740]: Failed password for invalid user fred from 192.168.1.72 port 14066 ssh2
Oct 28 21:03:39 filer sshd[4740]: Failed password for invalid user fred from 192.168.1.72 port 14066 ssh2
Oct 28 21:03:42 filer sshd[4740]: Failed password for invalid user fred from 192.168.1.72 port 14066 ssh2
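When the Failed lines pile up, a quick summary beats raw grep output. A hedged sketch that counts failed attempts per source address, assuming the sshd message format above, where the source IP follows the word "from":

```shell
# Count failed SSH attempts per source IP from an auth log.
# Assumes sshd's "... from <IP> port ..." message layout.
grep 'Failed' auth.log |
awk '{ for (i = 1; i <= NF; i++) if ($i == "from") print $(i+1) }' |
sort | uniq -c | sort -rn
```

The busiest offenders float to the top of the list, ready to be handed to DenyHosts or your firewall.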
A Word on Filtering
If you don’t enjoy seeing a lot of ARP traffic in your captures, you can filter it out by adding !arp in the Filter field. You don’t want to delete this information but it tends to clutter your view.
Wireshark isn’t the perfect network protocol capture and analysis tool but it comes close. And, you can’t beat the price. Next week, come back for more Wireshark, when we look at some advanced features and actual analysis.
Casting a Smaller Net
Take one of your recent packet captures and count the number of “Who Has” broadcasts that you see. Chances are that you have an abundance of them cluttering up your capture. These are ARP requests and they tend to annoy rather than assist in your quest to find problems. Don’t misunderstand that statement: ARP requests are important and can point to problems on your network, but unless an ARP “storm” is the root of your problem, there are too many of them and they distract your attention from the real issues at hand.

You can resolve this problem by using a filter when you perform a packet capture. Using that same recent packet capture, enter “!arp” into the Filter field (see Figure 1) and press the ENTER key to accept. All of the ARP entries should disappear. Now you can focus on potential problems without the extraneous matter fogging your vision.
Figure 1: Removing the ARP Entries from a Packet Capture
If you don’t know the correct filter syntax, you can click the Filter button, scroll through the list of common filter selections and choose the one you want to use. Try selecting No ARP and no DNS from the list to see how much your capture changes.
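A few other common display filter expressions, for reference (standard Wireshark display filter syntax; substitute your own addresses and ports):

```
!arp                      # hide ARP traffic
!arp && !dns              # hide ARP and DNS
ip.addr == 192.168.1.72   # traffic to or from one host
tcp.port == 22            # SSH traffic
tcp.flags.syn == 1        # connection attempts
```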
Alternatively, you can select a single packet type of interest and filter on that selection. Select a single packet, right click it, select Apply as Filter and click Selected to accept the change. See Figures 2 and 3 for reference. Note the change in your display. You can apply filters before or after a packet capture event. To return to your original capture, click the Clear button.
Figure 2: Applying a Packet Filter
Figure 3: Viewing the Filtered Results
Sometimes it’s helpful to grab a quick capture while you’re observing an event in progress, for example, when you see that a network attack is underway. The quickest way to bring up a Wireshark capture is with your excellent command line skills. Rather than wrestling with a GUI, you can use a simple command to start Wireshark and begin the packet capture as soon as you notice something fishy happening with your system.
Enter the following in a terminal window.
# wireshark -i eth0 -k

Wireshark starts up and immediately (using the -k switch) begins capturing packets on eth0 with no interaction needed from you. Click the Stop Capture button when finished. You’re correct if you noticed that this capture had no filters. And, you’re also correct if you wondered whether command line captures can include filters. Look at the following example, discussed earlier.
# wireshark -i eth0 -k -f "not arp"

This launches Wireshark on eth0 immediately (-k) with no ARP messages included in the capture; the -f switch supplies the "not arp" capture filter. The command line alternative allows a rapid response to rapidly changing conditions, when timing is important.
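You could wrap this in a tiny script so the quick-capture command is always a keystroke away. A hypothetical sketch (the script name and defaults are my own, and the -f capture-filter flag is standard Wireshark usage; it echoes the command as a dry run so you can confirm it before launching):

```shell
#!/bin/sh
# quickcap.sh (hypothetical name): build a Wireshark quick-capture
# command for an interface and optional capture filter, then echo it
# as a dry run. Swap the final echo for eval "$cmd" to actually launch.
iface="${1:-eth0}"
filter="${2:-not arp}"
cmd="wireshark -i $iface -k"
if [ -n "$filter" ]; then
    cmd="$cmd -f \"$filter\""
fi
echo "$cmd"
```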
Collaborative Analysis
What happens when you’ve captured thousands of packets and you still can’t figure out what’s going on? A second, third or fourth set of eyes on a problem couldn’t hurt. There is a collaborative method that allows you and your colleagues to ponder over Wireshark packet captures simultaneously and offline: you can upload your packet capture to one of the free online services for that efficient, collective view. One such site is CloudShark. See Figure 4. CloudShark is a free service that allows you to upload your packet captures without the need for user registration. Connect, upload, distribute the URL for your capture and while away the hours on this worthy pursuit.
Figure 4: Using CloudShark to View a Packet Capture Online
One reader shared Network Timeout as an alternative capture upload and analysis site.
Wireshark offers you one method for packet capture and analysis for your networks. It is a powerful tool that can help you maintain a safe and well-running network. A word of caution for those of you who want to use Wireshark for unsavory purposes: Most corporate networks frown upon port scanning and packet sniffing unless you have a job title that includes such activities. Please don’t allow your use of Wireshark to take you down hook, line and sinker.
Intro to Linux Pluggable Authentication Modules
Every time you log into a Linux system, you’re using the Pluggable Authentication Modules (PAM) behind the scenes. PAM simplifies Linux authentication, and makes it possible for Linux systems to easily switch from local file authentication to directory based authentication in just a few steps. If you haven’t thought about PAM and the role it plays on the system, let’s take a look at what it is and what it does.
Actually, PAM is about more than logging into the system itself. Applications can use the PAM libraries to share authentication — so users can use a single username and password for many applications. The rationale behind PAM is to separate authentication from granting privileges. It should be up to the application how to handle granting an authenticated user privileges, but authentication can be handled separately.
Here’s a simple way of looking at it: imagine going to an all-ages show at a local club. At the door, the bouncer checks ID and tickets. If you’ve got a valid ticket and ID that shows you’re over 21, you get a green wristband. If you’ve got a valid ticket and an ID that shows you’re under 21, you get a red wristband. Once in the club, it’s up to the bartender to grant privileges to buy alcohol (or not), and the club staff to grant seating privileges or direct you to the floor for general admission.
There’s no beer or music involved, but PAM is meant to work in a similar fashion.
Understanding PAM
Out of the box, most Linux installations are configured to use file-based authentication. Note that other systems also have PAM implementations, but for the purpose of this article we’ll stick to Linux.
For file-based authentication on modern Linux systems, users log in and their username and password combination is compared against /etc/shadow. Traditionally, this information was held in /etc/passwd, but the problem was that many programs needed to be able to read /etc/passwd. This meant that, in effect, anyone with local access could attempt to crack passwords, and, without going into the details here, it was not beyond the realm of possibility that they’d be successful. This is doubly true when users are allowed to pick their own passwords with no form of password policy enforcement. So now user passwords are held in /etc/shadow, while things like the user shell and group are stored in /etc/passwd.

For single-user systems or small shops, this sort of file-based authentication is manageable. If you’re working with a small number of users on a handful of machines, it’s not difficult at all to deal with user account creation and user management manually using the standard tools provided by the distros.
But imagine if you have a 50-server environment which requires user synchronization across all systems. Suddenly you start dealing with issues of scale. You want to be able to use a directory service like OpenLDAP, or Microsoft’s Active Directory. But how? By switching away from the standard *nix password file method, and switching to an authentication module that supports the method you want to use.
Writing a module for PAM is well beyond the scope of this article. You shouldn’t need to anyway — plenty of modules exist already for any solution you’d want to use.
Take a look under /etc on a Linux system. On most popular distributions, like Ubuntu Linux or Red Hat Enterprise Linux, you’ll find a directory, pam.d, that contains several files. Sometimes the configuration is held in /etc/pam.conf, but on many systems it’s broken out into several files by application. Remember, PAM is about more than just the initial login; it can also be used by other system applications that require authentication.

Let’s stick with login for now. Look at /etc/pam.d/login. This is the file used for the shadow login service. Here you’ll see quite a few directives for configuring the types of logins allowed, the type of authentication to be used, how long to delay another login if one fails, and much more. Here’s an example:

auth optional pam_faildelay.so delay=3000000

Basically, you’re calling the pam_faildelay module on authentication. If the user fails the attempt, it sets a delay so that any attacker trying to brute-force the way into a system will spend more time trying user/password combinations. Other PAM modules exist, such as pam_succeed_if, which only allows an authentication to occur when an additional requirement is met, such as being a member of a certain group or having a UID within a certain range.

What if you want to change the type of authentication the system is using? Then you want to look at /etc/pam.d/common-auth, which defines the type of authentication being used to log into the system. It’s what points the system to /etc/shadow in the first place. Here you can configure the system to use OpenLDAP or other directory services. But there’s one more piece that needs to be changed: /etc/nsswitch.conf. This file tells the system what name services and directories to use for authentication, as well as where to look for protocol information (usually /etc/protocols, logically enough) and more. It’s sort of like your system’s Little Black Book, or the index to a Little Black Book.

Again, this goes back to the days when systems had One True Login and One True DNS, rather than a bunch of options. Now you can configure things so that the system uses OpenLDAP or Microsoft Active Directory (via Likewise or Centrify) for authentication rather than static files. Another benefit of PAM is that it logs both successes and failures in common places, which allows you to use products specializing in reporting functionality to track whether logins are succeeding or failing.
As you can see, there’s a lot going on behind the scenes with PAM. You may have thought that Linux authentication was a simple affair, but there’s a lot of hidden (we hope) complexity and flexibility running the system when you provide your username and password. You’ll also find that Linux is very flexible, and can accommodate just about any authentication mechanism you’d like to use.
As Easy As Openfiler
Managing storage isn’t easy, but Openfiler makes it less painful. You can create NFS and CIFS shares, iSCSI targets, web services, LDAP authentication, FTP services and rsync services with Openfiler. You can set up quotas to limit those annoying space hogs and limit renegade connections with network security settings. For universal access to network attached storage, there may be no easier answer than Openfiler.
Openfiler is an appliance, which means that it has a single, specific function. When the system boots the first time, you receive a text-based welcome screen that directs you to use the web interface for Openfiler management.
The Basics
The quick method for the impatient is to download an ISO image from the Openfiler website, burn it to a CD-R, boot from the CD and install. This demonstration uses the Openfiler ISO x86 image. Use at least 512MB RAM* and any standard disk (1GB or larger) for Openfiler. Note: For a very efficient system, you can install Openfiler to a USB pendrive.
You can install Openfiler without a knowledge of Linux or storage systems. You’re only a few mouse clicks and a few minutes of patience away from a successful installation. Since Openfiler’s management interface is web-based, it’s conceivable that someone with no Linux skills could install and manage an Openfiler server. For example, Figure 1 shows the default primary disk setup provided by the Openfiler installation wizard.
Figure 1: Default Disk Layout for Openfiler
Using Openfiler
Figure 2 shows you the initial boot screen directing you to the web-based interface. It’s possible to manage the system from the command line but it’s not recommended for most users.
Figure 2: The Openfiler Console Screen
The first thing you need to do is point a browser to the IP address and port (446) displayed on the Openfiler screen. Next, select the System tab and select the Launch system update link. Figure 3 shows the System Update page, which lists the updates needed to bring your Openfiler system up to date. Select Update All Packages, Background Update and click the Install Updates button to update the system.
Figure 3: Openfiler’s System Update Application
After your system update completes, it’s time to set up your storage volume(s). To begin this process, select the Volumes tab and click the create new physical volumes link provided. You’re directed to the Block Device Management screen as shown in Figure 4.
Figure 4: Block Device Management – Volume Setup Step One
Select a volume by device name (/dev/sdb1, for example). On the next screen, create any partitions that you want and return to the Block Device Management screen when finished. These screens are basically a web-based fdisk and have nothing to do with presenting storage yet.
Create a new Volume Group by clicking the Volume Groups link in the right-hand pane. Name your new Volume Group, select the physical volume(s) to add and click the Add volume group button as shown in
Figure 5.
Figure 5: Creating the Volume Group – Volume Setup Step Two
Now you need to add a Volume to the Volume Group you just created. Select the Add Volume link and select your Volume Group from the dropdown menu, scrolling down until you see your selected Volume Group. Name the Volume, enter a Volume Description, use the slider (or manually enter a number, such as 1024) to select the amount of space you wish to allocate to that Volume, select the filesystem type from the dropdown (XFS, ext3, iSCSI) and click the Create button to create the new Volume, Files1, in the Files Volume Group. See Figure 6.
Figure 6: Creating the Volume – Volume Setup Step Three
Figure 7 shows you the results of the Files1 Volume creation and the current status of the Files Volume Group.
Figure 7: The Finished Volume Status
Your volumes aren’t ready for use by remote systems quite yet. You need to set up the services that make them available to remote systems and users. To do so, select the Services tab and click the NFS server Enable link to start the NFS service.
Click the Shares tab, select the User Files link and create a new subfolder (Users1) that the system will share via NFS. Select the Users1 link created and click the Make Share button. When you’re redirected to the Users1 share page, scroll down to set access modes, user permissions, host access configurations and click the Update button.
You will now see an entry similar to the following in /etc/exports.
/mnt/files/files1/Users1 192.168.1.0/255.255.255.0(rw,anonid=96,anongid=96,secure,root_squash,wdelay,sync)
Your users may now connect via NFS to the Users1 share.
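On the client side, a quick manual test is to mount the share with mount -t nfs as root; for a permanent mount, an /etc/fstab entry along these lines does the job (the server address 192.168.1.50 and mount point /mnt/users1 here are hypothetical placeholders for your own values):

```
# Client-side /etc/fstab entry (hypothetical server address and mount point)
192.168.1.50:/mnt/files/files1/Users1  /mnt/users1  nfs  defaults  0 0
```

Create the mount point first (mkdir -p /mnt/users1), then run mount /mnt/users1 to test the entry.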
Advanced Openfiler
To change the root password, you’ll have to boot into single-user mode and change it there. Boot the system and, when you see the boot menu, press the space bar to stop the countdown. Press the ‘a’ key to append a command to the boot parameters. The GRUB append prompt looks like the following.
grub append> ro root=LABEL=/ quiet
To enter the command, press your SPACE bar once and enter the word “single” without the quotes as shown below.
grub append> ro root=LABEL=/ quiet single
Press the ENTER key to accept and continue booting the system. After a minimal startup, you’ll drop to a single user root prompt.
sh-3.00#
Use the passwd command to change the root password to something you know. Type init 3 at the prompt to continue booting the system into multi-user mode. You can now log in at the login prompt as root or via the web console (System tab->Secure Console).
Your Volume Groups are actually directories under the /mnt directory and the Volumes you create exist under that directory. For example, for this demonstration, the Volume Group and Volume are: /mnt/files/files1. Any shares you create are under this directory tree. Keep this in mind when using Openfiler and creating new Volumes and shares.
This very abbreviated introduction to Openfiler will get you started but is by no means complete or exhaustive. There is a user manual available for a small fee. You can also purchase commercial support for Openfiler through the website.
Openfiler is a free solution for small to medium-sized businesses or for personal use. It delivers higher-end storage capabilities with good security and an easy-to-use web interface. I strongly recommend purchasing commercial support for business use: anything this easy to use also makes it just as easy to land in an accidentally induced disaster that might prove difficult to recover from.
Five Easy Ways to Secure Your Linux System
On the heels of last week’s entry on using DenyHosts, and Nikto the week before that, I thought it appropriate to continue in the security vein with five more simple techniques you can use to protect your systems: account locking, restricting cron use, denying access to services by default, refusing root SSH logins and changing SSHD’s default port.
There’s no excuse to run insecure systems on your network. Your data’s integrity (and your job) depend on your ability to keep those systems running correctly and securely for your co-workers and customers. Shown here are five simple techniques to make your systems less vulnerable to compromise.
Account Locking
Account locking for multiple failed tries puts extra burden on the system administrators but it also puts some responsibility on the user to remember his passwords. Additionally, locking allows the administrator to track the accounts that have potential hack attempts against them and to notify those users to use very strong passwords.
Typically, a system will drop your connection after three unsuccessful attempts to login but you may reconnect and try again. By allowing an infinite number of failed attempts, you’re compromising your system’s security. Smart system administrators can take the following measure to stop this threat: Account lockout after a set number of attempts. My preference is to set that limit to three.
Add the following lines to your system’s /etc/pam.d/system-auth file.
auth required /lib/security/$ISA/pam_tally.so onerr=fail no_magic_root
account required /lib/security/$ISA/pam_tally.so per_user deny=3 no_magic_root reset
Your distribution might not include the system-auth file but instead use the /etc/pam.d/login file for these entries.
Cron Restriction
On multiuser systems, you should restrict cron and at to root only. If other users must have access to scheduling, add them individually to the /etc/cron.allow and /etc/at.allow files. If you choose to create these files and add user accounts to them, you also need to create /etc/cron.deny and /etc/at.deny files. You can leave them empty, but they need to exist. Don’t create an empty /etc/cron.deny without a corresponding /etc/cron.allow, because an empty deny file on its own allows global access to cron. The same goes for at.
To use the allow files, create them in the /etc directory and add one user per line. The root user should have an entry in both allow files; if root is the only entry, cron and at are restricted to root alone.
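The file layout is easy to sketch. This demo writes to a local demo_etc directory purely for illustration; on a real system the files live in /etc and you edit them as root:

```shell
# Illustrate the allow/deny file layout in a scratch directory (demo_etc);
# on a real system these files are /etc/cron.allow, /etc/at.allow, etc.
mkdir -p demo_etc

# root is always listed in both allow files
printf 'root\n' > demo_etc/cron.allow
printf 'root\n' > demo_etc/at.allow

# the deny files must exist alongside the allow files, but stay empty
: > demo_etc/cron.deny
: > demo_etc/at.deny

# grant scheduling to one additional, trusted user
echo 'jsmith' >> demo_etc/cron.allow

cat demo_etc/cron.allow
```

With only root listed, cron and at are locked down to root; each extra line opens scheduling to exactly one more user.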
As the system administrator, you can allow or deny cron and at usage based upon the user’s knowledge and responsibility levels.
Deny, Deny, Deny
“Deny everything” sounds eerily Presidential, doesn’t it? But for system security (and certain political indiscretions), it’s the right answer. System security experts recommend denying all services for all hosts using an all-encompassing deny rule in the /etc/hosts.deny file. The following simple entry (ALL: ALL) gives you the security blanket you need.
#
# hosts.deny    This file describes the names of the hosts which are
#               *not* allowed to use the local INET services, as decided
#               by the '/usr/sbin/tcpd' server.
#
# The portmap line is redundant, but it is left to remind you that
# the new secure portmap uses hosts.deny and hosts.allow. In particular
# you should know that NFS uses portmap!
ALL: ALL
Edit the /etc/hosts.allow file and insert the network addresses you and your users connect from (192.168.1., for example) before you log out, or you’ll have to log in via the console to correct the problem. Insert entries that allow access for an entire network, a single host or a domain; you can add as many exceptions as you need. The /etc/hosts.allow file takes precedence over /etc/hosts.deny when processing your exceptions.
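For example, /etc/hosts.allow entries along these lines admit an entire network, a single host (for one service) or a whole domain; the addresses shown are placeholders for your own:

```
# /etc/hosts.allow (sample addresses; substitute your own)
# an entire network:
ALL: 192.168.1.
# a single host, for the SSH daemon only:
sshd: 10.0.0.25
# a whole domain:
ALL: .example.com
```

A trailing dot on an address matches a network prefix, and a leading dot matches every host in a domain.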
Deny SSH by Root
Removing the root user’s ability to SSH in provides indirect system security. Logging in to a system as root removes your ability to see who ran privileged commands on your systems. All users should SSH to a system using their standard user accounts and then issue su or sudo commands for proper tracking via the system logs.
Open the /etc/ssh/sshd_config file with your favorite editor, change PermitRootLogin yes to PermitRootLogin no and restart the ssh service to accept the change.
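The edit can also be scripted with sed. The sketch below runs against a scratch copy so nothing on your system changes; for real, apply the same sed line to /etc/ssh/sshd_config as root and then restart sshd:

```shell
# Demo against a scratch copy; edit /etc/ssh/sshd_config itself for real.
printf 'Port 22\nPermitRootLogin yes\n' > sshd_config.demo
sed -i 's/^PermitRootLogin yes$/PermitRootLogin no/' sshd_config.demo
grep '^PermitRootLogin' sshd_config.demo
```

On the real file, follow up with service sshd restart (or systemctl restart sshd on newer distributions).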
Change the Default Port
While changing the default SSH port (22) will have limited effectiveness in a full port sweep, it will thwart those who focus on specific or traditional service ports. Some sources suggest changing the default port to a number greater than 1024, for example: 2022, 9922 or something more random, such as 2345. If you’re going to use this method as one of your strategies, I suggest that you use a port that doesn’t include the number 22.
Edit your /etc/ssh/sshd_config and change the “Port” parameter to your preferred port number. Uncomment the Port line too. Restart the sshd service when you’re finished and inform your users of the change. Update any applicable firewall rules to reflect the change too.
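The same scratch-copy trick sketches the port change; 2345 is just the example value mentioned above:

```shell
# Demo against a scratch copy; edit /etc/ssh/sshd_config itself for real.
printf '#Port 22\n' > sshd_port.demo
# uncomment the Port line and set the new value in one pass
sed -i 's/^#*Port .*/Port 2345/' sshd_port.demo
grep '^Port' sshd_port.demo
```

Remember to restart sshd, tell your users, and open the new port in your firewall (for example, an iptables rule permitting TCP 2345).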
System security is important and is a constant battle. You have to maintain patch levels, updates and constantly plug newly discovered security holes in system services. As long as there are black hat wearing malcontents lurking the Net looking for victims, you’ll have a job keeping those wannabe perpetrators at bay.
Wednesday, March 7, 2012
Android Boa Web Server
Boa is a small-footprint web server that can instantly transform your phone or tablet into a file server. It is primarily used for file sharing (MP3s or movies) and supports the HTTP/1.1 protocol.
Advantages
- Easy to setup, easy to use.
- No client software is needed to download files.
- Files can be served over Wi-Fi or 3G.
- Support for DynDns.
- Support for Directory index.
- Can be used for hosting websites.
- Support for parallel connections.
- Detailed http access and error logs.
https://play.google.com/store/apps/details?id=com.applications.boa&feature=search_result#?t=W251bGwsMSwxLDEsImNvbS5hcHBsaWNhdGlvbnMuYm9hIl0
Sunday, March 4, 2012
How to Install Internet Explorer 8 (IE 8) on Linux
Internet Explorer 8 (also known as IE 8) is the latest but not quite the greatest web browser from Microsoft. It offers several enhancements over its predecessor, including improvements in RSS, Cascading Style Sheets and Ajax support. It also adds features like automatic tab crash recovery, suggested sites, web slices and accelerators (a form of selection-based search).
If you are using Linux and for some reason need to install and use Internet Explorer 8, don’t worry, because it is really quite easy to do. I’ve already shared how I installed and ran Safari 4 on Linux using Wine. To install IE 8 on Linux, you will also need Wine.
Installing Internet Explorer 8 (IE 8) on Linux:
1. Install Wine and winetricks, then set up the following Windows redistributables (you can typically install them all at once with a single command: winetricks corefonts gdiplus msls31 msxml3 riched20 riched32 tahoma):
corefonts
gdiplus
msls31
msxml3
riched20
riched32
tahoma
2. Search for and download msctf.dll, msimtf.dll and uxtheme.dll from HERE, then place the DLLs inside the windows/system32 directory of your Wine C: drive (typically ~/.wine/drive_c/windows/system32).
3. Configure Wine by navigating to Wine --> Configure Wine --> Libraries and set the following DLLs as shown:
"browseui"="native, builtin"
"crypt32"="native, builtin"
"gdiplus"="native"
"hhctrl.ocx"="native, builtin"
"hlink"="native, builtin"
"iernonce"="native, builtin"
"iexplore.exe"="native, builtin"
"itircl"="native, builtin"
"itss"="native, builtin"
"jscript"="native, builtin"
"mlang"="native, builtin"
"mshtml"="native, builtin"
"msimtf"="native, builtin"
"msxml3"="native, builtin"
"riched20"="native, builtin"
"riched32"="native, builtin"
"secur32"="native, builtin"
"shdoclc"="native, builtin"
"shdocvw"="native, builtin"
"shlwapi"="native, builtin"
"url"="native, builtin"
"urlmon"="native, builtin"
"usp10"="native, builtin"
"uxtheme"="native, builtin"
"wininet"="builtin"
"wintrust"="native, builtin"
"xmllite"="native, builtin"
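If you’d rather not click through the Libraries tab for every entry, note that Wine stores these overrides in its registry under HKEY_CURRENT_USER\Software\Wine\DllOverrides, so a .reg file imported with wine regedit can set them in bulk. A short sketch covering just a few of the entries above (extend it with the rest following the same pattern):

```
REGEDIT4

[HKEY_CURRENT_USER\Software\Wine\DllOverrides]
"gdiplus"="native"
"msxml3"="native,builtin"
"riched20"="native,builtin"
"wininet"="builtin"
```

Save it as ie8-overrides.reg and import it with wine regedit ie8-overrides.reg.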
4. Download Internet Explorer 8 from HERE.
5. Navigate to where you saved the IE 8 installer and run it using Wine with this command:
$ wine IE8-WindowsXP-x86-ENU.exe
6. Install IE 8 as normal, but don't select the Windows security updates option during installation as it may cause issues later on.
7. After installation, you will now see Internet Explorer 8 under Wine --> Programs.
Although running IE 8 on Linux is buggy, it renders web pages well. So if you are a web developer, you may find keeping Internet Explorer 8 on Linux handy.