Monday, October 26, 2015

10 outstanding open source server tools

http://www.techrepublic.com/blog/10-things/10-outstanding-open-source-server-tools/

1: phpMyAdmin

If you're looking for a tool to make the management of your MySQL database as easy as possible, phpMyAdmin is what you want. It's easy to install and use and it takes up little room on your server. With phpMyAdmin you can manage databases, tables, columns, relations, indexes, users, permissions, and much more. phpMyAdmin is a web-based interface, which makes managing your databases as simple as point and click.

2: Capistrano

Capistrano is a remote server automation and deployment tool that supports scripting and task automation. You can easily deploy web applications to multiple machines simultaneously or in sequence, perform data migrations, run automatic audits, script arbitrary workflows over SSH, and execute any number of other tasks. Capistrano can also be integrated with any Ruby software.

3: MySQL Tuner

MySQL Tuner is a Perl script designed to assist you with the configuration and performance tuning of a MySQL database server. The only caveat to using MySQL Tuner is that it is a read-only script. You don't run the script and then watch it tune your DB server. This script will examine your MySQL server and then report its findings. You can then make suggested changes to your server to increase performance. With that in mind, you'll want to have a solid understanding of MySQL before you dive into using the tuner.
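Typical usage is just to fetch the script and point it at your server. The sketch below is a dry-run that only echoes the commands rather than executing them; the download URL and flags are common MySQL Tuner conventions, so verify them against the project's own documentation before use.

```shell
# Dry-run helper: print each command instead of running it.
run() { echo "$@"; }

run wget http://mysqltuner.pl/ -O mysqltuner.pl                    # fetch the script
run perl mysqltuner.pl --host 127.0.0.1 --user root --pass secret  # report only; changes nothing
```

Remove the echo wrapper (make run execute "$@") when you actually want to run it against a live server.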

4: ConfigServer Security & Firewall

ConfigServer Security & Firewall is a "Stateful Packet Inspection (SPI) firewall, Login/Intrusion Detection and Security application for Linux servers." It's made up of a suite of scripts that offer a ton of features: SPI IPTables firewall, login failure checking, POP3/IMAP login failure detection, excessive connection blocking, SU login notification, SSH port auto-configuration, traffic blocking on unused server IP addresses, and much more. ConfigServer also integrates with cPanel, Webmin, and DirectAdmin.

5: Webmin

Webmin has been around for a long time—with good reason. As an easy-to-install and simple-to-use GUI tool for server admin, Webmin has proved itself year after year. You can use it to administer every aspect of your server—including Apache, MySQL, DNS, file sharing, users, and firewalls. Webmin is so powerful and flexible, you'll be hard-pressed to find a GUI better suited to help administer your Linux server (outside of the likes of the Red Hat and SUSE solutions—which require licenses as well as their respective platforms).

6: VNC

VNC is what you need if you want to enable users to log into the server and enjoy a GUI. But this tool isn't just for allowing users to work with a remote instance of LibreOffice. If you'd rather not work with the likes of Webmin and want to manage your server from a more standard desktop GUI, you can work with VNC. The only issue with adding VNC to your server is deciding which one to choose. I've worked with a number of VNC servers and have found TightVNC to be the best of the bunch. Not only are its installation and usage better documented, it offers better compression for enhanced performance.

7: Apache Cloudstack

Apache Cloudstack is designed specifically for the purpose of deploying and managing a large number of virtual machines. This is a turnkey solution that includes all the features you'd require (such as compute orchestration, network-as-a-service, user and account management, a full and open native API, resource accounting, and a first-class User Interface). Cloudstack currently supports the most common hypervisors (VMware, KVM, XenServer, Xen Cloud Platform (XCP), and Hyper-V), and users can manage their clouds with a simple web interface.

8: OpenLDAP

OpenLDAP is the open source implementation of LDAP (the Lightweight Directory Access Protocol). Although it's powerful and flexible, the biggest issue facing the system is its complexity. This isn't a point-and-click tool as you'll find with Windows Active Directory. OpenLDAP is complex. And even though there are GUI tools designed to make the management of OpenLDAP easier, the installation and setup is not for the faint of heart.

9: MONIT

MONIT is not just a server-monitoring tool. It will also attempt to resolve problems (when/if they arise) by taking predefined actions for certain situations. Say, for instance, MONIT discovers that Apache is using too many resources. Should this happen, MONIT will attempt to restart the http daemon to resolve the issue. MONIT is easy to deploy. (The site says you can have it up and running in 15 minutes—a claim that is very much true.) And MONIT doesn't just monitor services; you can also set it up to monitor files, directories, and file systems.
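The restart-on-trouble behavior described above is configured in MONIT's control file. A minimal sketch of such a check follows; the pidfile path, apachectl paths, and thresholds here are illustrative assumptions, not values from the article:

```
check process apache with pidfile /var/run/httpd/httpd.pid
    start program = "/usr/sbin/apachectl start"
    stop program = "/usr/sbin/apachectl stop"
    if cpu > 80% for 5 cycles then restart
```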

10: Ganglia

Ganglia is another server monitoring tool, only it's geared toward high-performance systems, such as clusters and grids. Ganglia uses XML for data representation, XDR for compact and portable data transport, and RRDtool for data storage and visualization. There is no other open source tool better suited for presenting data and information about a cluster in a usable, simplified manner. If you happen to administer such high-performance systems, you'd be remiss if you didn't at least take a look at Ganglia as your go-to cluster monitor.

10 open source storage solutions that might be perfect for your company

http://www.techrepublic.com/blog/10-things/10-open-source-storage-solutions-that-might-be-perfect-for-your-company/

1: Samba

Samba provides secure, stable, and fast storage (as well as print services) for all clients using the SMB/CIFS protocol (all versions of DOS and Windows, OS/2, Linux, and many others). If you plan to host storage for a variety of platforms, you will not get by without Samba. It's the glue that holds heterogeneous platforms together. In fact, many storage appliances depend upon Samba to get the job done. And now that Samba has nearly seamless integration with Microsoft Active Directory, the solution is all the more flexible.

2: NFS

NFS—the Network File System—was created in 1984 to allow computers to access file systems on remote machines as if they were mounted locally. What's nice about NFS is that it allows you to create a set-it-and-forget-it distributed file system. One caveat: The setup can get a bit complex and you must set up both server and client. NFS is available for every Linux distribution on the planet and can be installed from either the command line or the distribution's package manager.

3: File Server

File Server is a dedicated Linux storage distribution that uses Samba, Webmin, Pydio, SSL, and much more to create an outstanding storage solution without having to piece it all together yourself. One of the best features of File Server is that you can set it up as both a standard Windows-compatible storage solution and as a web-based file solution. With the help of Pydio, you can enjoy an incredibly easy-to-use web interface to store your files.

4: Ceph

Ceph is a distributed object store and file system "designed for excellent performance, reliability, and scalability." In other words, this is storage for the big boys; small shops need not apply. Ceph is the solution you want when you're looking for massive data storage. It also works seamlessly with block storage—so you can use it on a storage cluster for scalability.

5: FreeNAS

FreeNAS is another storage-focused open source operating system (built on FreeBSD rather than Linux) that can be installed on nearly any platform to create an outstanding storage solution. It features replication, encryption, data protection, snapshots, file sharing, an easy-to-use web-based interface, and a powerful plug-in system. FreeNAS provides a versatile solution that any platform can connect to and any business can enjoy.

6: Openfiler

Openfiler makes it easy for you to deploy both storage area networking (SAN) and network attached storage (NAS) with all the bells and whistles your company needs. Openfiler offers a community edition and a commercial edition. The commercial edition is ideal for iSCSI Target and Fibre Channel Target stacks and features high availability cluster/failover as well as block-level replication for disaster recovery.

7: ZFS file system

ZFS file system is one of the better file systems to use when considering a storage solution. It offers excellent scalability and data integrity. When you're installing most Linux distributions, you can choose the file system you want to use. If setting up a Linux storage solution, ZFS will go further to ensure data integrity than any other file system. If you do decide to dive into ZFS, make sure you do plenty of research and understand what it does and how it works.

8: OpenMediaVault

OpenMediaVault is an open NAS solution built on Debian that features services like SSH, (S)FTP, SMB/CIFS, DAAP media server, RSync, and BitTorrent client. OpenMediaVault offers a massive plug-in system—so if it doesn't have what you need, you can add it with ease. This might well be one of the best out-of-the-box storage solution experiences you'll ever have. It's that easy to use. OpenMediaVault also enjoys full-on UPS support.

9: Lustre

Lustre is a "scale-out architecture distributed parallel filesystem." It's lightning fast and can handle petabytes of data and tens of thousands of nodes. The description alone should indicate that Lustre is designed to address large-scale storage needs. Since 2005 Lustre has been consistently used by half of the top 10 supercomputers on the planet. Ideal industries for Lustre include meteorology, simulation, oil and gas, life science, rich media, and finance.

10: Linux

I cannot, in good conscience, list the best open source storage solutions without including Linux itself. Why? Because most Linux distributions can easily serve as an effective storage solution. Of course, depending upon your size, you may need to tweak various aspects or turn to an enterprise distribution (such as Red Hat or SUSE). But for network storage, Linux has you covered.

10 Linux GUI tools for sysadmins


http://www.techrepublic.com/blog/10-things/10-linux-gui-tools-for-sysadmins/

What are some good GUI tools that can simplify your Linux sysadmin tasks? Let's take a look at 10 of them.

1: MySQL Workbench

MySQL Workbench is one of my favorite tools for working with MySQL databases. You can work locally or remotely with this well-designed GUI tool. But MySQL Workbench isn't just for managing previously created databases. It also helps you design, develop, and administer MySQL databases. A newer addition to the MySQL Workbench set of tools is the ability to easily migrate Microsoft SQL Server, Microsoft Access, Sybase ASE, PostgreSQL, and other RDBMS tables, objects, and data to MySQL. That alone makes MySQL Workbench worth using.

2: phpMyAdmin

phpMyAdmin is another MySQL administration tool... only web-based. Although it doesn't offer the bells and whistles of MySQL Workbench, it's a much more user-friendly tool. With phpMyAdmin you can create and manage MySQL databases via a standard web browser. This means you can install phpMyAdmin on a headless Linux server and connect to it through any browser that has access to the machine.

3: Webmin

Webmin is a web-based one-stop-shop tool for administering Linux servers. With Webmin you can manage nearly every single aspect of a server—user accounts, Apache, DNS, file sharing, security, databases, and much more. And if what you need isn't included with the default installation, a massive number of third-party modules are available to take up the slack.

4: YaST

YaST stands for Yet Another Setup Tool. It enables system configuration for enterprise-grade SUSE and openSUSE and serves as both the installation and configuration tool for the platform. With YaST you can configure hardware, network, and services and tune system security, all with an easy-to-use, attractive GUI. YaST is installed by default in all SUSE and openSUSE platforms.

5: Shorewall

Shorewall is a high-level tool for configuring iptables. Yes, there are other tools for tuning the security of your system, but many of them don't go nearly as deep as Shorewall. Where an app like UFW is one of the best security tuners for the desktop, Shorewall is tops for the server. With it you can configure gateways, VPNs, traffic control, blacklisting, and much more. Shorewall itself is driven by configuration files, so if you want point-and-click management, pair it with its Webmin module. If you're serious about your firewall, Shorewall is what you want.

6: Apache Directory

Apache Directory Studio is about the only solid GUI tool for managing any LDAP server (though it is designed particularly for ApacheDS). It's an Eclipse RCP application and can serve as your LDAP browser, schema editor, ApacheDS configuration editor, LDIF editor, ACI editor, and more. The app also bundles the latest ApacheDS, which means you can use it to create a DS server in no time.

7: CUPS

CUPS is the Linux printing service that also happens to have a web-based GUI tool (browse to http://localhost:631) for the management of printers, printer classes, and print queues. It is also possible to enable Kerberos authentication and remote administration. One really nice thing about this GUI is its built-in help system; you can learn nearly everything you need to manage your print server.

8: cPanel

cPanel is one of the finest web-based administration tools you'll use. It lets you configure sites, customers' sites and services, and quite a bit more. With this tool you can configure/manage mail, security, domains, apps, files, databases, logs—the list goes on and on. The only drawback to using cPanel is that it's not free. Check out the pricing matrix to see if there's a plan to fit your needs.

9: Zenmap

Zenmap is the official front end for the Nmap network scanner. With this tool, both beginners and advanced users can quickly and easily scan their network to troubleshoot issues. After scanning, you can even save the results to comb through them later. Although you won't use this tool to directly administer your system, it will become invaluable in the quest for discovering network-related issues.

10: Cockpit

Cockpit was created by Red Hat to make server administration easier. With this web-based GUI you can tackle tasks like storage administration, journal inspection, starting/stopping services, and multiple server monitoring. Cockpit will run on Fedora Server, Arch Linux, CentOS Atomic, Fedora Atomic, and Red Hat Enterprise Linux.

Friday, October 23, 2015

Hardening RHEL 7.1 Services

http://www.aclnz.com/interests/blogs/hardening-rhel-7-1-maipo-part-1-services

Services
Linux servers run network services. Each service has an application (daemon) listening for connections on one or more network ports.
Each open service and port is a potential target for network attack.
Here is a list of potential risks on having ports open to provide services:
  • Denial of Service Attacks (DoS)— By flooding a service with requests, a denial of service attack can render a system unusable as it tries to log and answer each request.
  • Distributed Denial of Service Attack (DDoS) — A type of DoS attack which uses multiple compromised machines (often numbering in the thousands or more) to direct a coordinated attack on a service, flooding it with requests and making it unusable.
  • Script Vulnerability Attacks — If a server is using scripts to execute server-side actions as Web servers commonly do, an attacker can target improperly written scripts. These script vulnerability attacks can lead to a buffer overflow condition or allow the attacker to alter files on the system.
  • Buffer Overflow Attacks — Services that listen on ports numbered 0 through 1023 must run as an administrative user. If the application has an exploitable buffer overflow, an attacker could gain access to the system as the user running the daemon. Because exploitable buffer overflows exist, crackers use automated tools to identify systems with vulnerabilities, and once they have gained access, they use automated rootkits to maintain their access to the system.
Before we start, you might want to check which services are running on your system with the netstat command. Here is an example of a server with a few services running:
[screenshot: netstat output listing the listening services]
I’m going to go through the most common services that require attention.
rpcbind is a service daemon that dynamically assigns ports to RPC-based services like NIS and NFS.
This service has a weak authentication mechanism and can assign a wide range of ports, so it needs to be protected by the firewall.
If this service is needed and you are going to protect it with the firewall, you will first need to work out which networks should be able to reach rpcbind and which should not. Once you know this, run these commands for each network.
To limit TCP:
  • # firewall-cmd --add-rich-rule='rule family="ipv4" port port="111" protocol="tcp" source address="192.168.0.0/24" invert="True" drop' --permanent
  • # firewall-cmd --add-rich-rule='rule family="ipv4" port port="111" protocol="tcp" source address="127.0.0.1" accept' --permanent
To limit UDP:
  • # firewall-cmd --add-rich-rule='rule family="ipv4" port port="111" protocol="udp" source address="192.168.0.0/24" invert="True" drop' --permanent
Repeat the last three steps for each subnet that will need access. You can verify the resulting rules with "firewall-cmd --list-rich-rules".
NIS is well known for authenticating users across the network. This service is outdated because it sends information, including passwords, unencrypted over the network. Unless it's needed for specific reasons, it's better not to use it at all.
If your network uses NIS authentication, or you are planning to set it up, make sure you have rpcbind behind a firewall as specified above and then go through these steps.

  1. Generate a random host name for the DNS master server such as o7hfawtgmhwg.domain.com and configure it.
  2. Generate a random NIS domain name for your NIS server, different from the DNS server host name, and configure the new name by editing the NISDOMAIN entry in the /etc/sysconfig/network file:
    [screenshot: NISDOMAIN entry in /etc/sysconfig/network]
  3. Edit the /var/yp/securenets file to add each netmask/network that requires NIS authentication. If the file doesn’t exist create it. After adding a few lines the file should look like this:
    [screenshot: example /var/yp/securenets entries]
  4. Assign static ports to ypxfrd and ypserv daemons by adding the following lines to the /etc/sysconfig/network file:
    YPSERV_ARGS="-p 834"

    YPXFRD_ARGS="-p 835"


    Then run the next two firewall commands for each network needing NIS to limit the networks that can use these ports.
    TCP
    # firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.0.0/24" invert="True" port port="834-835" protocol="tcp" drop' --permanent
    UDP
    # firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.0.0/24" invert="True" port port="834-835" protocol="udp" drop' --permanent
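For reference, the /var/yp/securenets file from step 3 is just a list of netmask/network pairs, one per line; a sketch with example subnets:

```
255.255.255.0   192.168.0.0
255.255.255.0   10.0.10.0
```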
NFS exports can also create security risks, such as symlink attacks. For this reason, use NFSv4 when possible: it can require authentication and can operate behind a firewall.
Here are some considerations you should follow:

  • Always export complete filesystems rather than just subdirectories.
  • Use the ro option to export filesystems read-only whenever possible.
  • Always assign permissions through the user and group (u and g) classes and never through other (o), thereby limiting NFS access to specific users and groups from your /etc/group and /etc/passwd files.
  • Pay special attention to syntax in the /etc/exports file; a syntax error can lead to unwanted share configurations.
    To guard against this, always check your exports with the showmount -e command.
  • Uncomment these entries in the /etc/sysconfig/nfs file:
    # TCP port rpc.lockd should listen on.
    LOCKD_TCPPORT=32803
    # UDP port rpc.lockd should listen on.
    LOCKD_UDPPORT=32769
  • Restart the NFS service ("service nfs restart") and check which ports NFS is using, so you can write the firewall rules needed to limit network access to those ports.
    [screenshot: output showing the ports NFS is using]
    For this example, the following firewall rules should be added for each network needing access:
    TCP
    # firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.0.0/24" invert="True" port port="20048" protocol="tcp" drop' --permanent
    UDP
    # firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.0.0/24" invert="True" port port="20048" protocol="udp" drop' --permanent
    TCP
    # firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.0.0/24" invert="True" port port="2049" protocol="tcp" drop' --permanent
    UDP
    # firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.0.0/24" invert="True" port port="2049" protocol="udp" drop' --permanent
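Pulling the export advice above together, a minimal /etc/exports entry might look like this (the path and subnet are examples, not from the article):

```
# export a complete filesystem, read-only, to a single trusted subnet
/srv/share  192.168.0.0/24(ro,sync,root_squash)
```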
References
+ This article is based on the Red Hat Enterprise Linux 7 Security Guide, which can be downloaded from the Red Hat Customer Portal.

Hardening RHEL 7.1 User access

http://www.aclnz.com/interests/blogs/hardening-rhel-7-1-maipo-part-1-user-access

In this document I will go through a series of steps to configure the most relevant settings to harden a RHEL server.
This document is based on the Red Hat Enterprise Linux 7 Security Guide, which can be downloaded from the Red Hat Customer Portal.
Secure passwords
Passwords are the primary method Red Hat Enterprise Linux 7 uses to verify a user's identity. This is why password security is so important for protection of the user, the workstation, and the network.
By default RHEL uses shadow passwords, which defeat offline cracking of the world-readable /etc/passwd by storing the password hashes in the file /etc/shadow, which is readable only by the root user.
Strong passwords
Since the storage of passwords has already been taken care of, the next step is to force the creation of strong passwords.
When users are asked to create or change passwords, they can use the passwd command-line utility, which is PAM-aware (Pluggable Authentication Modules) and checks whether the password is too short or otherwise easy to crack. This checking is performed by the pam_pwquality.so PAM module.
PAM reads its configuration from the /etc/pam.d/passwd file, but the file we want to edit for tuning password policies is /etc/security/pwquality.conf.
Have a look at the configuration options:
[screenshot: default /etc/security/pwquality.conf options]
Here are the details of what each entry means:
  • difok - Number of characters in the new password that must not be present in the old password.
  • minlen - Minimum acceptable size for the new password
  • dcredit - Credit for having digits in the new password
  • ucredit - Credit for having uppercase characters in the new password
  • lcredit - Credit for having lowercase characters in the new password
  • ocredit - Credit for having other characters in the new password
  • maxrepeat - maximum number of allowed consecutive same characters in the new password.
  • minclass - minimum number of required classes of characters for the new password (digits, uppercase, lowercase, others).
  • maxclassrepeat - maximum number of allowed consecutive characters of the same class in the new password.
  • gecoscheck - Whether to check for the words from the passwd entry GECOS string of the user (0=check).
  • dictpath - Path to the cracklib dictionaries. Blank is to use the cracklib default.
NOTE: Credit works like money: a positive number is spare credit (characters of that class simply count extra toward the minimum length), while a negative number is a debt you have to pay. For instance "ucredit = -2" means the user will have to include at least two uppercase characters in the new password.
Something practical to do is to set "minlen = 8" and "minclass = 4". With these two settings you ensure that the password is at least 8 characters long and that it draws on uppercase letters, lowercase letters, digits, and symbols. That is what you will normally find on production servers.
Some like to uncomment dictpath so pam_pwquality uses the default cracklib dictionary. You could go much further with these settings, but it is not recommended: passwords would need to be so complex that users couldn't remember them, they'd write them down, and the sysadmin would be resetting passwords constantly.
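In /etc/security/pwquality.conf, those two recommended settings are simply:

```
# at least 8 characters...
minlen = 8
# ...drawing on all four character classes
minclass = 4
```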
This is the resulting hardened configuration file:
[screenshot: tuned /etc/security/pwquality.conf]
NOTE: As the root user is the one who enforces the rules for password creation, root can set any password, for itself or for a regular user, despite the warning messages.
Password aging
This technique limits how long a compromised password remains useful. The downside is that if you set the value too low (password changes required very often), users will tend to write their passwords down, creating a weak spot.
A common practice is to specify the maximum number of days for which the password is valid.
Password aging is performed with the command "chage".
This command is normally used when hardening a system to immediately expire old, insecure passwords.
I will show three examples on how to use this command on a console.
  1. Set a 90 day period for the password of user fpalacios to expire.
  2. Expire the password for fpalacios to have the user change it on the next log on.
  3. Expire the password of every user on group developers.
[screenshot: chage command examples]
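The three examples map onto chage as sketched below. This is a dry-run that only prints each command (drop the echo wrapper to actually apply them); the group members are resolved with getent, and the user/group names come from the examples above.

```shell
# Dry-run helper: prints each chage invocation instead of executing it.
run() { echo "$@"; }      # change to: run() { "$@"; } to apply for real

run chage -M 90 fpalacios     # 1. password valid for at most 90 days
run chage -d 0 fpalacios      # 2. expire now, forcing a change at next logon
# 3. expire the password of every member of group "developers"
for u in $(getent group developers | cut -d: -f4 | tr ',' ' '); do
    run chage -d 0 "$u"
done
```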
Account Locking
In Red Hat Enterprise Linux 7, the pam_faillock PAM module allows system administrators to lock out user accounts after a specified number of failed attempts.
Limiting user login attempts serves mainly as a security measure that aims to prevent possible brute force attacks targeted to obtain a user's account password.
Follow these steps to configure account locking:
  1. To lock out any non-root user after three unsuccessful attempts and unlock that user after 10 minutes, add the following lines to the auth section of the /etc/pam.d/system-auth and /etc/pam.d/password-auth files:
    auth required pam_faillock.so preauth silent audit deny=3 unlock_time=600
    auth sufficient pam_unix.so nullok try_first_pass
    auth [default=die] pam_faillock.so authfail audit deny=3 unlock_time=600
    [screenshots: the edited system-auth and password-auth files]
  2. Add the following line to the account section of both files specified in the previous step:

    account required pam_faillock.so
    I will show you the end result of one of the files:

    [screenshot: the resulting file]
 

How to find out files updated in last N minutes

Issue
How to find out files updated in last N minutes?

Resolution
It is simple. Use the following command:
Syntax:
find /path/to/search -cmin -N
where N is the number of minutes. (Note: -cmin matches files whose status was changed in the last N minutes; use -mmin instead if you want files whose contents were modified.)
Example:
find /suresh/home/songs/ -cmin -10
Tip:
If you would also like to see details such as permissions, owner, and size for each match, add find's built-in -ls action to the above command:
find /suresh/home/songs/ -cmin -10 -ls
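Here is a self-contained demonstration using throwaway temp files (it uses -mmin rather than -cmin, since touch -d only backdates the modification time, not the status-change time):

```shell
# Make one fresh file and one whose modification time is two hours old,
# then list only the regular files modified in the last 10 minutes.
dir=$(mktemp -d)
touch "$dir/new-song.mp3"                   # modified just now
touch -d '2 hours ago' "$dir/old-song.mp3"  # backdated two hours
find "$dir" -type f -mmin -10               # prints only new-song.mp3
rm -r "$dir"
```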

Wednesday, October 14, 2015

How To Change Default Data File (.OST) Location in Office 2013



To set the default location of an Outlook data file you have to make a registry change. Once you make the change, any time you create a new data file, Outlook will put it in that new location.
These instructions also work in prior versions of Office, but the path to the Office key will be slightly different.

Change Default Data File Location in Outlook

1) Start - type Regedit - double-click Regedit - accept the elevation prompt - navigate to: Computer\HKEY_CURRENT_USER\Software\Microsoft\Office\15.0\Outlook
2) Right-Click Outlook Select New then String Value
3) Type “ForceOSTPath” without the quotes for the variable name and press Enter then Double-Click ForceOSTPath to Edit the string
4) Type the location you want your data files to use.  In my case it was D:\_Profile\Mail
5) Click OK. (You can close Registry Editor now.) You will need to close Outlook in order for it to pick up the new setting. Now when you create a new OST data file, Outlook will put it in the new location. Close Outlook and move any .PST files to the new location if you like.
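If you'd rather script the change than click through Regedit, the same value can be applied by importing a .reg file; the path below matches the example in step 4, so adjust it to your own location:

```
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Office\15.0\Outlook]
"ForceOSTPath"="D:\\_Profile\\Mail"
```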

Move your OST to the new location…

Unfortunately, to use the new location for your existing accounts, you will need to create a new profile. When a profile is created, it saves the default location :(  To use the new location for the OSTs, you need to let Outlook recreate them.
6) Start - Control Panel - View by: Small icons - Mail (on Windows 8 you can also get there by clicking Start - typing Mail - clicking Settings - clicking Mail)
This screenshot is from Windows 8, but Windows 7 will be very similar. Now we will create a new profile and let the OST be rebuilt.
7) Click Add, Type in the Name you would like to give the profile (in my case I used “Dan Stolts”) then click OK
This will start the Auto account Setup wizard
8) Setup your first email account (Exchange or ActiveSync) by filling out your name, email address and password.
The account setup should be fully automated.
Notice that from the wizard you can change the account settings or add another account if you like. Just click Finish to complete this account setup.
9) We need to set the new profile to be the default. Do this by clicking the dropdown for "Always use this profile" and selecting the one we just created.
10) Cleanup - you can now delete the old profile if you like. Deleting the profile will not remove the data files; you will have to go back and do that manually. By default the old data files were at %USERPROFILE%\Local Settings\Application Data\Microsoft\Outlook (in my case that was: C:\Users\dstolts\Local Settings\Application Data\Microsoft\Outlook)
If you want to see the change, you can drill down to the Outlook data file settings from the Mail control panel app: Profile - Properties - Data Files - Settings - Outlook Data File Settings
You can also get the information from the Account Settings screen: Profile - Properties - Email Accounts - E-Mail tab - click the account, and the location of the data file will be listed.
You can now go ahead and create additional accounts if needed.
If you want to copy the old OST file, you can copy it to the new location as long as it is not in use. If it is in use, try closing any apps that might have it open. If that fails, try restarting your computer and not running Outlook until after you create the new profile. I have not personally tried this, so let me know if you have success. This may save you the time of downloading a large mailbox.
Tip: If you find the OST is still placed in the %USERPROFILE%\Local Settings\Application Data\Microsoft\Outlook folder, it is likely because there is already an .OST file of the same name in the destination folder. Try moving it somewhere else. You can copy it back once the account is successfully set up.

Saturday, October 10, 2015

How to boot into BOSS from the Grub Rescue prompt?

This guide will detail how to boot from the "grub rescue>" prompt for grub2 users.
Boot Procedure:
1. Locate the BOSS partition and the folder containing the Grub modules.
The Grub folder containing the modules must be located so the correct modules can be loaded. This folder was created during the BOSS installation and should be in the BOSS partition, at either (hdX,Y)/boot/grub or (hdX,Y)/usr/lib/grub/i386-pc.
Commands:
ls                               # List the known drives (hdX) and partitions  (hdX,Y).

ls (hdX,Y)/                      # List the contents of the partition's root.

ls (hdX,Y)/boot/grub             # Normal location of the Grub 2 modules.

ls (hdX,Y)/usr/lib/grub/i386-pc  # Alternate location of the Grub 2 modules.

2. Load the modules
Commands:
set prefix=(hdX,Y)/boot/grub
insmod linux
eg:
 set prefix=(hd0,1)/boot/grub 
 insmod linux
3. Load the Linux kernel and initrd image using the following commands.
 set root=(hd0,1)
 linux /boot/vmlinuz-2.6.32-5-686 root=/dev/sda1
 initrd /boot/initrd.img-2.6.32-5-686
 boot