Thursday, February 2, 2012

hpacucli - Check RAID Information from Linux Shell

Have you ever tried to check how the hardware RAID array is configured on a server from your Linux shell? Have you ever wanted to change or modify your hardware RAID configuration without rebooting the server and without leaving your Linux shell?

The hpacucli utility is there to help you, if your server runs on HP hardware. hpacucli (HP Array Configuration Utility CLI) is a command-line disk configuration program for Smart Array Controllers and RAID Array Controllers. You can download and install the hpacucli tool from the HP website.

Quick Abbreviations:
chassisname = ch
controller = ctrl
logicaldrive = ld
physicaldrive = pd
drivewritecache = dwc

As root, just type "hpacucli" and you will be dropped into the hpacucli command-line interface. Let me give you a quick example of what you can do with hpacucli.

To get quick details about the RAID controller and its health:
=> ctrl all show status

Smart Array P400 in Slot 9
   Controller Status: OK
   Cache Status: OK
   Battery Status: OK

=>
To get a quick idea of how the disks are grouped and which RAID level is used:

=> ctrl all show

Smart Array P400 in Slot 9           (sn: PXXXXXXXXXXXXX)

=> ctrl all show config

Smart Array P400 in Slot 9           (sn: P6YYYYYYYYYYYY)

   array A (SAS, Unused Space: 0 MB)

      logicaldrive 1 (68.3 GB, RAID 1, OK)

      physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 72 GB, OK)
      physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 72 GB, OK)

To get complete details about how the RAID is configured on the server:

=> ctrl all show config detail

Smart Array P400 in Slot 9
   Bus Interface: PCI
   Slot: 9
   Serial Number: PXXXXXXXXX
   Cache Serial Number: PAXXXXXXXXT
   RAID 6 (ADG) Status: Enabled
   Controller Status: OK
   Chassis Slot:
   Hardware Revision: Rev D
   Firmware Version: 7.08
   Rebuild Priority: Medium
   Expand Priority: Medium
   Surface Scan Delay: 15 secs
   Post Prompt Timeout: 0 secs
   Cache Board Present: True
   Cache Status: OK
   Accelerator Ratio: 25% Read / 75% Write
   Drive Write Cache: Disabled
   Total Cache Size: 512 MB
   Battery Pack Count: 1
   Battery Status: OK
   SATA NCQ Supported: True

   Array: A
      Interface Type: SAS
      Unused Space: 0 MB
      Status: OK

      Logical Drive: 1
         Size: 68.3 GB
         Fault Tolerance: RAID 1
         Heads: 255
         Sectors Per Track: 32
         Cylinders: 17562
         Stripe Size: 128 KB
         Status: OK
         Array Accelerator: Enabled
         Unique Identifier: 600508XXXXXXXXXXXXXXXX0002
         Disk Name: /dev/cciss/c0d0
         Mount Points: /boot 103 MB, swap 8.0 GB
         Logical Drive Label: A08923XXX61630G9SVI3RJCC0A
         Mirror Group 0:
            physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 72 GB, OK)
         Mirror Group 1:
            physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 72 GB, OK)

      physicaldrive 1I:1:1
         Port: 1I
         Box: 1
         Bay: 1
         Status: OK
         Drive Type: Data Drive
         Interface Type: SAS
         Size: 72 GB
         Rotational Speed: 10000
         Firmware Revision: HPDA
         Serial Number:         PXXXXXXA
         Model: HP      DG072A4951
         PHY Count: 1
         PHY Transfer Rate: Unknown
      physicaldrive 1I:1:2
         Port: 1I
         Box: 1
         Bay: 2
         Status: OK
         Drive Type: Data Drive
         Interface Type: SAS
         Size: 72 GB
         Rotational Speed: 10000
         Firmware Revision: HPDA
         Serial Number:         PXXXXXXA
         Model: HP      DG072A4951
         PHY Count: 1
         PHY Transfer Rate: Unknown

=>
Be sure to verify your version of hpacucli and always refer to the ReadMe before you try to modify the configuration of RAID or Smart Array controllers.
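For quick reference, here are a few other commonly used hpacucli commands. This is only a sketch, assuming the controller sits in slot 9 as in the examples above; check the help output of your hpacucli version first. The first line shows that a command can also be run non-interactively straight from the shell; the next two check per-slot logical and physical drive status, and the last one enables the drive write cache (use with care).

# hpacucli ctrl all show status
=> ctrl slot=9 ld all show status
=> ctrl slot=9 pd all show status
=> ctrl slot=9 modify dwc=enable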

Apache Hadoop Single Node Standalone Installation Tutorial


When you implement Apache Hadoop in a production environment, you'll need multiple server nodes. If you are just exploring distributed computing, you might want to play around with Hadoop by installing it on a single node.
This article explains how to set up and configure a single-node standalone Hadoop environment. Please note that you can also simulate a multi-node Hadoop installation on a single server using a pseudo-distributed Hadoop installation, which we'll cover in detail in the next article of this series.

The standalone Hadoop environment is a good place to start, to make sure your server environment is set up properly with all the prerequisites to run Hadoop.

1. Create a Hadoop User

You can download and install Hadoop as root. But it is recommended to install it as a separate user. So, log in as root and create a user called hadoop.
# adduser hadoop
# passwd hadoop

2. Download Hadoop Common

Download Apache Hadoop Common and move it to the server where you want to install it.
You can also use wget to download it directly to your server.
# su - hadoop
$ wget http://mirror.nyi.net/apache//hadoop/common/stable/hadoop-0.20.203.0rc1.tar.gz
Make sure Java 1.6 is installed on your system.
$ java -version
java version "1.6.0_20"
OpenJDK Runtime Environment (IcedTea6 1.9.7) (rhel-1.39.1.9.7.el6-x86_64)
OpenJDK 64-Bit Server VM (build 19.0-b09, mixed mode)
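If Java is not already installed, you can install OpenJDK 1.6 first. On a RHEL/CentOS style system (which is what the output above suggests), something like the following should work; the exact package name is an assumption and may differ on your distribution.
# yum install java-1.6.0-openjdk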

3. Unpack under hadoop User

As hadoop user, unpack this package.
$ tar xvfz hadoop-0.20.203.0rc1.tar.gz
This will create the "hadoop-0.20.204.0" directory.
$ ls -l hadoop-0.20.204.0
total 6780
drwxr-xr-x.  2 hadoop hadoop    4096 Oct 12 08:50 bin
-rw-rw-r--.  1 hadoop hadoop  110797 Aug 25 16:28 build.xml
drwxr-xr-x.  4 hadoop hadoop    4096 Aug 25 16:38 c++
-rw-rw-r--.  1 hadoop hadoop  419532 Aug 25 16:28 CHANGES.txt
drwxr-xr-x.  2 hadoop hadoop    4096 Nov  2 05:29 conf
drwxr-xr-x. 14 hadoop hadoop    4096 Aug 25 16:28 contrib
drwxr-xr-x.  7 hadoop hadoop    4096 Oct 12 08:49 docs
drwxr-xr-x.  3 hadoop hadoop    4096 Aug 25 16:29 etc
Modify the hadoop-0.20.204.0/conf/hadoop-env.sh file and make sure the JAVA_HOME environment variable points to the correct location of the Java that is installed on your system.
$ grep JAVA ~/hadoop-0.20.204.0/conf/hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.6.0_27

4. Test Sample Hadoop Program

In a single-node standalone setup, you don't need to start any Hadoop background processes. Instead, just call ~/hadoop-0.20.203.0/bin/hadoop, which will execute Hadoop as a single Java process for your testing purposes.
This example program is provided as part of Hadoop, and it is shown in the Hadoop documentation as a simple example to verify that the setup works.
First, create an input directory, where all the input files will be stored. This might be the location where all the incoming data files are stored in your Hadoop environment.
$ cd ~/hadoop-0.20.204.0
$ mkdir input
For testing purposes, add some sample data files to the input directory. Let us just copy all the xml files from the conf directory to the input directory, so these xml files will be treated as the data files for the example program.
$ cp conf/*.xml input
Execute the sample Hadoop test program. This is a simple Hadoop program that simulates a grep. It searches for the regex pattern "dfs[a-z.]+" in all the input/*.xml files and stores the output in the output directory.
$ bin/hadoop jar hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
When everything is set up properly, the above sample Hadoop test program will display the following messages on the screen while it is executing.
$ bin/hadoop jar hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
12/01/14 23:38:46 INFO mapred.FileInputFormat: Total input paths to process : 6
12/01/14 23:38:46 INFO mapred.JobClient: Running job: job_local_0001
12/01/14 23:38:46 INFO mapred.MapTask: numReduceTasks: 1
12/01/14 23:38:46 INFO mapred.MapTask: io.sort.mb = 100
12/01/14 23:38:46 INFO mapred.MapTask: data buffer = 79691776/99614720
12/01/14 23:38:46 INFO mapred.MapTask: record buffer = 262144/327680
12/01/14 23:38:46 INFO mapred.MapTask: Starting flush of map output
12/01/14 23:38:46 INFO mapred.Task: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
12/01/14 23:38:47 INFO mapred.JobClient:  map 0% reduce 0%
...
This will create the output directory with the results as shown below.
$ ls -l output
total 4
-rwxrwxrwx. 1 root root 11 Aug 23 08:39 part-00000
-rwxrwxrwx. 1 root root  0 Aug 23 08:39 _SUCCESS

$ cat output/*
1       dfsadmin
The source code of the example programs is located under the src/examples/org/apache/hadoop/examples directory.
$ ls -l ~/hadoop-0.20.204.0/src/examples/org/apache/hadoop/examples
-rw-rw-r--. 1 hadoop hadoop  2395 Jan 14 23:28 WordCount.java
-rw-rw-r--. 1 hadoop hadoop  8040 Jan 14 23:28 Sort.java
-rw-rw-r--. 1 hadoop hadoop  9156 Jan 14 23:28 SleepJob.java
-rw-rw-r--. 1 hadoop hadoop  7809 Jan 14 23:28 SecondarySort.java
-rw-rw-r--. 1 hadoop hadoop 10190 Jan 14 23:28 RandomWriter.java
-rw-rw-r--. 1 hadoop hadoop 40350 Jan 14 23:28 RandomTextWriter.java
-rw-rw-r--. 1 hadoop hadoop 11914 Jan 14 23:28 PiEstimator.java
-rw-rw-r--. 1 hadoop hadoop   853 Jan 14 23:28 package.html
-rw-rw-r--. 1 hadoop hadoop  8276 Jan 14 23:28 MultiFileWordCount.java
-rw-rw-r--. 1 hadoop hadoop  6582 Jan 14 23:28 Join.java
-rw-rw-r--. 1 hadoop hadoop  3334 Jan 14 23:28 Grep.java
-rw-rw-r--. 1 hadoop hadoop  3751 Jan 14 23:28 ExampleDriver.java
-rw-rw-r--. 1 hadoop hadoop 13089 Jan 14 23:28 DBCountPageView.java
-rw-rw-r--. 1 hadoop hadoop  2879 Jan 14 23:28 AggregateWordHistogram.java
-rw-rw-r--. 1 hadoop hadoop  2797 Jan 14 23:28 AggregateWordCount.java
drwxr-xr-x. 2 hadoop hadoop  4096 Jan 14 08:49 dancing
drwxr-xr-x. 2 hadoop hadoop  4096 Jan 14 08:49 terasort
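Any of these bundled examples can be run the same way as the grep test above. For instance, a quick sanity check with the pi estimator, where the two arguments (number of maps and samples per map) are just small sample values:
$ bin/hadoop jar hadoop-examples-*.jar pi 2 10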

5. Troubleshooting Issues

Issue: “Temporary failure in name resolution”
While executing the sample hadoop program, you might get the following error message.
12/01/14 23:34:57 INFO mapred.JobClient: Cleaning up the staging area file:/tmp/hadoop-root/mapred/staging/root-1040516815/.staging/job_local_0001
java.net.UnknownHostException: hadoop: hadoop: Temporary failure in name resolution
        at java.net.InetAddress.getLocalHost(InetAddress.java:1438)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:815)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:791)
        at java.security.AccessController.doPrivileged(Native Method)
Solution: Add the following entry to the /etc/hosts file, containing the IP address, the FQDN (fully qualified domain name), and the host name.
192.168.1.10 hadoop.sureshkumarpakalapati.in hadoop
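After adding the entry (with your own IP address and domain), you can confirm that the hostname resolves before re-running the job, for example:
$ getent hosts hadoop
$ ping -c 1 hadoop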

Apache Hadoop Fundamentals – HDFS and MapReduce Explained with a Diagram


Hadoop is open source software used for distributed computing; it can be used to query a large data set and get results faster using a reliable and scalable architecture.
This is the first article in our new ongoing Hadoop series.

In a traditional non-distributed architecture, you'll have data stored on one server, and any client program will access this central data server to retrieve the data. The non-distributed model has a few fundamental issues. In this model, you'll mostly scale vertically by adding more CPU, more storage, etc. This architecture is also not reliable, because if the main server fails, you have to go back to a backup to restore the data. From a performance point of view, this architecture will not return results quickly when you are running a query against a huge data set.
In a Hadoop distributed architecture, both data and processing are distributed across multiple servers. The following are some of the key points to remember about Hadoop:
  • Each and every server offers local computation and storage, i.e. when you run a query against a large data set, every server in this distributed architecture executes the query on its local machine against its local data set. Finally, the result sets from all these local servers are consolidated.
  • In simple terms, instead of running a query on a single server, the query is split across multiple servers, and the results are consolidated. This means that the results of a query on a larger dataset are returned faster.
  • You don’t need a powerful server. Just use several less expensive commodity servers as hadoop individual nodes.
  • High fault-tolerance. If any of the nodes fails in the hadoop environment, it will still return the dataset properly, as hadoop takes care of replicating and distributing the data efficiently across the multiple nodes.
  • A simple hadoop implementation can use just two servers. But you can scale up to several thousands of servers without any additional effort.
  • Hadoop is written in Java. So, it can run on any platform.
  • Please keep in mind that Hadoop is not a replacement for your RDBMS. You'll typically use Hadoop for unstructured data.
  • Originally, Google started using the distributed computing model based on GFS (Google File System) and MapReduce. Later, Nutch (open source web search software) was rewritten using MapReduce. Hadoop was branched out of Nutch as a separate project. Now Hadoop is a top-level Apache project that has gained tremendous momentum and popularity in recent years.

HDFS

HDFS stands for Hadoop Distributed File System, which is the storage system used by Hadoop.
The following are some of the key points to remember about the high-level architecture of HDFS:
  • There is one NameNode and multiple DataNodes (servers). b1, b2, etc. refer to the data blocks stored on the DataNodes.
  • When you dump a file (or data) into HDFS, it is stored as blocks on the various nodes in the Hadoop cluster. HDFS creates several replicas of the data blocks and distributes them across the cluster in a way that is reliable and allows the data to be retrieved faster. A typical HDFS block size is 128MB. Each and every data block is replicated to multiple nodes across the cluster.
  • Hadoop internally makes sure that a node failure never results in data loss.
  • There will be one NameNode that manages the file system metadata
  • There will be multiple DataNodes (These are the real cheap commodity servers) that will store the data blocks
  • When you execute a query from a client, it will reach out to the NameNode to get the file metadata information, and then it will reach out to the DataNodes to get the real data blocks
  • Hadoop provides a command line interface for administrators to work on HDFS (a quick sketch of a few such commands follows this list)
  • The NameNode comes with an in-built web server from where you can browse the HDFS filesystem and view some basic cluster statistics
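As a rough sketch of that command-line interface (the paths here are just examples), you can list, upload, and read files in HDFS like this:
$ bin/hadoop fs -ls /
$ bin/hadoop fs -put localfile.txt /user/hadoop/
$ bin/hadoop fs -cat /user/hadoop/localfile.txt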

MapReduce


The following are some of the key points to remember about MapReduce:

  • MapReduce is a parallel programming model that is used to retrieve the data from the Hadoop cluster
  • In this model, the library handles a lot of messy details that programmers don't need to worry about. For example, the library takes care of parallelization, fault tolerance, data distribution, load balancing, etc.
  • It splits the tasks and executes them on various nodes in parallel, thus speeding up the computation and retrieving the required data from a huge dataset quickly.
  • This provides a clear abstraction for programmers. They just have to implement (or use) two functions: map and reduce
  • The data is fed into the map function as key/value pairs to produce intermediate key/value pairs
  • Once the mapping is done, all the intermediate results from the various nodes are reduced to create the final output (a rough shell analogy follows this list)
  • The JobTracker keeps track of all the MapReduce jobs that are running on various nodes. It schedules the jobs and keeps track of all the map and reduce jobs running across the nodes. If any one of those jobs fails, it reallocates the job to another node. In simple terms, the JobTracker is responsible for making sure that the query on a huge dataset runs successfully and the data is returned to the client in a reliable manner.
  • The TaskTracker performs the map and reduce tasks assigned by the JobTracker. The TaskTracker also constantly sends a heartbeat message to the JobTracker, which helps the JobTracker decide whether to delegate a new task to that particular node or not.
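To build intuition for the map and reduce phases mentioned above, here is a rough shell analogy of the classic word-count job; this is not Hadoop code, just an illustration. The cat and tr commands play the role of the map phase by emitting one word per line, sort stands in for the shuffle, and uniq -c acts as the reduce phase by aggregating the count per key.
$ cat input/*.xml | tr -s ' \t' '\n' | sort | uniq -c | sort -rn | head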
We've only scratched the surface of Hadoop. This is just the first article in our ongoing series. In future articles of this series, we'll explain how to install and configure a Hadoop environment, how to write MapReduce programs to retrieve data from the cluster, and how to effectively maintain a Hadoop infrastructure.

TCP/IP Protocol Fundamentals Explained


Have you ever wondered how your computer talks to other computers on your local LAN or to other systems on the internet?
Understanding the intricacies of how computers interact is an important part of networking and is of equal interest to sysadmins and developers. In this article, we will discuss the concept of communication from the basic fundamentals that everybody should understand.

TCP/IP PROTOCOL SUITE

Communication between computers on a network is done through protocol suites. The most widely used and most widely available protocol suite is the TCP/IP protocol suite. A protocol suite consists of a layered architecture where each layer provides some functionality, which is carried out by a protocol. Each layer usually has more than one protocol option to carry out the responsibility that the layer adheres to. TCP/IP is normally considered to be a 4-layer system. The 4 layers are as follows:
  1. Application layer
  2. Transport layer
  3. Network layer
  4. Data link layer

1. Application layer

This is the top layer of TCP/IP protocol suite. This layer includes applications or processes that use transport layer protocols to deliver the data to destination computers.
At each layer there are certain protocol options to carry out the task designated to that particular layer. So, the application layer also has various protocols that applications use to communicate with the second layer, the transport layer. Some of the popular application layer protocols are:
  • HTTP (Hypertext transfer protocol)
  • FTP (File transfer protocol)
  • SMTP (Simple mail transfer protocol)
  • SNMP (Simple network management protocol) etc

2. Transport Layer

This layer provides the backbone for data flow between two hosts. It receives data from the application layer above it. There are many protocols that work at this layer, but the two most commonly used protocols at the transport layer are TCP and UDP.
TCP is used where a reliable connection is required, while UDP is used for unreliable connections.
TCP divides the data (coming from the application layer) into properly sized chunks and then passes these chunks onto the network. It acknowledges received packets, waits for the acknowledgments of the packets it sent, and sets a timeout to resend packets if acknowledgements are not received in time. The term 'reliable connection' is used where it is not acceptable to lose any of the information being transferred over the network through this connection. So, the protocol used for this type of connection must provide a mechanism to achieve this. For example, while downloading a file, it is not acceptable to lose any information (bytes), as that may lead to corruption of the downloaded content.
UDP provides a comparatively simpler but unreliable service by sending packets from one host to another. UDP does not take any extra measures to ensure that the data sent is actually received by the target host. The term 'unreliable connection' is used where the loss of some information does not hamper the task being carried out over the connection. For example, while streaming a video, the loss of a few bytes of information is acceptable, as it does not harm the user experience much.

3. Network Layer

This layer is also known as the Internet layer. The main purpose of this layer is to organize and handle the movement of data on the network. By movement of data, we generally mean routing of data over the network. The main protocol used at this layer is IP, while ICMP (used by the popular 'ping' command) and IGMP are also used at this layer.

4. Data Link Layer

This layer is also known as the network interface layer. It normally consists of the device drivers in the OS and the network interface card attached to the system. Both the device drivers and the network interface card take care of the communication details with the media being used to transfer the data over the network. In most cases, this media is in the form of cables. Some of the well-known protocols used at this layer include ARP (Address Resolution Protocol) and PPP (Point-to-Point Protocol).

TCP/IP CONCEPT EXAMPLE

One thing worth noting is that the interaction between two computers over the network through the TCP/IP protocol suite takes place in the form of a client-server architecture.
The client requests a service, while the server processes the request for the client.
Now that we have discussed the underlying layers that help data flow from a host to a target over a network, let's take a very simple example to make the concept clearer: suppose you request a website from your browser.
At the application layer, since the HTTP protocol is being used, an HTTP request is formed and sent down to the transport layer.
Here the TCP protocol adds some more information (like the sequence number, source port number, destination port number, etc.) to the data coming from the upper layer so that the communication remains reliable, i.e. a track of the sent and received data can be maintained.
At the next lower layer, IP adds its own information on top of the data coming from the transport layer. This information helps the packet travel over the network. Lastly, the data link layer makes sure that the data transfer to/from the physical media is done properly. Here again, the communication done at the data link layer can be reliable or unreliable.
This information travels on the physical media (like Ethernet) and reaches the target machine.
Now, at the target machine (which in our case is the machine at which the website is hosted) the same series of interactions happen, but in reverse order.
The packet is first received at the data link layer. At this layer the information (that was stuffed in by the data link layer protocol of the host machine) is read and the rest of the data is passed to the upper layer.
Similarly, at the network layer, the information set by the network layer protocol of the host machine is read and the rest of the information is passed on to the next upper layer. The same happens at the transport layer, and finally the HTTP request sent by the host application (your browser) is received by the target application (the web server).
One might wonder what happens when the information particular to each layer is read by the corresponding protocols at the target machine, or why it is required at all. Well, let's understand this through the example of the TCP protocol at the transport layer. At the host machine, this protocol adds information like a sequence number to each packet sent by this layer.
At the target machine, when the packet reaches this layer, the TCP at this layer makes a note of the sequence number of the packet and sends an acknowledgement (which is the received sequence number + 1).
Now, if the host TCP does not receive the acknowledgement within some specified time, it resends the same packet. This is how TCP makes sure that no packet gets lost. So we see that the protocol at every layer reads the information set by its counterpart to achieve the functionality of the layer it represents.
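If you want to watch a tiny client/server exchange yourself, netcat is handy. In one terminal, start a simple TCP listener (the port 8080 is arbitrary, and depending on your netcat variant you may need "nc -l -p 8080"), then in a second terminal send it a minimal HTTP-style request:
$ nc -l 8080
$ printf 'GET / HTTP/1.0\r\n\r\n' | nc localhost 8080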

PORTS, SERVERS AND STANDARDS

On a particular machine, a port number coupled with the IP address of the machine is known as a socket. The combination of IP address and port on both the client and the server is known as a four-tuple, and this four-tuple uniquely identifies a connection. In this section we will discuss how port numbers are chosen.
You already know that some of the very common services like FTP, telnet, etc. run on well-known port numbers. The FTP server runs on port 21, while the Telnet server runs on port 23. So, we see that the standard services provided by any implementation of TCP/IP have standard ports on which they run. These standard port numbers are generally chosen from 1 to 1023. The well-known ports are managed by the Internet Assigned Numbers Authority (IANA).
While most standard servers (that are provided by the implementation of TCP/IP suite) run on standard port numbers, clients do not require any standard port to run on.
Client port numbers are known as ephemeral ports. By ephemeral we mean short-lived. This is because a client may connect to a server, do its work, and then disconnect. So we use the term 'short-lived', and hence no standard ports are required for clients.
Also, since clients need to know the port numbers of the servers in order to connect to them, most standard servers run on standard port numbers.
The ports reserved for clients generally range from 1024 to 5000, while port numbers higher than 5000 are used by servers that are not standard or well known.
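You can observe ephemeral ports on your own machine by listing active TCP connections while, for example, a browser is open; the local address column will typically show client ports picked from this ephemeral range:
$ netstat -tn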
If you look at the file /etc/services, you will find most of the standard servers and the ports on which they run.
$ cat /etc/services
systat  11/tcp  users
daytime  13/udp
netstat  15/tcp
qotd  17/tcp  quote
msp  18/udp
chargen  19/udp  ttytst source
ftp-data 20/tcp
ftp  21/tcp
ssh  22/tcp
ssh  22/udp
telnet  23/tcp
...
...
...
As you can see from the /etc/services file, FTP has port number 21, telnet has port number 23, and so on. You can use the grep command on this file to find any server and its associated port, as shown below.
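For example, looking up the ssh entries shown above could be done like this:
$ grep -w ssh /etc/services
ssh  22/tcp
ssh  22/udp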
As far as the standards are concerned, the following four organizations/groups manage the TCP/IP protocol suite. Both the IRTF and the IETF fall under the IAB.
  1. The Internet Society (ISOC)
  2. The Internet Architecture Board (IAB). The IAB falls under the ISOC.
  3. The Internet Engineering Task Force (IETF)
  4. The Internet Research Task Force (IRTF)


Wednesday, February 1, 2012

2 Easy Steps to Enable SSL / HTTPS on Tomcat Server


If you are running a Tomcat server that currently serves only HTTP, follow the 2 easy steps mentioned below to configure Tomcat for SSL.

1. Create Keystore using Java keytool

First, use keytool to create a Java keystore as shown below. Make sure to note down the password that you enter while creating the keystore.
# $JAVA_HOME/bin/keytool -genkey -alias tomcat -keyalg RSA
Enter keystore password:
Re-enter new password:
What is your first and last name?
 [Unknown]:  Suresh Kumar
What is the name of your organizational unit?
 [Unknown]:  Development
What is the name of your organization?
 [Unknown]:
What is the name of your City or Locality?
 [Unknown]:  
What is the name of your State or Province?
 [Unknown]:  
What is the two-letter country code for this unit?
 [Unknown]: 
Is CN=Suresh, OU=Development, O=Unknown, L=Los Angeles, ST=CA, C=US correct?
 [no]:  yes

Enter key password for <tomcat>
   (RETURN if same as keystore password):
This will create the .keystore file under the /root home directory as shown below.
# ls -l /root/.keystore
-rw-r--r-- 1 root root 1391 Apr  6 11:19 .keystore
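If you want to double-check what was created, you can list the contents of the keystore; it will prompt for the keystore password you chose above.
# $JAVA_HOME/bin/keytool -list -keystore /root/.keystore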

2. Modify the server.xml file

Locate the conf/server.xml file under the Tomcat directory. If the Connector for port "8443" is commented out, you should uncomment it first. Please note that comments in the server.xml file are enclosed in <!-- and -->, so you should remove the first and last lines (the comment markers) from the code snippet shown below.
# vi server.xml
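The commented-out entry typically looks roughly like the following; the exact attributes depend on your Tomcat version, so treat this as a sketch. The first and last lines are the comment markers you need to remove.
<!--
<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
           maxThreads="150" scheme="https" secure="true"
           clientAuth="false" sslProtocol="TLS" />
-->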
   
Now, add the keystore information to the server.xml as shown below. Replace your-key-password with the password you provided in step 1 while creating the keystore.
# vi server.xml
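Assuming the keystore created in step 1 is at /root/.keystore, the resulting uncommented entry would look something like this (again a sketch; attribute names can vary between Tomcat versions):
<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
           maxThreads="150" scheme="https" secure="true"
           clientAuth="false" sslProtocol="TLS"
           keystoreFile="/root/.keystore" keystorePass="your-key-password" />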
   
Finally, restart the Tomcat server (as shown below) and access the application using https://{your-ip-address}:8443/
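A typical way to restart a standalone Tomcat installation is to run its shutdown and startup scripts; the path below is an assumption, so adjust it to wherever your Tomcat is installed.
# /usr/local/tomcat/bin/shutdown.sh
# /usr/local/tomcat/bin/startup.sh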

How to Password Protect Grub Boot Loader in Linux


GRUB's security features allow you to set a password on the GRUB entries. Once you set a password, you cannot edit any GRUB entries or pass arguments to the kernel from the GRUB command line without entering the password.
It is highly recommended to set a GRUB password on any critical production system, as explained in this article.

1. Use grub password command in grub.conf

On a system where GRUB is not secured with a password, the following message will be displayed right under the GRUB menu during system startup.
As you can see from this message, anybody who is at the console while rebooting the server can edit the grub commands, or even modify the kernel arguments, which will probably cause problems if someone who doesn't know what they are doing plays around with this on a production system.
Use the up-arrow and down-arrow keys to select which entry is highlighted.
Press enter to boot the selected OS,
'e' to edit the commands before booting,
'a' to modify the kernel arguments before booting, or
'c' for a command-line
/boot/grub/grub.conf contains information about the entries that are displayed in the GRUB menu during system startup. On some systems, /etc/grub.conf is a symbolic link to /boot/grub/grub.conf.
Add the following “password” line to the grub.conf file.
$ cat /etc/grub.conf
default=0
timeout=15
password GrbPwd4SysAd$
..
Once the "password" command is added to grub.conf, the following message will be displayed right under the GRUB menu during system startup.
As you can see from this message, without entering the GRUB password that you set in grub.conf, nobody can edit the grub commands or modify the kernel arguments. All they can do is select one of the displayed entries and boot from it.
Use the up-arrow and down-arrow keys to select which entry is highlighted.
Press enter to boot the selected OS or
'p' to enter a password to unlock the next set of features.

2. Encrypt the grub password using grub-crypt

While reading the above, you probably thought to yourself: yes, GRUB is now secured by a password, but the password itself is stored in clear text in the grub.conf file, which kind of defeats the purpose.
You can use the grub-crypt utility to create an encrypted password.
grub-crypt will read the clear-text password from the user and display the encrypted password, as shown below.
# grub-crypt
Password: GrbPwd4SysAd$
Retype password: GrbPwd4SysAd$
^9^32kwzzX./3WISQ0C
Modify the grub.conf file and add the "password" entry with the --encrypted argument as shown below. Just copy the output of the grub-crypt command and paste it after the "--encrypted" argument in the password entry.
$ cat /etc/grub.conf
default=0
timeout=15
password --encrypted ^9^32kwzzX./3WISQ0C
..
By default, the grub-crypt command encrypts the password using the SHA-512 algorithm. You can also encrypt the password using either the SHA-256 or MD5 algorithm, as shown below.
# grub-crypt --sha-256
# grub-crypt --md5
You can also use md5crypt to encrypt the password. In that case, you should use "password --md5 encrypted-password" in your grub.conf file.
Inside the script section of your grub.conf file, if you specify "lock", grub will execute the rest of the commands in that section of the menu entry only if the user is authenticated (see the sketch below).
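A hypothetical grub.conf menu entry using lock might look like the following; the kernel and initrd lines are placeholders for your own test kernel.
title Test Kernel
        lock
        root (hd0,0)
        kernel /vmlinuz-test ro root=/dev/sda2
        initrd /initrd-test.img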

3. Load a different file for the Grub Menu

By default, the entries shown in the GRUB menu during system startup are picked up from the grub.conf file, i.e. based on the lines that start with the "title" entry in grub.conf.
If you are testing some variation of a new kernel, you might want to create a separate grub menu file that contains the custom menu entries. During system startup, by default GRUB will show only the entries from grub.conf. However, when you enter the password, you can instruct grub to load your custom menu entries.
This is achieved by passing the custom menu file name to the password command in the grub.conf file, as shown below.
In the following example, GRUB will load and display the menu entries from /etc/mymenu.lst when you provide the password during system startup.
$ cat /etc/grub.conf
default=0
timeout=15
password --encrypted ^9^32kwzzX./3WISQ0C /etc/mymenu.lst
..