Showing posts with label CentOS. Show all posts

Saturday, February 11, 2012

Install Fail2ban in CentOS 5 (fail2ban)

1. Download and Install

wget http://sourceforge.net/projects/fail2ban/files/fail2ban-stable/fail2ban-0.8.4/fail2ban-0.8.4.tar.bz2
tar -xjvf fail2ban-0.8.4.tar.bz2
cd fail2ban-0.8.4
python setup.py install

2. Edit jail.conf

vi /etc/fail2ban/jail.conf

----------//---------

[DEFAULT]

# "ignoreip" can be an IP address, a CIDR mask or a DNS host. Fail2ban will not
# ban a host which matches an address in this list. Several addresses can be
# defined using space separator.
ignoreip = 127.0.0.1 192.168.1.0/24 <--------- addresses listed here will never be banned by fail2ban

# "bantime" is the number of seconds that a host is banned.
bantime  = 86400 <-------- changed to 24 hours; the number of seconds the host stays banned (default 600)

# A host is banned if it has generated "maxretry" during the last "findtime"
# seconds.
findtime  = 600

# "maxretry" is the number of failures before a host get banned.
maxretry = 3  <------- ban a host after maxretry failed logins within the findtime window above (default 3)

......

[ssh-iptables]

enabled  = true <-------- changed to true to enable this jail for sshd
filter   = sshd
action   = iptables[name=SSH, port=ssh, protocol=tcp]
           sendmail-whois[name=SSH, dest=you@mail.com, sender=fail2ban@mail.com]
logpath  = /var/log/secure <------------ changed from sshd.log to /var/log/secure
maxretry = 5   <-------- if set here, this value takes precedence over the DEFAULT 'maxretry' above

----------//---------
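As a quick sanity check on the bantime value, one full day in seconds works out as follows:

```shell
# 24 hours expressed in seconds, for the bantime setting above
echo $((24 * 60 * 60))   # prints 86400
```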

3. Copy start script and start service

cp files/redhat-initd /etc/init.d/fail2ban
chkconfig --add fail2ban
chkconfig fail2ban on
service fail2ban start
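Once the service is running, you can confirm the jail actually loaded. A quick sketch (the exact output fields vary between fail2ban versions):

```shell
# List the loaded jails, then show the counters for the sshd jail
fail2ban-client status
fail2ban-client status ssh-iptables
```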

Thursday, September 1, 2011

Download CentOS 6.0 ISO | 32 & 64 Bit


Finally, the CentOS 6 final version has been released and is available to download.
This is good news for all RHEL users who like to use its clone, though right now we don't know how stable it is or how closely it matches RHEL 6.
We are pleased to announce the immediate availability of CentOS 6.0 for i386 and x86_64 architectures. CentOS 6.0 is based on the upstream release EL 6.0 and includes packages from all variants. All upstream repositories have been combined into one, to make it easier for end users to work with. Since upstream has a 6.1 version already released, we will be using a Continuous Release repository for 6.0 to bring all 6.1 and post-6.1 security updates to all 6.0 users, until such time as CentOS 6.1 is released.
Downloads for CentOS 6 are now available from the following links. Let me know if you face any errors and I will provide some other links too.

Thursday, August 4, 2011

How to Setup GFS2 or GFS in Linux Centos

Setting up GFS2 with my 3 shared hosting servers and 1 SAN storage has been a nightmare for me. I have been reading all over the internet, and the solutions are either outdated or contain bugs that kept my SAN storage from working. Finally, I managed to set up GFS2 on my Dell MD3200i with 10TB of disk space.


GFS2/GFS Test Environment


Here is the test environment I utilized for this setup:
3 CentOS web servers
1 Dell MD3200i SAN storage
1 switch to connect all of this equipment together


Assumption


I will assume you have already set up all 3 of your CentOS servers to communicate with your SAN iSCSI storage. This means that all 3 CentOS servers can see your newly created LUN using iscsiadm, and that you have switched off iptables and SELinux. If your iSCSI storage isn't configured yet, you can do so by following the guide at cyberciti.


Setup GFS2/GFS packages


On all 3 of your CentOS servers, you must install the following packages:


cman
gfs-utils
kmod-gfs
kmod-dlm
modcluster
ricci
luci
cluster-snmp
iscsi-initiator-utils
openais
oddjob
rgmanager


Or you can simply run the following yum command on all 3 CentOS machines:

yum install -y cman gfs-utils kmod-gfs kmod-dlm modcluster ricci luci cluster-snmp iscsi-initiator-utils openais oddjob rgmanager


Or even simpler, you can just install the cluster groups via the following lines:

yum groupinstall -y Clustering
yum groupinstall -y "Cluster Storage"


Oh, remember to update your CentOS before proceeding with any of the above.

yum -y check-update
yum -y update


After you have done all of the above, you should have all the packages needed to set up GFS2/GFS on all 3 of your CentOS machines.


Configuring GFS2/GFS Cluster on CentOS


Once you have the required CentOS packages installed, you need to prepare each machine. First, add all 3 machine names to the hosts file on every server. I appended all 3 machine names, so each machine has the following additional lines in its /etc/hosts file:

111.111.111.1 gfs1.hungred.com
111.111.111.2 gfs2.hungred.com
111.111.111.3 gfs3.hungred.com

where *.hungred.com is each machine's name and the IP beside it is that machine's address, which allows the machines to communicate with each other using the IPs stated there.

Next, we will need to set up the cluster configuration of the servers. On each machine, execute the following instructions to create an identical cluster configuration:
ccs_tool create HungredCluster
ccs_tool addfence -C node1_ipmi fence_ipmilan ipaddr=111.111.111.1 login=root passwd=machine_1_password
ccs_tool addfence -C node2_ipmi fence_ipmilan ipaddr=111.111.111.2 login=root passwd=machine_2_password
ccs_tool addfence -C node3_ipmi fence_ipmilan ipaddr=111.111.111.3 login=root passwd=machine_3_password
ccs_tool addnode -C gfs1.hungred.com -n 1 -v 1 -f node1_ipmi
ccs_tool addnode -C gfs2.hungred.com -n 2 -v 1 -f node2_ipmi
ccs_tool addnode -C gfs3.hungred.com -n 3 -v 1 -f node3_ipmi
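Before starting the cluster stack, it is worth confirming that the configuration took. These companion commands come from the same ccs_tool suite (output formatting varies by version):

```shell
# List the nodes and fence devices recorded in /etc/cluster/cluster.conf
ccs_tool lsnode
ccs_tool lsfence
```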


Next, you will need to start cman and rgmanager.

service cman start
service rgmanager start

cman should start without any error. If you get an error while starting cman, your GFS2/GFS will not work. If everything works fine, you should see the following when you run the command shown below:


[root@localhost ]# cman_tool nodes
Node Sts Inc Joined Name
1 M 16 2011-1-06 02:30:27 gfs1.hungred.com
2 M 20 2011-1-06 02:30:02 gfs2.hungred.com
3 M 24 2011-1-06 02:36:01 gfs3.hungred.com

If you see the above, you have properly set up your GFS2 cluster. Next, we need to set up GFS2 itself!


Setting up GFS2/GFS on Centos


You will need to start the following services.


service gfs start
service gfs2 start

Once these two have started, all you need to do is format your SAN storage LUN. If you want to use GFS2, format it as gfs2:


/sbin/mkfs.gfs2 -j 10 -p lock_dlm -t HungredCluster:GFS /dev/sdb


Likewise, if you would like to use GFS, just change gfs2 to gfs:


/sbin/mkfs.gfs -j 10 -p lock_dlm -t HungredCluster:GFS /dev/sdb


A little explanation here. HungredCluster is the cluster we created while setting up our GFS2 cluster. /dev/sdb is the SAN storage LUN that was discovered using iscsiadm. -j 10 is the number of journals; each machine in the cluster requires 1 journal, so it is good to decide in advance how many machines you will place in this cluster. -p lock_dlm is the lock type we will be using; there are 2 other types besides lock_dlm, which you can look up online.
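The journal count passed to -j can be sketched as one journal per node plus spares for future growth. The numbers below are simply the ones used in this post:

```shell
# 3 current nodes plus 7 spare journals gives the -j 10 used above
NODES=3
SPARES=7
echo $((NODES + SPARES))   # prints 10
```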


P.S.: All of the servers that belong to the GFS cluster need to be located in the same VLAN. Contact support if you need assistance with this.


If you are only configuring two servers in the cluster, you will need to manually edit the /etc/cluster/cluster.conf file on each server, telling cman to allow a two-node quorum (the usual way is to give the <cman> tag the attributes two_node="1" and expected_votes="1").


If you do not make this change, the servers will not be able to establish a quorum and will refuse to cluster, by design.


Set up GFS2/GFS to run on startup


Key in the following to ensure that GFS2/GFS starts every time the system reboots.


chkconfig gfs on
chkconfig gfs2 on
chkconfig clvmd on  # if you are using lvm
chkconfig cman on
chkconfig iscsi on
chkconfig acpid off
chkconfig rgmanager on
echo "/dev/sdb /home gfs2 defaults,noatime,nodiratime 0 0" >> /etc/fstab
mount /dev/sdb


Once this is done, your GFS2/GFS volume will be mounted on your system at /home. You can check whether it worked using the following command.


[root@localhost ~]# df -h


You should now be able to create files on one of the nodes in the cluster, and have the files appear right away on all the other nodes in the cluster.


Optimize clvmd


We can try to optimize clustered LVM by controlling the type of locking it uses.


vi /etc/lvm/lvm.conf

Find the variables below and change them as shown (note that these locking settings live in LVM's own configuration file, not in a clvmd-specific file):

locking_type = 3
fallback_to_local_locking = 0

service clvmd restart
credit goes to http://pbraun.nethence.com/doc/filesystems/gfs2.html


Optimize GFS2/GFS
There are a few ways to optimize your gfs file system. Here are some of them.
Set your plock rate to unlimited and plock ownership to 1 in /etc/cluster/cluster.conf (typically the plock_rate_limit="0" and plock_ownership="1" attributes on the <gfs_controld> tag in this CentOS 5 era stack).
Set noatime and nodiratime in your fstab.
echo "/dev/sdb /home gfs2 defaults,noatime,nodiratime 0 0" >> /etc/fstab
Lastly, we can tune GFS directly by decreasing how often GFS2 demotes its locks, via this method:
echo "
gfs2_tool settune /GFS glock_purge 50
gfs2_tool settune /GFS scand_secs 5
gfs2_tool settune /GFS demote_secs 20
gfs2_tool settune /GFS quota_account 0
gfs2_tool settune /GFS statfs_fast 1
gfs2_tool settune /GFS statfs_slots 128
" >> /etc/rc.local


credit goes to linuxdynasty.
iptables and GFS2/GFS ports
If you wish to keep iptables active, you will need to open up the following ports.


-A INPUT -i 10.10.10.200 -m state --state NEW -m multiport -p udp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 5404,5405 -j ACCEPT
-A INPUT -i 10.10.10.200 -m state --state NEW -m multiport -p tcp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 8084 -j ACCEPT
-A INPUT -i 10.10.10.200 -m state --state NEW -m multiport -p tcp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 11111 -j ACCEPT
-A INPUT -i 10.10.10.200 -m state --state NEW -m multiport -p tcp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 14567 -j ACCEPT
-A INPUT -i 10.10.10.200 -m state --state NEW -m multiport -p tcp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 16851 -j ACCEPT
-A INPUT -i 10.10.10.200 -m state --state NEW -m multiport -p tcp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 21064 -j ACCEPT
-A INPUT -i 10.10.10.200 -m state --state NEW -m multiport -p tcp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 41966,41967,41968,41969 -j ACCEPT
-A INPUT -i 10.10.10.200 -m state --state NEW -m multiport -p tcp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 50006,50008,50009 -j ACCEPT
-A INPUT -i 10.10.10.200 -m state --state NEW -m multiport -p udp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 50007 -j ACCEPT


Once these ports are open in your iptables, cman should be able to restart properly without getting stuck at either the fencing or the cman starting point. Good luck!


Troubleshooting


You might face some problems setting up GFS2 or GFS. Here are some that might be of help.


CMAN fencing failed
You get something like the following when you start your cman
Starting cluster:
Loading modules... done
Mounting configfs... done
Starting ccsd... done
Starting cman... done
Starting daemons... done
Starting fencing... failed


One possibility causing this is that your gfs2 volume is already mounted to a drive; hence, fencing failed. Try unmounting it and starting again.


mount.gfs2 error
If you are getting the following error:

mount.gfs2: can't connect to gfs_controld: Connection refused

you need to start the cman service first.
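A minimal recovery sequence for that error, assuming the same device and mount point used earlier in this post:

```shell
# Bring up the cluster stack, then retry the mount
service cman start
mount -t gfs2 /dev/sdb /home
```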


Tuesday, July 27, 2010

Pxe-Kickstart-Automating-CentOS

In a previous post we looked at the install and setup of a kickstart server. One of the last steps that had to be taken on the client was to use an "append" at the boot prompt to assign the client a static IP address. This time we are going to look at setting up PXE services for clients, to create a truly "hands-off" approach to installing desktops and servers with kickstart. I will be using the HTTP protocol again for my kickstart, and I must say resources out there for the PXE/kickstart/HTTP combination are really limited. It took a lot of trial and error to get this working; however, the FTP and NFS methods are much easier to implement.

You should already have a working kickstart server in place before setting up anything else in this post. For those that don't, as a quick refresher, you should have the following directory structure:

/var/www/pub
|-- CentOS
|-- images
|   `-- pxeboot
|-- isolinux
|   `-- isolinux.cfg
|-- kickstart
|-- repodata
 
In the pxeboot folder should be the vmlinuz and initrd.img files, and the kickstart folder should contain your kickstart file (test.cfg in our case). You can also refer to this earlier post to set this up. Next, you will need to set up a DHCP server.
# yum -y install dhcp
# cp /usr/share/doc/dhcp-3.0.5/dhcpd.conf.sample /etc/dhcpd.conf
# vi /etc/dhcpd.conf

## /etc/dhcpd.conf file ##
ddns-update-style interim;
ignore client-updates;
authoritative;
allow booting;
allow bootp;

subnet 172.168.1.0 netmask 255.255.255.0 {
   # default gateway
   option routers    172.168.1.1;
   option subnet-mask   255.255.255.0;
   option domain-name   "mydomain.org";
   option domain-name-servers 172.168.1.1;
 
   # EST Time Zone
   option time-offset   -18000; 
 
   # Client IP range
   range dynamic-bootp 172.168.1.100 172.168.1.200;
   default-lease-time 21600;
   max-lease-time 43200;
 
   # PXE Server IP
   next-server 172.168.1.1;
   filename "pxelinux.0";
 
}

## END FILE ## 
 
Now you will need to save the file and set the service to start on boot.
# chkconfig dhcpd on
# service dhcpd restart

Now your DHCP server should be set up and working properly. You can test this if you'd like by allowing a client to lease an IP address from the server to verify that it is working (run the dhclient command on any Linux box). Next, we will need to set up a TFTP server to serve the PXE file to clients. We will need to install the server and configure it to run under the xinetd service. Essentially, all you need to do is change the "disable" option to "no".
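For the lease test mentioned above, a sketch from any Linux client on the same segment (the interface name and lease-file path are examples; they vary by distribution):

```shell
# Request a lease on eth0, then inspect what the server handed out
dhclient eth0
cat /var/lib/dhclient/dhclient-eth0.leases
```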
# yum -y install tftp-server
# vi /etc/xinetd.d/tftp

## /etc/xinetd.d/tftp file ##

service tftp
{
        socket_type           = dgram
        protocol              = udp
        wait                  = yes
        user                  = root
        server                = /usr/sbin/in.tftpd
        server_args           = -s /tftpboot
        disable               = no
        per_source            = 11
        cps                   = 100 2
        flags                 = IPv4
}

## END FILE ## 
 
Save the file and restart the service for it to take effect:
# service xinetd restart

Next is the install of syslinux, which is required to allow the clients to actually PXE boot.
# yum -y install syslinux

Simple enough. Next we will need to create the TFTP directory layout for the clients to PXE boot from.
# cd /
# mkdir tftpboot
# cd tftpboot
# mkdir images
# mkdir pxelinux.cfg
# cp /usr/share/syslinux/menu.c32 .
# cp /usr/share/syslinux/pxelinux.0 .

* Some will have to use /usr/lib/syslinux
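At this point you can already smoke-test TFTP from a client (the tftp client comes from the tftp package; the IP is the PXE server address used throughout this post):

```shell
# Fetch the boot loader over TFTP to confirm the server is answering
tftp 172.168.1.1 -c get pxelinux.0
```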

Now your directory structure should be in place with the required files. Last we will just copy over the kernel for the clients to use when booting.
# cd images
# cp /var/www/pub/images/pxeboot/vmlinuz .
# cp /var/www/pub/images/pxeboot/initrd.img .

Finally, we just need to make the PXE file that directs the clients where to boot from.
# cd /tftpboot/pxelinux.cfg
# vi default

## /tftpboot/pxelinux.cfg/default ##

default menu.c32
prompt 0
timeout 10

MENU TITLE PXE Menu

LABEL CentOS 5.4 x32
MENU LABEL CentOS 5.4 x32
KERNEL images/vmlinuz
append initrd=images/initrd.img linux ks=http://172.168.1.1/pub/kickstart/test.cfg

## END FILE ##

Once you save and close this file, you are done with the setup! There is one small change I forgot to mention: you will need to adjust your firewall settings for these new services.
# vi /etc/sysconfig/iptables
-A RH-Firewall-1-INPUT -m state --state NEW -m udp -p udp --dport 67 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m udp -p udp --dport 68 -j ACCEPT
-A RH-Firewall-1-INPUT -m udp -p udp --dport 69 -j ACCEPT
# service iptables restart

That should do it. Now, as many of you may have guessed, I use the following addresses on my "lab" network to perform these test installs:

DHCP Server: 172.168.1.1
DNS Server: 172.168.1.1
PXE Server: 172.168.1.1
Clients: 172.168.1.100 - 172.168.1.200

Most of this should be obvious from following this tutorial. Now try PXE booting your client; it should pick up all that it needs from the PXE server, boot the Linux kernel into RAM, and begin executing your kickstart file for installation. I will note, for those of you who are not using the HTTP protocol (NFS or FTP), that there are very few changes needed to make PXE booting work for you. In particular, you will have a different directory layout when starting, and the last line of the /tftpboot/pxelinux.cfg/default file will need to be changed to the format of the protocol you are using.
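For reference, here are hedged examples of what that last append line might look like for the other two protocols (the paths are illustrative; match them to your actual NFS export or FTP root):

```shell
# NFS variant of the kickstart location in the default file
#   append initrd=images/initrd.img ks=nfs:172.168.1.1:/var/www/pub/kickstart/test.cfg
# FTP variant
#   append initrd=images/initrd.img ks=ftp://172.168.1.1/pub/kickstart/test.cfg
```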

CentOS DHCP Server Setup

One of the basic elements found on all networks is a DHCP server, making it an important part of any network.  DHCP makes network administration easy because you can make changes at a single point (the DHCP server) on your network and let them filter down to the rest of the network.  To begin setting up a DHCP server, we first need to configure our machine with a static IP address.  As the root user, open the following file: /etc/sysconfig/network-scripts/ifcfg-eth0 (assuming you want the eth0 interface to distribute IP addresses to the network).  Configure the following:

TYPE=Ethernet
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.1.50
NETMASK=255.255.255.0
USERCTL=yes
IPV6INIT=no
PEERDNS=yes
ONBOOT=yes

Once finished, you will need to restart the networking service: service network restart

Now that you have a static IP address set up, you will need to install the dhcp package, which contains the DHCP server software.  After this package is installed, there are two important files to work with.  The first is /etc/dhcpd.conf, the configuration file for the DHCP server.  This file may not exist, in which case you will need to create it.  You can find a sample to work from (recommended) at /usr/share/doc/dhcp-/dhcpd.conf.sample.  Copy it over to the main configuration file and then edit the main configuration file to your specifications.  This is the easiest and fastest way to set up DHCP.  The second file to take note of is /var/lib/dhcpd/dhcpd.leases, which stores all the client leases for the DHCP server.

$ yum install dhcp.i386


$ cp /usr/share/doc/dhcp-3.0.5/dhcpd.conf.sample /etc/dhcpd.conf
$ nano /etc/dhcpd.conf


# Sample DHCP Config File
ddns-update-style interim;
authoritative;


subnet 192.168.1.0 netmask 255.255.255.0 {


     # Parameters for the local subnet
     option routers                                192.168.1.254;
     option subnet-mask                            255.255.255.0;


     option domain-name                            "testbed.edu";
     option domain-name-servers                    192.168.1.50;


     default-lease-time       21600;
     max-lease-time       43200;


     range dynamic-bootp 192.168.1.100 192.168.1.200;
}


$ service dhcpd restart

As we can see looking through the configuration file above, there is only one subnet on this network.  The gateway is defined by the "option routers" parameter, the DNS information by the "option domain-name" parameters, and the client leases by the "range" parameter.  Restarting the DHCP service loads the configuration file into the server, and it will begin leasing IP addresses to clients.  One other configuration parameter you should know is how to set up reserved IP addresses.
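As a quick check, the two lease times in the configuration above convert to hours as follows:

```shell
# default-lease-time and max-lease-time from the config, in hours
echo $((21600 / 3600))   # prints 6
echo $((43200 / 3600))   # prints 12
```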

# In the configuration file
host client01 {


 option host-name "client01.example.com";
 hardware ethernet 02:B4:7C:43:DD:FF;
 fixed-address 192.168.1.109;


}
This basically reserves this IP address for the client01 host with the specified MAC address.  This can be useful for printers or particular addresses that you wish to reserve.  You can now watch as clients begin leasing their IP addresses from the server as they connect to the network.  Some other ideas you might want to consider implementing with a DHCP server are a failover server, relay servers, and backing up the configuration file and/or the lease database.  As a tip, instead of editing the dhcpd.conf file and then restarting the server to make changes, you can use the omshell command, which lets you connect to, query, and change the configuration while the server is running.
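A sketch of the omshell approach mentioned above (the statement names follow the omshell man page; OMAPI must be enabled in dhcpd.conf, and this example only connects and looks up the host object, it does not change anything):

```shell
# Connect to the running dhcpd via OMAPI and query the client01 host object
omshell <<'EOF'
server 127.0.0.1
port 7911
connect
new host
set name = "client01"
open
EOF
```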