Thursday, September 15, 2011

Japan builds hotel for the dead!


A hotel in Japan turns down young couples who come looking for a place to stay. This hotel has only 'cold-storage' rooms -- and those are for the dead!
Situated in Yokohama, the Lastel hotel looks much like any other lodging, but it stores the deceased in refrigerated coffins so that mourning relatives can visit any time, the Daily Mail reported.
These are, however, not permanent hotel guests. They are waiting their turn at the city's crowded crematoriums.
There are 18 corpses, all tucked up in refrigerated coffins. Each coffin costs around 12,000 yen ($157).
Hotel owner Hisayoshi Teramura says about the couples who come asking for rooms: 'We tell them we only have cold rooms.'
Death in Japan has become a 'rare booming market', the daily said.
In 2010, according to government records, 1.2 million people died. Around 55,000 more people died than in 2009.
Over the past decade, an average of 23,000 more people died each year in Japan.
Annual deaths by 2040 are expected to reach 1.66 million.
Teramura's hotel for the dead stores and chills encoffined corpses, and delivers them through hatches into a viewing room, whenever friends and family come to pay their respects.
In Yokohama city, the average wait for a crematorium is more than four days.
'Otherwise people have to keep the bodies at home where there isn't much space,' said Teramura.

The .htaccess file - More than just redirects!


.htaccess is just a file in your home folder, but it can do wonders. It can change settings on the server and let you do many different things. The .htaccess file isn't difficult to use; it is really just a few simple directives in a text file. Let me note down a few situations where the .htaccess file can be used. These are requests I get frequently, and I hope this article will help you get them done, all by yourselves.
First and foremost, we need to check whether .htaccess is enabled on the server. Those who have root access should check the Apache configuration file and ensure that the following entry is set:
———————-

AllowOverride All

———————-
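Note that AllowOverride only takes effect inside a <Directory> block of the main configuration. As a minimal sketch (the path below is just an example; use your own document root):
———————-

<Directory /var/www/html>
    # Allow .htaccess files under this path to override server settings
    AllowOverride All
</Directory>

———————-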
For those who have no root access, please check with your support team.
You can also change the name from .htaccess to anything else like .config. You just need to set the following in the Apache configuration file:
———-
AccessFileName .config
———-

Now let's get on with the various applications:

To set custom error pages

To point a certain error message to a custom file, put this in your .htaccess file:
———
ErrorDocument 404 http://www.yourdomainname.com/filename.html
———
Where 404 is the error code you are handling, and http://www.yourdomainname.com/filename.html is the page you wish people to see when they receive the error.
Local URLs begin with a slash (/); otherwise, supply a full URL which the client can resolve.
Examples:
ErrorDocument 500 /cgi-bin/tester
ErrorDocument 404 /cgi-bin/bad_urls.pl
ErrorDocument 401 http://www2.foo.bar/subscription_info.html
ErrorDocument 403 "Sorry can't allow you access today"

Redirect a page using .htaccess

To redirect visitors to certain pages based on the directory or file they request, add this to your .htaccess file:
————-
Redirect /directory http://www.domain.com/new.html
————-
Where /directory is the URL of the directory or file that you wish to redirect, and http://www.domain.com/new.html is the URL you are redirecting to.
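By default, Redirect sends a temporary (302) status. If the move is permanent, you can say so explicitly by adding the status code, so that clients and search engines update their records. A quick sketch (the filenames are examples):
————-
Redirect 301 /oldpage.html http://www.domain.com/new.html
————-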

Protecting a directory using .htaccess

If you want to set up authentication and prevent unauthorized users from entering a certain area, here is the .htaccess code to require passwords:
————–
AuthType Basic
AuthUserFile /home/user/.htpasswd
AuthGroupFile /dev/null
AuthName "Members Area"
require valid-user

————–
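The AuthUserFile referenced above must actually exist. You can create it with Apache's htpasswd utility; a quick sketch (the username is just an example):
————–

# -c creates the file; drop it when adding further users,
# or the existing file will be overwritten
htpasswd -c /home/user/.htpasswd john

————–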

Deny users using .htaccess

Add the following to the .htaccess file:
————–

order allow,deny
deny from 128.23.45.
deny from 122.2.2.2
allow from all

————–
This is an example of a .htaccess file that will block access to your site for anyone coming from an IP address beginning with 128.23.45, as well as from the specific IP address 122.2.2.2. By specifying only part of an IP address and ending the partial address with a period, you block every sub-address in that range. You should use IP addresses to block access here; matching by domain name, though Apache supports it, forces reverse DNS lookups on each request and is best avoided.

Redirect to a machine name

Add the following to the .htaccess file:
————–
RewriteEngine On
Options +FollowSymlinks
RewriteBase /
# Rewrite Rule for machine.domain-name.net
RewriteCond %{HTTP_HOST} ^machine\.domain-name\.net$
RewriteCond %{REQUEST_URI} !^/machine/
RewriteRule ^(.*)$ machine/$1

————–
This will redirect requests for the machine name machine.domain-name.net to the directory machine on the site domain-name.net.

Prevent hotlinks: stopping people from linking to your images

Add the following to the .htaccess file:
————–
# Rewrite Rule for images
RewriteEngine On
RewriteCond %{HTTP_REFERER} www.their-isp.net/users/mypage/
RewriteRule ^(.*)$ http://www.their-isp.net/users/mypage/
————–
Here www.their-isp.net/users/mypage/ is the domain name and path of the page that is linking to your content; replace it with the actual referring page. The RewriteCond directive states that if the HTTP_REFERER matches that URL, the RewriteRule applies; the RewriteRule then redirects any such request back to the referring web page.
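A more general variant, which is my own sketch rather than part of the tip above, blocks image requests from every referrer other than your own site (replace yourdomain.com with your actual domain):
————–

RewriteEngine On
# Allow requests with an empty referer (some browsers and proxies strip it)
RewriteCond %{HTTP_REFERER} !^$
# Allow requests referred from our own pages
RewriteCond %{HTTP_REFERER} !^http://(www\.)?yourdomain\.com/ [NC]
# Answer every other image request with 403 Forbidden
RewriteRule \.(gif|jpe?g|png)$ - [F]

————–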
——————————————————-
Reference:
http://help.mindspring.com/webhelp/resources/powertips/accessindex.htm
http://www.javascriptkit.com/howto/htaccess.shtml
http://apache-server.com/tutorials/ATusing-htaccess.html
http://www.webdeveloper.com/servers/servers_htaccess_magic.html
http://baremetal.com/gadgets/htaccess/
——————————————————-

Staying safe on the Internet


After coming to know that someone had used her credit card to buy an iPhone, a friend of mine lamented, "I always knew it was not safe to use the same password on every website, but I did so anyway, thinking it wouldn't happen to me!". She realized she was terribly wrong, and now it is a bit too late.
It is very natural that people tend to use the same password everywhere, and I am no exception. In this article I try to evaluate the situation and identify possible solutions.

Reasons for this practice

We are not supercomputers, and it is simply impossible to remember a different password for every website. A netizen may have to register on hundreds of websites over the years, and will be forced to reuse passwords for the sake of convenience, despite knowing that it is not a good practice. Some do so out of sheer ignorance, while many think they have nothing to lose even if someone manages to break into their email, since there is no sensitive data in the mail. But online identity theft is one of the fastest-spreading crimes these days.

Solutions to the problem

1. Use separate passwords for personal & professional use
2. Use separate passwords for critical and non-critical websites. For example, use different sets of passwords for forums, email accounts, banks etc.
3. Use the same password with slight modifications
Eg: P@ssw0rd1 for one site and P@ssw0rd2 for a second website, or P@ssw0rdG for Gmail and P@ssw0rdY for Yahoo mail
4. Use a unique password for every website. But then no one could remember them all, so store those passwords in a location like a separate email account (Eg: Hotmail). But what happens if that Hotmail account is compromised? So choose a password like "0op5" + "my unique password stored in email". This is like a two-part password: you remember the first part, and choose to store the second part somewhere else.
5. Try to remember the unique passwords for all the websites you register with!!!

Other solutions

1. Use a common login across websites via an API. Examples are websites that allow login using a Google ID, OpenID, Facebook Connect etc.
2. Use a second layer of authentication in addition to the password. A good example is the SMS validation recently implemented in Gmail.
3. Another proven solution is "staying away from the Internet"!!!
I think I have covered almost everything I could think of now. Please share your thoughts as comments.

The World is running out of IPv4 addresses


Well, it's finally happening: the world is starting to run out of IPv4 addresses. ICANN (Internet Corporation for Assigned Names and Numbers) and IANA (Internet Assigned Numbers Authority) announced in February that the last of the world's remaining IPv4 blocks had been assigned to the Regional Internet Registries (RIRs). The RIRs were expected to be able to meet demand for IPv4 addresses for at least another year. However, APNIC (Asia-Pacific Network Information Centre), the RIR for the Asia-Pacific region, has announced that it has released its final block of IPv4 addresses.

"This event is a key turning point in IPv4 exhaustion for the Asia Pacific, as the remaining IPv4 space will be 'rationed' to network operators to be used as essential connectivity with next-generation IPv6 addresses. All new and existing APNIC Members who meet the current allocation criteria will be entitled to a maximum delegation of a /22 (1,024 addresses) of IPv4 space." - APNIC

What caused it to run out so quickly? Primarily, the exponential growth of fixed and mobile networks in the region. From now on, all new networks and services in the region must implement IPv6. Based on these stats, it is not too hard to imagine RIPE or ARIN running out of IPv4 addresses by the end of 2011, let alone lasting into 2012.
ARIN has reported a decline in IPv4 requests since IANA's IPv4 pool was depleted in early February, while demand for IPv6 has gone up. So you can expect to see more interaction with IPv6 this year, and expect to be ordering IPv6 addresses from next year onwards.
IPv6 traffic is still reported to make up only 0.25% of Internet traffic. In hopes of improving that, the Internet Society is planning a World IPv6 Day. On World IPv6 Day, major web companies and other industry players will come together to enable IPv6 on their main websites for 24 hours. The goal is to motivate organizations across the industry (Internet service providers, hardware makers, operating system vendors and web companies) to prepare their services for IPv6 to ensure a successful transition as IPv4 address space runs out.
So, yes, it's high time you started thinking about your future needs; jump onto the IPv6 bandwagon today!

Should Webhosts worry about IPv6?


Well, if you're already set up, then you won't have to worry too much. At the current rate, the general opinion is that new hosts will have to be assigned IPv6 addresses by 2012 (if the world doesn't end). If those hosts wish to communicate with IPv4 servers over the existing IPv4 network infrastructure, they will have to understand both IPv4 and IPv6, at least until the transition is complete. To make the transition as smooth as possible, various transition mechanisms have been put forward, of which RFC 4213 (Basic Transition Mechanisms for IPv6 Hosts and Routers) makes an interesting read for any webhost planning to buy servers after 2012. More after the jump.

Why IPv6?

When I bought my first PC, a friend of mine said, "A 2GB HDD?? What are you going to do with all that space? It's going to be a waste!". Well, that was before someone invented a lossy data compression method called MPEG-2 Audio Layer III. That's pretty much what's happening with IPv4 addresses. Mobile devices now do more than just make calls, with various services requiring the device to have its own IP address. Virtualization technologies have allowed single physical hosts to run multiple servers, each requiring its own block of addresses. Always-on broadband connections, inefficient address use and basically just more people on the net have all led to IP address usage far exceeding the expectations held when IPv4 was first developed. Besides various other security and QoS enhancements, the jump from 32-bit to 128-bit addresses is the most significant, and most needed, change in Internet Protocol version 6.

What will I have to change?

"The mechanisms in this document are designed to be employed by IPv6 hosts and routers that need to inter-operate with IPv4 hosts and utilize IPv4 routing infrastructures. We expect that most nodes in the Internet will need such compatibility for a long time to come, and perhaps even indefinitely." - quote from RFC 4213
That last line is quite comforting; it lets you know that few expect the transition to ever be fully complete. IPv4 may be around for much, much longer. But if you purchase hosts after 2012, you'll most probably get IPv6 addresses. Two mechanisms put forward by RFC 4213 to help with the transition are dual stack and configured tunneling. You won't have to worry about either, as they will be taken care of by your OS or DC; most OSes today that support IPv6 implement a hybrid IPv4-IPv6 stack. However, something that may affect you directly is IPv6 address resolution, i.e. getting your nameservers to handle IPv6 addresses, and that is what I'll be talking about in my post next week. Check back for other interesting posts from my fellow bloggers.

IPv6 in your OS

Latest releases of all major OSes currently support IPv6 out of the box.

For Linux

IPv6 support has been available since the 2.4.x kernels, but it is recommended you switch to a 2.6.x kernel to stay IPv6-up-to-date (among other reasons). To test whether your server supports IPv6, simply run the following command:
test -f /proc/net/if_inet6 && echo "Running kernel is IPv6 ready"
If it displays the “Running kernel is IPv6 ready” message, your server is IPv6 ready. If not, you can find out more on how to load the IPv6 modules here.
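On distributions where IPv6 was built as a module rather than into the kernel, loading it by hand is usually a one-liner; a sketch, assuming the standard module name ipv6:
# Load the IPv6 module, then confirm it is present
modprobe ipv6
lsmod | grep ipv6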

For Windows

Support for IPv6 is built into the latest versions of Microsoft Windows, which include Windows 7, Windows Server 2008 R2, Windows Vista, Windows Server 2008, Windows Server 2003, Windows XP with Service Pack 2, Windows XP with Service Pack 1, Windows XP Embedded SP1, and Windows CE .NET.

IPv6 at your DC

Support for IPv6 may not necessarily require new hardware, as it can be enabled via software/firmware upgrades, provided the current hardware has enough storage and memory to support the new IPv6 stack. However, various "IPv6 ready" devices are being marketed with "advanced" support, so contact your DC if you need to know more. But if they are selling you servers with IPv6 addresses by 2012, I guess it's safe to assume they have the necessary equipment in place :)

The IPv6 Address Space

IPv4 uses a 32-bit address space. These 32 bits are stored as binary numbers (1s and 0s), but to make them easier for us to read, they are displayed as blocks of decimal numbers separated by a ".". Hence the familiar 192.168.1.1 notation. IPv6 uses 128 bits to represent an IP address, so if we were to use the same decimal notation, an address could run to 39 digits. To avoid such lengthy notation, IPv6 uses hexadecimal, i.e. a combination of the digits 0-9 and the letters a-f (representing 10-15). This reduces the number of characters required to represent an IPv6 address to 32. Where IPv4 addresses are broken into 4 blocks of 8 bits each separated by a ".", IPv6 uses 8 blocks of 16 bits each separated by a ":". So an IPv6 address in this notation would look something like this:
5852:d721:6b39:0e32:99e6:34bb:7134:43ff

But leading zeros in each block can be omitted, and a single run of consecutive all-zero blocks can be replaced by "::". So the address above would be more compactly written as:
5852:d721:6b39:e32:99e6:34bb:7134:43ff

The familiar 127.0.0.1 “localhost” in IPv4 is represented as:
0000:0000:0000:0000:0000:0000:0000:0001
which shortens down to:
::1

Ok, now that we know from parts I and II what IPv6 addresses are all about, let's take a look at what it would be like using them for sites hosted on your server.
Once you've ensured that your OS and your DC are set up to support IPv6, the next step is to start configuring your services to understand and handle IPv6 addresses. One of the services that you will need to ensure is IPv6 ready is your DNS service. BIND 9.x and the DNS Server service of Windows Server 2003/2008 both currently support IPv6. IPv6 listening in BIND is not enabled by default, so if you are using a plain server with no control panel, you will have to enable it yourself. If you are using a control panel, check your control panel documentation for details. As of writing this post, only DirectAdmin has announced partial support for IPv6. The other two popular control panels, Plesk and cPanel, have not yet announced support for IPv6, but it seems work on it is currently in progress; cPanel has mentioned it in their FAQ here.
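For a plain BIND 9 setup, getting the daemon to listen on IPv6 is typically a named.conf option; a minimal sketch (standard BIND 9 syntax; your real options block will contain more than this):
options {
    // Listen for queries on all IPv6 interfaces
    listen-on-v6 { any; };
};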

Adding IPv6 IP Addresses

Even though most control panels do not support adding IPv6 addresses yet, you can still test by manually adding an IPv6 address to the DNS zone file of your domains. The DNS servers currently in use with the most popular control panels, on both Linux and Windows servers, already support IPv6 host records. There are currently two resource record types in contention for the job: A6 and AAAA. Just as an A record defines an IPv4 address, the A6 and AAAA records can be used to define IPv6 addresses. Both are currently supported by DNS servers (though AAAA is the standard choice; A6 was relegated to experimental status), and you can read a comparison of them here.
Now that you know what resource record to use, adding the record to the DNS zone of your domain is just like adding any other record. e.g. If the IPv6 address of my domain were 5852:d721:6b39:e32:99e6:34bb:7134:43ff, the DNS record pointing my domain to that address would be:

my.domain. IN AAAA 5852:d721:6b39:e32:99e6:34bb:7134:43ff

Once that is set up, a quick lookup using dig or nslookup would report this as the IP address of my domain. That's it! Well, mostly: this assumes the IPv6 address points to a server that is fully IPv6 ready, i.e. it has Apache listening on that address with a VirtualHost set up on it, not to mention other services like Mail, MySQL etc. also understanding the IPv6 address. But don't worry; by the time IPv6 addresses hit the hosting industry, all control panels should support IPv6, so you won't have to worry about configuring these services! I hope these articles helped give you a better idea about IPv6 and its use in the webhosting industry.
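For instance, here is how verifying and serving that record might look, using the illustrative address and domain from above (a sketch; the paths and names are examples):
# Verify the AAAA record; +short prints just the answer
dig AAAA my.domain. +short

# Apache side: IPv6 addresses go inside square brackets
Listen [5852:d721:6b39:e32:99e6:34bb:7134:43ff]:80
<VirtualHost [5852:d721:6b39:e32:99e6:34bb:7134:43ff]:80>
    ServerName my.domain
    DocumentRoot /var/www/my.domain
</VirtualHost>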

OpenDS: How to put an elephant inside the Refrigerator


Ok, I lied. OpenDS will not help you put an elephant inside the fridge. But why did you want to put the poor creature there in the first place? No, I'm not going to talk about elephants or fridges. Today, it's all about the process of "putting". From the very beginning of the history of computing, storing data has been a big question. If we look at the road-map, we can highlight the progression something like this:
  • How to save data?
  • How to save data efficiently, so that less space is consumed?
  • How to save data so that it takes very little space, and can be retrieved fast?

Every new day brought new technologies on the hardware side, as well as on the software side of compressing data. That is when the third idea started catching on. Yes, organizing data is an even more critical task than saving it, and that is how directories and directory access protocols came into the picture. A directory is a set of objects with attributes, organized in a logical and hierarchical manner; it is just like the index of a CD library. The library may contain terabytes of data, or thousands of movies, but without the indexing you are going to have a very tough time finding the data you need.
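To make that concrete, here is what a single entry in such a hierarchy looks like in LDIF, the standard textual format for directory data (the names below are made up for illustration):
# One person entry, filed under the People branch of example.com
dn: uid=jdoe,ou=People,dc=example,dc=com
objectClass: inetOrgPerson
uid: jdoe
cn: John Doe
sn: Doe
mail: jdoe@example.com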
The earliest prototype of its kind was X.500 DAP, which was based on the OSI protocol stack. To cut a long story short: X.500 was so heavy that the folks at the University of Michigan came out with a lighter DAP based on TCP/IP. It was light, and it was named in a light way: Lightweight DAP (LDAP). Now, it is ***NOT*** LDAP that I'm gonna talk about. There is someone new, with a very recent history, who has arrived to steal the thunder. Folks, let us talk about OpenDS.
OpenDS is a free, open source directory service, written in Java, that implements a wide range of LDAP and related standards. It also offers multi-master replication, access control, and many extensions. Being written in Java makes it cross-platform: Linux, Mac, Windows or whatever, OpenDS will run. Note that, after all this talk about directories, it includes not "just" the directory server but also other essential directory-related services such as a directory proxy, virtual directory, namespace distribution and data synchronization.
Hey!! Hello?? What is new in OpenDS? Why so much noise about it?
  • Performance. We could list a bunch of other features, but performance remains the key feature of the system.
  • Scalability upward: able to handle billions of entries in a single instance.
  • Scalability downward: able to run in low-memory environments, surviving on just the essential components. Imagine: OpenDS running on a cell phone. :D
  • Security: a whole lot of expertise in access control, encryption, authentication, auditing, password and account management, and all those things your security auditor will ever ask for.
  • Availability: the most crucial of them all. Whatever happens, the system should be up and running.
  • The list goes on and on and on, so let's stop it here.
A big feature that I've missed completely is the replication mechanism. With it, we can very effectively reduce the load on a single machine, thereby improving performance, and even scale the system up by adding new machines to the loop. The list of available strategies for connecting all the servers together is quite long, and if I start on that, this post will go on, like, forever. So I'm stopping here for now. Wait for the detailed connection strategies in the sequel. Till then, ta!

Think Speed : mod_pagespeed for Apache


The world of the web is becoming faster day by day. Webmasters have been making conscious efforts to enhance website speed even more, now that Google has even linked page rank to website speed. Sticking to efficient coding and adopting best practices have always been the webmaster's ways of speeding up a site. As a webhost, there are quite a few steps you can take to help your webmasters in this race for website performance. Let's look at the latest on the block.

mod_pagespeed

mod_pagespeed is an open-source Apache module that optimizes web pages and the resources they use. It does this by rewriting those resources using filters that implement web performance best practices. Webmasters and web developers can use mod_pagespeed to improve the performance of their web pages when serving content with the Apache HTTP Server. The module is, for now, compatible with Apache version 2.2 and is available as a downloadable binary for i386 and x86-64 systems. The module has been tested on both CentOS and Ubuntu, and the binary form can be used on other Debian/RPM based distros.
The source code, accessible through svn, can be found here.

How it works

The module performs several optimizations on the fly, such as optimized caching, minimized client-server round-trips, minimized payload size and optimized browser rendering. The end result is much higher performance for the website, usually to the tune of up to 100%, or in other words half the loading time.
mod_pagespeed includes several filters that optimize JavaScript, HTML and CSS style-sheets, as well as filters for optimizing JPEG and PNG images. The filters are based on a set of best practices known to enhance web page performance.

Getting started

This part is rather easy and shouldn't take more than a couple of minutes if you are familiar with Apache and its modules. Even otherwise, the installation and configuration are pretty straightforward.
The rpm package can be found here; it can be installed on servers that have Apache 2.2 installed from an rpm/deb package. The details are on the same page.
The following link outlines compiling the module from source. If neither of these works, extracting the files from within the package and loading the extension appropriately would work as well.
Once the module is installed, say with rpm/deb package, the configuration file can be found at :
Ubuntu: /etc/apache2/mods-enabled/pagespeed.conf
CentOS: /etc/httpd/conf.d/pagespeed.conf
The module is loaded within this configuration file, and the web server needs to be restarted for the module to take effect.
Now, to enable and disable the various filters, a fair understanding of each filter is needed. The details on filters can be found here, and additional information on configuration can be found here.
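As an illustration, a minimal pagespeed.conf might look like the sketch below; the module path is an assumption for a CentOS-style layout, and the filter names are taken from the mod_pagespeed documentation:
# Load the module and turn rewriting on
LoadModule pagespeed_module /usr/lib/httpd/modules/mod_pagespeed.so
ModPagespeed on
# Enable a conservative set of filters
ModPagespeedEnableFilters combine_css,extend_cache,rewrite_images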

Kit that hunts Kits - RKHunter


Rootkit Hunter (rkhunter) is a Unix tool that scans for rootkits, trojans, backdoors and similar exploits. The tool is released under the GPL license and hence is free. It is actually a shell script that performs various checks on the system to detect known rootkits and malware: it checks whether system binaries have been modified, whether the system startup files have been tampered with, and whether active processes are malicious in nature. The reports of the checks are usually brief, yet helpful in validating the sanity of a local machine/server.

Scanning techniques

–> MD5 hash comparison, to validate the authenticity of packages and binaries
–> Looking for hidden files and default files used by rootkits
–> Checking for wrong file permissions on binaries
–> Looking for suspicious strings in kernel modules
–> An optional scan for malicious code within plain-text and binary files

Install RKHunter

The installation is pretty simple. The latest files can be fetched from http://sourceforge.net/projects/rkhunter/files/
The following command(s) would leave you with a working rkhunter installation:
cd /usr/local/src/
wget http://downloads.sourceforge.net/rkhunter/rkhunter-1.3.6.tar.gz
tar -zxvf rkhunter-*
cd rkhunter-*
sh installer.sh --layout /usr/local --install
The very first step would be to populate the properties database. The following command does that :

rkhunter --propupd

You may wish to glance through the configuration file at /usr/local/etc/rkhunter.conf. It will help you understand the tool better, acquaint you with the configuration details, and give you a better idea of how to avoid false positives.
You might see a few warnings when you run the test; a warning does not always indicate a security breach. Yes, the chances of false positives are high, but once you get to know the tool, you can avoid them with various configuration options. The tool is designed to work on various platforms, yet remain flexible enough to be tuned for each of your systems.
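For example, known-good files can be whitelisted in rkhunter.conf; a sketch (these particular paths are common examples, and you should whitelist only files you have verified yourself):

SCRIPTWHITELIST=/usr/bin/whatis
ALLOWHIDDENDIR=/etc/.java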
The test can be performed with the following command :

/usr/local/bin/rkhunter --check --skip-keypress

In case you want to uninstall the tool, the following command removes the installation. It has to be run from the same location from which the tool was installed (say /usr/local/src/rkhunter-1.3.6):

sh installer.sh --layout /usr/local --remove

Setting cron for RKHunter

Scheduling the check ensures periodic server-sanity reports. It is simple: the following cron entry, with a one-liner script, takes care of the schedule.
The script goes into the /etc/cron.daily folder, and the tool itself has a "--cronjob" switch designed for exactly this purpose.
Create the file /etc/cron.daily/rkhunter.sh
Add the following- replacing your email address in place of email@example.com:
#!/bin/bash
/usr/local/bin/rkhunter --cronjob 2>&1 | mail -s "$HOSTNAME daily RK-Hunter Scan Report" email@example.com
Follow it up with setting permissions for the script :
chmod +x /etc/cron.daily/rkhunter.sh
That’s it! You now have a kit that detects rootkits at your service.