Friday, December 13, 2013

Red Hat Enterprise Linux 7 beta now available

https://access.redhat.com/site/sites/default/files/pages/attachments/rhel_whatsnewrhel7beta_techoverview_.pdf

https://access.redhat.com/site/products/Red_Hat_Enterprise_Linux/Get-Beta

https://access.redhat.com/site/solutions/32790

ftp://ftp.redhat.com//redhat/rhel/beta/7/x86_64/iso/rhel-everything-7.0-beta-1-x86_64-dvd.iso

Tuesday, December 10, 2013

*Gmail: 9 years and counting*

*mDEFENCE Specially for ladies protection*

http://www.mdefence.in/index.php

mDEFENCE is a mobile-enabled TRACK/POST tool that works at all times, even without an active internet connection on the phone, to provide a sense of safety and emergency management for citizens (particularly women).

The unique use cases are:

* Report multiple emergencies

* Emergency post to Guardians

* Auto-sync with media

* Employer connectivity

* Request for blood group

*Which platform is best*



Android, iOS, J2ME, Symbian, Blackberry, Windows.

Monday, December 9, 2013

!!!Sachin Happy Ending!!!

#ThankYouSachin
End of a glorious career!

!!!* Sachin's Inspirational Last Speech After His Retirement From Tests *!!!

http://www.youtube.com/watch?v=AzLil8ImkUw#t=649

*When it comes to supercomputers Linux Rules*

Google Voice Search Hotword (Beta)

This extension allows you to say ‘Ok Google’ and start speaking your search.
Now you can talk to Google when you’re using Chrome. Hands-free. No typing. Simply say “Ok Google” and then ask your question.  Note: this extension sends your question to Google only when it hears the phrase “Ok Google.”

How to get started
1) Download the extension.
2) Click “agree” to give your permission to use your microphone. 
3) Visit Google.com on Chrome and give it a try.  Just say “Ok Google” and then ask your question.

By installing this item, you agree to the Google Terms of Service and Privacy Policy at https://www.google.com/intl/en/policies/.


http://thumbnails.visually.netdna-cdn.com/list-of-google-now-voice-commands_5294dc0a62462_w587.png

Bit Torrent Sync

http://www.bittorrent.com/sync
http://www.youtube-nocookie.com/embed/044jIZfnyqQ?rel=0


Friday, December 6, 2013

10 reasons your enterprise should adopt Red Hat 6.5

http://www.techrepublic.com/blog/10-things/10-reasons-your-enterprise-should-adopt-red-hat-65/

Check out the 10 features that are most notable in the latest version of Red Hat Enterprise Linux. 
The latest iteration of the Red Hat Enterprise Linux (RHEL) operating system has arrived and it is not only ready for the enterprise, it's ready to re-define and reset the bar for enterprise expectations. With a full host of improvements (and new features), RHEL could easily become the de facto standard for enterprise platforms.
If you're not sure of this claim, or simply cannot believe the claim, I offer up to you ten reasons why your enterprise should adopt Red Hat Enterprise Linux.

1. Precision Time Protocol

If your company requires time to be measured in microseconds, you need a platform that works with the Precision Time Protocol (PTP). PTP enables sub-microsecond clock accuracy over a local area network. If you depend upon high-speed, low-latency applications (such as those used in the trading industry), PTP is a must-have.
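PTP itself is a wire protocol, but the clock math behind it is easy to illustrate. Below is a minimal Python sketch (not RHEL's implementation) of the four-timestamp offset/delay computation that PTP-style synchronization relies on; the timestamp values are made up for illustration:

```python
# Simplified illustration of how PTP-style sync computes a slave clock's
# offset from a four-timestamp exchange with the master. Real PTP
# (IEEE 1588) adds hardware timestamping to reach sub-microsecond accuracy.

def offset_and_delay(t1, t2, t3, t4):
    """t1: master sends Sync; t2: slave receives it;
    t3: slave sends Delay_Req; t4: master receives it.
    Returns (slave clock offset, one-way path delay), assuming the
    network path is symmetric in both directions."""
    offset = ((t2 - t1) - (t4 - t3)) / 2
    delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, delay

# Example: slave clock runs 15 microseconds ahead; true one-way delay is 5 us.
offset, delay = offset_and_delay(t1=0.0, t2=20e-6, t3=40e-6, t4=30e-6)
print(offset, delay)  # 1.5e-05 5e-06
```

The symmetric-path assumption is why sub-microsecond accuracy needs a LAN: asymmetric delay shows up directly as offset error.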

2. Easy application image deployment

There's a new tool in town (or at least a renamed tool) called Docker. With Docker you can easily deploy application images within containers. Each of these containers runs the application as if it were on a virtual machine, which means you no longer have to suffer the overhead of deploying a full-blown virtual operating system just to run a single application. This not only makes your virtual environment much more efficient, it also makes it far more cost effective.

3. Open hybrid cloud

RHEL 6.5 supports both OpenStack and OpenShift technologies. OpenStack is an open source cloud computing platform, and OpenShift automates the provisioning, management, and scaling of cloud applications. Together these two pieces create a Platform as a Service (PaaS). This, in conjunction with Docker, creates an incredibly flexible cloud environment that can serve enterprise needs in many ways.

4. Enhanced security

RHEL 6.5 enjoys numerous security upgrades. Key among the enhancements is a centralized certificate trust store, which provides standardized certificate access for all security services. There are also tools that support the OpenSCAP implementation of the Security Content Automation Protocol (SCAP). This protocol was developed by the US National Institute of Standards and Technology (NIST) and is central to auditing and verifying security configurations. With this standards-based technology included, it is possible to ensure a RHEL server configuration meets very stringent standards.

5. Network activity views

If you're an administrator who likes to know exactly what is going on with your network, RHEL 6.5 has what you're looking for. The latest version of Red Hat offers a comprehensive view of all network activity. With these new capabilities, administrators can inspect Internet Group Management Protocol (IGMP) data to list multicast router ports and multicast groups with active subscribers (and their associated interfaces).

6. Improved virtualization tools

There are plenty of improvements to the virtualization tools included with RHEL 6.5. High on this list is the ability to dynamically enable or disable virtual processors in active guests. With this new addition, RHEL can now better interact with cloud-based elastic workloads. Virtual guest memory has also been improved, with configurations that support up to 4TB of memory on the Linux built-in, kernel-based virtual machine hypervisor.

7. Subscription management

RHEL 6.5 now boasts revised subscription management. With this new tooling, you can have your server connect either to the Red Hat Customer Portal or to an on-premises subscription management service set up using Subscription Asset Manager. With the server and the service connected, your company gains centralized control of all subscription assets, along with enhanced reporting across multiple systems.

8. Faster dump files

If you've ever had to deal with large kernel dump files, you know they can cause problems. That's no longer the case with RHEL 6.5: the system can now handle incredibly large dump files quickly. Thanks to a new compression algorithm (LZO), dump files are created far faster than in previous iterations. Enhancements to the dump tools' tracing and testing commands provide additional event-monitoring capabilities.

9. Improved storage

Anyone working with RHEL 6.5 will see a marked improvement in storage. One reason is improved control and recovery when working with iSCSI or Fibre Channel storage area networks. The latest release also includes a solid-state drive (SSD) controller interface as well as support for NVM Express-based SSDs. It is also now possible to configure more than 255 Logical Unit Numbers (LUNs) connected to a single iSCSI target.

10. Improved overall performance

Above everything, Red Hat Enterprise Linux 6.5 enjoys a noticeable overall performance increase, which translates to more reliable environments, cost savings, and happier end users and CTOs. This improved performance means your critical applications run more effectively, which translates to a better bottom line.
Red Hat Enterprise Linux 6.5 could very easily herald a new king of the mountain in the enterprise. With the newest release, your company will enjoy more reliability, more security, and an improved ROI. 

Fact sheet: Red Hat Enterprise Linux 6.5

http://www.techrepublic.com/blog/linux-and-open-source/fact-sheet-red-hat-enterprise-linux-65/

Take a quick look at some of the updates and changes you'll see in Red Hat Enterprise Linux 6.5. 

The latest iteration of Red Hat Enterprise Linux (6.5) is now available, and it's a serious contender to usurp all other platforms as king of the enterprise space. This particular release was designed specifically to simplify the operation of mission-critical SAP applications. The new release focuses on key enterprise-specific areas, including:
  • Subscription management services
  • Scalability
  • Networking
  • Storage
  • Virtualization
  • Security

What we know

Kernel:
  • The pm8001/pm80xx driver adds support for PMC-Sierra Adaptec Series 6H and 7H SAS/SATA HBA cards, plus PMC Sierra 8081, 8088, and 8089 chip-based SAS/SATA controllers
  • Configurable Timeout for Unresponsive Devices
  • Configuration of Maximum Time for Error Recovery
  • Lenovo X220 Touchscreen Support
  • New Supported Compression Formats for makedumpfile
Networking:
  • Precision Time Protocol (PTP)
  • Analyzing the Non-Configuration IP Multicast IGMP Snooping Data
  • PPPoE Connections Support in NetworkManager
  • Network Namespace Support for OpenStack
  • SCTP Support to Change the Cryptography Hash Function
  • M3UA Measurement Counters for SCTP
  • Managing DOVE Tunnels Using iproute
  • WoWLAN Support for Atheros Interfaces
  • SR-IOV Functionality in the qlcnic Driver
Security:
  • OpenSSL Updated to Version 1.0.1
  • Smartcard Support in OpenSSH
  • ECDSA Support in OpenSSL
  • ECDHE Support in OpenSSL
  • Support of TLS 1.1 and 1.2 in OpenSSL and NSS
  • OpenSSH Support of HMAC-SHA2 Algorithm
  • Prefix Macro in OpenSSL
  • NSA Suite B Cryptography Support
  • Shared System Certificates
  • Automatic Synchronization of Local Users Centrally in Identity Management
  • ECC Support in NSS
  • Certificate Support in OpenSSH

A new time protocol

There are specific enterprises (such as trading-related industries) where application latency must be measured in microseconds. Because of this need, Red Hat Enterprise Linux 6.5 now supports sub-microsecond clock accuracy over the local area network (LAN) using the Precision Time Protocol (PTP). This precision time synchronization is key to enabling better performance for high-speed, low-latency applications.

Networking

PTP isn't the only improvement to the network subsystem. Red Hat Enterprise Linux 6.5 improved networking includes new capabilities that enable system administrators to inspect Internet Group Management Protocol (IGMP) data to list multicast router ports and multicast groups with active subscribers (and their associated interfaces). The improvements in networking allow the Red Hat server to better meet the needs of modern network scenarios.

Next-gen enterprise security

The latest iteration of Red Hat Enterprise Linux goes a long way to integrate security. One of the main changes is the addition of a centralized certificate trust store that enables standardized certificate access for security services. Also added into this release is OpenSCAP 2.1, an implementation of the National Institute of Standards and Technology’s (NIST) Security Content Automation Protocol (SCAP) 1.2 standard.

Virtualization

One of the big improvements to virtualization in Red Hat Enterprise Linux 6.5 is the ability to enable and disable virtual CPUs (vCPUs) in active guests. This improvement makes it an ideal choice for elastic workloads. Also, the handling of memory-intensive applications within guests has been improved, thanks to support for up to 4 TB of memory on the Kernel-based Virtual Machine (KVM) hypervisor. Lastly, integration with GlusterFS volumes is now supported, providing direct access to the distributed storage platform and greatly improving performance when accessing Red Hat Storage or GlusterFS volumes.

Storage

Storage is crucial to any enterprise. You need reliable and fast access to data, including portability. With Red Hat Enterprise Linux 6.5, customers can deploy application images in containers created in physical, virtual, or cloud environments. This is accomplished using Docker, an open-source project for packaging and running lightweight, self-sufficient containers. Red Hat Enterprise Linux 6.5 has also improved support for NVM Express-based Solid State Drives (SSDs), which standardizes the interface for PCIe-based SSDs. If you can afford a server loaded with SSDs, the performance increase is exceptional -- and Red Hat fully understands that.
Scalability has been improved in Red Hat Enterprise Linux 6.5 as well. It's now possible to configure more than 255 Logical Unit Numbers (LUNs) connected to a single iSCSI target. Administrators also gain improved control and recovery for iSCSI SANs. There are numerous other storage-centric improvements (Fibre Channel, updates to the kexec/kdump mechanism, and more). Finally, Red Hat Enterprise Linux 6.5 makes it easier to track and manage the consumption of subscriptions across the entire enterprise.

Resources

For more information about Red Hat Enterprise Linux 6.5, visit the resources below:

Red Hat



Talkatone

Millions of people use Talkatone to call and text over WiFi or a data connection without using cell minutes.


http://www.talkatone.com/

mRemoteNG

http://www.mremoteng.org/

mRemoteNG is a fork of mRemote, an open source, tabbed, multi-protocol, remote connections manager. mRemoteNG adds bug fixes and new features to mRemote.

It allows you to view all of your remote connections in a simple yet powerful tabbed interface.

mRemoteNG supports the following protocols:
  • RDP (Remote Desktop/Terminal Server)
  • VNC (Virtual Network Computing)
  • ICA (Citrix Independent Computing Architecture)
  • SSH (Secure Shell)
  • Telnet (TELecommunication NETwork)
  • HTTP/HTTPS (Hypertext Transfer Protocol)
  • rlogin
  • Raw Socket Connections
If you are a programmer, graphic designer or technical writer and would like to help with mRemoteNG, please let us know.

Tuesday, October 22, 2013

mDEFENCE App

Hey! I am now using mDefence, a GPRS Free Citizen safety App. This will be really handy in times of need and emergency. Get one on your mobile from http://mdefence.in

http://www.mdefence.in/index.php
mDEFENCE is a mobile-enabled TRACK/POST tool that works at all times, even without an active internet connection on the phone, to provide a sense of safety and emergency management for citizens (particularly women).
The unique use cases are:
* Report multiple emergencies
* Emergency post to Guardians
* Auto-sync with media
* Employer connectivity
* Request for blood group




Thursday, October 3, 2013

Monday, June 10, 2013

Unix Administration Learning MAP

Big Data – What is it???

Ref:-http://gurkulindia.com/main/2013/06/bigdata/

Most technology geeks have heard the recent buzz about Big Data; many of my colleagues and friends have been asking questions about it, so I thought I should write a blog post to answer them.
What is Big Data?
Big Data is defined any number of ways in the industry, so instead of trying to find the one true definition, let's try to understand the concepts and ideas behind it.
The name "big data" may make you think it is all about size, but that is not all there is to it. There is a lot more to deal with, and an enormous number of use cases in multiple domains. One way to explain Big Data is the "three V's":
V – Volume, V – Variety, V – Velocity

Picture Source: Wired.com
Another framing is "ABC":
A – Analytics, B – Bandwidth, C – Capacity
Both the "three V's" and "ABC" are best seen as characteristics of big data rather than definitions.
Let’s get into more details of the V’s
'V' – Volume of Data:
The sheer "volume" of data being stored is exploding: 90% of the world's data was generated in the last 4 years, and this number is expected to reach 35 Zettabytes (ZB) by 2020. Companies like Facebook, Twitter, and CERN generate terabytes of data every single day. Interestingly, 80% of the world's data is unstructured; businesses today do not have the resources or technology to store all of this and turn it into "useful" data. In other words, it is very hard to get information out of the available data.
One well-observed phenomenon is that the data available to an organization is rising while the percentage of data the organization can process is declining, which is kind of depressing for a technology lover. But don't feel bad, we have Hadoop to fix this ☺
'V' – Variety of Data:
With the growing amount of data we now have a new challenge to deal with: its variety. A growing variety of sources means a growing variety of data: sensors, social networking feeds, smart devices, location info, and many more. This leaves us in a complex situation, because it includes not only traditional relational data (a very small percentage) but mostly raw, semi-structured, and unstructured data from web logs, click-stream data, search indexes, email, photos, videos, and so on.
Handling this kind of data on a traditional system is effectively impossible. We need a fundamental shift in analysis requirements, from traditional structured data to this whole variety of data; traditional analytics platforms can't handle the variety, because they were built to support relational data that is neatly formatted and fits nicely into strict schemas.
Since roughly 80% of data is left unprocessed, we need to build systems that efficiently store and process non-relational data, perform the required analytics, generate reports via Business Intelligence (BI) tools, and deliver real value to the business and its growth.
‘V’ – Velocity of Data:
Just as the sheer volume and variety of data we collect and store have changed, so too has the "velocity" at which it is generated and needs to be handled. The growth rates associated with data repositories are very high as the number of sources grows.
Rather than confining velocity to the above, we can interpret it as "data in motion": the speed at which data is flowing.
The two most important challenges to deal with are:
1. Data in motion
2. Data at rest
Dealing effectively with Big Data requires us to perform analytics against the volume and variety of data while it is "still in motion", not just after it is at "rest".
Consider a real-time fraud prevention use case: let's say a credit card is cloned and used at two different locations at the same time. With our existing "traditional" systems there is a lag before we detect this. But imagine having real-time data processing and analysis technology to prevent it. It is just as wonderful as it sounds.
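As a toy illustration of that "data in motion" idea, here is a minimal Python sketch of the cloned-card check described above. The field layout, the 60-second window, and the sample transactions are all illustrative assumptions, not a real fraud system:

```python
# Toy "data in motion" check: flag a card seen at two different
# locations within a short time window. In a real system this logic
# would run inside a stream-processing engine, not a single process.

WINDOW_SECONDS = 60  # illustrative threshold, not an industry standard

def detect_cloned_cards(transactions):
    """transactions: iterable of (card_id, location, unix_ts) tuples,
    assumed to arrive sorted by timestamp, as from a live stream."""
    last_seen = {}  # card_id -> (location, ts) of most recent transaction
    alerts = []
    for card, location, ts in transactions:
        if card in last_seen:
            prev_loc, prev_ts = last_seen[card]
            # Same card, different place, too soon: physically implausible.
            if prev_loc != location and ts - prev_ts <= WINDOW_SECONDS:
                alerts.append((card, prev_loc, location))
        last_seen[card] = (location, ts)
    return alerts

stream = [
    ("card-A", "Hyderabad", 1000),
    ("card-B", "Chennai",   1010),
    ("card-A", "Mumbai",    1030),  # same card, 30s later, different city
]
print(detect_cloned_cards(stream))  # [('card-A', 'Hyderabad', 'Mumbai')]
```

The point of the sketch is the latency argument: a batch system would find this pair hours later, while a streaming check flags it on arrival.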
Why Big Data?
• To analyze not only raw structured data, but also semi-structured and unstructured data from a variety of sources.
• To effectively process and analyze larger sets of data instead of analyzing samples of the data.
• To solve information challenges that don't natively fit a traditional relational database approach to handling the three V's.
• To improve "intelligence in business" and take quicker actions by developing real BI tools and meeting customer needs like never before.
• To detect business patterns and trends in real time.
• To improve the quality of business in various sectors such as e-health, retail, IT, call centers, agriculture, and so on.
"To handle and process the data, and do magical things that were never imagined by anyone"
Working With Big data:
Google in its initial days was able to download the Internet and index the available data while it was still small. But when the data started growing and new sources kept appearing every day, things became too complex to handle, so Google came up with an internal solution to process this growing volume in a completely different way.
In that process they developed GFS (the Google File System) and something called MapReduce to efficiently manage the growing data. Google kept these for internal use and did not open-source them, but in 2004 they published a paper called "MapReduce" explaining what the system does and how data is processed to make internet searches possible.
Using that paper, people in the industry started thinking in a different way. Doug Cutting started developing a framework to handle growing, unstructured data, which was named "Hadoop"; it is an open source project, actively developed and heavily contributed to by Yahoo.

Introduction to Hadoop

Ref:-  http://gurkulindia.com/main/2013/06/hadoop-intro/

Hadoop is a platform well suited to dealing with semi-structured and unstructured data, as well as situations where a data discovery process is needed. That isn't to say Hadoop can't be used for structured data that is readily available in a raw format; it can.
Traditionally, data goes through a lot of rigor to make it into the warehouse. It is cleaned up via various cleansing, enrichment, modeling, master data management, and other services before it is ready for analysis, which is an expensive process. Because of that expense, data that lands in the warehouse is not just high value but broad purpose: it is used to generate reports and dashboards where accuracy is key.
In contrast, Big Data repositories very rarely undergo the full quality control applied to data injected into a warehouse. Hadoop is built to handle large volumes of data, so prepping and processing that data cannot be cost-prohibitive.
Think of Hadoop as a system designed for processing mind-boggling amounts of data.
Two main components of Hadoop:
1. Map – Reduce = Computation
2. HDFS = Storage
Hadoop Distributed file system (HDFS):
Let’s discuss about the Hadoop cluster components before getting into details of HDFS.
A typical Hadoop environment consists of a master node, worker nodes with specialized software components.
Master node: There may be multiple master nodes to avoid a single point of failure in an environment. The elements of the master node are:
1. Job tracker
2. Task tracker
3. Name node
Job tracker: The job tracker interacts with client applications. It is mainly responsible for distributing MapReduce tasks to particular nodes within the cluster.
Task tracker: This process receives tasks such as map, reduce, and shuffle from a job tracker.
Name node: This process stores a directory tree of all files in HDFS and keeps track of where the file data is kept within the cluster. Client applications contact the name node when they need to locate a file, or to add, copy, or delete a file.
Data node: Data nodes store data in HDFS and are responsible for replicating data across the cluster. They interact with client applications once the name node has supplied the data node's address.
Worker nodes: These are the commodity servers that process the data coming through. Each worker node includes a data node and a task tracker.
Scenario to better understand how “stuff” works:
1. Let's say we have a 300 MB file.
2. By default HDFS splits it into 128 MB blocks:
300 MB = 128 MB + 128 MB + 44 MB
3. So HDFS splits the 300 MB file into the blocks above.
4. HDFS keeps 3 copies of each block.
5. All of these blocks are stored on data nodes.
Bottom line: the name node tracks blocks and data nodes, and pays attention to all nodes in the cluster. It does not store any file data itself, and no data flows through it.
• When a data node (DN) fails, the name node makes sure its blocks are re-replicated to other nodes; with 3 replicas, up to 2 DN failures per block can be tolerated.
• The name node (NN) is a single point of failure.
• DNs continuously run checksums; if any block is corrupted, it is served from another DN's replica.
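The block arithmetic from the scenario above can be sketched in a few lines of Python, assuming the 128 MB block size and replication factor of 3 used in the scenario:

```python
# Sketch of HDFS block splitting for the 300 MB file scenario above:
# a file is cut into fixed-size blocks, and each block is replicated.

BLOCK_MB = 128     # block size assumed in the scenario
REPLICATION = 3    # default HDFS replication factor

def split_into_blocks(file_mb, block_mb=BLOCK_MB):
    """Return the sizes of the blocks a file of file_mb is split into."""
    blocks = []
    remaining = file_mb
    while remaining > 0:
        blocks.append(min(block_mb, remaining))
        remaining -= block_mb
    return blocks

blocks = split_into_blocks(300)
print(blocks)                     # [128, 128, 44]
print(len(blocks) * REPLICATION)  # 9 block replicas spread across data nodes
```

Note the last block is only 44 MB: HDFS blocks are an upper bound, so a small tail block does not waste a full 128 MB on disk.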
There is lot more to discuss but let’s move on to M-R for now.
Map Reduce (M-R)
Google invented this. The main characteristics of M-R are:
1. Sort/merge is the primitive
2. Batch oriented
3. Ad hoc queries (no schema)
4. Distribution handled by the framework
Let's make it simple to understand: we get TBs and PBs of data to be processed and analyzed, and to handle it we use MR, which has two major phases: map and reduce.
Map: MR uses key/value pairs. Any incoming data is split by HDFS into blocks, and we then process it through M-R, assigning a value to every key.
Example: “Gurukulindia is the best site to learn big data”
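As a sketch, here is the map phase applied to that example sentence in Python, a simplified stand-in for a real Hadoop mapper, emitting one (word, 1) pair per word:

```python
# Simplified map phase: every word in the input line becomes a
# (key, value) pair, here (word, 1). A real Hadoop mapper does the
# same thing, one input split at a time, across many data nodes.

def mapper(line):
    for word in line.lower().split():
        yield (word, 1)

pairs = list(mapper("Gurukulindia is the best site to learn big data"))
print(pairs[:3])  # [('gurukulindia', 1), ('is', 1), ('the', 1)]
```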
To walk through the network view and logical view of a job:
1. Input step: Load data into HDFS by splitting it into blocks and loading them onto DNs. The blocks are replicated to tolerate failures. The NN keeps track of blocks and DNs.
2. Job step: Submit the MR job and its details to the job tracker.
3. Job init step: The job tracker interacts with the task tracker on each DN to schedule MR tasks.
4. Map step: Mappers process the data blocks and generate lists of key/value pairs.
5. Sort step: Each mapper sorts its list of key/value pairs.
6. Shuffle step: The mapped output is transferred to the reducers in sorted fashion.
7. Reduce step: Reducers merge the lists of key/value pairs to generate the final result.
The results of the reducers are stored in HDFS, replicated per the configuration, and clients can then read the results from HDFS.
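The steps above can be condensed into a single-process Python word count, a simplified simulation of steps 4 through 7; in a real cluster the mappers and reducers run on separate data nodes under the job and task trackers:

```python
from itertools import groupby
from operator import itemgetter

# Single-process walk-through of map -> sort -> shuffle -> reduce as a
# word count. The input lines stand in for HDFS blocks.

def mapper(line):
    # Map: emit a (word, 1) pair for every word in the input split.
    return [(word, 1) for word in line.lower().split()]

def reducer(word, counts):
    # Reduce: merge all values seen for one key into a final result.
    return (word, sum(counts))

lines = ["big data needs big storage", "hadoop stores big data"]

# Map phase: each "block" produces its own list of key/value pairs.
mapped = [pair for line in lines for pair in mapper(line)]

# Sort + shuffle: order pairs by key so each reducer sees all values
# for one key together (groupby requires sorted input).
mapped.sort(key=itemgetter(0))
result = dict(
    reducer(word, [v for _, v in group])
    for word, group in groupby(mapped, key=itemgetter(0))
)
print(result["big"], result["data"])  # 3 2
```

The framework's real value is that the sort/shuffle in the middle happens across machines automatically; the user only writes the mapper and reducer.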

Thursday, June 6, 2013

Latest Technologies To Learn

  • Expertise in Big Data technologies like Apache Hadoop, HDFS, MapReduce, Pig, Hive, Sqoop
  • Installing and configuring Hadoop in pseudo-distributed mode on Ubuntu
  • Administering, installing, upgrading, and managing Hadoop distributions (CDH3, CDH4)
  • Knowledge of performance troubleshooting and tuning for Hadoop clusters
  • Knowledge of MapReduce, Hive, Pig, and Sqoop

Wednesday, May 8, 2013

Latest Technologies In Demand

Cloud Computing
Data Center Hardware
Storage
Security Tech
IT
Big Data
Smart Grids
Security
Health Tech
Software, Networking
Analytics
Computer Security

Friday, April 19, 2013

PLUS D and searchable hidden histories

The WIKILEAKS Public Library of US Diplomacy

The WIKILEAKS Public Library of US Diplomacy (PlusD) holds the world's largest searchable collection of United States confidential, or formerly confidential, diplomatic communications. As of April 8, 2013 it holds 2 million records comprising approximately 1 billion words. The collection covers US involvements in, and diplomatic or intelligence reporting on, every country on earth. It is the single most significant body of geopolitical material ever published.

The PlusD collection, built and curated by WikiLeaks, is updated from a variety of sources, including leaks, documents released under the Freedom of Information Act (FOIA), and documents released under the US State Department's systematic declassification review.

We are also preparing the processed PlusD collection for standalone distribution. If you are interested in obtaining a copy, please email: plusd@wikileaks.org and put 'Request' in the subject line.

If you have unclassified or declassified US diplomatic documents to add to the PlusD collection please contact: plusd@wikileaks.org and put 'Submission' in the subject line. Please note that for inclusion in the PlusD Library we are generally unable to consider submissions of less than 1,000 documents at a time.

http://wikileaks.org/plusd/

Google Glass

http://en.wikipedia.org/wiki/Google_Glass
http://www.google.com/glass/start/how-it-feels/

It's surprisingly simple.
Say “take a picture” to take a picture.
Record what you see. Hands-free.
Even share what you see. Live.
Directions right in front of you.
Speak to send a message.
Ask whatever’s on your mind.
Translate your voice.
Answers without having to ask.
Strong and light.
Evolutionary design.
Charcoal, Tangerine, Shale, Cotton, Sky.

http://www.google.com/glass/start/how-to-get-one/

Google hopes to launch Glass by early 2014, though the company is already pushing out developer editions, priced at $1,500. A consumer version will be available by the end of 2013 for under $1,500.


 

Google Keep

Quickly capture what’s on your mind and recall it easily wherever you are. Create a checklist, enter a voice note or snap a photo and annotate it. Everything you add is instantly available on all your devices – desktop and mobile.
With Google Keep you can:
• Keep track of your thoughts via notes, lists and photos
• Have voice notes transcribed automatically
• Use homescreen widgets to capture thoughts quickly
• Color-code your notes to help find them later
• Swipe to archive things you no longer need
• Turn a note into a checklist by adding checkboxes
• Use your notes from anywhere - they are safely stored in the cloud and available on the web at http://drive.google.com/keep

http://www.youtube.com/watch?v=UbvkHEDvw-o

https://play.google.com/store/apps/details?id=com.google.android.keep