This blog is intended to collect information about my various interests and to pen my opinions on what I gather. It is not intended to educate anyone on the topics posted, but you are most welcome to share your views on them.
Monday, June 10, 2013
Big Data – What is it???
Ref:-http://gurkulindia.com/main/2013/06/bigdata/
Most technology geeks have heard the recent buzz about Big Data; lately many of my colleagues and friends have been asking questions about it, so I thought I should write a blog post to answer them.
What is Big Data?
Big Data is defined in any number of ways across the industry, so instead of hunting for the one true definition, let's try to understand the concepts and ideas behind it.
As the name "big data" suggests, you may think it is all about size, but there is much more to it than that, with an enormous number of use cases across multiple domains. One common way to characterize Big Data is "V3":
V – Volume V – Variety V – Velocity
Picture Source: Wired.com
Another approach of this is “ABC”
A – Analytics B – Bandwidth C – Capacity
Both "V3" and "ABC" are best treated as characteristics of Big Data rather than definitions.
Let’s get into more details of the V’s
‘V’ – Volume of Data:
The sheer volume of data being stored is exploding: 90% of the world's data was generated in the last four years, and the total is expected to reach 35 zettabytes (ZB) by 2020. Companies like Facebook, Twitter, and CERN generate terabytes of data every single day. Interestingly, 80% of the world's data is unstructured, and businesses today do not have the resources or technology to store it and turn it into "useful" data; in other words, it is very hard to get information out of the data that is available.
One well-observed phenomenon is that the data available to an organization keeps rising while the percentage of it the organization can process keeps declining, which is rather depressing for a technology lover. But don't feel bad: we have Hadoop to help fix this ☺
‘V’ – Variety of Data:
With the growing amount of data we now have a new challenge to deal with: its variety. A growing variety of sources means a variety of data to deal with: sensors, social networks, feeds, smart devices, location information, and many more. This leaves us in a complex situation, because only a small percentage is traditional relational data; the majority is raw, semi-structured, and unstructured data from web logs, click-stream data, search indexes, email, photos, videos, and so on.
Handling this kind of data on a traditional system is practically impossible. We need a fundamental shift in analysis requirements, from traditional structured data to include this variety of data. Traditional analytic platforms can't handle variety because they were built to support relational data that is neatly formatted and fits nicely into strict schemas.
Given the truth that about 80% of data is left unprocessed, we now need to build systems that efficiently store and process non-relational data, perform the required analytics, and generate reports via Business Intelligence (BI) tools, turning that data into real value for the business and its growth.
‘V’ – Velocity of Data:
Just as the sheer volume and variety of the data we collect and store have changed, so too has the velocity at which it is generated and needs to be handled. As we know, the growth rates of data repositories are very high as the number of sources grows.
Rather than confining the idea of velocity to the growth rates mentioned above, we can interpret it as "data in motion": the speed at which data is flowing.
The two most important challenges to deal with are:
1. Data in motion
2. Data at rest
Dealing effectively with Big Data requires us to perform analytics against the volume and variety of data while it is still in motion, not just after it is at rest.
Consider a real-time fraud-prevention use case: let's say a credit card is cloned and used at two different locations at the same time. With our existing 'traditional' systems there is a lag before this is detected, but imagine having 'real-time' data processing and analytics that could prevent it; it is just as wonderful as it sounds.
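Just to make the "data in motion" idea concrete, here is a minimal sketch of such a check in Java. Everything in it is illustrative: the Transaction record, the 60-second window, and the in-memory map are my own assumptions, and a real system would run a rule like this on a streaming platform rather than a single HashMap.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a "data in motion" rule: flag a card that is used at two
// different locations within a short time window. The Transaction record, the
// 60-second window and the in-memory map are illustrative choices only.
public class CloneCardDetector {

    record Transaction(String cardId, String location, long epochSeconds) {}

    private static final long WINDOW_SECONDS = 60;

    // Last transaction seen for each card.
    private final Map<String, Transaction> lastSeen = new HashMap<>();

    /** Returns true if this transaction looks like a cloned-card use. */
    public boolean isSuspicious(Transaction tx) {
        Transaction previous = lastSeen.put(tx.cardId(), tx);
        if (previous == null) {
            return false;                                   // first time we see this card
        }
        boolean sameWindow = Math.abs(tx.epochSeconds() - previous.epochSeconds()) <= WINDOW_SECONDS;
        boolean differentPlace = !tx.location().equals(previous.location());
        return sameWindow && differentPlace;                // same card, two places, same moment
    }

    public static void main(String[] args) {
        CloneCardDetector detector = new CloneCardDetector();
        detector.isSuspicious(new Transaction("4111-xxxx", "Hyderabad", 1_000));
        // Same card swiped in another city 10 seconds later -> prints true.
        System.out.println(detector.isSuspicious(new Transaction("4111-xxxx", "Mumbai", 1_010)));
    }
}
```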
Why Big Data?
• To analyze not only raw structured data, but also semi-structured and unstructured data from a variety of sources.
• To effectively process and analyze larger sets of data instead of analyzing only a sample of the data.
• To solve information challenges that don't fit naturally within a traditional relational database approach to handling the three Vs.
• To improve "intelligence in business" and take quicker actions by developing real BI tools and meeting customer needs like never before.
• To discover business patterns and trends in real time.
• To improve the quality of business in various sectors such as e-health, retail, IT, call centers, agriculture, and so on.
“To handle, process the data and do magical things that were never imagined by anyone”
Working with Big Data:
In its early days, when the web was still small, Google was able to crawl the Internet and index the available data successfully. But as the data grew and new sources appeared every day, things became too complex to handle, so Google came up with an internal solution to process this growing volume in a completely different way.
In the process they developed GFS (the Google File System) and MapReduce (MR) to manage this growing data efficiently. Google kept these for internal use and did not open-source them, but in 2004 it published a paper on MapReduce explaining what the system does and how data is processed to make Internet search possible.
That paper got people in the industry thinking in a different way. Doug Cutting started developing an open-source framework to handle this growing, unstructured data, which he named Hadoop; the project is actively developed and has been heavily contributed to by Yahoo.
Introduction to Hadoop
Ref:- http://gurkulindia.com/main/2013/06/hadoop-intro/
Hadoop is a platform that is well suited to dealing with semi-structured and unstructured data, as well as situations where a data-discovery process is needed. That isn't to say Hadoop can't be used for structured data that is readily available in raw format; it can.
Traditionally, data goes through a lot of rigor before it makes it into the warehouse. It is cleaned up via various cleansing, enrichment, modeling, master data management, and other services before it is ready for analysis, which is an expensive process. Because of that expense, it's clear that the data that lands in the warehouse is not just high value but broad purpose: it is used to generate reports and dashboards where accuracy is key.
In contrast, Big Data repositories very rarely undergo the full quality control applied to data injected into a warehouse. Hadoop is built to handle very large volumes of data, so applying warehouse-grade preparation and processing to all of it would be cost prohibitive.
I think of Hadoop as a system designed for processing mind-boggling amounts of data.
Two main components of Hadoop:
1. Map – Reduce = Computation
2. HDFS = Storage
Hadoop Distributed File System (HDFS):
Let's discuss the Hadoop cluster components before getting into the details of HDFS.
A typical Hadoop environment consists of a master node and worker nodes with specialized software components.
Master node: There can be multiple master nodes to avoid a single point of failure in an environment. The elements of the master node are:
1. Job Tracker
2. Task Tracker
3. Name Node
Job Tracker: The Job Tracker interacts with client applications. It is mainly responsible for distributing MapReduce tasks to particular nodes within the cluster.
Task Tracker: This process receives tasks (map, reduce, and shuffle) from a Job Tracker.
Name Node: This process stores a directory tree of all files in HDFS and keeps track of where the file data is kept within the cluster. Client applications contact the Name Node when they need to locate a file or add, copy, or delete a file.
Data Node: Data Nodes store data in HDFS and are responsible for replicating data across the cluster. They interact with client applications once the Name Node has supplied the Data Node's address.
Worker nodes: These are the commodity servers that process the incoming data. Each worker node runs a Data Node and a Task Tracker.
Scenario to better understand how "stuff" works (a short sketch after this list makes the arithmetic concrete):
1. Let's say we have a 300 MB file.
2. By default we split it into 128 MB blocks:
300 MB = 128 MB + 128 MB + 44 MB
3. So HDFS splits the 300 MB file into blocks as above.
4. HDFS keeps 3 copies of each block.
5. All these blocks are stored on Data Nodes.
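To make the arithmetic above concrete, here is a tiny Java sketch. The 128 MB block size and replication factor of 3 simply follow the numbers used in the scenario; they are assumptions here, not the definitive defaults for every HDFS version.

```java
// Block arithmetic for the scenario above: a 300 MB file with a 128 MB block
// size splits into 128 MB + 128 MB + 44 MB, and with a replication factor of 3
// HDFS keeps three copies of every block spread across Data Nodes.
public class BlockSplit {
    public static void main(String[] args) {
        long fileMb = 300;      // example file size from the scenario
        long blockMb = 128;     // block size assumed in the scenario
        int replication = 3;    // copies of each block

        long fullBlocks = fileMb / blockMb;      // 2 full 128 MB blocks
        long lastBlockMb = fileMb % blockMb;     // 44 MB left over
        long totalBlocks = fullBlocks + (lastBlockMb > 0 ? 1 : 0);

        System.out.printf("%d full %d MB blocks + one %d MB block%n",
                fullBlocks, blockMb, lastBlockMb);
        System.out.printf("%d blocks x %d replicas = %d block copies on Data Nodes%n",
                totalBlocks, replication, totalBlocks * replication);
    }
}
```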
The bottom line: the Name Node tracks blocks and Data Nodes and pays attention to every node in the cluster. It does not store any file data itself, and no data flows through it.
• When a Data Node (DN) fails, the Name Node makes sure its blocks are re-replicated to other nodes; with three copies of each block, it can tolerate the failure of up to two DNs holding a given block.
• The Name Node (NN) is a single point of failure.
• DNs continuously run checksums; if any block is corrupted, it is restored from another DN's replica.
There is a lot more to discuss, but let's move on to M-R for now.
Map Reduce (M-R)
Google invented this. The main characteristics of M-R are:
1. Sort/merge is the primitive
2. Batch oriented
3. Ad hoc queries (no schema)
4. Distribution handled by the framework
Let's keep it simple: we get terabytes and petabytes of data to process and analyze. To handle this we use MR, which basically has two major phases, map and reduce.
Map: MR works with key/value pairs. Any data that comes in is split by HDFS into blocks, and we then process it through M-R, where the mapper assigns a value to every key.
Example: “Gurukulindia is the best site to learn big data”
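To make the key/value idea concrete, here is a minimal word-count sketch written against the Hadoop MapReduce API, in the style of the standard WordCount example. For the sentence above, the map phase would emit pairs like (gurukulindia, 1), (is, 1), (the, 1) and so on, and the reduce phase would add up the counts for each word. The class names and the lower-casing of words are my own illustrative choices.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Classic word-count sketch: the mapper turns each input line into (word, 1)
// pairs, and the reducer adds up the 1s for every word.
public class WordCount {

    public static class TokenizerMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {

        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable offset, Text line, Context context)
                throws IOException, InterruptedException {
            StringTokenizer tokens = new StringTokenizer(line.toString().toLowerCase());
            while (tokens.hasMoreTokens()) {
                word.set(tokens.nextToken());
                context.write(word, ONE);                    // e.g. ("gurukulindia", 1)
            }
        }
    }

    public static class SumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {

        @Override
        protected void reduce(Text word, Iterable<IntWritable> counts, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable count : counts) {
                sum += count.get();                          // add up the 1s
            }
            context.write(word, new IntWritable(sum));       // e.g. ("big", 1)
        }
    }
}
```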
To outline the flow, from both the network and the logical point of view:
1. Input step: Load data into HDFS by splitting it into blocks and distributing them to the DNs. The blocks are replicated to cope with failures, and the NN keeps track of blocks and DNs.
2. Job step: Submit the MR job and its details to the Job Tracker.
3. Job init step: The Job Tracker interacts with the Task Tracker on each DN to schedule MR tasks.
4. Map step: The mappers process the data blocks and generate lists of key/value pairs.
5. Sort step: Each mapper sorts its list of key/value pairs.
6. Shuffle step: The mapped output is transferred to the reducers in sorted order.
7. Reduce step: The reducers merge the lists of key/value pairs to generate the final result.
The results of the reduce phase are finally stored in HDFS, replicated as per the configuration, and clients can then read them from HDFS.
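As a rough illustration of the "job step" above, a driver along the following lines packages the word-count mapper and reducer from the earlier sketch into a job, points it at input and output paths in HDFS, and submits it to the cluster. The paths and the use of the reducer as a combiner are illustrative choices of mine, not something prescribed by the post.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Illustrative driver for the word-count sketch above: builds the job,
// wires in the mapper and reducer, sets the HDFS paths and waits for it to finish.
public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCountDriver.class);

        job.setMapperClass(WordCount.TokenizerMapper.class);
        job.setCombinerClass(WordCount.SumReducer.class);    // optional local pre-aggregation
        job.setReducerClass(WordCount.SumReducer.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));    // e.g. /user/demo/input
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // e.g. /user/demo/output

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```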
Thursday, June 6, 2013
Latest Technologies To Learn
- Expertise in Big Data technologies like Apache Hadoop, HDFS, MapReduce, Pig, Hive, Sqoop
- Installing and configuring Hadoop in pseudo-distributed mode on Ubuntu
- Administering, installing, upgrading and managing Hadoop distributions (CDH3, CDH4)
- Knowledge of performance troubleshooting and tuning of Hadoop clusters
- Knowledge of MapReduce, Hive, Pig, Sqoop
Wednesday, May 8, 2013
Latest Technologies In Demand
Cloud computing
Data Center Hardware
Storage
Security Tech
IT
BigData
SmartGrids
Security
Health Tech
Software, Networking
Analytics
ComputerSecurity
Friday, April 19, 2013
PLUS D and searchable hidden histories
The WIKILEAKS Public Library of US Diplomacy
The WIKILEAKS Public Library of US Diplomacy (PlusD) holds the world's largest searchable collection of United States confidential, or formerly confidential, diplomatic communications. As of April 8, 2013 it holds 2 million records comprising approximately 1 billion words. The collection covers US involvements in, and diplomatic or intelligence reporting on, every country on earth. It is the single most significant body of geopolitical material ever published.
The PlusD collection, built and curated by WikiLeaks, is updated from a variety of sources, including leaks, documents released under the Freedom of Information Act (FOIA) and documents released by the US State Department systematic declassification review.
We are also preparing the processed PlusD collection for standalone distribution. If you are interested in obtaining a copy, please email: plusd@wikileaks.org and put 'Request' in the subject line.
If you have unclassified or declassified US diplomatic documents to add to the PlusD collection please contact: plusd@wikileaks.org and put 'Submission' in the subject line. Please note that for inclusion in the PlusD Library we are generally unable to consider submissions of less than 1,000 documents at a time.
http://wikileaks.org/plusd/
Google Glass
http://en.wikipedia.org/wiki/Google_Glass
http://www.google.com/glass/start/how-it-feels/
It’s surprisingly simple.
Say “take a picture” to take a picture.
Record what you see. Hands-free.
Even share what you see. Live.
Directions right in front of you.
Speak to send a message.
Ask whatever’s on your mind.
Translate your voice.
Answers without having to ask.
Strong and light.
Evolutionary design.
Charcoal, Tangerine, Shale, Cotton, Sky.
http://www.google.com/glass/start/how-to-get-one/
Google hopes to launch Glass by early 2014, though the company is already pushing out developer editions, priced at $1,500. A consumer version will be available by the end of 2013 for under $1,500.
Google Keep
Quickly capture what’s on your mind and recall it easily wherever you are. Create a checklist, enter a voice note or snap a photo and annotate it. Everything you add is instantly available on all your devices – desktop and mobile.
With Google Keep you can:
• Keep track of your thoughts via notes, lists and photos
• Have voice notes transcribed automatically
• Use homescreen widgets to capture thoughts quickly
• Color-code your notes to help find them later
• Swipe to archive things you no longer need
• Turn a note into a checklist by adding checkboxes
• Use your notes from anywhere - they are safely stored in the cloud and available on the web at http://drive.google.com/keep
http://www.youtube.com/watch?v=UbvkHEDvw-o
https://play.google.com/store/apps/details?id=com.google.android.keep
Google Account Activity
https://www.google.com/settings/activity/signup?hl=en
Google Account Activity Reports give you a monthly summary of your account activity across many Google products.
With Account Activity Reports you can learn what's going on in your account, e.g. how many emails you have sent and received, how often you have searched on Google, from which countries you have logged in and how often your YouTube videos have been viewed.
Every month the Account Activity Report will collect and summarize data across your Google account – e.g. sent emails or top searches. Data deletion at the data source, e.g. in your Web History will have no impact on issued reports, however reports can be deleted at any time.
Security Note: To maintain your safety and privacy we may sometimes ask you to verify your password even if you are already signed in. This may happen more frequently for services involving your personal information.
Inactive Account Manager
Plan your digital afterlife with Inactive Account Manager
https://www.google.com/settings/u/0/account/inactive
What should happen to your photos, emails and documents when you stop using your account? Google puts you in control.
You might want your data to be shared with a trusted friend or family member, or you might want your account to be deleted entirely. There are many situations that might prevent you from accessing or using your Google account. Whatever the reason, we give you the option of deciding what happens to your data.
Using Inactive Account Manager, you can decide if and when your account is treated as inactive, what happens with your data and who is notified.
Google Fiber
A different kind of Internet.
Google Fiber starts with a connection speed 100 times faster than today's broadband.
Instant downloads. Crystal clear high definition TV. And endless possibilities.
It's not cable. And it's not just Internet. It's Google Fiber.
http://fiber.google.com/about/
Wednesday, January 9, 2013
Red Hat ships Enterprise Linux 5.9
Summary: Linux leader Red Hat announced an upgrade of its enterprise Linux that offers better support for the latest hardware, OpenJDK, security, Samba 3.6 and Microsoft Hyper-V
Red Hat announced today an update of its leading enterprise Linux distribution that keeps pace with hardware, security, developer, interoperability and virtualization improvements.
The last upgrade in the 5 family, version 5.8, shipped last February. Version 5.9 demonstrates the company's commitment to a 10-year support lifespan for each platform, the company notes.
The company will continue to add value to older Linux platforms to give customers maximum flexibility in how and when they upgrade. Red Hat Enterprise Linux 6.4, for instance, moved into beta testing last month.
Red Hat's formal statement about Enterprise Linux 5.9, which was released today, cites the platform's:
- Support for Industry-Leading Hardware Vendors Through Enhanced Hardware Enablement. Red Hat Enterprise Linux 5.9 showcases the strong relationships Red Hat has with leading hardware vendors by including support for some of the latest CPU, chipset and device driver enhancements.
- Continued Commitment to Security, Standards and Certifications. Red Hat Enterprise Linux has always been built with security in mind – a commitment that Red Hat Enterprise Linux 5.9 helps solidify. This update features tighter security controls, the ability to verify and check the robustness of new passwords and support for the latest government password policy requirements. It also adds support for using Federal Information Processing Standard (FIPS) mode with dmraid root devices. FIPS mode now supports RAID device discovery, RAID set activation, and the creation, removal, rebuilding and displaying of properties.
- New Developer Tools. Red Hat Enterprise Linux 5.9 includes several new developer-friendly features and tools, including the ability to develop and test with the latest version of open source Java available through OpenJDK 7. Many new SystemTap improvements have been added to Red Hat Enterprise Linux 5.9, including compile-server and client support for IPv6 networks, smaller SystemTap files, faster compiles, and compile server support for multiple concurrent connections.
- Enhanced Application Support. Red Hat Enterprise Linux 5.9 includes a new rsyslog5 package, which upgrades rsyslog to major version 5 and is faster and more reliable than existing rsyslog packages available in previous RHEL releases. Samba has also been updated to version 3.6 with several new features, including SMB2 support, a reworked print server and security default improvements for all versions.
- New Virtualization Capabilities and Flexibility in Multi-vendor Environments. Red Hat Enterprise Linux 5.9 enhances the operating system's usability in multi-vendor environments by introducing Microsoft Hyper-V drivers for improved performance. This enhances the usability of Red Hat Enterprise Linux 5 for guests in heterogenous, multi-vendor virtualized environments and provides improved flexibility and interoperability for enterprises.
- Better Subscription Management. Red Hat Enterprise Linux 5.9 uses Red Hat Subscription Management as the default, allowing customers to manage their Red Hat Enterprise Linux subscriptions more easily and effectively, either locally or with tools such as Subscription Asset Manager, which has been enhanced with easier reporting on subscription distribution and utilization and an improved user interface. For more detail on how Red Hat customers can manage their subscriptions, please visit the Subscription Management section of the Red Hat Customer Portal.
Monday, January 7, 2013
Discover Best smartphone and Best Camera
*Discover a smartphone*
http://geekaphone.com/
All Smartphones all the time. Android, iPhone, BlackBerry, Windows Phone 7, Nokia Symbian and more!
*Discover a Best Camera*
http://www.dpreview.com/
http://www.snapsort.com/
Saturday, December 8, 2012
Creating bootable USB drives the easy way
http://rufus.akeo.ie/
• Windows 7 x64: en_windows_7_ultimate_with_sp1_x64_dvd_618240.iso
Windows 7 USB/DVD Download Tool v1.0.30 | 8 mins 10s |
Universal USB Installer v1.8.7.5 | 7 mins 10s |
UNetbootin v1.1.1.1 | 6 mins 20s |
RMPrepUSB v2.1.638 | 4 mins 10s |
WiNToBootic v1.2 | 3 mins 35s |
Rufus v1.1.1 | 3 mins 25s |
• Ubuntu 11.10 x86: ubuntu-11.10-desktop-i386.iso
UNetbootin v1.1.1.1 | 1 min 45s |
RMPrepUSB v2.1.638 | 1 min 35s |
Universal USB Installer v1.8.7.5 | 1 min 20s |
Rufus v1.1.1 | 1 min 15s |
• Slackware 13.37 x86: slackware-13.37-install-dvd.iso
UNetbootin v1.1.1.1 | >60 mins |
Universal USB Installer v1.8.7.5 | 24 mins 35s |
RMPrepUSB v2.1.638 | 22 mins 45s |
Rufus v1.1.1 | 20 mins 15s |
Thursday, December 6, 2012
A free and open world depends on a free and open web.
“A free and open world depends on a free and open Internet. Governments alone, working behind closed doors, should not direct its future. The billions of people around the globe who use the Internet should have a voice.”
Find what you're looking for faster in Gmail and Search
https://www.google.com/experimental/gmailfieldtrial
Join this field trial to preview upcoming features we've been working on, such as:
- Improvements to search in Gmail
- Results from Gmail and Google Drive when you search on Google.com
- Additional related features and improvements
Vincent Bible Search
This is an instant Bible search application; you can search for any verse of the Holy Bible.
Send a chat message such as genesis 1:1 to vincent.bible
and it will automatically return the verse for that Bible reference.
You can Search like below:
Ex1: genesis 1:1
Ex2: gen 1:1
Ex3: gen 1:1-4
Ex4: gen 1:1 tel
Ex5: gen 1:1-3 tel
Ex6: salvation
@Vincent Bible Search@
Facebook Integration
http://vincentbiblesearch.com/
Gtalk Integration
http://vincentbiblesearch.com/
The latest update to Red Hat Enterprise Linux 6 – Red Hat Enterprise Linux 6.4 – is now available.
Key new features and enhancement details include:
Identity Management
- System Security Services Daemon (SSSD) enhancements improve the interoperability experience with [Microsoft Active Directory] by providing centralized identity access control for Linux/Unix clients in a heterogeneous environment.
File system
- pNFS (Parallel NFS) client (file layout only) remains in technology preview, however now delivers performance improvements with the addition of Direct I/O for faster data access. This drives particular performance benefits for I/O intensive use cases including database workloads.
Virtualization
- Red Hat Enterprise Linux 6 now includes the Microsoft Hyper-V Linux drivers, which were recently accepted by the upstream Linux community, improving the overall performance of Red Hat Enterprise Linux 6 as a guest on Microsoft Hyper-V.
- Installation support for VMware and Microsoft Hyper-V para-virtualization drivers. This new feature enhances the user deployment experience of Red Hat Enterprise Linux as a guest in either of these virtualization environments.
- In this release, KVM virtualization gains virtio-scsi support, a new storage architecture that provides industry-leading storage stack scalability.
Management
- The use of swap functionality over NFS enables more efficient read/write tradeoffs between local system memory and remote disks. This capability increases performance in very large, disk-less server farms seen in ISP and Web hosting environments.
- Enhancements in cgroups deliver the ability to migrate multi-threaded applications without errors.
- Optimized perf tool for the latest Intel processors
Storage
- New system log features identify mapping from block device name to physical device identifier – allowing an administrator to easily locate specific devices as needed.
Productivity Tools
- Microsoft interoperability improvements with Microsoft Exchange and calendar support in Evolution. Productivity functions, such as calendar support with alarm notifications and meeting scheduling, are improved.
- Customers such as animation studios and graphic design houses now have support for the newer Wacom tablets.
Tuesday, November 13, 2012
How to Turn Your Ubuntu Laptop into a Wireless Access Point
If you have a single wired Internet connection – say, in a hotel room – you can create an ad-hoc wireless network with Ubuntu and share the Internet connection among multiple devices. Ubuntu includes an easy, graphical setup tool.
Unfortunately, there are some limitations. Some devices may not support ad-hoc wireless networks and Ubuntu can only create wireless hotspots with weak WEP encryption, not strong WPA encryption.
Setup
To get started, click the gear icon on the panel and select System Settings.
Select the Network control panel in Ubuntu’s System Settings window. You can also set up a wireless hotspot by clicking the network menu and selecting Edit Network Connections, but that setup process is more complicated.
If you want to share an Internet connection wirelessly, you’ll have to connect to it with a wired connection. You can’t share a Wi-Fi network – when you create a Wi-Fi hotspot, you’ll be disconnected from your current wireless network.
To create a hotspot, select the Wireless network option and click the Use as Hotspot button at the bottom of the window.
You’ll be disconnected from your existing network. You can disable the hotspot later by clicking the Stop Hotspot button in this window or by selecting another wireless network from the network menu on Ubuntu’s panel.
After you click Create Hotspot, you’ll see a notification pop up indicating that your laptop’s wireless radio is now being used as an ad-hoc access point. You should be able to connect from other devices using the default network name – “ubuntu” – and the security key displayed in the Network window. However, you can also click the Options button to customize your wireless hotspot.
From the wireless tab, you can set a custom name for your wireless network using the SSID field. You can also modify other wireless settings from here. The Connect Automatically check box should allow you to use the hotspot as your default wireless network – when you start your computer, Ubuntu will create the hotspot instead of connecting to an existing wireless network.
From the Wireless Security tab, you can change your security key and method. Unfortunately, WPA encryption does not appear to be an option here, so you’ll have to stick with the weaker WEP encryption.
The “Shared to other computers” option on the IPv4 Settings tab tells Ubuntu to share your Internet connection with other computers connected to the hotspot.
Even if you don’t have a wireless Internet connection available to share, you can network computers together and communicate between them – for example, to share files.