Thursday, August 19, 2010

OpenNebula

What is OpenNebula?
• It is an open-source toolkit for building clouds.
• The latest release has built-in support for the KVM, Xen and VMware hypervisors.

How does OpenNebula work?
OpenNebula is a distributed application consisting of two components.

The first component, referred to as the OpenNebula front-end, runs on the VX64 server and is installed with:

sudo apt-get install opennebula 

The second component, referred to as the OpenNebula node, should be installed on every host that is part of the compute cluster. This package prepares the machine to act as a node in an OpenNebula cloud and in turn configures the following dependencies:

1) KVM 
2) libvirt 
3) oneadmin user creation 
4) ruby 

sudo apt-get install opennebula-node
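
A quick way to check that a node is ready ( a minimal sanity check of my own, not an official procedure ) is to confirm that the oneadmin user was created and that libvirt responds:

# on the node
id oneadmin
sudo virsh -c qemu:///system list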


What are the components of the OpenNebula front-end?


The OpenNebula front-end spawns the following processes when it starts. ( Note that the front-end should always be started as the oneadmin user, using the command sudo -u oneadmin one start. A quick check of these processes is sketched after the list below. )

1) OpenNebula Daemon ( oned ) - Responsible for handling all incoming requests ( from either the CLI or the API ). Talks to the other processes whenever required.

2) Scheduler ( mm_sched ) - Performs matchmaking to find a suitable host ( among the hosts in the compute cluster ) for bringing up virtual machines.

3) Information Manager ( one_im_ssh.rb ) - Collects resource availability/utilization information for hosts/VMs respectively. ( Resources include CPU and memory. )

4) Transfer Manager ( one_tm.rb ) - Responsible for image management ( clone image, delete image, etc. ).

5) Virtual Machine Manager ( one_vmm_kvm.rb ) - Acts as the interface to the underlying hypervisor. ( All operations to be performed on Virtual Machines go through this interface. )

6) Hook Manager ( one_hm.rb ) - Responsible for executing Virtual Machine hooks. ( Hooks are programs that are automatically triggered on VM state changes. They must be configured prior to starting the front-end. )
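
Once the front-end has been started with sudo -u oneadmin one start, a rough way to confirm that these processes came up ( process names may differ slightly between releases ) is:

ps aux | grep -E 'oned|mm_sched|one_im_ssh|one_vmm_kvm|one_tm|one_hm' | grep -v grep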

The OpenNebula front-end can be configured extensively by modifying /etc/one/oned.conf. Consider the sample configuration below:

################################################################################
HOST_MONITORING_INTERVAL=10 # Used by Information manager to decide the frequency at which resource availability details have to be collected for hosts 

VM_POLLING_INTERVAL=10 # Used by Information manager to decide the frequency at which resource utilization details have to be collected for VMs 

VM_DIR=/mnt/onenfs/ # Should be shared across all hosts in compute cluster. Contains Disk Images required for booting VMs 

PORT=2633 # All supported API calls are converted to XML-RPC calls. front-end runs an XML-RPC Server on this port to handle these calls 

DEBUG_LEVEL=3 # DEBUG_LEVEL: 0 = ERROR, 1 = WARNING, 2 = INFO, 3 = DEBUG 

NETWORK_SIZE = 254 # Default size for Virtual Networks ( applicable while using onevnet ) 

MAC_PREFIX = "00:03" # Default MAC prefix to use while generating MAC Address from IP Address ( applicable while using onevnet ) 

# The following configuration supports the KVM hypervisor. Note that the executables one_im_ssh, one_vmm_kvm, one_tm and one_hm can be found in /usr/lib/one/mads/

IM_MAD = [ 
name = "im_kvm", 
executable = "one_im_ssh", 
arguments = "im_kvm/im_kvm.conf" ] 

VM_MAD = [ 
name = "vmm_kvm", 
executable = "one_vmm_kvm", 
default = "vmm_kvm/vmm_kvm.conf", 
type = "kvm" ] 

TM_MAD = [ 
name = "tm_nfs", 
executable = "one_tm", 
arguments = "tm_nfs/tm_nfs.conf" ] 

HM_MAD = [ 
executable = "one_hm" ]
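
Since every CLI and API call is ultimately an XML-RPC call handled on the PORT configured above, a quick check that the front-end is up and listening ( assuming the default port 2633 ) is:

netstat -lnt | grep 2633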



Working with the OpenNebula CLI ( the CLI is available only on the front-end )

1) Adding a new Host to compute cluster 

onehost create <hostname> <im_mad> <vmm_mad> <tm_mad>

Note that im_mad, vmm_mad and tm_mad in our case should be im_kvm, vmm_kvm and tm_nfs respectively, as we have configured them in /etc/one/oned.conf.
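
For example, adding the host used later in this post ( the hostname/IP is specific to my setup ):

onehost create 192.168.155.127 im_kvm vmm_kvm tm_nfs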

Also note that Information manager needs to collect resource availability ( CPU and Memory ) information for the host we have added. This requires:

• The oneadmin user on the front-end should be able to ssh to the host without entering a password ( test this using sudo -u oneadmin ssh oneadmin@<host> on the front-end )
• In order for this to work, copy the contents of /var/lib/one/.ssh/id_rsa.pub on the front-end to /var/lib/one/.ssh/authorized_keys on the host, as sketched below
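
A minimal sketch of the key copy ( the /tmp path and the <host> placeholder are mine; any equivalent method works ):

# on the front-end ( this step will still ask for the oneadmin password on the host )
sudo -u oneadmin scp /var/lib/one/.ssh/id_rsa.pub oneadmin@<host>:/tmp/front-end.pub

# on the host
sudo -u oneadmin sh -c 'cat /tmp/front-end.pub >> /var/lib/one/.ssh/authorized_keys'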

Type onehost list to check the status:

Notice the value of the STAT attribute. If it has a value of 'on', the host has been successfully added to the compute cluster:

 ID NAME            RVM TCPU FCPU ACPU    TMEM    FMEM STAT
  0 192.168.155.127   0  200  198  198 1800340 1341620   on


Look into /var/log/one/oned.log on the front-end for debugging.
Once a host has been successfully added, use onehost disable/enable to toggle its status, for example:
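
For example, using the host ID 0 shown in the listing above:

onehost disable 0
onehost enable 0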


2) Submitting a new Virtual Machine job 

In order to provision a Virtual Machine in the compute cluster, we need to construct a template and submit it using:

onevm create <template_file>

The following is a sample template:
################################################################################
NAME = Ubuntu
CPU = 0.5
MEMORY = 150
DISK = [ clone=no, type="disk", source="/mnt/onenfs/ubuntu.img", target="hda" ]
GRAPHICS = [
type = "vnc",
listen = "0.0.0.0",
port = "10"]

################################################################################
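
Assuming the template above is saved in a file named ubuntu.one ( the file name is arbitrary ), it can be submitted and then monitored with:

onevm create ubuntu.one
onevm list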

Note that all Virtual Machines will use the default properties present in /etc/one/vmm_kvm/vmm_kvm.conf unless they are overridden in the template.

The following is a sample /etc/one/vmm_kvm/vmm_kvm.conf, which enables ACPI for all VMs and configures them to boot from the hard disk.

###############################################################################
OS = [ boot = "hd" ]

FEATURES = [
PAE=no,
ACPI=yes
]
################################################################################

Issues with a VM coming up will be captured either in /var/log/one/oned.log on the front-end or in /var/log/libvirt/qemu/one-<vmid>.log on the host where the VM has been scheduled.
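
For example, to watch both logs while a VM is being deployed ( the VM ID here is assumed to be 0 ):

# on the front-end
tail -f /var/log/one/oned.log

# on the host the VM was scheduled on
tail -f /var/log/libvirt/qemu/one-0.log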


3) Creating Virtual Networks

( Note that onevnet does not create the bridge or the VLAN; we must create them ourselves using brctl and vconfig respectively. )

Create a VLAN ( creating VLAN 12 here )

sudo vconfig add eth0 12

Create a bridge and encapsulate the above VLAN

sudo brctl addbr br12

sudo brctl addif br12 eth0.12

Bring up the VLAN

sudo ip link set eth0.12 up
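
To verify that the VLAN and the bridge were set up as expected ( output formats vary by distribution ):

cat /proc/net/vlan/config   # should list eth0.12 on VLAN 12
brctl show                  # should list br12 with eth0.12 attached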

Create a file named "vlan12.one" containing the following content:

BRIDGE=br12
NAME=VLAN12
NETWORK_ADDRESS=192.168.12.0
NETWORK_SIZE=B
TYPE=RANGED

Create a Virtual Network using the following command:
onevnet create vlan12.one

View the list of created Virtual Networks

# COMMAND #
onevnet list

# OUTPUT #
 ID USER     NAME   TYPE   BRIDGE #LEASES
  0 shashank VLAN12 Ranged   br12       0

View the configuration of a specific Virtual Network

# COMMAND #
onevnet show 0

# OUTPUT #
VIRTUAL NETWORK 0 INFORMATION
ID: : 0
UID: : 0

VIRTUAL NETWORK TEMPLATE
BRIDGE=br12
NAME=VLAN12
NETWORK_ADDRESS=192.168.12.0
NETWORK_SIZE=B
TYPE=RANGED

To bring up a Virtual Machine in the above Virtual Network, add the following to the VM template:

NIC=[NETWORK="VLAN12",IP=192.168.12.4]

( Note that the MAC address for this VM is dynamically generated from the IP specified. )
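
The generated MAC is simply the MAC_PREFIX from oned.conf followed by the four octets of the IP in hex. A small bash sketch of the conversion ( assuming the "00:03" prefix configured earlier ):

IP=192.168.12.4
printf "00:03"; printf ':%02x' ${IP//./ }; echo   # prints 00:03:c0:a8:0c:04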

To view the list of IP addresses currently allocated in the above Virtual Network, issue the following command:

# COMMAND #
onevnet show 0

VIRTUAL NETWORK 0 INFORMATION
ID: : 0
UID: : 0

VIRTUAL NETWORK TEMPLATE
BRIDGE=br12
NAME=VLAN12
NETWORK_ADDRESS=192.168.12.0
NETWORK_SIZE=B
TYPE=RANGED

LEASES INFORMATION
LEASE=[ IP=192.168.12.4, MAC=00:03:c0:a8:0c:04, USED=1, VID=12 ]


4) Disadvantages of using onevnet

1) An additional script is required inside the VM which does MAC -> IP conversion and configures the network device on boot ( a sketch of such a script follows this list ).

2) onevnet does not create the bridge or the VLAN.

3) A Virtual Machine job will not be submitted if the specified IP is not available.

4) There seems to be no way to specify an IP lease time, whereas a DHCP server is highly customizable in this respect.

5) If we plan to give hostnames to Virtual Machines in the future, we might prefer to retain a DHCP server and configure DDNS updates, which may not be possible while using onevnet leases.
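
A minimal sketch of the guest-side boot script mentioned in point 1 ( hypothetical; it assumes the interface is eth0 and a /24 network such as 192.168.12.0, and simply reverses the IP -> MAC encoding used by onevnet leases ):

#!/bin/bash
# read the MAC assigned by OpenNebula, e.g. 00:03:c0:a8:0c:04
MAC=$(cat /sys/class/net/eth0/address)
# drop the two prefix octets and convert the remaining four from hex to decimal
IFS=: read -r _ _ o1 o2 o3 o4 <<< "$MAC"
IP="$((16#$o1)).$((16#$o2)).$((16#$o3)).$((16#$o4))"
# configure the interface ( /24 assumed )
ip addr add "$IP/24" dev eth0
ip link set eth0 up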


5) Enabling additional debugging

Edit the /etc/one/defaultrc file to add the following:

ONE_MAD_DEBUG=1

The above setting will create the following log files in /var/log/one/

im_kvm.log
one_vmm_kvm.log
tm_nfs.log
one_hm.log


6) Viewing and modifying the OpenNebula backend database

If required, we can use sqlitebrowser to open and edit /var/lib/one/one.db ( sudo apt-get install sqlitebrowser ).
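
For quick inspection from the shell, the sqlite3 command-line client also works ( sudo apt-get install sqlite3; table names depend on the OpenNebula version ):

sqlite3 /var/lib/one/one.db ".tables"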