
Monday, November 19, 2012

OwnCloud on Ubuntu. Media Server (miniDLNA). Remote Access


Incentive
Recently I took a half-hour video using my Samsung S3. The video was approximately 3.6GB, which is close to the upper limit offered by several publicly available cloud/online storage services such as Dropbox, Google Drive, or Ubuntu One.

That was enough incentive for me to build my own private cloud: the only storage limit is my 1.5TB hard drive, and I retain full ownership over its security.

OwnCloud Server

I came across OwnCloud, an open-source, PHP-based web application.

OwnCloud is a centralized storage solution: the software stores your files on its own server.

It also has sync clients for Linux, Windows, Android, etc., so it's quite flexible in my opinion.

Installation
On my Ubuntu server running 12.04.1 LTS (with a LAMP stack), installation is simply:

root@ubuntu:/opt# apt-get install owncloud


At the time of writing, this gives you ownCloud version 3 (the latest version is ownCloud 4), so keep in mind the compatible sync clients' version is 1.05.

Upon successful installation, OwnCloud can be accessed via the web interface at http://server-ip-address/owncloud. All file uploads, downloads and folder creation can be done here: 


OwnCloud Sync Clients
OwnCloud's web interface allows a maximum upload of 2MB; I understand this to be a PHP limitation. 
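For reference, that ceiling is typically governed by PHP's upload settings. A minimal sketch, assuming Ubuntu's default PHP config location (the 512M values are illustrative, not a recommendation):

```ini
; /etc/php5/apache2/php.ini - raise PHP's upload ceilings
upload_max_filesize = 512M
post_max_size = 512M
```

Restart Apache afterwards for the change to take effect; note that ownCloud itself may impose a further limit on top of PHP's.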

OwnCloud has several sync clients available for different OSes, which don't have this limitation. So far I have tried two clients (both version 1.05, for Windows and Linux respectively), with the following findings:

1. Windows 8

Upon first setting up the ownCloud clientsync account, I encountered a "client and server time out of sync" error, which prevented the server and client from synchronizing their files.

As a workaround, I set up the Ubuntu server as an NTP server, then pointed my Windows machine at it. 

For reference:
Install
# apt-get install ntp

Start/ stop NTP
# /etc/init.d/ntp --help
Usage: /etc/init.d/ntp {start|stop|restart|try-restart|force-reload|status}

As client
# ntpdate au.pool.ntp.org

(In all fairness, it turned out the Windows machine was configured with the incorrect timezone, which had prevented it from synchronizing with the time.windows.com server.)
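For completeness, on the server side a small /etc/ntp.conf tweak lets LAN clients query it. The 192.168.1.0/24 subnet below is an assumption for illustration; substitute your own:

```
# /etc/ntp.conf - permit clients on the home LAN to query this server
restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
```

Restart the ntp daemon afterwards (/etc/init.d/ntp restart) to apply.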


2. Ubuntu Laptop LTS 12.04
Having installed the ownCloud sync client on my netbook, I encountered no synchronization issues (apparently the netbook was already syncing properly with a public NTP server). 


I left my netbook switched on, letting it sync for the entire day, and ran a tcpdump to confirm HTTPS traffic was being exchanged between the client and server.
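The tcpdump check can be sketched as follows; eth0 and the client address 192.168.1.20 are placeholders for your own server interface and netbook IP:

```
# watch for HTTPS sync traffic between the sync client and this server
tcpdump -i eth0 -n host 192.168.1.20 and port 443
```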

The sync was finally successful. I suspect the initial sync takes longer than the subsequent incremental updates. 


Home Access (miniDLNA)

Leveraging the Ethernet-over-Power setup I previously posted about, it makes sense to enable DLNA on my OwnCloud server. This essentially turns the server into centralized storage accessible from all devices via DLNA.

MiniDLNA is a lightweight DLNA server whose installation is as simple as running "apt-get install minidlna". A Google search for "dlna ubuntu" returns this quick tip as its first result, covering all the essential configuration. 

For my setup, by default, OwnCloud stores its client files in the following directory:
/var/lib/owncloud/data/root/files/clientsync

In addition, I also created "music" and "photos" folders under the same directory. These directories are added to the media directories in the miniDLNA config (/etc/minidlna.conf):

# set this to the directory you want scanned.
# * if have multiple directories, you can have multiple media_dir= lines
# * if you want to restrict a media_dir to a specific content type, you
# can prepend the type, followed by a comma, to the directory:
# + "A" for audio (eg. media_dir=A,/home/jmaggard/Music)
# + "V" for video (eg. media_dir=V,/home/jmaggard/Videos)
# + "P" for images (eg. media_dir=P,/home/jmaggard/Pictures)
media_dir=A,/var/lib/owncloud/data/root/files/music
media_dir=P,/var/lib/owncloud/data/root/files/photos

Notice I have set them as "A" for audio and "P" for images. 

At this point, if I run minidlna, the server returns a “Media directory not accessible!" error.

There is a post on the ownCloud community forum describing the fix - adding both the root and minidlna users to the www-data group:
root@ubuntu:/opt# usermod -a -G www-data root
root@ubuntu:/opt# usermod -a -G www-data minidlna

Now, apply the -R option to force a full rescan before restarting minidlna:
root@ubuntu:/opt# minidlna -R

root@ubuntu:/opt# /etc/init.d/minidlna force-reload
* Restarting DLNA/UPnP-AV media server minidlna [ OK ]

/var/log/minidlna.log confirms the file scan was successful:

[2012/11/17 14:23:26] minidlna.c:155: warn: received signal 15, good-bye
[2012/11/17 14:23:26] minidlna.c:907: warn: Starting MiniDLNA version 1.0.21 [SQLite 3.7.9].
[2012/11/17 14:23:26] minidlna.c:935: warn: Creating new database...
[2012/11/17 14:23:26] minidlna.c:1002: warn: HTTP listening on port 8200
[2012/11/17 14:23:26] scanner.c:719: warn: Scanning /var/lib/owncloud/data/root/files/music
[2012/11/17 14:23:26] scanner.c:790: warn: Scanning /var/lib/owncloud/data/root/files/music finished (166 files)!
[2012/11/17 14:23:26] scanner.c:719: warn: Scanning /var/lib/owncloud/data/root/files/photos
[2012/11/17 14:23:28] scanner.c:790: warn: Scanning /var/lib/owncloud/data/root/files/photos finished (1359 files)!

Remote Access
As mentioned earlier, there are broadly two methods for clients to access the OwnCloud server - web access and the sync client. 

There are some useful tweaks which enhance the security and ease of remote access:

1. Enhance security by enabling SSL (and redirecting all HTTP to HTTPS)

2. Enable remote access by utilizing free dynamic DNS services, as covered in my earlier post

For example, rather than configuring https://Server-LAN-IP/owncloud, configure https://dynamicDNS-URL:some-random-port/owncloud as the owncloud server's address.

Stating the obvious, the dynamicDNS-URL is intended to be accessible via the public internet. That also means you can access your ownCloud from anywhere in the world with internet connectivity. 

Further notes
I recall that enabling SSL on my Apache2 server involved a couple of steps, from creating an SSL certificate to enabling the corresponding Apache2 module.
It is a worthwhile topic which I may write another post on when I get around to it. 
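In the meantime, those steps can be roughly sketched as follows; this assumes a self-signed certificate and Ubuntu's default Apache site names, so treat it as a starting point rather than a complete recipe:

```
# generate a self-signed certificate (fill in your own details at the prompts)
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout /etc/ssl/private/owncloud.key -out /etc/ssl/certs/owncloud.crt

# enable Apache's SSL module and the default HTTPS site, then restart
a2enmod ssl
a2ensite default-ssl
/etc/init.d/apache2 restart
```

Redirecting HTTP to HTTPS can then be done with a "Redirect permanent" line in the port-80 virtual host, pointing at the https:// address.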

HP mini 1000 - Ubuntu 12.04 LTS

A quick post on my HP mini 1000 netbook, which I first blogged about (post) at the beginning of this year.

The first time I started it up in ten months, the software manager automatically offered to upgrade it to 12.04 LTS. The rest was point-and-click and a couple of hours of waiting - which has been really worthwhile:


Giving credit to Ubuntu, the latest LTS version has fixed at least two bugs:
1. The wireless connection is functioning properly and its icon correctly reflects its status.
2. The little blue LED button which enables/ disables the wireless connection now functions properly.

Not bad for a three-year-old Intel Atom netbook, and I am planning to do more things with it.

Dynamic DNS - dnsdynamic and no-ip


Dynamic DNS (DDNS) allows for remote access to a publicly hosted server with a dynamic IP address, using a pre-defined URL. 

I am covering two dynamic DNS providers offering free services, and how to set up their DDNS clients respectively:

1. No-IP
No-IP comes with its own Linux DDNS client, which can be installed using "apt-get install noip2". 
To manually set up your details, run noip2 with the "-C" option. 

root@web-host:# noip2 -h

USAGE: noip2 [ -C [ -F][ -Y][ -U #min]
        [ -u username][ -p password][ -x progname]]
        [ -c file][ -d][ -D pid][ -i addr][ -S][ -M][ -h]

Version Linux-2.1.9
Options: -C               create configuration data
         -F               force NAT off
         -Y               select all hosts/groups
         -U minutes       set update interval
         -u username      use supplied username
         -p password      use supplied password
         -x executable    use supplied executable
         -c config_file   use alternate data path
         -d               increase debug verbosity
         -D processID     toggle debug flag for PID
         -i IPaddress     use supplied address
         -I interface     use supplied interface
         -S               show configuration data
         -M               permit multiple instances
         -K processID     terminate instance PID
         -z               activate shm dump code
         -h               help (this text)


Note: /usr/local/etc/noip2.conf contains what appears to be encrypted text, so it is not meant to be edited by hand. 

2. DNSdynamic
The DNSdynamic service uses ddclient on Ubuntu. Herewith my configuration for reference:

root@ubuntu:# cat /etc/ddclient.conf
# Configuration file for ddclient generated by debconf
#
# /etc/ddclient.conf

daemon=60
protocol=dyndns2
use=web, web=checkip.dyndns.org
server=www.dnsdynamic.org
login= (username)
password=(password)
(domain).dnsdynamic.com

Replace the corresponding fields with your own account details.
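Before leaving the daemon to it, the configuration can be verified with a one-shot foreground run; these are standard ddclient debugging flags:

```
# run ddclient once in the foreground with verbose debugging output
ddclient -daemon=0 -debug -verbose -noquiet
```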

P.S. Remember to set up port forwarding if your servers are sitting behind NAT.


Wednesday, January 11, 2012

HP Mini 1000 - Ubuntu Netbook Edition 10.04 Lucid


[Readers note: this post was written back in May 2011 so some of the materials may not be the most up to date (e.g. I think Ubuntu has merged the netbook remix and desktop into one single edition). It is presented purely for reference purposes.  ]

Cutting a long story short, I purchased an HP mini 1000 back in 2009 and it has basically been collecting dust since. (I thought the netbook itself looked great, and I still do.)



So, in the interest of making the netbook useful, I went ahead and installed Ubuntu Netbook Remix, having confirmed my netbook is one of the supported models.

Ubuntu’s netbook edition is available under the “alternative-download” page. (Click the “alternative-download” tab on the main download page.) The latest netbook edition is still 10.04 (which is what my netbook runs). It seems that from then onwards, Ubuntu offers only server and desktop editions.

As a side note, since the HP netbook has no DVD-ROM drive, I used the Universal USB Installer to create the USB install thumb drive for this installation.

The actual installation is rather straightforward so I have skipped the details; tweaking the system is actually the “fun” part, but thankfully there is an Ubuntu wiki page dedicated to this topic. 

For those who can't be bothered reading it all, there are three fixes for the HP mini 1000: 
1) The wireless connection doesn't work unless the wired connection is plugged in when the system starts up (use the GUI: System->Administration->Hardware Drivers and install the "restricted driver").
2) The Ethernet port (add the option "acpi_os_name=Linux" to the GRUB_CMDLINE_LINUX_DEFAULT line in the /etc/default/grub file and then run "update-grub").
3) Set Firefox's browser cache to point to a RAM disk, to avoid killing the solid-state drive with constant reads/writes (open Firefox, type about:config in the URL bar, promise you'll be careful, and then set the parameter "browser.cache.disk.parent_directory" to the value "/dev/shm/firefox").
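One caveat with the RAM-disk cache: /dev/shm is cleared on every reboot, so the cache directory has to exist before Firefox can use it. A small sketch (the path matches the about:config value used for the cache):

```shell
# recreate the Firefox cache directory on the RAM disk (wiped at each reboot)
mkdir -p /dev/shm/firefox
```

Adding that line to /etc/rc.local makes the directory reappear on every boot.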

So here with a screen shot of the web browser on Ubuntu running on HP mini 1000... 


P.S. Not shown on the screen: the wireless status icon actually has an exclamation mark as if it isn't connected, although the internet connection works perfectly fine. 


Ubuntu Server 11.04 Halts when issued the “reboot” command


The Node Controller I built in an earlier post consistently experienced a system-halt issue: after I entered a “reboot” command, the system halted with a “shutting down” message showing on the screen (forever).

The hardware is Dell Optiplex 745 and the operating system was Ubuntu 11.04 (server).
root@ubuntu-NodeController:~# uname -a
Linux ubuntu-NodeController 2.6.38-13-generic-pae #53-Ubuntu SMP Mon Nov 28 19:41:58 UTC 2011 i686 i686 i386 GNU/Linux

I did some searching online and found the issue was caused by an Ubuntu 11.04 software bug.

So I upgraded the Ubuntu software release:
# sudo apt-get install update-manager-core
# sudo do-release-upgrade


Now the system is running on Ubuntu 11.10 server and the issue is resolved.

root@ubuntu-NodeController:~# uname -a
Linux ubuntu-NodeController 3.0.0-14-generic-pae #23-Ubuntu SMP Mon Nov 21 22:07:10 UTC 2011 i686 i686 i386 GNU/Linux





P.S. While searching for the solution, I stumbled upon a post suggesting setting the kernel’s “reboot” option to one of “bios”, “acpi”, or “force”: 

# vi /etc/default/grub
GRUB_CMDLINE_LINUX="reboot=x"


# update-grub

While the suggestion was not helpful for my setup, it could be useful in other cases of restart halts where an OS upgrade is not an option. 

Tuesday, August 30, 2011

Motion and a low-cost home surveillance system


I have this Creative Live webcam from years ago. While it works perfectly well, its resolution is no better than my iPhone 3GS…
So, rather than throwing it away or letting it sit idle, I connected it to the Ubuntu server, installed the “motion” package, and turned it into a low-cost web surveillance system…

As you shall see, it is a straightforward procedure:
1. On the Ubuntu server, to install motion, enter:

# apt-get install motion
2. Configure "motion.conf" file to enable remote access (web browser):
File: /etc/motion/motion.conf
# Restrict webcam connections to localhost only (default: on)
webcam_localhost off
# Restrict control connections to localhost only (default: on)
control_localhost off
3. Lastly, restart “motion” daemon:
/etc/init.d/motion restart
The webcam’s image should be available through the URL “http://<server-ip>:8081”

Friday, July 29, 2011

Implementing Cloud Computing on your average Desktop PC (Part 3/3)

Bundling an image
Under the Cloud Controller webpage’s Extras tab, there are some “ready-made” packages available for downloading:


Referencing the instructions on Eucalyptus Image Management, there are basically three steps to bundle an image:
1. Add a root disk image
2. Add a kernel/ramdisk pair to Walrus
3. Register the uploaded data with Eucalyptus.


Having extracted the “ready-made” packages into my home directory, /home/jonathonl/ubuntu9.04-bucket/euca-ubuntu-9.04-i386/kvm-kernel, I carried out the following steps (note: the --kernel and --ramdisk flags belong with the kernel and ramdisk images respectively, and each image must be bundled before it is uploaded and registered):
[Kernel]
euca-bundle-image -i /vmlinuz-2.6.28-11-server --kernel true
euca-upload-bundle -b kvm-kernel/ -m /tmp/vmlinuz-2.6.28-11-server.manifest.xml
euca-register kvm-kernel/vmlinuz-2.6.28-11-server.manifest.xml
[RAM disk]
euca-bundle-image -i /initrd.img-2.6.28-11-server --ramdisk true
euca-upload-bundle -b kvm-kernel/ -m /tmp/initrd.img-2.6.28-11-server.manifest.xml
euca-register kvm-kernel/initrd.img-2.6.28-11-server.manifest.xml
[VM root disk]
euca-bundle-image -i ubuntu.9-04.x86.img
euca-upload-bundle -b kvm-kernel/ -m /tmp/ubuntu.9-04.x86.img.manifest.xml
euca-register kvm-kernel/ubuntu.9-04.x86.img.manifest.xml

The kernel, RAM disk and VM root images should then be available under the “images” tab:
Hybridfox - Launch Instance
I used Hybridfox to manage and launch an instance. To begin with, Firefox 5.01 does NOT work with Elasticfox (installation error); however, Firefox works well with Hybridfox (v1.7b89). This link has all the instructions for setting up Hybridfox to communicate with your cloud systems.
During the setup, a KeyPair will be created. Be sure to save this key somewhere handy as it will be used for SSH into the instance later.
The following settings were used to launch an instance:



With my setup, I am also using the Cloud Controller as a jump host to access the instance running on the Node Controller. The keypair file mentioned earlier is used as follows to allow for a password-less SSH login:
root@ubuntu-CloudController:~# ssh -i /home/jonathonl/keypair.pem 192.168.133.1
Linux ubuntu 2.6.28-11-server #42-Ubuntu SMP Fri Apr 17 02:48:10 UTC 2009 i686
The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by applicable law.
To access official Ubuntu documentation, please visit:
http://help.ubuntu.com/
root@ubuntu:~#
So, this concludes the implementation of Cloud Computing, from installation to running an instance.

Implementing Cloud Computing on your average Desktop PC (Part 2/3)

After OS installation - on Integration and Troubleshooting
After the OS is installed on both the Cloud Controller and Node Controller systems, the Cloud Controller’s web interface should be accessible through https://<cloud-controller-ip>:8443 using the default credentials - admin/admin.
While I thought my private cloud was ready to go, it turned out I still had to fix and tune a couple of things. The following are the errors I encountered, along with the resolutions I gathered along the way:

1. “Store” tab showed “failed to connect to local store proxy” error:
Referencing this link, run “apt-get install python-image-store-proxy” to resolve the problem.
2. Verify (and fix) the “/etc/eucalyptus/eucalyptus.conf” file
On the Cloud Controller, ensure the private and public interfaces are set correctly (in my case, I set them to two different NICs to avoid some random remote-access issues). Furthermore, set NODES=”” to resolve the 0 free/max problem discussed later.
3. Under “Configuration” tab, remember to set the IP address for Cloud Controller, Walrus Host and Cluster Controller, as well as save the VM Types.
With my setup, I set the system’s public IP address as the Cloud Host and Walrus Host, whereas for the Cluster Controller I used the private address. Also, I saved the default VM Types.
4. Credentials.zip
4.1 “eucarc” script gives “No Route to Host” error
The “eucarc” script is part of the credentials.zip file, which is to be extracted into the ~/.euca/ directory on the Cloud Controller.
After registering Walrus, the Cluster Controller and the VM Types, download the credentials onto the Cloud Controller (acting as the cloud client) and run the "eucarc" script.
The “eucarc” script may give a "No Route to Host" error; this link suggests restarting the Cloud Controller to resolve the issue (and I can confirm it works).

4.2 “EC2_ACCESS_KEY environment variable must be set”
While issuing the “euca-describe-availability-zones verbose” command (euca2ools), it gives the following error:
root@ubuntu-CloudController:~# euca-describe-availability-zones
EC2_ACCESS_KEY environment variable must be set.
Connection failed
The way I resolved this was to download the credentials again and re-run the “eucarc” script. Furthermore, add the following line to /root/.bashrc to avoid having to download the credentials every time the Cloud Controller restarts:
[ -r ~/.euca/eucarc ] && . ~/.euca/eucarc
(I read this on one of the forums but forgot to capture the link)
4.3 Node Controller (?)
Without fully understanding the purpose of the "eucarc" script, I also scp'd it to, and ran it on, the Node Controller after encountering the 0 free/max CPU issue a second time.
5. “euca-describe-availability-zones verbose” should describe the environment (the resources available on the Node Controller for running an instance). If the “free/max” fields are both 0, then something has not been registered properly.
Removing the "NODES" config in eucalyptus.conf, deregistering BOTH the cluster and the node, and then re-registering the cluster followed by the node solved the 0000 free/max CPU problem, referencing this link.
root@ubuntu-CloudController:~/.euca# euca_conf --deregister-nodes 192.168.20.2
SUCCESS: removed node '192.168.20.2' from '//etc/eucalyptus/eucalyptus.local.conf'
root@ubuntu-CloudController:~/.euca# euca_conf --list-clusters
registered clusters:
HomeCluster 192.168.10.153
root@ubuntu-CloudController:~/.euca# euca_conf --deregister-cluster HomeCluster
SUCCESS: cluster 'HomeCluster' successfully deregistered.
root@ubuntu-CloudController:~/.euca# euca_conf --register-cluster HomeCluster 192.168.20.1
Trying rsync to sync keys with "192.168.20.1"...The authenticity of host '192.168.20.1 (192.168.20.1)' can't be established.
ECDSA key fingerprint is 2a:29:27:ce:a1:03:a9:5e:c1:e3:52:9e:62:89:de:23.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.20.1' (ECDSA) to the list of known hosts.
done.
SUCCESS: new cluster 'HomeCluster' on host '192.168.20.1' successfully registered.
root@ubuntu-CloudController:~/.euca# euca_conf --list-clusters
registered clusters:
HomeCluster 192.168.20.1
root@ubuntu-CloudController:~/.euca# euca_conf --register-nodes 192.168.20.2
INFO: We expect all nodes to have eucalyptus installed in //var/lib/eucalyptus/keys for key synchronization.
Trying rsync to sync keys with "192.168.20.2"...done.
root@ubuntu-CloudController:~/.euca# euca_conf --list-nodes
registered nodes:
192.168.20.2 HomeCluster
root@ubuntu-CloudController:~/.euca# euca-describe-availability-zones
AVAILABILITYZONE HomeCluster 192.168.20.1
root@ubuntu-CloudController:~/.euca# euca-describe-availability-zones verbose
AVAILABILITYZONE HomeCluster 192.168.20.1
AVAILABILITYZONE |- vm types free / max cpu ram disk
AVAILABILITYZONE |- m1.small 0002 / 0002 1 192 2
AVAILABILITYZONE |- c1.medium 0002 / 0002 1 256 5
AVAILABILITYZONE |- m1.large 0001 / 0001 2 512 10
AVAILABILITYZONE |- m1.xlarge 0001 / 0001 2 1024 20
AVAILABILITYZONE |- c1.xlarge 0000 / 0000 4 2048 20

6. Configure a bridge on the Cloud Controller in response to the WARN message in /var/log/eucalyptus/cc.log, referencing recommendations from this link.

Congratulations if you have read this far into the post. At this stage, you should have a working cloud platform, allowing you to search for and install images from the "Store" tab.
The next post will cover the Hybridfox interface, launching an instance, and accessing it through SSH.

Implementing Cloud Computing on your average Desktop PC (Part 1/3)

Why Eucalyptus? Because it is API-compatible with Amazon’s EC2 cloud platform. Even better, Eucalyptus comes with Ubuntu Server edition 11.04.


I began by reading the Eucalyptus beginner’s guide, which contains the full installation procedure and configuration items, as well as a high-level reference diagram showing all of the private cloud's components (please read the guide in case the diagram is unclear due to low resolution):
While I do not intend to go through every step in the beginner's guide, the definition of an "instance" is worth paying attention to (since this is what I aimed to run eventually): "The VMs running on the hypervisor and controlled by UEC are called instances." As you may notice, the private cloud implementation spans two desktop PCs:

Although there is an option of installing the entire private cloud on a single computer, I decided to use my old PC as a dedicated Cloud Controller.
My old PC's CPU does not support Intel’s Virtualization Technology (VT), but it has a 1.5TB hard drive (ample storage space). For the Node Controller, I purchased a second-hand PC from eBay for $190; it is only strong enough to run Windows XP, but its CPU supports VT, and I also upgraded it to 4GB of RAM in total.
Installation Notes
The actual OS installation was rather straightforward: I inserted Ubuntu’s installation CD and followed the on-screen instructions. As a note, it is a good idea to be connected to the internet so you can run "apt-get update" as well as sync the time with NTP.
Node Controller
Prior to installation, note that the Node Controller is the only component whose CPU is required to support hardware virtualization. To identify the CPU model, either check the BIOS or check the /proc/cpuinfo file (if Linux is already installed):
root@ubuntu-NodeController:/var/log# cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
model name : Intel(R) Core(TM)2 CPU 6300 @ 1.86GHz
address sizes : 36 bits physical, 48 bits virtual
Cross-check with this Intel link to confirm whether the processor supports VT technology.
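Alternatively, the CPU flags in /proc/cpuinfo reveal this directly - a non-zero count below means the processor advertises Intel VT (vmx) or AMD-V (svm):

```shell
# count the cores advertising hardware virtualization support
# (grep -c prints 0 and exits non-zero when nothing matches, hence || true)
grep -Ec 'vmx|svm' /proc/cpuinfo || true
```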
Virtualization Technology (VT) needs to be enabled through the BIOS settings (in my case, by enabling the "Hardware Virtualization" CPU setting). Otherwise, Eucalyptus complains about the BIOS stopping KVM from starting, with the following syslog messages:
root@ubuntu-NodeController:/var/log# cat syslog | grep -i kvm
Jul 21 20:39:29 ubuntu-NodeController kernel: [ 27.885070] kvm:disable TXT in the BIOS or activate TXT before enabling KVM
Jul 21 20:39:29 ubuntu-NodeController kernel: [ 27.885074] kvm: disabled by bios
Jul 21 20:39:29 ubuntu-NodeController init: qemu-kvm pre-start process (1086) terminated with status 1
Another thing worth noting about the Node Controller: the "eth1" interface should be part of the bridge interface which, as its name suggests, bridges the physical port to a virtual/internal interface on the instance.
Cloud Controller
As you may note in the diagram above, the Cloud Controller has two NICs installed: one for internet access (public), the other for cloud access (private). I attempted setting up both private and public interfaces on the same Ethernet port, but remote access only worked intermittently (I doubt the single-port setup works reliably).
Herewith the /etc/network/interfaces file for reference:
root@ubuntu-CloudController:~# cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
# The loopback network interface
auto lo
iface lo inet loopback
# The internet interface
auto eth0
iface eth0 inet static
address 192.168.10.153
netmask 255.255.255.0
network 192.168.10.0
broadcast 192.168.10.255
gateway 192.168.10.1
# The Cloud Computing interface (This was changed to a bridge interface later to resolve an error message)
auto eth1
iface eth1 inet static
address 192.168.20.1
netmask 255.255.255.0
network 192.168.20.0
broadcast 192.168.20.255
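For reference, the bridged version of the cloud-facing interface mentioned in the comment above might look like this; it assumes the bridge-utils package is installed, and "br0" is a name chosen here for illustration:

```
# /etc/network/interfaces - eth1 enslaved to a bridge for the cloud network
auto br0
iface br0 inet static
    address 192.168.20.1
    netmask 255.255.255.0
    network 192.168.20.0
    broadcast 192.168.20.255
    bridge_ports eth1
    bridge_stp off
```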
Also worth mentioning, I set up my Cloud Controller as a client as well, installing the following packages:
apt-get install qemu-kvm
apt-get install euca2ools
P.S. I set up my Cloud Controller as a NAT gateway to allow the Node Controller to access the internet (somehow my home internet gateway does not allow me to configure static routing, so I had to resort to this workaround).
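That NAT workaround can be sketched with standard iptables masquerading - assuming, as above, that eth0 faces the internet and eth1 faces the cloud network:

```
# enable IPv4 forwarding, then masquerade cloud traffic out the public NIC
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```

These settings do not survive a reboot on their own; persisting them (e.g. via /etc/sysctl.conf and an iptables-save/restore step) is left as an exercise.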
So herewith the list of considerations I came across during the installation of Eucalyptus. Part two will contain the configuration and troubleshooting items needed to get Eucalyptus up and running.