Cloud1

virsh with ESXi for Ubuntu 14.04 LTS (and MAAS)


Why oh why is the latest release of virsh with ESX support not in APT?

Anyway, let's hack away.

First of all, get your libvirt source tarball (virsh ships with libvirt); I used 1.2.18 today…

Now apt-get install the crapload of dependencies needed to compile the source: the usual GNU dev tools, the XML dev libraries and other stuff to make things work.

Then do the ./configure (with --with-esx=yes), make and make install:
apt-get install gcc make pkg-config libxml2-dev libgnutls-dev libdevmapper-dev libcurl4-gnutls-dev python-dev libpciaccess-dev libxen-dev libnl-dev uuid-dev xsltproc
wget http://libvirt.org/sources/libvirt-1.2.18.tar.gz
tar -zxvf libvirt-1.2.18.tar.gz
cd libvirt-1.2.18/
./configure --prefix=/usr --localstatedir=/var --sysconfdir=/etc --with-esx=yes
make
make install

Now try virsh -v; it should come back with 1.2.18 or whatever version you installed,
and virsh -c esx://root@<your.esxi.server>?no_verify=1 should let you control your ESXi host. Wonder of wonders :) we can stop and start ESXi VMs now, MAAS will be so happy :)
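
For instance, to poke at the host from the shell (the VM name is just a placeholder here):

virsh -c 'esx://root@<your.esxi.server>?no_verify=1' list --all # all VMs, running or not
virsh -c 'esx://root@<your.esxi.server>?no_verify=1' start myvm # power one on
virsh -c 'esx://root@<your.esxi.server>?no_verify=1' shutdown myvm # graceful shutdown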

Oh, and if you want to use MAAS to power-control your VMs, use the Virsh power type with esx://root@<your.esxi.server>/system?no_verify=1 as the power address, and don't forget to create your /etc/libvirt/auth.conf file for the authentication. It looks a bit like this:

[credentials-esx]
username=root
password=somethingsecret
[auth-esx-hostname1]
credentials=esx
[auth-esx-hostname2]
credentials=esx
[auth-esx-hostname3]
credentials=esx

hostnameX is your ESXi host, of course.

That's it.

Set Round Robin and IOPS for Nexenta iSCSI LUNs


Have a ton of iSCSI volumes across a ton of hosts and just not feeling like clicking all those checkboxes to change paths? Your solution would be:

esxcli nmp device list | grep naa | grep NEXENTA | awk -F '[()]' '{print $(NF-1)}' | while read disk; do esxcli nmp device setpolicy -d $disk --psp=VMW_PSP_RR; done

Oh, and while you're at it, set the IOPS chunk size to 1; it speeds things up a bit :)
esxcli nmp device list | grep naa | grep NEXENTA | awk -F '[()]' '{print $(NF-1)}' | while read disk; do esxcli nmp roundrobin setconfig --device $disk --iops 1 --type iops ; done
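
This is the old ESX(i) 4.x esxcli namespace, by the way; on ESXi 5.x and later the same two knobs live under esxcli storage nmp, so the equivalents would be roughly:

esxcli storage nmp device list | grep naa | grep NEXENTA | awk -F '[()]' '{print $(NF-1)}' | while read disk; do esxcli storage nmp device set --device $disk --psp VMW_PSP_RR; done
esxcli storage nmp device list | grep naa | grep NEXENTA | awk -F '[()]' '{print $(NF-1)}' | while read disk; do esxcli storage nmp psp roundrobin deviceconfig set --device $disk --iops 1 --type iops; done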

Simple RSYNC/SSH/ZFS Backup


Hi there,

It has been a while, busy building clouds and stuff. In the Christmas spirit of sharing, today I wanted to share this bash script for ZFS backups; it will basically snapshot the ZFS share, rsync new files over and then delete the old ZFS snaps.

Simple, yes, but somehow a wheel that keeps being reinvented over and over again, so here’s my version of it. Hope it helps someone save some time.

For this script, the backup server is one of our own TerraNas servers (basically Ubuntu 14.04 with native ZFS and NFSv4) and the client is an Ubuntu 14.04 LTS server; the SSH keys have been exchanged so we can SSH across without passwords and such.

On the ZFS/NFS server side, we leave NFSv4 ID mapping on (nfs4_disable_idmapping set to N) so usernames cross the wire instead of numeric IDs:

echo N > /sys/module/nfs/parameters/nfs4_disable_idmapping
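
That echo does not survive a reboot; to make it stick you can pin the module option instead (the file name below is arbitrary):

echo "options nfs nfs4_disable_idmapping=N" >> /etc/modprobe.d/nfs-idmap.conf
echo "options nfsd nfs4_disable_idmapping=N" >> /etc/modprobe.d/nfs-idmap.conf # same knob on the server-side nfsd module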

Share the ZFS drive with:

zfs set sharenfs=rw=@192.168.33.0/24,insecure,no_root_squash pool1/backup

Where 192.168.33.0/24 is your subnet or host of course.

Then we share it out with zfs share -a and service nfs-kernel-server restart.

We do have one export to the localhost in the /etc/exports file, because NFS will complain if this file is empty, but the rest of the exports come from ZFS.
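
On the client side the share then mounts like any other NFSv4 export, something along these lines (the server IP and mount point are placeholders):

mkdir -p /mnt/backup
mount -t nfs4 192.168.33.1:/pool1/backup /mnt/backup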

On the client we have a little script that does the snapshot/rsync/prune dance.

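A minimal sketch of it, assuming the ZFS share is NFS-mounted on the client and we have root SSH to the backup box; the backup host, dataset, paths and retention count below are all placeholders to adjust:

#!/bin/bash
# Sketch of the backup script described above:
# snapshot the ZFS share, rsync new files over, prune old snapshots.
BACKUPHOST=192.168.33.1 # the TerraNas ZFS/NFS server
DATASET=pool1/backup    # the ZFS dataset we share out
SRC=/data               # what we back up on the client
DST=/mnt/backup         # where the NFS share is mounted
KEEP=7                  # how many snapshots to keep
DATE=$(date +%Y-%m-%d)

# Snapshot the share before we touch it
ssh root@$BACKUPHOST "zfs snapshot $DATASET@$DATE"

# Rsync new and changed files over (and drop deleted ones)
rsync -av --delete $SRC/ $DST/

# Delete the old ZFS snaps beyond the retention window
ssh root@$BACKUPHOST "zfs list -H -t snapshot -o name -s creation" | \
  grep "^$DATASET@" | head -n -$KEEP | while read snap; do
    ssh root@$BACKUPHOST "zfs destroy $snap"
done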

Using Drush to move stuff from dev to prd


Revision 2.0

If you're hosting Drupal sites you might have searched for scripts to make publishing from development to production easier, and you must have stumbled upon things like aliases, rsync and sqldump to help you get along.

To save you some time, a short little script that we use for propagation to our prod servers.

First you need your aliases.drushrc.php (we have it in the ~/.drush folder):

<?php
$aliases['tbs-dev'] = array(
  'root' => '/sites/tbs/www',
  'uri'  => 'http://www.travelbysuitcases.local',
  'path-aliases' => array(
    '%drush' => '/usr/bin',
    '%site'  => 'sites/default/',
  ),
  'command-specific' => array(
    'rsync' => array(
      'mode' => 'avz',
      'exclude-paths' => 'files',
    ),
  ),
);

$aliases['tbs-prd'] = array(
  'root' => '/sites/tbs/www',
  'uri'  => 'http://www.travelbysuitcases.com',
  'remote-host' => '10.1.4.12',
  'path-aliases' => array(
    '%drush' => '/usr/bin',
    '%site'  => 'sites/default/',
  ),
  'command-specific' => array(
    'sql-sync' => array(
      'sanitize' => FALSE,
      'no-ordered-dump' => TRUE,
      'structure-tables' => array(
        'common' => array(
          'cache', 'cache_filter', 'cache_menu', 'cache_page', 'history', 'sessions', 'watchdog',
        ),
      ),
    ),
    'rsync' => array(
      'mode' => 'avz',
      'exclude-paths' => 'files',
    ),
  ),
);

There might be a few things that you notice. We use three-letter shortnames (CMS, INT, TBS and so on) for our websites, followed by either -prd or -dev. Further, we do not sanitize data when we push to the prd database, as we develop with the full dataset; you might want to take a different approach there.

Also, as we found it makes no sense to push our entire media library across on every update, we exclude it from our rsync. Instead we have the /sites/default/files folder mounted as an NFS share on our prod and dev servers.

Adjust where necessary.
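
Once the aliases are in place, a quick sanity check never hurts before trusting the script below to them:

drush sa --table # list all known aliases with their URLs and roots
drush @tbs-prd status # confirm drush can bootstrap the remote prd site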

Now our little bash script that moves stuff around:

#!/bin/bash
# PRD 2 DEV Drush by j.kool@integrative.it
# Revision 2.0
#

STARTTIME=$(date +%s)
if [ -z "$1" ]; then
  echo "Usage : prd2dev 'shortname'"
  exit 1
fi

cd /sites

# Check if shortname is a valid alias (~/.drush/aliases.drushrc.php)
drush sa > shortnames.txt
if grep -Fxq "@$1-prd" shortnames.txt; then
  # Get the DEV and PRD paths
  dpath=$(drush @$1-dev dd)
  ppath=$(drush @$1-prd dd)
  # Get the DEV and PRD URLs for later (uuid)
  devurl=$(drush sa --table | grep "@$1-dev" | awk '{print $3}')
  prdurl=$(drush sa --table | grep "@$1-prd" | awk '{print $3}')
  date=$(date +%Y-%m-%d.%H-%M-%S)
  # Set maintenance mode, clear the caches and back up the DEV site
  drush @$1-prd vset maintenance_mode 1 &
  drush @$1-dev vset maintenance_mode 1 &
  wait
  echo @$1-prd and @$1-dev in maintenance mode
  drush @$1-prd cc all &
  drush @$1-dev cc all &
  wait
  echo @$1-prd and @$1-dev cache cleared
  rm /sites/backup/$1-prd-*.tar.gz
  drush @$1-dev ard --destination=/sites/backup/$1-prd-$date.tar.gz --tar-options="--exclude=files"

  # Save the dev/prd paths and a build stamp
  echo $dpath > $dpath/dev-path.txt
  echo $ppath > $dpath/prd-path.txt
  echo $date > $dpath/build-$date.txt
  echo Local backup made of @$1-dev to /sites/backup/$1-prd-$date.tar.gz
  # Rsync and sql-sync PRD to DEV (removes all obsolete files)
  drush --yes -v rsync @$1-prd @$1-dev &
  drush --yes sql-sync @$1-prd @$1-dev &
  wait
  echo SQL and RSYNC completed
  # Set CSS and JS compression to 0
  drush @$1-dev vset preprocess_css 0 &
  drush @$1-dev vset preprocess_js 0 &
  # Disable googleanalytics and mollom - set the UUID base URL
  if [ -d $dpath/sites/all/modules/mollom ]; then
    drush @$1-dev --yes dis mollom &
  fi
  if [ -d $dpath/sites/all/modules/google_analytics ]; then
    drush @$1-dev --yes dis googleanalytics &
  fi
  if [ -d $dpath/sites/all/modules/uuid ]; then
    drush @$1-dev vset uuid_redirect_external_base_url $devurl &
  fi
  wait
  echo Module maintenance completed
  drush @$1-prd vset maintenance_mode 0 &
  drush @$1-dev vset maintenance_mode 0 &
  wait
  echo @$1-prd and @$1-dev Back ON-LINE
else
  echo "$1 not found as Shortname in alias list"
  exit 1
fi
ENDTIME=$(date +%s)
echo "$1-prd deployed to $1-dev in $(($ENDTIME - $STARTTIME)) seconds... COPY Backup in background"
scp -c arcfour128 /sites/backup/$1-prd-$date.tar.gz netbackup@192.168.1.199:~/$1-prd-$date.tar.gz &


It's no rocket science: we first check if the shortname is available on the server, then push the URLs and paths into some variables as we might need these later. We also take a backup of the dev side (the one about to be overwritten), rsync and sql-sync everything over, do some maintenance on modules that should stay off in dev (things like mollom and GA), and finally disable the maintenance mode, and that's it.

As per revision 2.0, I push some jobs in parallel on dev and prd to speed everything up a bit (the & and wait combinations). Also, we move our backups to an NFS share.
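
The pattern itself is plain bash, nothing drush-specific (hypothetical job names, obviously):

some_slow_job_on_prd & # kick off in the background
some_slow_job_on_dev & # and its twin on the other side
wait                   # block here until both have finished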

The script above is PRD2DEV; DEV2PRD is basically the reverse of it.

I could go into detail on each little piece of bash code, but it's pretty self-explanatory, I think.

Hope it saves someone some time.

Cheers

Mass Update Ubuntu (Debian) servers without puppets or chefs


OK, you do not want to invest the exorbitant fee for management and scripting through some fancy web-based script engine like Puppet or Chef, but like us you do have this farm of a couple of dozen (or hundreds) of VMs that at some point need their updates and patches, especially when these VMs have been neglected for a couple of weeks during the summer vacations :).

Now how to go about this?

Firstly, from your admin server or PC, copy your SSH key to the target so that you can log in without a password challenge. For this to work as root you need to give root a password (log in on the target server, sudo to root, and set a password with passwd); of course not advised, but very practical in most cases.

Now to copy your SSH key, simply issue the ssh-copy-id command followed by uid@ip (e.g. root@192.168.1.10) and authenticate.
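
So for the example target above, that would be:

ssh-copy-id root@192.168.1.10
ssh root@192.168.1.10 hostname # should now log straight in, no password prompt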

Now make a file with the IPs (or hostnames) of the servers you want to manage; I call it servers.lst.
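
Something like this (the addresses are examples, of course):

192.168.1.10
192.168.1.11
webserver-03.local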

Then this little (very simple) script does the trick

while read name
do
  echo "Processing : " $name
  ssh -n $name "DEBIAN_FRONTEND=noninteractive $1"
done < servers.lst

The -n after ssh redirects ssh's stdin from /dev/null; without it the first ssh call would swallow the rest of servers.lst. The DEBIAN_FRONTEND=noninteractive in the command suppresses some debconf warnings.

Simply make it executable (chmod +x) and run it as ./script.sh 'apt-get update && apt-get upgrade --yes' (for example).

Sometimes, less is more.