Remote SSH Parallel processing


Ever needed to update a whole bunch of (Ubuntu) hosts and gotten tired of doing it by hand? Why not hack up a little script to do it for you.

First we need a list of servers in a text file. Mine is called servers.list and contains the hostnames of the hosts that need to be processed, like so:

amber.integrative.it
angela.integrative.it
christel.integrative.it
linda.integrative.it
...
..
.

For authentication I use SSH RSA keys, so my admin station has privileges on every host. If you have not set that up yet, run ssh-copy-id root@hostname.something to copy your key over to each host (yes, I use root here, bad bad bad..).

Then this simple script will do the magic for you; mine is called do-servers.sh (don't forget to chmod +x the script):

#!/bin/bash
# Run the given command on every host in servers.list, all in parallel.
command="$1"
while read -u999 server; do
  # one backgrounded subshell per host
  (echo "+ Processing $server - $command"; ssh root@"$server" "$command"; echo "- Done $server - $command") &
done 999< servers.list
wait   # block until every background ssh has finished

(The while loop reads the server list through a separate file descriptor (999) so it stays out of the way of ssh's stdin; we group each host's commands in a subshell with ( ) and push it to the background with &; finally, the wait at the end does what it says: wait for everything to be done.)

Now all that remains is to call the script with something like ./do-servers.sh 'apt-get dist-upgrade -y' (put your command between quotes to make it a single parameter).
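If launching an ssh session to every host at once is more parallelism than you want, the same idea works with a concurrency cap via xargs -P. A sketch, not part of the script above (the limit of 10 is arbitrary):

#!/bin/bash
command="$1"
# at most 10 concurrent hosts; -n keeps ssh from eating the host list on stdin
xargs -P 10 -I{} ssh -n root@{} "$command" < servers.list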

C'est ça..

 

VyOS Backup


Want to make backups of your VyOS router/firewall? This little script might help. It takes the config and converts it into set commands for an easy restore on another box. We push it to an rsync target on a ZFS/Nexenta server, but you can put it anywhere you like. Schedule it through cron or, better, through the system task scheduler.

Don't forget to use the commit archive to record your changes for the audit trail, like so:

set system config-management commit-archive location 'scp://admin:<password>@x.x.x.x/volumes/pool1/backup/vyos'

VyOS backup.sh script (store it in /config/scripts/backup/ and do not forget to make it executable: chmod +x /config/scripts/backup/backup.sh):

#!/bin/bash
# VyOS (1.6) backup script (jkool@integrative.it)
# Fetch me with: scp root@x.x.x.x:/volumes/pool1/backup/vyos/backup.sh /config/scripts/backup/backup.sh
# Keeps 5 versions locally.
#
# Schedule with:
#
# set system task-scheduler task backup executable path '/config/scripts/backup/backup.sh'
# set system task-scheduler task backup interval '8h'

h=$(hostname)
d=$(date +"%Y%m%d%H%M")
dest=192.168.1.200::pool1_backup/vyos
scripts=/config/scripts/backup

cd "$scripts"

# Archive the local auth material and dump the running config as set commands
tar -czf "$scripts/backup-auth-$h-$d.tar.gz" /config/auth
/opt/vyatta/sbin/vyatta-config-gen-sets.pl > "$scripts/backup-config-$h-$d.txt"

# Keep only the newest 5 of each locally (the timestamp in the name sorts chronologically)
ls backup-config-$h*.txt | head -n -5 | xargs -r rm
ls backup-auth-$h*.tar.gz | head -n -5 | xargs -r rm

# Push the fresh copies to the rsync target
rsync "$scripts/backup-config-$h-$d.txt" "$dest/$h"
rsync "$scripts/backup-auth-$h-$d.tar.gz" "$dest/$h"
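Restoring onto a fresh box then boils down to replaying those set commands in configure mode. A rough sketch, assuming the backup-config file has been copied back onto the new router (the exact workflow may differ slightly between VyOS versions):

configure
# paste the set commands from the saved backup-config-<hostname>-<timestamp>.txt here,
# i.e. the output of vyatta-config-gen-sets.pl
commit
save
exit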



Simple RSYNC/SSH/ZFS Backup


Hi there,

It has been a while, busy building clouds and stuff. In the Christmas spirit of sharing, today I wanted to share this bash script for ZFS backups: it basically snapshots the ZFS share, rsyncs new files over and then deletes the old ZFS snapshots.

Simple, yes, but somehow a wheel that keeps being reinvented over and over again, so here’s my version of it. Hope it helps someone save some time.

For this script the backup server is one of our own TerraNas servers (basically Ubuntu 14 with native ZFS and NFSv4) and the client is an Ubuntu 14 LTS server; the SSH keys have been exchanged so we can SSH across without passwords and such.

On the ZFS/NFS server side we enable NFSv4 ID mapping, so that usernames cross the wire instead of numeric IDs:

echo N > /sys/module/nfs/parameters/nfs4_disable_idmapping

Then share the ZFS dataset with:

zfs set sharenfs=rw=@192.168.33.0/24,insecure,no_root_squash pool1/backup

Where 192.168.33.0/24 is your subnet or host of course.

Then we share it out with zfs share -a and restart NFS with service nfs-kernel-server restart.

We do have one export to localhost in the /etc/exports file, because NFS will complain if that file is empty; the rest of the exports come from ZFS.
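For example, a single throwaway export like the one below keeps the NFS server happy (the path and options are only placeholders):

/pool1    localhost(ro,no_subtree_check)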

On the client we have a little script that drives the snapshot / rsync / prune cycle described above.
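A minimal sketch of that cycle, assuming key-based SSH from the client to the backup box (the hostname, dataset name, source path and retention count below are placeholders to adjust):

#!/bin/bash
# Sketch only: snapshot the backup dataset, rsync new files over, prune old snapshots.
BACKUPHOST=backup.example.local
DATASET=pool1/backup/$(hostname)
SRC=/data/
KEEP=14
STAMP=$(date +%Y%m%d%H%M)

# 1. Snapshot the dataset on the ZFS box before touching it
ssh root@$BACKUPHOST "zfs snapshot $DATASET@$STAMP"

# 2. Push new and changed files across
rsync -az --delete "$SRC" root@$BACKUPHOST:/$DATASET/

# 3. Prune the oldest snapshots, keeping the newest $KEEP
ssh root@$BACKUPHOST "zfs list -H -t snapshot -o name -s creation -r $DATASET | head -n -$KEEP | xargs -r -n1 zfs destroy"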


Using Drush to move stuff from dev to prd


Revision 2.0

If you're hosting Drupal sites you might have searched for scripts to make publishing from development to production easier, and you must have stumbled upon things like aliases, rsync and sql-dump to help you along.

To save you some time, here is a short little script that we use for propagating sites between our dev and prod servers.

First you need your aliases.drushrc.php (we keep it in the ~/.drush folder):

<?php
$aliases['tbs-dev'] = array (
  'root' => '/sites/tbs/www',
  'uri' => 'http://www.travelbysuitcases.local',
  'path-aliases' => array (
    '%drush' => '/usr/bin',
    '%site' => 'sites/default/',
  ),
  'command-specific' => array (
    'rsync' => array (
      'mode' => 'avz',
      'exclude-paths' => 'files',
    ),
  ),
);
//
$aliases['tbs-prd'] = array (
  'root' => '/sites/tbs/www',
  'uri' => 'http://www.travelbysuitcases.com',
  'remote-host' => '10.1.4.12',
  'path-aliases' => array (
    '%drush' => '/usr/bin',
    '%site' => 'sites/default/',
  ),
  'command-specific' => array (
    'sql-sync' => array (
      'sanitize' => FALSE,
      'no-ordered-dump' => TRUE,
      'structure-tables' => array (
        'common' => array (
          'cache', 'cache_filter', 'cache_menu', 'cache_page', 'history', 'sessions', 'watchdog',
        ),
      ),
    ),
    'rsync' => array (
      'mode' => 'avz',
      'exclude-paths' => 'files',
    ),
  ),
);

There might be a few things that you notice. We use 3-letter shortnames (CMS, INT, TBS and so on) for our websites, followed by either -prd or -dev. Further, we do not sanitize data when we push to the prd database, as we develop with the full dataset; you might want to take a different approach there.

Also, as we found it makes no sense to push our entire media library across on every update, we exclude it from our rsync. Instead we have the /sites/default/files folder mounted as an NFS share on both our prod and dev servers.
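The mount itself is just a plain NFS entry in /etc/fstab, something along these lines (the server name and export path are placeholders for illustration):

nas.example.local:/pool1/sites/tbs/files   /sites/tbs/www/sites/default/files   nfs4   defaults,_netdev   0   0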

Adjust where necessary.
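With the aliases in place you can sanity-check them before wiring them into any script, for example:

drush sa                # @tbs-dev and @tbs-prd should show up in the list
drush @tbs-prd status   # should report the remote site's details over SSH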

Now for our little bash script that moves stuff around:

#!/bin/bash
# PRD 2 DEV Drush by j.kool@integrative.it
# Revision 2.0
#

STARTTIME=$(date +%s)
if [ -z "$1" ] ; then
  echo "Usage : prd2dev 'shortname'"
  exit 1
fi

cd /sites

# Check if shortname is a valid alias (~/.drush/aliases.drushrc.php)
drush sa > shortnames.txt
if grep -Fxq "@$1-prd" shortnames.txt; then
  # Get the DEV and PRD paths
  dpath=$(drush @$1-dev dd)
  ppath=$(drush @$1-prd dd)
  # Get DEV and PRD URL for later (uuid)
  devurl=$(drush sa --table | grep "@$1-dev" | awk '{print $3}')
  prdurl=$(drush sa --table | grep "@$1-prd" | awk '{print $3}')
  date=$(date +%Y-%m-%d.%H-%M-%S)

  # Set maintenance mode, clear caches and back up the DEV site
  drush @$1-prd vset maintenance_mode 1 &
  drush @$1-dev vset maintenance_mode 1 &
  wait
  echo @$1-prd and @$1-dev in maintenance mode
  drush @$1-prd cc all &
  drush @$1-dev cc all &
  wait
  echo @$1-prd and @$1-dev cache cleared
  rm -f /sites/backup/$1-prd-*.tar.gz
  drush @$1-dev ard --destination=/sites/backup/$1-prd-$date.tar.gz --tar-options="--exclude=files"

  # Save the dpath, ppath and build stamp
  echo $dpath > $dpath/dev-path.txt
  echo $ppath > $dpath/prd-path.txt
  echo $date > $dpath/build-$date.txt
  echo Local backup made of @$1-dev to /sites/backup/$1-prd-$date.tar.gz

  # RSYNC and SQL-sync PRD to DEV (remove all obsolete files)
  drush --yes -v rsync @$1-prd @$1-dev &
  drush --yes sql-sync @$1-prd @$1-dev &
  wait
  echo SQL and RSYNC completed

  # Set CSS and JS compression to 0
  drush @$1-dev vset preprocess_css 0 &
  drush @$1-dev vset preprocess_js 0 &

  # Disable googleanalytics and mollom - set the UUID base URL
  if [ -d $dpath/sites/all/modules/mollom ]; then
    drush @$1-dev --yes dis mollom &
  fi
  if [ -d $dpath/sites/all/modules/google_analytics ]; then
    drush @$1-dev --yes dis googleanalytics &
  fi
  if [ -d $dpath/sites/all/modules/uuid ]; then
    drush @$1-dev vset uuid_redirect_external_base_url $devurl &
  fi
  wait
  echo Module maintenance completed

  drush @$1-prd vset maintenance_mode 0 &
  drush @$1-dev vset maintenance_mode 0 &
  wait
  echo @$1-prd and @$1-dev Back ON-LINE
else
  echo "$1 not found as Shortname in alias list"
  exit 1
fi
ENDTIME=$(date +%s)
echo "$1-prd deployed to $1-dev in $(($ENDTIME - $STARTTIME)) seconds... COPY Backup in background"
scp -c arcfour128 /sites/backup/$1-prd-$date.tar.gz netbackup@192.168.1.199:~/$1-prd-$date.tar.gz &
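A typical run, assuming the script is saved as prd2dev.sh (any name will do) and tbs is one of your configured shortnames:

./prd2dev.sh tbs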


It's no rocket science: we first check that the shortname is available on the server, then push the URLs and paths into some variables as we might need them later; we also take a local backup of the dev site before overwriting it, rsync and sql-sync everything over, do some maintenance on modules that should stay disabled (things like Mollom and GA), and finally switch off maintenance mode, and c'est ça.

As of revision 2.0 I run some jobs in parallel on dev and prd to speed everything up a bit (the & and wait combinations). Also, we move our backups to an NFS share.

The script above is PRD2DEV; DEV2PRD is basically the reverse of it.

I could go into detail on each little piece of bash code, but it's pretty self-explanatory, I think.

Hope it saves someone some time

Cheers

Mass Update Ubuntu (Debian) servers without puppets or chefs


OK, you do not want to invest the exorbitant fee for management and scripting through some fancy web-based script engine like Puppet or Chef, but like us you do have this farm of a couple of dozen (or hundreds of) VMs that at some point need their updates and patches, especially when these VMs have been neglected for a couple of weeks during the summer vacations :).

Now how to go about this?

First, from your admin server or PC, copy your SSH key to the target so that you can log in without a password challenge. For this to work under root you need to give root a password (log in on the target server, sudo to root, and set a password with passwd); of course not advised, but very practical in most cases.

Now, to copy your SSH key, simply issue the ssh-copy-id command followed by user@ip (e.g. root@192.168.1.10) and authenticate.

Now make a file with the IPs (or hostnames) of the servers you want to manage; I call it servers.lst.

Then this little (very simple) script does the trick:

#!/bin/bash
# Run the given command on every host in servers.lst, one after the other.
while read name
do
  echo "Processing : $name"
  # -n stops ssh from swallowing the rest of servers.lst on stdin
  ssh -n "$name" "DEBIAN_FRONTEND=noninteractive $1"
done < servers.lst

The -n after ssh redirects its stdin from /dev/null, so ssh does not swallow the rest of servers.lst while the loop is reading it; the DEBIAN_FRONTEND=noninteractive in the command suppresses some debconf warnings (as explained here).

Simply make it executable (chmod +x) and run it as ./script.sh 'apt-get update && apt-get upgrade --yes' (for example).
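If some packages still insist on prompting about changed configuration files, you can pass dpkg options along as well; for example (--force-confold keeps your existing config files instead of asking about them):

./script.sh 'apt-get update && apt-get upgrade --yes -o Dpkg::Options::="--force-confold"'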

Sometimes, less is more..