Set Round Robin and IOPS for Nexenta iSCSI LUNs


Have a ton of iSCSI volumes across a ton of hosts and just not feeling like clicking all those checkboxes to change the path policy? Your solution would be:

esxcli nmp device list | grep naa | grep NEXENTA | awk -F '[()]' '{print $(NF-1)}' | while read disk; do esxcli nmp device setpolicy -d $disk --psp=VMW_PSP_RR; done

Oh, and while you’re at it, set the IOPS chunk size to 1; it speeds things up a bit :)
esxcli nmp device list | grep naa | grep NEXENTA | awk -F '[()]' '{print $(NF-1)}' | while read disk; do esxcli nmp roundrobin setconfig --device $disk --iops 1 --type iops ; done
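To check that the change stuck, something like the loop below should print the round robin configuration per device. It uses the same older esxcli syntax as the commands above (esxcli nmp roundrobin getconfig); on newer ESXi builds the equivalent commands live under esxcli storage nmp, so adjust accordingly.

esxcli nmp device list | grep naa | grep NEXENTA | awk -F '[()]' '{print $(NF-1)}' | while read disk; do
  echo "== $disk =="
  esxcli nmp roundrobin getconfig --device $disk
done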

Simple RSYNC/SSH/ZFS Backup


Hi there,

It has been a while, busy building clouds and stuff. In the Christmas spirit of sharing, today I wanted to share this bash script for ZFS backups. It will basically snapshot the ZFS share, rsync the new files over and then delete the old ZFS snaps.

Simple, yes, but somehow a wheel that keeps being reinvented over and over again, so here’s my version of it. Hope it helps someone save some time.

For this script, the backup server is one of our own TerraNas servers (basically Ubuntu 14 with native ZFS and NFSv4) and the client is an Ubuntu 14 LTS server. The SSH keys have been imported to the client so we can SSH across without passwords and such.

On the ZFS/NFS server side, we make sure NFSv4 ID mapping stays enabled, so usernames cross the wire instead of numeric IDs:

echo N > /sys/module/nfs/parameters/nfs4_disable_idmapping

Share the ZFS dataset with:

zfs set sharenfs=rw=@192.168.33.0/24,insecure,no_root_squash pool1/backup

Where 192.168.33.0/24 is your subnet or host of course.

Then we share it out with zfs share -a and restart NFS with service nfs-kernel-server restart.

We do have one export to localhost in the /etc/exports file, because NFS will complain if this file is empty, but the rest of the exports come from ZFS.
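For reference, that single localhost entry can be as small as the line below (the path is just a placeholder, any existing directory will do); the real exports for pool1/backup come from the sharenfs property above.

/export/placeholder localhost(ro,no_subtree_check)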

On the client we have this little script:

Continue reading “Simple RSYNC/SSH/ZFS Backup”
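The full script is behind the link above. As a rough idea of the moving parts it describes (snapshot, rsync, prune), a minimal sketch could look like the one below; the hostname terranas, the /mnt/backup mount point, the /data source folder and the 14-snapshot retention are assumptions for illustration, not the author's actual values.

#!/bin/bash
# Minimal sketch only -- the author's full script is behind the link above.
# Assumed names (not from the post): backup host "terranas", NFS mount /mnt/backup,
# source data in /data, keep the newest 14 snapshots. The dataset pool1/backup and
# the passwordless root SSH are as described in the post.
BACKUP_HOST="terranas"
DATASET="pool1/backup"
MOUNT="/mnt/backup"
KEEP=14
STAMP=$(date +%Y-%m-%d_%H%M)

# 1. Snapshot the backup dataset so the previous backup state is preserved
ssh root@$BACKUP_HOST "zfs snapshot $DATASET@$STAMP"

# 2. Rsync new and changed files onto the NFS-mounted backup share
rsync -av --delete /data/ "$MOUNT/"

# 3. Delete all but the newest $KEEP snapshots (oldest first)
ssh root@$BACKUP_HOST "zfs list -H -t snapshot -o name -s creation -r $DATASET" \
  | head -n -$KEEP \
  | while read snap; do
      ssh -n root@$BACKUP_HOST "zfs destroy $snap"
    done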

Using Drush to move stuff from dev to prd


Revision 2.0

If you’re hosting Drupal sites you might have searched for scripts to make publishing from development to production easier, and you must have stumbled upon things like aliases, rsync and sqldump to help you along.

To save you some time, here is the short little script we use for propagation to our prod servers.

First you need your aliases.drushrc.php (we have it in the ~/.drush folder)

<?php
$aliases['tbs-dev'] = array(
  'root' => '/sites/tbs/www',
  'uri' => 'http://www.travelbysuitcases.local',
  'path-aliases' => array(
    '%drush' => '/usr/bin',
    '%site' => 'sites/default/',
  ),
  'command-specific' => array(
    'rsync' => array(
      'mode' => 'avz',
      'exclude-paths' => 'files',
    ),
  ),
);

$aliases['tbs-prd'] = array(
  'root' => '/sites/tbs/www',
  'uri' => 'http://www.travelbysuitcases.com',
  'remote-host' => '10.1.4.12',
  'path-aliases' => array(
    '%drush' => '/usr/bin',
    '%site' => 'sites/default/',
  ),
  'command-specific' => array(
    'sql-sync' => array(
      'sanitize' => FALSE,
      'no-ordered-dump' => TRUE,
      'structure-tables' => array(
        'common' => array(
          'cache', 'cache_filter', 'cache_menu', 'cache_page', 'history', 'sessions', 'watchdog',
        ),
      ),
    ),
    'rsync' => array(
      'mode' => 'avz',
      'exclude-paths' => 'files',
    ),
  ),
);

There might be a few things that you notice. We use three-letter shortnames (CMS, INT, TBS and so on) for our websites, followed by either -prd or -dev. Further, we do not sanitize data when we push to the prd database, as we develop with the full dataset; you might want to take a different approach there.

Also, as we found it makes no sense to push our entire media library across on every update, we exclude it from our rsync. Instead we have the /sites/default/files folder mounted as an NFS share on our prod and dev servers (a sample mount is shown below).
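For completeness, that mount on the web servers is just an ordinary NFS entry in /etc/fstab; the hostname and export path below are made up, substitute your own:

nas01:/pool1/tbs-files  /sites/tbs/www/sites/default/files  nfs  rw,hard,intr  0 0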

Adjust where necessary.

Now for our little bash script that moves stuff around:

#!/bin/bash
#
# PRD 2 DEV Drush by j.kool@integrative.it
# Revision 2.0
#

STARTTIME=$(date +%s)
if [ -z "$1" ] ; then 
 echo "Usage : prd2dev 'shortname'"
 exit 1
fi

cd /sites

# Check if shortname is a valid alias (~/.drush/aliases.drushrc.php)
drush sa > shortnames.txt
if grep -Fxq "@$1-prd" shortnames.txt; then
 # Get the DEV and PRD paths
 dpath=$(drush @$1-dev dd)
 ppath=$(drush @$1-prd dd)
 # Get DEV and PRD URL for later (uuid)
 devurl=$(drush sa --table |grep "@$1-dev" |awk '{print $3}')
 prdurl=$(drush sa --table |grep "@$1-prd" |awk '{print $3}')
 date=`date +%Y-%m-%d.%H-%M-%S`
 # Clean cache / set maintenance and backup the DEV site
 drush @$1-prd vset maintenance_mode 1 &
 drush @$1-dev vset maintenance_mode 1 &
 wait
 echo @$1-prd and @$1-dev in maintenance mode
 drush @$1-prd cc all &
 drush @$1-dev cc all &
 wait
 echo @$1-prd and @$1-dev cache cleared
 rm /sites/backup/$1-prd-*.tar.gz 
 drush @$1-dev ard --destination=/sites/backup/$1-prd-$date.tar.gz --tar-options="--exclude=files"
 
 # Save the dPath and pPath and Build.txt
 echo $dpath > $dpath/dev-path.txt
 echo $ppath > $dpath/prd-path.txt
 echo $date > $dpath/build-$date.txt
 echo Local backup made of @$1-dev to /sites/backup/$1-prd-$date.tar.gz 
 #RSYNC and SQL Sync the PRD to DEV (remove all obsolete files)
 drush --yes -v rsync @$1-prd @$1-dev &
 drush --yes sql-sync @$1-prd @$1-dev &
 wait
 echo SQL and RSYNC completed
 # Set css and js compression to 0
 drush @$1-dev vset preprocess_css 0 &
 drush @$1-dev vset preprocess_js 0 &
 # Disable googleanalytics and mollom - set UUID base
 if [ -d $dpath/sites/all/modules/mollom ]; then
 drush @$1-dev --yes dis mollom &
 fi
 if [ -d $dpath/sites/all/modules/google_analytics ]; then
 drush @$1-dev --yes dis googleanalytics &
 fi
 if [ -d $dpath/sites/all/modules/uuid ]; then
 drush @$1-dev vset uuid_redirect_external_base_url $devurl &
 fi
 wait
 echo Module maintenance completed
 drush @$1-prd vset maintenance_mode 0 &
 drush @$1-dev vset maintenance_mode 0 &
 wait
 echo @$1-prd and @$1-dev Back ON-LINE
else
 echo "$1 not found as Shortname in alias list"
 exit 1
fi
ENDTIME=$(date +%s)
echo "$1-prd deployed to $1-dev in $(($ENDTIME - $STARTTIME)) seconds... COPY Backup in background"
scp -c arcfour128 /sites/backup/$1-prd-$date.tar.gz netbackup@192.168.1.199:~/$1-prd-$date.tar.gz &
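Saved as, say, prd2dev.sh and made executable, it is called with the three-letter shortname from the alias file:

chmod +x prd2dev.sh
./prd2dev.sh tbs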


It’s no rocket science: we first check if the shortname is available on the server, then push the URLs and paths into some variables as we might need them later. We also take a backup of the dev side before it gets overwritten, rsync and sql-sync everything over, do some maintenance on modules that should stay disabled on dev (things like Mollom and GA), and finally disable the maintenance mode, et c’est ça.

As of revision 2.0 I push some jobs in parallel on dev and prd to speed everything up a bit (the & and wait combinations). Also, we move our backups to an NFS share.

The script above is PRD2DEV; DEV2PRD is basically the reverse of this.

I could go into detail on each little piece of bash code, but it’s pretty self-explanatory, I think.

Hope it saves someone some time

Cheers

Mass Update Ubuntu (debian) servers without puppets or chefs


OK, you do not want to invest the exorbitant fee for management and scripting through some fancy web-based script engines like Puppet or Chef, but like us you do have this farm of a couple of dozen (or hundreds of) VMs that at some point need their updates and patches, especially when these VMs have been neglected for a couple of weeks during the summer vacations :).

Now how to go about this?

Firstly, from your admin server or PC, copy your SSH key to the target so that you can log in without a password challenge. For this to work under root you need to give root a password (log in on the target server, sudo, and set a password with passwd); of course not advised, but very practical in most cases.

Now to copy your SSH key, simply issue the ssh-copy-id command followed by user@ip (e.g. root@192.168.1.10) and authenticate.

Now make a file with the IPs (or hostnames) of the servers you want to manage; I call it servers.lst.
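A servers.lst is nothing more than one IP or hostname per line, for example (addresses made up):

192.168.1.10
192.168.1.11
web01.example.local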

Then this little (very simple) script does the trick:

while read name
do
  echo "Processing: $name"
  ssh -n $name "DEBIAN_FRONTEND=noninteractive $1"
done < servers.lst

The -n after ssh redirects its stdin from /dev/null, so ssh does not swallow the rest of servers.lst from the while loop; the DEBIAN_FRONTEND=noninteractive in the command suppresses some debconf warnings (as explained here).

Simply make it executable (chmod +x) and run it as ./script.sh 'apt-get update && apt-get upgrade --yes' (for example).

Sometimes, less is more..

Graph ZFS details with Cacti (Nexenta)


Today, as promised a long time ago and therefore seriously overdue, a short write-up on how to graph ZFS details from Nexenta in Cacti.

On the Nexenta host

Firstly, add some extends to your snmpd.conf (here is some background on how to do this). To do this, log in (ssh) as admin to your Nexenta box, then obtain root privileges (su).

Now start the NMC and run the command setup network service snmp-agent edit-settings. This allows you to edit the snmpd.conf file on a Nexenta host. Do not go about editing these files directly in /etc/snmp, because it will not work that way.

Now add your extends to the snmpd.conf file. I googled a bit to get some commands that return ZFS details. The snmp-agent edit-settings command will open the snmpd.conf file in vi, so remember: a is to append, :x is to save, dd deletes a line.

We added the following extends to our Nexenta server:

extend .1.3.6.1.4.1.2021.88 zpool_name /bin/bash -c "zpool list -H -o name"
extend .1.3.6.1.4.1.2021.88 zpool_snap /bin/bash -c "zpool list -Ho name|for zpool in `xargs`;do zfs get -rHp -o value usedbysnapshots $zpool|awk -F: '{sum+=$1} END{print sum}';done"
extend .1.3.6.1.4.1.2021.88 zpool_used /bin/bash -c "zpool list -Ho name|xargs zfs get -Hp -o value used"
extend .1.3.6.1.4.1.2021.88 zpool_data_used /bin/bash -c "zpool list -Ho name|for zpool in `xargs`;do snap=`zfs get -rHp -o value usedbysnapshots $zpool|awk -F: '{sum+=$1} END{print sum}'`;pool=`zfs get -Hp -o value used $zpool`; echo $pool $snap|awk '{print (\$1-\$2);}';done"
extend .1.3.6.1.4.1.2021.88 zpool_available /bin/bash -c "zpool list -Ho name|xargs zfs get -Hp -o value available"
extend .1.3.6.1.4.1.2021.88 zpool_capacity /bin/bash -c "zpool list -H -o capacity"
extend .1.3.6.1.4.1.2021.85 arc_meta_max /bin/bash -c "echo ::arc | mdb -k| grep arc_meta_max|tr -cd '[:digit:]'"
extend .1.3.6.1.4.1.2021.85 arc_meta_used /bin/bash -c "echo ::arc | mdb -k| grep arc_meta_used|tr -cd '[:digit:]'"
extend .1.3.6.1.4.1.2021.85 arc_size /bin/bash -c "echo ::arc | mdb -k| grep -w size|tr -cd '[:digit:]'"
extend .1.3.6.1.4.1.2021.85 arc_meta_limit /bin/bash -c "echo ::arc | mdb -k| grep arc_meta_limit|tr -cd '[:digit:]'"
extend .1.3.6.1.4.1.2021.85 arc_meta_c_max /bin/bash -c "echo ::arc | mdb -k| grep c_max|tr -cd '[:digit:]'"
extend .1.3.6.1.4.1.2021.89 arc_hits /bin/bash -c "kstat -p ::arcstats:hits| cut -s -f 2"
extend .1.3.6.1.4.1.2021.89 arc_misses /bin/bash -c "kstat -p ::arcstats:misses| cut -s -f 2"
extend .1.3.6.1.4.1.2021.89 arc_l2_hits /bin/bash -c "kstat -p ::arcstats:l2_hits| cut -s -f 2"
extend .1.3.6.1.4.1.2021.89 arc_l2_misses /bin/bash -c "kstat -p ::arcstats:l2_misses| cut -s -f 2"
extend .1.3.6.1.4.1.2021.90 vopstats_zfs_nread /bin/bash -c "kstat -p ::vopstats_zfs:nread | cut -s -f 2"
extend .1.3.6.1.4.1.2021.90 vopstats_zfs_nwrite /bin/bash -c "kstat -p ::vopstats_zfs:nwrite | cut -s -f 2"
extend .1.3.6.1.4.1.2021.90 vopstats_zfs_read_bytes /bin/bash -c "kstat -p ::vopstats_zfs:read_bytes | cut -s -f 2"
extend .1.3.6.1.4.1.2021.90 vopstats_zfs_write_bytes /bin/bash -c "kstat -p ::vopstats_zfs:write_bytes | cut -s -f 2"

I know that with extends you normally do not have to provide an OID, but I like to provide them anyway so I know where to look for the SNMP OID.

After adding these, save the file and say yes to the question to reload the file after the save. Now check the configuration with:

setup network service snmp-agent confcheck

and then restart the snmpd with:

setup network service snmp-agent restart

Your work on the Nexenta host is now done; you can check the settings with an snmpwalk command to see if it actually works.
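For example, from any box with net-snmp installed, something like this should dump the extend output (the community string and address are placeholders; if the NET-SNMP MIBs are not installed, walk the numeric tree under .1.3.6.1.4.1.8072.1.3.2 instead):

snmpwalk -v2c -c public 192.168.1.50 NET-SNMP-EXTEND-MIB::nsExtendOutputFull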

On CACTI

I assume you have an SNMP enabled device set up to point to your Nexenta server, if not this would be a good time to do so. SNMP V2c works for me.

Now import the XML graph templates at the end of this post into your Cacti server (I got these from this forum, but had to modify them quite a bit for them to get the data from the correct SNMP OIDs).

Now add these templates to your device and create the graphs. If you are lucky you will get some pretty pictures with some useful information; especially the one on the L2ARC cache turned out to be quite useful to us.

Good luck, and if you have any questions, post them.

 

Cheers

Continue reading “Graph ZFS details with Cacti (Nexenta)”