Using Drush to move stuff from dev to prd


If you’re hosting Drupal sites you have probably searched for scripts to make publishing from development to production easier, and you will have stumbled upon things like aliases, rsync and sql-dump to help you along.

To save you some time, here is the short little script we use for propagation to our prod servers.

First you need your aliases.drushrc.php (we have it in the ~/.drush folder)

$aliases['cms-dev'] = array(
 'root' => '/sites/cms-integrative-local/www',
 'uri' => 'http://cms.integrative.local',
 'path-aliases' => array(
  '%drush' => '/usr/bin',
  '%site' => 'sites/default/',
 ),
);

$aliases['cms-prd'] = array(
 'root' => '/sites/cms/www',
 'uri' => 'http://cms.integrative.it',
 'remote-host' => '10.1.4.14',
 'path-aliases' => array(
  '%drush' => '/usr/bin',
  '%site' => 'sites/default/',
 ),
 'command-specific' => array(
  'sql-sync' => array(
   'sanitize' => FALSE,
   'no-ordered-dump' => TRUE,
   'structure-tables' => array(
    'common' => array(
     'cache', 'cache_filter', 'cache_menu', 'cache_page', 'history', 'sessions', 'watchdog',
    ),
   ),
  ),
 ),
);

A few things you might notice: we use 3-letter shortnames (CMS, INT, TBS and so on) for our websites, followed by either -prd or -dev. Further, we do not sanitize data when we push to the prd database, as we develop with the full dataset; you might want a different approach there. Adjust where necessary.
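You can quickly verify that the aliases are picked up before wiring them into any script; the alias name below is just the example from above, adjust to your own shortnames.

drush sa

drush @cms-dev status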

Now our little bash script that moves stuff around

#!/bin/bash
STARTTIME=$(date +%s)
if [ -z "$1" ] ; then 
 echo "Usage : dev2prd 'shortname'"
 exit 1
fi

cd /sites

# Check if shortname is a valid alias (~/.drush/aliases.drushrc.php)
drush sa > shortnames.txt
if grep -Fxq "@$1-prd" shortnames.txt; then
 # Get the DEV and PRD paths
 dpath=$(drush @$1-dev dd)
 ppath=$(drush @$1-prd dd)
 # Get DEV and PRD URL for later (uuid)
 devurl=$(drush sa --table |grep "@$1-dev" |awk '{print $3}')
 prdurl=$(drush sa --table |grep "@$1-prd" |awk '{print $3}')
 date=`date +%Y-%m-%d.%H:%M:%S`
 # Clear caches, set maintenance mode on both sites, and back up the PRD site
 drush @$1-prd vset maintenance_mode 1
 drush @$1-dev vset maintenance_mode 1
 drush @$1-prd cc all
 drush @$1-dev cc all
 drush @$1-prd ard --destination=/sites/backup/$1-prd-$date.tar.gz
 
 # Save the dPath and pPath and Build.txt
 echo $dpath > $dpath/dev-path.txt
 echo $ppath > $dpath/prd-path.txt
 echo $date > $dpath/build-$date.txt
 
 #RSYNC and SQL Sync the PRD to DEV (remove all obsolete files)
 drush --yes -v rsync @$1-dev @$1-prd --delete
 drush --yes sql-sync @$1-dev @$1-prd
 # Enable css and js aggregation on PRD
 drush @$1-prd vset preprocess_css 1
 drush @$1-prd vset preprocess_js 1
 # Enable mollom and googleanalytics where present - set UUID base URL
 if [ -d $dpath/sites/all/modules/mollom ]; then drush @$1-prd --yes en mollom; fi
 if [ -d $dpath/sites/all/modules/google_analytics ]; then drush @$1-prd --yes en googleanalytics; fi
 if [ -d $dpath/sites/all/modules/uuid ]; then drush @$1-prd vset uuid_redirect_external_base_url $prdurl; fi
 # Disable maintenance mode
 drush @$1-prd vset maintenance_mode 0
 drush @$1-dev vset maintenance_mode 0
else
 echo "$1 not found as Shortname in alias list"
 exit 1
fi
ENDTIME=$(date +%s)
echo "$1-dev deployed to $1-prd in $(($ENDTIME - $STARTTIME)) seconds..."
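To use it, save the script under the name from the usage message (dev2prd is what we call it; pick whatever you like), make it executable and call it with a shortname, for example:

chmod +x dev2prd
./dev2prd cms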

It’s no rocket science: we first check if the shortname is available on the server, then stash the URLs and paths in some variables as we might need them later. We also take a backup of the prod side, rsync and sql-sync everything over, re-enable the modules that stay off in dev (things like mollom and GA), and finally disable maintenance mode, et c’est ça.

I could go into detail on each little piece of bash code, but it’s pretty self-explanatory, I think.

Hope it saves someone some time

Cheers

Mass Update Ubuntu (debian) servers without puppets or chefs


OK, you do not want to invest the exorbitant fee for management and scripting through some fancy web-based script engine like Puppet or Chef, but like us you do have this farm of a couple of dozen (or hundreds) of VMs that at some point need their updates and patches, especially when these VMs have been neglected for a couple of weeks during the summer vacations :).

Now how to go about this?

Firstly, from your admin server or PC, copy your SSH public key to the target so that you can log in without a password challenge. For this to work as root you need to give root a password (log in on the target server, sudo to root, and set a password with passwd). Of course not advised, but very practical in most cases.

Now to copy your SSH key, simply issue the ssh-copy-id command followed by uid@ip (e.g. root@192.168.1.10) and authenticate.

Now make a file with the IPs (or hostnames) of the servers you want to manage; I call it servers.lst.
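If you have a lot of targets, you can push your key to all of them with a simple loop over that same file; a minimal sketch, assuming the root passwords are set as described above:

for name in $(cat servers.lst)
do
 ssh-copy-id "root@$name"
done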

Then this little (very simple) script does the trick

#!/bin/bash
while read name
do
 echo "Processing : " $name
 ssh -n $name "DEBIAN_FRONTEND=noninteractive $1"
done < servers.lst

The -n after ssh redirects ssh's stdin from /dev/null, so it does not swallow the rest of servers.lst that the while-read loop is consuming; the DEBIAN_FRONTEND=noninteractive in the command suppresses some debconf warnings (as explained here).

Simply make it executable (chmod +x) and run as ./script.sh 'apt-get update && apt-get upgrade --yes' (for example).

Sometimes, less is more..

Graph ZFS details with Cacti (Nexenta)


Today, as promised a long time ago, and therefore seriously overdue, a short writeup on how to graph ZFS details from Nexenta on Cacti.

On the Nexenta host

Firstly, add some extends to your snmpd.conf (here is some background on how to do this). To do this, log in (ssh) as admin to your Nexenta box, then obtain root privileges (su).

Now start the nmc and run the command setup network service snmp-agent edit-settings. This lets you edit the snmpd.conf file on a Nexenta host. Do not go editing these files directly in /etc/snmp, because it will not work that way.

Now add your extends to the snmpd.conf file. I googled a bit to get some commands that return ZFS details; for our servers we added the following. The snmp-agent edit-settings command will open the snmpd.conf file in vi, so remember: a is to append, :x is to save, dd is to delete a line.

We added the following extends to our Nexenta server

extend .1.3.6.1.4.1.2021.88 zpool_name /bin/bash -c "zpool list -H -o name"
extend .1.3.6.1.4.1.2021.88 zpool_snap /bin/bash -c "zpool list -Ho name|for zpool in `xargs`;do zfs get -rHp -o value usedbysnapshots $zpool|awk -F: '{sum+=$1} END{print sum}';done"
extend .1.3.6.1.4.1.2021.88 zpool_used /bin/bash -c "zpool list -Ho name|xargs zfs get -Hp -o value used"
extend .1.3.6.1.4.1.2021.88 zpool_data_used /bin/bash -c "zpool list -Ho name|for zpool in `xargs`;do snap=`zfs get -rHp -o value usedbysnapshots $zpool|awk -F: '{sum+=$1} END{print sum}'`;pool=`zfs get -Hp -o value used $zpool`; echo $pool $snap|awk '{print (\$1-\$2);}';done"
extend .1.3.6.1.4.1.2021.88 zpool_available /bin/bash -c "zpool list -Ho name|xargs zfs get -Hp -o value available"
extend .1.3.6.1.4.1.2021.88 zpool_capacity /bin/bash -c "zpool list -H -o capacity"
extend .1.3.6.1.4.1.2021.85 arc_meta_max /bin/bash -c "echo ::arc | mdb -k| grep arc_meta_max|tr -cd '[:digit:]'"
extend .1.3.6.1.4.1.2021.85 arc_meta_used /bin/bash -c "echo ::arc | mdb -k| grep arc_meta_used|tr -cd '[:digit:]'"
extend .1.3.6.1.4.1.2021.85 arc_size /bin/bash -c "echo ::arc | mdb -k| grep -w size|tr -cd '[:digit:]'"
extend .1.3.6.1.4.1.2021.85 arc_meta_limit /bin/bash -c "echo ::arc | mdb -k| grep arc_meta_limit|tr -cd '[:digit:]'"
extend .1.3.6.1.4.1.2021.85 arc_meta_c_max /bin/bash -c "echo ::arc | mdb -k| grep c_max|tr -cd '[:digit:]'"
extend .1.3.6.1.4.1.2021.89 arc_hits /bin/bash -c "kstat -p ::arcstats:hits| cut -s -f 2"
extend .1.3.6.1.4.1.2021.89 arc_misses /bin/bash -c "kstat -p ::arcstats:misses| cut -s -f 2"
extend .1.3.6.1.4.1.2021.89 arc_l2_hits /bin/bash -c "kstat -p ::arcstats:l2_hits| cut -s -f 2"
extend .1.3.6.1.4.1.2021.89 arc_l2_misses /bin/bash -c "kstat -p ::arcstats:l2_misses| cut -s -f 2"
extend .1.3.6.1.4.1.2021.90 vopstats_zfs_nread /bin/bash -c "kstat -p ::vopstats_zfs:nread | cut -s -f 2"
extend .1.3.6.1.4.1.2021.90 vopstats_zfs_nwrite /bin/bash -c "kstat -p ::vopstats_zfs:nwrite | cut -s -f 2"
extend .1.3.6.1.4.1.2021.90 vopstats_zfs_read_bytes /bin/bash -c "kstat -p ::vopstats_zfs:read_bytes | cut -s -f 2"
extend .1.3.6.1.4.1.2021.90 vopstats_zfs_write_bytes /bin/bash -c "kstat -p ::vopstats_zfs:write_bytes | cut -s -f 2"

I know that with extends you normally do not have to provide an OID, but I like to provide them anyway so I know where to look for the resulting SNMP OID.

After adding these, save the file and say yes to the question to reload the file after the save. Now check the configuration with:

setup network service snmp-agent confcheck

and then restart the snmpd with:

setup network service snmp-agent restart

Your work on the Nexenta host is now done; you can check the settings with an snmpwalk command to see if it actually works.
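For example, from your Cacti (or any other) server, walking one of the extend OIDs should return the zpool figures; the community string public and hostname nexenta01 are placeholders for your own values:

snmpwalk -v2c -c public nexenta01 .1.3.6.1.4.1.2021.88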

On CACTI

I assume you have an SNMP enabled device set up to point to your Nexenta server; if not, this would be a good time to do so. SNMP v2c works for me.

Now import the XML graph templates at the end of this post into your Cacti server (I got these from this forum but had to modify them quite a bit to get the data from the correct SNMP OIDs).

Now add these templates to your device and create the graphs. If you are lucky you will get some pretty pictures with some useful information; especially the one on the L2ARC cache turned out to be quite useful to us.

Good luck, and if you have any questions, post them.

Cheers


VMware tools on Ubuntu 12.04 LTS from Repository


Task for today: equip a couple of dozen VMs with VMware Tools. I hate manual labor, so I hacked up this little script.

Make sure current VMware Tools or open-vm-tools are uninstalled and purged, otherwise this will crap out.
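For example, to get rid of the distribution's open-vm-tools first (a minimal sketch; if you installed VMware Tools manually before, the cleanup will look different):

apt-get remove --purge open-vm-tools --yes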

# Fetch the key

apt-get install python-software-properties --yes
wget http://packages.vmware.com/tools/keys/VMWARE-PACKAGING-GPG-RSA-KEY.pub
apt-key add VMWARE-PACKAGING-GPG-RSA-KEY.pub
rm VMWARE-PACKAGING-GPG-RSA-KEY.pub

# Add the repo to APT and remove sources (we remove all deb-src entries, but you can choose to
# remove only the VMware sources, since they are not published and will end up in an error)

apt-add-repository 'deb http://packages.vmware.com/tools/esx/5.0latest/ubuntu precise main' && wait
# We don't want any sources by default
sed -i 's/deb-src/#deb-src/g' /etc/apt/sources.list

# Install the tools

apt-get update &&
# Check kernel version, we use 12.04 LTS ONLY; esx-nox is NO GFX support, as it should be
apt-get install vmware-tools-esx-kmods-3.2.0-23-generic vmware-tools-esx-nox --yes &&
apt-get upgrade --yes && wait

C’est ça, all done.

How to install ESXi on those darn cheap-ass SanDisk Cruzer fit sticks


It took me some time to figure out, but for my next batch of ESXi hosts for our cloud platform I simply refused to go to the Wan Chai computer center again to buy new USB sticks, as I still had a batch of those SanDisk Cruzer Fit sticks lying around from my previous attempt. Last time, because of time constraints, I had to take the easy route and buy HP-branded stuff for our army of BL495s. But today we received another 16 or so 495ers to fill up another enclosure, and as said, I refused to go out to the computer center to buy sticks again; it was bloody pouring out there :)

So I dug into this, and with a little help from Google and the syslog, I figured it out. It seems that ESXi wants to format the USB stick with a GPT partition, which some sticks, like these SanDisk Cruzers, won't take. To force the installer to use classic MBR instead, on the install boot screen press Shift-O to edit the boot options of runweasel.

Remove whatever you see there and just replace it with runweasel formatwithmbr, press enter, and voilà: your install will proceed, format the stick, install the binaries and, most importantly, boot from it on the next reboot.

Cisco 7961 headaches


Man, I never liked Cisco stuff, and after today my esteem for the SF router and switch giant has dropped another notch. Why? Well, try to get a Cisco 7961 (or 7960, for that matter) to work with Asterisk, then you'll understand. So, to ease the burden for some of you out there trying to do the same, here is the Asterisk / FreePBX template that finally made it work for me.

We used the freely available SIP41.8-4-3S firmware; just create yourself an account on Cisco support and fetch it. We tried some of the 9.x versions without any luck, so stick to the 8.x versions, I'd say; works just fine.

Also, as other blogs outline in great detail, <natEnabled>false</natEnabled> seems to be quite important :)

Well, good luck…
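For context, in case you are provisioning by hand rather than through the FreePBX endpoint manager templating: the 79x1 phones fetch their rendered configuration from your TFTP server as SEP<MAC>.cnf.xml, so the file for one phone would end up as something like (MAC address is a placeholder):

/tftpboot/SEP001A2B3C4D5E.cnf.xml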

<?xml version="1.0" ?>
<device>
 <deviceProtocol>SIP</deviceProtocol>
 <sshUserId>root</sshUserId>
 <sshPassword>KHGSHJGD##J</sshPassword>
 <devicePool>
 <dateTimeSetting>
 <dateTemplate>{$date_template}</dateTemplate>
 <timeZone>China Standard/Daylight Time</timeZone>
 <ntps>
 <ntp>
 <name>192.168.1.10</name>
 <ntpMode>Unicast</ntpMode>
 </ntp>
 </ntps>
 </dateTimeSetting>
 <callManagerGroup>
 <members>
 <member priority="0">
 <callManager>
 <processNodeName>{$server.ip.1}</processNodeName>
 <ports>
 <sipPort>5060</sipPort>
 </ports>
 </callManager>
 </member>
 </members>
 </callManagerGroup>
 </devicePool>
 <sipProfile>
 <natEnabled>false</natEnabled>
 <natAddress></natAddress>
 <sipProxies>
 <registerWithProxy>true</registerWithProxy>
 <outboundProxy>{$outbound_host.line.1}</outboundProxy>
 <outboundProxyPort>{$outbound_port.line.1}</outboundProxyPort>
 <backupProxy>{$server_host.line.1}</backupProxy>
 <backupProxyPort>{$server_port.line.1}</backupProxyPort>
 </sipProxies>
 <preferredCodec>{$preferredcodec}</preferredCodec>
 <phoneLabel>{$displayname.line.1}</phoneLabel>
 <stutterMsgWaiting>1</stutterMsgWaiting>
 <callStats>true</callStats>
 <silentPeriodBetweenCallWaitingBursts>10</silentPeriodBetweenCallWaitingBursts>
 <disableLocalSpeedDialConfig>false</disableLocalSpeedDialConfig>
 <startMediaPort>16384</startMediaPort>
 <stopMediaPort>32766</stopMediaPort>
 <sipLines>
{line_loop}
 <line button="{$line}">
 <featureID>9</featureID>
 <featureLabel>{$username}</featureLabel>
 <proxy>{$server_host}</proxy>
 <port>{$server_port}</port>
 <name>{$username}</name>
 <authName>{$username}</authName>
 <authPassword>{$secret}</authPassword>
 <messageWaitingLampPolicy>3</messageWaitingLampPolicy>
 <messagesNumber>{$voicemail_extension}</messagesNumber>
 <forwardCallInfoDisplay>
 <callerName>true</callerName>
 <callerNumber>true</callerNumber>
 <redirectedNumber>false</redirectedNumber>
 <dialedNumber>true</dialedNumber>
 </forwardCallInfoDisplay>
 </line>
{/line_loop}
 </sipLines>
 <dialTemplate>dialplan.xml</dialTemplate>
 </sipProfile>
 {$image_name}
 {$tonescheme}
 <vendorConfig>
 <disableSpeaker>false</disableSpeaker>
 <disableSpeakerAndHeadset>false</disableSpeakerAndHeadset>
 <pcPort>1</pcPort>
 <settingsAccess>1</settingsAccess>
 <garp>0</garp>
 <voiceVlanAccess>0</voiceVlanAccess>
 <videoCapability>0</videoCapability>
 <autoSelectLineEnable>0</autoSelectLineEnable>
 <webAccess>0</webAccess>
 <spanToPCPort>1</spanToPCPort>
 <loggingDisplay>1</loggingDisplay>
 <loadServer></loadServer>
 </vendorConfig>
</device>

No more Nexenta CPU overload


We have all seen it: NexentaStor eating away at the CPU in an ESXi environment. It's not actually consuming the CPU cycles, but since Illumos reserves cycles (basically telling the CPU to go through a zillion NOOPs) the CPU gets thrashed, eating up to 15% of each core. Not a pretty sight, and certainly something you want to get rid of, since this seriously screws up your ESXi resource scheduler.

How to go about this is actually quite easy: just disable the nmdtrace service. One small downside, though: removing/disabling this service will kill all performance stats in the NMS. Not that they are of any use anyway; they are nothing short of pathetic (sorry Nexenta). To get around that, I will describe in a later post how to extend SNMP to get proper statistics into something like Cacti.

First, let's free up those NOOP cycles and kill nmdtrace. Before doing so you want to remove its dependency from the NMV, so here goes (all as su on the console of your Nexenta box, of course):

svccfg -s nmv delpg nmdtrace

Check the NMV service state:

svcs nmv

And if necessary, clear the failure state with:

svcadm clear nmv

svcadm refresh nmv

Now we are good to go to kill the nmdtrace process by issuing

svcadm disable -s nmdtrace
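A quick check that it is really gone; svcs should now report the service as disabled:

svcs nmdtrace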

If you would like to enable it again (god knows why), just issue

svcadm enable -s nmdtrace

See the pretty graph :)

ESXi CPU

Local Ubuntu Mirror


We were getting tired of our Ubuntu servers reaching out to the internet every day/week to fetch updates and patches. Man, if you thought Windoze was bad, these Ubuntu servers know their way around consuming bandwidth as well. In the Windoze world we had something called WSUS (if it is still called that?), so I wanted something similar to keep roughly 150+ servers from going out to fetch their bits.

The APT-MIRROR package to the rescue. Setup is quite easy, but I did not find any cookbook that fitted all my needs.

First off, I need to store these mirrors on a nice de-duplicated NFS server (Nexenta/ZFS) to keep them from consuming too many GBs. So what we did is create an NFS export on one of our SANs with root access for the mirror server; simple enough.

On the server side

On the server we install APT-MIRROR with a simple apt-get command;

apt-get install apt-mirror

This installs the package and creates the structure in /var/spool/apt-mirror

Now we mount our NFS export on /var/spool/apt-mirror/mirror via fstab in the usual way:

<hostname>:/volumes/pool1/mirror /var/spool/apt-mirror/mirror nfs auto,noatime,nolock,bg,nfsvers=3,intr,tcp,actimeo=1800 0 0

(yes, our NFS is still v3, since it serves some ESXi hosts as well)

Now it is time to configure our /etc/apt/mirror.list, we ended up with

############# config ##################
#
set base_path /var/spool/apt-mirror
#
set mirror_path $base_path/mirror
set skel_path $base_path/skel
set var_path $base_path/var
set cleanscript $var_path/clean.sh
set postmirror_script $var_path/postmirror.sh
set run_postmirror 1
set nthreads 10
set _tilde 5
set defaultarch amd64

############## end config ##############

deb http://tw.archive.ubuntu.com/ubuntu precise main restricted universe multiverse
deb http://tw.archive.ubuntu.com/ubuntu precise-security main restricted universe multiverse
deb http://tw.archive.ubuntu.com/ubuntu precise-updates main restricted universe multiverse
deb http://tw.archive.ubuntu.com/ubuntu precise-proposed main restricted universe multiverse
deb http://tw.archive.ubuntu.com/ubuntu precise-backports main restricted universe multiverse
deb http://tw.archive.ubuntu.com/ubuntu precise main main/debian-installer restricted restricted/debian-installer universe universe/debian-installer multiverse multiverse/debian-installer

clean http://tw.archive.ubuntu.com/ubuntu

Now, how did we decide on http://tw.archive.ubuntu.com/ubuntu as the source to mirror? For us this was the best performing (not necessarily closest) mirror that is up to date. A way to determine your fastest mirror is the handy netselect tool; how to install and use it:

First fetch the latest binary (0.3.ds1-25 at the time of writing) and install it:

wget http://ftp.us.debian.org/debian/pool/main/n/netselect/netselect_0.3.ds1-25_amd64.deb
dpkg -i netselect_0.3.ds1-25_amd64.deb

Then throw it against the mirror lists from launchpad, and add some grep magic to make the output readable

netselect -v -s10 -t20 `wget -q -O- https://launchpad.net/ubuntu/+archivemirrors | grep -P -B8 "statusUP|statusSIX" | grep -o -P "(f|ht)tp.*\"" | tr '"\n' ' '`

For us it came back with the Hong Kong Chinese University and some others, with the Taiwan repo in 3rd place. We decided to use the Taiwan one since the Hong Kong archives tend not to be up to date; maybe we should open up ours to have an up-to-date repo down here :). Anyway, you should verify your results against the mirror list on https://launchpad.net/ubuntu/+archivemirrors to get a close and up-to-date repo.

It is now time to do a first time population of the repository by manually executing

apt-mirror -c apt-mirror

This can take some time, as the full repositories will be pulled down.

Now the newly downloaded repo can be made available through a web server of choice (Apache for us). We just linked the ubuntu folder into /var/www; we could also share the folder out as an rsync repo on the Nexenta storage server, maybe at a later time, not today.

ln -s /var/spool/apt-mirror/mirror/tw.archive.ubuntu.com/ubuntu /var/www/ubuntu

All that is left to do on the server side is to add the update script to your crontab for scheduled execution. This is made easy, as the package provides a schedule file in /etc/cron.d/apt-mirror; the only thing you need to do there is uncomment the line.
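For reference, the shipped cron file looks roughly like the line below (the exact schedule in your package version may differ); just remove the leading # to activate it:

#0 4 * * * apt-mirror /usr/bin/apt-mirror > /var/spool/apt-mirror/var/cron.log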

On the Client Side

On the client side we need to edit /etc/apt/sources.list. Here we commented out every deb repo and left the deb-src lines pointing to the US repositories. Then at the top of the file we added the deb repos for our own mirror.

deb http://<domain>/ubuntu precise main restricted universe multiverse
deb http://<domain>/ubuntu precise-security main restricted universe multiverse
deb http://<domain>/ubuntu precise-updates main restricted universe multiverse
deb http://<domain>/ubuntu precise-proposed main restricted universe multiverse
deb http://<domain>/ubuntu precise-backports main restricted universe multiverse

#deb http://us.archive.ubuntu.com/ubuntu/ precise main restricted
deb-src http://us.archive.ubuntu.com/ubuntu/ precise main restricted
#deb http://us.archive.ubuntu.com/ubuntu/ precise-updates main restricted
deb-src http://us.archive.ubuntu.com/ubuntu/ precise-updates main restricted
#deb http://us.archive.ubuntu.com/ubuntu/ precise universe
deb-src http://us.archive.ubuntu.com/ubuntu/ precise universe
#deb http://us.archive.ubuntu.com/ubuntu/ precise-updates universe
deb-src http://us.archive.ubuntu.com/ubuntu/ precise-updates universe
#deb http://us.archive.ubuntu.com/ubuntu/ precise multiverse
deb-src http://us.archive.ubuntu.com/ubuntu/ precise multiverse
#deb http://us.archive.ubuntu.com/ubuntu/ precise-updates multiverse
deb-src http://us.archive.ubuntu.com/ubuntu/ precise-updates multiverse
deb-src http://us.archive.ubuntu.com/ubuntu/ precise-backports main restricted universe multiverse
#deb http://us.archive.ubuntu.com/ubuntu/ precise-backports main restricted universe multiverse
#deb http://security.ubuntu.com/ubuntu precise-security main restricted
deb-src http://security.ubuntu.com/ubuntu precise-security main restricted
#deb http://security.ubuntu.com/ubuntu precise-security universe
deb-src http://security.ubuntu.com/ubuntu precise-security universe
#deb http://security.ubuntu.com/ubuntu precise-security multiverse
deb-src http://security.ubuntu.com/ubuntu precise-security multiverse

Now run an apt-get update / apt-get upgrade, and everything should be coming from your own repo.

As always, hope this helps someone save time. If it does not work out for you, please leave a comment and we'll try to help where possible.

- Fault

Cacti script to get temperatures from an ILO device.


We wanted to experiment with different airflow scenarios in our DC and did not want to get into expensive data center power management tools, so we decided to use a simple Cacti server and the ILO temperature measurements from servers in certain spots in the racks to get a historic graph of inflow air temperatures; you get the idea.

But man, Cacti can be a pain in the butt! And so can the HP ILO SNMP implementation. As you might have noticed, the SNMP OIDs do not expose the measured temperature sensors, sigh… To get around this hurdle we had to hack up a quick script, and since it took me quite some time to google all the components together, I thought: why not put them here for others to enjoy, saving the world some time.

Our setup: a simple vanilla Ubuntu 12.04 LTS VM, with Cacti from the Ubuntu repo (0.8.7i); I assume I don't need to explain how to set this up. To get things working, first we get the HP Lights-Out XML PERL Scripting Sample for Linux (I fetched this one), untar and install.

Copy the files Get_EmHealth.xml and locfg.pl to the /usr/share/cacti/site/scripts folder and chown them to root (as Cacti runs its scripts as root).

Add your ILO credentials to Get_EmHealth.xml. I could go as far as parameterizing them for Cacti to provide, but since we use the same credentials across all ILOs in a separate management network, I did not bother.

<RIBCL VERSION="2.21">
<LOGIN USER_LOGIN="your user name" PASSWORD="your password">
<SERVER_INFO MODE="read">
<GET_EMBEDDED_HEALTH />
</SERVER_INFO>
</LOGIN>
</RIBCL>

Next, I created this little PERL script

$command = "/usr/bin/perl -X /usr/share/cacti/site/scripts/locfg.pl -s $ARGV[0] -f /usr/share/cacti/site/scripts/Get_EmHealth.xml";

#$command = "/usr/bin/perl -X /usr/share/cacti/site/scripts/locfg.pl -s 192.168.3.104 -f /usr/share/cacti/site/scripts/Get_EmHealth.xml";

$output = `$command`;

@lines = split(/\n/, $output);

foreach $line (@lines) {
  # Sensor location, e.g. "CPU 1", printed as "CPU_1:" (spaces replaced by underscores)
  if (index($line, "LOCATION VALUE") != -1) {
    $line =~ s/\s+/_/g;
    if ($line =~ /"(.+?)"/) { print "$1:"; }
  }
  # Sensor reading, printed as the value followed by a space
  if (index($line, "CURRENTREADING VALUE") != -1) {
    if ($line =~ /"(.+?)"/) { print "$1 "; }
  }
}

What this basically does is parse the XML(-ish) output from locfg.pl, find the LOCATION VALUE and CURRENTREADING VALUE pairs (which are the sensors and their values) and write them out as a valid Cacti script return string; for my DL385 G5 this is:

CPU:62 CPU_1:50 CPU_2:44 Memory_a:57 Memory_b:46 System:62 Ambient:21
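You can test the script by hand before wiring it into Cacti; the script name get_ilo_temps.pl is just a placeholder for whatever you saved it as, and the IP is the ILO address from the commented example above:

perl /usr/share/cacti/site/scripts/get_ilo_temps.pl 192.168.3.104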

As you can see, I also strip out all spaces and replace them with underscores to keep the output valid for Cacti. Now we can run this script in a Cacti Data Input Method (see this link on how to do that), and as long as we provide the EXACT output field names, we will get values back to push into RRD. I am only interested in the Ambient sensor, but maybe you'd like to graph more.

Hope this helps someone save some time.

Cheers

-Fault