Category Archives: Technical

Dimstat and add-on stats

In my working life I regularly use a tool called Dimstat, a Unix performance monitoring tool that stores its statistics in a MySQL database. The tool was written by a former colleague and is available from http://dimitrik.free.fr/

It comes with built-in support for gathering standard Unix statistics such as vmstat and mpstat, but one of its great features is the ability to add new statistics collection methods. It can handle single-line output (e.g. vmstat) or multi-line output (e.g. mpstat). Let's say we want to gather filesystem utilisation information based on the output of df -k. This will be a multi-line STAT, as we will have more than one filesystem reported.

Creating your script

First you create the script that gathers the information you want. Remember to include a record separator in your output (the default is an empty line) so dimstat knows when each sample is complete, and the script must accept the collection interval as an argument.

So here's my basic script; you may want to adjust the filesystems I am excluding from the list (or make the egrep more elegant!):

# cat mel_df.sh
#!/bin/bash
# Emit one sample per interval: mountpoint, size, used and available space (all in KB),
# ending each sample with an empty line so dimstat knows the record is complete.
# $1 is the collection interval in seconds.
while true
do
df -k | egrep -v 'fd|mnttab|objfs|sharetab|Filesystem|volatile|proc|devices|contract|dev' | sort | awk '{ print $6 " " $2 " " $3 " " $4}'
echo ""
sleep $1
done

Test your script interactively

# ./mel_df.sh 5
/logpool 1717567488 31 1262824069
/var/fmw/app 1717567488 454729292 1262824069
/rpool 429391872 73 164531945
/ 429391872 14039895 164531945
/var 429391872 125827192 164531945
/var/share 429391872 195 164531945
/export 429391872 32 164531945
/export/home 429391872 35 164531945
/export/home/oracle 429391872 1415887 164531945
/export/home/otd_user 429391872 35 164531945
/export/home/weblogic 429391872 36 164531945
/zones 429391872 32 164531945
/zones/otd-zone 429391872 35 164531945
/tmp 235826528 48 235826480

/logpool 1717567488 31 1262824069
/var/fmw/app 1717567488 454729292 1262824069
/rpool 429391872 73 164531769
/ 429391872 14039895 164531769
/var 429391872 125827196 164531769
/var/share 429391872 195 164531769
/export 429391872 32 164531769
/export/home 429391872 35 164531769
/export/home/oracle 429391872 1415887 164531769
/export/home/otd_user 429391872 35 164531769
/export/home/weblogic 429391872 36 164531769
/zones 429391872 32 164531769
/zones/otd-zone 429391872 35 164531769
/tmp 235826528 48 235826480

Now copy the script into /etc/STATsrv/bin on each host you want to capture statistics from.
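For example, a quick way to push it out to a couple of hosts might look like this (the host names are just placeholders for your own):

# copy the collector to each monitored host and make it executable
for h in host1 host2
do
scp mel_df.sh ${h}:/etc/STATsrv/bin/mel_df.sh
ssh ${h} chmod +x /etc/STATsrv/bin/mel_df.sh
done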

Then edit the /etc/STATsrv/access file and add a line pointing to your script, giving it the name 'DF_check':

command  DF_check       /etc/STATsrv/bin/mel_df.sh

On the host with the STATsrv daemon we can check if this stat is advertised and available:

 ./STATcmd -h localhost -c STAT_LIST
STAT *** OK CONNECTION 0 sec.
STAT *** LIST COMMAND (STAT_LIST)
STAT: vmstat
STAT: mpstat
STAT: netstat
STAT: ForkExec
STAT: MEMSTAT
STAT: tailX
STAT: ioSTAT.sh
STAT: netLOAD.sh
STAT: netLOAD
STAT: psSTAT
STAT: UserLOAD
STAT: ProcLOAD
STAT: bsdlink
STAT: bsdlink.sh
STAT: sysinfo
STAT: SysINFO
STAT: Siostat
STAT: ProjLOAD
STAT: PoolLOAD
STAT: TaskLOAD
STAT: ZoneLOAD
STAT: IOpatt
STAT: CPUSet
STAT: UDPstat
STAT: DF_check
STAT *** LIST END (STAT_LIST)

We can test whether the STATsrv daemon can run the script:

 ./STATcmd -h localhost -c "DF_check 1"
STAT *** OK CONNECTION 0 sec.
STAT *** OK COMMAND (cmd: DF_check)
/logpool 1717567488 31 891420077
/var/fmw/app 1717567488 826058863 891420077
/rpool 429391872 73 55609192
/ 429391872 10794969 55609192
/var 429391872 237518790 55609192
/var/share 429391872 169 55609192
/export 429391872 32 55609192
/export/home 429391872 35 55609192
/export/home/mel 429391872 2871626 55609192
/export/home/oracle 429391872 136058 55609192
/export/home/weblogic 429391872 36 55609192
/tmp 194877488 296 194877192

/logpool 1717567488 31 891420077
/var/fmw/app 1717567488 826058863 891420077
/rpool 429391872 73 55609192
/ 429391872 10794969 55609192
/var 429391872 237518790 55609192
/var/share 429391872 169 55609192
/export 429391872 32 55609192
/export/home 429391872 35 55609192
/export/home/mel 429391872 2871626 55609192
/export/home/oracle 429391872 136058 55609192
/export/home/weblogic 429391872 36 55609192
/tmp 194877488 296 194877192

Declaring your script to the Dimstat server

There are two ways to declare your script to the server: via the GUI, or by importing a stat description file (the format of these files makes this option one for experienced users only).

Via the GUI you select ADD-on STATS -> Integrate new ADD-on STAT

Enter the name of your STAT (DF_check) and complete the information about the column names and data types.


Once you have declared your add-on stat, you should be able to start a new collect on the host using your new statistic. After it has collected some data, the button for your statistic becomes visible on the Analyze page.

Sample stat description file

# =======================================================================
# DF_check: dim_STAT New STAT Description
# =======================================================================
DF_check
4
1
DF_check Statistic(s)
DF_check %i


# =======================================================================
# Column: v_df_check_att (mountpoint)
# =======================================================================
v_df_check_att
64
1
mountpoint
mountpoint
0
# =======================================================================
# Column: v_column4 (size_kb)
# =======================================================================
v_column4
1
2
size_kb
size_kb
0
# =======================================================================
# Column: v_column5 (used_kb)
# =======================================================================
v_column5
1
3
used_kb
used_kb
0
# =======================================================================
# Column: v_column6 (free_kb)
# =======================================================================
v_column6
1
4
free_kb
free_kb
0

mp3splt

mp3splt is a really useful little tool for chopping big MP3s into smaller ones. I use it mainly to split audiobooks into finer-grained sections for easier navigation in the car or on my iPod.

To achieve this I use the following command:

C:\Program Files (x86)\mp3splt>mp3splt d:\temp\audio\TheAdventureoftheBlueCarbuncle.mp3 -a -t 30.00 -d d:\temp\audio\bluecarbuncle

This splits my MP3 file into 30-minute chunks (-t 30.00), auto-adjusting the split points to fall in silent passages (-a), and writes the output files into the directory given by -d, creating it if necessary.

CURL – curl: (9) Server denied you to change to the given directory

I hit this error trying to import some VMs into OVM.

To get more detail I tried the command from the command line:

curl -v "ftp://root:blahroot@192.168.8.192/var/tmp/taxud-disk1.vmdk"
* About to connect() to 192.168.8.192 port 21
* Trying 192.168.8.192... connected
* Connected to 192.168.8.192 (192.168.8.192) port 21
< 220 (vsFTPd 2.0.5)
> USER root
< 331 Please specify the password.
> PASS blahroot
< 230 Login successful.
> PWD
< 257 "/root"
* Entry path is '/root'
> CWD var
< 550 Failed to change directory.
* Server denied you to change to the given directory
* Connection #0 to host 192.168.8.192 left intact
curl: (9) Server denied you to change to the given directory
> QUIT
< 221 Goodbye.

This is because the pathname I gave is relative rather than absolute (relative to the FTP login directory, /root), so I was unknowingly trying to change directory to /root/var/tmp, which did not exist.

To give an absolute pathname, you need an extra slash:

curl "ftp://root:blahroot@192.168.8.192//var/tmp/taxud-disk1.vmdk"

Strange behaviour of listener_networks and scan listener

I had an Exalogic and a SPARC SuperCluster T4-4 connected together by InfiniBand for a set of tests. This meant that I was able to enable SDP and IP over the InfiniBand network.

To configure it, I had followed the instructions in the Exalogic Manual.

After setting the listener_networks parameter, I checked whether the services had registered correctly with the scan listeners. The expected behaviour is to see all instances registered with all 3 scan listeners.

- Set your environment to the GRID_HOME

- Check which nodes are running the scan listeners, as you can only interrogate each listener from the node it is running on

 oracle@ssca01:~$ srvctl status scan
 SCAN VIP scan1 is enabled
 SCAN VIP scan1 is running on node ssca03
 SCAN VIP scan2 is enabled
 SCAN VIP scan2 is running on node ssca04
 SCAN VIP scan3 is enabled
 SCAN VIP scan3 is running on node ssca01

So on ssca01 I checked the status of LISTENER_SCAN1, and it had no services registered. Strange. I checked all of my scan listeners, and only LISTENER_SCAN3 had any services registered:

oracle@ssca01:~$ /u01/app/11.2.0.3/grid/bin/lsnrctl status LISTENER_SCAN3
LSNRCTL for Solaris: Version 11.2.0.3.0 - Production on 06-SEP-2012 15:20:43
Copyright (c) 1991, 2011, Oracle.  All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN3)))
 STATUS of the LISTENER
 ------------------------
 Alias                     LISTENER_SCAN3
 Version                   TNSLSNR for Solaris: Version 11.2.0.3.0 - Production
 Start Date                30-AUG-2012 15:59:53
 Uptime                    6 days 23 hr. 20 min. 49 sec
 Trace Level               off
 Security                  ON: Local OS Authentication
 SNMP                      OFF
 Listener Parameter File   /u01/app/11.2.0.3/grid/network/admin/listener.ora
 Listener Log File         /u01/app/11.2.0.3/grid/log/diag/tnslsnr/ssca01/listener_scan3/alert/log.xml
 Listening Endpoints Summary...
 (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN3)))
 (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=137.3.16.87)(PORT=1521)))
 Services Summary...
 Service "ibs" has 3 instance(s).
 Instance "IBS2", status READY, has 1 handler(s) for this service...
 Instance "IBS3", status READY, has 1 handler(s) for this service...
 Instance "IBS4", status READY, has 1 handler(s) for this service...
 The command completed successfully

I was confident my scan address was registered correctly in the DNS:

oracle@ssca01:~$ nslookup ssca-scan
 Server:         138.4.34.5
 Address:        138.4.34.5#53
Name:   ssca-scan.blah.com
 Address: 137.3.16.89
 Name:   ssca-scan.blah.com
 Address: 137.3.16.88
 Name:   ssca-scan.blah.com
 Address: 137.3.16.87

I looked on Oracle Support and I could find no other reports of this problem, but then only a small proportion of customers will be running in this configuration.

However, I did find note 1448717.1, which documented a similar problem with the remote_listener parameter.

So, I amended my tnsnames.ora file so that my LISTENER_IPREMOTE alias included the 3 SCAN IP addresses:

#LISTENER_IPREMOTE =
#  (DESCRIPTION =
#    (ADDRESS = (PROTOCOL = TCP)(HOST = ssca-scan.blah.com)(PORT = 1521))
#  )

LISTENER_IPREMOTE =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 137.3.16.87)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = 137.3.16.88)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = 137.3.16.89)(PORT = 1521))
  )

You can trace the PMON registration process by setting the following database event:

alter system set events='immediate trace name listener_registration level 3';

and then issue an alter system register; to force PMON to re-register with the listeners.

This will produce a trace file in background_dump_dest.
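Putting that together, a minimal session might look like the following; the diag path at the end is only illustrative and will depend on your ORACLE_BASE, database and instance names:

$ sqlplus -s / as sysdba <<'EOF'
alter system set events='immediate trace name listener_registration level 3';
alter system register;
EOF
$ # then look for the most recently written trace files under the diag trace directory
$ ls -lt /u01/app/oracle/diag/rdbms/*/*/trace/*.trc | head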

Looking through this trace file, I saw it was still trying to register using the SCAN name:

 Remote listeners:
  0 - (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ssca-scan.osc.uk.oracle.com)(PORT=1521)))
       state=1, err=0
       nse[0]=0, nse[1]=0, nte[0]=0, nte[1]=0, nte[2]=0
       ncre=0
       endp=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ssca-scan.osc.uk.oracle.com)(PORT=1521)))
         flg=0x0 nse=12545

At this point I realised that the tnsnames.ora file must only be read at certain times, such as database startup, so I restarted my database.
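As the databases here are managed by Grid Infrastructure, a restart along these lines is enough (the database name below is just an example; substitute your own):

srvctl stop database -d IBS
srvctl start database -d IBS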

Success! On checking all of my scan listeners, they all had services registered.

Zoning a Brocade Switch Using WWNs

Way back in the mists of time I used to use port-based zoning on Brocade switches; however, I started having problems with this on newer storage systems (almost certainly pilot error!). I needed to zone some switches for a customer's piece of work, and this time I thought I'd get with the future and use WWN-based zoning.

So, in my setup I have 2 hosts, each with 2 connections per switch, and 2 storage arrays, each with 1 connection to the switch.

swd77:admin> switchshow
switchName:     swd77
switchType:     34.0
switchState:    Online
switchMode:     Native
switchRole:     Principal
switchDomain:   1
switchId:       fffc01
switchWwn:      10:00:00:05:1e:02:a2:08
zoning:         OFF
switchBeacon:   OFF

Area Port Media Speed State
==============================
  0   0   id    N4   Online    F-Port  20:14:00:a0:b8:29:f5:56 <- Storage Array 1
  1   1   id    N4   Online    F-Port  20:16:00:a0:b8:29:cd:b4 <- Storage Array 2
  2   2   id    N4   Online    F-Port  21:00:00:24:ff:20:3a:f6 <- Host A
  3   3   id    N4   Online    F-Port  21:00:00:24:ff:20:3a:e0 <- Host A
  4   4   --    N4   No_Module
  5   5   --    N4   No_Module
  6   6   id    N4   No_Light
  7   7   id    N4   No_Light
  8   8   id    N4   Online    F-Port  21:00:00:24:ff:20:3b:92 <- Host B
  9   9   id    N4   Online    F-Port  21:00:00:24:ff:25:6d:ac <- Host B
 10  10   id    N4   No_Light
 11  11   id    N4   No_Light
 12  12   id    N4   No_Light
 13  13   id    N4   No_Light
 14  14   --    N4   No_Module
 15  15   --    N4   No_Module

Create aliases for your hosts and storage arrays

swd77:admin> alicreate host1_a,"21:00:00:24:ff:20:3b:92"
swd77:admin> alicreate host1_b,"21:00:00:24:ff:25:6d:ac"
swd77:admin> alicreate host2_a,"21:00:00:24:ff:20:3a:f6"
swd77:admin> alicreate host2_b,"21:00:00:24:ff:20:3a:e0"
swd77:admin> alicreate "a6140","20:14:00:a0:b8:29:f5:56"
swd77:admin> alicreate "b6140","20:16:00:a0:b8:29:cd:b4"

Create zones that include your aliases

swd77:admin> zonecreate "port2","host1_a; a6140; b6140"
swd77:admin> zonecreate "port3","host1_b; a6140; b6140"
swd77:admin> zonecreate "port8","host2_a;  a6140; b6140"
swd77:admin> zonecreate "port9","host2_b;  a6140; b6140"

Create a configuration for your zones and save it

swd77:admin> cfgcreate "customer1","port2; port3; port8; port9"
swd77:admin> cfgsave
You are about to save the Defined zoning configuration. This
action will only save the changes on Defined configuration.
Any changes made on the Effective configuration will not
take effect until it is re-enabled.
Do you want to save Defined zoning configuration only?  (yes, y, no, n): [no] yes

When you’re happy with your configuration, enable it.

swd77:admin> cfgenable customer1
You are about to enable a new zoning configuration.
This action will replace the old zoning configuration with the
current configuration selected.
Do you want to enable 'customer1' configuration  (yes, y, no, n): [no] y
zone config "customer1" is in effect
Updating flash ...

Finally, check at the OS level that you can see all of your required volumes.
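On a Solaris host, for example, something like this is a quick way to confirm the newly zoned LUNs are visible (the exact commands will differ on other operating systems):

# list fabric-attached controllers and the disks the OS can now see
cfgadm -al | grep fc-fabric
echo | format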

Android Calendar Synchronization

My work phone has just been upgraded to an HTC Legend. The phone is very attractive, the screen is clear and bright, and the choice of applications is dizzying.

I did discover one thing that Android does less well than the Nokia E71: calendar synchronization. The native app seems to be totally geared around using Google Calendar, which is fine for personal use, but no good if your company uses another calendar technology and prevents you from storing your corporate diary in external services.

The answer I've found is the application at http://www.hypermatix.com/products/calendar_sync_for_android?q=faq, which is able to connect to my corporate calendar and use it to populate the local calendar on my phone. It is currently only in beta, but it seems to be an effective solution to the problem.

Nokia – Free Navigation for all?

There has been a lot of buzz about Nokia making their navigation free forever.

However, it isn't that simple: unless you have one of the short list of phones that support Ovi Maps 3.0.3, you don't get the free navigation at all. This means that if you have a smartphone such as the E71 you don't get the free navigation, and there are rumours that you cannot buy new navigation licenses either.

Next time, no Nokia for me.

MySQL 5.0 DBA

As I'm a goal-oriented person, I tend to take certifications for new products that I have to use for work. The certification process gives me a learning structure that forces me not to skip over the things that are less interesting, and hopefully gives me a more rounded knowledge base.

I've recently (3 hours ago 😉 ) completed the MySQL 5.0 DBA certification. To use a footballing cliché, it was very definitely a game of two halves. The first exam I found very tricky. Perhaps this was because I still had my Oracle head on and the concept of multiple pluggable storage engines wasn't sticking, but I really struggled to find the correct answers. The second exam I found much more approachable, with lots of questions about EXPLAIN. Both of the MySQL exams are multiple-choice format; however, there did seem to be a lot of 'select all that are correct' questions.

I used this book, http://www.mysql.com/certification/studyguides/study-guide-5.0.html, to help me, along with the incredibly detailed MySQL reference manual.