
Factory resetting a ZFS Appliance when you can’t login to the system.

Note – this process will completely destroy all configuration and data on the ZFS Appliance. I only need to do this when a system is returned to me with an unknown IP and password but I can still get onto the ILOM. Please contact Oracle Support before doing this, and make sure you truly understand what you are doing.

Normally, if you can log in to the system, you can issue the command ‘maintenance system factoryreset’ to get the same result. DO NOT DO THIS IF YOU HAVE ANY DATA YOU NEED ON THE APPLIANCE.
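
If you can still log in to the appliance CLI, that looks something like this (the hostname prompt is just an example):

sc5bsn01:> maintenance system factoryreset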

First, ensure that you are on a current version of firmware/BIOS on the ZFS Appliance, as you could otherwise encounter problems editing the grub menu. This can be checked using MOS document 1174698.1 (Oracle ZFS Storage Appliance: How to check the SP BIOS revision level).
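
A quick way to see the current SP firmware, once you are logged in to the ILOM as in the next step, is the ILOM ‘version’ command, which reports the SP firmware release and build date to compare against the MOS note:

-> version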

Login to the SP/ILOM


   -> start /SYS

   -> start /SP/console

 

Wait for the GRUB menu, which will be editable for 10 seconds.

Within those 10 seconds, press ‘e’ on the keyboard.

Select the kernel line. To navigate, use ‘v’ to go down and ‘^’ to go up.

Press ‘e’ on the keyboard to edit this line.

Append " -c" to this line (that is, a space followed by -c).

change this  :

 kernel$ /platform/i86pc/kernel/$ISADIR/unix -B zfs-bootfs=system/368,console=ttya,ttya-mode="9600,8,n,1,-"

to this      :

kernel$ /platform/i86pc/kernel/$ISADIR/unix -B zfs-bootfs=system/368,console=ttya,ttya-mode="9600,8,n,1,-" -c

Press <return>

Finally, press ‘b’ on the keyboard to boot with the modified entry.

 

The console will then print the following lines:

SunOS Release 5.11 Version ak/generic@2013.06.05.6.8,1-1.1 64-bit
Copyright (c) 1983, 2013, Oracle and/or its affiliates. All rights reserved.

   Use is subject to license terms.

>  Clearing appliance configuration ...... done.

   Configuring network devices ... done.

The system will then wipe all of the previous configuration,  reboot, and allow you to reconfigure the networking and root password.

Note: The change you made to the grub boot menu is temporary, and so you should not need to go back in and edit it again.

 

Creating an Application Zone on a SuperCluster

Application Zones on a SuperCluster Solaris 11 LDOM are subject to far fewer restrictions than the Exadata Database Zones. This also means that the documentation is less prescriptive and detailed.

This post will show a simple Solaris 11 zone creation; it is meant for example purposes only and not as a product. I am going to use a T5 SuperCluster for this walkthrough. The main differences you will need to consider for an M7 SuperCluster are:

  1. Both heads of the ZFS-ES are active, so you will need to select the correct head and InfiniBand interface name.
  2. There is only 1 QGBE card available per PDOM. This means you may need to present vnics from the domain that owns the card for the management network, if you require this connectivity.

 


Considerations

As per note 2041460.1, the best practice for the zone root filesystems is to present 1 LUN per LDOM, create a zpool on it, and create a filesystem in this shared pool for each application zone. Reservations and quotas can be used to prevent a zone from using more than its share.
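
For example (a sketch only, with made-up sizes; the filesystem name matches the one created later in this post):

root@sc5bcn01-d3:~# zfs set quota=100G zoneroots/sc5b01-d4-rpool        # cap the space this zone root can use
root@sc5bcn01-d3:~# zfs set reservation=50G zoneroots/sc5b01-d4-rpool   # guarantee it a minimum amount of space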

You need to make sure you calculate the minimum number of cores required for the global zone, as per note 1625055.1.

You need to make sure that the IPS repos are all available, and that any IDRs you have applied to your global zone are also available.
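
A quick check from the global zone is ‘pkg publisher’; the output format and repository locations shown here are only illustrative:

root@sc5bcn01-d3:~# pkg publisher
PUBLISHER        TYPE     STATUS P LOCATION
solaris          origin   online F http://ips-repo.mydomain.com/solaris/
exa-family       origin   online F http://ips-repo.mydomain.com/exa-family/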

Preparation


Put entries into the global zone’s hosts file for your new zone. I will use 3 addresses: one for the 1Gbit management network, one for the 10Gbit client network, and one for the InfiniBand network on the storage partition (p8503).

 

10.10.14.15     sc5bcn01-d4.blah.mydomain.com   sc5bcn01-d4
10.10.10.78     sc5b01-d4.blah.mydomain.com sc5b01-d4
192.168.28.10   sc5b01-d4-storIB.blah.mydomain.com      sc5b01-d4-storIB

 

Create an iSCSI LUN for the zone root filesystem if you do not already have one defined to hold zone roots. I am going to use the iscsi-lun.sh script that is designed for use by the other tools which create the Exadata Database Zones. The good thing about using this is that it follows the naming conventions used for the other zones. However, it is not installed by default on Application Zones (it is provided by the system/platform/supercluster/iscsi package in the exa-family repository), and this is not a supported use of the script.

  • -z is the name of my ZFS-ES.
  • -i is the 1Gbit hostname of my global zone.
  • -n and -N are used by the exavm utility to create the LUNs. In our case they will both be set to 1.
  • -s is the size of the LUN to be created.
  • -l is the volume block size. I have selected 32K; you may have other performance metrics that lead you to a different block size.

root@sc5bcn01-d3:/opt/oracle.supercluster/bin# ./iscsi-lun.sh create  \
-z sc5bsn01 -i sc5bcn01-d3  -n 1 -N 1 -s 500G -l 32K
Verifying sc5bcn01-d3 is an initiator node
The authenticity of host 'sc5bcn01-d3 (10.10.14.14)' can't be established.
RSA key fingerprint is 72:e6:d1:a1:be:a3:b3:d9:96:ea:77:61:bd:c7:f8:de.
Are you sure you want to continue connecting (yes/no)? yes
Password: 
Getting IP address of IB interface ipmp1 on sc5bsn01
Password: 
Setting up iscsi service on sc5bcn01-d3
Password: 
Setting up san object(s) and lun(s) for sc5bcn01-d3 on sc5bsn01
Password: 
Setting up iscsi devices on sc5bcn01-d3
Password: 
c0t600144F0F0C4EECD00005436848B0001d0 has been formatted and ready to use

Create a zpool to hold all of your zone roots

root@sc5bcn01-d3:/# zpool create zoneroots c0t600144F0F0C4EECD00005436848B0001d0

Now create a filesystem for your zone root and set a quota on it (optional).

root@sc5bcn01-d3:/# zfs create zoneroots/sc5b01-d4-rpool 
root@sc5bcn01-d3:/# zfs set quota=100G zoneroots/sc5b01-d4-rpool

Create partitions so your zone can access the IB Storage Network (optional, but nice to have, and my example will include them). First locate the interfaces that have access to the IB Storage Network partition  (PKEY=8503) using dladm and then create partitions using these interfaces.

root@sc5bcn01-d3:~# dladm show-part
LINK         PKEY  OVER         STATE    FLAGS
stor_ipmp0_0 8503  net7         up       f---
stor_ipmp0_1 8503  net8         up       f---
bondib0_0    FFFF  net8         up       f---
bondib0_1    FFFF  net7         up       f---
root@sc5bcn01-d3:~# dladm create-part -l net8 -P 8503 sc5b01d4_net8_p8503
root@sc5bcn01-d3:~# dladm create-part -l net7 -P 8503 sc5b01d4_net7_p8503

Create the Zone

Prepare your zone configuration file; here is mine. Note that I have used non-standard link names to make it more readable. You will need to use ipadm to determine the lower-link names that match your system.

create -b
set brand=solaris
set zonepath=/zoneroots/sc5b01-d4-rpool
set autoboot=true
set ip-type=exclusive
add net
set configure-allowed-address=true
set physical=sc5b01d4_net7_p8503
end
add net
set configure-allowed-address=true
set physical=sc5b01d4_net8_p8503
end
add anet
set linkname=net0
set lower-link=auto
set configure-allowed-address=true
set link-protection=mac-nospoof
set mac-address=random
end
add anet
set linkname=mgmt0
set lower-link=net0
set configure-allowed-address=true
set link-protection=mac-nospoof
set mac-address=random
end
add anet
set linkname=mgmt1
set lower-link=net1
set configure-allowed-address=true
set link-protection=mac-nospoof
set mac-address=random
end
add anet
set linkname=client0
set lower-link=net2
set configure-allowed-address=true
set link-protection=mac-nospoof
set mac-address=random
end
add anet
set linkname=client1
set lower-link=net5
set configure-allowed-address=true
set link-protection=mac-nospoof
set mac-address=random
end

 

Implement the zone configuration using your pre-configured file, or type it in manually.

root@sc5bcn01-d3:~# zonecfg -z sc5b01-d4 -f <yourzonefile>

 

Install the zone. Optionally, you can specify a manifest to install required packages on top of the standard solaris-small-server group, or to specify another package group. I base this on the standard XML file used by zone installs and customize the <software_data> section (see this blog post https://blogs.oracle.com/zoneszone/entry/automating_custom_software_installation_in for an example).

root@sc5bcn01-d3:~# cp /usr/share/auto_install/manifest/zone_default.xml myzone.xml
root@sc5bcn01-d3:~# zoneadm -z sc5b01-d4 install -m myzone.xml

Next, boot the zone and use ‘zlogin -C’ to log in to the console and answer the usual Solaris configuration questions about the root password, timezone and locale. I do not usually configure the networking at this time, and add it later.

root@sc5bcn01-d3:~# zoneadm -z sc5b01-d4 boot
root@sc5bcn01-d3:~# zlogin -C sc5b01-d4

Create the required networking

# ipadm create-ip  mgmt0
# ipadm create-ip  mgmt1
# ipadm create-ip  client1
# ipadm create-ip  client0
# ipadm create-ipmp -i mgmt0 -i mgmt1 scm_ipmp0
# ipadm create-ipmp -i client0 -i client1 sc_ipmp0
# ipadm create-addr -T static -a local=10.10.10.78/22 sc_ipmp0/v4
# ipadm create-addr -T static -a local=10.10.14.15/24 scm_ipmp0/v4
# route -p add default 10.10.8.1
# ipadm create-ip sc5b01d4_net8_p8503
# ipadm create-ip sc5b01d4_net7_p8503
# ipadm create-ipmp -i sc5b01d4_net8_p8503 -i sc5b01d4_net7_p8503 stor_ipmp0
# ipadm set-ifprop -p standby=on -m ip sc5b01d4_net8_p8503
# ipadm create-addr -T static -a local=192.168.28.10/22 stor_ipmp0/v4

Optional Post Install steps

Root Login

Allow root to log in over ssh by editing /etc/ssh/sshd_config and changing ‘PermitRootLogin no’ to ‘PermitRootLogin yes’, then restart the ssh service.
# svcadm restart ssh
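
Note that on Solaris 11 root is normally defined as a role, so if you want to ssh in directly as root you may also need to convert it to a normal user first (an extra step that may or may not be needed in your build):

# rolemod -K type=normal root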

Configure DNS support

# svccfg -s dns/client setprop config/search = astring: "blah.mydomain.com"
# svccfg -s dns/client setprop config/nameserver = net_address: \(10.10.34.4 10.10.34.5\)
# svccfg -s dns/client refresh 
# svccfg -s dns/client:default  validate
# svccfg -s dns/client:default  refresh 
# svccfg -s system/name-service/switch setprop config/default = astring: \"files dns\"
# svccfg -s system/name-service/switch:default refresh
# svcadm enable dns/client

 

 Resource Capping

At the time of writing (20/04/16) virtual and physical memory capping is not supported on SuperCluster. This is mentioned in Oracle Support Document 1452277.1 (SuperCluster Critical Issues) as issue SOL_11_1.

Creating Processor sets and associating with your zone

See more detail about pools and processor sets in my other blog posts, and of course in the Solaris 11.3 manuals.

A quick summary of the commands follows.
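
If the pools facility is not already active on the domain and /etc/pooladm.conf does not exist yet, you may first need something along these lines:

# pooladm -e                      # enable the resource pools facility
# pooladm -s /etc/pooladm.conf    # save the current configuration to the file edited below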

This creates a fixed size processor set, consisting of 64 threads.

poolcfg -c "create pset pset_sc5bcn02-d4.osc.uk.oracle.com_id_6160 (uint pset.min = 64; uint pset.max = 64)" /etc/pooladm.conf

Then a pool is created, and associated with the processor set.

poolcfg -c "create pool pool_sc5bcn02-d4.osc.uk.oracle.com_id_6160" /etc/pooladm.conf
poolcfg -c "associate pool pool_sc5bcn02-d4.osc.uk.oracle.com_id_6160 (pset pset_sc5bcn02-d4.osc.uk.oracle.com_id_6160)" /etc/pooladm.conf
poolcfg -c 'modify pool pool_sc5bcn02-d4.osc.uk.oracle.com_id_6160 (string pool.scheduler="TS")' /etc/pooladm.conf

Enable the pool configuration saved in /etc/pooladm.conf

pooladm -c

Modify the zone config to set the pool.

zonecfg -z sc5bcn02-d4
zonecfg:sc5bcn02-d4> set pool=pool_sc5bcn02-d4.osc.uk.oracle.com_id_6160
zonecfg:sc5bcn02-d4> verify
zonecfg:sc5bcn02-d4> commit

Then you can stop and restart the zone to associate it with the processor set.
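
For example (using the zone name from the zonecfg step above):

# zoneadm -z sc5bcn02-d4 shutdown
# zoneadm -z sc5bcn02-d4 boot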

Disable access time (atime) recording on ZFS

If you have a filesystem that contains data which is accessed often, but you do not want to record the access time information because it is static data (e.g. content for a webserver), you can change this with ZFS properties.

You can see the full list of ZFS properties with ‘zfs get all’:

root@sc5acn02-d2:/var/fmw# zfs get all logpool/fmw_app
NAME             PROPERTY              VALUE                  SOURCE
logpool/fmw_app  aclinherit            restricted             default
logpool/fmw_app  aclmode               discard                default
logpool/fmw_app  atime                 on                     default
logpool/fmw_app  available             1.57T                  -
logpool/fmw_app  canmount              on                     default
logpool/fmw_app  casesensitivity       mixed                  -
logpool/fmw_app  checksum              on                     default
logpool/fmw_app  compression           off                    default
logpool/fmw_app  compressratio         1.00x                  -
logpool/fmw_app  copies                1                      default
logpool/fmw_app  creation              Wed Jun  4 15:47 2014  -
logpool/fmw_app  dedup                 off                    default
logpool/fmw_app  devices               on                     default
logpool/fmw_app  encryption            off                    -
logpool/fmw_app  exec                  on                     default
logpool/fmw_app  keychangedate         -                      default
logpool/fmw_app  keysource             none                   default
logpool/fmw_app  keystatus             none                   -
logpool/fmw_app  logbias               latency                default
logpool/fmw_app  mlslabel              none                   -
logpool/fmw_app  mounted               yes                    -
logpool/fmw_app  mountpoint            /logpool/fmw_app       default
logpool/fmw_app  multilevel            off                    -
logpool/fmw_app  nbmand                off                    default
logpool/fmw_app  normalization         none                   -
logpool/fmw_app  primarycache          all                    default
logpool/fmw_app  quota                 none                   default
logpool/fmw_app  readonly              off                    default
logpool/fmw_app  recordsize            128K                   default
logpool/fmw_app  referenced            30.1G                  -
logpool/fmw_app  refquota              none                   default
logpool/fmw_app  refreservation        none                   default
logpool/fmw_app  rekeydate             -                      default
logpool/fmw_app  reservation           none                   default
logpool/fmw_app  rstchown              on                     default
logpool/fmw_app  secondarycache        all                    default
logpool/fmw_app  setuid                on                     default
logpool/fmw_app  shadow                none                   -
logpool/fmw_app  share.*               ...                    default
logpool/fmw_app  snapdir               hidden                 default
logpool/fmw_app  sync                  standard               default
logpool/fmw_app  type                  filesystem             -
logpool/fmw_app  used                  30.1G                  -
logpool/fmw_app  usedbychildren        0                      -
logpool/fmw_app  usedbydataset         30.1G                  -
logpool/fmw_app  usedbyrefreservation  0                      -
logpool/fmw_app  usedbysnapshots       0                      -
logpool/fmw_app  utf8only              off                    -
logpool/fmw_app  version               6                      -
logpool/fmw_app  vscan                 off                    default
logpool/fmw_app  xattr                 on                     default
logpool/fmw_app  zoned                 off                    default

You can use the ‘zfs set’ command to change the property.

root@sc5acn02-d2:/var/fmw# zfs set atime=off logpool/fmw_app

You do not need to remount the filesystem; the change is applied instantly:

root@sc5acn01-d2:/var/bea# mount
[snipped output]
/var/fmw/app on logpool/fmw_app read/write/setuid/devices/rstchown/nonbmand/exec/xattr/noatime/dev=47d0012 on Wed Jun  4 16:19:57 2014

Enabling DNFS and configuring an RMAN backup to a ZFS 7320

DNFS Configuration process

This process is based on the setup required to attach a ZFS-BA to an Exadata. Unlike the ZFS 7320, a ZFS-BA has more InfiniBand links connected to the system and so can support greater throughput.

On the ZFS appliance

Create a new project to hold the backup destination ‘MyCompanyBackuptest’.

Edit project ‘MyCompanyBackuptest’
General Tab

→ Set ‘Synchronous write bias’ to Throughput
→ Set ‘Mountpoint’ to /export/mydb

Protocols Tab

→ Add nfs exceptions for all of ‘MyCompany’ servers for read/write and root access, using ‘Network’ and giving the individual IP addresses.

192.168.28.7/32
192.168.28.6/32
192.168.28.3/32
192.168.28.2/32

Shares Tab

→ Create filesystems backup1 to backup8

On SPARC node

As root

Check that the required kernel parameters are set in /etc/system (this is done automatically by the ssctuner service):

set rpcmod:clnt_max_conns = 8
set nfs:nfs3_bsize = 131072

Set the suggested ndd parameters by creating a script in /etc/rc2.d so that they are set after every boot:

root@sc5acn01-d1:/etc/rc2.d# cat S99ndd
/usr/sbin/ndd -set /dev/tcp tcp_max_buf 4194304
/usr/sbin/ndd -set /dev/tcp tcp_xmit_hiwat 2097152
/usr/sbin/ndd -set /dev/tcp tcp_recv_hiwat 2097152
/usr/sbin/ndd -set /dev/tcp tcp_conn_req_max_q 16384
/usr/sbin/ndd -set /dev/tcp tcp_conn_req_max_q0 16384

Create mountpoints for the backup directories

root@sc5acn01-d1:/# for i in 1 2 3 4 5 6 7 8 
do 
mkdir /backup${i} 
done

Add /etc/vfstab entries for the mountpoints

sc5a-storIB:/export/mydb/backup1 - /backup1 nfs - yes rw,bg,hard,nointr,rsize=1048576,wsize=1048576,proto=tcp,vers=3,forcedirectio
sc5a-storIB:/export/mydb/backup2 - /backup2 nfs - yes rw,bg,hard,nointr,rsize=1048576,wsize=1048576,proto=tcp,vers=3,forcedirectio
sc5a-storIB:/export/mydb/backup3 - /backup3 nfs - yes rw,bg,hard,nointr,rsize=1048576,wsize=1048576,proto=tcp,vers=3,forcedirectio
sc5a-storIB:/export/mydb/backup4 - /backup4 nfs - yes rw,bg,hard,nointr,rsize=1048576,wsize=1048576,proto=tcp,vers=3,forcedirectio
sc5a-storIB:/export/mydb/backup5 - /backup5 nfs - yes rw,bg,hard,nointr,rsize=1048576,wsize=1048576,proto=tcp,vers=3,forcedirectio
sc5a-storIB:/export/mydb/backup6 - /backup6 nfs - yes rw,bg,hard,nointr,rsize=1048576,wsize=1048576,proto=tcp,vers=3,forcedirectio
sc5a-storIB:/export/mydb/backup7 - /backup7 nfs - yes rw,bg,hard,nointr,rsize=1048576,wsize=1048576,proto=tcp,vers=3,forcedirectio
sc5a-storIB:/export/mydb/backup8 - /backup8 nfs - yes rw,bg,hard,nointr,rsize=1048576,wsize=1048576,proto=tcp,vers=3,forcedirectio

Mount the filesystems and set ownership to oracle:dba

root@sc5acn01-d1:/# for i in 1 2 3 4 5 6 7 8 
do 
mount /backup${i} 
done
root@sc5acn01-d1:/# for i in 1 2 3 4 5 6 7 8 
do 
chown oracle:dba /backup${i} 
done

As Oracle

Stop any databases running from the ORACLE_HOME where you want to enable DNFS.
Ensure you can remotely authenticate as sysdba, creating a password file using orapwd if required.
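
If you need to create one, it looks something like this (the password matches the example connect strings used later in this post):

oracle@sc5acn01-d1:~$ orapwd file=$ORACLE_HOME/dbs/orapw$ORACLE_SID password=welcome1 entries=10
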
Relink the Oracle binary for DNFS support:

oracle@sc5acn01-d1:/u01/app/oracle/product/11.2.0.3/dbhome_1/rdbms/lib$ make -f $ORACLE_HOME/rdbms/lib/ins_rdbms.mk dnfs_on

I was a little uncertain about the oranfstab entries, as most examples relate to a ZFS-BA, which has many IB connections and 2 active heads, whereas the 7320 in this case was set up Active/Passive. I created $ORACLE_HOME/dbs/oranfstab with the following entries.

server:sc5a-storIB path:192.168.28.1
export: /export/mydb/backup1 mount:/backup1
export: /export/mydb/backup2 mount:/backup2
export: /export/mydb/backup3 mount:/backup3
export: /export/mydb/backup4 mount:/backup4
export: /export/mydb/backup5 mount:/backup5
export: /export/mydb/backup6 mount:/backup6
export: /export/mydb/backup7 mount:/backup7
export: /export/mydb/backup8 mount:/backup8

Restart your database and check the alert log to see if DNFS has been enabled, by grepping for NFS.
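
For example (the diag path and instance name are assumptions; adjust them for your database):

oracle@sc5acn01-d1:~$ grep NFS /u01/app/oracle/diag/rdbms/mydb/mydb1/trace/alert_mydb1.log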

Oracle instance running with ODM: Oracle Direct NFS ODM Library Version 3.0
 Wed Mar 26 16:50:43 2014

Backup and restore scripts will need to be adjusted to set suggested underscore parameters and to use the new locations.

oracle@sc5acn01-d1:~/mel$ cat dnfs_backup.rman
startup mount
run
{
sql 'alter system set "_backup_disk_bufcnt"=64';
sql 'alter system set "_backup_disk_bufsz"=1048576';
ALLOCATE CHANNEL ch01 DEVICE TYPE DISK connect 'sys/welcome1@mydb' format '/backup1/mydb/%U';
ALLOCATE CHANNEL ch02 DEVICE TYPE DISK connect 'sys/welcome1@mydb' format '/backup2/mydb/%U';
ALLOCATE CHANNEL ch03 DEVICE TYPE DISK connect 'sys/welcome1@mydb' format '/backup3/mydb/%U';
ALLOCATE CHANNEL ch04 DEVICE TYPE DISK connect 'sys/welcome1@mydb' format '/backup4/mydb/%U';
ALLOCATE CHANNEL ch05 DEVICE TYPE DISK connect 'sys/welcome1@mydb' format '/backup5/mydb/%U';
ALLOCATE CHANNEL ch06 DEVICE TYPE DISK connect 'sys/welcome1@mydb' format '/backup6/mydb/%U';
ALLOCATE CHANNEL ch07 DEVICE TYPE DISK connect 'sys/welcome1@mydb' format '/backup7/mydb/%U';
ALLOCATE CHANNEL ch08 DEVICE TYPE DISK connect 'sys/welcome1@mydb' format '/backup8/mydb/%U';
backup database TAG='dnfs-backup';
backup current controlfile format '/backup/dnfs-backup/backup-controlfile';
}
oracle@sc5acn01-d1:~/mel$ cat dnfs_restore.rman
startup nomount
restore controlfile from '/backup/dnfs-backup/backup-controlfile';
alter database mount;
configure device type disk parallelism 2;
run
{
sql 'alter system set "_backup_disk_bufcnt"=64';
sql 'alter system set "_backup_disk_bufsz"=1048576';
ALLOCATE CHANNEL ch01 DEVICE TYPE DISK connect 'sys/welcome1@mydb' format '/backup1/mydb/%U';
ALLOCATE CHANNEL ch02 DEVICE TYPE DISK connect 'sys/welcome1@mydb' format '/backup2/mydb/%U';
ALLOCATE CHANNEL ch03 DEVICE TYPE DISK connect 'sys/welcome1@mydb' format '/backup3/mydb/%U';
ALLOCATE CHANNEL ch04 DEVICE TYPE DISK connect 'sys/welcome1@mydb' format '/backup4/mydb/%U';
ALLOCATE CHANNEL ch05 DEVICE TYPE DISK connect 'sys/welcome1@mydb' format '/backup5/mydb/%U';
ALLOCATE CHANNEL ch06 DEVICE TYPE DISK connect 'sys/welcome1@mydb' format '/backup6/mydb/%U';
ALLOCATE CHANNEL ch07 DEVICE TYPE DISK connect 'sys/welcome1@mydb' format '/backup7/mydb/%U';
ALLOCATE CHANNEL ch08 DEVICE TYPE DISK connect 'sys/welcome1@mydb' format '/backup8/mydb/%U';
restore database from TAG='dnfs-backup';
}

Results of the changes

The timings are based on the longest running backup piece, rather than the wall clock time as this could include other RMAN operations such as re-cataloging files.

            Standard NFS    DNFS
Backup      2:32:09         44:58
Restore     33:42           24:46

So, it’s clear from these results that DNFS can have a huge impact on the backup performance and also a positive effect on restore performance.

If you look at the ZFS analytics for the backup, you can see that we were writing approximately 2 GB/s.

[Screenshot: ZFS Appliance analytics during the backup]

We were also seeing approximately 1.2 GB/s of reads during the restore.

[Screenshot: ZFS Appliance analytics during the restore]

ZFS Appliance NFS exceptions

I had a situation where I wanted to restrict access to a project on my ZFS storage appliance (7320) to a small list of hosts on a private network. The project needs to be accessible r/w, with root permissions from 4 hosts that I need to specify by IP address.

192.168.28.2     
192.168.28.3    
192.168.28.6   
192.168.28.7

However, other hosts in the 192.168.28.x/22 range must not be able to mount the share.
The way to achieve this is to lock down the permissions and then explicitly grant access to the systems you need. You have 3 ways of specifying the hosts for exceptions:

  • Host(FQDN) or Netgroup – This requires you to have your private hostnames registered in DNS, which was not possible in my case. You CANNOT enter an IP address in this field.
  • DNS Domain – All of my hosts are in the same domain, so this was not fine-grained enough.
  • Network – Counter-intuitively, it is Network that allows me to specify individual IP addresses, using a CIDR netmask that matches only 1 host (the netmask does not have to match that of the underlying interface).

First thing – set the default NFS share mode to ‘NONE’ so that non-excepted hosts cannot mount the share.

Then add an exception for each host, using a /32 netmask, which limits it to a single IP.
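
The exceptions on the project end up looking something like this (each entry of type ‘Network’, with read/write and root access):

192.168.28.2/32
192.168.28.3/32
192.168.28.6/32
192.168.28.7/32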

[Screenshot: the NFS exceptions configured on the ZFS Appliance]

So, a quick test. This one should work

root@myhost-d1:/stage# ifconfig stor_ipmp0
stor_ipmp0: flags=8001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,IPMP> mtu 65520 index 3
        inet 192.168.28.2 netmask fffffc00 broadcast 192.168.31.255
        groupname stor_ipmp0
root@myhost-d1:/# mount -f nfs -o rw 192.168.28.1:/export/stage /mnt
root@myhost-d1:/# df -k /mnt
Filesystem           1024-blocks        Used   Available Capacity  Mounted on
192.168.28.1:/export/stage
                     10737418209          31 10737418178     1%    /mnt

This one should fail

root@myhost-d3:~# ifconfig stor_ipmp0
stor_ipmp0: flags=8001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,IPMP> mtu 65520 index 3
        inet 192.168.28.4 netmask fffffc00 broadcast 192.168.31.255
        groupname stor_ipmp0
root@myhost-d3:~#  mount -f nfs -o rw 192.168.28.1:/export/stage /mnt
nfs mount: mount: /mnt: Permission denied

Destroying a zpool that cannot be imported

Usually, to get rid of a defunct ZFS pool, you just import it by id and destroy it. Unfortunately, this pool was created on a newer version of Solaris, so I cannot import it onto my machine.
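
For reference, when the pool version is importable the normal approach is something like this (using the pool id shown in the output below; ‘tmppool’ is just a temporary name):

root@ssccn1 # zpool import -f 3132242033135066260 tmppool
root@ssccn1 # zpool destroy tmppool
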
root@ssccn1 # zpool import
  pool: rpool
    id: 3132242033135066260
 state: UNAVAIL
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:

        rpool                        UNAVAIL  newer version
          mirror-0                   ONLINE
            c0t5000CCA0251AB18Cd0s0  ONLINE
            c0t5000CCA02523E10Cd0s0  ONLINE
root@ssccn1 # zpool import 3132242033135066260
cannot import 'rpool': pool is formatted using a newer ZFS version

As I don’t care about the data, but want an installation tool to complete without problems, I need to get the disks into a ‘clean’ state.

Create a temporary pool using the problem devices, with the -f option to override the warnings about the devices being part of a potentially active ZFS pool.

root@ssccn1 # zpool create -f melpool c0t5000CCA0251AB18Cd0s0 c0t5000CCA02523E10Cd0s0
'melpool' successfully created, but with no redundancy; failure of one
device will cause loss of the pool

Now that you have a pool belonging to the correct version, you can destroy it.

root@ssccn1 # zpool destroy melpool
root@ssccn1 # zpool import
no pools available to import


ZFS disc quota exceeded – workaround

Sometimes I hit a problem where I have downloaded a load of software images and have totally filled my ZFS home directory. Unfortunately, I don’t have root on this system, so I can’t extend my quota and have to find a workaround.

If you completely fill your quota on ZFS you cannot delete any files, because ZFS is copy-on-write and even a delete needs a little free space to record the change:

Filesystem kbytes used avail capacity Mounted on
192.168.1.18:/export/edi-homes/kitty
209715213 209715213 0 100% /osc/home/kitty
kitty@eedi-sol-desktop2 # ls
kitty@eedi-sol-desktop2 # rm recreate_temp.sql
rm: recreate_temp.sql: override protection 555 (yes/no)? yes
rm: recreate_temp.sql not removed: Disc quota exceeded

One way to get some space back is to resize/truncate a file using dd. Locate a large file on your disk that you no longer need or can easily replace.

kitty@eedi-sol-desktop2 # ls *iso
OAKFactoryImage_2.6.0.0.0_130423.1.iso sol-11_1-text-sparc.iso

Use dd with a count of 1 to overwrite it with a small amount of data.

kitty@eedi-sol-desktop2 # dd if=/dev/random of=OAKFactoryImage_2.6.0.0.0_130423.1.iso count=1
1+0 records in
1+0 records out
kitty@eedi-sol-desktop2 # df -k .
Filesystem kbytes used avail capacity Mounted on
192.168.1.18:/export/edi-homes/kitty
209715200 208521578 1193622 100% /osc/home/kitty

Now you have some wiggle room to tidy up some more files, all without having to bother your grumpy sysadmin.