ZFS Grow rpool disk
wordpress meta
title: 'ZFS Grow rpool disk'
date: '2012-12-07T01:35:14-06:00'
status: publish
permalink: /zfs-grow-rpool-disk
author: admin
excerpt: ''
type: post
id: 144
category:
- Solaris
tag: []
post_format: []
Growing disks for virtual machines has become pretty trivial with tools like live CDs and GParted. Recently I had to grow my Solaris 11 disk from 16GB to 20GB. And of course on Solaris it's a ZFS volume.
I don't think GParted can resize the Solaris2 partitions used by Solaris 11, so I did the resize on a running system using format. There might be a better way, and I advise you NOT to do this on a critical system. Nonetheless it worked for me on a VirtualBox as well as a KVM virtual machine.
Resizing the disk on the host side is out of scope, and there are plenty of ways to accomplish it, for instance lvextend when using LVM (a rough sketch below). In this case I documented the resize as performed with VirtualBox.
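For an LVM-backed KVM guest, the host-side part would look roughly like this; the volume group, logical volume and domain names here are made up, so adjust them to your environment:
$ lvextend -L +4G /dev/vg0/solaris11
$ virsh blockresize solaris11 vda 20G
The blockresize step just tells a running guest about the new size; a simple guest reboot should also do it.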
Also note this only worked on Solaris x86. On SPARC there is no expand option for the partition in the format tool. There is a way to resize a system disk, but it is pretty painful; search my blog for "Growing Solaris LDOM rpool".
Resize the disk:
$ vboxmanage showhdinfo Solaris11.vdi
Logical size: 16384 MBytes
Current size on disk: 9818 MBytes
$ vboxmanage modifyhd Solaris11.vdi --resize 20000
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
$ vboxmanage showhdinfo Solaris11.vdi
Logical size: 20000 MBytes
Current size on disk: 9819 MBytes
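For a KVM guest with a file-backed image, the equivalent step would be qemu-img resize; the image name here is just an example:
$ qemu-img resize Solaris11.qcow2 20000M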
Information before disk resize:
root@solaris:~# uname -a
SunOS solaris 5.11 11.1 i86pc i386 i86pc
root@solaris:~# zpool status rpool
pool: rpool
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
c3t0d0s0 ONLINE 0 0 0
root@solaris:~# df -h | grep rpool
rpool/ROOT/solaris-2 16G 3.8G 7.0G 36% /
root@solaris:~# format
AVAILABLE DISK SELECTIONS:
0. c3t0d0 <ATA-VBOX HARDDISK-1.0 cyl 2085 alt 2 hd 255 sec 63>
/pci@0,0/pci8086,2829@d/disk@0,0
Specify disk (enter its number): 0
selecting c3t0d0
[disk formatted]
/dev/dsk/c3t0d0s0 is part of active ZFS pool rpool. Please see zpool(1M).
...
partition> pr
Current partition table (original):
Total disk cylinders available: 2085 + 2 (reserved cylinders)
Part Tag Flag Cylinders Size Blocks
0 root wm 1 - 2084 15.96GB (2084/0/0) 33479460
1 unassigned wm 0 0 (0/0/0) 0
2 backup wu 0 - 2086 15.99GB (2087/0/0) 33527655
8 boot wu 0 - 0 7.84MB (1/0/0) 16065
...
Total disk size is 2088 cylinders
Cylinder size is 16065 (512 byte) blocks
Cylinders
Partition Status Type Start End Length %
========= ====== ============ ===== === ====== ===
1 Active Solaris2 1 2087 2087 100
Physical disk information (from format's fdisk menu) after the resize at the host level:
...
Total disk size is 2549 cylinders
Cylinder size is 16065 (512 byte) blocks
Cylinders
Partition Status Type Start End Length %
========= ====== ============ ===== === ====== ===
1 Active Solaris2 1 2087 2087 82
Tell the OS about the new size using expand:
...
partition> expand
Expansion of label cannot be undone; continue (y/n) ? y
The expanded capacity was added to the disk label.
Disk label was written to disk.
partition> pr
Current partition table (original):
Total disk cylinders available: 2546 + 2 (reserved cylinders)
Part Tag Flag Cylinders Size Blocks
0 root wm 1 - 2084 15.96GB (2084/0/0) 33479460
2 backup wu 0 - 2086 15.99GB (2087/0/0) 33527655
8 boot wu 0 - 0 7.84MB (1/0/0) 16065
Make the changes to the partition table, growing slice 0 into the new cylinders. I removed the "backup" slice as well since I don't need it. (I trimmed the interactive steps; a rough sketch follows after the session.)
...
partition> pr
Current partition table (unnamed):
Total disk cylinders available: 2546 + 2 (reserved cylinders)
Part Tag Flag Cylinders Size Blocks
0 root wm 1 - 2545 19.50GB (2545/0/0) 40885425
8 boot wu 0 - 0 7.84MB (1/0/0) 16065
partition> label
Ready to label disk, continue? y
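For reference, the interactive slice edit I trimmed above goes roughly like this in format's partition menu (prompts quoted from memory, sizes matching this disk):
partition> 0
Enter partition id tag[root]: root
Enter partition permission flags[wm]: wm
Enter new starting cyl[1]: 1
Enter partition size[33479460b, 2084c, 2084e, 16347.39mb, 15.96gb]: 2545c
Dropping the backup slice works the same way: select slice 2 and give it a size of 0.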
Finally scrub the pool to make sure everything is still healthy, then grow ZFS:
root@solaris:~# zpool scrub rpool
root@solaris:~# zpool status rpool
pool: rpool
state: ONLINE
scan: scrub in progress since Fri Dec 7 14:21:25 2012
11.8M scanned out of 8.57G at 670K/s, 3h43m to go
0 repaired, 0.13% done
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
c3t0d0s0 ONLINE 0 0 0
root@solaris:~# zpool set autoexpand=on rpool
root@solaris:~# zpool get all rpool | grep size
rpool size 19.4G
root@solaris:~# df -h | grep ROOT
rpool/ROOT/solaris-2 19G 3.8G 10G 27% /
rpool/ROOT/solaris-2/var 19G 955M 10G 9% /var
root@solaris:~# zpool set autoexpand=off rpool
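As a side note: instead of toggling autoexpand, you can have ZFS claim the extra space on a single device with zpool online -e, which should achieve the same thing:
root@solaris:~# zpool online -e rpool c3t0d0s0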