Extending AWS EBS volume size

Linux OS (CentOS 7)

Extend volume size

# e.g. modify the root volume size from 10 GiB to 20 GiB
$ aws ec2 modify-volume --volume-id vol-xxx --size 20 --region ap-northeast-1
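
The resize is applied asynchronously, so it can help to wait until the modification reports the optimizing or completed state before touching the OS. A minimal check using the same placeholder volume ID:

# Watch the modification progress of the volume
$ aws ec2 describe-volumes-modifications --volume-ids vol-xxx --region ap-northeast-1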

Check file system

$ sudo file -s /dev/xvd*
/dev/xvda:  x86 boot sector; partition 1: ID=0x83, active, starthead 32, startsector 2048, 20969439 sectors, code offset 0x63
/dev/xvda1: SGI XFS filesystem data (blksz 4096, inosz 512, v2 dirs)
/dev/xvdb:  Linux rev 1.0 ext4 filesystem data, UUID=xxx (needs journal recovery) (extents) (64bit) (large files) (huge files)
/dev/xvdba: LVM2 PV (Linux Logical Volume Manager), UUID: xxx, size: 54760833024
/dev/xvdbb: LVM2 PV (Linux Logical Volume Manager), UUID: xxx, size: 54760833024
/dev/xvdbc: LVM2 PV (Linux Logical Volume Manager), UUID: xxx, size: 10737418240
/dev/xvdbd: LVM2 PV (Linux Logical Volume Manager), UUID: xxx, size: 10737418240
/dev/xvdc:  ASCII text, with no line terminators
  • xvdba and xvdbb are 50 GiB volumes combined into an LVM (RAID0) array (gp2 class, roughly 1000 IOPS); a rough sketch of building such a striped LV follows this list.
    • RAID0 is acceptable here because the data is replicated to another instance.
    • According to AWS, RAID5 and RAID6 are not recommended on Amazon EBS because part of the IOPS available to the volumes is consumed by RAID parity write operations.
      • Depending on the RAID configuration, the IOPS available to the array can be 20-30% lower than with RAID0.
    • A two-volume RAID0 array of equivalent size and speed can outperform a four-volume RAID6 array that costs twice as much.
    • Overdoing RAID0 is also likely to raise the failure rate, since losing any single volume loses the whole array.
  • /dev/xvda1 is the root (boot) volume with an XFS file system.
  • The rest are additional volumes with ext4 file systems.
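
For reference, a striped (RAID0-style) LV across the two PVs can be built roughly as follows. This is only a sketch, not the exact commands used for this array: the VG/LV names reuse the placeholders above and the 64 KiB stripe size is an arbitrary choice.

# Initialize both disks as PVs and group them into one VG
$ sudo pvcreate /dev/xvdba /dev/xvdbb
$ sudo vgcreate vg_xxx /dev/xvdba /dev/xvdbb
# Create a single LV striped across the two PVs (-i 2) using all free extents
$ sudo lvcreate -i 2 -I 64 -l 100%FREE -n lv_xxx vg_xxx
# Put an ext4 file system on it
$ sudo mkfs.ext4 /dev/vg_xxx/lv_xxx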

Check disk status

$ lsblk      # List block devices, partitions, and mount points
$ pvdisplay  # Check physical volume size
$ vgdisplay  # Check volume group size
$ lvdisplay  # Check logical volume size

Check which partitions and logical volumes need to be extended, and where they are mounted.

# The xvda1 partition needs to be extended.
$ lsblk
NAME             MAJ:MIN    RM  SIZE RO TYPE MOUNTPOINT
xvda             202:0       0   20G  0 disk            # root volume
└─xvda1          202:1       0   10G  0 part /          # partition
xvdba            202:13312   0   50G  0 disk
└─vg_xxx-lv_xxx  253:2       0  100G  0 lvm  /xxx
xvdbb            202:13568   0   50G  0 disk
└─vg_xxx-lv_xxx  253:2       0  100G  0 lvm  /xxx
xvdbc            202:13824   0   10G  0 disk
└─vg_yyy-lv_yyy  253:5       0   10G  0 lvm  /yyy
xvdbd            202:14080   0   10G  0 disk
└─vg_zzz-lv_zzz  253:4       0   10G  0 lvm  /zzz

Extend partition

  • Expanding the size of an attached EBS volume does not expand the partition by itself.
  • Before extending the file system on the resized volume, check whether the partition needs to be extended.
    • If the new volume size is not reflected in the partition, extend the partition first.
  • Here the root volume is extended.
  • EBS volumes can be expanded online, and on Linux the partition can be extended with the growpart command.

# Extend partition 1 of the block device, then recheck with lsblk to
# confirm that the extended volume size is reflected in the partition.
$ sudo growpart /dev/xvda 1
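
Rechecking with lsblk afterwards should show xvda1 spanning the full 20G disk:

# Confirm the partition now matches the disk size
$ lsblk /dev/xvda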

Extend file system

  • Extend the file system to match the new volume size.
  • resize2fs cannot be used on the XFS file system that CentOS 7 uses for the root volume.
    • Use xfs_growfs instead (a quick way to check the file system type is shown below).
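
If it is unclear which tool applies, df -T prints the file system type for each mount point:

# Show file system types alongside usage
$ df -hT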

Extend XFS file system

# Extend XFS file system (root)
$ sudo xfs_growfs -d /
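
To double-check the result, xfs_info prints the XFS geometry (data block count) for the mounted file system:

# Inspect the XFS geometry after growing
$ sudo xfs_info /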

Extend ext2, ext3, ext4 file system

Change the physical volume size with LVM (Logical Volume Manager).

$ pvresize /dev/xvdba
  Physical volume "/dev/xvdba" changed
$ pvresize /dev/xvdbb
  Physical volume "/dev/xvdbb" changed
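
As a quick check after pvresize, pvs and vgs should show the larger PV size and the additional free space (VFree) in the volume group:

# Summarize physical volumes and volume groups
$ sudo pvs
$ sudo vgs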

Extend the logical volume.

$ lvextend -l +100%FREE /dev/vg_xxx/lv_xxx

Extend the file system on each volume.

$ resize2fs /dev/vg_xxx/lv_xxx
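
As an aside, lvextend can extend the LV and resize its file system in one step with -r (--resizefs), which calls the appropriate resize tool internally. A sketch, not what was run here:

# Extend the LV and its file system together
$ sudo lvextend -r -l +100%FREE /dev/vg_xxx/lv_xxx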

Check file system size

Confirm that the extended volume size is reflected in each file system.

$ sudo df -h
Filesystem                 Size  Used Avail Use% Mounted on
/dev/xvda1                  20G  9.2G   11G  46% /
/dev/mapper/vg_xxx-lv_xxx  100G   41G   50G  44% /xxx
/dev/mapper/vg_yyy-lv_yyy  9.8G  434M  8.8G   5% /yyy
/dev/mapper/vg_zzz-lv_zzz   99G  9.7G   84G  11% /zzz
# Check disk condition
$ sudo fdisk -l

Disk /dev/xvda: 20 GB, 20737418240 bytes, 40971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000abb0d

    Device Boot      Start         End      Blocks   Id  System
/dev/xvda1   *        2048    20971486    10484719+  83  Linux

Disk /dev/xvdba: 54.6 GB, 54760833024 bytes, 106954752 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/xvdbd: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/xvdbb: 54.6 GB, 54760833024 bytes, 106954752 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/xvdbc: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/vg_xxx-lv_xxx: 109.2 GB, 109520827187 bytes, 213907865 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 131072 bytes

Disk /dev/mapper/vg_zzz-lv_zzz: 10.7 GB, 10733223936 bytes, 20963328 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/vg_yyy-lv_yyy: 107.4 GB, 107369988096 bytes, 209707008 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Windows (Server 2016)

Follow the steps in https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/recognize-expanded-volume-windows.html.

  • Change the EBS volume size.
  • Extend the volume:
    • Disk Management > Action > Rescan Disks.
    • Disk Management > right-click the C/D.. drive > Extend Volume.

Aside

If the volume is modified while the EC2 instance is stopped, just running modify-volume also extends the partition and the file system (as the output below shows, there is nothing left to grow manually).

# Extended the root volume from 10 GB to 20 GB while the EC2 instance was stopped
$ df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs         60G     0   60G   0% /dev
tmpfs            60G   32M   60G   1% /dev/shm
tmpfs            60G   17M   60G   1% /run
tmpfs            60G     0   60G   0% /sys/fs/cgroup
/dev/xvda1       20G   15G  5.1G  75% /

$ sudo growpart /dev/xvda 1
NOCHANGE: partition 1 is size 41940959. it cannot be grown

$ sudo xfs_growfs -d /
meta-data=/dev/xvda1             isize=512    agcount=11, agsize=508800 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=5242619, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data size unchanged, skipping

References