
Expanding volume after hard disks upgrade

badkernel  Posts: 5  Junior Member
edited December 2019 in Questions
Hello,
I've recently replaced all 4 disks, going from 1 TB to 4 TB, and rebuilt the RAID5 array (replacing one disk at a time).
It took 5 days and several retries to get it done (for some reason the rebuild would fail and I had to restart the NAS to finish it). In the end it worked out fine.
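For anyone attempting the same swap from the shell instead of the GUI, one round of the replacement looks roughly like this. This is only a sketch with hypothetical names (sda as the disk being swapped, md2 as the data array — the NAS540 also mirrors its system partitions on md0/md1, which need the same treatment), and the commands are guarded behind a dry-run flag because they are destructive:

```shell
#!/bin/sh
# Sketch of one disk-swap round (hypothetical names: sda = disk being
# replaced, md2 = data array). DRY_RUN=1 only prints the commands;
# set it to 0 on the NAS once you are sure of the device names.
DRY_RUN=1
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

DISK=sda
run mdadm /dev/md2 --fail /dev/${DISK}3     # mark the old member failed
run mdadm /dev/md2 --remove /dev/${DISK}3   # drop it from the array
# ...power off, physically swap the disk, partition it like the others...
run mdadm /dev/md2 --add /dev/${DISK}3      # start the rebuild
run cat /proc/mdstat                        # watch progress; repeat per disk
```

The dry-run wrapper is just a safety habit for destructive mdadm work: you can eyeball the printed plan before letting it touch the array.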

Now I'm trying to expand the volume, but it's not completing; the status looks like this:

[screenshot of the expansion status]

It seems to be working on it, but it eventually fails without providing any details :(

cat /proc/mdstat shows this

~ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md2 : active raid5 sda3[7] sdd3[4] sdc3[5] sdb3[6]
      11708660736 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

md1 : active raid1 sda2[7] sdd2[4] sdc2[5] sdb2[6]
      1998784 blocks super 1.2 [4/4] [UUUU]

md0 : active raid1 sda1[7] sdd1[4] sdc1[5] sdb1[6]
      1997760 blocks super 1.2 [4/4] [UUUU]

unused devices: <none>

cat /proc/partitions

major minor  #blocks  name

   7        0     147456 loop0
  31        0        256 mtdblock0
  31        1        512 mtdblock1
  31        2        256 mtdblock2
  31        3      10240 mtdblock3
  31        4      10240 mtdblock4
  31        5     112640 mtdblock5
  31        6      10240 mtdblock6
  31        7     112640 mtdblock7
  31        8       6144 mtdblock8
   8        0 3907018584 sda
   8        1    1998848 sda1
   8        2    1999872 sda2
   8        3 3903017984 sda3
   8       16 3907018584 sdb
   8       17    1998848 sdb1
   8       18    1999872 sdb2
   8       19 3903017984 sdb3
   8       32 3907018584 sdc
   8       33    1998848 sdc1
   8       34    1999872 sdc2
   8       35 3903017984 sdc3
   8       48 3907018584 sdd
   8       49    1998848 sdd1
   8       50    1999872 sdd2
   8       51 3903017984 sdd3
  31        9     102424 mtdblock9
   9        0    1997760 md0
   9        1    1998784 md1
  31       10       4464 mtdblock10
   9        2 11708660736 md2

mdadm --examine /dev/sd[abcd]3

/dev/sda3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 0c7deebc:1fade3f4:1bde3e73:4bcee8ea
           Name : NAS540:2
  Creation Time : Sun Jun 21 04:36:36 2015
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 7805773824 (3722.08 GiB 3996.56 GB)
     Array Size : 11708660736 (11166.25 GiB 11989.67 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 534058d0:e0b0f646:29a981ce:5791da0b

    Update Time : Wed Dec 11 19:29:26 2019
       Checksum : 682271bd - correct
         Events : 1366

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 0
   Array State : AAAA ('A' == active, '.' == missing)

/dev/sdb3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 0c7deebc:1fade3f4:1bde3e73:4bcee8ea
           Name : NAS540:2
  Creation Time : Sun Jun 21 04:36:36 2015
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 7805773824 (3722.08 GiB 3996.56 GB)
     Array Size : 11708660736 (11166.25 GiB 11989.67 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : f522056d:705545a6:8f06f374:21d3820b

    Update Time : Wed Dec 11 19:29:26 2019
       Checksum : a37981e - correct
         Events : 1366

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 1
   Array State : AAAA ('A' == active, '.' == missing)

/dev/sdc3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 0c7deebc:1fade3f4:1bde3e73:4bcee8ea
           Name : NAS540:2
  Creation Time : Sun Jun 21 04:36:36 2015
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 7805773824 (3722.08 GiB 3996.56 GB)
     Array Size : 11708660736 (11166.25 GiB 11989.67 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 98c36445:3c29e2e1:1c67c26c:cf29705a

    Update Time : Wed Dec 11 19:29:26 2019
       Checksum : 64f0c3c7 - correct
         Events : 1366

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 2
   Array State : AAAA ('A' == active, '.' == missing)

/dev/sdd3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 0c7deebc:1fade3f4:1bde3e73:4bcee8ea
           Name : NAS540:2
  Creation Time : Sun Jun 21 04:36:36 2015
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 7805773824 (3722.08 GiB 3996.56 GB)
     Array Size : 11708660736 (11166.25 GiB 11989.67 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 014e806b:6a0740ae:172599c5:a7884f0e

    Update Time : Wed Dec 11 19:29:26 2019
       Checksum : 64204930 - correct
         Events : 1366

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 3
   Array State : AAAA ('A' == active, '.' == missing)
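As a sanity check, the numbers above show the array itself has already grown to the new disk size, so only the filesystem still needs expanding. Avail Dev Size is given in 512-byte sectors, so halving it gives KB per member, and RAID5 over 4 disks stores data on 3 of them (numbers copied from the output above):

```shell
# Avail Dev Size is 7805773824 sectors (512 B each) per member -> KB:
per_member_kb=$(( 7805773824 / 2 ))   # 3902886912 KB usable per disk
# RAID5 over 4 disks keeps data on 3 members, parity on the 4th:
array_kb=$(( 3 * per_member_kb ))
echo "$array_kb"                      # 11708660736 -> matches md2 in mdstat
```

So the mdadm layer is fine; the failure is purely in the filesystem resize step.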


#NAS_Dec_2019

Best Answer

  • badkernel  Posts: 5  Junior Member
    Accepted Answer
    @Mijzelf
My suspicion was the same, and here is what I did (so that others can save some time).

First I killed all processes that were using /dev/md2:

lsof | grep '/i-data/0c7deebc/' -> lists all the processes that need to be killed
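To turn that lsof output into an actual kill list, the PID column (second field) can be extracted and de-duplicated. The snippet below demonstrates the extraction on a fake lsof sample (the process names, PIDs, and file paths are made up); on the real box you would feed the live lsof | grep output into the same awk/sort pipeline:

```shell
# Fake lsof sample (made-up PIDs/paths) just to show the extraction:
sample='smbd     1234 root  5u REG 9,2 0 12 /i-data/0c7deebc/music/a.mp3
smbd     1234 root  6u REG 9,2 0 13 /i-data/0c7deebc/docs/b.txt
minidlna  987 root  7u REG 9,2 0 14 /i-data/0c7deebc/video/c.avi'

# Field 2 is the PID; sort -un sorts numerically and de-duplicates:
printf '%s\n' "$sample" | awk '{print $2}' | sort -un
# On the NAS the live pipeline would be:
#   lsof | grep '/i-data/0c7deebc/' | awk '{print $2}' | sort -un | xargs kill
```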

After that I unmounted /dev/md2 and checked for errors:

1. umount /dev/md2
2. checked that it's no longer mounted with df -h
3. checked for errors with e2fsck /dev/md2 (and there were a few, to say the least)
4. then resized it with resize2fs /dev/md2
5. checked for errors again with e2fsck /dev/md2
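Put together, the steps above make a short script. This is a dry-run sketch rather than something to paste blindly: the device name comes from this thread, and running it for real requires that nothing is still using the volume:

```shell
#!/bin/sh
# Dry-run sketch of the offline resize procedure from the steps above.
# DRY_RUN=1 only prints the commands; set it to 0 on the NAS after
# everything holding the volume open has been killed.
DRY_RUN=1
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

DEV=/dev/md2
run umount "$DEV"       # 1. unmount the volume
run df -h               # 2. confirm it is no longer listed
run e2fsck -f "$DEV"    # 3. repair first (-f forces a full check;
                        #    resize2fs wants a freshly checked fs)
run resize2fs "$DEV"    # 4. grow the fs to fill the already-grown array
run e2fsck -f "$DEV"    # 5. verify again before rebooting/remounting
```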

    resize2fs /dev/md2

    resize2fs 1.42.12 (29-Aug-2014)
    The filesystem can be resize to 2927165184 blocks.chk_expansible=0
    Resizing the filesystem on /dev/md2 to 2927165184 (4k) blocks.
    The filesystem on /dev/md2 is now 2927165184 (4k) blocks long.

    e2fsck /dev/md2

    e2fsck 1.42.12 (29-Aug-2014)
    /dev/md2: clean, 211994/731791360 files, 612973016/2927165184 blocks
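One last arithmetic cross-check: after the resize, the filesystem's 2927165184 4 KB blocks should span the whole array, which /proc/mdstat reports in 1 KB blocks:

```shell
fs_kb=$(( 2927165184 * 4 ))   # ext filesystem size converted to KB
echo "$fs_kb"                 # 11708660736 -> exactly md2's size in mdstat
```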


Then I restarted the NAS (reboot command) and voilà: new volume size and one happy camper.


It's safe to say that Zyxel needs to work on the GUI so that "normal" users can do this without falling back on the Linux command line and doing things the "old" way :)


