
Expanding RAID volume on NAS326 is not working

Gunslinger Posts: 5  Junior Member
edited June 2019 in Discussions
My NAS326 previously had two 1.5TB disks in RAID 1. After one of them died, I decided to replace them both with 2TB WD Red disks. I first replaced the broken disk and repaired the RAID volume, then replaced the other disk and repaired the volume again. I then had a fully functioning 1.5TB RAID 1 volume on 2TB disks. Since then, I've tried to expand the volume to 2TB, but that doesn't seem to work.

I've tried restarting the expansion several times, but it seems to stay in "Expanding" status forever. The Storage Manager overview says "Volume1 is expanding." If I check the volume under Internal Storage, its status is simply "Expanding". There's a spinner running, but no time estimate or percentage information available.

The first time I restarted the expansion after a day, the second time after a week, the third time after three weeks. Now the expansion has been running for a solid month. I'm getting pretty confident that the expansion is not working at all and will not finish no matter how long I wait. Any ideas how to proceed / get things working?


Tagged: #NAS_Jun_2019

Comments

  • Mijzelf Posts: 1,073  Heroic Warrior Member
    Can you enable the ssh server, log in over ssh, and post the output of
    cat /proc/partitions

  • Gunslinger Posts: 5  Junior Member
    Sure! Here it goes:

    cat /proc/partitions
    major minor  #blocks  name
    
       7        0     146432 loop0
      31        0       2048 mtdblock0
      31        1       2048 mtdblock1
      31        2      10240 mtdblock2
      31        3      15360 mtdblock3
      31        4     108544 mtdblock4
      31        5      15360 mtdblock5
      31        6     108544 mtdblock6
       8        0 1953514584 sda
       8        1    1998848 sda1
       8        2    1999872 sda2
       8        3 1949514752 sda3
       8       16 1953514584 sdb
       8       17    1998848 sdb1
       8       18    1999872 sdb2
       8       19 1949514752 sdb3
       9        0    1997760 md0
       9        1    1998784 md1
       9        2 1949383680 md2
    
    cat /proc/mdstat
    Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4]
    md2 : active raid1 sda3[3] sdb3[2]
          1949383680 blocks super 1.2 [2/2] [UU]
    
    md1 : active raid1 sda2[3] sdb2[2]
          1998784 blocks super 1.2 [2/2] [UU]
    
    md0 : active raid1 sda1[3] sdb1[2]
          1997760 blocks super 1.2 [2/2] [UU]
    
    unused devices: <none>
    
    su
    mdadm --examine /dev/sd[ab]3
    /dev/sda3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : ae61f12f:b8df7792:b05fd5f6:8c108264
               Name : NAS326:2
      Creation Time : Sun Jan 22 22:37:47 2017
         Raid Level : raid1
       Raid Devices : 2
    
     Avail Dev Size : 3898767360 (1859.08 GiB 1996.17 GB)
         Array Size : 1949383680 (1859.08 GiB 1996.17 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : 4a7ca21a:b05d62be:070099a4:fc8f962e
    
        Update Time : Mon Jun 17 11:34:46 2019
           Checksum : 5ec43eb8 - correct
             Events : 900
    
    
       Device Role : Active device 0
       Array State : AA ('A' == active, '.' == missing)
    /dev/sdb3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : ae61f12f:b8df7792:b05fd5f6:8c108264
               Name : NAS326:2
      Creation Time : Sun Jan 22 22:37:47 2017
         Raid Level : raid1
       Raid Devices : 2
    
     Avail Dev Size : 3898767360 (1859.08 GiB 1996.17 GB)
         Array Size : 1949383680 (1859.08 GiB 1996.17 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : 664c54ec:9b44985a:79ab8811:2c6fdd0c
    
        Update Time : Mon Jun 17 11:34:46 2019
           Checksum : 17e28060 - correct
             Events : 900
    
    
       Device Role : Active device 1
       Array State : AA ('A' == active, '.' == missing)


  • Mijzelf Posts: 1,073  Heroic Warrior Member
    The partitions and the raid array are already resized. So only the filesystem has to be done.
    su
    resize2fs /dev/md2
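    If you want to check that first, something along these lines should show the array already at its new size while the filesystem is still at the old one (assuming mdadm and dumpe2fs are present in the firmware; md2 is the device from your mdstat output):
    mdadm --detail /dev/md2 | grep 'Array Size'
    dumpe2fs -h /dev/md2 | grep -i 'block count'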

  • Gunslinger Posts: 5  Junior Member
    I tried that, but I'm getting an error:
    ~ # resize2fs /dev/md2
    resize2fs 1.42.12 (29-Aug-2014)
    The filesystem can be resize to 487345920 blocks.chk_expansible=0
    
    Filesystem at /dev/md2 is mounted on /i-data/ae61f12f; on-line resizing required
    old_desc_blocks = 88, new_desc_blocks = 117
    resize2fs: Permission denied to resize filesystem
    
    And when I try to unmount the drive, I get another error:
    ~ # umount /dev/md2
    umount: /i-data/ae61f12f: target is busy
            (In some cases useful info about processes that
             use the device is found by lsof(8) or fuser(1).)
    
    even though I have disabled the sharing on my local network. 
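    The error message points at lsof(8) and fuser(1); assuming fuser is on the box, would something like
    fuser -m /i-data/ae61f12f
    show whatever is still holding the volume? (The mount point is the one from the resize2fs output.)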


  • Mijzelf Posts: 1,073  Heroic Warrior Member
    In that case there is a filesystem error, so resize2fs wants to run fsck first, and for that the filesystem has to be unmounted.

    That is a bit hard, but you can let the firmware do it. Edit the file /etc/init.d/rc.shutdown. The box has one usable editor, vi. It's a nasty editor. You can get to edit mode by pressing i. After having made your adjustments, press <ESC>:wq to save the file and exit the editor.

    Search for '# swapoff'. Below that line add
    /sbin/telnetd
    Save the file, and shut down the box (command poweroff). After you've lost your ssh connection, you should be able to log in again over telnet.
    Now execute
    umount /i-data/sysvol
    After that you can continue shutdown with
    killall -9 /bin/sh
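    If the firmware does not finish the expansion by itself on the next boot, the filesystem check and resize can in principle also be run by hand from that telnet shell, before the killall, while the volume is unmounted:
    e2fsck -f /dev/md2
    resize2fs /dev/md2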


  • Gunslinger Posts: 5  Junior Member
    Everything worked like a charm, right until the last step! After executing that
    killall -9 /bin/sh
    I just got a message that there were no processes to stop. After that, I tried to continue the shutdown with 'poweroff'. The telnet connection to the NAS seemed to disconnect, but the NAS itself did not shut down. Can I just force a shutdown with the power button, or how should I restart it so I don't mess up the file system?
  • Mijzelf Posts: 1,073  Heroic Warrior Member
    You can simply cut the power. The filesystem is not mounted.
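    If you want to be sure, something like
    grep md2 /proc/mounts
    should come back empty before you cut the power.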
  • Gunslinger Posts: 5  Junior Member
    Yup, that's it. Working like a charm now. Thank you so, so much!
  • Tukemoni Posts: 5  Junior Member
    Hi. I'm having the same issue that Gunslinger had. I was able to follow the instructions all the way to the point where Mijzelf asks to edit the rc.shutdown file. Could you give more detailed instructions on how to do it? How to find the file, how to edit it... I'm a real newbie with these things. Do I also need to enable the telnet service from the control panel of the NAS to be able to log in over telnet? Thank you.
  • Mijzelf Posts: 1,073  Heroic Warrior Member
    It doesn't matter whether you have enabled the telnet daemon or not. It's stopped at shutdown anyway. The injected code will be executed /after/ the firmware daemons have been stopped.

    BTW, this stupid forum software has removed part of the instructions. The code to add is

    /sbin/telnetd
    /bin/sh

    The first line starts a telnet daemon, the second a shell (on the serial port, I think), which is blocking. Without that 2nd line a telnet daemon is started, but the box just powers off.

    About finding and editing that file, the command to do so is

    vi /etc/init.d/rc.shutdown

    and a brief instruction is in the post above.
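    After the edit, the relevant piece of rc.shutdown should look roughly like this (the surrounding lines differ per firmware version; only the two added lines matter):
    # swapoff
    /sbin/telnetd
    /bin/sh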


  • Tukemoni Posts: 5  Junior Member
    Hi. Thanks for the advice. I managed to edit and save the file, but for some reason I'm not able to log in over telnet after the 'poweroff' command. Any further help is highly welcome.
  • Tukemoni Posts: 5  Junior Member
    User mistake as usual. Managed to get in over telnet and got the expansion to work. Thank you Mijzelf!
  • Joonas Posts: 1
    Got it working, replacing my 2x4TB drives with two new 10TB drives.
    Maybe some steps can be skipped, but for good measure I'm going to show everything I did.

    1. Removed one of the HDDs from the NAS and inserted a new HDD in its place. In the admin-panel (//NAS326) I could see the progress. It took roughly 8 hours.
    2. Repeated with the second disk. Kept the old disks safe in case of failure.
    3. Enabled SSH (and Telnet just in case) on the NAS using the admin-panel.
    4. Installed Putty.
    5. Entered the NAS IP (without any prefix), Port 22 and Connection type: SSH in Putty. The IP can be found in the admin-panel --> Control Panel.
    6. Logged in with the admin account in the command window that popped up.
    7. Typed "su" and had to provide the admin password again.
    8. Pasted and ran "vi /etc/init.d/rc.shutdown".
    9. Pressed "i" on the keyboard.
    10. Looked for "#swapoff" and pasted these lines after that line:
      /sbin/telnetd
      /bin/sh
    11. Pressed the ESC key and then typed and ran ":wq".
    12. Closed Putty.
    13. Opened Putty once again, using the same IP and Port, but selected "Telnet" instead of "SSH" as Connection type. A new command window opened and I had to log in again. I probably ran step 7 again.
    14. Pasted and ran "umount /i-data/sysvol"
    15. Pasted and ran "e2fsck -f /dev/md2". Action took a few minutes.
    16. Pasted and ran "resize2fs /dev/md2". This action took a while longer.
    17. Pasted and ran "killall -9 /bin/sh".
    18. Restarted NAS by pressing the physical power button.
    19. Logged in to the admin panel and could confirm the resize. Disabled "SSH" and "Telnet" again.
    20. Made 10 cheers for Mijzelf and Gunslinger.
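
    For reference, the shell part of the list above (steps 7 to 17) boils down to roughly this; the md device name (md2 here) and the mount point may differ on other setups:
      su
      vi /etc/init.d/rc.shutdown    # add /sbin/telnetd and /bin/sh below the '# swapoff' line
      # then reconnect over telnet and:
      umount /i-data/sysvol
      e2fsck -f /dev/md2
      resize2fs /dev/md2
      killall -9 /bin/sh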

