
NAS520: Restore access to lost volume/disk 1 [SOLVED]

ariek Posts: 10  Junior Member
edited November 2019 in Questions
I have lost access to volume 1/disk 1/sysvol. I can log on to the NAS520, but when I choose "Storage Manager" it tells me that there is "No Storage" on the internal volume (external drives are not connected). Obviously, I could create a new volume, but then all data on the HDD would be lost.

However, when I click on other icons in the webgui, everything that is on the HDD is visible/present.


#NAS_Nov_2019


Comments

  • lodiabai Posts: 94  Warrior Member
    What RAID type did you use? Do you have any screenshots?

    When you use the File Browser of the NAS520, or CIFS, FTP, etc., are you able to access your data?
    If yes, I suggest you back up your data first.
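    If the data is still reachable over one of those protocols, a plain recursive copy to another disk is enough. A minimal sketch, assuming the share and the backup disk are both mounted somewhere (the directory names here are stand-ins, not real mount points):

```shell
# Stand-ins for the real mount points (hypothetical names):
# nas_share = where the NAS data is reachable, backup = the backup disk.
mkdir -p nas_share backup
echo "important data" > nas_share/file.txt   # stand-in for the real files
# -a preserves permissions, timestamps and symlinks during the copy
cp -a nas_share/. backup/
cat backup/file.txt   # → important data
```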
  • ariek Posts: 10  Junior Member
    edited November 2019
    It is only 1 HDD, in a JBOD structure, which to my knowledge is just a plain ext4 drive.
    The status is LOST (internal HDD), and I can't access the data because the volume doesn't exist (it is not recognized). I would have to create a new volume, but that will format the drive and all data will be lost.

    I inspected the HDD with MiniTool Partition Manager (note: I have changed some partitions from primary to logical).
    When I hook up the drive as an external drive I can access the drive content: the 'main partition' [with all my files, etc.] and the 'system partition' (which contains a system.img).
    The file structure:
    I'm able to access the drive content when the HDD is mounted as an external drive (USB1) but I can't when the drive is an internal drive, as the drive is not mounted or not recognized as a valid volume.
    If only I could fix the mounting of the internal HDD. Theoretically, the NAS should be functioning just fine.

    I can connect to the NAS over telnet/SSH but only have access to the ram drive.



  • Mijzelf Posts: 1,259  Paragon Member
    (note: I have changed some partitions from primary to logical)

    Why? In that case the disk is no longer recognized as an internal disk. At boot, this script is run to check whether it's a valid internal disk:

    #action: check is legal disk partition format
    It checks partitions 1 and 2 for their sizes. The problem is that primary partitions are numbered 1..4, while logical partitions are numbered 5 and up, so this disk is no longer seen as valid. Can you change that back?
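    As a sketch of what that check amounts to (the LEGAL_FW_SIZ name appears later in this thread; the rest is an assumption about how the firmware script behaves, not its actual code):

```shell
# Assumed shape of the boot-time check: compare the sector count of
# partition 1 against the size the firmware expects.
LEGAL_FW_SIZ=3997696      # expected size of partition 1, in sectors
part1_size=3995520        # example: the size reported for the broken disk
if [ "$part1_size" -eq "$LEGAL_FW_SIZ" ]; then
    echo "legal disk partition format"
else
    echo "illegal: partition 1 has $part1_size sectors, expected $LEGAL_FW_SIZ"
fi
```

    With logical partitioning, partitions 1 and 2 don't even exist under those numbers (the logical ones start at 5), so the check fails before sizes are ever compared.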
  • ariek Posts: 10  Junior Member
    edited November 2019
    I can change the partitions back from logical to primary with MiniTool Partition Manager.
    After changing them, the HDD is still not recognized as an internal HDD/volume.
    # cat /proc/mdstat
    # fdisk -l
    I'm not sure if the partition IDs (83) are correct.






  • Mijzelf Posts: 1,259  Paragon Member
    Hm. That didn't work out as intended. Somehow the size of partition 1 changed when converting to logical and back, and somehow your 2nd partition is lost.
    /dev/sda1          4096    3999615    3995520  1.9G 83 Linux
    The script wants another size:
    LEGAL_FW_SIZ=3997696        #2047MB using parted created
    and indeed my 520 has these sizes:
    Device       Start        End    Sectors   Size Type
    Have you run some partition seek tool on this disk? It is striking that my 1st partition starts on sector 2048, while yours starts on 4096. But mine has a raid array header of 2048 sectors, and the data size of the array is 3995520, which exactly matches your partition size.

    mdadm --examine /dev/sda1
       Device Role : Active device 0
       Array State : A. ('A' == active, '.' == missing)
    So a tool searching for filesystems could have done this.

    If that is the case, I think you can clone my partition table, except for the end of partition 3. That should be the highest available sector number.
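    One way to clone an MBR-style partition table between same-sized disks is to copy the 64-byte table that sits at offset 446 of sector 0 (for GPT disks the layout is different, and `sfdisk -d` / `sfdisk` is the more general route). A minimal demonstration on scratch image files rather than real /dev/sd... devices:

```shell
# Demo on scratch image files instead of real disks.
truncate -s 1M source.img target.img
# Write a recognizable 4-byte marker where the MBR partition table lives
# (offset 446) on the source image, standing in for real table entries.
printf 'PART' | dd of=source.img bs=1 seek=446 conv=notrunc 2>/dev/null
# Copy the 64-byte partition table (4 entries x 16 bytes) to the target.
dd if=source.img of=target.img bs=1 skip=446 seek=446 count=64 conv=notrunc 2>/dev/null
# Read the marker back from the target to confirm the copy.
dd if=target.img bs=1 skip=446 count=4 2>/dev/null   # → PART
```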

  • ariek Posts: 10  Junior Member
    edited November 2019
    This was my 'sysvol'
    /i-data/e162cf92/

     # mdadm --examine /dev/sda1
    Above is the current state, as is the output in my previous post.
    Below is the output of the previous state, before I converted the partitions back from logical to primary.
    # cat /proc/mdstat
    Are Zyxel NASes formatted as ext2 or ext4?
    I don't know how to clone a partition table, but I would give it a try.



  • Mijzelf Posts: 1,259  Paragon Member
    edited November 2019
    Device     Boot   Start        End    Sectors  Size Id Type
    You have only 2 partitions here. Sda1 is an extended partition, which contains one logical partition, sda5. You can see that from their start and end sectors. And sda2 is a primary partition, containing your data. (Somehow, I hope.)

    Let's have a look if a raid header can be found at sectors 2048 and 7999488, which is where they should be, according to my partition table.
    Create a loopdevice at the given offset of sda, and let mdadm have a look at it:
    losetup -o 2097152 /dev/loop1 /dev/sda
    Are Zyxel NASes formatted as ext2 or ext4?
    The 520 uses ext4. Older ones used ext3, xfs or reiserfs.
    I don't know how to clone a partition table, but I would give it a try.
    That is the next step.
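    The -o value passed to losetup is a byte offset: the start sector times the 512-byte sector size. As a sketch of the arithmetic (assuming 512-byte sectors; 4096 is where the first partition starts in the fdisk output earlier in the thread):

```shell
SECTOR_SIZE=512
start_sector=4096                       # start of partition 1 per fdisk -l
offset=$((start_sector * SECTOR_SIZE))  # byte offset for losetup -o
echo "$offset"                          # → 2097152
```

    which matches the 2097152 passed to losetup above, so the loop device begins exactly where the partition (and its raid header) should.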

  • ariek Posts: 10  Junior Member
    edited November 2019
    # losetup -o 2097152 /dev/loop1 /dev/sda
    # mdadm --examine /dev/loop1
    # losetup -d /
    I've got an error.

    losetup


  • Mijzelf Posts: 1,259  Paragon Member
    Sorry. Stupid forum software bôrked up my commands. I've edited my post.
  • ariek Posts: 10  Junior Member
    # losetup -o 2097152 /dev/loop1 /dev/sda



  • Mijzelf Posts: 1,259  Paragon Member
    OK, the raid header of the data partition is where I expected it, and it is healthy. That is good news. The raid header of the firmware partition is corrupted, maybe because it has contained an extended partition table, maybe because of the root cause of your problem. (Whatever happened before you lost your data array?)

    I prepared a partition table which should fit your disk, which I attached. Unzip it, and put it on your NAS. When you are using Windows, you can use WinSCP for that, and put it in the /tmp/ directory.
    Then execute
    dd if=/tmp/gpt of=/dev/sda
    It is possible that after the reboot everything will work, it's also possible that we will have to create new raid arrays for firmware, and maybe also for swap. When
    cat /proc/mdstat
    doesn't show 3 arrays, some extra actions are needed.
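    Counting the arrays can be scripted. A small sketch against a sample /proc/mdstat (the sample text below is made up; on the NAS you would read the real file):

```shell
# Sample content standing in for /proc/mdstat (hypothetical output).
mdstat='Personalities : [linear] [raid1]
md2 : active raid1 sda3[0]
md1 : active raid1 sda2[0]
md0 : active raid1 sda1[0]
unused devices: <none>'
# Array lines start with "mdN :"; three arrays means firmware, swap
# and data are all assembled.
count=$(printf '%s\n' "$mdstat" | grep -c '^md')
echo "$count arrays"   # → 3 arrays
```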
  • ariek Posts: 10  Junior Member
    edited November 2019
    /tmp # dd if=/tmp/gpt of=/dev/sda
     cat /proc/mdstat
    It worked! The NAS finds the internal HDD again. Everything seems normal.
    Thanks a lot! And seriously, where and how can I donate for all the work you have done for me and many others on this and the old forum?

    What did I do? I fixed 2 USB flash drives which were totally corrupted. Windows wouldn't recognize the USB flash drives, or it took 20+ minutes to recognize them and then immediately (r)ejected them, making them totally unusable.
    I was able to rescue the flash drives under Linux on the NAS.
    After a reboot of the NAS [previous reboots had all gone fine], the internal HDD was no longer recognized. Probably a typo with potentially huge consequences.
  • Mijzelf Posts: 1,259  Paragon Member
    I was able to rescue the flash drives under Linux on the NAS.

    The rescue involved overwriting the partition table and filesystem headers on the flash drives, I presume?

    I wonder if some extra safety could be reached by changing the rights of /dev/sdb (or whatever device node the USB disk has), and doing the action as admin.

    where and how can I donate for all the work you have done
    Well, you can't donate to me. But last weekend I upgraded my router to the latest release (candidate) of OpenWRT, and it worked flawlessly, like always. I thought I should give them a donation to keep the servers up. But if you want, you can give them a donation on my behalf.
  • PawelS Posts: 8  Junior Member
    Hi,
    I have a similar problem. My NAS is a NAS326, but I suppose that doesn't matter.

    my data:
    ~ # cat /proc/partitions
    major minor  #blocks  name

       7        0     146432 loop0
      31        0       2048 mtdblock0
      31        1       2048 mtdblock1
      31        2      10240 mtdblock2
      31        3      15360 mtdblock3
      31        4     108544 mtdblock4
      31        5      15360 mtdblock5
      31        6     108544 mtdblock6
       8        0  976762584 sda
       8        1    1998848 sda1
       8        2    1999872 sda2
       8        3  972762112 sda3
       8       16  976762584 sdb
       8       17    1998848 sdb1
       8       18    1999872 sdb2
       8       19  972762112 sdb3
       9        0    1997760 md0
       9        1    1998784 md1
       9        2  972630848 md2
       9        3  972630848 md3
     253        0     102400 dm-0
     253        1  972525568 dm-1
     253        2     102400 dm-2
     253        3  972525568 dm-3
    ~ # cat /proc/mdstat
    Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4]
    md3 : active raid1 sda3[0]
          972630848 blocks super 1.2 [1/1] [U]

    md2 : active raid1 sdb3[0]
          972630848 blocks super 1.2 [1/1] [U]

    md1 : active raid1 sda2[0] sdb2[2]
          1998784 blocks super 1.2 [2/2] [UU]

    md0 : active raid1 sda1[0] sdb1[2]
          1997760 blocks super 1.2 [2/2] [UU]

    unused devices: <none>
    ~ # cat /proc/mounts
    rootfs / rootfs rw 0 0
    /proc /proc proc rw,relatime 0 0
    /sys /sys sysfs rw,relatime 0 0
    devpts /dev/pts devpts rw,relatime,mode=600 0 0
    ubi4:ubi_rootfs1 /firmware/mnt/nand ubifs ro,relatime 0 0
    /dev/md0 /firmware/mnt/sysdisk ext4 ro,relatime,data=ordered 0 0
    /dev/loop0 /ram_bin ext2 ro,relatime 0 0
    /dev/loop0 /usr ext2 ro,relatime 0 0
    /dev/loop0 /lib/security ext2 ro,relatime 0 0
    /dev/loop0 /lib/modules ext2 ro,relatime 0 0
    /dev/loop0 /lib/locale ext2 ro,relatime 0 0
    /dev/ram0 /tmp/tmpfs tmpfs rw,relatime,size=5120k 0 0
    /dev/ram0 /usr/local/etc tmpfs rw,relatime,size=5120k 0 0
    ubi2:ubi_config /etc/zyxel ubifs rw,relatime 0 0
    configfs /sys/kernel/config configfs rw,relatime 0 0
    ~ # mdadm --examine /dev/sd[abcd]3
    /dev/sda3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 3fa2ac41:656e56d2:686f9f1f:1b51f286
               Name : NAS326:2  (local to host NAS326)
      Creation Time : Mon May 25 03:57:52 2015
         Raid Level : raid1
       Raid Devices : 1

     Avail Dev Size : 1945262080 (927.57 GiB 995.97 GB)
         Array Size : 972630848 (927.57 GiB 995.97 GB)
      Used Dev Size : 1945261696 (927.57 GiB 995.97 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : c93ceca9:5ab09837:787ce117:f2a96ff9

        Update Time : Tue Jun 30 17:43:12 2020
           Checksum : 58295563 - correct
             Events : 16


       Device Role : Active device 0
       Array State : A ('A' == active, '.' == missing)
    /dev/sdb3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : e497df71:99394e60:2cff4d77:9efb710f
               Name : NAS326:3  (local to host NAS326)
      Creation Time : Sun Oct 29 11:58:05 2017
         Raid Level : raid1
       Raid Devices : 1

     Avail Dev Size : 1945262080 (927.57 GiB 995.97 GB)
         Array Size : 972630848 (927.57 GiB 995.97 GB)
      Used Dev Size : 1945261696 (927.57 GiB 995.97 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : 67b61188:5f31e662:1f99eab7:9b372964

        Update Time : Tue Jun 30 16:15:22 2020
           Checksum : 104b055f - correct
             Events : 2


       Device Role : Active device 0
       Array State : A ('A' == active, '.' == missing)
    mdadm: cannot open /dev/sdc3: No such device or address
    mdadm: cannot open /dev/sdd3: No such device or address

    What can I do to avoid losing all the data on my 2 disks?
    Please help.
    TIA