
NAS 540 Volume gone after latest update.

kimme Posts: 19  Junior Member
edited August 8 in Questions
Hi, 

I just updated my NAS540 to the latest firmware.
After rebooting my volume is gone!

Is there any way to restore it? Otherwise I'm losing 5TB of data!

I hope there's a solution for this!

Kim


Best Answer

  • Mijzelf Posts: 271  Advanced Warrior Member
    Accepted Answer
    Nothing abnormal.

    Well, it won't hurt to assemble the array again, and if the firmware doesn't 'pick it up', you can also add sda3 manually:
    mdadm --manage /dev/md2 --add /dev/sda3
    The rebuilding will happen in the background. You can see the status with
    cat /proc/mdstat


Answers

  • Mijzelf Posts: 271  Advanced Warrior Member
    What kind of volume is that? (Single disk, RAID1, RAID5)
    Do you reboot the box regularly, or was this the first reboot in a long time?
    Can you enable the ssh server (Control Panel->Network->Terminal), log in (on Windows you can use PuTTY for that), and post the output of
    cat /proc/partitions
    cat /proc/mdstat

  • kimme Posts: 19  Junior Member
    Hi, 

    Thanks for the reply!

    It's a RAID5 setup with 4 disks.
    I've never used ssh, so I might need some help here. (Using a Mac.)
  • Ijnrsi Posts: 186  Warrior Member
    macOS has a Terminal app built in; you can find it by searching for "terminal".
    Then type "ssh nas_ip" or "telnet nas_ip" to access your NAS540.
    The login is the same admin account and password as the web interface.
  • kimme Posts: 19  Junior Member

    Don't know if this is what you need, but here we go :)



    / $ cat /proc/partitions
    major minor  #blocks  name

       7        0     147456 loop0
      31        0        256 mtdblock0
      31        1        512 mtdblock1
      31        2        256 mtdblock2
      31        3      10240 mtdblock3
      31        4      10240 mtdblock4
      31        5     112640 mtdblock5
      31        6      10240 mtdblock6
      31        7     112640 mtdblock7
      31        8       6144 mtdblock8
       8        0 1953514584 sda
       8        1    1998848 sda1
       8        2    1999872 sda2
       8        3 1949514752 sda3
       8       16 1953514584 sdb
       8       17    1998848 sdb1
       8       18    1999872 sdb2
       8       19 1949514752 sdb3
       8       32 1953514584 sdc
       8       33    1998848 sdc1
       8       34    1999872 sdc2
       8       35 1949514752 sdc3
       8       48 1953514584 sdd
       8       49    1998848 sdd1
       8       50    1999872 sdd2
       8       51 1949514752 sdd3
      31        9     102424 mtdblock9
       9        0    1997760 md0
       9        1    1998784 md1
      31       10       4464 mtdblock10

    / $ cat /proc/mdstat
    Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
    md1 : active raid1 sda2[4] sdd2[3] sdb2[5] sdc2[6]
          1998784 blocks super 1.2 [4/4] [UUUU]
    md0 : active raid1 sda1[4] sdd1[3] sdb1[5] sdc1[6]
          1997760 blocks super 1.2 [4/4] [UUUU]
    unused devices: <none>

  • Mijzelf Posts: 271  Advanced Warrior Member
    edited August 8
    So the volume is indeed gone. Let's have a look at the raid members:
    su
    mdadm --examine /dev/sd[abcd]3
    After 'su' it will ask you for your password again. It's elevating your login from 'admin' to 'root'.
  • kimme Posts: 19  Junior Member

    Already a big thanks for your time (given your nick: thanks for your time ;) )


    This is the output:


    ~ # mdadm --examine /dev/sd[abcd]3
    /dev/sda3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x2
         Array UUID : 4ebe74b4:1d2f6ed0:60c3a5d5:4cf7435b
               Name : NAS540:2  (local to host NAS540)
      Creation Time : Mon Dec 29 12:51:05 2014
         Raid Level : raid5
       Raid Devices : 4
     Avail Dev Size : 3898767360 (1859.08 GiB 1996.17 GB)
         Array Size : 5848150464 (5577.23 GiB 5988.51 GB)
      Used Dev Size : 3898766976 (1859.08 GiB 1996.17 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
    Recovery Offset : 2193056496 sectors
              State : active
        Device UUID : 08268038:19d12420:f4d51e5e:a935b818
        Update Time : Tue Aug  7 11:15:24 2018
           Checksum : 99528fda - correct
             Events : 8397
             Layout : left-symmetric
         Chunk Size : 64K
        Device Role : Active device 0
        Array State : AAAA ('A' == active, '.' == missing)
    /dev/sdb3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 4ebe74b4:1d2f6ed0:60c3a5d5:4cf7435b
               Name : NAS540:2  (local to host NAS540)
      Creation Time : Mon Dec 29 12:51:05 2014
         Raid Level : raid5
       Raid Devices : 4
     Avail Dev Size : 3898767360 (1859.08 GiB 1996.17 GB)
         Array Size : 5848151040 (5577.23 GiB 5988.51 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : 66638459:c9900aa2:c3173028:8ac7f14d
        Update Time : Tue Aug  7 12:51:32 2018
           Checksum : b8d11f5a - correct
             Events : 10878
             Layout : left-symmetric
         Chunk Size : 64K
        Device Role : Active device 1
        Array State : .AAA ('A' == active, '.' == missing)

  • Mijzelf Posts: 271  Advanced Warrior Member
    This is not complete. It shows only the raid headers of /dev/sda3 and /dev/sdb3. There should also be a /dev/sdc3 and /dev/sdd3.

    Yet this already tells us something went wrong between Tue Aug  7 11:15:24 2018 and Tue Aug  7 12:51:32 2018 (UTC), the update times of the two devices.
    At Tue Aug  7 11:15:24 2018, /dev/sda3 recorded that the array state was AAAA. So all four members were alive and kicking.
    At Tue Aug  7 12:51:32 2018, /dev/sdb3 recorded the state .AAA. So at that moment /dev/sda3 had already been dropped from the array, and the array was degraded. I think the headers of /dev/sdc3 and /dev/sdd3 will show that /dev/sdb3 was also dropped, bringing the array down.

    Does that timestamp ring a bell? Have you written anything to the array after Aug  7 12:51:32?
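The field-by-field comparison above can be scripted. A minimal sketch, assuming nothing beyond grep; `raid_summary` is a made-up name, and on the NAS the dump would come straight from `mdadm --examine /dev/sd[abcd]3` (as root) instead of the here-document used for the demo:

```shell
# Hypothetical helper (not from the thread): keep only the fields that
# matter for this diagnosis from an 'mdadm --examine' dump.
raid_summary() {
    grep -E '^/dev/|Update Time|Events|Array State'
}

# Demo on a two-member excerpt of the output posted above; the Checksum
# lines are filtered out, the diagnostic fields remain.
raid_summary <<'EOF'
/dev/sda3:
       Checksum : 99528fda - correct
    Update Time : Tue Aug  7 11:15:24 2018
         Events : 8397
    Array State : AAAA ('A' == active, '.' == missing)
/dev/sdb3:
       Checksum : b8d11f5a - correct
    Update Time : Tue Aug  7 12:51:32 2018
         Events : 10878
    Array State : .AAA ('A' == active, '.' == missing)
EOF
```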

  • kimme Posts: 19  Junior Member
    edited August 8
    Indeed, it was incomplete; I'll paste it again below.

    I got a warning mail two nights ago that the array was degraded due to an I/O error on disk 1. When I checked the disks they were all healthy, so I assumed there was a problem with the array itself. That's why I let the NAS repair the array. After the repair (around noon) it still said it was degraded. As the disks were still in perfect health, I thought it might be a firmware issue making the device think something was wrong. So I did the update. When the device rebooted I was "welcomed" with the question to set up a volume.

    Error List:
     2018-08-06 16:09:35   crit  storage    Detected Disk1 I/O error.
     2018-08-07 00:37:49   alert  storage    There is a RAID Degraded.
     2018-08-07 00:50:19   alert  storage    There is a RAID Degraded.
     2018-08-07 12:15:42   crit  storage    Detected Disk1 I/O error. 


    /dev/sda3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x2
         Array UUID : 4ebe74b4:1d2f6ed0:60c3a5d5:4cf7435b
               Name : NAS540:2  (local to host NAS540)
      Creation Time : Mon Dec 29 12:51:05 2014
         Raid Level : raid5
       Raid Devices : 4
     Avail Dev Size : 3898767360 (1859.08 GiB 1996.17 GB)
         Array Size : 5848150464 (5577.23 GiB 5988.51 GB)
      Used Dev Size : 3898766976 (1859.08 GiB 1996.17 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
    Recovery Offset : 2193056496 sectors
              State : active
        Device UUID : 08268038:19d12420:f4d51e5e:a935b818
        Update Time : Tue Aug  7 11:15:24 2018
           Checksum : 99528fda - correct
             Events : 8397
             Layout : left-symmetric
         Chunk Size : 64K
        Device Role : Active device 0
        Array State : AAAA ('A' == active, '.' == missing)
    /dev/sdb3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 4ebe74b4:1d2f6ed0:60c3a5d5:4cf7435b
               Name : NAS540:2  (local to host NAS540)
      Creation Time : Mon Dec 29 12:51:05 2014
         Raid Level : raid5
       Raid Devices : 4
     Avail Dev Size : 3898767360 (1859.08 GiB 1996.17 GB)
         Array Size : 5848151040 (5577.23 GiB 5988.51 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : 66638459:c9900aa2:c3173028:8ac7f14d
        Update Time : Tue Aug  7 12:51:32 2018
           Checksum : b8d11f5a - correct
             Events : 10878
             Layout : left-symmetric
         Chunk Size : 64K
        Device Role : Active device 1
        Array State : .AAA ('A' == active, '.' == missing)
    /dev/sdc3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 4ebe74b4:1d2f6ed0:60c3a5d5:4cf7435b
               Name : NAS540:2  (local to host NAS540)
      Creation Time : Mon Dec 29 12:51:05 2014
         Raid Level : raid5
       Raid Devices : 4
     Avail Dev Size : 3898767360 (1859.08 GiB 1996.17 GB)
         Array Size : 5848151040 (5577.23 GiB 5988.51 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : d1ace266:039ced6e:7e8e15cf:fde9814d
        Update Time : Tue Aug  7 12:51:32 2018
           Checksum : 39880d2f - correct
             Events : 10878
             Layout : left-symmetric
         Chunk Size : 64K
        Device Role : Active device 2
        Array State : .AAA ('A' == active, '.' == missing)
    /dev/sdd3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 4ebe74b4:1d2f6ed0:60c3a5d5:4cf7435b
               Name : NAS540:2  (local to host NAS540)
      Creation Time : Mon Dec 29 12:51:05 2014
         Raid Level : raid5
       Raid Devices : 4
     Avail Dev Size : 3898767360 (1859.08 GiB 1996.17 GB)
         Array Size : 5848151040 (5577.23 GiB 5988.51 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : d9248da6:15c80681:a3feb907:e643fa7d
        Update Time : Tue Aug  7 12:51:32 2018
           Checksum : f4687b57 - correct
             Events : 10878
             Layout : left-symmetric
         Chunk Size : 64K
        Device Role : Active device 3
        Array State : .AAA ('A' == active, '.' == missing)

  • Mijzelf Posts: 271  Advanced Warrior Member
    According to the headers, the array is not down:
    /dev/sda3:
    Device Role : Active device 0
    Update Time : Tue Aug  7 11:15:24 2018
    Array State : AAAA ('A' == active, '.' == missing)

    /dev/sdb3:
    Device Role : Active device 1
    Update Time : Tue Aug  7 12:51:32 2018
    Array State : .AAA ('A' == active, '.' == missing)

    /dev/sdc3:
    Device Role : Active device 2
    Update Time : Tue Aug  7 12:51:32 2018
    Array State : .AAA ('A' == active, '.' == missing)

    /dev/sdd3:
    Device Role : Active device 3
    Update Time : Tue Aug  7 12:51:32 2018
    Array State : .AAA ('A' == active, '.' == missing)
    Devices 1, 2 and 3 agree that device 0 was dropped, but also agree that they still form a degraded array.
    You should be able to assemble the array:
    su
    mdadm --assemble /dev/md2 /dev/sd[bcd]3 --run
    I don't know why the firmware failed to do this. Maybe mdadm will tell us.
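For context, this is essentially the rule mdadm applies when assembling: each member records an event counter, and a member whose counter lags behind the others (here sda3 at 8397 vs. 10878) holds stale data, so only the up-to-date members go in. A small sketch of that selection; `pick_fresh_members` is a made-up name, and the input is an `--examine` dump:

```shell
# Hypothetical helper: list the members whose event counter matches the
# highest one seen, i.e. the members that are safe to assemble.
pick_fresh_members() {
    awk '/^\/dev\// { dev = $1; sub(/:$/, "", dev) }
         /Events/   { ev[dev] = $3 + 0; if (ev[dev] > max) max = ev[dev] }
         END        { for (d in ev) if (ev[d] == max) print d }'
}

# Demo with the event counts from this thread; prints /dev/sdb3,
# /dev/sdc3 and /dev/sdd3 (order may vary), but not the stale /dev/sda3.
pick_fresh_members <<'EOF'
/dev/sda3:
         Events : 8397
/dev/sdb3:
         Events : 10878
/dev/sdc3:
         Events : 10878
/dev/sdd3:
         Events : 10878
EOF
```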

  • kimme Posts: 19  Junior Member
    edited August 9
    Hi,

    I did what you said, and it reported that the array started with the 3 disks. But when I try to log in to the GUI, my credentials are rejected.
    Nothing more happened in the terminal (I could type new commands). Is it possible that the device is rebuilding the volume in the background, and that it takes some time before it completes and I can log in again?

    Again, thanks for all the time you're putting into this!

    This is the examine output I'm getting after the assembly

    ~ # mdadm --examine /dev/sd[abcd]3
    /dev/sda3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x2
         Array UUID : 4ebe74b4:1d2f6ed0:60c3a5d5:4cf7435b
               Name : NAS540:2  (local to host NAS540)
      Creation Time : Mon Dec 29 12:51:05 2014
         Raid Level : raid5
       Raid Devices : 4
     Avail Dev Size : 3898767360 (1859.08 GiB 1996.17 GB)
         Array Size : 5848150464 (5577.23 GiB 5988.51 GB)
      Used Dev Size : 3898766976 (1859.08 GiB 1996.17 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
    Recovery Offset : 2193056496 sectors
              State : active
        Device UUID : 08268038:19d12420:f4d51e5e:a935b818
        Update Time : Tue Aug  7 11:15:24 2018
           Checksum : 99528fda - correct
             Events : 8397
             Layout : left-symmetric
         Chunk Size : 64K
        Device Role : Active device 0
        Array State : AAAA ('A' == active, '.' == missing)
    /dev/sdb3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 4ebe74b4:1d2f6ed0:60c3a5d5:4cf7435b
               Name : NAS540:2  (local to host NAS540)
      Creation Time : Mon Dec 29 12:51:05 2014
         Raid Level : raid5
       Raid Devices : 4
     Avail Dev Size : 3898767360 (1859.08 GiB 1996.17 GB)
         Array Size : 5848151040 (5577.23 GiB 5988.51 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : 66638459:c9900aa2:c3173028:8ac7f14d
        Update Time : Tue Aug  7 12:51:32 2018
           Checksum : b8d11f5a - correct
             Events : 10878
             Layout : left-symmetric
         Chunk Size : 64K
        Device Role : Active device 1
        Array State : .AAA ('A' == active, '.' == missing)
    /dev/sdc3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 4ebe74b4:1d2f6ed0:60c3a5d5:4cf7435b
               Name : NAS540:2  (local to host NAS540)
      Creation Time : Mon Dec 29 12:51:05 2014
         Raid Level : raid5
       Raid Devices : 4
     Avail Dev Size : 3898767360 (1859.08 GiB 1996.17 GB)
         Array Size : 5848151040 (5577.23 GiB 5988.51 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : d1ace266:039ced6e:7e8e15cf:fde9814d
        Update Time : Tue Aug  7 12:51:32 2018
           Checksum : 39880d2f - correct
             Events : 10878
             Layout : left-symmetric
         Chunk Size : 64K
        Device Role : Active device 2
        Array State : .AAA ('A' == active, '.' == missing)
    /dev/sdd3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : 4ebe74b4:1d2f6ed0:60c3a5d5:4cf7435b
               Name : NAS540:2  (local to host NAS540)
      Creation Time : Mon Dec 29 12:51:05 2014
         Raid Level : raid5
       Raid Devices : 4
     Avail Dev Size : 3898767360 (1859.08 GiB 1996.17 GB)
         Array Size : 5848151040 (5577.23 GiB 5988.51 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : d9248da6:15c80681:a3feb907:e643fa7d
        Update Time : Tue Aug  7 12:51:32 2018
           Checksum : f4687b57 - correct
             Events : 10878
             Layout : left-symmetric
         Chunk Size : 64K
        Device Role : Active device 3
        Array State : .AAA ('A' == active, '.' == missing)

  • Mijzelf Posts: 271  Advanced Warrior Member
    I don't think the array is rebuilding, as the headers didn't change. But you can check that in /proc/mdstat:
    cat /proc/mdstat
    As for the inability to log in, I don't think that is related; no logon information is stored on that volume. If the array is not rebuilding, you can first try to reboot the box:
    su
    reboot
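A quick way to read /proc/mdstat without eyeballing it. `rebuild_state` is a made-up helper, and the mdstat lines in the demo are a mock-up of what a degraded md2 might look like; on the NAS you would run `rebuild_state < /proc/mdstat`:

```shell
# Hypothetical helper: classify mdstat-format text. An active rebuild
# shows a 'recovery = x%' line; a degraded-but-assembled array shows a
# status like '[4/3] [_UUU]', where '_' marks the missing member.
rebuild_state() {
    state=$(cat)
    if printf '%s\n' "$state" | grep -q 'recovery ='; then
        echo "rebuilding"
    elif printf '%s\n' "$state" | grep -q '\[[0-9]*/[0-9]*\] \[[U_]*_[U_]*\]'; then
        echo "degraded, not rebuilding"
    else
        echo "no md rebuild or degraded array visible"
    fi
}

# Demo with a mocked-up degraded md2; prints: degraded, not rebuilding
rebuild_state <<'EOF'
md2 : active raid5 sdb3[1] sdc3[2] sdd3[3]
      5848150464 blocks super 1.2 level 5, 64k chunk [4/3] [_UUU]
EOF
```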

  • kimme Posts: 19  Junior Member
    After the reboot I can log back in, so that part is solved.

    / $ cat /proc/mdstat
    Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
    md1 : active raid1 sda2[4] sdd2[3] sdb2[5] sdc2[6]
          1998784 blocks super 1.2 [4/4] [UUUU]
    md0 : active raid1 sda1[4] sdd1[3] sdb1[5] sdc1[6]
          1997760 blocks super 1.2 [4/4] [UUUU]
    unused devices: <none>


    Should I try to assemble again now?

  • Mijzelf Posts: 271  Advanced Warrior Member
    Are there any traces in the logs of why the array wasn't assembled? I wonder if disk sda has a hardware error that only shows up when accessing certain sectors. The arrays md0 and md1 both have a partition on sda as a member, without problems.
    You can view the kernel log using
    dmesg
    Maybe a filter is helpful:
    dmesg | grep sda
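When reading a full dmesg, a slightly wider filter than `grep sda` can help, since block-layer error lines mention sda mid-line rather than in the `[sda]` tag. A sketch; `sda_errors` is a made-up name, and the demo lines are samples, not from this box:

```shell
# Hypothetical helper: keep only kernel-log lines that both mention sda
# and look like trouble (libata/block layer wording varies by kernel).
sda_errors() {
    grep -i 'sda' | grep -iE 'error|reset|fail|timeout'
}

# Demo on two sample lines; only the I/O error line survives the filter.
sda_errors <<'EOF'
[   20.858269] sd 0:0:0:0: [sda] Attached SCSI disk
[  123.456789] blk_update_request: I/O error, dev sda, sector 123456
EOF
```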

  • kimme Posts: 19  Junior Member

    $ dmesg | grep sda
    [   20.746660] sd 0:0:0:0: [sda] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB)
    [   20.754443] sd 0:0:0:0: [sda] 4096-byte physical blocks
    [   20.760218] sd 0:0:0:0: [sda] Write Protect is off
    [   20.765039] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
    [   20.770639] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
    [   20.847325]  sda: sda1 sda2 sda3
    [   20.858269] sd 0:0:0:0: [sda] Attached SCSI disk
    [   34.988141] md: bind<sda1>
    [   35.057240] md: bind<sda2>

