
Got a problem with NAS 542 raid 5 volume down

Axberg Posts: 13  Junior Member
edited August 7 in Questions
Hi!
How can I retrieve my data from this NAS?
I'm a total newbie, so please bear with me.

~ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md2 : active raid5 sdb3[1] sdc3[4] sdd3[3]
      11708660736 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/3] [_UUU]

md1 : active raid1 sdc2[4] sdd2[5] sdb2[1]
      1998784 blocks super 1.2 [4/3] [_UUU]

md0 : active raid1 sdb1[6] sdd1[5] sdc1[4]
      1997760 blocks super 1.2 [4/3] [U_UU]

~ # mdadm --examine /dev/sd[abcd]3
mdadm: cannot open /dev/sda3: No such device or address
/dev/sdb3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : c1b2735e:9b10b90c:cb9b6690:e84965d6
           Name : NAS542:2  (local to host NAS542)
  Creation Time : Mon Feb 19 16:18:55 2018
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 7805773824 (3722.08 GiB 3996.56 GB)
     Array Size : 11708660736 (11166.25 GiB 11989.67 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : ad1bbcba:851fe7ce:9b4e0515:15f8029b

    Update Time : Tue Jul 28 17:02:21 2020
       Checksum : ec098281 - correct
         Events : 18103

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 1
   Array State : .AAA ('A' == active, '.' == missing)
/dev/sdc3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : c1b2735e:9b10b90c:cb9b6690:e84965d6
           Name : NAS542:2  (local to host NAS542)
  Creation Time : Mon Feb 19 16:18:55 2018
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 7805773824 (3722.08 GiB 3996.56 GB)
     Array Size : 11708660736 (11166.25 GiB 11989.67 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 0182f1cb:4bcb13a7:0ccde221:ae77558d

    Update Time : Tue Jul 28 17:02:21 2020
       Checksum : d49b92a8 - correct
         Events : 18103

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 3
   Array State : .AAA ('A' == active, '.' == missing)
/dev/sdd3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : c1b2735e:9b10b90c:cb9b6690:e84965d6
           Name : NAS542:2  (local to host NAS542)
  Creation Time : Mon Feb 19 16:18:55 2018
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 7805773824 (3722.08 GiB 3996.56 GB)
     Array Size : 11708660736 (11166.25 GiB 11989.67 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 6834c489:800c6cb4:67c317fc:4e3d686f

    Update Time : Tue Jul 28 17:02:21 2020
       Checksum : 5c0e42bf - correct
         Events : 18103

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 2

#NAS_August_2020



Answers

  • Mijzelf Posts: 1,255  Paragon Member
    According to this data your raid array is running, yet degraded. What is the output of

    cat /proc/mounts
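    For anyone reading along: in the /proc/mdstat output above, [4/3] [_UUU] means the array was created with 4 members but only 3 are currently active; the leading underscore marks the missing slot. A RAID 5 with a single missing member is degraded but still fully readable. As a side check (assuming the data array really is /dev/md2, as the output suggests), a standard mdadm call shows which member slot is empty:

    mdadm --detail /dev/md2    # the device table at the end lists the missing slot as "removed"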
  • Axberg Posts: 13  Junior Member
    Hi Mijzelf

    ~ # cat /proc/mounts
    rootfs / rootfs rw 0 0
    /proc /proc proc rw,relatime 0 0
    /sys /sys sysfs rw,relatime 0 0
    none /proc/bus/usb usbfs rw,relatime 0 0
    devpts /dev/pts devpts rw,relatime,mode=600 0 0
    ubi7:ubi_rootfs2 /firmware/mnt/nand ubifs ro,relatime 0 0
    /dev/md0 /firmware/mnt/sysdisk ext4 ro,relatime,user_xattr,barrier=1,data=ordered 0 0
    /dev/loop0 /ram_bin ext2 ro,relatime,user_xattr,barrier=1 0 0
    /dev/loop0 /usr ext2 ro,relatime,user_xattr,barrier=1 0 0
    /dev/loop0 /lib/security ext2 ro,relatime,user_xattr,barrier=1 0 0
    /dev/loop0 /lib/modules ext2 ro,relatime,user_xattr,barrier=1 0 0
    /dev/loop0 /lib/locale ext2 ro,relatime,user_xattr,barrier=1 0 0
    /dev/ram0 /tmp/tmpfs tmpfs rw,relatime,size=5120k 0 0
    /dev/ram0 /usr/local/etc tmpfs rw,relatime,size=5120k 0 0
    ubi3:ubi_config /etc/zyxel ubifs rw,relatime 0 0
    configfs /sys/kernel/config configfs rw,relatime 0 0

    What to do now?
  • Mijzelf Posts: 1,255  Paragon Member
    /dev/md2 should have been mounted, but it isn't. What if you try to mount it manually?

    su
    mkdir /mnt/mountpoint
    mount /dev/md2 /mnt/mountpoint
    dmesg | tail

    It is also possible that the data partition is not directly on /dev/md2, but in a Logical Volume. To see that, use

    cat /proc/partitions
    lvscan && lvdisplay --all
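    If the data does sit in a logical volume, /dev/md2 is only an LVM physical volume and the filesystem lives on a logical volume carved out of it; in /proc/partitions those show up as dm-* device-mapper entries. Should lvscan turn out not to be available, the lower-level LVM tools (when included in the firmware) report the same thing, for example:

    pvscan       # lists LVM physical volumes; /dev/md2 appears here if it carries LVM metadata
    vgdisplay    # shows the volume group(s) built on top of those physical volumes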
  • Axberg Posts: 13  Junior Member
    Hi again

    As I said, I'm a total newbie to Linux, so any help is most welcome.

    ~ # mkdir /mnt/mountpoint
    mkdir: can't create directory '/mnt/mountpoint': File exists
    ~ # mount /dev/md2 /mnt/mountpoint
    mount: unknown filesystem type 'LVM2_member'
    ~ # dmesg | tail
    [ 2340.614356]
    [ 2340.614359] ****** disk(1:0:0:0) spin down at 204061 ******
    [ 2341.430716]
    [ 2341.430719] ****** disk(3:0:0:0) spin down at 204143 ******
    [ 2342.394420]
    [ 2342.394423] ****** disk(2:0:0:0) spin down at 204239 ******
    [78376.998914]
    [78376.998917] ****** disk(1:0:0:0 0)(HD2) awaked by lvscan (cmd: 28) ******
    [78390.843568]
    [78390.843571] ****** disk(2:0:0:0 0)(HD3) awaked by lvscan (cmd: 28) ******

    ~ # cat /proc/partitions
    major minor  #blocks  name

       7        0     147456 loop0
      31        0        256 mtdblock0
      31        1        512 mtdblock1
      31        2        256 mtdblock2
      31        3      10240 mtdblock3
      31        4      10240 mtdblock4
      31        5     112640 mtdblock5
      31        6      10240 mtdblock6
      31        7     112640 mtdblock7
      31        8       6144 mtdblock8
       8        0 3907018584 sda
       8       16 3907018584 sdb
       8       17    1998848 sdb1
       8       18    1999872 sdb2
       8       19 3903017984 sdb3
       8       48 3907018584 sdd
       8       49    1998848 sdd1
       8       50    1999872 sdd2
       8       51 3903017984 sdd3
       8       32 3907018584 sdc
       8       33    1998848 sdc1
       8       34    1999872 sdc2
       8       35 3903017984 sdc3
      31        9     102424 mtdblock9
       9        0    1997760 md0
       9        1    1998784 md1
      31       10       4464 mtdblock10
       9        2 11708660736 md2
     253        0     102400 dm-0
     253        1 11708555264 dm-1
    ~ # lvscan && lvdisplay --all
      ACTIVE            '/dev/vg_c1b2735e/vg_info_area' [100.00 MiB] inherit
      ACTIVE            '/dev/vg_c1b2735e/lv_168e8bf4' [10.90 TiB] inherit
      --- Logical volume ---
      LV Path                /dev/vg_c1b2735e/vg_info_area
      LV Name                vg_info_area
      VG Name                vg_c1b2735e
      LV UUID                SCecoC-nAfM-B8BV-mphS-QtBe-OLsT-J2wyF4
      LV Write Access        read/write
      LV Creation host, time NAS542, 2018-02-19 16:18:57 +0100
      LV Status              available
      # open                 0
      LV Size                100.00 MiB
      Current LE             25
      Segments               1
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     1024
      Block device           253:0

      --- Logical volume ---
      LV Path                /dev/vg_c1b2735e/lv_168e8bf4
      LV Name                lv_168e8bf4
      VG Name                vg_c1b2735e
      LV UUID                2If3DE-2zBN-mlC4-PDiv-PqQ5-P9VD-iwOL7S
      LV Write Access        read/write
      LV Creation host, time NAS542, 2018-02-19 16:18:57 +0100
      LV Status              available
      # open                 0
      LV Size                10.90 TiB
      Current LE             2858534
      Segments               1
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     1024
      Block device           253:1

  • Mijzelf Posts: 1,255  Paragon Member
    mount: unknown filesystem type 'LVM2_member'

    OK, you indeed have a logical volume (LVM is Logical Volume Manager), which is confirmed by lvdisplay.

    Your data volume is /dev/vg_c1b2735e/lv_168e8bf4, so try to mount that

    mount /dev/vg_c1b2735e/lv_168e8bf4 /mnt/mountpoint

    dmesg | tail
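    If there is any doubt about the state of the filesystem, a more cautious variant is to mount it read-only first; -o ro is a standard mount option, so something like this should behave the same way on the NAS:

    mount -o ro /dev/vg_c1b2735e/lv_168e8bf4 /mnt/mountpoint    # read-only, nothing on the volume gets modified

    It can be remounted read-write later, once the data is confirmed to be intact.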


  • Axberg Posts: 13  Junior Member
    ~ # mount /dev/vg_c1b2735e/lv_168e8bf4 /mnt/mountpoint
    mount: wrong fs type, bad option, bad superblock on /dev/mapper/vg_c1b2735e-lv_168e8bf4,
           missing codepage or helper program, or other error

           In some cases useful info is found in syslog - try
           dmesg | tail or so.
    ~ # dmesg | tail

    [79325.182629]
    [79325.182631] ****** disk(2:0:0:0) spin down at 7902518 ******
    [87365.942603]
    [87365.942606] ****** disk(2:0:0:0 0)(HD3) awaked by mount (cmd: 88) ******
    [87372.805601]
    [87372.805604] ****** disk(1:0:0:0 0)(HD2) awaked by mount (cmd: 88) ******
    [87386.728280]
    [87386.728283] ****** disk(3:0:0:0 0)(HD4) awaked by mount (cmd: 88) ******
    [87394.376921] JBD2: no valid journal superblock found
    [87394.381842] EXT4-fs (dm-1): error loading journal
    ~ #

  • Mijzelf Posts: 1,255  Paragon Member
    [87394.376921] JBD2: no valid journal superblock found
    [87394.381842] EXT4-fs (dm-1): error loading journal

    There is a problem with the journal. Maybe e2fsck can fix that.

    e2fsck /dev/vg_c1b2735e/lv_168e8bf4
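    If you want to see what is wrong before anything is changed, e2fsck also has a standard -n switch that opens the filesystem read-only and answers "no" to every question, so it only reports problems:

    e2fsck -n /dev/vg_c1b2735e/lv_168e8bf4    # dry run: check only, no repairs are written

    Either way, the volume must not be mounted while e2fsck runs.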

  • Axberg Posts: 13  Junior Member
    Hi Mijzelf
    Now the journal is fixed by e2fsck, but I still have a problem: the fault has gone from "Volume down" to
    "Disk Group is down" and "RAID is degraded".
    I've tried
     mount /dev/vg_c1b2735e/lv_168e8bf4 /mnt/mountpoint
    mount: mount point /mnt/mountpoint does not exist

    Can you give me a hint to fix my NAS?



  • Mijzelf Posts: 1,255  Paragon Member
    Rebooted in between, I presume? The whole root filesystem of the NAS is volatile, so after a reboot you will have to repeat any changes you made to it.

    mkdir /mnt/mountpoint
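    Since everything outside the data volume is rebuilt at boot, the two steps can be repeated in one line after every reboot; -p is a standard mkdir option that simply suppresses the "File exists" complaint if the directory is already there:

    mkdir -p /mnt/mountpoint && mount /dev/vg_c1b2735e/lv_168e8bf4 /mnt/mountpoint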
  • Axberg Posts: 13  Junior Member
    Hi

    I will start over, and this time I will keep in mind what you said about rebooting. As I have already said, this is completely new to me; I will try again and come back with the result.
  • Axberg Posts: 13  Junior Member
    Hi
    Maybe I'm just too stupid to understand. When I make the directory and then mount, and then reboot, I get the same two failures as before.

    ~ # mkdir /mnt/mountpoint
    ~ #  mount /dev/vg_c1b2735e/lv_168e8bf4 /mnt/mountpoint

  • Axberg Posts: 13  Junior Member
    Tried this after reboot:
    ~ # mount /dev/vg_c1b2735e/lv_168e8bf4 /mnt/mountpoint
    mount: mount point /mnt/mountpoint does not exist
    ~ #
    ~ # e2fsck /dev/vg_c1b2735e/lv_168e8bf4
    e2fsck 1.42.12 (29-Aug-2014)
    /dev/vg_c1b2735e/lv_168e8bf4 is mounted.
    e2fsck: Cannot continue, aborting.

  • Mijzelf Posts: 1,255  Paragon Member
    /dev/vg_c1b2735e/lv_168e8bf4 is mounted.
    So it's already mounted. That is not surprising, as that is the default situation. The only reason it was not mounted before was the error, which you have repaired.
    Have you already checked if your shares are back?

    If not, you can find the current mountpoint using

    cat /proc/mounts

    and look into the filesystem using

    ls -l <mountpoint>

    where you have to substitute the 'real'  mountpoint.
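    Since /proc/mounts can be long, a quick way to pick out just the data-volume line (assuming grep is available, which it normally is in a BusyBox shell) would be:

    grep lv_ /proc/mounts    # the second field of the matching line is the mountpoint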
  • Axberg Posts: 13  Junior Member
    ~ # cat /proc/mounts
    rootfs / rootfs rw 0 0
    /proc /proc proc rw,relatime 0 0
    /sys /sys sysfs rw,relatime 0 0
    none /proc/bus/usb usbfs rw,relatime 0 0
    devpts /dev/pts devpts rw,relatime,mode=600 0 0
    ubi7:ubi_rootfs2 /firmware/mnt/nand ubifs ro,relatime 0 0
    /dev/md0 /firmware/mnt/sysdisk ext4 ro,relatime,user_xattr,barrier=1,data=ordered 0 0
    /dev/loop0 /ram_bin ext2 ro,relatime,user_xattr,barrier=1 0 0
    /dev/loop0 /usr ext2 ro,relatime,user_xattr,barrier=1 0 0
    /dev/loop0 /lib/security ext2 ro,relatime,user_xattr,barrier=1 0 0
    /dev/loop0 /lib/modules ext2 ro,relatime,user_xattr,barrier=1 0 0
    /dev/loop0 /lib/locale ext2 ro,relatime,user_xattr,barrier=1 0 0
    /dev/ram0 /tmp/tmpfs tmpfs rw,relatime,size=5120k 0 0
    /dev/ram0 /usr/local/etc tmpfs rw,relatime,size=5120k 0 0
    ubi3:ubi_config /etc/zyxel ubifs rw,relatime 0 0
    /dev/mapper/vg_c1b2735e-lv_168e8bf4 /i-data/168e8bf4 ext4 rw,noatime,user_xattr,barrier=1,stripe=48,data=ordered,usrquota 0 0
    configfs /sys/kernel/config configfs rw,relatime 0 0

    ~ # ls -l <mountpoint>
    sh: syntax error: unexpected newline
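    The syntax error above is because <mountpoint> was meant as a placeholder, not literal text. According to the /proc/mounts output, the data volume is mounted at /i-data/168e8bf4, so the intended command would be:

    ls -l /i-data/168e8bf4    # lists the top-level folders (the shares) on the data volume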


  • Axberg Posts: 13  Junior Member
    Hi
    Here are the new
    ~ # cat /proc/mdstat
    Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
    md2 : active raid5 sdb3[1] sdc3[4] sdd3[3]
          11708660736 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/3] [_UUU]

    md1 : active raid1 sdc2[4] sdd2[5] sdb2[1]
          1998784 blocks super 1.2 [4/3] [_UUU]

    md0 : active raid1 sdb1[6] sdd1[5] sdc1[4]
          1997760 blocks super 1.2 [4/3] [U_UU]
    unused devices: <none>

    ~ # mdadm --examine /dev/sd[abcd]3
    mdadm: cannot open /dev/sda3: No such device or address
    /dev/sdb3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : c1b2735e:9b10b90c:cb9b6690:e84965d6
               Name : NAS542:2  (local to host NAS542)
      Creation Time : Mon Feb 19 16:18:55 2018
         Raid Level : raid5
       Raid Devices : 4

     Avail Dev Size : 7805773824 (3722.08 GiB 3996.56 GB)
         Array Size : 11708660736 (11166.25 GiB 11989.67 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : ad1bbcba:851fe7ce:9b4e0515:15f8029b

        Update Time : Mon Aug  3 18:44:22 2020
           Checksum : ec118de4 - correct
             Events : 20785

             Layout : left-symmetric
         Chunk Size : 64K

       Device Role : Active device 1
       Array State : .AAA ('A' == active, '.' == missing)
    /dev/sdc3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : c1b2735e:9b10b90c:cb9b6690:e84965d6
               Name : NAS542:2  (local to host NAS542)
      Creation Time : Mon Feb 19 16:18:55 2018
         Raid Level : raid5
       Raid Devices : 4

     Avail Dev Size : 7805773824 (3722.08 GiB 3996.56 GB)
         Array Size : 11708660736 (11166.25 GiB 11989.67 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : 0182f1cb:4bcb13a7:0ccde221:ae77558d

        Update Time : Mon Aug  3 18:44:22 2020
           Checksum : d4a39e0b - correct
             Events : 20785

             Layout : left-symmetric
         Chunk Size : 64K

       Device Role : Active device 3
       Array State : .AAA ('A' == active, '.' == missing)
    /dev/sdd3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : c1b2735e:9b10b90c:cb9b6690:e84965d6
               Name : NAS542:2  (local to host NAS542)
      Creation Time : Mon Feb 19 16:18:55 2018
         Raid Level : raid5
       Raid Devices : 4

     Avail Dev Size : 7805773824 (3722.08 GiB 3996.56 GB)
         Array Size : 11708660736 (11166.25 GiB 11989.67 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : 6834c489:800c6cb4:67c317fc:4e3d686f

        Update Time : Mon Aug  3 18:44:22 2020
           Checksum : 5c164e22 - correct
             Events : 20785

             Layout : left-symmetric
         Chunk Size : 64K

       Device Role : Active device 2
       Array State : .AAA ('A' == active, '.' == missing)
    ~ #

    I am very grateful for your efforts to help me.