
NAS540: Volume down, repairing failed, how to restore data?

Florian Posts: 7  Junior Member
edited November 2017 in Questions
Hi all,
I use a NAS540 with latest firmware and 4 x 3TB WD RED disks.
A few days ago the NAS told me that the volume was degraded and disk 2 was taken out. In fact, the status of all disks was 'green'. Anyway, I followed the instructions to repair the volume. The time needed was shown as ~35 hrs. After around 24 hrs I checked the NAS and its status was frozen - no disk activity at all.
After resetting the device (a simple power-on reset) I was able to access the NAS via the browser. After entering the GUI the NAS said 'volume down' and then the GUI froze - no more actions possible.

I am sure the disks are all physically OK (as also shown by the NAS) and that my data is still on the disks (at least on 3 of them). So how can I restore the data? Is there, for example, a specific FW version from Zyxel, or any other solution?

Thanks in advance for your help,
Florian


Best Answers

  • Florian Posts: 7  Junior Member
    Accepted Answer
    What happened was: disk 2 failed and the volume was degraded. While repairing the volume, the rebuild crashed due to a failure of disk 4, so the volume was set to 'volume down'.
    In fact all disks are OK, and according to the PC recovery program the data can be restored using disks 1, 3 and 4. Does this help?

    The --examine command gives:
    /dev/sda3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : dab25c00:63dad717:ee42b94f:ac212d3f
               Name : NAS540:2  (local to host NAS540)
      Creation Time : Wed Mar 28 01:22:02 2018
         Raid Level : raid5
       Raid Devices : 4

     Avail Dev Size : 968507392 (461.82 GiB 495.88 GB)
         Array Size : 1452761088 (1385.46 GiB 1487.63 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : 107019af:fa398c09:4ad35dc3:766004fa

        Update Time : Wed Mar 28 19:57:47 2018
           Checksum : 56881215 - correct
             Events : 33

             Layout : left-symmetric
         Chunk Size : 64K

       Device Role : Active device 0
       Array State : A.A. ('A' == active, '.' == missing)
    /dev/sdb3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : dab25c00:63dad717:ee42b94f:ac212d3f
               Name : NAS540:2  (local to host NAS540)
      Creation Time : Wed Mar 28 01:22:02 2018
         Raid Level : raid5
       Raid Devices : 4

     Avail Dev Size : 968511488 (461.82 GiB 495.88 GB)
         Array Size : 1452761088 (1385.46 GiB 1487.63 GB)
      Used Dev Size : 968507392 (461.82 GiB 495.88 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : e116bb1e:dd5ed3bb:a17565b8:4da4d78e

        Update Time : Wed Mar 28 19:57:47 2018
           Checksum : 24bd3fb - correct
             Events : 33

             Layout : left-symmetric
         Chunk Size : 64K

       Device Role : spare
       Array State : A.A. ('A' == active, '.' == missing)
    /dev/sdc3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : dab25c00:63dad717:ee42b94f:ac212d3f
               Name : NAS540:2  (local to host NAS540)
      Creation Time : Wed Mar 28 01:22:02 2018
         Raid Level : raid5
       Raid Devices : 4

     Avail Dev Size : 968511488 (461.82 GiB 495.88 GB)
         Array Size : 1452761088 (1385.46 GiB 1487.63 GB)
      Used Dev Size : 968507392 (461.82 GiB 495.88 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : da89f5c7:b4d2b5bb:14703757:1e9d3f9e

        Update Time : Wed Mar 28 19:57:47 2018
           Checksum : 59a2ae0d - correct
             Events : 33

             Layout : left-symmetric
         Chunk Size : 64K

       Device Role : Active device 2
       Array State : A.A. ('A' == active, '.' == missing)
    /dev/sdd3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : dab25c00:63dad717:ee42b94f:ac212d3f
               Name : NAS540:2  (local to host NAS540)
      Creation Time : Wed Mar 28 01:22:02 2018
         Raid Level : raid5
       Raid Devices : 4

     Avail Dev Size : 968511488 (461.82 GiB 495.88 GB)
         Array Size : 1452760512 (1385.46 GiB 1487.63 GB)
      Used Dev Size : 968507008 (461.82 GiB 495.88 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : 660c4cc8:4d11d51f:09a7a278:8a850842

        Update Time : Wed Mar 28 19:51:44 2018
           Checksum : 83508b9c - correct
             Events : 20

             Layout : left-symmetric
         Chunk Size : 64K

       Device Role : Active device 3
       Array State : AAAA ('A' == active, '.' == missing)
    Thanks!
  • Mijzelf Posts: 665  Heroic Warrior Member
    Accepted Answer
    This array does not assemble because the most up-to-date member (by Update Time) says the 'Array State' is A.A., so it has only 2 active members, which is too few.
    /dev/sdb3 is marked as spare because it was not yet fully initialized when sdd3 went out. This partition can only be used as a last resort, as we don't know how much usable data it contains.

    As you can see /dev/sdd3 has an 'Array State' AAAA, because it is no longer updated after it was kicked from the array.

    Assuming /dev/sd[acd]3 are all usable, you can re-create the array, using the same settings that were originally used.

    mdadm --create --assume-clean --level=5  --raid-devices=4 --metadata=1.2 --chunk=64K  --layout=left-symmetric /dev/md2 /dev/sda3 missing /dev/sdc3 /dev/sdd3 
    (that is a single line)

    Some settings are the defaults, according to https://linux.die.net/man/8/mdadm, but I added them for safety and completeness.
    Here --assume-clean tells mdadm that the partitions already contain a valid array, and the 'missing' keyword says that the device with role '1' is missing. The roles 0 - 3 are assigned in the order in which you specify the partitions here.

    The sequence of the other arguments is important to mdadm, and I don't know if this is the right sequence. Fortunately it will tell you if it's wrong, and it will also tell you what should be right.
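
    If the create command is accepted, a minimal sketch of how you could verify the result before writing anything (assuming, as mentioned elsewhere in this thread, that the volume is ext4 directly on /dev/md2; /mnt/check is just an arbitrary temporary mount point):
    cat /proc/mdstat                  # the new md2 should show up with 3 of 4 members
    mdadm --detail /dev/md2           # state, level, chunk size and the missing slot
    mkdir -p /mnt/check
    mount -o ro /dev/md2 /mnt/check   # mount read-only first, so nothing gets written while checking
    ls /mnt/check                     # the shares should be visible here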

  • Florian Posts: 7  Junior Member
    Accepted Answer
    Mijzelf, you are the best!
    I tried it first on my test system and then on the original one.
    After stopping md2 and entering the command the NAS said:
    mdadm: /dev/sda3 appears to be part of a raid array:
        level=raid5 devices=4 ctime=Wed Jul 12 10:16:08 2017
    mdadm: /dev/sdc3 appears to be part of a raid array:
        level=raid5 devices=4 ctime=Wed Jul 12 10:16:08 2017
    mdadm: /dev/sdd3 appears to be part of a raid array:
        level=raid5 devices=4 ctime=Wed Jul 12 10:16:08 2017
    Continue creating array? y
    mdadm: array /dev/md2 started.

    ~ # cat /proc/mdstat
    Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
    md2 : active raid5 sdd3[3] sdc3[2] sda3[0]
          8778405312 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/3] [U_UU]
    md1 : active raid1 sdb2[0] sda2[3] sdc2[5] sdd2[4]
          1998784 blocks super 1.2 [4/4] [UUUU]
    md0 : active raid1 sdb1[6] sda1[3] sdc1[5] sdd1[4]
          1997760 blocks super 1.2 [4/4] [UUUU]
    unused devices: <none>
    The beeper immediately started and the GUI told me (after a soft reset) that the volume is degraded.
    But the GUI also shows that the volume is down, so there is no option to repair it as there was the first time.
    Anyhow, the data is accessible. I will copy it and create a new volume later.
    Btw, is there a possibility to copy specific folders (in video, music, photo) to an external HDD via the USB3 port using a telnet/ssh command (not the GUI)?

    Again, thanks a lot for your competent help!
    Florian

Answers

  • Mijzelf Posts: 665  Heroic Warrior Member
    ZyXEL uses default Linux software raid, and ext4 as the filesystem. If you can connect the disks simultaneously to any Linux system (a PC booted from a Linux live USB stick is fine) you can assemble & mount the array. Depending on the distro this will be done automagically. (AFAIK Ubuntu will do so.)
    A problem can be that some USB-SATA converters do some nasty sector translation on disks >2TiB, so to be safe you should use 'real' SATA ports.
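
    For reference, a rough sketch of those steps on a live system (device names and md numbers will likely differ on a PC; the array may e.g. appear as /dev/md127):
    sudo apt-get install mdadm         # on Ubuntu live media, if mdadm is not already present
    sudo mdadm --assemble --scan       # assemble all arrays found in the superblocks
    cat /proc/mdstat                   # check which md device the data array was given
    sudo mount -o ro /dev/md127 /mnt   # mount read-only to inspect the data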


  • Florian Posts: 7  Junior Member
    Thanks for your hints, Mijzelf!
    Any other ideas, especially with respect to a solution using the Zyxel NAS?

  • Mijzelf Posts: 665  Heroic Warrior Member
    Well, the NAS is capable of assembling and mounting the array (duh!). The problem is that your NAS seems to stall when you try to, which it shouldn't.
    In your case I'd enable the ssh server and try to log in over ssh after the stall, to see
    • whether the ssh server is still running
    • whether the raid recovery is still running
    The latter can be seen with 'cat /proc/mdstat'. If it has stopped, and the array isn't healthy yet, maybe 'dmesg' will tell why.
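    For example, once logged in over ssh, something like:
    cat /proc/mdstat       # is the rebuild still running, and at what speed/percentage?
    dmesg | tail -n 50     # any disk or md errors logged around the time of the stall?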
  • Florian Posts: 7  Junior Member
    Hello Mijzelf, hello all,

    I set up a test system with 4 x 500GB HDDs and tried to reproduce the error. The NAS is still accessible. Do you think there is a chance to recover the data?

    Thanks,
    Florian

    ---
    'cat /proc/mdstat':
    ---
    Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
    md2 : inactive sda3[0](S) sdb3[4](S) sdc3[2](S)
          1452765184 blocks super 1.2

    md1 : active raid1 sda2[0] sdd2[5] sdc2[2] sdb2[4]
          1998784 blocks super 1.2 [4/4] [UUUU]

    md0 : active raid1 sda1[0] sdd1[5] sdc1[2] sdb1[4]
          1997760 blocks super 1.2 [4/4] [UUUU]

    unused devices: <none>
    ---

    ---
    cat /proc/partitions
    ---
    major minor  #blocks  name

       7        0     156672 loop0
      31        0        256 mtdblock0
      31        1        512 mtdblock1
      31        2        256 mtdblock2
      31        3      10240 mtdblock3
      31        4      10240 mtdblock4
      31        5     112640 mtdblock5
      31        6      10240 mtdblock6
      31        7     112640 mtdblock7
      31        8       6144 mtdblock8
       8        0  488385527 sda
       8        1    1998848 sda1
       8        2    1999872 sda2
       8        3  484384768 sda3
       8       16  488386584 sdb
       8       17    1998848 sdb1
       8       18    1999872 sdb2
       8       19  484386816 sdb3
       8       32  488386584 sdc
       8       33    1998848 sdc1
       8       34    1999872 sdc2
       8       35  484386816 sdc3
       8       48  488386584 sdd
       8       49    1998848 sdd1
       8       50    1999872 sdd2
       8       51  484386816 sdd3
      31        9     102424 mtdblock9
       9        0    1997760 md0
       9        1    1998784 md1
      31       10       4464 mtdblock10


    ---
    mount:
    ---
    /proc on /proc type proc (rw)
    /sys on /sys type sysfs (rw)
    none on /proc/bus/usb type usbfs (rw)
    devpts on /dev/pts type devpts (rw)
    ubi7:ubi_rootfs2 on /firmware/mnt/nand type ubifs (ro)
    /dev/md0 on /firmware/mnt/sysdisk type ext4 (ro)
    /firmware/mnt/sysdisk/sysdisk.img on /ram_bin type ext2 (ro)
    /ram_bin/usr on /usr type none (ro,bind)
    /ram_bin/lib/security on /lib/security type none (ro,bind)
    /ram_bin/lib/modules on /lib/modules type none (ro,bind)
    /ram_bin/lib/locale on /lib/locale type none (ro,bind)
    /dev/ram0 on /tmp/tmpfs type tmpfs (rw,size=5m)
    /tmp/tmpfs/usr_etc on /usr/local/etc type none (rw,bind)
    ubi3:ubi_config on /etc/zyxel type ubifs (rw)
    configfs on /sys/kernel/config type configfs (rw)
    ---

    ---
    dmesg (extract):
    ---
    [   28.915002] UBIFS: mounted read-only
    [   28.918602] UBIFS: file system size:   103612416 bytes (101184 KiB, 98 MiB, 816 LEBs)
    [   28.926463] UBIFS: journal size:       5206016 bytes (5084 KiB, 4 MiB, 41 LEBs)
    [   28.933792] UBIFS: media format:       w4/r0 (latest is w4/r0)
    [   28.939645] UBIFS: default compressor: lzo
    [   28.943752] UBIFS: reserved for root:  4893869 bytes (4779 KiB)
    [   29.276964] md: md0 stopped.
    [   29.286394] md: bind<sdb1>
    [   29.289474] md: bind<sdc1>
    [   29.292535] md: bind<sdd1>
    [   29.295587] md: bind<sda1>
    [   29.302182] bio: create slab <bio-1> at 1
    [   29.306433] md/raid1:md0: active with 4 out of 4 mirrors
    [   29.311848] md0: detected capacity change from 0 to 2045706240
    [   29.338375] md: md1 stopped.
    [   29.353937] md: bind<sdb2>
    [   29.357165] md: bind<sdc2>
    [   29.360229] md: bind<sdd2>
    [   29.363287] md: bind<sda2>
    [   29.369893] md/raid1:md1: active with 4 out of 4 mirrors
    [   29.375310] md1: detected capacity change from 0 to 2046754816
    [   29.660673]  md1: unknown partition table
    [   29.740320] Adding 1998780k swap on /dev/md1.  Priority:-1 extents:1 across:1998780k
    [   29.770212]  md0: unknown partition table
    [   29.872725] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: (null)
    [   33.866143] EXT4-fs (md0): re-mounted. Opts: (null)
    [   33.891522] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem
    [   33.902240] EXT4-fs (loop0): mounted filesystem without journal. Opts: (null)
    [   34.102222] UBI: attaching mtd3 to ubi3
    [   34.172401] UBI: scanning is finished
    [   34.186445] UBI: attached mtd3 (name "config", size 10 MiB) to ubi3
    [   34.192734] UBI: PEB size: 131072 bytes (128 KiB), LEB size: 126976 bytes
    [   34.199555] UBI: min./max. I/O unit sizes: 2048/2048, sub-page size 2048
    [   34.206301] UBI: VID header offset: 2048 (aligned 2048), data offset: 4096
    [   34.213202] UBI: good PEBs: 80, bad PEBs: 0, corrupted PEBs: 0
    [   34.219060] UBI: user volume: 0, internal volumes: 1, max. volumes count: 128
    [   34.226223] UBI: max/mean erase counter: 3/1, WL threshold: 4096, image sequence number: 435004548
    [   34.235207] UBI: available PEBs: 36, total reserved PEBs: 44, PEBs reserved for bad PEB handling: 40
    [   34.244380] UBI: background thread "ubi_bgt3d" started, PID 1278
    [   34.295129] UBI: detaching mtd3 from ubi3
    [   34.300492] UBI: mtd3 is detached from ubi3
    [   35.420677] UBI: attaching mtd3 to ubi3
    [   35.458115] UBI: scanning is finished
    [   35.461989] UBI: empty MTD device detected
    [   35.487405] UBI: attached mtd3 (name "config", size 10 MiB) to ubi3
    [   35.494664] UBI: PEB size: 131072 bytes (128 KiB), LEB size: 126976 bytes
    [   35.502873] UBI: min./max. I/O unit sizes: 2048/2048, sub-page size 2048
    [   35.510994] UBI: VID header offset: 2048 (aligned 2048), data offset: 4096
    [   35.519273] UBI: good PEBs: 80, bad PEBs: 0, corrupted PEBs: 0
    [   35.526512] UBI: user volume: 0, internal volumes: 1, max. volumes count: 128
    [   35.535052] UBI: max/mean erase counter: 0/0, WL threshold: 4096, image sequence number: 1814224287
    [   35.545516] UBI: available PEBs: 36, total reserved PEBs: 44, PEBs reserved for bad PEB handling: 40
    [   35.556067] UBI: background thread "ubi_bgt3d" started, PID 1296
    [   35.664081] UBIFS: default file-system created
    [   35.767263] UBIFS: mounted UBI device 3, volume 0, name "ubi_config"
    [   35.773639] UBIFS: file system size:   3428352 bytes (3348 KiB, 3 MiB, 27 LEBs)
    [   35.781003] UBIFS: journal size:       1015809 bytes (992 KiB, 0 MiB, 6 LEBs)
    [   35.788170] UBIFS: media format:       w4/r0 (latest is w4/r0)
    [   35.794018] UBIFS: default compressor: lzo
    [   35.798131] UBIFS: reserved for root:  161928 bytes (158 KiB)
    [   36.025939] NTFS driver 2.1.30 [Flags: R/O MODULE].
    [   36.041758] tntfs: module license 'Commercial. For support email [email protected]' taints kernel.
    [   36.051394] Disabling lock debugging due to kernel taint
    [   36.064609] Tuxera NTFS driver 3014.4.29 [Flags: R/W MODULE].
    [   36.083971] PPP generic driver version 2.4.2
    [   36.108591] PPP MPPE Compression module registered
    [   36.133641] PPP Deflate Compression module registered
    [   36.143532] NET: Registered protocol family 24
    [   36.160953] PPP BSD Compression module registered
    [   36.473822] elp_register_ocf: Comcerto 2000 ELP Crypto Offload Engine
    [   36.480272] m86xxx_elp: Registering  key des/3des aes md5 sha1 sha2_256 sha2_384 sha2_512 sha256_hmac null
    [   37.216190] egiga0: no IPv6 routers present
    [   37.516186] egiga1: no IPv6 routers present
    [   49.595449] md: md2 stopped.
    ---


  • Mijzelf Posts: 665  Heroic Warrior Member
    This is a simulation of what happened to your 4 x 3TB array? How did you simulate it?

    mdstat shows that an md2 (the data volume on a '5xx) exists, but it doesn't know what kind of raid it is, as it only contains spare members. And sda3 is lacking.

    Presuming that you want to assemble this array, first have a look if you can assemble it:
    mdadm --stop /dev/md2
    mdadm --assemble /dev/md2 /dev/sdb3 /dev/sdc3 /dev/sdd3
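
    If the assemble is accepted, something like this shows what mdadm made of it:
    cat /proc/mdstat            # does md2 come up, and with how many members?
    mdadm --detail /dev/md2     # state, raid level and which slot is missing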

  • Florian Posts: 7  Junior Member
    1) Yes, kind of a simulation. I took a 2nd NAS (same type, same FW) and put some "old" HDDs in it.
    When executing mdstat the original NAS says:
    Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
    md2 : inactive sda3[0](S) sdb3[4](S) sdc3[2](S)
          8778405888 blocks super 1.2
    md1 : active raid1 sdb2[0] sda2[3] sdc2[5] sdd2[4]
          1998784 blocks super 1.2 [4/4] [UUUU]
    md0 : active raid1 sdb1[6] sda1[3] sdc1[5] sdd1[4]
          1997760 blocks super 1.2 [4/4] [UUUU]
    So I assume that I was able to reproduce at least a similar error.

    2) How do you know that sda3 is lacking? What does lacking mean?

    3) When using the --assemble command the NAS says:
    mdadm: superblock on /dev/sdd3 doesn't match others - assembly aborted
    I also tried
    mdadm --assemble /dev/md2 /dev/sda3 /dev/sdc3 /dev/sdd3
    with the same result.
    I tried this variant because the 2nd drive failed originally (which is b). I also disassembled the NAS, put the drives into my PC and ran a recovery program. It was able to restore the data using disks 1, 3 and 4. To copy the data I would need a license which costs 70 USD, so I would like to give the "manual" way a try.

    Regards,
    Florian
  • Mijzelf Posts: 665  Heroic Warrior Member
    2) How do you know that sda3 is lacking? What does lacking mean?

    English is not my native language, and maybe I used the wrong word, but AFAIK it means that it's not there, while it should be, or was at least expected to be.

    And I see I made a mistake, it's sdd3 which is lacking.

    md2 : inactive sda3[0](S) sdb3[4](S) sdc3[2](S)
          1452765184 blocks super 1.2

    md1 : active raid1 sda2[0] sdd2[5] sdc2[2] sdb2[4]
    Md1 is assembled from the partitions sda2 - sdd2, so the 4 disks sda - sdd are available. Yet md2 doesn't have an sdd3. The size of sdd3 in /proc/partitions is the same as sd[abc]3, so it's not a size thing.
    mdadm: superblock on /dev/sdd3 doesn't match others - assembly aborted
    md2 : inactive sda3[0](S) sdb3[4](S) sdc3[2](S)
    In this case sdd3 is lacking, so the assemble command should be
    mdadm --assemble /dev/md2 /dev/sda3 /dev/sdb3 /dev/sdc3
    BTW, the [0] here means 'role 0'. You have 4 disks, so I would expect [0] - [3]. Yet you have a [4]. Normally this means that a disk has failed, and its role has been taken over by a (hotspare?) disk with role 4. That can hardly be true.
    See https://raid.wiki.kernel.org/index.php/Mdstat
    On the other hand:
    I tried this variant because the 2nd drive failed originally (which is b)

    Thinking about it, you don't mention anywhere which raid level was used. I am assuming raid5, 3 disks + 1 parity.
    Can you post the raid headers?
    mdadm --examine /dev/sd[abcd]3
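    If the full output gets long, you could filter it down to the fields that matter for the comparison, something like:
    mdadm --examine /dev/sd[abcd]3 | grep -E 'dev/|Update Time|Events|Device Role|Array State'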




  • Mijzelf Posts: 665  Heroic Warrior Member
    is there a possibility to copy specific folders (in video, music, photo) to an external HDD via USB3 port using telnet/ssh command (not the GUI)?
    Sure. By default the USB3 disk is mounted on /e-data/<some-long-uuid>/. For convenience, you can create a symlink:
    ln -s /e-data/<some-long-uuid> /usbdisk
    (this will not survive a reboot)

    If the firmware mounted your array, it's mounted on /i-data/<some-hex-code>, and probably /i-data/sysvol is an (indirect) symlink to it. The shares are in the root of the array.
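
    To find the actual paths, something like this should list them:
    ls /e-data/          # the long uuid directory of the USB disk
    ls /i-data/          # the hex-code directory of the internal volume
    mount | grep data    # shows where everything is actually mounted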

    If you want to copy a subdirectory, the command can be
    cd /i-data/<some-hex-code>/Video
    cp -a MySubdir /usbdisk/
    If you want it verbose
    cp -av MySubdir /usbdisk/
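
    Before a large copy it can also be worth checking that the USB disk has enough free space, for example (same paths as above):
    df -h /usbdisk       # free space left on the USB disk
    du -sh MySubdir      # size of the folder about to be copied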



  • Florian Posts: 7  Junior Member
    Thanks again, Mijzelf!
    Let me know when you are in Berlin, then I will invite you for a beer :)
    Florian
  • basetron Posts: 13  Junior Member
    Hi guys,

    I'm facing a similar issue: my volume (also md2) went down and I got the failed drive replaced with a new one.

    Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
    md3 : active raid1 sdc3[0] sdd3[1]
          1949383488 blocks super 1.2 [2/2] [UU]
    md2 : inactive sdb3[2](S)
          1949383680 blocks super 1.2
    md1 : active raid1 sdb2[4] sdc2[5] sdd2[6]
          1998784 blocks super 1.2 [4/3] [U_UU]
    md0 : active raid1 sdb1[4] sdc1[5] sdd1[6]
          1997760 blocks super 1.2 [4/3] [U_UU]
    unused devices: <none>
    my mdadm.conf returns:

    ARRAY /dev/md0 level=raid1 num-devices=4 metadata=1.2 name=NAS540:0 UUID=60e20528:5ff04d12:9729d455:3fdd0b58
       devices=/dev/sdb1,/dev/sdc1,/dev/sdd1
    ARRAY /dev/md1 level=raid1 num-devices=4 metadata=1.2 name=NAS540:1 UUID=069aed49:2bf44e5c:b42db67f:82d34ecc
       devices=/dev/sdb2,/dev/sdc2,/dev/sdd2
    ARRAY /dev/md3 level=raid1 num-devices=2 metadata=1.2 name=XXX:3 UUID=da1a17ea:f2a26d21:d35d0f62:4daa646d
       devices=/dev/sdc3,/dev/sdd3
    

    and for md2 on scanning, I get the following:

    md device /dev/md2 does not appear to be active

    Take a look at md3 - it looks strange, as I guess all these names (md0, md1, md3) should have been consistent.

    Could you please write down in steps what I should do? All the shares from the faulty RAID array (the one which contains md2) are unavailable. I haven't tried anything on these drives so far, to preserve my files.

    Below I'm pasting my web interface screens; perhaps it will make it easier to come up with a successful solution?

    Thanks in advance,

    BT


  • Mijzelf Posts: 665  Heroic Warrior Member
    Can you post the output of
    su
    mdadm --examine /dev/sd[abcd]3

  • basetron Posts: 13  Junior Member
    Hi Mijzelf,

    it looks as follows:

    mdadm: cannot open /dev/sda3: No such device or address
    /dev/sdb3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x2
         Array UUID : d69d46c2:8e90e04e:a2188060:2c230b03
               Name : NAS540:2
      Creation Time : Fri Jul 17 21:09:32 2015
         Raid Level : raid1
       Raid Devices : 2

     Avail Dev Size : 3898767360 (1859.08 GiB 1996.17 GB)
         Array Size : 1949383680 (1859.08 GiB 1996.17 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
    Recovery Offset : 731019264 sectors
              State : clean
        Device UUID : 8e602567:8efc09d7:f3a620a9:02f935e5

        Update Time : Sat Mar 16 09:39:47 2019
           Checksum : 409f1b - correct
             Events : 1470

       Device Role : Active device 1
       Array State : AA ('A' == active, '.' == missing)
    /dev/sdc3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : da1a17ea:f2a26d21:d35d0f62:4daa646d
               Name : XXX:3  (local to host XXX)
      Creation Time : Mon May  9 17:11:30 2016
         Raid Level : raid1
       Raid Devices : 2

     Avail Dev Size : 3898767360 (1859.08 GiB 1996.17 GB)
         Array Size : 1949383488 (1859.08 GiB 1996.17 GB)
      Used Dev Size : 3898766976 (1859.08 GiB 1996.17 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : ca63fccc:a5346ad6:6c4a09ed:be7f9a64

        Update Time : Wed May 29 23:59:27 2019
           Checksum : 8a5fa17b - correct
             Events : 2

       Device Role : Active device 0
       Array State : AA ('A' == active, '.' == missing)
    /dev/sdd3:
              Magic : a92b4efc
            Version : 1.2
        Feature Map : 0x0
         Array UUID : da1a17ea:f2a26d21:d35d0f62:4daa646d
               Name : XXX:3  (local to host XXX)
      Creation Time : Mon May  9 17:11:30 2016
         Raid Level : raid1
       Raid Devices : 2

     Avail Dev Size : 3898767360 (1859.08 GiB 1996.17 GB)
         Array Size : 1949383488 (1859.08 GiB 1996.17 GB)
      Used Dev Size : 3898766976 (1859.08 GiB 1996.17 GB)
        Data Offset : 262144 sectors
       Super Offset : 8 sectors
              State : clean
        Device UUID : 201d9c23:684df915:3de56760:e8bcfcfe

        Update Time : Wed May 29 23:59:27 2019
           Checksum : 2e4f4b8f - correct
             Events : 2

       Device Role : Active device 1
       Array State : AA ('A' == active, '.' == missing)
    Thanks,

    BT
