NAS540 failed harddisk questions
I have a NAS540 with four 2 TB Seagate drives in a RAID5 configuration, which had been working fine for about two years. A few days ago I noticed that the system had become slow and that data I retrieved from it had missing pieces, so I went to the GUI, which said my RAID was degraded. I wanted to open the storage manager to find more information on the issue, but that was impossible because the dialog wouldn't load at all.

So I connected via ssh and ran mdadm -D on all of the md devices and found that md2 was degraded. Then I ran smartctl -a on sda through sdd: three drives were fine, while sdb didn't report any SMART data at all, and I had to terminate the smartctl process.

I ordered a replacement disk and added sdb3 back to md2, which went well, and the data retrieved from the system was complete again (it was still very slow, but that was OK for me since the replacement was on its way). The next day I had broken data again and thought, "OK, the broken disk fell out of sync again." I logged in and ran mdadm -D again, only to find a surprise: it wasn't sdb3 that had fallen out of the array, it was sda3. sda's SMART status is still fine, and I was able to rebuild the array once again.

Today my replacement disk arrived and I'm unsure what I should do: replace sdb (the one with the failed SMART status) or sda, which fell out of the array for no obvious reason.
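For reference, these are roughly the commands I ran from the shell (device and array names as they appear on my box; the exact re-add invocation is from memory, so treat it as approximate):

```shell
cat /proc/mdstat                    # quick overview of all md arrays and their sync state
mdadm -D /dev/md2                   # detailed status; this showed md2 as degraded
smartctl -a /dev/sda                # SMART data; repeated for sdb, sdc, sdd
                                    # (sdb returned nothing and the process hung)
mdadm /dev/md2 --re-add /dev/sdb3   # how I put the partition back into the array
```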
Can a failed disk cause another one to fall out of the raid array? Should I order another disk?
Any advice is appreciated.