How to recover mdadm RAID array after superblock is zeroed

A few days ago one of my Linux RAID1 arrays went bad. One of the disks developed bad sectors and the other one lost its superblock, so the array was degraded and the only remaining “good” disk was the one with the bad sectors. I added a new disk and tried to sync the data, but the rebuild got stuck at 36%. Tools like “dd” and “ddrescue” didn’t help either: dd kept stopping, and ddrescue was recovering at 364 bytes/second, which on a 3 TB disk is hopelessly slow. After 2 days I gave up on recovering at that speed. Back on Google I found that mdadm is smart enough to preserve the existing data when you create a new array on top of an old one, so I decided to give it a try. Still, just in case, I didn’t want to lose my data, so I first made a clone of the disk with ddrescue:

# -d: direct disk access for the input, -f: force writing to an output device, -r3: retry bad sectors for up to 3 passes
ddrescue -d -f -r3 /dev/sdb /dev/sdc /home/username/rescue.logfile

That way, if something goes wrong, I can still try to recover the data from the original disk.

First I stopped the array:

$ sudo mdadm --stop /dev/md0
mdadm: stopped /dev/md0
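
A quick way to double-check that the array is really gone is /proc/mdstat, which should no longer list md0:

$ cat /proc/mdstat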

Then I commented out the lines for this array in the mdadm config file

/etc/mdadm/mdadm.conf

so it doesn’t keep any information about the old array.
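
For reference, the commented-out line looks roughly like this (the UUID and name below are just placeholders, yours will differ):

#ARRAY /dev/md0 metadata=1.2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx name=myhost:0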
Then I rebooted the machine.

Now let’s check the disks and the labels:

$ lsblk -o name,label,size,fstype,model
NAME                  LABEL   SIZE FSTYPE MODEL
sda                         298.1G        Hitachi HTS54503
├─sda1                        3.7G
│ └─cryptswap1 (dm-0)         3.7G
├─sda2                      139.7G
└─sda3                      154.7G
sdb                           2.7T        001-1CH166
sr0                          1024M        DS8A5SH

As you can see, lsblk shows no filesystem on sdb and the disk is not associated with any array.
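
If you want to confirm that the superblock really is gone, mdadm --examine on the disk is a quick check; with a zeroed superblock it should report that no md superblock was detected:

$ sudo mdadm --examine /dev/sdb
mdadm: No md superblock detected on /dev/sdb.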

 So it’s time to recreate the array:

sudo mdadm --create /dev/md0 --verbose --level=1 --raid-devices=2 /dev/sdb missing

 And the result was:

mdadm: /dev/sdb appears to be part of a raid array:
    level=raid1 devices=2 ctime=Sat Nov 30 16:52:22 2013
mdadm: partition table exists on /dev/sdb but will be lost or
       meaningless after creating array
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
mdadm: size set to 2930135360K
Continue creating array?

As you can see, it warns that the partition table will be lost and asks for confirmation.
After I confirmed:

mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

Here is the list of disks after the recovery:

$ lsblk -o name,label,size,fstype,model
NAME                  LABEL   SIZE FSTYPE MODEL
sda                         298.1G        Hitachi HTS54503
├─sda1                        3.7G
│ └─cryptswap1 (dm-0)         3.7G
├─sda2                      139.7G
└─sda3                      154.7G
sdb                           2.7T        001-1CH166
└─md0                         2.7T
sr0                          1024M        DS8A5SH
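
Before writing anything to the recreated array, it is worth checking the filesystem read-only first. A minimal check, assuming an ext4 filesystem sits directly on /dev/md0, would be something like:

$ sudo fsck.ext4 -n /dev/md0
$ sudo mount -o ro /dev/md0 /mnt

The -n flag makes fsck answer “no” to every repair prompt, so it only reports problems, and mounting read-only lets you inspect the files without risking any further changes.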

Even though mdadm warned that the partition table would be lost, all my files were still there. 3 TB of data saved!
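
Since the old ARRAY line in /etc/mdadm/mdadm.conf is still commented out, the remaining cleanup is to add the second disk back so the mirror can rebuild and to record the new array in the config. A rough sketch (replace /dev/sdX with your actual second disk):

$ sudo mdadm --add /dev/md0 /dev/sdX
$ sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
$ sudo update-initramfs -u    # on Debian/Ubuntu, so the new config is picked up at boot

You can watch the rebuild progress in /proc/mdstat.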