Samba share on software RAID1


# 22  
Old 08-31-2018
Quote:
Originally Posted by hicksd8
Please answer the question asked by Peasant: did you run

Code:
# sudo update-initramfs -u

Quote:
Originally Posted by Peasant
Let's just make this clear for the sake of argument.

So, you create /dev/md0 from the /dev/sdb and /dev/sdc disks.
You mount it via the mount command and add it to /etc/fstab.

After a reboot (with all disks inside), everything works as expected (mounted and healthy).
You power off the box and remove one of the disks from the array.
Booting up, everything is fine, except that the disk you removed is missing.

After you power off, replug the drive and power on, there should still be one disk in the md setup. The one you didn't touch should still be part of the md device.
Is this correct?

Point being, if you unplug a drive, that drive is not considered part of the array anymore.
When you plug it back in, it will not magically rebuild; this has to be done by hand (a sketch of these steps follows this quote).
And of course, if you pull the other disk out and leave only the first (previously removed) one in, you will not have a working md array.

Regards
Peasant.
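
For reference, the steps Peasant describes would typically look something like the sketch below. The device names and the /mnt/raid mount point come from this thread; the ext4 filesystem and the exact fstab options are assumptions, not something that was posted.
Code:
# create the mirror from the two disks (this destroys any existing data on them)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# put a filesystem on it and mount it (ext4 is an assumption)
mkfs.ext4 /dev/md0
mkdir -p /mnt/raid
mount /dev/md0 /mnt/raid

# make the mount persistent across reboots
echo '/dev/md0 /mnt/raid ext4 defaults 0 2' >> /etc/fstab

# after replugging a removed drive, re-add it to the array by hand
mdadm /dev/md0 --add /dev/sdc
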
You are right, Peasant.
I added it back to md0 manually; that's not a problem. The problem is why the data under /mnt/raid is not accessible and I can't read it when I unplug one of the two devices.
That seems strange to me; I can't figure it out.
I created a folder inside the array, where I have mounted /dev/md0.

So I have a folder
/mnt/raid/test
and inside test some files.
I can access it and all is fine. When I unplug a disk while the machine is running (or powered off, it is the same result), I can't read from that test folder.
It gives me an input/output error.
But when I unplug the other disk instead, it works just fine.
That's what bothers me.
The idea behind all this is to have a shared folder and to prevent data loss. If that disk fails I cannot access those files. That's the problem.
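
When the input/output error shows up, the kernel log and the md status normally tell you which member the kernel thinks is gone. A quick check with standard tools (nothing here is specific to this box):
Code:
# what does the kernel say about the pulled disk?
dmesg | tail -n 30

# is the array still assembled, and with which members?
cat /proc/mdstat
mdadm --detail /dev/md0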

Last edited by tomislav91; 08-31-2018 at 10:12 AM..
# 23  
Old 08-31-2018
So, as far as I understand: with one disk removed the array works, but with the other disk removed it does not?
Did you use fdisk or (g)parted on those disks at all to set the RAID partition type?

As for Samba and your actual requirement, that is a layer above. One thing at a time.
First you need your md device to be redundant and working after a reboot.

What is the hardware on that server, since you can unplug disks live?
I would advise against that practice for testing redundancy, unless it is specifically supported.

You see disks fail in various ways, but rarely by someone unplugging the cable or hitting them with an axe.
Testing will prove difficult.
But from my experience at home: I had a RAID1 array, one disk died of natural causes (old age), and the mdadm system did its job.
That was some time ago, though.

Can you show the output of:
Code:
fdisk -l /dev/sdb
fdisk -l /dev/sdc

Have you considered using ZFS on Ubuntu?
It should really ease the process of creating a mirror and managing it in the long run.
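
For comparison, a ZFS mirror as suggested here would be created roughly as follows; the pool name "tank" is a placeholder and using the whole disks is an assumption for the sketch.
Code:
# create a two-disk mirror pool (the ZFS utilities must be installed;
# the disks must not hold data you want to keep)
zpool create tank mirror /dev/sdb /dev/sdc

# check the redundancy state
zpool status tank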

Regards
Peasant
# 24  
Old 09-01-2018
Quote:
Originally Posted by Peasant
So, as far as I understand: with one disk removed the array works, but with the other disk removed it does not?
Did you use fdisk or (g)parted on those disks at all to set the RAID partition type?

As for Samba and your actual requirement, that is a layer above. One thing at a time.
First you need your md device to be redundant and working after a reboot.

What is the hardware on that server, since you can unplug disks live?
I would advise against that practice for testing redundancy, unless it is specifically supported.

You see disks fail in various ways, but rarely by someone unplugging the cable or hitting them with an axe.
Testing will prove difficult.
But from my experience at home: I had a RAID1 array, one disk died of natural causes (old age), and the mdadm system did its job.
That was some time ago, though.

Can you show the output of:
Code:
fdisk -l /dev/sdb
fdisk -l /dev/sdc

Have you considered using ZFS on Ubuntu?
It should really ease the process of creating a mirror and managing it in the long run.

Regards
Peasant
Here is the output:
Code:
Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
root@myuser:/mnt/md0# fdisk -l /dev/sdc
Disk /dev/sdc: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Yeah, you are right. When I pull out one of the disks, I can't access the test folder where I mounted md0.
My only concern is to secure that data. OK, Samba is a layer above and it is not very important for users to access it, but the files must be mirrored to each other, so that if one disk fails I can boot some Unix-based live system to back up or save the files.
I didn't try anything else, only mdadm.

------ Post updated 09-01-18 at 04:26 AM ------

My only concern, if I leave it like this, is whether the data would be readable from some Linux USB live system or not. Nothing else.
# 25  
Old 09-01-2018
I do not know if you have already tried any of the following commands to fail a drive, instead of forcefully removing it.

Code:
mdadm /dev/md0 -f /dev/sdc

Check the result
Code:
mdadm --detail /dev/md0

Remove the failed drive
Code:
mdadm /dev/md0 -r /dev/sdc

Check the result
Code:
mdadm --detail /dev/md0

Afterward you can add it back and check the result again and see it recover.

Code:
mdadm /dev/md0 -a /dev/sdc
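
After adding a member back, the rebuild can be followed from /proc/mdstat; a simple way to watch it with standard tools:
Code:
# refresh the resync progress every five seconds
watch -n 5 cat /proc/mdstat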

# 26  
Old 09-01-2018
Quote:
Originally Posted by tomislav91
Sorry, I didn't post it earlier.
Yeah, of course.
This is the output:
Code:
sudo update-initramfs -u
update-initramfs: Generating /boot/initrd.img-4.10.0-19-generic

I have noticed that in the conf file the array is /dev/md/0.

Code:
ARRAY /dev/md/0 metadata=1.2 name=johna:0 UUID=bd34d949:34a3999d:949d6038:a871e1c1
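
If the array is ever recreated, its UUID changes, so the ARRAY line in /etc/mdadm/mdadm.conf has to be refreshed and the initramfs rebuilt before the array will assemble cleanly at boot. A minimal sketch, assuming the usual Ubuntu/Debian layout:
Code:
# append the current array definition (remove any stale ARRAY lines first)
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

# rebuild the initramfs so it picks up the new definition
update-initramfs -u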

------ Post updated at 06:41 AM ------

I also can't access my data after detaching one hard drive.

------ Post updated at 06:43 AM ------

This is the output of mdstat:
Code:
root@myuser:/mnt/raid/test# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdc[1] sdb[0]
      1953383488 blocks super 1.2 [2/2] [UU]
      bitmap: 0/15 pages [0KB], 65536KB chunk

unused devices: <none>

------ Post updated at 07:23 AM ------

I have now done it like this:
Link
I deleted the array and added it as in the example. No errors were found.
I can see that it is mounted:
Code:
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/johnas--vg-root  223G  1.4G  210G   1% /
/dev/md0                     1.8T  898M  1.7T   1% /mnt/md0

Code:
lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
NAME                    SIZE FSTYPE            TYPE  MOUNTPOINT
sda                   232.9G                   disk
└─sda1                232.9G LVM2_member       part
  ├─johnas--vg-root     227G ext4              lvm   /
  └─johnas--vg-swap_1   5.9G swap              lvm   [SWAP]
sdb                     1.8T linux_raid_member disk
└─md0                   1.8T ext4              raid1 /mnt/md0
sdc                     1.8T linux_raid_member disk
└─md0                   1.8T ext4              raid1 /mnt/md0
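
The lsblk output shows the whole disks (sdb, sdc) being used as RAID members, which is what this thread ended up doing. Peasant's earlier question about setting a RAID type with fdisk or (g)parted refers to the partition-based alternative; a hedged sketch of that approach with parted (not what was actually done here, and it would destroy existing data on the disks):
Code:
# give each disk a single partition flagged as Linux RAID
parted -s /dev/sdb mklabel gpt mkpart primary 0% 100% set 1 raid on
parted -s /dev/sdc mklabel gpt mkpart primary 0% 100% set 1 raid on

# then build the mirror from the partitions instead of the whole disks
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1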

I am now waiting for it to sync, and then I will try again to detach disks and copy something, for example to /home/johnas/something, to see whether it is good to go or not.
Just one question:
If I detach one disk and restart, it should still come up and be mounted as md0, because I added it to fstab, right?
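
As a side note on that fstab entry: referring to the filesystem by UUID and adding nofail keeps a degraded or missing array from blocking the boot. The UUID below is a placeholder, and nofail is only a suggestion, not something from this thread.
Code:
# /etc/fstab -- the UUID is a placeholder; get the real one with: blkid /dev/md0
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/md0  ext4  defaults,nofail  0  2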
Quote:
Originally Posted by Aia
I do not know if you have already tried any of the following commands to fail a drive, instead of forcefully removing it.

Code:
mdadm /dev/md0 -f /dev/sdc

Check the result
Code:
mdadm --detail /dev/md0

Remove the failed drive
Code:
mdadm /dev/md0 -r /dev/sdc

Check the result
Code:
mdadm --detail /dev/md0

Afterward you can add it back and check the result again and see it recover.

Code:
mdadm /dev/md0 -a /dev/sdc



Code:
root@myuser:/home/myuser# mdadm /dev/md0 -f /dev/sdc
mdadm: set /dev/sdc faulty in /dev/md0
root@myuser:/home/myuser# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri Aug 31 14:17:31 2018
     Raid Level : raid1
     Array Size : 1953383488 (1862.89 GiB 2000.26 GB)
  Used Dev Size : 1953383488 (1862.89 GiB 2000.26 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Sep  1 16:42:36 2018
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 1
  Spare Devices : 0

           Name : myuser:0  (local to host myuser)
           UUID : bb96aaed:688c6f5c:970569cb:092e37f9
         Events : 3655

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       -       0        0        1      removed

       1       8       32        -      faulty   /dev/sdc
root@myuser:/home/myuser# mdadm /dev/md0 -r /dev/sdc
mdadm: hot removed /dev/sdc from /dev/md0
root@myuser:/home/myuser# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri Aug 31 14:17:31 2018
     Raid Level : raid1
     Array Size : 1953383488 (1862.89 GiB 2000.26 GB)
  Used Dev Size : 1953383488 (1862.89 GiB 2000.26 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Sep  1 16:42:48 2018
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : myuser:0  (local to host myuser)
           UUID : bb96aaed:688c6f5c:970569cb:092e37f9
         Events : 3656

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       -       0        0        1      removed

root@myuser:/home/myuser# mdadm /dev/md0 -a /dev/sdc
mdadm: re-added /dev/sdc
root@myuser:/home/myuser# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri Aug 31 14:17:31 2018
     Raid Level : raid1
     Array Size : 1953383488 (1862.89 GiB 2000.26 GB)
  Used Dev Size : 1953383488 (1862.89 GiB 2000.26 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Sep  1 16:43:29 2018
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : myuser:0  (local to host myuser)
           UUID : bb96aaed:688c6f5c:970569cb:092e37f9
         Events : 3660

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc

Code:
root@myuser:/home/myuser# mdadm /dev/md0 -f /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md0
root@myuser:/home/myuser# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri Aug 31 14:17:31 2018
     Raid Level : raid1
     Array Size : 1953383488 (1862.89 GiB 2000.26 GB)
  Used Dev Size : 1953383488 (1862.89 GiB 2000.26 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Sep  1 16:46:27 2018
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 1
  Spare Devices : 0

           Name : myuser:0  (local to host myuser)
           UUID : bb96aaed:688c6f5c:970569cb:092e37f9
         Events : 3672

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       32        1      active sync   /dev/sdc

       0       8       16        -      faulty   /dev/sdb
root@myuser:/home/myuser# mdadm /dev/md0 -r /dev/sdb
mdadm: hot removed /dev/sdb from /dev/md0
root@myuser:/home/myuser# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri Aug 31 14:17:31 2018
     Raid Level : raid1
     Array Size : 1953383488 (1862.89 GiB 2000.26 GB)
  Used Dev Size : 1953383488 (1862.89 GiB 2000.26 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Sep  1 16:46:37 2018
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : myuser:0  (local to host myuser)
           UUID : bb96aaed:688c6f5c:970569cb:092e37f9
         Events : 3673

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       32        1      active sync   /dev/sdc
root@myuser:/home/myuser# mdadm /dev/md0 -a /dev/sdb
mdadm: re-added /dev/sdb
root@myuser:/home/myuser# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri Aug 31 14:17:31 2018
     Raid Level : raid1
     Array Size : 1953383488 (1862.89 GiB 2000.26 GB)
  Used Dev Size : 1953383488 (1862.89 GiB 2000.26 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Sep  1 16:46:55 2018
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : myuser:0  (local to host myuser)
           UUID : bb96aaed:688c6f5c:970569cb:092e37f9
         Events : 3677

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc

------ Post updated at 09:55 AM ------

I tried to do
Code:
mdadm /dev/md0 -f /dev/sdc
mdadm /dev/md0 -r /dev/sdc

and then logged in to the share folder from Windows, and uploaded, downloaded and deleted files; then I added the disk back and tried again, then did the same with /dev/sdb, and everything went OK.
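
The same kind of test can also be run from the server itself with smbclient, without a Windows client; the share name "raid" and the user below are placeholders, since the actual smb.conf was not posted in this thread.
Code:
# list the share and upload a file over SMB (share name and user are hypothetical)
smbclient //localhost/raid -U myuser -c 'ls; put testfile.txt'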

So, from what I can see, it is OK.

------ Post updated at 09:57 AM ------

So from this, can I be sure that the files are on the disks? Can I check it somehow? Or is this a good enough test?
# 27  
Old 09-01-2018
Quote:
Originally Posted by tomislav91
Code:
root@myuser:/home/myuser# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri Aug 31 14:17:31 2018
     Raid Level : raid1
     Array Size : 1953383488 (1862.89 GiB 2000.26 GB)
  Used Dev Size : 1953383488 (1862.89 GiB 2000.26 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Sep  1 16:46:55 2018
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : myuser:0  (local to host myuser)
           UUID : bb96aaed:688c6f5c:970569cb:092e37f9
         Events : 3677

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc



So, from what I can see, it is OK.

------ Post updated at 09:57 AM ------

So from this, can I be sure that the files are on the disks? Can I check it somehow? Or is this a good enough test?
The "State : clean" and the two "active sync" device entries in that output show that the RAID is good; you would need to investigate if you saw a degraded state.

You should be able to access the files by going to the mount point whether your RAID1 is degraded or not. There is some indication that you have already done this in /mnt/md0.

The Samba setup is not particularly related to whether the underlying storage technology is RAID or something else.
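
One simple way to convince yourself that the data really lives on both disks is to checksum a test file, degrade the array to each single member in turn, and re-check the checksum from the mount point. A sketch using the mdadm commands already shown in this thread plus sha256sum; the file name is a placeholder:
Code:
# record a checksum while the mirror is healthy
sha256sum /mnt/md0/testfile > /root/testfile.sha256

# degrade the array to sdb only, then verify the data is still readable and intact
mdadm /dev/md0 -f /dev/sdc && mdadm /dev/md0 -r /dev/sdc
sha256sum -c /root/testfile.sha256

# re-add the disk, wait for the resync, then repeat the test with /dev/sdb removed
mdadm /dev/md0 -a /dev/sdc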
# 28  
Old 09-02-2018
Quote:
Originally Posted by Aia
The "State : clean" and the two "active sync" device entries in that output show that the RAID is good; you would need to investigate if you saw a degraded state.

You should be able to access the files by going to the mount point whether your RAID1 is degraded or not. There is some indication that you have already done this in /mnt/md0.

The Samba setup is not particularly related to whether the underlying storage technology is RAID or something else.
I understand that, and Samba just watches the folders. My problem was that when I pulled out one of the two disks, that test folder inside /mnt/raid (the earlier mount point) was unreadable.
Now it is OK.
Thanks a lot!