Samba share on software RAID1


# 22  
Quote:
Originally Posted by hicksd8
Please answer the question asked by Peasant about whether you ran:

Code:
# sudo update-initramfs -u

Quote:
Originally Posted by Peasant
Let's just make this clear for the sake of argument.

So, you created /dev/md0 from the /dev/sdb and /dev/sdc disks.
You mounted it via the mount command and added it to /etc/fstab.

After a reboot (with all disks inside), everything works as expected (mounted and healthy).
You power off the box and remove one of the disks from the array.
Booting up, everything is fine, except that the disk you removed is missing from the array.

After you power off, replug the drive, and power on, there should still be one disk in the md setup. The one you didn't touch should still be part of the md device.
Is this correct?

Point being, if you unplug a drive, that drive is not considered part of the array anymore.
When you plug it back in, it will not magically rebuild; that has to be done by hand.
And of course, if you then unplug the other one and leave the first one in, you will not have a working md array.

Regards
Peasant.
You are right.
I added it back to md0 manually; that's not a problem. The problem is why my data in /mnt/raid is not accessible and I can't read it when I unplug one of the two devices.
That's strange to me; I can't figure it out.
I created a folder inside the array, at the point where I have mounted /dev/md0.

So I have a folder
/mnt/raid/test
and inside test some files.
I can access it and all is fine. But when I unplug a disk while the machine is running (or powered off, it's the same result), I can't read from that test folder.
It gives me an input/output error.
But when I unplug the other disk instead, it works just fine.
That's what bothers me.
The idea behind all this is to have a shared folder and to prevent data loss. If that disk fails I cannot access those files. That's the problem.

Last edited by tomislav91; 08-31-2018 at 09:12 AM..
# 23  
So, as far as I understand it - with one disk removed the array works, with the other disk removed it does not?
Did you use fdisk or (g)parted on those disks at all to set the RAID partition type?

As for Samba and your actual requirement, that is layers above. One thing at a time.
First you need your md device to be redundant and working after reboot.
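For reference, a minimal sketch of making the md device persist across reboots (the fstab line is a placeholder, adjust paths and UUID to your setup):
Code:
# record the array so the initramfs can assemble it at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
# then mount it via /etc/fstab, for example:
# UUID=<filesystem-uuid>  /mnt/raid  ext4  defaults  0  2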

What is the hardware on that server, since you can unplug disks live?
I would advise against that practice for testing redundancy, unless hot-plugging is specifically supported.

You see disks fail in various ways, but rarely by someone unplugging the cable or hitting them with an axe.
Testing will prove difficult.
But from my experience at home: I had a RAID1 array, one disk died of natural causes (old age), and the mdadm system did its job.
This was some time ago, though.

Can you show the output of:
Code:
fdisk -l /dev/sdb
fdisk -l /dev/sdc
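It would also help to see the md superblocks and the kernel's view of the array, e.g.:
Code:
mdadm --examine /dev/sdb /dev/sdc
cat /proc/mdstat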

Have you considered using ZFS on Ubuntu?
It should really ease the process of creating a mirror and managing it in the long run.
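For comparison, a ZFS mirror is a single command (the pool name here is just an example):
Code:
zpool create tank mirror /dev/sdb /dev/sdc
zpool status tank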

Regards
Peasant
# 24  
Quote:
Originally Posted by Peasant
So, as far as I understand it - with one disk removed the array works, with the other disk removed it does not?
Did you use fdisk or (g)parted on those disks at all to set the RAID partition type?

As for Samba and your actual requirement, that is layers above. One thing at a time.
First you need your md device to be redundant and working after reboot.

What is the hardware on that server, since you can unplug disks live?
I would advise against that practice for testing redundancy, unless hot-plugging is specifically supported.

You see disks fail in various ways, but rarely by someone unplugging the cable or hitting them with an axe.
Testing will prove difficult.
But from my experience at home: I had a RAID1 array, one disk died of natural causes (old age), and the mdadm system did its job.
This was some time ago, though.

Can you show the output of:
Code:
fdisk -l /dev/sdb
fdisk -l /dev/sdc

Have you considered using ZFS on Ubuntu?
It should really ease the process of creating a mirror and managing it in the long run.

Regards
Peasant
Here is the output:
Code:
Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
root@myuser:/mnt/md0# fdisk -l /dev/sdc
Disk /dev/sdc: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Yeah, you are right. When I pull out one of the disks I can't access the test folder where I mounted md0.
My only concern is to secure that data. OK, Samba is in the layers above and it's not very important that users can access it, but the files must be replicated to both disks so that, if one disk fails, I can boot some Unix-based live system and back up or save the files.
I didn't try anything else, only mdadm.

------ Post updated 09-01-18 at 04:26 AM ------

My only concern, if I leave it like this, is whether the data will be readable from some Linux USB live system or not. Nothing else.
# 25  
I do not know if you have already tried any of the following commands to fail a drive, instead of forcefully removing it.

Code:
mdadm /dev/md0 -f /dev/sdc

Check the result
Code:
mdadm --detail /dev/md0

Remove the failed drive
Code:
mdadm /dev/md0 -r /dev/sdc

Check the result
Code:
mdadm --detail /dev/md0

Afterward you can add it back and check the result again and see it recover.

Code:
mdadm /dev/md0 -a /dev/sdc
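While it resyncs, you can watch the rebuild progress with:
Code:
cat /proc/mdstat
mdadm --detail /dev/md0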

# 26  
Quote:
Originally Posted by tomislav91
Sorry, I didn't write it.
Yeah, of course.
This is the output:
Code:
sudo update-initramfs -u
update-initramfs: Generating /boot/initrd.img-4.10.0-19-generic

I have noticed that in the conf file the array is /dev/md/0

Code:
ARRAY /dev/md/0 metadata=1.2 name=johna:0 UUID=bd34d949:34a3999d:949d6038:a871e1c1

------ Post updated at 06:41 AM ------

I also can't access my data after detaching one hard drive.

------ Post updated at 06:43 AM ------

This is the output of mdstat:
Code:
root@myuser:/mnt/raid/test# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdc[1] sdb[0]
      1953383488 blocks super 1.2 [2/2] [UU]
      bitmap: 0/15 pages [0KB], 65536KB chunk

unused devices: <none>

------ Post updated at 07:23 AM ------

I now did it like this:
Link
I deleted the array and added it again as in that example. No errors found.
I can see that it is mounted:
Code:
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/johnas--vg-root  223G  1.4G  210G   1% /
/dev/md0                     1.8T  898M  1.7T   1% /mnt/md0

Code:
lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
NAME                    SIZE FSTYPE            TYPE  MOUNTPOINT
sda                   232.9G                   disk
└─sda1                232.9G LVM2_member       part
  ├─johnas--vg-root     227G ext4              lvm   /
  └─johnas--vg-swap_1   5.9G swap              lvm   [SWAP]
sdb                     1.8T linux_raid_member disk
└─md0                   1.8T ext4              raid1 /mnt/md0
sdc                     1.8T linux_raid_member disk
└─md0                   1.8T ext4              raid1 /mnt/md0

I am now waiting for it to sync, and will then try again to detach disks and copy something, for example to /home/johnas/something, to see whether it is good to go or not.
Just one question.
If I detach one disk and restart, the array should still come up and be mounted as md0, because I added it to fstab.
Quote:
Originally Posted by Aia
I do not know if you have already tried any of the following commands to fail a drive, instead of forcefully removing it.

Code:
mdadm /dev/md0 -f /dev/sdc

Check the result
Code:
mdadm --detail /dev/md0

Remove the failed drive
Code:
mdadm /dev/md0 -r /dev/sdc

Check the result
Code:
mdadm --detail /dev/md0

Afterward you can add it back and check the result again and see it recover.

Code:
mdadm /dev/md0 -a /dev/sdc



Code:
root@myuser:/home/myuser# mdadm /dev/md0 -f /dev/sdc
mdadm: set /dev/sdc faulty in /dev/md0
root@myuser:/home/myuser# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri Aug 31 14:17:31 2018
     Raid Level : raid1
     Array Size : 1953383488 (1862.89 GiB 2000.26 GB)
  Used Dev Size : 1953383488 (1862.89 GiB 2000.26 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Sep  1 16:42:36 2018
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 1
  Spare Devices : 0

           Name : myuser:0  (local to host myuser)
           UUID : bb96aaed:688c6f5c:970569cb:092e37f9
         Events : 3655

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       -       0        0        1      removed

       1       8       32        -      faulty   /dev/sdc
root@myuser:/home/myuser# mdadm /dev/md0 -r /dev/sdc
mdadm: hot removed /dev/sdc from /dev/md0
root@myuser:/home/myuser# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri Aug 31 14:17:31 2018
     Raid Level : raid1
     Array Size : 1953383488 (1862.89 GiB 2000.26 GB)
  Used Dev Size : 1953383488 (1862.89 GiB 2000.26 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Sep  1 16:42:48 2018
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : myuser:0  (local to host myuser)
           UUID : bb96aaed:688c6f5c:970569cb:092e37f9
         Events : 3656

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       -       0        0        1      removed

root@myuser:/home/myuser# mdadm /dev/md0 -a /dev/sdc
mdadm: re-added /dev/sdc
root@myuser:/home/myuser# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri Aug 31 14:17:31 2018
     Raid Level : raid1
     Array Size : 1953383488 (1862.89 GiB 2000.26 GB)
  Used Dev Size : 1953383488 (1862.89 GiB 2000.26 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Sep  1 16:43:29 2018
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : myuser:0  (local to host myuser)
           UUID : bb96aaed:688c6f5c:970569cb:092e37f9
         Events : 3660

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc

Code:
root@myuser:/home/myuser# mdadm /dev/md0 -f /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md0
root@myuser:/home/myuser# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri Aug 31 14:17:31 2018
     Raid Level : raid1
     Array Size : 1953383488 (1862.89 GiB 2000.26 GB)
  Used Dev Size : 1953383488 (1862.89 GiB 2000.26 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Sep  1 16:46:27 2018
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 1
  Spare Devices : 0

           Name : myuser:0  (local to host myuser)
           UUID : bb96aaed:688c6f5c:970569cb:092e37f9
         Events : 3672

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       32        1      active sync   /dev/sdc

       0       8       16        -      faulty   /dev/sdb
root@myuser:/home/myuser# mdadm /dev/md0 -r /dev/sdb
mdadm: hot removed /dev/sdb from /dev/md0
root@myuser:/home/myuser# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri Aug 31 14:17:31 2018
     Raid Level : raid1
     Array Size : 1953383488 (1862.89 GiB 2000.26 GB)
  Used Dev Size : 1953383488 (1862.89 GiB 2000.26 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Sep  1 16:46:37 2018
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : myuser:0  (local to host myuser)
           UUID : bb96aaed:688c6f5c:970569cb:092e37f9
         Events : 3673

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       32        1      active sync   /dev/sdc
root@myuser:/home/myuser# mdadm /dev/md0 -a /dev/sdb
mdadm: re-added /dev/sdb
root@myuser:/home/myuser# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri Aug 31 14:17:31 2018
     Raid Level : raid1
     Array Size : 1953383488 (1862.89 GiB 2000.26 GB)
  Used Dev Size : 1953383488 (1862.89 GiB 2000.26 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Sep  1 16:46:55 2018
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : myuser:0  (local to host myuser)
           UUID : bb96aaed:688c6f5c:970569cb:092e37f9
         Events : 3677

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc

------ Post updated at 09:55 AM ------

I tried to do
Code:
mdadm /dev/md0 -f /dev/sdc
mdadm /dev/md0 -r /dev/sdc

and then logged in to the shared folder from Windows, and uploaded, downloaded, and deleted files, then added the disk back and tried again, then did the same with /dev/sdb, and everything went OK.

So, from what I can see, it is OK.

------ Post updated at 09:57 AM ------

So from this can I be sure that the files are on both disks? Can I check it somehow? Or is this a good enough test for it?
# 27  
Quote:
Originally Posted by tomislav91
Code:
root@myuser:/home/myuser# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri Aug 31 14:17:31 2018
     Raid Level : raid1
     Array Size : 1953383488 (1862.89 GiB 2000.26 GB)
  Used Dev Size : 1953383488 (1862.89 GiB 2000.26 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Sep  1 16:46:55 2018
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : myuser:0  (local to host myuser)
           UUID : bb96aaed:688c6f5c:970569cb:092e37f9
         Events : 3677

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc



So, from what I can see, it is OK.

------ Post updated at 09:57 AM ------

So from this can I be sure that the files are on both disks? Can I check it somehow? Or is this a good enough test for it?
The highlighted parts (State : clean, with both devices shown as active sync) show that the RAID is good; you would need to investigate if you see a degraded state.

You should be able to access the files by going to the mount point whether your RAID1 is degraded or not. There is some indication that you have done that in /mnt/md0.
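If you want to convince yourself that the data really is on both disks, one simple spot check is to checksum a file before failing a member and again afterwards (the file name below is just an example):
Code:
md5sum /mnt/md0/test/somefile     # with both members active
mdadm /dev/md0 -f /dev/sdc        # fail and remove one member
mdadm /dev/md0 -r /dev/sdc
md5sum /mnt/md0/test/somefile     # should print the identical checksum on the degraded array

Afterwards re-add the member with -a as above and let it resync.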

The Samba setup is not particularly related to whether the underlying storage technology is RAID or something else.
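As for reading the data from a live/rescue system: a degraded RAID1 can be started from a single member and mounted as usual (device and mount point here are assumptions, adjust to what the live system detects):
Code:
mdadm --assemble --run /dev/md0 /dev/sdb   # start the mirror with only one member present
mount /dev/md0 /mnt                        # the ext4 filesystem on it is readable as normal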
# 28  
Quote:
Originally Posted by Aia
The highlighted parts (State : clean, with both devices shown as active sync) show that the RAID is good; you would need to investigate if you see a degraded state.

You should be able to access the files by going to the mount point whether your RAID1 is degraded or not. There is some indication that you have done that in /mnt/md0.

The Samba setup is not particularly related to whether the underlying storage technology is RAID or something else.
I understand that, and Samba just watches the folders. My problem was that when I pulled out one of the two disks, the test folder inside /mnt/raid (the earlier mount point) was unreadable.
Now it is OK.
Thanks a lot!
