Samba share on software RAID1


 
# 22  
Old 08-31-2018
Quote:
Originally Posted by hicksd8
Please answer the question Peasant asked: did you run

Code:
# sudo update-initramfs -u

Quote:
Originally Posted by Peasant
Let's just make this clear for the sake of argument.

So, you create /dev/md0 from the disks /dev/sdb and /dev/sdc.
You mount it via the mount command and add it to /etc/fstab.

After a reboot (all disks are inside), everything works as expected (mounted and healthy).
You power off the box and remove one of the disks from the array.
Booting up, everything is fine, except the array is missing the one disk you removed.

After you power off and replug the drive, then power on, there should still be one disk in the md setup; the one you didn't touch should still be part of the md device.
Is this correct?

Point being, if you unplug the drive, that drive is not considered part of the array anymore.
When you plug it back in, it will not magically rebuild; this has to be done by hand.
And of course, if you then pull the other one out and leave the first one in, you will not have a working md array.

Regards
Peasant.
You are right.
I added it back to md0 manually; that's not a problem. The problem is: why is my data in /mnt/raid not accessible, and why can't I read it, when I unplug one of the two devices?
That's what's strange to me; I can't figure it out.
I created a folder on the array where I mounted /dev/md0.

So I have a folder
/mnt/raid/test
and inside test some files.
While both disks are attached, I can access it and all is fine. When I unplug one particular disk while the machine is running (or powered off, the result is the same), I can't read from that test folder.
It gives me an input/output error.
But when I unplug the other disk instead, it works just fine.
That's what bothers me.
The idea behind all this is to have a shared folder and protection against data loss. If that disk fails, I cannot access the files. That's the problem.
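
A quick way to see what the kernel thinks happened when the folder turns unreadable (a sketch; run it right after unplugging the disk):
Code:
# see which members the array currently has and its sync state
cat /proc/mdstat
mdadm --detail /dev/md0
# check the kernel log for I/O errors against the unplugged disk
dmesg | tail -n 30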

# 23  
Old 08-31-2018
So, as far as I understand - with one disk removed the array works, and with the other disk removed it does not?
Did you use fdisk or (g)parted on those disks at all to set a raid partition type?

As for Samba and your actual requirement, that is layers above. One thing at a time.
First you need your md device to be redundant and working after a reboot.

What is the hardware on that server, since you can unplug disks live?
I would advise against that practice for testing redundancy, unless it is specifically supported.

You see disks fail in various ways, but rarely by unplugging the cable or hitting them with an axe.
Testing will prove difficult.
But from my experience at home: I had a RAID1 array, one disk died of natural causes (old age), and the mdadm system did its job.
That was some time ago, though.

Can you show the output of:
Code:
fdisk -l /dev/sdb
fdisk -l /dev/sdc

Have you considered using ZFS on Ubuntu?
It should really ease the process of creating a mirror and managing it in the long run.

Regards
Peasant
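
For reference, a minimal sketch of the ZFS alternative mentioned above, assuming the zfsutils-linux package is available and both disks can be wiped; the pool name "tank" is just an example:
Code:
# install ZFS support on Ubuntu (assumption: zfsutils-linux is the packaged route)
sudo apt install zfsutils-linux
# create a mirrored pool across the two disks (destroys any existing data on them)
sudo zpool create tank mirror /dev/sdb /dev/sdc
# check pool health and redundancy state
zpool status tank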
# 24  
Old 09-01-2018
Quote:
Originally Posted by Peasant
So, as far as I understand - with one disk removed the array works, and with the other disk removed it does not?
[...]
Can you show the output of:
Code:
fdisk -l /dev/sdb
fdisk -l /dev/sdc

Have you considered using ZFS on Ubuntu?
Here is the output:
Code:
Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
root@myuser:/mnt/md0# fdisk -l /dev/sdc
Disk /dev/sdc: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Yeah, you are right. When I pull out one of the disks, I can't access the test folder where I mounted md0.
My only concern is to secure that data. OK, Samba is a layer above, and user access to it is not the critical part, but the files must be replicated to each other, so that if one disk fails I can boot some Unix-based live system to back up or save the files.
I didn't try anything else, only mdadm.

------ Post updated 09-01-18 at 04:26 AM ------

My only concern, if I leave it like this, is whether the data would be readable from some Linux USB live system or not. Nothing else.
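
A hedged sketch of how that live-system recovery would typically look, assuming the live environment ships mdadm (device names may differ under the live system):
Code:
# scan attached disks for md superblocks and assemble any arrays found
sudo mdadm --assemble --scan
# or assemble explicitly from the surviving member, allowing a degraded start
sudo mdadm --assemble /dev/md0 /dev/sdb --run
# mount the (possibly degraded) mirror read-only and copy the data off
sudo mount -o ro /dev/md0 /mnt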
# 25  
Old 09-01-2018
I do not know if you have already tried any of the following commands to fail a drive, instead of forcefully removing it.

Code:
mdadm /dev/md0 -f /dev/sdc

Check the result
Code:
mdadm --detail /dev/md0

Remove the failed drive
Code:
mdadm /dev/md0 -r /dev/sdc

Check the result
Code:
mdadm --detail /dev/md0

Afterward you can add it back and check the result again and see it recover.

Code:
mdadm /dev/md0 -a /dev/sdc
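
After the re-add, the resync can be watched until the mirror is clean again (a sketch; the 5-second interval is just a convenience):
Code:
# follow rebuild progress; a resyncing mirror shows a progress bar here
watch -n 5 cat /proc/mdstat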

# 26  
Old 09-01-2018
Quote:
Originally Posted by tomislav91
Sorry, I didn't write it.
Yeah, of course.
This is the output:
Code:
sudo update-initramfs -u
update-initramfs: Generating /boot/initrd.img-4.10.0-19-generic

I have noticed that in the conf file the array is /dev/md/0:

Code:
ARRAY /dev/md/0 metadata=1.2 name=johna:0 UUID=bd34d949:34a3999d:949d6038:a871e1c1
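
That /dev/md/0 name is normal for metadata 1.2 arrays. If the entry ever goes stale after rebuilding the array, a common way to regenerate it is sketched below (assumption: the Ubuntu path /etc/mdadm/mdadm.conf; back the file up first):
Code:
# append the current array definition to mdadm.conf
mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
# rebuild the initramfs so the array assembles with the right name at boot
sudo update-initramfs -u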

------ Post updated at 06:41 AM ------

I also can't access my data after detaching one hard drive.

------ Post updated at 06:43 AM ------

This is the output of mdstat ([UU] means both mirror members are up; a degraded mirror would show [U_]):
Code:
root@myuser:/mnt/raid/test# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdc[1] sdb[0]
      1953383488 blocks super 1.2 [2/2] [UU]
      bitmap: 0/15 pages [0KB], 65536KB chunk

unused devices: <none>

------ Post updated at 07:23 AM ------

I now did it like this:
Link
I deleted the array and re-added it as in the example. No errors found.
I can see that it is mounted:
Code:
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/johnas--vg-root  223G  1.4G  210G   1% /
/dev/md0                     1.8T  898M  1.7T   1% /mnt/md0

Code:
lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
NAME                    SIZE FSTYPE            TYPE  MOUNTPOINT
sda                   232.9G                   disk
└─sda1                232.9G LVM2_member       part
  ├─johnas--vg-root     227G ext4              lvm   /
  └─johnas--vg-swap_1   5.9G swap              lvm   [SWAP]
sdb                     1.8T linux_raid_member disk
└─md0                   1.8T ext4              raid1 /mnt/md0
sdc                     1.8T linux_raid_member disk
└─md0                   1.8T ext4              raid1 /mnt/md0
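
To confirm that both members really carry an md superblock, the metadata can be inspected directly (a sketch):
Code:
# print the raid superblock recorded on each member disk
mdadm --examine /dev/sdb
mdadm --examine /dev/sdc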

I am now waiting for the sync, and then I will try again to detach disks and copy something to, for example, /home/johnas/something, to see whether it is good to go or not.
Just one question.
If I detach one disk and restart, the array should still be mounted at /mnt/md0, because I added it to fstab, right?
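
For that to survive a degraded boot cleanly, the fstab entry is usually written against the filesystem UUID with the nofail option (a sketch; the UUID is a placeholder for whatever blkid /dev/md0 reports):
Code:
# /etc/fstab entry for the mirror (hypothetical UUID - get yours from: blkid /dev/md0)
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/md0  ext4  defaults,nofail  0  2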
Quote:
Originally Posted by Aia
I do not know if you have already tried any of the following commands to fail a drive, instead of forcefully removing it.
[...]
Afterward you can add it back and check the result again and see it recover.

Code:
root@myuser:/home/myuser# mdadm /dev/md0 -f /dev/sdc
mdadm: set /dev/sdc faulty in /dev/md0
root@myuser:/home/myuser# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri Aug 31 14:17:31 2018
     Raid Level : raid1
     Array Size : 1953383488 (1862.89 GiB 2000.26 GB)
  Used Dev Size : 1953383488 (1862.89 GiB 2000.26 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Sep  1 16:42:36 2018
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 1
  Spare Devices : 0

           Name : myuser:0  (local to host myuser)
           UUID : bb96aaed:688c6f5c:970569cb:092e37f9
         Events : 3655

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       -       0        0        1      removed

       1       8       32        -      faulty   /dev/sdc
root@myuser:/home/myuser# mdadm /dev/md0 -r /dev/sdc
mdadm: hot removed /dev/sdc from /dev/md0
root@myuser:/home/myuser# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri Aug 31 14:17:31 2018
     Raid Level : raid1
     Array Size : 1953383488 (1862.89 GiB 2000.26 GB)
  Used Dev Size : 1953383488 (1862.89 GiB 2000.26 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Sep  1 16:42:48 2018
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : myuser:0  (local to host myuser)
           UUID : bb96aaed:688c6f5c:970569cb:092e37f9
         Events : 3656

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       -       0        0        1      removed

root@myuser:/home/myuser# mdadm /dev/md0 -a /dev/sdc
mdadm: re-added /dev/sdc
root@myuser:/home/myuser# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri Aug 31 14:17:31 2018
     Raid Level : raid1
     Array Size : 1953383488 (1862.89 GiB 2000.26 GB)
  Used Dev Size : 1953383488 (1862.89 GiB 2000.26 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Sep  1 16:43:29 2018
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : myuser:0  (local to host myuser)
           UUID : bb96aaed:688c6f5c:970569cb:092e37f9
         Events : 3660

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc

Code:
root@myuser:/home/myuser# mdadm /dev/md0 -f /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md0
root@myuser:/home/myuser# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri Aug 31 14:17:31 2018
     Raid Level : raid1
     Array Size : 1953383488 (1862.89 GiB 2000.26 GB)
  Used Dev Size : 1953383488 (1862.89 GiB 2000.26 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Sep  1 16:46:27 2018
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 1
  Spare Devices : 0

           Name : myuser:0  (local to host myuser)
           UUID : bb96aaed:688c6f5c:970569cb:092e37f9
         Events : 3672

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       32        1      active sync   /dev/sdc

       0       8       16        -      faulty   /dev/sdb
root@myuser:/home/myuser# mdadm /dev/md0 -r /dev/sdb
mdadm: hot removed /dev/sdb from /dev/md0
root@myuser:/home/myuser# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri Aug 31 14:17:31 2018
     Raid Level : raid1
     Array Size : 1953383488 (1862.89 GiB 2000.26 GB)
  Used Dev Size : 1953383488 (1862.89 GiB 2000.26 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Sep  1 16:46:37 2018
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : myuser:0  (local to host myuser)
           UUID : bb96aaed:688c6f5c:970569cb:092e37f9
         Events : 3673

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       32        1      active sync   /dev/sdc
root@myuser:/home/myuser# mdadm /dev/md0 -a /dev/sdb
mdadm: re-added /dev/sdb
root@myuser:/home/myuser# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri Aug 31 14:17:31 2018
     Raid Level : raid1
     Array Size : 1953383488 (1862.89 GiB 2000.26 GB)
  Used Dev Size : 1953383488 (1862.89 GiB 2000.26 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Sep  1 16:46:55 2018
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : myuser:0  (local to host myuser)
           UUID : bb96aaed:688c6f5c:970569cb:092e37f9
         Events : 3677

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc

------ Post updated at 09:55 AM ------

I tried to do
Code:
mdadm /dev/md0 -f /dev/sdc
mdadm /dev/md0 -r /dev/sdc

and then logged in to the shared folder from Windows; uploaded, downloaded, and deleted files; then added the disk back and tried again; then did the same with /dev/sdb, and everything went OK.

So, from what I can see, it is OK.

------ Post updated at 09:57 AM ------

So from this, can I be sure that the files are on both disks? Can I check it somehow? Or is this a good enough test?
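
One way to verify beyond the mdadm state (a sketch; the paths are examples): checksum the files while the mirror is healthy, then re-check them with each disk failed in turn.
Code:
# record checksums of everything on the mirror while it is healthy
find /mnt/md0 -type f -exec sha256sum {} + > /root/md0.sums
# after failing/removing one disk, verify the files still read back identically
sha256sum -c /root/md0.sums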
# 27  
Old 09-01-2018
Quote:
Originally Posted by tomislav91
Code:
          State : clean
 Active Devices : 2
Working Devices : 2

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc

So from this, can I be sure that the files are on both disks? Can I check it somehow? Or is this a good enough test?
The quoted lines (State: clean, with both members in active sync) show that the raid is good; you would need to investigate if you ever see a degraded state.

You should be able to access the files by going to the mount point whether your raid1 is degraded or not. There is some indication that you have already done that in /mnt/md0.

The Samba setup is not particularly related to whether the underlying storage technology is raid or something else.
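
Since a quietly degraded mirror is easy to miss, it may be worth having mdadm report failures automatically (a sketch; the address is a placeholder and assumes a working local mailer):
Code:
# in /etc/mdadm/mdadm.conf - mdadm mails alerts here when a member fails
MAILADDR admin@example.com
# test event delivery once from the shell
sudo mdadm --monitor --scan --test --oneshot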
# 28  
Old 09-02-2018
Quote:
Originally Posted by Aia
The quoted lines show that the raid is good; you would need to investigate if you ever see a degraded state.
[...]
The Samba setup is not particularly related to whether the underlying storage technology is raid or something else.
I understand that; Samba just serves the folders. My problem was that when I pulled out one of the two disks, that test folder inside /mnt/raid (the earlier mount point) was unreadable.
Now it is OK.
Thanks a lot!