mdadm unable to fail a resyncing drive?
Post 302545847 by DGPickett on Tuesday 9th of August 2011 05:12:00 PM
The man page seems to say you can pull them all offline, but it is otherwise terse, so maybe the assumption is that if you want to limp along without the bad drive, you just pull it? Maybe the facility you want is lower down, in the system device layer rather than the md virtual device layer.
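For what it's worth, a sketch of the two usual escape hatches (the device names here are made up): first ask md to fail and detach the member, and if the resync keeps the device busy, offline the disk at the SCSI layer underneath so md has no choice:

    # Ask md to drop the member (hypothetical names):
    mdadm /dev/md0 --fail /dev/sdb1
    mdadm /dev/md0 --remove /dev/sdb1

    # If md still holds it during the resync, delete the disk at the
    # lower (SCSI) layer; md will then see the member as failed:
    echo 1 > /sys/block/sdb/device/delete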
 

10 More Discussions You Might Find Interesting

1. Solaris

Resyncing Progress of hardware mirror

Hi, I've recently mirrored the 4 disks in a V440. Disks 0 + 1 have been mirrored with hardware mirroring using the command raidctl -c c1t0d0 c1t2d0; the other 2 disks have been mirrored using Solstice DiskSuite. I know how to check the progress for the SDS mirroring but how can I find the... (2 Replies)
Discussion started by: Chains
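A rough sketch of how one might check both sides, assuming the hardware volume is the one created above (raidctl option syntax varies between Solaris 10 releases; older ones print status with no arguments):

    # Hardware mirror: raidctl reports the volume status, including
    # an in-progress resync percentage:
    raidctl -l c1t0d0

    # SDS/SVM mirrors: metastat shows "Resync in progress" per mirror:
    metastat | grep -i resync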

2. Solaris

Unable to mount USB Pen drive on my Server

Hello Gurus!! Very recently I tried to mount a USB pen drive onto my Solaris 10 (X4170 model) server. As I understand it, in the ideal scenario it should get mounted automatically, but that did not happen. Nothing is shown about the pen drive in either the "iostat -En" output or "rmformat -l". I also... (10 Replies)
Discussion started by: EmbedUX
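One hedged avenue (the device path below is a guess): restart removable-media management, rescan, and mount by hand if a node appears:

    # Kick the volume-management service, then look for the stick again:
    svcadm restart volfs
    rmformat -l

    # If a device node shows up, a FAT-formatted stick mounts like this
    # (c2t0d0 is an assumption):
    mount -F pcfs /dev/dsk/c2t0d0s0:c /mnt/usb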

3. UNIX for Advanced & Expert Users

mdadm question

Hello, I have 4 drives (500G each) in a RAID 10. I had a power failure and this is the result: cat /proc/mdstat Personalities : md126 : inactive sdb sdc sdd sde 1953536528 blocks super external:-md127/0 md127 : inactive sdd(S) sde(S) sdb(S) sdc(S) 9028 blocks super... (3 Replies)
Discussion started by: rmokros
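A cautious sketch of the usual recovery path after a power failure, using the array and member names from the post (--force is a last resort only):

    # Stop the half-assembled arrays, then let mdadm re-read the
    # superblocks and reassemble cleanly:
    mdadm --stop /dev/md126
    mdadm --stop /dev/md127
    mdadm --assemble --scan

    # Last resort if the event counts disagree:
    # mdadm --assemble --force /dev/md126 /dev/sd[bcde]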

4. UNIX for Advanced & Expert Users

mdadm container! How does it work

Hi everyone, I am not sure I understand how mdadm --create /dev/md0 --level=container works. A device called /dev/md0 appears in /proc/mdstat but I am not sure how to use that device. I have 2 blank drives with 1 500GB partition on each. I would like to set up mirroring, but not in the... (0 Replies)
Discussion started by: hytron
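For what it's worth, a container holds only metadata; the usable array is created inside it in a second step. A minimal sketch, assuming IMSM metadata and the drive names sdb/sdc:

    # Step 1: create the container (metadata only, no usable block device yet):
    mdadm --create /dev/md0 --level=container --metadata=imsm \
          --raid-devices=2 /dev/sdb /dev/sdc

    # Step 2: create the RAID1 volume inside the container:
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/md0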

5. Debian

Unable to mount external drive

Trying to mount an external 160GB Toshiba drive but.... this is my dmesg tail output: usb 2-2: new high speed USB device using ehci_hcd and address 3 usb 2-2: New USB device found, idVendor=13fd, idProduct=1618 usb 2-2: New USB device strings: Mfr=0, Product=0, SerialNumber=0 usb 2-2:... (4 Replies)
Discussion started by: Ridson
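A plausible next step once dmesg shows the device (sdb is an assumption; the real letter comes from the end of the dmesg output):

    # Did a disk driver attach, and what partitions exist?
    dmesg | grep -i 'sd[a-z]'
    fdisk -l /dev/sdb

    # Mount the first partition; an NTFS drive needs the ntfs-3g type:
    mount /dev/sdb1 /mnt
    # mount -t ntfs-3g /dev/sdb1 /mnt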

6. Red Hat

mdadm for / and /boot

Had this RHEL 5 installation with /dev/sda1 and /dev/sda2 running. Created two more partitions, /dev/sdj1 and /dev/sdj2, the same size as the partitions on /dev/sda, and tried to use mdadm to create a RAID1. I cannot even do it in "rescue" mode, and I wonder if it can be done.. it kept... (2 Replies)
Discussion started by: ppchu99
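The usual trick for converting a running system is to build the mirrors degraded on the new disk first. A sketch using the partition names from the post (0.90 metadata is an assumption, chosen so a legacy GRUB can still read /boot):

    # Create both mirrors with the original disk marked "missing":
    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
          --metadata=0.90 missing /dev/sdj1
    mdadm --create /dev/md1 --level=1 --raid-devices=2 \
          --metadata=0.90 missing /dev/sdj2

    # After copying the data and fixing fstab/grub, attach the old disk:
    # mdadm /dev/md0 --add /dev/sda1
    # mdadm /dev/md1 --add /dev/sda2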

7. UNIX for Dummies Questions & Answers

unable to automount a cifs drive in linux

Hi, I am using SUSE Linux 11. I have a couple of "nfs" entries in /etc/fstab which are automatically mounted after a system restart. One of the entries is a Windows drive mounted using cifs as shown below: //IP-Address/Partition /mnt/x cifs credentials=/creds/.creds,rw,uid=<name> 0 0 I want to... (1 Reply)
Discussion started by: rakeshkumar
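If the goal is to have the CIFS entry come up at boot like the NFS ones, a common fix is the _netdev option, which makes the mount wait for the network (the rest of the line follows the post):

    //IP-Address/Partition /mnt/x cifs credentials=/creds/.creds,rw,uid=<name>,_netdev 0 0

    # Pick the entry up without a reboot:
    mount -a -t cifs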

8. Filesystems, Disks and Memory

MDADM Failure - where it came from?

Hello, I have a system with 6 SATA3 Seagate st3000dm01 disks running on stable Debian with software RAID (mdadm). I have md0 for root, md1 for swap, and md2 for the files. I now want to add one more disk (sdh4) to md2 but I got these errors: The new disk is connected to a 4 port SATA... (7 Replies)
Discussion started by: Sunghost
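A sketch of the usual growth path, using the partition name from the post (the final --raid-devices count depends on how many members md2 already has, so 7 here is an assumption):

    # Add the new partition; it arrives as a spare:
    mdadm --add /dev/md2 /dev/sdh4

    # Then widen the array so the spare becomes an active member:
    mdadm --grow /dev/md2 --raid-devices=7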

9. Solaris

Unable to send SCSI commands to USB Drive

I am connecting a USB mass-storage removable drive to a Solaris 10 x86 machine. The device is detected and I am able to perform standard read and write functions. But I want to use code to send IOCTL-based SCSI commands to the same device to read and write the data, which I am unable to do.... (17 Replies)
Discussion started by: danish2012

10. Solaris

Maint, resyncing and last-erred notifications

Hi fellow members! I have an Oracle Solaris server with two internal disks that acts as an authentication server only. For now the server seems to be doing its job, but when typing metastat -c I get some notifications. I am not familiar with SVM; I wonder if someone can help me with this:... (3 Replies)
Discussion started by: fretagi
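A hedged sketch of the usual SVM triage (the metadevice and slice names are assumptions; the real ones come from the metastat output):

    # Show which side of the mirror needs maintenance:
    metastat -c

    # After the disk checks out (or is replaced), re-enable the slice:
    metareplace -e d10 c0t1d0s0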
MOUNT_NULLFS(8)             BSD System Manager's Manual             MOUNT_NULLFS(8)

NAME
     mount_nullfs -- mount a loopback file system sub-tree; demonstrate the use of a null file system layer

SYNOPSIS
     mount_nullfs [-o options] target mount-point

DESCRIPTION
     The mount_nullfs utility creates a null layer, duplicating a sub-tree of the file system name space under another part of the global file system namespace.  This allows existing files and directories to be accessed using a different pathname.

     The primary differences between a virtual copy of the file system and a symbolic link are that the getcwd(3) functions work correctly in the virtual copy, and that other file systems may be mounted on the virtual copy without affecting the original.  A different device number for the virtual copy is returned by stat(2), but in other respects it is indistinguishable from the original.

     The mount_nullfs file system differs from a traditional loopback file system in two respects: it is implemented using a stackable-layers technique, and its ``null-node''s stack above all lower-layer vnodes, not just over directory vnodes.

     The options are as follows:

     -o      Options are specified with a -o flag followed by a comma separated string of options.  See the mount(8) man page for possible options and their meanings.

     The null layer has two purposes.  First, it serves as a demonstration of layering by providing a layer which does nothing.  (It actually does everything the loopback file system does, which is slightly more than nothing.)  Second, the null layer can serve as a prototype layer.  Since it provides all necessary layer framework, new file system layers can be created very easily by starting with a null layer.

     The remainder of this man page examines the null layer as a basis for constructing new layers.

INSTANTIATING NEW NULL LAYERS
     New null layers are created with mount_nullfs.  The mount_nullfs utility takes two arguments, the pathname of the lower vfs (target-pn) and the pathname where the null layer will appear in the namespace (mount-point-pn).  After the null layer is put into place, the contents of the target-pn subtree will be aliased under mount-point-pn.
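     For example (the paths here are arbitrary), aliasing /usr/src under /mnt/src and unmounting it again:

           mount_nullfs /usr/src /mnt/src
           ls /mnt/src        # same tree; stat(2) reports a different device
           umount /mnt/src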
OPERATION OF A NULL LAYER
     The null layer is the minimum file system layer, simply bypassing all possible operations to the lower layer for processing there.  The majority of its activity centers on the bypass routine, through which nearly all vnode operations pass.

     The bypass routine accepts arbitrary vnode operations for handling by the lower layer.  It begins by examining vnode operation arguments and replacing any null-nodes by their lower-layer equivalents.  It then invokes the operation on the lower layer.  Finally, it replaces the null-nodes in the arguments and, if a vnode is returned by the operation, stacks a null-node on top of the returned vnode.

     Although bypass handles most operations, vop_getattr, vop_inactive, vop_reclaim, and vop_print are not bypassed.  Vop_getattr must change the fsid being returned.  Vop_inactive and vop_reclaim are not bypassed so that they can handle freeing null-layer specific data.  Vop_print is not bypassed to avoid excessive debugging information.

INSTANTIATING VNODE STACKS
     Mounting associates the null layer with a lower layer, in effect stacking two VFSes.  Vnode stacks are instead created on demand as files are accessed.  The initial mount creates a single vnode stack for the root of the new null layer.  All other vnode stacks are created as a result of vnode operations on this or other null vnode stacks.

     New vnode stacks come into existence as a result of an operation which returns a vnode.  The bypass routine stacks a null-node above the new vnode before returning it to the caller.

     For example, imagine mounting a null layer with

           mount_nullfs /usr/include /dev/layer/null

     Changing directory to /dev/layer/null will assign the root null-node (which was created when the null layer was mounted).  Now consider opening sys.  A vop_lookup would be done on the root null-node.  This operation would bypass through to the lower layer which would return a vnode representing the UFS sys.  Null_bypass then builds a null-node aliasing the UFS sys and returns this to the caller.  Later operations on the null-node sys will repeat this process when constructing other vnode stacks.

CREATING OTHER FILE SYSTEM LAYERS
     One of the easiest ways to construct new file system layers is to make a copy of the null layer, rename all files and variables, and then begin modifying the copy.  The sed(1) utility can be used to easily rename all variables.  The umap layer is an example of a layer descended from the null layer.

INVOKING OPERATIONS ON LOWER LAYERS
     There are two techniques to invoke operations on a lower layer when the operation cannot be completely bypassed.  Each method is appropriate in different situations.  In both cases, it is the responsibility of the aliasing layer to make the operation arguments "correct" for the lower layer by mapping a vnode argument to the lower layer.

     The first approach is to call the aliasing layer's bypass routine.  This method is most suitable when you wish to invoke the operation currently being handled on the lower layer.  It has the advantage that the bypass routine already must do argument mapping.  An example of this is null_getattrs in the null layer.

     A second approach is to directly invoke vnode operations on the lower layer with the VOP_OPERATIONNAME interface.  The advantage of this method is that it is easy to invoke arbitrary operations on the lower layer.  The disadvantage is that vnode arguments must be manually mapped.

SEE ALSO
     mount(8)

     UCLA Technical Report CSD-910056, Stackable Layers: an Architecture for File System Development.

HISTORY
     The mount_nullfs utility first appeared in 4.4BSD.

BSD                              May 1, 1995                               BSD