Full Discussion: Automount on NIS slave
Post 302283324 by otheus on Tuesday 3rd of February 2009 05:18:45 AM
Is the master NFS server the same machine as the master NIS server? If so, then the answer is: you cannot.

What you are asking is: how do I do replication with NFS? The answers are quite varied, but in general: you don't. There are, however, two general approaches to the problem:

(1) As you suggest: synchronize the data between the master and the slave. Some filesystems do this; Coda and Transarc's AFS are the ones that come to mind. There is also block-level replication; DRBD does this. You can do it with rsync, but that's slow and you may lose several minutes of updates, depending on the size of your filesystem.

(2) Use an external disk system that can connect to multiple hosts. Solaris and HP offer such products. You then create a software "fence" so that only one computer can access partition X at a time. If one computer becomes unavailable, the other can access it; the fence makes sure the first computer cannot mount the partition while the other has it locked. Examples include RedHat's GFS. Do-it-yourself methods are also available.
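
To make the rsync variant of (1) concrete, here is a minimal sketch of a pull-style mirror run on the slave. The hostname nfs-master, the /home path, the script location and the cron schedule are only illustrative assumptions; as noted above, you can lose whatever changed since the last run.

Code:
#!/bin/sh
# Mirror /home from the master onto this slave.
# "nfs-master" is a placeholder hostname -- adjust for your site.
# Requires key-based (passwordless) ssh from the slave to the master.
MASTER=nfs-master
# -a        preserve permissions, ownership, timestamps, symlinks
# --delete  remove files here that were also removed on the master
rsync -a --delete -e ssh "${MASTER}:/home/" /home/

Run it periodically, e.g. from /etc/crontab every 15 minutes (paths here are again just placeholders):

Code:
*/15 * * * * root /usr/local/sbin/sync-home.sh >> /var/log/sync-home.log 2>&1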
 

10 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

NIS map mail.aliases specified, but NIS not running

Hi all, I just took over the admin role from someone and I want to set up sendmail (just to send mail from the host). However, after I configured resolv.conf, nsswitch.conf and the hosts file, when I try to send a mail out and press Ctrl-D, it returns the following: "NIS map mail.aliases... (2 Replies)
Discussion started by: stancwong
2 Replies

2. UNIX for Advanced & Expert Users

SUSE 9 and 10 NIS clients with RedHat 8.0 NIS server not working

We have a RedHat 8.0 NIS master, with a RedHat 8.0 NIS slave. We also have a small number of SUSE 9.1 and SUSE 10 machines here for evaluation. However, no matter what I do, the SUSE machines will not talk to the NIS servers. If I broadcast for NIS servers for the specified NIS domain, it... (1 Reply)
Discussion started by: fishsponge
1 Replies

3. UNIX for Advanced & Expert Users

NIS master / slave problems

Our NIS master server went down. We have since fixed it and brought it back up. However, all of our machines still point to the slave server when checking with ypwhich. My question is: how do I point the servers back to the master? Frank (2 Replies)
Discussion started by: frankkahle
2 Replies

4. Solaris

How to configure a NIS client bound to the NIS server in another subnet?

Hi, all. I have a Solaris client here that needs to bind to a NIS server in another subnet. Following is the configuration I made on the client: 1) edit /etc/inet/hosts to add an entry for the NIS server -- nserver01 2) execute `domainname` to set the local NIS domain to the domain of the NIS server.... (1 Reply)
Discussion started by: sn_wukong
1 Replies

5. UNIX for Advanced & Expert Users

Automount on NIS slave

I am setting up the NIS slave server to automount the home directory just like its master server on SUSE Linux (SLES 10). The master is the first to mount /home on the client, and I have not been able to mount /home on the slave when the master NIS server is unavailable. How... (0 Replies)
Discussion started by: ibroxy
0 Replies

6. AIX

Slave NIS server configuration change

Hello everybody, I have a question regarding a slave NIS server on AIX. Our NIS master is Sun Solaris 9.0 and sits on a different subnet, i.e. 10.197.93.0, while our slave server runs AIX 5.3 on the 10.207.13.0 subnet. I have a query regarding its name and IP address... (0 Replies)
Discussion started by: jit15975
0 Replies

7. Solaris

How to bind from a NIS slave server

Hi All, I have a client already bound to a NIS master server. Now I want to bind this particular client to one of the NIS slaves. How do I do it? Thanks, Deepak (2 Replies)
Discussion started by: naw_deepak
2 Replies

8. Solaris

NIS slave unable to copy the data bases

Hi, I'm studying for my Solaris 10 sys-admin part 2. I'm now trying to get NIS working for the exercise and I ran into a problem. Setup: three systems -- solaris101 (client): nothing wrong here, haven't made any config changes yet; solaris102 (master server): Interfaces ... (1 Reply)
Discussion started by: jld
1 Replies

9. UNIX for Advanced & Expert Users

NIS setup: Master on AIX and slave on Solaris?

Hello - This could be a stupid question, but can we configure NIS across different flavors of UNIX, like the master on AIX and a slave on Solaris? ---------- Post updated 09-06-11 at 04:17 AM ---------- Previous update was 09-05-11 at 06:34 AM ---------- Hi - Can anyone please answer this? (1 Reply)
Discussion started by: manju--
1 Replies

10. Solaris

Testing NIS slave server

Hi guys, on my Sol-10 box I set up a NIS server following the Oracle doc, and set up a Linux box as a NIS client; all went OK. I then added another Sol-10 box, configured it as a NIS slave server following the Oracle doc again, and added that server to yp.conf on my NIS client. How do I test if my NIS slave... (3 Replies)
Discussion started by: batas
3 Replies

mount.gfs(8)                    System Manager's Manual                    mount.gfs(8)

NAME
mount.gfs - GFS mount options

SYNOPSIS
mount [StandardMountOptions] -t gfs DEVICE MOUNTPOINT -o [GFSOption1,GFSOption2,GFSOptionX...]

DESCRIPTION
GFS may be used as a local (single computer) filesystem, but its real purpose is in clusters, where multiple computers (nodes) share a common storage device. Above is the format typically used to mount a GFS filesystem, using the mount(8) command.

The device may be any block device on which you have created a GFS filesystem. Examples include a single disk partition (e.g. /dev/sdb3), a loopback device, a device exported from another node (e.g. an iSCSI device or a gnbd(8) device), or a logical volume (typically comprised of a number of individual disks).

device does not necessarily need to match the device name as seen on another node in the cluster, nor does it need to be a logical volume. However, the use of a cluster-aware volume manager such as CLVM2 (see lvm(8)) will guarantee that the managed devices are named identically on each node in a cluster (for much easier management), and will allow you to configure a very large volume from multiple storage units (e.g. disk drives).

device must make the entire filesystem storage area visible to the computer. That is, you cannot mount different parts of a single filesystem on different computers. Each computer must see an entire filesystem. You may, however, mount several GFS filesystems if you want to distribute your data storage in a controllable way.

mountpoint is the same as dir in the mount(8) man page.

This man page describes GFS-specific options that can be passed to the GFS file system at mount time, using the -o flag. There are many other -o options handled by the generic mount command mount(8). However, the options described below are specifically for GFS, and are not interpreted by the mount command nor by the kernel's Virtual File System. GFS and non-GFS options may be intermingled after the -o, separated by commas (but no spaces).

As an alternative to mount command line options, you may send mount options to gfs using "gfs_tool margs" (after loading the gfs kernel module, but before mounting GFS). For example, you may need to do this when working from an initial ramdisk initrd(4). The options are restricted to the ones described on this man page (no general mount(8) options will be recognized), must not be preceded by -o, and must be separated by commas (no spaces).

Example:

# gfs_tool margs "lockproto=lock_nolock,ignore_local_fs"

Options loaded via "gfs_tool margs" have a lifetime of only one GFS mount. If you wish to mount another GFS filesystem, you must set another group of options with "gfs_tool margs".

If you have trouble mounting GFS, check the syslog (e.g. /var/log/messages) for specific error messages.

OPTIONS
lockproto=LockModuleName
This specifies which inter-node lock protocol is used by the GFS filesystem for this mount, overriding the default lock protocol name stored in the filesystem's on-disk superblock. The LockModuleName must be an exact match of the protocol name presented by the lock module when it registers with the lock harness. Traditionally, this matches the .o filename of the lock module, e.g. lock_dlm or lock_nolock. The default lock protocol name is written to disk initially when creating the filesystem with mkfs.gfs(8), -p option. It can be changed on-disk by using the gfs_tool(8) utility's sb proto command. The lockproto mount option should be used only under special circumstances in which you want to temporarily use a different lock protocol without changing the on-disk default.

locktable=LockTableName
This specifies the identity of the cluster and of the filesystem for this mount, overriding the default cluster/filesystem identity stored in the filesystem's on-disk superblock. The cluster/filesystem name is recognized globally throughout the cluster, and establishes a unique namespace for the inter-node locking system, enabling the mounting of multiple GFS filesystems. The format of LockTableName is lock-module-specific. For lock_dlm, the format is clustername:fsname. For lock_nolock, the field is ignored. The default cluster/filesystem name is written to disk initially when creating the filesystem with mkfs.gfs(8), -t option. It can be changed on-disk by using the gfs_tool(8) utility's sb table command. The locktable mount option should be used only under special circumstances in which you want to mount the filesystem in a different cluster, or mount it as a different filesystem name, without changing the on-disk default.

localcaching
This flag tells GFS that it is running as a local (not clustered) filesystem, so it can turn on some block caching optimizations that can't be used when running in cluster mode. This is turned on automatically by the lock_nolock module, but can be overridden by using the ignore_local_fs option.

localflocks
This flag tells GFS that it is running as a local (not clustered) filesystem, so it can allow the kernel VFS layer to do all flock and fcntl file locking. When running in cluster mode, these file locks require inter-node locks, and require the support of GFS. When running locally, better performance is achieved by letting VFS handle the whole job. This is turned on automatically by the lock_nolock module, but can be overridden by using the ignore_local_fs option.

oopses_ok
Normally, GFS automatically turns on the "kernel.panic_on_oops" sysctl to cause the machine to panic if an oops (an in-kernel segfault or GFS assertion failure) happens. An oops on one machine of a cluster filesystem can cause the filesystem to stall on all machines in the cluster. (Panics don't have this "feature".) By turning on "panic_on_oops", GFS tries to make sure the cluster remains in operation even if one machine has a problem. There are cases, however, where this behavior is not desirable -- debugging being the main one. The oopses_ok option causes GFS to leave the "panic_on_oops" variable alone so oopses can happen. Use this option with care. This is turned on automatically by the lock_nolock module, but can be overridden by using the ignore_local_fs option.

ignore_local_fs
By default, using the nolock lock module automatically turns on the localcaching and localflocks optimizations. ignore_local_fs forces GFS to treat the filesystem as if it were a multihost (clustered) filesystem, with the localcaching and localflocks optimizations turned off.

upgrade
This flag tells GFS to upgrade the filesystem's on-disk format to the version supported by the current GFS software installation on this computer. If you try to mount an old-version disk image, GFS will notify you via a syslog message that you need to upgrade. Try mounting again, using the -o upgrade option. When upgrading, only one node may mount the GFS filesystem.

num_glockd
Tunes GFS to alleviate memory pressure when rapidly acquiring many locks (e.g. several processes scanning through huge directory trees). GFS' glockd kernel daemon cleans up memory for no-longer-needed glocks. Multiple instances of the daemon clean up faster than a single instance. The default value is one daemon, with a maximum of 32. Since this option was introduced, other methods of rapid cleanup have been developed within GFS, so this option may go away in the future.

acl
Enables POSIX Access Control List acl(5) support within GFS.

spectator
Mount this filesystem using a special form of read-only mount. The mount does not use one of the filesystem's journals.

suiddir
Sets the owner of any newly created file or directory to be that of the parent directory, if the parent directory has the S_ISUID permission attribute bit set. Sets S_ISUID in any new directory, if its parent directory's S_ISUID is set. Strips all execution bits on a new file, if the parent directory owner is different from the owner of the process creating the file. Set this option only if you know why you are setting it.

LINKS
http://sources.redhat.com/cluster -- home site of GFS
http://www.suse.de/~agruen/acl/linux-acls/ -- good writeup on ACL support in Linux

SEE ALSO
gfs(8), mount(8) for general mount options, chmod(1) and chmod(2) for access permission flags, acl(5) for access control lists, lvm(8) for volume management, ccs(7) for cluster management, umount(8), initrd(4).
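
For reference, a typical clustered GFS mount using the options described in the man page above might look like the following; the device path, mount point, cluster name "mycluster" and filesystem name "gfs1" are made-up examples, not values taken from this discussion.

Example:

# mount -t gfs /dev/vg01/gfs1 /mnt/gfs1 -o lockproto=lock_dlm,locktable=mycluster:gfs1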