Clustered filesystem: which one to pick?
Posted by jokken on 09-06-2017 at 03:11 PM

Hi all,

I'm fairly new to advanced filesystem types. I've only just learned that if you want to share a single fibre channel extent among many servers, you need a clustered filesystem to prevent data corruption.

Looking through a list of clustered filesystems, I saw GFS2, which I thought might be a good one to use. But is it the best choice for what I need to do?

I have a large 7 TB fibre channel extent that is accessible to 14+ servers on the fibre channel network. I'd like each server to be able to use this storage space for the vHDs of its running VMs. I don't want to split the 7 TB into 500 GB vdisks just so each server can have a slice.

So I understand I need a special filesystem to do this. What would you recommend?

In case it's an important detail: these 14 servers are OpenStack Newton Nova/compute nodes running Ubuntu 16.04.3 LTS.

My guess is that I would have to format the extent as GFS2 from one of the 14 servers and then GFS2-mount it from all 14, something like the sketch below.
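
Something like this, maybe? The commands below are just my sketch from the gfs2-utils man pages; the cluster name and filesystem name (mycluster:vmstore), the multipath device path, and the mountpoint are placeholders for my setup, not known-good values.

    # On ONE node only: create the filesystem
    #   -p lock_dlm            cluster-wide locking via DLM
    #   -t mycluster:vmstore   lock table; "mycluster" must match the real cluster name
    #   -j 14                  one journal per node that will mount it
    mkfs.gfs2 -p lock_dlm -t mycluster:vmstore -j 14 /dev/mapper/fc_lun

    # On EVERY node (cluster stack with DLM already running):
    mount -t gfs2 /dev/mapper/fc_lun /var/lib/nova/instances

I picked /var/lib/nova/instances because that's where Nova keeps instance disks by default. Does that look about right?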

Please let me know what you think of GFS2, or comment on what I'm doing.
I'll gladly supply more info on my setup if you need it!

thx!

mount.gfs2(8) - System Manager's Manual

NAME
       mount.gfs2 - GFS2 mount options

SYNOPSIS
       mount -a [-fnrsvw] -t gfs2 [-O options]
       mount [-fnrsvw] -t gfs2 [-o options] device dir

DESCRIPTION
       For details on the common mount options, please see the mount(8) man page. The device may be any block device on which you have created a GFS2 filesystem. Examples include a single disk partition (e.g. /dev/sdb3), a loopback device, a device exported from another node (e.g. an iSCSI device), or a logical volume (typically composed of a number of individual disks).

       device does not necessarily need to match the device name as seen on another node in the cluster, nor does it need to be a logical volume. However, using a cluster-aware volume manager such as CLVM2 (see lvm(8)) guarantees that the managed devices are named identically on each node in the cluster (for much easier management) and allows you to configure a very large volume from multiple storage units (e.g. disk drives).

       device must make the entire filesystem storage area visible to the computer. That is, you cannot mount different parts of a single filesystem on different computers; each computer must see the entire filesystem. You may, however, mount several GFS2 filesystems if you want to distribute your data storage in a controllable way.

       This man page describes the GFS2-specific options that can be passed to the GFS2 filesystem at mount time using the -o flag. There are many other -o options handled by the generic mount(8) command. The options described below are specific to GFS2 and are interpreted neither by the mount command nor by the kernel's Virtual File System. GFS2 and non-GFS2 options may be intermingled after the -o, separated by commas (but no spaces).

       The options commit, discard, errors, quota_quantum, statfs_quantum, statfs_percent, barrier, acl, quota, suiddir, and data can be changed after mount using the "mount -o remount,option /mountpoint" command. The options quota, discard, barrier, acl, and suiddir support the "no" prefix; for example, "noacl" turns off what "acl" turns on.

       If you have trouble mounting GFS2, check the syslog (e.g. /var/log/messages) for specific error messages.
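
       For illustration, a remountable option and its "no" prefix can be toggled on a live filesystem as shown below (the mountpoint /mnt/gfs2 is a placeholder, not a default):

           # Turn ACL support on without unmounting...
           mount -o remount,acl /mnt/gfs2
           # ...and back off again
           mount -o remount,noacl /mnt/gfs2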
OPTIONS
       lockproto=LockProtoName
              Specifies which inter-node lock protocol the GFS2 filesystem uses for this mount, overriding the default lock protocol name stored in the filesystem's on-disk superblock. LockProtoName must be one of the supported locking protocols; currently these are lock_nolock and lock_dlm. The default lock protocol name is written to disk when the filesystem is created with mkfs.gfs2(8) (the -p option) and can be changed on-disk with the gfs2_tool(8) "sb proto" command. The lockproto mount option should be used only under special circumstances in which you want to temporarily use a different lock protocol without changing the on-disk default. Using the incorrect lock protocol on a cluster filesystem mounted from more than one node will almost certainly result in filesystem corruption.

       locktable=LockTableName
              Specifies the identity of the cluster and of the filesystem for this mount, overriding the default cluster/filesystem identity stored in the filesystem's on-disk superblock. The cluster/filesystem name is recognized globally throughout the cluster and establishes a unique namespace for the inter-node locking system, enabling the mounting of multiple GFS2 filesystems. The format of LockTableName is lock-module-specific: for lock_dlm it is clustername:fsname; for lock_nolock the field is ignored. The default cluster/filesystem name is written to disk when the filesystem is created with mkfs.gfs2(8) (the -t option) and can be changed on-disk with the gfs2_tool(8) "sb table" command. The locktable mount option should be used only under special circumstances in which you want to mount the filesystem in a different cluster, or mount it under a different filesystem name, without changing the on-disk default (see the examples after this section).

       localcaching
              Tells GFS2 that it is running as a local (not clustered) filesystem, so it can turn on some block-caching optimizations that can't be used in cluster mode. This is turned on automatically by the lock_nolock module, but can be overridden with the ignore_local_fs option.

       localflocks
              Tells GFS2 that it is running as a local (not clustered) filesystem, so it can let the kernel VFS layer handle all flock and fcntl file locking. In cluster mode these file locks require inter-node locks and the support of GFS2; when running locally, better performance is achieved by letting the VFS do the whole job. This is turned on automatically by the lock_nolock module, but can be overridden with the ignore_local_fs option.

       errors=[panic|withdraw]
              Setting errors=panic causes GFS2 to oops when encountering an error that would otherwise cause the mount to withdraw or print an assertion warning. The default setting is errors=withdraw. This option should not be used in a production system. It replaces the earlier debug option on kernel versions 2.6.31 and above.

       ignore_local_fs
              By default, using the nolock lock module automatically turns on the localcaching and localflocks optimizations. ignore_local_fs forces GFS2 to treat the filesystem as a multihost (clustered) filesystem, with the localcaching and localflocks optimizations turned off.

       upgrade
              Tells GFS2 to upgrade the filesystem's on-disk format to the version supported by the current GFS2 software installed on this computer. If you try to mount an old-version disk image, GFS2 notifies you via a syslog message that you need to upgrade; try mounting again using the -o upgrade option. When upgrading, only one node may mount the GFS2 filesystem.
       acl    Enables POSIX Access Control List acl(5) support within GFS2.

       spectator
              Mounts this filesystem using a special form of read-only mount. The mount does not use one of the filesystem's journals, and the node is unable to recover journals for other nodes.

       suiddir
              Sets the owner of any newly created file or directory to that of the parent directory, if the parent directory has the S_ISUID permission bit set. Sets S_ISUID on any new directory if its parent directory's S_ISUID is set. Strips all execution bits from a new file if the owner of the parent directory differs from the owner of the process creating the file. Set this option only if you know why you are setting it.

       quota=[off/account/on]
              Turns quotas on or off for the filesystem. Setting quotas to the "account" state causes the per-UID/GID usage statistics to be correctly maintained by the filesystem; limit and warn values are ignored. The default value is "off".

       discard
              Causes GFS2 to generate "discard" I/O requests for blocks which have been freed. These can be used by suitable hardware to implement thin provisioning and similar schemes. This feature is supported in kernel versions 2.6.30 and above.

       barrier
              This option, which defaults to on, causes GFS2 to send I/O barriers when flushing the journal. The option is automatically turned off if the underlying device does not support I/O barriers. The use of I/O barriers with GFS2 is highly recommended at all times, unless the block device is designed so that it cannot lose its write cache content (e.g. it is on a UPS, or it has no write cache).

       commit=secs
              Similar to the ext3 commit= option: sets the maximum number of seconds between journal commits if there is dirty data in the journal. The default is 60 seconds. This option is only provided in kernel versions 2.6.31 and above.

       data=[ordered|writeback]
              When data=ordered is set, the user data modified by a transaction is flushed to disk before the transaction is committed to disk. This should prevent the user from seeing uninitialized blocks in a file after a crash. data=writeback mode writes user data to disk at any time after it is dirtied; this does not provide the same consistency guarantee as ordered mode, but it should be slightly faster for some workloads. The default is ordered mode.

       meta   Selects the meta filesystem root rather than the normal filesystem root. This option is normally only used by the GFS2 utility functions. Altering any file on the GFS2 meta filesystem may render the filesystem unusable, so only experts in the GFS2 on-disk layout should use it.

       quota_quantum=secs
              Sets the number of seconds for which a change in the quota information may sit on one node before being written to the quota file. This is the preferred way to set this parameter. The value is an integer number of seconds greater than zero; the default is 60 seconds. Shorter settings result in faster updates of the lazy quota information and less likelihood of someone exceeding their quota; longer settings make filesystem operations involving quotas faster and more efficient.

       statfs_quantum=secs
              Setting statfs_quantum to 0 is the preferred way to select the slow (exact) version of statfs. The default value is 30 seconds, which sets the maximum time period before statfs changes are synced to the master statfs file. This can be adjusted to allow for faster, less accurate statfs values or slower, more accurate values. When set to 0, statfs always reports the true values.
       statfs_percent=value
              Provides a bound on the maximum percentage change in the statfs information on a local basis before it is synced back to the master statfs file, even if the time period has not expired. If statfs_quantum is set to 0, this setting is ignored.
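
       The following invocations illustrate the options above. They are examples only: the device /dev/sdb3, the mountpoint /mnt/gfs2, and the lock table mycluster:vmstore are placeholders, not defaults.

           # Override the on-disk lock table, e.g. to mount in a different cluster
           mount -t gfs2 -o lockproto=lock_dlm,locktable=mycluster:vmstore /dev/sdb3 /mnt/gfs2

           # Single-node maintenance mount with no cluster locking (one node only!)
           mount -t gfs2 -o lockproto=lock_nolock /dev/sdb3 /mnt/gfs2

           # Initial mount with ACLs, quota accounting and a 30-second commit interval
           mount -t gfs2 -o acl,quota=account,commit=30 /dev/sdb3 /mnt/gfs2

           # Relax statfs accuracy on a live mount: sync at most every 60 seconds,
           # or sooner if local changes drift by more than 10 percent
           mount -o remount,statfs_quantum=60,statfs_percent=10 /mnt/gfs2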
BUGS
       GFS2 does not support errors=remount-ro or data=journal. It is not possible to switch support for user and group quotas on and off independently of each other. Some of the error messages are rather cryptic; if you encounter one of them, check first that gfs_controld is running and second that you have enough journals on the filesystem for the number of nodes in use.
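
       Two quick checks along those lines, sketched with a placeholder mountpoint (daemon names vary between cluster stack generations):

           # Is the GFS2 control daemon running?
           pgrep -l gfs_controld

           # If nodes outnumber journals, add journals to the mounted filesystem
           gfs2_jadd -j 2 /mnt/gfs2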
SEE ALSO
       gfs2(8), mount(8) for general mount options, chmod(1) and chmod(2) for access permission flags, acl(5) for access control lists, lvm(8) for volume management, ccs(7) for cluster management, umount(8), initrd(4).