GFS file system performance is very slow
Post 302616267 by mark54g, Friday 30th of March 2012, 07:34:35 PM
GFSv1 is VERY slow, mostly because of lock contention on inodes. If you use a single top-level directory as your shared storage, any operation inside it can take a lock on that directory's inode, which causes VERY poor performance under concurrent access from multiple nodes.
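
One common workaround is to fan files out across many subdirectories so that concurrent writers contend on different directory inodes instead of a single top-level one. Here is a minimal sketch of the idea; the mount point, bucket count, and helper name are hypothetical, not something from the original post:

Code:
#!/bin/sh
# Spread files across BUCKETS subdirectories of the shared mount so that
# concurrent nodes rarely touch the same directory inode at once.
# SHARE and BUCKETS are assumptions; adjust for your environment.

SHARE=/mnt/gfs/data     # assumed GFS mount point
BUCKETS=16              # number of subdirectories to spread writes across

# Create the bucket directories once.
i=0
while [ "$i" -lt "$BUCKETS" ]; do
    mkdir -p "$SHARE/bucket$i"
    i=$((i + 1))
done

# Pick a bucket from a numeric hash of the file name (cksum is portable),
# so writers with different file names usually land in different directories.
place() {
    f=$1
    n=$(printf '%s' "$f" | cksum | awk '{print $1}')
    echo "$SHARE/bucket$((n % BUCKETS))/$f"
}

dest=$(place "report-$(date +%s).log")
echo "would write to: $dest"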

On top of that, shared filesystems carry inherent overhead that typically costs you a 12-20% performance penalty before any workload-specific problem comes into play.
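
If you want to see what that baseline overhead looks like on your own boxes, a rough check is to write the same amount of data to the shared mount and to a local filesystem and compare the throughput dd reports. The mount points below are assumptions; adjust them for your setup:

Code:
# conv=fdatasync forces the data to disk so the numbers reflect real
# write throughput rather than the page cache.
dd if=/dev/zero of=/mnt/gfs/ddtest bs=1M count=1024 conv=fdatasync
dd if=/dev/zero of=/tmp/ddtest     bs=1M count=1024 conv=fdatasync
rm -f /mnt/gfs/ddtest /tmp/ddtest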
 
