Top Forums > UNIX for Dummies Questions & Answers > Unix File System performance with large directories
Post 48557 by malcom, Wednesday 10th of March 2004, 07:00:28 AM
Hi Dirk,

if you want to tune your filesystem, the most important question is: "What size are the files on it?"

Based on the answer, you would adjust the block size and related parameters.

Regards
Malcom
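As a rough sketch of how you might answer that question before tuning, the snippet below surveys file sizes under a directory and reports how many fall below a 4 KiB block. The demo directory and file names here are placeholders built with mktemp; on a real system you would point `dir` at the filesystem you plan to tune.

```shell
# Hypothetical sketch: survey file sizes to inform block-size tuning.
# We build a tiny demo tree here; point `dir` at your real filesystem instead.
dir=$(mktemp -d)
head -c 100  /dev/zero > "$dir/small.dat"    # 100-byte file
head -c 8192 /dev/zero > "$dir/large.dat"    # 8 KiB file

# Count files, average size, and how many are smaller than a 4 KiB block.
summary=$(find "$dir" -type f -printf '%s\n' |
awk '{ total += $1; n++; if ($1 < 4096) small++ }
     END { printf "files=%d avg=%d under4k=%d", n, total/n, small }')
echo "$summary"

rm -rf "$dir"
```

If most files are far smaller than the block size, a smaller block size wastes less space; if files are large and read sequentially, a larger block size tends to help throughput.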
FILEFRAG(8)                  System Manager's Manual                 FILEFRAG(8)

NAME
       filefrag - report on file fragmentation

SYNOPSIS
       filefrag [ -bblocksize ] [ -BeksvxX ] [ files... ]

DESCRIPTION
       filefrag reports on how badly fragmented a particular file might be. It
       makes allowances for indirect blocks for ext2 and ext3 filesystems, but
       can be used on files for any filesystem.

       The filefrag program initially attempts to get the extent information
       using the FIEMAP ioctl, which is more efficient and faster. If FIEMAP is
       not supported, then filefrag will fall back to using FIBMAP.

OPTIONS
       -B     Force the use of the older FIBMAP ioctl instead of the FIEMAP
              ioctl for testing purposes.

       -bblocksize
              Use blocksize in bytes for output instead of the filesystem
              blocksize. For compatibility with earlier versions of filefrag,
              if blocksize is unspecified it defaults to 1024 bytes.

       -e     Print output in extent format, even for block-mapped files.

       -k     Use 1024-byte blocksize for output (identical to '-b 1024').

       -s     Sync the file before requesting the mapping.

       -v     Be verbose when checking for file fragmentation.

       -x     Display mapping of extended attributes.

       -X     Display extent block numbers in hexadecimal format.

AUTHOR
       filefrag was written by Theodore Ts'o <tytso@mit.edu>.

E2fsprogs version 1.42.9           December 2013                     FILEFRAG(8)
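A minimal sketch of invoking filefrag as described above, assuming the e2fsprogs package is installed. The test file is a throwaway created with mktemp; filefrag can fail on filesystems that support neither FIEMAP nor unprivileged FIBMAP, so the call is guarded rather than assumed to succeed.

```shell
# Hedged sketch: check the fragmentation of a file with filefrag (e2fsprogs).
f=$(mktemp)
head -c 1048576 /dev/zero > "$f"    # 1 MiB scratch file

# -v: verbose extent listing; -b 4096: report counts in 4 KiB blocks.
if command -v filefrag >/dev/null 2>&1; then
    report=$(filefrag -v -b 4096 "$f" 2>&1) ||
        report="filefrag failed (filesystem may not support FIEMAP/FIBMAP): $report"
else
    report="filefrag not installed"
fi
echo "$report"

rm -f "$f"
```

On ext4, the verbose listing shows one line per extent; a freshly written file of this size usually maps to a single extent, while many extents indicate fragmentation.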
Unix & Linux Forums Content Copyright 1993-2022. All Rights Reserved.