Advice regarding filesystems handling large number of files
Post 302556846 by shoaibjameel123 in Operating Systems / Linux / Red Hat, Monday 19th of September 2011, 09:35:46 PM

Hi All,

I have a CentOS system, and I work with a very large number of files, some of which are also very large in size. A single directory can contain one to two million files, and individual files can run to several gigabytes (10 GB, 15 GB, and so on).

The disk drive uses the ext3 filesystem. Recently, that drive crashed, and I am not sure why. Before the crash, the disk took a long time to respond whenever I ran my programs; even find and du -h took a long time to produce results.

Could anyone kindly advise me on why the disk drive might have crashed?

Is it because:

1. The ext3 filesystem is not suited to such a large number of files?

2. There are too many files in one directory, and I should have organized them into several subdirectories to reduce the load on the disk? (One way to do this is sketched below.)

3. I've read that the XFS filesystem copes well with large numbers of files. Should I use XFS on my new hard drive?

4. The hard disk is around 3 years old; could its age be a factor?

The machine itself is well specified, with 50 GB of RAM and a 16-core processor.
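For question 2 above, here is a minimal sketch of one way to spread a flat directory across many subdirectories. The two-level layout, the md5sum bucketing, and both directory paths are illustrative assumptions, not a recommendation for this exact system:

#!/bin/bash
# Spread the files in one flat directory across 256 subdirectories,
# bucketing each file by the first two hex digits of an MD5 hash of
# its name so the buckets stay roughly even in size.
src=/data/flat        # hypothetical source directory
dst=/data/bucketed    # hypothetical destination root

find "$src" -maxdepth 1 -type f -print0 |
while IFS= read -r -d '' f; do
    bucket=$(printf '%s' "${f##*/}" | md5sum | cut -c1-2)
    mkdir -p "$dst/$bucket"
    mv -- "$f" "$dst/$bucket/"
done

Independently of the layout, df -i is worth checking on ext3: with millions of files a filesystem can run out of inodes long before it runs out of space.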
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

moving large number of files

I have a task to move more than 35000 files every two hours, from the same directory to another directory, based on a file that has the list of filenames. I tried the following logics:
(1) find . -name \*.dat > list
    for i in `cat list`
    do
        mv $i test/
    done
(2) cat list|xargs -i mv "{}"... (7 Replies)
Discussion started by: bryan
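A hedged sketch of a faster variant of the same move (assumes GNU find and coreutils; test/ is the destination from the excerpt):

# One pass, no intermediate list file: -print0/-0 keeps awkward
# filenames intact, and xargs batches many files into each mv call.
find . -maxdepth 1 -type f -name '*.dat' -print0 |
    xargs -0 mv -t test/

mv -t names the target directory first, which is what lets xargs append many source files to a single invocation.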

2. Shell Programming and Scripting

Error Handling -pls advice

Dear friends, I am using the below command in my unix script:
-----------------------------------------------
File_Name=`ls $CTRY*$DATE_SUFFIX*zip`   --> Command-1
.....
if then
    unzip -a $File_Name -d $CTRY_DIR/$CTRY
else
    echo "File for $CTRY dated $DATE_SUFFIX does not exist... (2 Replies)
Discussion started by: sureshg_sampat
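A hedged sketch of the existence check this excerpt seems to be reaching for (variable names are taken from the excerpt; the missing test condition is an assumption):

# Capture the matching zip name, then branch on whether anything matched.
File_Name=$(ls "$CTRY"*"$DATE_SUFFIX"*zip 2>/dev/null)
if [ -n "$File_Name" ]; then
    unzip -a "$File_Name" -d "$CTRY_DIR/$CTRY"
else
    echo "File for $CTRY dated $DATE_SUFFIX does not exist"
fi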

3. UNIX for Dummies Questions & Answers

Question regarding tar of a large number of files

I want to tar a large number of files, about 150k. I am using the find command as below to create a file with all the file names, and then trying to use the tar -I command as below.
# find . -type f -name "gpi*" > include-file
# tar -I include-file -cvf newfile.tar
This I got from one of the posts... (2 Replies)
Discussion started by: crux123
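With GNU tar the include list is normally passed with -T (--files-from) rather than -I, which GNU tar reserves for naming a compression program; a hedged sketch:

# Build the name list once, then let tar read it with -T.
find . -type f -name 'gpi*' > include-file
tar -cvf newfile.tar -T include-file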

4. Solaris

How to safely copy full filesystems with large files (10Gb files)

Hello everyone. I need some help copying a filesystem. The situation is this: I have an Oracle DB mounted on /u01 and need to copy it to /u02. /u01 is 500 GB and /u02 is 300 GB; the space actually used on /u01 is 187 GB. This is running on Solaris 9 and both filesystems are UFS. I have tried to do it using:... (14 Replies)
Discussion started by: dragonov7
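One traditional route on Solaris UFS is a dump/restore pipe; a hedged sketch, assuming /u02 is mounted, writable, and large enough for the 187 GB actually in use, and that the database is shut down for the duration of the copy:

# Level-0 dump of /u01 streamed straight into a restore under /u02,
# preserving ownership, permissions, and sparse files.
ufsdump 0f - /u01 | (cd /u02 && ufsrestore rf -)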

5. Shell Programming and Scripting

Concatenation of a large number of files

Hello, I have a large number of files that I want to concatenate into one. These files start with the word 'VOICE_', for example:
VOICE_0000000000
VOICE_1223o23u0
VOICE_934934927349
I use the following code:
cat /ODS/prepaid/CDR_FLOW/MEDIATION/VOICE_* >> /ODS/prepaid/CDR_FLOW/WORK/VOICE ... (10 Replies)
Discussion started by: chriss_58
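If the VOICE_* glob ever grows past the shell's argument-length limit, a hedged alternative is to stream the names through xargs (note that find's ordering is arbitrary, unlike the shell's sorted glob):

# xargs batches the names, so cat runs as many times as needed
# instead of once with an over-long argument list; the single >>
# after xargs collects all of the output.
find /ODS/prepaid/CDR_FLOW/MEDIATION -maxdepth 1 -name 'VOICE_*' -print0 |
    xargs -0 cat >> /ODS/prepaid/CDR_FLOW/WORK/VOICE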

6. UNIX for Dummies Questions & Answers

Delete large number of files

Hi. I need to delete a large number of files listed in a txt file. There are over 90000 files in the list. Some of the directory names and some of the file names do have spaces in them. In the file, each line is a full path to a file:
/path/to/the files/file1
/path/to/some other/files/file 2... (4 Replies)
Discussion started by: inakajin
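A hedged sketch that survives the embedded spaces (assumes one full path per line, as described; the list file name is a placeholder):

# IFS= and -r make read take each line verbatim; the quotes and the
# -- guard keep spaces and leading dashes from being misparsed.
while IFS= read -r path; do
    rm -- "$path"
done < filelist.txt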

7. SCO

Need advice: Copying large CSV report files off SCO system

I have a SCO Unix server from 1999 running SCO 5.0.5 and some ancient accounting software called Real World. A report writer program on the system is used to generate CSV files from accounting, which we write with DOSCOPY commands to 3.5" floppies. In the next 60 days we will be decommissioning... (11 Replies)
Discussion started by: magnetman
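One hedged option while the floppies remain the only medium is to compress each report before writing it, so more data fits per diskette. The file names here are placeholders, and this assumes the stock compress and doscp utilities on SCO 5.0.5:

# Shrink the CSV, then copy it to a DOS-formatted floppy; see
# doscp(C) on the box for the exact drive specification syntax.
compress -c report.csv > report.csv.Z
doscp report.csv.Z a:report.csv.Z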

8. UNIX for Dummies Questions & Answers

Rename a large number of files in subdirectories

Hi, I have a large number of subdirectories (>200), and in each of these directories there is a file with a name like "opp1234.dat". I'd like to know how I could change the names of these files to say "out.dat" in all these subdirectories in one go. Thanks! (5 Replies)
Discussion started by: lost.identity
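A hedged one-liner for that rename (assumes GNU find, whose -execdir runs the command inside each matching file's own directory):

# Every opp*.dat found is renamed to out.dat within its directory.
find . -type f -name 'opp*.dat' -execdir mv {} out.dat \;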

9. Shell Programming and Scripting

Sftp large number of files

I want to sftp a large number of files ... approx 150 files will come to the server every minute (AIX box). I also need to make sure each file has been sftped successfully... Please let me know:
1. What is the best / fastest way to transfer the files?
2. Should I use the batch option -b so that connectivity will be... (3 Replies)
Discussion started by: vegasluxor
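A hedged sketch of the -b idea (host, user, and paths are placeholders): in batch mode sftp aborts on the first failing command and exits nonzero, which doubles as a crude per-session success check.

# Build a batch file of put commands, then run one sftp session
# for the whole minute's worth of files.
printf 'put %s\n' /outgoing/*.dat > /tmp/sftp.batch
sftp -b /tmp/sftp.batch user@remotehost && echo "batch transferred OK"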

10. UNIX for Beginners Questions & Answers

Advice on how to set up error handling

Hi Folks - I want to add error handling to a portion of a *.ksh, but I'm having difficulty doing so in an easily digestible way. Essentially, I want to echo whether it was successful or unsuccessful after each command. Here is the code I need to add error handling to: perl... (2 Replies)
Discussion started by: SIMMS7400
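A hedged ksh-compatible sketch of that pattern (run_step is an invented helper; the perl invocation stands in for the truncated commands):

# Run a command, then report success or failure from its exit status.
run_step() {
    "$@"
    if [ $? -eq 0 ]; then
        echo "SUCCESS: $*"
    else
        echo "FAILURE: $*"
    fi
}

run_step perl /path/to/script.pl    # placeholder command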
ALLOC_HUGEPAGES(2)                Linux Programmer's Manual                ALLOC_HUGEPAGES(2)

NAME
       alloc_hugepages, free_hugepages - allocate or free huge pages

SYNOPSIS
       void *alloc_hugepages(int key, void *addr, size_t len, int prot, int flag);

       int free_hugepages(void *addr);

DESCRIPTION
       The system calls alloc_hugepages() and free_hugepages() were introduced in Linux
       2.5.36 and removed again in 2.5.54. They existed only on i386 and ia64 (when built
       with CONFIG_HUGETLB_PAGE). In Linux 2.4.20 the syscall numbers exist, but the calls
       fail with the error ENOSYS.

       On i386 the memory management hardware knows about ordinary pages (4 KiB) and huge
       pages (2 or 4 MiB). Similarly ia64 knows about huge pages of several sizes. These
       system calls serve to map huge pages into the process's memory or to free them
       again. Huge pages are locked into memory, and are not swapped.

       The key argument is an identifier. When zero the pages are private, and not
       inherited by children. When positive the pages are shared with other applications
       using the same key, and inherited by child processes.

       The addr argument of free_hugepages() tells which page is being freed: it was the
       return value of a call to alloc_hugepages(). (The memory is first actually freed
       when all users have released it.) The addr argument of alloc_hugepages() is a hint,
       that the kernel may or may not follow. Addresses must be properly aligned.

       The len argument is the length of the required segment. It must be a multiple of
       the huge page size.

       The prot argument specifies the memory protection of the segment. It is one of
       PROT_READ, PROT_WRITE, PROT_EXEC.

       The flag argument is ignored, unless key is positive. In that case, if flag is
       IPC_CREAT, then a new huge page segment is created when none with the given key
       existed. If this flag is not set, then ENOENT is returned when no segment with the
       given key exists.

RETURN VALUE
       On success, alloc_hugepages() returns the allocated virtual address, and
       free_hugepages() returns zero. On error, -1 is returned, and errno is set
       appropriately.

ERRORS
       ENOSYS The system call is not supported on this kernel.

FILES
       /proc/sys/vm/nr_hugepages
              Number of configured hugetlb pages. This can be read and written.

       /proc/meminfo
              Gives info on the number of configured hugetlb pages and on their size in
              the three variables HugePages_Total, HugePages_Free, Hugepagesize.

CONFORMING TO
       These calls are specific to Linux on Intel processors, and should not be used in
       programs intended to be portable.

NOTES
       These system calls are gone; they existed only in Linux 2.5.36 through to 2.5.54.
       Now the hugetlbfs file system can be used instead. Memory backed by huge pages (if
       the CPU supports them) is obtained by using mmap(2) to map files in this virtual
       file system. The maximal number of huge pages can be specified using the hugepages=
       boot parameter.

COLOPHON
       This page is part of release 3.44 of the Linux man-pages project. A description of
       the project, and information about reporting bugs, can be found at
       http://www.kernel.org/doc/man-pages/.

Linux                                  2007-05-31                        ALLOC_HUGEPAGES(2)
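Following the NOTES section above, a hedged shell sketch of the hugetlbfs route that replaced these syscalls (the mount point and page count are arbitrary; requires root and a kernel/CPU with huge page support):

# Reserve 16 huge pages, confirm the pool in /proc/meminfo, then
# mount hugetlbfs; files mmap(2)-ed from it are backed by huge pages.
echo 16 > /proc/sys/vm/nr_hugepages
grep -i huge /proc/meminfo
mkdir -p /mnt/huge
mount -t hugetlbfs none /mnt/huge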