04-08-2011
Quote:
Originally Posted by
Mr.AIX
The command below worked, but when it finished after 3 hours the data was incomplete:
the data fills 80% of the source mount point, but after the copy finished the target mount point shows only 30%.
This sounds quite suspicious. Could you please post the AIX version you are using (and, if applicable, whether it is a 32-bit or 64-bit installation)? This sounds like an older version that cannot deal with files larger than 8 GB (the ustar format limit), or perhaps one hitting a 2 GB file-size limit.
While you are at it: the output of "ulimit -a" would also be interesting.
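A quick way to test both theories is to look for files above 2 GiB on the source and check the per-process file-size limit. The sketch below uses GNU tools (truncate, find) and an illustrative demo directory, so adapt it to what your AIX box provides:

```shell
#!/bin/sh
# Hedged sketch: hunt for files above the 2 GiB mark that older tar
# versions (or a low "ulimit -f") would silently truncate. The demo
# directory and sparse files are stand-ins for the real source mount.
demo=$(mktemp -d)
truncate -s 3G "$demo/big.dat"    # sparse: 3 GiB apparent size, ~0 on disk
truncate -s 1M "$demo/small.dat"
# -size +2G matches files whose length exceeds 2 GiB
find "$demo" -type f -size +2G
# Per-process file-size limit; "unlimited" rules out a ulimit cause
ulimit -f
rm -rf "$demo"
```

If any file on the real source shows up here and the copy of it on the target is exactly 2 GiB or 8 GiB, the size limit is almost certainly the culprit.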
bakunin
10 More Discussions You Might Find Interesting
1. UNIX for Dummies Questions & Answers
Hi,
As per my requirement, I need to take the difference between two big files (around 6.5 GB) and write the difference to an output file without any line numbers or '<' or '>' in front of each new line.
As the diff command won't work for big files, I tried to use bdiff instead.
I am getting incorrect... (13 Replies)
Discussion started by: pyaranoid
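For files that size, one memory-friendly approach is comm: it streams two sorted files line by line instead of holding them in RAM like diff. A minimal sketch with small stand-in files (the real ones would take a while to sort, but sort spills to temp files and scales):

```shell
#!/bin/sh
cd "$(mktemp -d)"
# Demo data standing in for the two ~6.5 GB files
printf 'apple\nbanana\ncherry\n' > old.txt
printf 'banana\ncherry\ndate\n'  > new.txt
# comm requires sorted input; sort uses an external merge sort
sort old.txt -o old.sorted
sort new.txt -o new.sorted
# -13: suppress lines unique to old.txt and lines common to both,
# leaving only additions -- no "<", ">" or line-number decoration
comm -13 old.sorted new.sorted
```

Note this compares line sets, not line positions, so it only fits when the ordering of differences does not matter.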
2. UNIX for Advanced & Expert Users
Hi, I need a fast way to delete duplicate entries from very huge files (>2 GB); these files are in plain text.
I tried all the usual methods (awk / sort / uniq / sed / grep ...) but it always ended with the same result (memory core dump).
I'm using HP-UX large servers.
Any advice will... (8 Replies)
Discussion started by: Klashxx
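When the in-memory tools dump core, sort's external merge sort is usually the way out: it deduplicates files far larger than RAM by spilling sorted runs to disk. A sketch (the -S memory cap and -T temp directory are illustrative values; -S is a GNU extension, so check your sort's manual):

```shell
#!/bin/sh
cd "$(mktemp -d)"
printf 'b\na\nb\nc\na\n' > input.txt
# -u drops duplicates during the merge; -S caps the RAM buffer and
# -T points the temp spill files at a filesystem with room to spare
sort -S 64M -T /tmp -u input.txt > deduped.txt
cat deduped.txt
```

The trade-off is that the output comes back sorted; if the original line order must survive, this approach does not fit.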
3. UNIX for Dummies Questions & Answers
Dear All,
I am working on a Windows OS but connect remotely to a Linux machine. I wonder how to copy and paste part of a huge file on the Linux machine.
The content of the file looks as follows:
...
dump annealling all custom 10 anneal_*.dat id type x y z q
timestep 0.02
run 200000
Memory... (2 Replies)
Discussion started by: ariesto
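Rather than copy-pasting through a terminal, a slice of a huge file can be extracted on the Linux side with sed, which streams the file and never loads it whole. A sketch with illustrative line numbers:

```shell
#!/bin/sh
cd "$(mktemp -d)"
seq 1 100000 > huge.txt       # stand-in for the real huge file
# Print only lines 500-510, then quit so sed stops reading the rest
sed -n '500,510p;511q' huge.txt
```

The extracted slice can then be redirected to a file (`> part.txt`) and transferred or edited on its own.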
4. Shell Programming and Scripting
Hi, all:
I've got two folders, say, "folder1" and "folder2".
Under each, there are thousands of files.
It's quite obvious that there are some files missing in each; I just would like to find them. I believe this can be done with the "diff" command.
However, if I change the above question a... (1 Reply)
Discussion started by: jiapei100
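With thousands of files per folder, comparing the two name listings with comm is cheaper than diffing the trees. A sketch with illustrative folder and file names:

```shell
#!/bin/sh
cd "$(mktemp -d)"
mkdir -p folder1 folder2
touch folder1/a folder1/b folder2/b folder2/c
ls folder1 | sort > list1
ls folder2 | sort > list2
# -3 hides names present in both listings: column 1 = only in
# folder1, column 2 (tab-indented) = only in folder2
comm -3 list1 list2
```

This compares names only; it says nothing about files that exist in both places but differ in content.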
5. UNIX for Dummies Questions & Answers
Hi All,
HP-UX dev4 B.11.11 U 9000/800 3251073457
I need to copy a large amount of data from a Windows text file into the vi editor. When I tried to copy it, the format of the data was not preserved and it appeared scattered through vi, something like given below. Please let me know how I can correct this?
... (18 Replies)
Discussion started by: alok.behria
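If the editor on the HP-UX side is actually vim rather than classic vi, the scattering usually comes from autoindent re-indenting every pasted line; a minimal fix (assuming vim — classic vi lacks this option, where `:set noai` is the closest workaround) is:

```vim
:set paste      " disable autoindent and mappings while pasting
"   ...paste the text from Windows here...
:set nopaste    " restore normal editing afterwards
```

Transferring the file directly (e.g. with scp or ftp in ASCII mode) avoids the terminal paste path entirely and is generally more reliable for large data.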
6. Shell Programming and Scripting
Hi
I have a shell script to copy a pattern of files from Linux to a Windows filesystem.
When I execute the command below
cp -av TOUT_05-02-13* Windows/Folder
`TOUT_05-02-13-19:02:37.tar.gz' -> `Windows/Folder/SYSOUT_05-02-13-19:02:37.tar.gz'
cp: cannot create regular file... (5 Replies)
Discussion started by: rakeshkumar
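Windows filesystems (NTFS/FAT) reject ":" in file names, which is why cp fails on names like `TOUT_05-02-13-19:02:37.tar.gz`. One workaround is to substitute the colons while copying; a sketch with illustrative names and paths:

```shell
#!/bin/sh
cd "$(mktemp -d)"
mkdir -p Windows/Folder
touch 'TOUT_05-02-13-19:02:37.tar.gz'   # stand-in for the real files
for f in TOUT_05-02-13*; do
    dest=$(printf '%s\n' "$f" | tr ':' '-')   # 19:02:37 -> 19-02-37
    cp "$f" "Windows/Folder/$dest"
done
ls Windows/Folder
```

Any script that later consumes these files must of course expect the rewritten names.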
7. Shell Programming and Scripting
Hi Friends !!
I am facing a hash-total issue while processing a set of very large files:
Command used:
tail -n +2 <File_Name> |nawk -F"|" -v '%.2f' qq='"' '{gsub(qq,"");sa+=($156<0)?-$156:$156}END{print sa}' OFMT='%.5f'
The file is pipe-delimited and column 156 is used for the hash total.... (14 Replies)
Discussion started by: Ravichander
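The quoted nawk one-liner appears garbled in transit, so here is a reconstruction of what it seems to do, on demo data: strip embedded double quotes, sum the absolute value of the amount column, and print with fixed precision. The three-column sample (summing column 3) stands in for the real 156-column file:

```shell
#!/bin/sh
cd "$(mktemp -d)"
printf 'h1|h2|amount\na|x|"10.50"\nb|y|"-2.25"\nc|z|"4.00"\n' > data.txt
tail -n +2 data.txt | awk -F'|' '{
    gsub(/"/, "", $3)            # remove embedded double quotes
    v = $3 + 0                   # force numeric interpretation
    sa += (v < 0) ? -v : v       # accumulate absolute value
} END { printf "%.2f\n", sa }'
```

If the totals disagree across files, the usual suspects are quote/locale handling and floating-point accumulation order, not the column itself.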
8. Solaris
Dear Experts,
I would like to know the best method for copying around 3 million files (spread over a hundred folders, each file around 1 KB) between 2 servers.
I have already tried the rsync and tar commands, but they take too long.
Please advise.
Thanks
Edy (11 Replies)
Discussion started by: edydsuranta
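For millions of tiny files, the per-file overhead dominates, so a single tar stream tends to beat per-file copies. The sketch below shows the pipe locally on demo data; across servers you would splice ssh into the middle (host and path are hypothetical), e.g. `tar -C src -cf - . | ssh user@server2 'tar -C /dest -xf -'`:

```shell
#!/bin/sh
cd "$(mktemp -d)"
mkdir -p src/sub dest
echo one > src/a.txt
echo two > src/sub/b.txt
# One continuous stream: create the archive on stdout, extract from
# stdin -- no intermediate archive file ever touches the disk
tar -C src -cf - . | tar -C dest -xf -
find dest -type f | sort
```

If ssh encryption itself becomes the bottleneck, a lighter cipher or a raw transport on a trusted network is the usual next step.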
9. Solaris
Gents,
I have a NAS filesystem mounted in Solaris as /Sysapp with a size of 8 TB.
The problem is that once the backup starts, it impacts the performance of the OS.
Do you have any idea how we can back up this FS quickly without impacting the OS?
Backup type : Netbackup (3 Replies)
Discussion started by: AbuAliiiiiiiiii
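NetBackup itself is throttled through its policies (bandwidth and job limits), but as a general-purpose mitigation, lowering the backup process's scheduling priority lets foreground work win the CPU. A sketch where a tar job stands in for the backup (file names are illustrative):

```shell
#!/bin/sh
cd "$(mktemp -d)"
echo demo > nb-demo.txt
# nice -n 19 runs the job at the lowest CPU priority; the rest of
# the system is served first whenever there is contention
nice -n 19 tar -cf nb-demo.tar nb-demo.txt
tar -tf nb-demo.tar
```

Note that nice only addresses CPU contention; if the slowdown comes from NAS I/O saturation, throttling has to happen on the NetBackup or storage side.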
10. Solaris
Gents
I have a huge NAS filesystem mounted as /sys with a size of 10 TB, and I want to split it so that each 1 TB becomes a separate filesystem mounted on the server.
How can I do that without changing anything in the source?
Please support. (1 Reply)
Discussion started by: AbuAliiiiiiiiii
LEARN ABOUT CENTOS
e4defrag
E4DEFRAG(8) System Manager's Manual E4DEFRAG(8)
NAME
e4defrag - online defragmenter for ext4 filesystem
SYNOPSIS
e4defrag [ -c ] [ -v ] target ...
DESCRIPTION
e4defrag reduces fragmentation of extent-based files. The file targeted by e4defrag must be created on an ext4 filesystem made with the "-O extent"
option (see mke2fs(8)). The targeted file gets more contiguous blocks, which improves file access speed.
target is a regular file, a directory, or a device that is mounted as an ext4 filesystem. If target is a directory, e4defrag reduces
fragmentation of all files in it. If target is a device, e4defrag gets its mount point and reduces fragmentation of all files in this mount
point.
OPTIONS
-c Get the current fragmentation count and the ideal fragmentation count, and calculate a fragmentation score based on them. From this
score, you can determine whether you should run e4defrag on the target. When used with the -v option, the current fragmentation count and
the ideal fragmentation count are printed for each file.
This option also outputs the average data size in one extent, which shows whether the file has ideal extents. Note
that the maximum extent size is 131072 KB on an ext4 filesystem (if the block size is 4 KB).
If this option is specified, target is never defragmented.
-v Print error messages and the fragmentation count before and after defrag for each file.
NOTES
e4defrag does not support swap files, files in the lost+found directory, or files allocated in indirect blocks. When target is a device or a
mount point, e4defrag does not defragment files on mount points of other devices.
Non-privileged users can run e4defrag on their own files, but the score is not printed if the -c option is specified. Therefore, it is
preferable to run it as the root user.
AUTHOR
Written by Akira Fujita <a-fujita@rs.jp.nec.com> and Takashi Sato <t-sato@yk.jp.nec.com>.
SEE ALSO
mke2fs(8), mount(8).
e4defrag version 2.0 May 2009 E4DEFRAG(8)