Yes, wc -c is the best answer. It gives the exact byte count.
As you suggest, do not worry about "overhead" at this point. As you can see from my previous post with the time test, wc runs very fast even on large files. If there really were an "overhead" problem, that is, if your script took too long, you could worry about it then.
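For reference, a minimal demonstration on a scratch file (the /tmp path is just an example):

```shell
# create a small test file, then get its exact byte count
printf 'hello\n' > /tmp/sizetest.txt
size=$(wc -c < /tmp/sizetest.txt)   # reading from stdin keeps the filename out of the output
echo "$size"                        # 6 bytes: five letters plus the newline
```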
No, "wc -c" causes too much I/O load, because it must read the whole file!
Better to stick with "ls -l"; you can set a consistent locale so the output format is predictable:
most OSes also take the -o option
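A sketch of that approach, pinning the locale and pulling out the size field (field positions can differ on unusual ls implementations, so treat this as an assumption):

```shell
# make a 3-byte scratch file to inspect
printf 'abc' > /tmp/lstest.txt
# LC_ALL=C pins ls to the classic long format, where field 5 is the size in bytes
size=$(LC_ALL=C ls -l /tmp/lstest.txt | awk '{print $5}')
echo "$size"
```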
How can you measure the I/O load of a process?
Have a 10 GB file on an NFS share, and run "wc -c" on it on a hundred NFS clients simultaneously.
This will take very long, and your NFS server will be overloaded the whole time.
And your server admin will be angry. (The redness of his face is proportional to the overhead that you have caused.)
There is I/O load and there is time load.
I/O load is normally the file size, unless the file is already cached in RAM.
Time load is measured with the time command, using an uncached copy, such as:
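Along those lines, a rough timing run might look like this (a sketch; the copy itself ends up in the cache, so only the first read approximates a cold one):

```shell
# build a fresh test file so an old cache entry for the original is not reused
seq 1 100000 > /tmp/timetest.dat
# 'time' reports real/user/sys on stderr; the wc output is the byte count
time wc -c /tmp/timetest.dat
```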
Unless you are dealing with large files, much bigger than the test file above, or you run into a problem, I would not worry about such "overhead" concerns. It's called "premature optimization" to "fix" something by making it complicated before there is a known problem. On the other hand, if your script is too slow, then it is time to try something that does not read the actual data on the disk.
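One way to get the size without reading the data is to ask the inode via stat; a sketch (the option letter differs between GNU coreutils and BSD/macOS, which the fallback below assumes):

```shell
# make a 5-byte scratch file
printf '12345' > /tmp/stattest.txt
# GNU coreutils uses 'stat -c %s'; BSD/macOS uses 'stat -f %z'
size=$(stat -c %s /tmp/stattest.txt 2>/dev/null || stat -f %z /tmp/stattest.txt)
echo "$size"
```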
Hi All,
I am trying to sftp to get a file from a remote server.
After this, we need to check the remote server file's size or checksum value and compare it with the file size on the current server.
But I am facing an issue getting the remote server file size. Could you please... (2 Replies)
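Without seeing the setup, one common approach after the sftp get is to checksum both copies and compare. A purely local sketch with hypothetical file names (the "fetched" file stands in for the one pulled over sftp):

```shell
printf 'payload\n' > /tmp/local.dat
printf 'payload\n' > /tmp/fetched.dat   # stand-in for the file retrieved via sftp
# cksum on stdin prints "CRC byte-count" with no filename, so the strings compare cleanly
local_sum=$(cksum < /tmp/local.dat)
remote_sum=$(cksum < /tmp/fetched.dat)
if [ "$local_sum" = "$remote_sum" ]; then echo "match"; else echo "differ"; fi
```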
Hi,
I have a file created by redirecting some contents in a Unix shell.
Even when there is no content being redirected, the file size still shows greater than zero.
But even if there is no matching pattern, the file APPRES has a size greater than 0 bytes.
awk -f AA.awk $logfile>APPRES... (3 Replies)
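Without seeing AA.awk it is hard to say, but a common cause is an unconditional print (often in an END block) that emits a newline even when nothing matched; a sketch of that effect:

```shell
# no input line matches, yet the END print still writes one newline to the output file
printf 'x\n' | awk '/nomatch/ {print} END {print ""}' > /tmp/APPRES_demo
wc -c < /tmp/APPRES_demo   # 1 byte even though the pattern never matched
```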
Hi everybody,
I'm new here in the forum and a new Dummy in L|U systems (hope I find welcomes... :)).
I just want to ask: what are the OSes that work on servers, and the OSes that work as client OSes?
I just know that Solaris works on servers :D..
and I'm glad to be a member in this... (1 Reply)
I have a file on Solaris/Linux.
ls -ls shows the logical size to be: 13292
However, when I transfer the file to my Windows machine, Right-click -> Properties shows the file size as: 13421
I wrote a small program on Unix and Windows that does a stat() on the file and reports the st_size... (6 Replies)
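One likely explanation (assuming the file was transferred in ASCII/text mode) is that a carriage return was inserted before each newline: 13421 - 13292 = 129, consistent with a 129-line file gaining one CR per line. A sketch of the effect:

```shell
# the same two lines with Unix (LF) vs Windows (CRLF) line endings
printf 'a\nb\n'     > /tmp/unix.txt   # 4 bytes
printf 'a\r\nb\r\n' > /tmp/dos.txt    # 6 bytes: one extra CR per line
wc -c /tmp/unix.txt /tmp/dos.txt
```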
Hi,
When can a Unix library file's size become zero? For example: can mistyping this command -> /usr/ucb/ps -auxww|grep -i <process name> make the "ps" file's size become zero or its contents get deleted? Is there any other way that an inadvertent mistake could cause the file size to... (1 Reply)
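A stray output redirection aimed at a file you have write permission on will truncate it to zero bytes, which is one way a mistyped command could empty a file. A harmless demonstration on a scratch file (the name is hypothetical):

```shell
printf 'important contents\n' > /tmp/pscopy   # stand-in for a file you own
> /tmp/pscopy                                 # a stray '>' truncates it to 0 bytes
wc -c < /tmp/pscopy
```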
Hi,
I am trying to sort millions of delimited records, and I cannot sort more than 60 million; if I try, I get a message stating "File size limit exceeded". Is there any file size limit for using the sort command?
How can I solve this problem?
Thanks
... (7 Replies)
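"File size limit exceeded" usually means the process file-size limit (see ulimit -f) was hit by sort's output or temporary files, not a limit in sort itself. One workaround, sketched here on a tiny input with hypothetical /tmp names, is to split the input, sort each piece, and merge the sorted pieces:

```shell
rm -f /tmp/chunk_*                       # clear out any earlier pieces
printf 'banana\napple\ndate\ncherry\n' > /tmp/big.txt
split -l 2 /tmp/big.txt /tmp/chunk_      # cut the input into 2-line pieces
for f in /tmp/chunk_*; do sort "$f" -o "$f"; done   # sort each piece in place
sort -m /tmp/chunk_* > /tmp/sorted.txt   # -m merges files that are already sorted
cat /tmp/sorted.txt
```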
Hi All,
Currently we are using an HP-UX machine. We are facing problems with respect to file size: files do not seem to be able to exceed 2 GB.
Could you please let me know the following:
1. Is there any difference between a 32-bit application and a 64-bit application with respect to file... (2 Replies)
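For background, the classic 2 GB limit comes from a 32-bit signed off_t, whose maximum value is 2^31 - 1; applications built with 64-bit file offsets (largefile support) are not bound by it. The arithmetic:

```shell
# largest file offset representable in a 32-bit signed off_t
limit=$(( (1 << 31) - 1 ))
echo "$limit"   # 2147483647 bytes, i.e. just under 2 GB
```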
We have two files, /var/adm/wtmp and /var/opt/OV/tmp/OpC/guiagtdf.
Could you please tell me what these files are and whether I can purge them, as they are very big and vi will not let me look at them; it gives me the message 'file too long'.
Help, any ideas?
Thanks
:cool: (3 Replies)
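Files like wtmp (binary login records, read with commands like last rather than vi) can typically be truncated in place rather than deleted, so the processes writing to them keep using the same inode. A sketch on a scratch file (check your site's policy before emptying the real /var/adm/wtmp):

```shell
printf 'old records\n' > /tmp/wtmp_demo   # stand-in for a grown log file
: > /tmp/wtmp_demo                        # ':' writes nothing; the file is emptied
wc -c < /tmp/wtmp_demo                    # 0 bytes, but the file still exists
```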