Because the folder has thousands of files it takes too long, and I have trouble finding the largest files and then compressing or deleting them. For instance:
The above command took an hour and I had to cancel it.
Secondly, the above command took so much time and then failed with an "arg list too long" error when run as root.
I also tried to compress some files, but that took too long as well.
Are there more efficient methods or commands to do this? I have failed with these three approaches.
The operating system is SunOS
If there are many, many files, then any search through them will take time, potentially a lot of time. If this is on a filesystem that is mounted over the network, then this time will be much greater. It is far better to do the work on the machine where the disk really is.
Perhaps this may be slightly more efficient for you though:-
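One approach along those lines, sketched here with a placeholder path and an arbitrary 100 MB threshold, is to let find do the size test itself instead of listing everything:

```shell
#!/bin/sh
# List only the files larger than 100 MB under the given directory.
# SunOS find measures -size in 512-byte blocks, so 204800 blocks = 100 MB.
dir=${1:-.}
find "$dir" -type f -size +204800 -exec ls -l {} \;
```

Because find applies the size test itself, only the handful of large files ever reaches ls, which sidesteps the "arg list too long" failure.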
Can you tell us more about what hardware you have?
You may create a list of filenames and sizes first, and then take a closer look at it with different criteria. That way the long-running part, the reading of all the file sizes, is only done once.
Example:
1) Read the sizes
2) Find the big files
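A sketch of that two-step idea (the path, the temporary file name, and the 100 MB cutoff are all placeholders): du -ak writes one size/path pair per line, and that saved list can then be queried repeatedly without touching the disk again.

```shell
#!/bin/sh
# 1) Read the sizes once (the slow part). du -ak prints
#    "size-in-KB<TAB>path" for every file and directory.
du -ak /path/to/dir > /tmp/filesizes.txt

# 2) Find the big files from the saved list, e.g. everything
#    over 100 MB (102400 KB), largest first, top 20:
awk '$1 > 102400' /tmp/filesizes.txt | sort -rn | head -20
```

Note that du reports disk usage rather than exact byte counts, and its output includes directory totals as well as plain files, but for hunting down space hogs that is usually what you want anyway.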
I've no experience with SunOS. The above commands may have slightly different syntax there.
Can you sign on to the server at address 10.80.1.83? If you can, running your code there will be significantly faster than running it over the network. This applies not just to the searching, but to the actual compression too. If you compress over the network, you have to read the file across the network into your local memory, compress it, and then write the resulting file back across the network to the server's disk.
It really could be a massive difference in performance.
Stomp - there is no stat command on vanilla SunOS, AFAIK. Since Oracle took over, the Solaris freeware site has died as well.
I think the OP also has another problem: Solaris 10 and earlier file systems (not ZFS) all had a problem. If there are large numbers of files in a single directory, some file-related commands, notably ls, bog down. A lot.
We had a directory with >30K small files in it. I fixed the performance problems by moving files out of the primary directory every day in a cron job, but still kept them on the same file system. With about 5000 files, performance was acceptable.
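That kind of nightly rotation can be a single cron entry; the directory names, the one-day age threshold, and the 02:00 schedule below are invented for illustration:

```shell
# Hypothetical crontab entry: every night at 02:00, move files
# older than one day out of the busy directory into an archive
# directory on the SAME file system (so each mv is just a rename).
0 2 * * * find /app/spool -type f -mtime +1 -exec mv {} /app/spool_archive/ \;
```

Keeping the archive on the same file system matters: mv within one file system is a cheap rename, while moving across file systems would copy every byte.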
alexcol - please post the output of a command that gives the physical size in bytes of the exact directory with the problem.
Since I do not know the name of the directory, here is an example; note the lowercase "d" in the command. Please post the result so we can help.
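From the "lowercase d" hint, the command meant was presumably ls -ld (an assumption; substitute your real directory for the placeholder path), which reports the directory entry itself rather than its contents:

```shell
# ls -ld shows the directory's own entry; on UFS the size field is the
# physical size of the directory file, which grows with the number of
# entries and, on older Solaris, never shrinks even after deletions.
ls -ld /path/to/problem_dir
```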
And if you happen to have too many individual files, regardless of size, on a file system, then you can run out of inodes as well. This is pretty hard to do, but it can happen if the file system was created with unusual parameters.
To see used inodes, try:
where mountpoint is the place in the file system where your interesting directory is mounted.
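On Solaris the UFS-specific df takes an -o i suboption for inode counts, while GNU/Linux df spells it -i; mountpoint below is a placeholder either way:

```shell
# Solaris (UFS): reports iused / ifree / %iused for the file system
df -o i /mountpoint

# GNU/Linux equivalent
df -i /mountpoint
```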