10-18-2012
When you use NFS to access remote files, the client keeps a cache of recently accessed blocks. Over time the cache fills with the most recently accessed data, so the first access (with a cold cache) is relatively slow, while later reads of the same pages are fast until those pages are evicted from the cache.
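The same effect is easy to observe locally with the page cache (a rough sketch; real NFS timings depend on the server, network, and mount options):

```shell
# Create a 16 MB scratch file, then read it twice: the first read has
# to fetch the data, the second is served from the page cache.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=16 2>/dev/null
bytes=$(wc -c < "$f" | tr -d ' ')
time cat "$f" > /dev/null   # cold(ish) read
time cat "$f" > /dev/null   # warm read, from the cache
echo "$bytes"
rm -f "$f"
```

On most systems the second `time` reports a noticeably lower elapsed time.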
10 More Discussions You Might Find Interesting
1. UNIX for Dummies Questions & Answers
Hi peeps,
We have around 60 users.
The time set to retrieve mail is 300 seconds.
But it's taking around 1 hour to deliver mail.
I am using Debian Sarge 3.1.
Any clues?
And how will it be affected if I decrease the time?
My machine has one P4 3.0 GHz processor and 1 GB of RAM.
The home... (2 Replies)
Discussion started by: squid04
2. Red Hat
I'm having a bit of a login performance issue.. wondering if anyone has any ideas where I might look.
Here's the scenario...
Linux Red Hat ES 4 update 5
regardless of where I login from (ssh or on the text console) after providing the password the system seems to pause for between 30... (4 Replies)
Discussion started by: retlaw
3. Shell Programming and Scripting
I'm new to UNIX scripting. Please help.
I have about 10,000 files in the $ROOTDIR/scp/inbox/string1 directory to compare with the 50 files in the /$ROOTDIR/output/tma/pnt/bad/string1/ directory, and it takes more than 2 hours to complete the for loop. Is there a better way to re-write the... (5 Replies)
Discussion started by: hanie123
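For what it's worth, a common alternative to a nested loop is to build the two file lists and compare them with comm (a minimal sketch with made-up file names; both lists must be sorted first):

```shell
# Build sorted name lists from each directory (faked here with printf),
# then let comm report the names present in both.
a=$(mktemp); b=$(mktemp)
printf 'file1\nfile2\nfile3\n' | sort > "$a"   # stand-in for the inbox listing
printf 'file2\nfile4\n'        | sort > "$b"   # stand-in for the bad listing
common=$(comm -12 "$a" "$b")   # -12 suppresses lines unique to each file
echo "$common"
rm -f "$a" "$b"
```

In the real case the listings would come from `ls` or `find` on the two directories; comm then does one linear pass instead of 10,000 x 50 comparisons.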
4. Shell Programming and Scripting
Hi,
I have a script here that purges older files/directories based on a defined purge period. The script consists of 45 find commands, each of which must traverse more than a million directories, so a single find command takes around 22-25 minutes... (7 Replies)
Discussion started by: sravicha
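One common approach (sketched here with hypothetical patterns on a throwaway directory) is to merge several of those find commands into a single traversal with -o, so the tree is walked once instead of 45 times:

```shell
# One traversal matching several purge patterns at once.
d=$(mktemp -d)
touch "$d/a.log" "$d/b.tmp" "$d/c.txt"
# \( ... -o ... \) groups the name tests; only .log and .tmp match.
matched=$(find "$d" -type f \( -name '*.log' -o -name '*.tmp' \) | wc -l | tr -d ' ')
echo "$matched"
rm -rf "$d"
```

Different retention ages can still be expressed in one command by pairing each pattern with its own -mtime test inside the grouped expression.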
5. UNIX for Dummies Questions & Answers
grep -f taking long time to compare for big files, any alternate for fast check
I am using grep -f file1 file2 to check for duplicate/common rows. But file1 is 5 GB and file2 is 50 MB, and it's taking a very long time to compare the files.
Do we have any... (10 Replies)
Discussion started by: gkskumar
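A frequently suggested alternative (a sketch with toy data): load the smaller file's rows into an awk hash and stream the big file once, instead of letting grep treat every line of a multi-gigabyte file as a pattern:

```shell
big=$(mktemp); small=$(mktemp)
printf 'row1\nrow2\nrow3\n' > "$big"
printf 'row2\n'             > "$small"
# NR==FNR is true only while reading the first file (the small one);
# its rows go into the 'seen' hash. Matching rows of the big file are
# then printed by the default action.
dups=$(awk 'NR==FNR{seen[$0]; next} $0 in seen' "$small" "$big")
echo "$dups"
rm -f "$big" "$small"
```

If grep must be used, `grep -F -f smallfile bigfile` (fixed strings, patterns from the smaller file) is also usually far faster than the reverse.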
6. UNIX for Dummies Questions & Answers
Hi,
We have 20 jobs scheduled.
One of the jobs is taking a long time and not completing.
If we don't terminate it, it runs indefinitely; the normal completion time is 5 minutes.
The job deletes some records from the table and runs two insert statements and one select... (7 Replies)
Discussion started by: ajaykumarkona
7. Solaris
Dear All,
OS = Solaris 5.10
Hardware: Sun Fire T2000 with a 1 GHz quad-core processor
We have Oracle Applications 11i with a 10g database. Whenever I try to take a cold backup of the 55 GB database, it takes a long time to finish. As the application is down, nobody is using the server at all... (8 Replies)
Discussion started by: yoojamu
8. Shell Programming and Scripting
while read myhosts
do
while read discovered
do
echo "$discovered"
done < $LOGFILE | grep -Pi "|" | egrep... (7 Replies)
Discussion started by: SkySmart
9. Shell Programming and Scripting
Hi Gurus,
I have a weird issue: when using
ls -l
the result shows different time formats:
-rw-r--r-- 1 abc gourp1 3032605576 Jun 14 2013 abc
-rw-rw-r-- 1 abc gourp1 1689948832 Aug 10 06:22 abc
One displays 2013, which is a year; the other displays 06:22, which is a time.
... (4 Replies)
Discussion started by: ken6503
10. Shell Programming and Scripting
I have so many (hundreds of thousands of) files and directories within this one specific directory that my "rm -rf" command to delete them has been taking forever.
I did this via SSH; my question is: if my SSH connection times out before rm -rf finishes, will it continue to delete all of those... (5 Replies)
Discussion started by: phpchick
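The usual caveat is that a plain background job can die with the session; running the deletion under nohup (or inside screen/tmux) lets it survive a dropped SSH connection. A safe toy demo using a temporary directory:

```shell
d=$(mktemp -d)
mkdir -p "$d/sub" && touch "$d/sub/file"
# nohup detaches the command from the terminal's hangup signal, so a
# dropped SSH session no longer kills it.
nohup rm -rf "$d" >/dev/null 2>&1 &
wait $!
[ -e "$d" ] && state="present" || state="deleted"
echo "$state"
```

In the real case the directory path replaces the temp dir, and you would log out without waiting for the job to finish.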
LEARN ABOUT DEBIAN
svn-fast-backup
svn-fast-backup(1) General Commands Manual svn-fast-backup(1)
NAME
svn-fast-backup - very fast backup for Subversion fsfs repositories.
SYNOPSIS
svn-fast-backup [-q] [-k{N|all}] [-f] [-t] [-s] repos_path backup_dir
DESCRIPTION
svn-fast-backup uses rsync snapshots for very fast backup of a Subversion fsfs repository at repos_path to backup_dir/repos-rev, the latest
revision number in the repository. Multiple fsfs backups share data via hardlinks, so old backups are almost free, since a newer revision
of a repository is almost a complete superset of an older revision.
This is good for replacing incremental log-dump+restore-style backups because it is just as space-conserving and even faster; there is no
inter-backup state (old backups are essentially caches); each backup directory is self-contained. It has the same command-line interface
as svn-hot-backup(1) (if you use --force), but only works for fsfs repositories.
svn-fast-backup keeps 64 backups by default and deletes backups older than these; this can be adjusted with the -k option.
OPTIONS
-h, --help
Shows some brief help text.
-q, --quiet
Quieter-than-usual operation.
-k, --keep=N
Keep a specified number of backups; the default is to keep 64.
-k, --keep=all
Do not delete any old backups at all.
-f, --force
Make a new backup even if one with the current revision exists.
-t, --trace
Show actions.
-s, --simulate
Don't perform actions.
AUTHOR
Voluntary contributions made by many individuals. Copyright (C) 2006 CollabNet.
2006-11-09 svn-fast-backup(1)