05-21-2014
This has happened to me on several occasions. Not a timeout per se, but the connection drops or something similar. There are ways to make the rm process nearly immortal, but I simply reconnect and restart the command. The files that were already deleted do not come back; they stay gone, so you don't lose any progress.
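One common way to make a long delete survive a dropped connection (a sketch, assuming a POSIX shell with `nohup` available; `/path/to/dir` is a placeholder, not a path from the thread):

```shell
# Run the delete detached from the terminal so that a dropped SSH
# session cannot kill it; stdout/stderr go to a log file.
nohup rm -rf /path/to/dir > rm.log 2>&1 &
echo "rm running as PID $!"
```

Alternatively, running the command inside `screen` or `tmux` lets you reattach to the same session after reconnecting.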
10 More Discussions You Might Find Interesting
1. Red Hat
I'm having a bit of a login performance issue.. wondering if anyone has any ideas where I might look.
Here's the scenario...
Linux Red Hat ES 4 update 5
regardless of where I login from (ssh or on the text console) after providing the password the system seems to pause for between 30... (4 Replies)
Discussion started by: retlaw
2. Shell Programming and Scripting
I'm new to UNIX scripting. Please help.
I have about 10,000 files in the $ROOTDIR/scp/inbox/string1 directory to compare with the 50 files in the /$ROOTDIR/output/tma/pnt/bad/string1/ directory, and it takes about 2 hours plus to complete the for loop. Is there a better way to rewrite the... (5 Replies)
Discussion started by: hanie123
3. UNIX for Advanced & Expert Users
I'd like to
1. Check and compare the 10,000 pnt files, each containing a single record, from the /$ROOTDIR/scp/inbox/string1 directory against 39 bad pnt files from the /$ROOTDIR/output/tma/pnt/bad/string1 directory, based on the fam_id column value at positions 38 to 47 of the record below. Here is... (1 Reply)
Discussion started by: hanie123
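A common pattern for this kind of fixed-width comparison is to load the bad fam_ids into an awk hash and stream the other records through it in a single pass — a sketch, assuming one record per line, fam_id at columns 38-47, and placeholder file names:

```shell
# First pass (NR==FNR): remember each bad record's fam_id.
# Second pass: print any inbox record whose fam_id was seen.
awk 'NR==FNR { bad[substr($0, 38, 10)]; next }
     substr($0, 38, 10) in bad' bad.pnt inbox.pnt
```

This replaces a per-file loop (10,000 process spawns) with one awk invocation, which is usually where the two hours go.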
4. UNIX for Dummies Questions & Answers
grep -f taking a long time to compare big files; any faster alternative?
I am using grep -f file1 file2 to check for duplicate/common rows. But file1 is 5 GB and file2 is 50 MB, and it is taking a very long time to compare the files.
Do we have any... (10 Replies)
Discussion started by: gkskumar
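grep -f treats every line of file1 as a pattern, which scales badly with a 5 GB pattern file; for exact-line matches, an awk hash join (or sort plus comm) is usually far faster — a sketch, loading the smaller 50 MB file into memory first:

```shell
# Load the smaller file (file2) into a hash, then stream the 5 GB
# file once, printing lines present in both (exact-line matches only):
awk 'NR==FNR { seen[$0]; next } $0 in seen' file2 file1
```

Adding `grep -F` (fixed strings) to the original command can also help when the patterns are not regexes, but the single-pass hash approach avoids the pattern explosion entirely.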
5. UNIX for Dummies Questions & Answers
Hi,
We have 20 jobs scheduled.
One of the jobs is taking a long time and is not completing.
If we do not terminate it, it runs indefinitely, although the normal completion time is 5 minutes.
The job deletes some records from a table and runs two insert statements and one select... (7 Replies)
Discussion started by: ajaykumarkona
6. Solaris
It's almost 3 days now and my resync/re-attach is only at 80%. Is there something I can check in Solaris 10 that would explain the degradation? It's only a standby machine.
My live system completed in 6 hours. (9 Replies)
Discussion started by: ravzter
7. Solaris
Dear All,
OS = Solaris 5.10
Hardware: Sun Fire T2000 with a 1 GHz quad core
We have Oracle Applications 11i with a 10g database. Whenever I try to take a cold backup of the 55 GB database, it takes a long time to finish. As the application is down, nobody is using the server at all... (8 Replies)
Discussion started by: yoojamu
8. UNIX for Dummies Questions & Answers
Hi,
All the data are kept on a NetApp filer using NFS. Some directories are fast when doing ls, but a few are slow. After a few attempts a directory becomes fast; then after a few minutes it becomes slow again. Can you advise what's going on?
The one directory I am most interested in is giving... (3 Replies)
Discussion started by: samnyc
9. Shell Programming and Scripting
while read myhosts
do
    while read discovered
    do
        echo "$discovered"
    done < $LOGFILE | grep -Pi "|" | egrep... (7 Replies)
Discussion started by: SkySmart
10. Shell Programming and Scripting
Hi,
I am running an SSH connection test in a script; how can I add a timeout to abort the process if it takes too long?
ssh -i ~/.ssh/ssl_key useraccount@computer1
Thank you.
- j (1 Reply)
Discussion started by: hce
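For a connection test like the one in discussion 10, OpenSSH's ConnectTimeout option bounds the connection attempt, and the coreutils timeout utility bounds the whole command — a sketch reusing the poster's key path and host (which won't resolve outside their network):

```shell
# Fail after 10 seconds if the TCP connection cannot be established;
# BatchMode disables password prompts so the script never hangs on one:
ssh -o ConnectTimeout=10 -o BatchMode=yes \
    -i ~/.ssh/ssl_key useraccount@computer1 true

# Or hard-kill the entire ssh invocation after 30 seconds:
timeout 30 ssh -i ~/.ssh/ssl_key useraccount@computer1 true
```

ConnectTimeout only covers connection establishment; `timeout` also catches a session that connects but then stalls.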
MONGOCLIENT.KILLCURSOR(3) 1 MONGOCLIENT.KILLCURSOR(3)
MongoClient::killCursor - Kills a specific cursor on the server
SYNOPSIS
public bool MongoClient::killCursor (string $server_hash, int|MongoInt64 $id)
DESCRIPTION
In certain situations it might be necessary to kill a cursor on the server. Usually cursors time out after 10 minutes of inactivity, but it is possible to create an immortal cursor with MongoCursor::immortal that never times out. To be able to kill such an immortal cursor, you can call this method with the information supplied by MongoCursor::info.
PARAMETERS
o $server_hash
- The server hash that has the cursor. This can be obtained through MongoCursor::info.
o $id
- The ID of the cursor to kill. You can either supply an int containing the 64 bit cursor ID, or an object of the MongoInt64
class. The latter is necessary on 32 bit platforms (and Windows).
RETURN VALUES
Returns TRUE if the method attempted to kill a cursor, and FALSE if there was something wrong with the arguments (such as a wrong
$server_hash). The return status does not reflect whether the cursor was actually killed, as the server does not provide that information.
ERRORS/EXCEPTIONS
This method displays a warning if the supplied $server_hash does not match an existing connection. In that case, no attempt is made to kill the cursor.
EXAMPLES
Example #1
MongoClient.killCursor(3) example
This example shows how to connect, do a query, obtain the cursor information and then kill the cursor.
<?php
$m = new MongoClient();
$c = $m->testdb->collection;
$cursor = $c->find();
$result = $cursor->next();
// Now the cursor is valid, so we can get the hash and ID out:
$info = $cursor->info();
// Kill the cursor
MongoClient::killCursor( $info['server'], $info['id'] );
?>
PHP Documentation Group MONGOCLIENT.KILLCURSOR(3)