12-19-2010
This sounds like a record lock wait rather than a true deadlock: another process holds the table record locked for update, and your process is waiting for that lock to be released so it can delete the record. A DBA can get into the db and trace which processes have the record open.
You have to supply the table name(s).
10 More Discussions You Might Find Interesting
1. UNIX for Dummies Questions & Answers
Hi peeps,
We have around 60 users.
The mail retrieval interval is set to 300 sec.
But it's taking around 1 hour to deliver mails.
I am using Debian Sarge 3.1.
Any clues?
And how will delivery be affected if I decrease the interval?
My machine has one P4 3.0 GHz processor and 1 GB of RAM.
The home... (2 Replies)
Discussion started by: squid04
2. Red Hat
I'm having a bit of a login performance issue... wondering if anyone has any ideas where I might look.
Here's the scenario...
Linux Red Hat ES 4 update 5
regardless of where I login from (ssh or on the text console) after providing the password the system seems to pause for between 30... (4 Replies)
Discussion started by: retlaw
3. Shell Programming and Scripting
I'm new to UNIX scripting. Please help.
I have about 10,000 files in the $ROOTDIR/scp/inbox/string1 directory to compare with the 50 files in the /$ROOTDIR/output/tma/pnt/bad/string1/ directory, and it takes more than 2 hours to complete the for loop. Is there a better way to re-write the... (5 Replies)
Discussion started by: hanie123
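For an exact name-by-name comparison like the one above, a sorted-list approach with comm avoids the per-file loop entirely. A minimal sketch; the /tmp demo paths below stand in for the poster's $ROOTDIR directories:

```shell
#!/bin/sh
# Sketch: compare two directories by file name with sort + comm
# instead of a nested for loop. Demo paths only, not the real tree.
dir1=/tmp/demo_inbox
dir2=/tmp/demo_bad
rm -rf "$dir1" "$dir2"
mkdir -p "$dir1" "$dir2"
touch "$dir1/a" "$dir1/b" "$dir2/b" "$dir2/c"

ls "$dir1" | sort > /tmp/list1.txt
ls "$dir2" | sort > /tmp/list2.txt
comm -12 /tmp/list1.txt /tmp/list2.txt   # names present in both dirs
# prints: b
```

Sorting both listings once and letting comm walk them costs roughly O(n log n), versus O(n*m) file tests for the nested loop, which is where the 2-plus hours go.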
4. Shell Programming and Scripting
Hi,
I have here a script which purges older files/directories based on a defined purge period. The script consists of 45 find commands, and each command has to traverse more than a million directories, so a single find command takes around 22-25 mins... (7 Replies)
Discussion started by: sravicha
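When 45 serial find runs are the bottleneck, one option is to launch them concurrently and wait for the batch. A minimal sketch; the /tmp paths and the 30-day cutoff are illustrative, not the poster's values, and -print is used so the match list can be verified before swapping in -delete:

```shell
#!/bin/sh
# Sketch: run purge finds in parallel instead of one after another.
# Paths and the +30-day cutoff are illustrative only.
purge() {
    find "$1" -type f -mtime +30 -print   # swap -print for -delete once verified
}
for dir in /tmp/purge_a /tmp/purge_b /tmp/purge_c; do
    mkdir -p "$dir"
    purge "$dir" &                        # one background find per tree
done
wait                                      # block until every find finishes
```

This trades serial wall-clock time for concurrent I/O; on a single slow spindle it may not help, but across independent filesystems it usually does.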
5. UNIX for Dummies Questions & Answers
grep -f taking long time to compare for big files, any alternate for fast check
I am using grep -f file1 file2 to check for duplicate/common rows. But file1 is 5 GB and file2 is 50 MB, and it's taking a very long time to compare the files.
Do we have any... (10 Replies)
Discussion started by: gkskumar
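For exact whole-line matching, grep's -F (fixed strings) and -x (whole line) flags skip the regex engine entirely, which is usually where a plain grep -f on a 5 GB file loses its time; keeping the smaller file as the pattern file (-f) also helps. A minimal sketch with toy files:

```shell
#!/bin/sh
# Sketch: exact-line intersection with grep -F -x -f.
# -F = fixed strings (no regex engine), -x = match whole lines only.
printf 'alpha\nbeta\ngamma\n' > /tmp/file1
printf 'beta\ndelta\n'        > /tmp/file2
grep -F -x -f /tmp/file2 /tmp/file1     # lines of file1 also in file2
# prints: beta
```

If both files can be sorted first, sort followed by comm -12 is typically faster still and keeps memory use bounded.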
6. Solaris
It's almost 3 days now and my resync/re-attach is only at 80%. Is there something I can check in Solaris 10 that would explain the degradation? It's only a standby machine.
My live system completed in 6hrs. (9 Replies)
Discussion started by: ravzter
7. Solaris
Dear All,
OS = Solaris 5.10
Hardware: Sun Fire T2000 with 1 GHz quad core
We have Oracle Applications 11i with a 10g database. Whenever I try to take a cold backup of the 55 GB database, it takes a long time to finish. As the application is down, nobody is using the server at all... (8 Replies)
Discussion started by: yoojamu
8. UNIX for Dummies Questions & Answers
Hi,
All the data is kept on a NetApp using NFS. Some directories are fast when doing ls, but a few of them are slow. After a few tries it becomes fast, then a few minutes later it becomes slow again. Can you advise what's going on?
This one directory I am very interested is giving... (3 Replies)
Discussion started by: samnyc
9. UNIX and Linux Applications
One of my jobs is taking a long time to run.
I need to identify the cause from the UNIX log file; can you please help me troubleshoot? (1 Reply)
Discussion started by: Nsharma3006
10. Shell Programming and Scripting
I have so many (hundreds of thousands) files and directories within this one specific directory that my "rm -rf" command to delete them has been taking forever.
I did this via SSH; my question is: if my SSH connection times out before rm -rf finishes, will it continue to delete all of those... (5 Replies)
Discussion started by: phpchick
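On the SSH question above: a foreground rm -rf normally receives SIGHUP and dies when the session drops (the exact behavior varies by shell and its huponexit setting). One way to make the delete survive the session is to detach it with nohup, sketched here against a throwaway demo directory rather than a real tree:

```shell
#!/bin/sh
# Sketch: detach a long-running delete so an SSH timeout cannot kill it.
# /tmp/huge_demo stands in for the real directory tree.
mkdir -p /tmp/huge_demo/sub
touch /tmp/huge_demo/sub/f1 /tmp/huge_demo/sub/f2
nohup rm -rf /tmp/huge_demo > /tmp/rm.log 2>&1 &
wait $!    # demo only; in real use you would simply log out here
[ -d /tmp/huge_demo ] || echo "deleted"
# prints: deleted
```

Running the delete under nohup with output redirected means no controlling terminal dependency is left, so the process keeps running after logout.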
LEARN ABOUT OPENSOLARIS
audit
audit(2) System Calls audit(2)
NAME
audit - write a record to the audit log
SYNOPSIS
cc [ flag ... ] file ... -lbsm -lsocket -lnsl [ library... ]
#include <sys/param.h>
#include <bsm/libbsm.h>
int audit(caddr_t record, int length);
DESCRIPTION
The audit() function queues a record for writing to the system audit log. The data pointed to by record is queued for the log after a minimal consistency check, with the length parameter specifying the size of the record in bytes. The data should be a well-formed audit record as described by audit.log(4).
The kernel validates the record header token type and length, and sets the time stamp value before writing the record to the audit log.
The kernel does not do any preselection for user-level generated events. If the audit policy is set to include sequence or trailer tokens,
the kernel will append them to the record.
RETURN VALUES
Upon successful completion, 0 is returned. Otherwise, -1 is returned and errno is set to indicate the error.
ERRORS
The audit() function will fail if:
E2BIG The record length is greater than the maximum allowed record length.
EFAULT The record argument points outside the process's allocated address space.
EINVAL The header token in the record is invalid.
ENOTSUP Solaris Audit is not defined for this system.
EPERM The {PRIV_PROC_AUDIT} privilege is not asserted in the effective set of the calling process.
USAGE
Only privileged processes can successfully execute this call.
ATTRIBUTES
See attributes(5) for descriptions of the following attributes:
+-----------------------------+-----------------------------+
| ATTRIBUTE TYPE | ATTRIBUTE VALUE |
+-----------------------------+-----------------------------+
|Interface Stability |Committed |
+-----------------------------+-----------------------------+
|MT-Level |MT-Safe |
+-----------------------------+-----------------------------+
SEE ALSO
bsmconv(1M), audit(1M), auditd(1M), svcadm(1M), auditon(2), getaudit(2), audit.log(4), attributes(5), privileges(5)
NOTES
The functionality described in this man page is available only if the Solaris Auditing has been enabled and the audit daemon auditd(1M) has
not been disabled by audit(1M) or svcadm(1M). See bsmconv(1M) for more information.
SunOS 5.11 16 Apr 2008 audit(2)