Unable to catch the redirection error when the disk is full
Hi Experts,
Problem summary:
I am facing the problem below with huge files when the disk fills up halfway through execution.
If the disk is already full, the commands fail and everything is fine.
Sample Code:
Description: We are creating a load file for sqlldr by removing the header record. We have the above logic to achieve it.
When the disk is already full, any attempt to run the script fails, which is as expected.
But if the disk fills up halfway through execution, the command does not fail or return any non-zero code; it completes, leaving an incomplete file.
I have tried all the other approaches as well: tail +3, perl, awk, sed. All of them behave the same way.
Please excuse my stupidity, but I am unable to zero in on the problem here.
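The code tags in the original post did not survive, so the command itself is missing. The following is a minimal sketch of the kind of pipeline being described; the file names and the `HDR` header marker are assumptions, not from the post:

```shell
# Demo input standing in for the real data file: one header record
# followed by data records (layout is an assumption, not from the post).
printf 'HDR|20130509\nrow1|a\nrow2|b\n' > sample_file.dat

# The pattern under discussion: cat piped into grep, with the output
# redirected into the sqlldr load file. The pipeline's exit status is
# that of the last command, i.e. grep.
cat sample_file.dat | grep -v '^HDR' > sample_file.load
echo "exit status: $?"    # prints: exit status: 0
```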
Last edited by Scrutinizer; 05-09-2013 at 03:59 AM. Reason: code tags
Hi Scru,
Yes, I have tried that too, but even sed produced an incomplete file.
I have other options, such as taking the line count of the resultant file and comparing it with the original file. Needless to say, this is a huge overhead on big files when we have hundreds of jobs running in parallel.
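That count-and-compare fallback can be sketched as follows (file names are placeholders); it costs an extra pass over both files, which is exactly the overhead being objected to:

```shell
# Demo files standing in for the real input and load file.
printf 'HDR\nrow1\nrow2\n' > in.dat
tail -n +2 in.dat > out.load        # strip the single header record

# A complete load file must have exactly one line fewer than the input.
expected=$(( $(wc -l < in.dat) - 1 ))
actual=$(wc -l < out.load)
if [ "$actual" -ne "$expected" ]; then
    echo "incomplete load file: $actual lines, expected $expected" >&2
    exit 1
fi
echo "load file verified: $actual lines"
```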
What system are you using? The grep utility should exit with a non-zero exit status if any write to the output file fails (whether due to ENOSPC or any other error condition). The cat is not needed. You could use:
instead of what you had, but it shouldn't affect the exit status of the grep command.
How are you checking the exit status? Is it the creation of sample_file.load that is failing, or is sqlldr failing to load sample_file.load into the database after the grep completes successfully?
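The suggested command was also lost with the code tags; presumably it amounts to letting grep read the input file directly and testing grep's own status, something like this (file names and header marker are assumptions):

```shell
printf 'HDR\nrow1\nrow2\n' > sample_file.dat    # demo input (assumed layout)

# No cat, no pipeline: grep reads the file itself, so $? is
# unambiguously grep's exit status, including a failed write to stdout.
grep -v '^HDR' sample_file.dat > sample_file.load
if [ $? -ne 0 ]; then
    echo "failed to create load file" >&2
    exit 1
fi
```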
Thanks Don,
I am using:
SunOS 5.10 Generic_147440-19 sun4u sparc SUNW,SPARC-Enterprise
The problem is: since the redirections aren't failing, we end up with an incomplete file to load.
sqlldr does error out, since an incomplete file means the integrity of the data is already lost (broken records, etc.).
Let me try the method you have posted.
Thanks for the information, but you didn't answer the key question: How are you checking the exit status of the grep command?
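One pitfall worth ruling out here (a hedged sketch, since the thread never shows the actual check): `$?` reflects only the most recently executed command, so any command run between the grep and the test silently overwrites it, and in a `cat | grep` pipeline it reflects only the last stage:

```shell
printf 'HDR\nrow1\n' > in.dat    # demo input (assumed layout)

# Capture the status immediately; an intervening echo/ls would clobber $?.
grep -v '^HDR' in.dat > out.load
rc=$?
echo "rc=$rc"                    # prints: rc=0

# In a pipeline, $? is the status of the last stage only. With
# set -o pipefail (ksh93, bash 3+) a failure anywhere in the
# pipeline makes the whole pipeline return non-zero.
cat in.dat | grep -v '^HDR' > out.load
echo "pipeline rc=$?"            # prints: pipeline rc=0
```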