06-17-2012
It seems to me any solution should always make use of a temporary intermediate file, for safety reasons. If we read the whole file into memory and then write it back to the same file, we run the risk of losing the original in case of a power failure during the write-back phase.
With a temporary file, only mv is involved, and mv is just a rename if the temporary file is in the same directory on the same file system. So a temporary file in the same directory, rather than in /tmp for example, may be preferable. If we do use /tmp for the intermediate file, temporarily renaming the original to .bak until the move from /tmp completes may be required, and the safest approach is probably to keep the .bak until the user deletes it.
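A minimal sketch of the pattern described above (the file name and the edit step are illustrative): create the temporary file next to the target so the final mv is an atomic rename on the same file system.

```shell
#!/bin/sh
set -e

file=data.txt
printf 'old contents\n' > "$file"     # stand-in for the original file

# Create the temp file in the same directory as the target, not in /tmp,
# so that the final mv is a rename rather than a cross-filesystem copy.
dir=$(dirname "$file")
tmp=$(mktemp "$dir/.$(basename "$file").XXXXXX")

# The "edit" step: here, just upper-case the contents.
tr '[:lower:]' '[:upper:]' < "$file" > "$tmp"

mv "$tmp" "$file"                     # atomic rename; the original is never
cat "$file"                           # left half-written. Prints: OLD CONTENTS
```

If the temporary file has to live on another file system (for example /tmp), rename the original to a .bak first and keep it until the user removes it, as suggested above.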
LEARN ABOUT SUSE
libmaketmpfilefd
Netpbm subroutine library: pm_make_tmpfile_fd() function(3) Library Functions Manual Netpbm subroutine library: pm_make_tmpfile_fd() function(3)
NAME
pm_make_tmpfile_fd() - create a temporary named file
SYNOPSIS
#include <netpbm/pm.h>
pm_make_tmpfile_fd(int *         fdP,
                   const char ** filenameP);
EXAMPLE
This simple example creates a temporary named file, writes some search patterns to it, then uses it as the pattern file for grep:
#include <netpbm/pm.h>
#include <netpbm/nstring.h>   /* asprintfN(), strfree() */
#include <unistd.h>           /* write(), close(), unlink() */
#include <stdlib.h>           /* system() */

int fd;
const char * myfilename;
const char * grepCommand;

pm_make_tmpfile_fd(&fd, &myfilename);

write(fd, "^account:\\s.*\n", 14);
write(fd, "^name:\\s.*\n", 11);
close(fd);

asprintfN(&grepCommand, "grep --file='%s' /tmp/infile >/tmp/outfile",
          myfilename);
system(grepCommand);
strfree(grepCommand);

unlink(myfilename);
strfree(myfilename);
DESCRIPTION
This library function is part of Netpbm(1).
pm_make_tmpfile_fd() is analogous to pm_make_tmpfile()(3); the difference is that it opens the file as a low level file descriptor, as open() would, rather than as a stream, as fopen() would.
If you don't need to access the file by name, use pm_tmpfile_fd() instead, because it's cleaner. With pm_tmpfile_fd(), the operating system always deletes the temporary file when your program exits, even if the program failed to clean up after itself.
HISTORY
pm_make_tmpfile_fd() was introduced in Netpbm 10.42 (March 2008).
netpbm documentation 31 December 2007 Netpbm subroutine library: pm_make_tmpfile_fd() function(3)