I sincerely apologize. In each case, the output file you got had a filename derived from the 2nd field (i.e., the data between the 1st and 2nd tildes, which seems to be a constant for the transactions you selected to print) of a line containing a transaction number you wanted to print. And the contents of that file were the transactions starting with the one after the next-to-last transaction number you requested from the big input file through the last transaction number you requested from the big input file.
It comes from me not getting nearly enough sleep, you not providing sample data that matched the actual format of your data, and from me not getting nearly enough sleep. (There were three problems and I'm blaming two of them on not getting enough sleep.) Now that I have cleaned up my test data to match what I believe is your current data format, the following seems to work. Please try this replacement:
Hopefully, this will do what you want.
As stated before, if someone wants to try this on a Solaris/SunOS system, change awk to /usr/xpg4/bin/awk or nawk.
Hi Don,
Thanks, this was working as expected: it wrote all 3 transactions to separate files. Now I want to change the code so that all three transaction sets are written into a single file. Could you please help me?
So, exactly what pathname should this single output file have?
What good is this file going to be given that the script that will be reading this file can only handle a single transaction?
Looking at the awk script I provided, what do you think should be changed to produce a single output file instead of one output file per transaction?
My guess would be that one line needs to be removed and one line needs to be changed. And, it might make sense (as a minor optimization) to move that changed line from its current location into a BEGIN clause or an FNR==1 clause, depending on whether the desired output file pathname is a constant or is a modification of the second input file's pathname.
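This is not the script from the earlier post (which isn't repeated here), just a minimal sketch of the single-output-file idea: the output pathname is set once, and every selected record is printed there instead of through a per-transaction filename. The filenames and the "keep"/"skip" selection test below are made up for the demo.

```shell
# Demo input: tilde-separated records; the third field stands in for
# the real selection logic, and all filenames are placeholders.
printf 'T0001~hdr~keep\nT0002~hdr~skip\nT0003~hdr~keep\n' > sample.in

awk -F'~' -v outfile=sample.out '
    $3 == "keep" { print > outfile }   # one file for all transactions
' sample.in
```

Because outfile never changes, there is no need to close() and reopen a file per transaction, which is exactly the line that can be removed.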
Hi Don,
I just changed
Now it started to write all the transaction numbers into a same output file
/tmp/remedixz.20160120_085021_41222370_1_123456
I will make the (hopefully not too wild) guess from this that the pathname of the output file you want is the pathname of the input file with the string _123456 appended.
The variable transnum in that awk script is intended to be the transaction number of the transaction that is being copied from the input file to the output file. And, since your transaction numbers are 19 character alphanumeric strings (not six digit decimal strings), setting transnum = 123456 is NOT appropriate.
Changing the:
to:
means that instead of creating a new output file each time you run this script, it will append all of the transactions requested on the latest run to the output produced on any earlier runs. This would not seem to be a desirable side effect.
Please undo the changes you made and make the following changes instead:
First, change the line:
to:
and, second, delete the line:
With these changes, the transaction number printed when a transaction is copied to the output file will again be printed correctly and a single output file will be produced each time the script is run (and will contain only the transactions extracted on that execution of the script). Later executions of the script will replace the contents of that file (if it still exists from an earlier run) or create that file (if it had been removed).
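To see the behavior described above, here is a small demo of awk's > versus >> output redirection (filenames are made up). Inside awk, "print > file" truncates the file on the first write of each run (later prints in the same run still append to it), while "print >> file" also keeps whatever earlier runs wrote.

```shell
printf 'a\nb\n' > demo.in

awk '{ print > "demo.out" }' demo.in     # first run
awk '{ print > "demo.out" }' demo.in     # second run truncates first
# demo.out now holds 2 lines, not 4

awk '{ print >> "demo.out2" }' demo.in   # first run
awk '{ print >> "demo.out2" }' demo.in   # second run appends
# demo.out2 now holds 4 lines
```

This is why changing > to >> made the script accumulate transactions across runs.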
Hi all,
I have a file like this, and I want to extract only those regions which are big and continuous:
chr1 3280000 3440000
chr1 3440000 3920000
chr1 3600000 3920000 # this region falls within 3440000 3920000, so I don't want it printed in the output
chr1 3920000 4800000
chr1 ... (2 Replies)
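One possible approach, assuming the file is sorted by start position within each chromosome: keep a region only when its end extends past the largest end already kept, so regions contained in an earlier one are skipped. This is a sketch, not the only way to do it.

```shell
# Demo file copied from the post (without the inline comment):
printf 'chr1 3280000 3440000\nchr1 3440000 3920000\nchr1 3600000 3920000\nchr1 3920000 4800000\n' > regions.txt

awk '$1 != chr { chr = $1; maxend = 0 }       # reset when chromosome changes
     $3 + 0 > maxend { print; maxend = $3 }   # skip contained regions
' regions.txt > regions.out
```

On the sample data this prints three regions and drops the 3600000-3920000 line.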
Dear all,
I have stuck with this problem for some days.
I have a very big file; it cannot be opened with the vi command.
There are 200 loops in this file, and each loop will have one line like this:
GWA quasiparticle energy with Z factor (eV)
And I need the 98 lines that come after this line.
Is... (6 Replies)
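A hedged sketch of one way to do this in awk: when the marker line matches, arm a countdown that prints that line plus the 98 after it (drop the marker itself by adding a next to the first rule and lowering the count). The demo builds a small stand-in file.

```shell
# Demo input: a header, the marker line, then 100 numbered lines.
{ echo 'header'; echo 'GWA quasiparticle energy with Z factor (eV)'; seq 100; } > gwa.demo

awk '/GWA quasiparticle energy with Z factor \(eV\)/ { n = 99 }  # marker + 98 after
     n > 0 { print; n-- }' gwa.demo > gwa.out
# gwa.out holds the marker line plus the 98 lines that follow it
```

Because awk streams the input, this works no matter how big the file is.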
The dataset I'm working on is about 450G, with about 7000 columns and 30,000,000 rows.
I want to extract about 2000 columns from the original file to form a new file.
I have the list of the numbers of the columns I need, but don't know how to extract them.
Thanks! (14 Replies)
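One way that streams the big file in a single pass, assuming a hypothetical cols.txt listing the wanted column numbers one per line and whitespace-separated data (the demo uses tiny stand-in files):

```shell
# Demo stand-ins: cols.txt holds the wanted column numbers, one per line.
printf '2\n4\n' > cols.txt
printf 'a b c d e\nf g h i j\n' > data.txt

awk 'NR == FNR { want[++n] = $1; next }            # 1st file: column list
     { for (i = 1; i <= n; i++)
           printf "%s%s", $(want[i]), (i < n ? OFS : ORS)
     }' cols.txt data.txt > subset.txt
```

Note that some historic awks limit the number of fields per record; with ~7000 columns you may need a modern awk (on Solaris/SunOS, /usr/xpg4/bin/awk or nawk, as noted earlier in the thread).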
Hi all
I have a big file which I have attached here.
And I have to fetch certain entries and arrange them in 5 columns:
Name, Drug, DAP ID, disease, approved or not
In the attached file, data is arranged with tab-separated columns in this way:
and other data is... (2 Replies)
Hi,
I need a unix command to delete the first n (say 100) lines from a log file, without using any temporary file. I found that sed -i is a useful command for this, but it's not supported in my environment (AIX 6.1). File size is approx 100MB.
Thanks in... (18 Replies)
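Since sed -i is a GNU extension, one classic alternative that does exist on AIX is ed, which edits the file in place through its own buffer rather than a separate temporary file you would have to manage (a ~100MB file fits in memory comfortably). The demo uses a stand-in file.

```shell
seq 200 > logfile.demo                        # stand-in for the log file
printf '1,100d\nw\nq\n' | ed -s logfile.demo  # delete lines 1-100, write, quit
# logfile.demo now starts at what was line 101
```

The -s flag just suppresses ed's byte-count diagnostics in scripts.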
hi,
I have two files.
file1.sh
echo "unix"
echo "linux"
file2.sh
echo "unix linux forums"
Now the output I need is:
$./file2.sh
unix linux forums (3 Replies)
Hi,
I have a big (2.7 GB) text file. Each line has '|' separators between the columns.
I want to delete those lines which has text like '|0|0|0|0|0'
I tried:
sed '/|0|0|0|0|0/d' test.txt
Unfortunately, it scans the file but does nothing.
file content sample:... (4 Replies)
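sed without -i never modifies its input file; it writes the edited stream to standard output, so "does nothing" most likely means the output simply wasn't captured. Redirect it to a new file (filenames below are stand-ins):

```shell
# Demo stand-in for the 2.7 GB file:
printf 'a|1|2|3|4|5\nb|0|0|0|0|0\n' > test.txt

sed '/|0|0|0|0|0/d' test.txt > test.cleaned   # capture sed's stdout
# test.cleaned keeps only the first line
```

Note you cannot redirect onto test.txt itself in the same command; write to a new name and rename afterwards if needed.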
I have a command which prints a number of lines after and before the search string in a huge file:
nawk 'c-->0;$0~s{if(b)for(c=b+1;c>1;c--)print r;print;c=a}b{r=$0}' b=0 a=10 s="STRING1" FILE
The file is 5 GB.
It works great and prints the 10 lines after each line which contains the search string in... (8 Replies)
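If GNU grep happens to be available (it isn't everywhere, which is why the nawk one-liner is the portable route), the after-context part of this job can be done with the -A option; the demo uses a small stand-in file.

```shell
# Demo stand-in: a match line surrounded by numbered lines.
{ seq 5; echo 'STRING1 here'; seq 20; } > demo.file

grep -A 10 'STRING1' demo.file > demo.ctx   # the match plus 10 lines after
# demo.ctx holds 11 lines
```

-B gives lines before the match, so -A/-B together cover the b=0 a=10 case from the nawk command.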
1. Thanks to everyone who reads this post.
2. I have a log file whose size is 143M; I cannot open it with vi, and I cannot open it with xedit either.
How can I view it?
If I want to view lines 200-300, how can I do that?
3. Thanks. (3 Replies)
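For question 2, sed can print just the slice you want without opening the whole 143M file in an editor; the 300q makes it quit at line 300 so the rest of the file is never read. Pipe the result to more or less to page through it. The demo uses a stand-in file.

```shell
seq 1000 > big.demo                            # stand-in for the 143M log

sed -n -e '200,300p' -e '300q' big.demo > view.out   # lines 200-300 only
```

For plain viewing, "more bigfile" or "less bigfile" also works, since pagers read the file incrementally instead of loading it all like vi.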