The transnum values are alphanumeric, and each one is unique to its set of transactions.
Thanks.
---------- Post updated at 11:58 AM ---------- Previous update was at 11:36 AM ----------
Hi Aia,
The required transaction set is identified by the transaction reference number 'transnum', which comes from another file. I extract this value from that file; as I already explained to Don, the transaction extraction is just one part of a larger script. By the time the script reaches this section, a variable already holds the transnum value, and I use it to pull out that particular transaction set. Let me know if you have any other queries.
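A minimal sketch of that hand-off (the surrounding script is not shown in the thread, so the variable name, file name, and sample data here are all placeholders):

```shell
# Sketch only: a shell variable holding the transnum is passed to awk
# with -v.  The variable name, file name, and data are placeholders.
transnum=A1
printf '%s\n' "demo line for A1" "other line" > bigfile   # stand-in big file
awk -v t="$transnum" 'index($0, t) { print }' bigfile     # print lines containing the transnum
```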
Thanks
---------- Post updated at 01:44 PM ---------- Previous update was at 11:58 AM ----------
Hi Don,
Thanks for your command.
It worked exactly the way I required. However, when I discussed this with my senior, he said awk commands are not allowed by our onsite counterparts, since they cause issues whenever we upgrade AIX and then have to be fixed again. So a sed or Perl equivalent of the above awk would be helpful. Kindly help me out.
Thanks.
---------- Post updated at 01:52 PM ---------- Previous update was at 01:44 PM ----------
Hi Aia,
Thanks for your command.
It is not working. I exported the value of transnum to the variable t, but the output file doesn't have the required output.
Please find one of the existing inline Perl snippets we use; if you could give me your command in the same format, it would be helpful.
What the above code does is take a tilde-separated value and write its 29th field to a temp file. It is just a sample; I pasted it here to show you the punctuation of our existing code. I am now purely dependent on Perl or sed, so kindly help me.
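A Perl rendering of the same extraction idea might look like the sketch below. The record layout is not shown anywhere in the thread, so this assumes, purely for illustration, that each transaction set starts with a header line "TXN <transnum>" and runs until the next header; the shell variable t holds the transnum, as described above.

```shell
# Perl sketch under an ASSUMED layout (not shown in the thread): each set
# starts with a "TXN <transnum>" header and runs until the next header.
cat > bigfile <<'EOF'
TXN A1
line one of A1
TXN B2
line one of B2
EOF

t=A1
perl -ne '
    BEGIN { $t = shift }                 # take the transnum off @ARGV
    $in = ($1 eq $t) if /^TXN (\S+)/;    # header line: enter/leave the wanted set
    print if $in;                        # copy lines of the wanted set
' "$t" bigfile > "TX_$t"
```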
It sounds like I have wasted the last hour of my life trying to help you, but maybe this will help someone else. The following awk script uses only POSIX-specified awk features and should work on any system (although you would need to change awk to /usr/xpg4/bin/awk or nawk if, and only if, you want to run it on a Solaris/SunOS system). It takes two input files (which is what you said you had earlier). The first file (named trannums in this script) contains one or more lines, each holding a transaction number to be extracted from your big file. The second file (named bigfile in this script) is your big file of transactions. The script extracts each transaction listed in trannums into a separate output file whose name is the string TX: followed by the transaction number:
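The script itself did not survive in this copy of the thread. A minimal reconstruction of the approach described, with the caveat that the real record layout is unknown (this sketch assumes each transaction set begins with a "TXN <transnum>" header and runs until the next header), could look like:

```shell
# Sketch of the described approach, using only POSIX awk features.
# ASSUMED layout (not shown in the thread): each transaction set begins
# with a "TXN <transnum>" header and runs until the next header.
cat > trannums <<'EOF'
A1
EOF
cat > bigfile <<'EOF'
TXN A1
line one of A1
line two of A1
TXN B2
line one of B2
EOF

awk '
NR == FNR { want[$1] = 1; next }                  # 1st file: remember wanted transnums
$1 == "TXN" { out = want[$2] ? ("TX:" $2) : "" }  # header: open or close an output file
out != "" { print > out }                         # copy lines of a wanted set
' trannums bigfile
```

Each wanted transaction set ends up in its own TX:<transnum> file, header line included; sets whose transnum is not listed in trannums produce no output at all.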