The transnum values are alphanumeric, and each set of transactions has a unique transnum.
Thanks.
---------- Post updated at 11:58 AM ---------- Previous update was at 11:36 AM ----------
Hi Aia,
The required transaction set is identified by the transaction reference number 'transnum', which comes from another file. As I already explained to Don, this transaction extraction is one part of a larger script; by the time the script reaches this section, a variable already holds the transnum value, and I use it to pull out that particular transaction set. Let me know if you have any other queries.
Thanks
---------- Post updated at 01:44 PM ---------- Previous update was at 11:58 AM ----------
Hi Don,
Thanks for your command.
It worked the way I required. But when I discussed this with my senior, he said awk commands are not allowed by our onsite counterparts, since they cause issues when we upgrade AIX and then have to be fixed again. So a sed or Perl equivalent of the above awk would be helpful for me. Kindly help me out.
Thanks.
---------- Post updated at 01:52 PM ---------- Previous update was at 01:44 PM ----------
Hi Aia,
Thanks for your command.
It is not working. I exported the value of transnum to the variable t, but the output file doesn't have the required output.
Please find one of the existing inline Perl snippets we use. If you give me your command in the same format, it will be helpful.
What the above code does is take a tilde-separated value and write its 29th field to a temp file. It is just a sample; the reason I pasted it here is to show the punctuation of our existing code. I now depend purely on Perl or sed, so kindly help me.
1. Thanks to everyone who reads this post first.
2. I have a 143 MB log file; I cannot open it with vi, and I cannot open it with xedit either.
How can I view it?
If I want to view lines 200-300, how can I do that?
3. Thanks (3 Replies)
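For paging a 143 MB file, `less` (or `more`) works where vi fails, and to pull out just a line range, sed can print lines 200-300 and quit early. The sample file built with seq below is only a stand-in for the real log:

```shell
seq 1000 > big.log          # stand-in for the real 143 MB log

# Print only lines 200-300, then quit so sed doesn't read the rest of the file.
sed -n '200,300p; 300q' big.log
```

`less big.log` pages the file interactively without loading it all into memory, and the `300q` stops sed early, which matters on very large files.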
I have a command which prints N lines after and before a search string in a huge file:
nawk 'c-->0;$0~s{if(b)for(c=b+1;c>1;c--)print r;print;c=a}b{r=$0}' b=0 a=10 s="STRING1" FILE
The file is 5 GB.
It works great and prints 10 lines after each line which contains the search string in... (8 Replies)
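As a note for later readers: on systems whose grep supports the GNU-style context options, the after-context part of that nawk one-liner is available directly (the sample file below is only for illustration):

```shell
# Build a small sample to demonstrate on.
printf 'before\nSTRING1\n1\n2\n3\n4\n5\n6\n7\n8\n9\n10\nafter\n' > sample.txt

# -A 10 prints 10 lines after each match, matching the a=10 b=0 settings
# in the nawk one-liner; -B would add before-context when b > 0.
grep -A 10 'STRING1' sample.txt
```

`-A`/`-B` are not in POSIX grep, so this is only portable where GNU or BSD grep is installed; the nawk version remains the portable fallback.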
Hi,
I have a big (2.7 GB) text file. Each line uses '|' as the separator between columns.
I want to delete the lines which contain text like '|0|0|0|0|0'.
I tried:
sed '/|0|0|0|0|0/d' test.txt
Unfortunately, it scans the file but does nothing.
file content sample:... (4 Replies)
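The likely issue is that sed without -i writes the filtered result to standard output and leaves the file untouched; redirecting to a new file and moving it back gives the in-place effect. The sample contents of test.txt below are assumed for demonstration:

```shell
# Rebuild a small sample file (the real content isn't shown in full).
printf 'keep|1|2\ndrop|0|0|0|0|0\nkeep|3|4\n' > test.txt

# sed prints the result to stdout; capture it and replace the original.
sed '/|0|0|0|0|0/d' test.txt > test.txt.new && mv test.txt.new test.txt
```

After this, test.txt no longer contains the '|0|0|0|0|0' lines; the earlier command was working, its output just went to the screen instead of the file.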
Hi,
I have two files.
file1.sh
echo "unix"
echo "linux"
file2.sh
echo "unix linux forums"
Now the output I need is:
$./file2.sh
unix linux forums (3 Replies)
Hi,
I need a Unix command to delete the first n (say 100) lines from a log file, without using any temporary file. I found that sed -i is a useful command for this, but it is not supported in my environment (AIX 6.1). The file size is approx 100 MB.
Thanks in... (18 Replies)
Hi all
I have a big file, which I have attached here.
I have to fetch certain entries and arrange them in 5 columns:
Name, Drug, DAP ID, disease, approved or not. In the attached file, the data is arranged in tab-separated columns in this way:
and other data is... (2 Replies)
The dataset I'm working on is about 450 GB, with about 7,000 columns and 30,000,000 rows.
I want to extract about 2,000 of the columns from the original file to form a new file.
I have the list of the column numbers I need, but I don't know how to extract them.
Thanks! (14 Replies)
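Assuming the columns are whitespace-separated and the wanted column numbers sit one per line in a list file (both assumptions; the names cols.txt and data.txt are made up), awk can read the list first and then emit just those fields, in list order, for every row:

```shell
# Toy stand-ins for the real 450 GB file and the 2000-entry column list.
printf '2\n5\n' > cols.txt
printf 'a b c d e f\ng h i j k l\n' > data.txt

# First pass (NR==FNR) loads the wanted column numbers; the second pass
# prints those fields, in list order, for every data row.
awk 'NR==FNR {want[++n] = $1; next}
     {out = ""
      for (i = 1; i <= n; i++) out = out (i > 1 ? OFS : "") $want[i]
      print out}' cols.txt data.txt
```

For tab- or comma-separated data, `cut -f"$(paste -sd, cols.txt)"` is another option when the list is already in file order, since cut cannot reorder fields.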
Dear all,
I have been stuck on this problem for some days.
I have a very big file which cannot be opened with the vi command.
There are 200 loops in this file, and each loop has one line like this:
GWA quasiparticle energy with Z factor (eV)
And I need the 98 lines that come after this line.
Is... (6 Replies)
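A common awk idiom handles "a marker line plus the next N lines" without opening the file in an editor: arm a countdown when the marker matches and print while it is positive. The small sample below uses n=3 instead of 98 just to keep the demo short:

```shell
# Two marker blocks with 3 follow-up lines each, standing in for the real file.
printf 'GWA quasiparticle energy with Z factor (eV)\na1\na2\na3\nskip\nGWA quasiparticle energy with Z factor (eV)\nb1\nb2\nb3\n' > out.dat

# On a marker line, arm a countdown of n+1 (the marker itself plus n lines);
# 'c && c--' prints while the countdown is positive. Use -v n=98 on the real file.
awk -v n=3 '/GWA quasiparticle energy with Z factor/ {c = n + 1} c && c--' out.dat
```

Where GNU grep is available, `grep -A 98 'GWA quasiparticle energy' file` does the same, though it inserts `--` separators between the 200 blocks.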
Hi all,
I have a file like the one below, and I want to extract only the regions which are big and continuous:
chr1 3280000 3440000
chr1 3440000 3920000
chr1 3600000 3920000 # this region falls within 3440000-3920000, so I don't want it printed in the output
chr1 3920000 4800000
chr1 ... (2 Replies)
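Reading "big and continuous" as "drop any interval fully contained in an earlier one" (an interpretation, since the thread does not define it precisely), and assuming the file is sorted by chromosome and start position, it is enough to track the furthest end seen so far:

```shell
# Sample from the post.
printf 'chr1 3280000 3440000\nchr1 3440000 3920000\nchr1 3600000 3920000\nchr1 3920000 4800000\n' > regions.txt

# With starts sorted ascending, an interval is contained iff its end does not
# extend past the furthest end seen so far on the same chromosome.
awk '$1 != chr || $3 > maxend {print; chr = $1; maxend = $3}' regions.txt
```

On the sample this keeps three regions and drops chr1 3600000 3920000, matching the output described in the post.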
Discussion started by: amrutha_sastry
hxextract
HXEXTRACT(1) HTML-XML-utils HXEXTRACT(1)
NAME
hxextract - extract selected elements from a HTML or XML file
SYNOPSIS
hxextract [ -h | -? ] [ -x ] [ -s text ] [ -e text ] [ -b base ] element-or-class [ -c configfile | file-or-URL ]
DESCRIPTION
hxextract outputs all elements with a certain name and/or class.
Input must be well-formed, since no HTML heuristics are applied.
OPTIONS
The following options are supported:
-x Use XML format conventions.
-s text Insert text at the start of the output.
-e text Insert text at the end of the output.
-b base URL base
-c configfile
Read @chapter lines from configfile (lines must be of the form "@chapter filename") and extract elements from each of those files.
-h, -? Print command usage.
OPERANDS
The following operands are supported:
element-or-class
The name of an element to extract (e.g., "H2"), or the name of a class preceded by "." (e.g., ".example"), or a combination of both (e.g., "H2.example").
file-or-URL
A file name or a URL. To read from standard input, use "-".
ENVIRONMENT
To use a proxy to retrieve remote files, set the environment variables http_proxy and ftp_proxy. E.g., http_proxy="http://localhost:8080/"
BUGS
Remote files (specified with a URL) are currently only supported for HTTP. Password-protected files or files that depend on HTTP "cookies"
are not handled. (You can use tools such as curl(1) or wget(1) to retrieve such files.)
SEE ALSO
hxselect(1)

6.x 10 Jul 2011 HXEXTRACT(1)