I have a pipe-delimited file with records spread across many lines.
I need to extract those records
1) having X at the beginning of the record
2) and having at least one Y at the beginning before the next record begins
eg:
X|Rec1|
A|Rec1|
Y|Rec1|
X|Rec2|
Y|Rec2|
Z|Rec3|
X|Rec4|
M|Rec4|
... (4 Replies)
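One way to approach this with awk, assuming the second pipe-delimited field identifies the record group (as in the Rec1/Rec2 example above) and that groups are contiguous — a sketch, not a definitive answer:

```shell
# Build the sample input from the example above
cat > records.txt <<'EOF'
X|Rec1|
A|Rec1|
Y|Rec1|
X|Rec2|
Y|Rec2|
Z|Rec3|
X|Rec4|
M|Rec4|
EOF

# Buffer each group of lines sharing field 2; print the group only if
# its first line starts with X and it contains at least one Y line.
awk -F'|' '
  $2 != prev {
    if (ok && hasY) printf "%s", buf
    buf = ""; ok = ($1 == "X"); hasY = 0; prev = $2
  }
  $1 == "Y" { hasY = 1 }
  { buf = buf $0 ORS }
  END { if (ok && hasY) printf "%s", buf }
' records.txt
```

This prints the Rec1 and Rec2 groups and skips Rec3 (no leading X line) and Rec4 (no Y line).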
Hello,
I have got one file with more than 120 million records (35 GB in size). I have to extract some relevant data from the file based on some parameter and generate another output file.
What will be the best and fastest way to extract the new file?
sample file format :--... (2 Replies)
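Since the sample format is truncated above, the following is only a generic sketch: a single sequential awk pass is usually the fastest pure-shell approach for a file this size, because it reads the data once and keeps nothing in memory. The file name `huge.dat`, the pipe delimiter, and the filter on field 3 are placeholder assumptions:

```shell
# Placeholder data standing in for the 35 GB file
cat > huge.dat <<'EOF'
a|keep|X
b|drop|Y
c|keep|X
EOF

# One sequential pass: keep only records whose 3rd field is X
awk -F'|' '$3 == "X"' huge.dat > extracted.dat
cat extracted.dat
```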
The data in my file has no delimiters. It looks like this:
H52082320024740010PH333200612290000930 0.0020080131
D5208232002474000120070306200703060580T1502 TT 1.00
H52082320029180003PH333200702150001 30 100.0020080205
D5208232002918000120070726200707260580T1502 ... (3 Replies)
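With no delimiters, awk's substr() is the usual tool for slicing fixed-width fields by character position. The offsets below (record type in column 1, a 13-character ID starting at column 2) are illustrative guesses, not taken from the post:

```shell
cat > fixed.txt <<'EOF'
H52082320024740010PH333200612290000930 0.0020080131
D5208232002474000120070306200703060580T1502 TT 1.00
EOF

# Cut fields out of each fixed-width line by character position
awk '{
  type = substr($0, 1, 1)    # H = header, D = detail (assumed)
  id   = substr($0, 2, 13)   # assumed 13-character ID
  print type, id
}' fixed.txt
```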
Example CSV:
$ cat myfile
HDR
COL_A,COL_B,COL_C
X,Y,Z
Z,Y,X
...
X,W,Z
In this example, I know that column names are on the second line. I also know that I would like to print lines where COL_A="X" and COL_C="Z". In this simple example, I know that COL_A = $1 and COL_C = $3, and hence... (6 Replies)
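A common way to avoid hard-coding $1 and $3 is to read the header on line 2 into a name-to-index map and filter by name — a sketch using the rows shown above:

```shell
cat > myfile <<'EOF'
HDR
COL_A,COL_B,COL_C
X,Y,Z
Z,Y,X
X,W,Z
EOF

# Map column names to positions from line 2, then filter by name
awk -F',' '
  NR == 2 { for (i = 1; i <= NF; i++) col[$i] = i; next }
  NR > 2 && $(col["COL_A"]) == "X" && $(col["COL_C"]) == "Z"
' myfile
```

Changing the filter to another column then only requires changing its name, not renumbering fields.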
Hello Friends,
I have a file (InputFile.csv) with the following pipe-delimited columns:
ColA|ColB|ColC|ColD|ColE|ColF
Now for this file, I have to get those records which fulfil the following condition:
If "ColB" is NOT NULL and "ColD" has one of the following values... (9 Replies)
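The list of allowed ColD values is truncated in the post, so "V1" and "V2" below are placeholders; assuming ColB is field 2 and ColD is field 4, the filter is a one-liner:

```shell
cat > InputFile.csv <<'EOF'
a1|b1|c1|V1|e1|f1
a2||c2|V1|e2|f2
a3|b3|c3|other|e3|f3
a4|b4|c4|V2|e4|f4
EOF

# Keep rows where field 2 (ColB) is non-empty and field 4 (ColD)
# is one of the allowed values ("V1", "V2" are placeholders)
awk -F'|' '$2 != "" && ($4 == "V1" || $4 == "V2")' InputFile.csv
```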
Hi,
I am looking for an awk one-liner for the issue below.
input file
ABC 1234 abc 12345
ABC 4567 678 XYZ
xyz ght 678
ABC 787 yyuu
ABC 789 7890 777
zxr hyip hyu
mno uii 678 776
ABC ty7 888
All lines should start with ABC as the first field. If a record has another value for the 1st... (7 Replies)
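The requirement is cut off above, but one common reading is that continuation lines (those not starting with ABC) should be joined onto the preceding ABC record. A sketch under that assumption:

```shell
cat > input.txt <<'EOF'
ABC 1234 abc 12345
ABC 4567 678 XYZ
xyz ght 678
ABC 787 yyuu
ABC 789 7890 777
zxr hyip hyu
mno uii 678 776
ABC ty7 888
EOF

# Start a new output record at each ABC line; append everything else
# to the record currently being built.
awk '
  $1 == "ABC" { if (buf != "") print buf; buf = $0; next }
  { buf = buf " " $0 }
  END { if (buf != "") print buf }
' input.txt
```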
Hi Gents,
I have a file, file 1, like this:
1 1000 20
2 2000 30
3 1000 40
5 1000 50
And I have another file, file 2, like this:
2 1
I would like to get from file 1 the complete lines whose key appears in file 2; the key to compare is column 2, so the output should be:
2 2000 30
I was trying to get it... (5 Replies)
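The classic two-file awk idiom (NR == FNR to load the first file's keys) handles this. The post's description of which column is the key is ambiguous; the sketch below matches file 2's first column against file 1's first column, which reproduces the expected output:

```shell
cat > file1 <<'EOF'
1 1000 20
2 2000 30
3 1000 40
5 1000 50
EOF
cat > file2 <<'EOF'
2 1
EOF

# First pass (NR == FNR) loads file2 keys into an array; second pass
# prints file1 lines whose first column is a known key.
awk 'NR == FNR { keys[$1]; next } $1 in keys' file2 file1
```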
Hi,
I have a 20 GB pipe-delimited file with many duplicate records.
I need an awk script to extract the unique records from the file and put it into another file.
Kindly help.
Thanks,
Arun (1 Reply)
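The standard awk one-liner for this keeps the first occurrence of every line. One caveat for a 20 GB file: the in-memory hash grows with the number of *unique* lines, so if most records are unique, a disk-based `sort -u` is the safer fallback. File names here are placeholders:

```shell
cat > input.dat <<'EOF'
a|1
b|2
a|1
c|3
b|2
EOF

# Print a line only the first time it is seen
awk '!seen[$0]++' input.dat > unique.dat
cat unique.dat
```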
Hi,
I have a shell script which extracts records from Oracle to a Unix file.
sqlplus -s ${WMD_DM_CONNECT} <<EOF >$tmpfile
set heading off
set pagesize 0
set feedback off
select CD_DESC||'|'||CD_ID||'|'||'Arun'||'|'||'Montu' from WMD_SYS_CD_LKUP
where CD_TYP =... (5 Replies)
Discussion started by: Arun Mishra
5 Replies