Hi,
I am processing a file and would like to delete duplicate records as indicated by one of its columns, e.g.
COL1 COL2 COL3
A 1234 1234
B 3k32 2322
C Xk32 TTT
A NEW XX22
B 3k32 ... (7 Replies)
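A common approach (a sketch, assuming whitespace-separated fields and that the first occurrence of each COL1 value should be kept) is awk's "seen" array idiom:

```shell
# Print each line only the first time its first column appears;
# sample rows are taken from the question (the truncated last row omitted).
printf 'COL1 COL2 COL3\nA 1234 1234\nB 3k32 2322\nC Xk32 TTT\nA NEW XX22\n' |
awk '!seen[$1]++'
```

`!seen[$1]++` is true only on a key's first appearance, so the second "A" record is dropped; in practice, replace the printf with your input file.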
My input file:
AVI.out <detail>named as the RRM .</detail>
AVI.out <detail>Contains 1 RRM .</detail>
AR0.out <detail>named as the tellurite-resistance.</detail>
AWG.out <detail>Contains 2 HTH .</detail>
ADV.out <detail>named as the DENR family.</detail>
ADV.out ... (10 Replies)
Hi friends, I have a huge set of data stored in a file, which is as shown below:
a.dat:
RAO 1869 12 19 0 0 0.00 17.9000 82.3000 10.0 0 0.00 0 3.70 0.00 0.00 0 0.00 3.70 4 NULL
LEE 1870 4 11 1 0 0.00 30.0000 99.0000 0.0 0 0.00 0 0.00 0.00 0.00 0 ... (3 Replies)
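If the last record per key should win rather than the first (a sketch assuming column 1, the station name, is the key and GNU `tac` is available; on BSD, `tail -r` plays the same role), the file can be reversed around the same seen-array idiom:

```shell
# Toy stand-in for a.dat: two records share the key RAO in column 1.
# Reverse the lines, keep first occurrences, reverse back: the last one wins.
printf 'RAO 1869 12 19\nLEE 1870 4 11\nRAO 1900 1 1\n' |
tac | awk '!seen[$1]++' | tac
```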
I have a csv file that I would like to remove duplicate lines from, based on field 1, and then sort. I don't care about any of the other fields, but I still want to keep their data intact. I was thinking I could do something like this, but I have no idea how to print the full line with it. Please show any method... (8 Replies)
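Since sorted output is wanted anyway, `sort` alone can do both jobs here (a sketch assuming a comma delimiter and that the first input line per key should survive):

```shell
# -t, sets the delimiter, -k1,1 compares only field 1, -u keeps one line
# per distinct key, and -s makes the kept line the first one in input order.
printf 'b,2,x\na,1,y\nb,9,z\n' | sort -t, -s -k1,1 -u
```

In practice: `sort -t, -s -k1,1 -u yourfile.csv`. Without `-s`, which of several equal-keyed lines survives is unspecified.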
Hi,
How can I remove duplicates from a file based on a group in another column? For example:
Test1|Test2|Test3|Test4|Test5
Test1|Test6|Test7|Test8|Test5
Test1|Test9|Test10|Test11|Test12
Test1|Test13|Test14|Test15|Test16
Test17|Test18|Test19|Test20|Test21
Test17|Test22|Test23|Test24|Test5
... (2 Replies)
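One reading of "group on other column" is that column 1 defines the group and one representative line per group is enough; under that assumption, the seen-array idiom with a `|` field separator applies:

```shell
# One line per distinct value of the first |-separated column
# (sample rows shortened from the question).
printf 'Test1|Test2|Test3\nTest1|Test6|Test7\nTest17|Test18|Test19\n' |
awk -F'|' '!seen[$1]++'
```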
Hi All,
I have an input file like below...
CA009156;20091003;M;AWBKCA72;123;;CANADIAN WESTERN BANK;EDMONTON;;2300, 10303, JASPER AVENUE;;T5J 3X6;;
CA009156;20091003;M;AWBKCA72;321;;CANADIAN WESTERN BANK;EDMONTON;;2300, 10303, JASPER AVENUE;;T5J 3X6;;
CA009156;20091003;M;AWBKCA72;231;;CANADIAN... (2 Replies)
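In this sample the duplicate rows appear to differ only in the fifth `;`-field, so one hedged option (assuming field 5 is the only one to ignore when comparing) is to blank that field for the dedup key while printing the untouched line:

```shell
# Save the original line, blank field 5 (this rebuilds $0, used as the key),
# and print the saved line only when that key is new.
# The sample rows are shortened stand-ins for the bank records.
printf 'X;1;M;K;123;rest\nX;1;M;K;321;rest\nY;2;M;K;9;rest\n' |
awk -F';' '{ line = $0; $5 = ""; if (!seen[$0]++) print line }'
```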
Hi,
Sometimes I get duplicated values in my files,
bundle_identifier= B
Sometext=ABC
bundle_identifier= A
bundle_unit=500
Sometext123=ABCD
bundle_unit=400
I need to check whether there are duplicated values; if yes, I need to check whether the value is A or B when bundle_identifier ... (2 Replies)
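A first step could be simply reporting which keys occur more than once (a sketch assuming the file holds key=value pairs and the part before `=` is the key):

```shell
# seen[$1]++ is false on a key's first appearance and true afterwards,
# so only repeated keys are reported.
printf 'bundle_identifier= B\nSometext=ABC\nbundle_identifier= A\nbundle_unit=500\nbundle_unit=400\n' |
awk -F'=' 'seen[$1]++ { print "duplicate key:", $1 }'
```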
Dear members, I need to filter a file based on the 8th column (that is, the id); the other columns do not matter, because I want just one line per id, removing the duplicate lines based on this id (8th column), and it does not matter which duplicate is removed.
example of my file... (3 Replies)
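The same seen-array idiom keyed on the 8th field fits this case (a sketch assuming whitespace-separated columns; since any duplicate may be removed, keeping the first occurrence is fine):

```shell
# Keep one line per distinct value in column 8; the sample ids are made up.
printf 'a b c d e f g id1\nq r s t u v w id1\nx x x x x x x id2\n' |
awk '!seen[$8]++'
```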
Dear community,
I have to remove duplicate lines from a file containing a very large number of rows (millions?) based on the 1st and 3rd columns.
The data are like this:
Region 23/11/2014 09:11:36 41752
Medio 23/11/2014 03:11:38 4132
Info 23/11/2014 05:11:09 4323... (2 Replies)
Discussion started by: Lord Spectre
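For a composite key the idiom extends naturally (a sketch assuming whitespace-separated fields, so the timestamp splits into date = $2 and time = $3):

```shell
# "$1, $3" joins the two fields with awk's SUBSEP to form one dedup key;
# the third sample row repeats the Region/09:11:36 pair and is dropped.
printf 'Region 23/11/2014 09:11:36 41752\nMedio 23/11/2014 03:11:38 4132\nRegion 24/11/2014 09:11:36 99\n' |
awk '!seen[$1, $3]++'
```

On millions of rows, awk holds one in-memory array entry per distinct key; if that is too much, `sort -u -k1,1 -k3,3` deduplicates on the same fields and spills to temporary files instead (at the cost of reordering the output).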
LEARN ABOUT DEBIAN
modemtest
MODEMTEST(1p) User Contributed Perl Documentation MODEMTEST(1p)
NAME
modemtest - Tool for examining your modem through Perl's Device::SerialPort
SYNOPSIS
modemtest [OPTS] [DEVICE [BAUD [DATA [PARITY [STOP [FLOW]]]]]]
DEVICE Device to use as a serial port (default: "/dev/modem")
BAUD Serial speed to use (default: 9600)
DATA Number of databits to use (default: 8)
PARITY Type of parity to use (default: "none")
STOP Number of stop bits to use (default: 1)
FLOW Kind of flow control to use (default: "none")
-h, --help Help report
--skip-status Skip modem status bit tests
--hide-possible Don't show all possible settings
DESCRIPTION
Some systems, serial ports, and modems behave in strange ways. To test the capabilities of Perl's Device::SerialPort, this tool queries the
system settings for the given DEVICE, and attempts to set up the port and send the initialization string "ATE1" to the modem, reporting the
results seen.
SEE ALSO
Device::SerialPort(3), perl(1)
AUTHOR
Kees Cook <kees@outflux.net>.
COPYRIGHT AND LICENSE
Copyright 2000-2004 by Kees Cook <kees@outflux.net>.
This program is free software; you may redistribute it and/or modify it under the same terms as Perl itself.
perl v5.14.2 2006-10-28 MODEMTEST(1p)