06-24-2005
The best approach would probably be to push all the data into Oracle using SQL*Loader, create an index on the fly on the key you want to be unique, and then fire a query to get the unique records.
Any better alternatives?
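If loading into Oracle just for deduplication is overkill, the same result can usually be had directly in the shell. A sketch, assuming a whitespace-delimited file data.txt with the unique key in field 1 (both filenames here are placeholders):

```shell
# Keep one record per unique key (field 1), preserving input order;
# seen[] counts occurrences, so only the first is printed.
awk '!seen[$1]++' data.txt > unique.txt

# Alternatively, if output order does not matter, sort on the key
# and keep one line per key value (which line survives per key is
# unspecified):
sort -u -k1,1 data.txt > unique.txt
```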
10 More Discussions You Might Find Interesting
1. Shell Programming and Scripting
hello
i need help to remove a directory. The directory is not empty; it contains
several subdirectories and files.
The total number of files in one directory is 12,24,446.
rm -rf doesn't work; it prompts for every file.
I want to delete without prompting and... (6 Replies)
Discussion started by: getdpg
2. UNIX for Dummies Questions & Answers
Hello,
I can remove duplicate entries in a file by:
sort File1 | uniq > File2
but how can I remove duplicates without sorting the file?
I tried cat File1 | uniq > File2 but it doesn't work
thanks (4 Replies)
Discussion started by: orahi001
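The usual answer to this one is awk, which removes duplicate lines without sorting, keeping the first occurrence of each (a sketch; File1 and File2 match the names in the question):

```shell
# Each whole line is used as the array key, so only its first
# occurrence is printed; the original order is preserved.
awk '!seen[$0]++' File1 > File2
```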
3. Shell Programming and Scripting
Hello Experts,
I have two files named old and new. Below are my example files. I need to compare them and print the records that only exist in my new file. I tried the below awk script; it works perfectly well if the records match exactly, but the issue I have is my old file has got extra... (4 Replies)
Discussion started by: forumthreads
4. Shell Programming and Scripting
Hi,
I need to remove duplicates from a file. The file will be like this
0003 10101 20100120 abcdefghi
0003 10101 20100121 abcdefghi
0003 10101 20100122 abcdefghi
0003 10102 20100120 abcdefghi
0003 10103 20100120 abcdefghi
0003 10103 20100121 abcdefghi
Here, if the first column and... (6 Replies)
Discussion started by: gpaulose
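The teaser is cut off, but assuming duplicates here are defined by the first two columns, a sketch in awk (file and file.dedup are placeholder names):

```shell
# Keep the first record for each (field 1, field 2) pair;
# the comma joins the two fields into one array key via SUBSEP.
awk '!seen[$1,$2]++' file > file.dedup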
5. Shell Programming and Scripting
Hi
I need a script that removes the duplicate records and write it to a new file
for example I have a file named test.txt and it looks like
abcd.23
abcd.24
abcd.25
qwer.25
qwer.26
qwer.98
I want to pick only $1, compare it with the next record, and the output should be
abcd.23... (6 Replies)
Discussion started by: antointoronto
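Assuming the goal is to keep only the first record for each prefix before the dot, awk can split on "." and key on the first piece (a sketch; test.txt matches the name above, new.txt is a placeholder):

```shell
# Split on "." and keep only the first record per prefix,
# so abcd.23 survives and abcd.24 / abcd.25 are dropped.
awk -F. '!seen[$1]++' test.txt > new.txt
```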
6. Shell Programming and Scripting
Hi,
I'm using the below command to sort and remove duplicates in a file, but I need to apply the result to the same file instead of redirecting it to another.
Thanks (6 Replies)
Discussion started by: dvah
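sort can write back to its own input file with -o, because it reads all input before opening the output (a sketch; file.txt is a placeholder name):

```shell
# Sort, drop duplicates, and overwrite the input file in place.
# Redirecting with > would truncate the file before sort reads it;
# -o avoids that.
sort -u file.txt -o file.txt
```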
7. Shell Programming and Scripting
OK
I have two file lists.
The first is formatted like this:
/path/to/the/actual/file/location/filename.jpg
and has up to a million records.
The second list shows filename.jpg where there is more than one instance,
and has maybe up to 65,000 records.
I want to copy files... (4 Replies)
Discussion started by: Bashingaway
8. Shell Programming and Scripting
I need to use a bash script to remove duplicate files from a download list, but I cannot use uniq because the URLs are different.
I need to go from this:
http://***/fae78fe/file1.wmv
http://***/39du7si/file1.wmv
http://***/d8el2hd/file2.wmv
http://***/h893js3/file2.wmv
to this:
... (2 Replies)
Discussion started by: locoroco
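Since the URLs here differ only in the part before the filename, awk can key on the last path component instead of the whole line (a sketch; downloads.txt and downloads.dedup are placeholder names):

```shell
# Treat "/" as the field separator; $NF is the last path component
# (the filename), so only the first URL per filename is kept.
awk -F/ '!seen[$NF]++' downloads.txt > downloads.dedup
```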
9. Shell Programming and Scripting
I have a file with the following format:
fields separated by "|"
title1|something class|long...content1|keys
title2|somhing class|log...content1|kes
title1|sothing class|lon...content1|kes
title3|shing cls|log...content1|ks
I want to remove all duplicates with the same "title" field (the... (3 Replies)
Discussion started by: dtdt
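With "|" as the field separator, awk can keep just the first record per title (a sketch; file and file.dedup are placeholder names):

```shell
# "|" is the field separator; keep only the first record for
# each title (field 1).
awk -F'|' '!seen[$1]++' file > file.dedup
```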
10. Shell Programming and Scripting
Hi, I have the below file structure.
200,1245,E1,1,E1,,7611068,KWH,30, ,,,,,,,,
200,1245,E1,1,E1,,7611070,KWH,30, ,,,,,,,,
300,20140223,0.001,0.001,0.001,0.001,0.001
300,20140224,0.001,0.001,0.001,0.001,0.001
300,20140225,0.001,0.001,0.001,0.001,0.001
300,20140226,0.001,0.001,0.001,0.001,0.001... (1 Reply)
Discussion started by: tejashavele
LEARN ABOUT DEBIAN
urifind
URIFIND(1p) User Contributed Perl Documentation URIFIND(1p)
NAME
urifind - find URIs in a document and dump them to STDOUT.
SYNOPSIS
$ urifind file
DESCRIPTION
urifind is a simple script that finds URIs in one or more files (using "URI::Find") and outputs them to STDOUT. That's it.
To find all the URIs in file1, use:
$ urifind file1
To find the URIs in multiple files, simply list them as arguments:
$ urifind file1 file2 file3
urifind will read from "STDIN" if no files are given or if a filename of "-" is specified:
$ wget http://www.boston.com/ -O - | urifind
When multiple files are listed, urifind prefixes each found URI with the file from which it came:
$ urifind file1 file2
file1: http://www.boston.com/index.html
file2: http://use.perl.org/
This can be turned on for single files with the "-p" ("prefix") switch:
$ urifind -p file3
file3: http://fsck.com/rt/
It can also be turned off for multiple files with the "-n" ("no prefix") switch:
$ urifind -n file1 file2
http://www.boston.com/index.html
http://use.perl.org/
By default, URIs will be displayed in the order found; to sort them ascii-betically, use the "-s" ("sort") option. To reverse sort them,
use the "-r" ("reverse") flag ("-r" implies "-s").
$ urifind -s file1 file2
http://use.perl.org/
http://www.boston.com/index.html
mailto:webmaster@boston.com
$ urifind -r file1 file2
mailto:webmaster@boston.com
http://www.boston.com/index.html
http://use.perl.org/
Finally, urifind supports limiting the returned URIs by scheme or by arbitrary pattern, using the "-S" option (for schemes) and the "-P"
option. Both "-S" and "-P" can be specified multiple times:
$ urifind -S mailto file1
mailto:webmaster@boston.com
$ urifind -S mailto -S http file1
mailto:webmaster@boston.com
http://www.boston.com/index.html
"-P" takes an arbitrary Perl regex. It might need to be protected from the shell:
$ urifind -P 's?html?' file1
http://www.boston.com/index.html
$ urifind -P '.org' -S http file4
http://www.gnu.org/software/wget/wget.html
Add a "-d" to have urifind dump the regexes generated from "-S" and "-P" to "STDERR". "-D" does the same but exits immediately:
$ urifind -P '.org' -S http -D
$scheme = '^(http):'
@pats = ('^(http):', '.org')
To remove duplicates from the results, use the "-u" ("unique") switch.
OPTION SUMMARY
-s Sort results.
-r Reverse sort results (implies -s).
-u Return unique results only.
-n Don't include filename in output.
-p Include filename in output (0 by default, but 1 if multiple files are included on the command line).
-P $re
Print only lines matching regex '$re' (may be specified multiple times).
-S $scheme
Only this scheme (may be specified multiple times).
-h Help summary.
-v Display version and exit.
-d Dump compiled regexes for "-S" and "-P" to "STDERR".
-D Same as "-d", but exit after dumping.
AUTHOR
darren chamberlain <darren@cpan.org>
COPYRIGHT
(C) 2003 darren chamberlain
This library is free software; you may distribute it and/or modify it under the same terms as Perl itself.
SEE ALSO
URI::Find
perl v5.14.2 2012-04-08 URIFIND(1p)