Shell Programming and Scripting: Using an awk script to identify dupes in two files
Post 302499158 by gimley, 02-23-2011, 11:57 AM

Re: Using an awk script to identify dupes in two files

I am sorry to hassle you guys like this. I am on Windows Vista, where locale does not work. My code page is Windows-1252 (ANSI, Latin I), which should handle lower as well as upper ASCII characters.
I am still perplexed as to why the single characters work whereas the longer strings do not.
Sorry to be such a bother, but this is a real mystery.
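
For what it is worth: on a system where locale handling does work (a GNU/Linux box, or Cygwin on Windows, for example), forcing the C locale makes awk compare strings byte by byte, which rules out locale-dependent matching of the upper-ASCII characters. This is a minimal sketch only; file1 and file2 are placeholder names:

# Force byte-wise string comparison, independent of the active locale,
# so Windows-1252 upper-ASCII bytes are matched literally.
LC_ALL=C awk 'BEGIN{FS="="} NR==FNR{O[$1]++; next} !($1 in O)' file1 file2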

Gimley

---------- Post updated at 11:57 AM ---------- Previous update was at 11:49 AM ----------

Sorry guys, my goof-up. In my excitement to get the data working, I had forgotten to set the field separator:
BEGIN {FS="="}           # split each line on "=" so $1 is the headword
NR==FNR{O[$1]++;next}    # first file: remember every headword ($1)
!($1 in O)               # second file: print lines whose headword was never seen
It works beautifully and ran through the records like a breeze.
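
For anyone who lands on this thread later, this is how the finished one-liner is invoked; olddict.txt and newdict.txt are placeholder file names:

# Print every line of newdict.txt whose headword (the part before "=")
# does not already appear in olddict.txt.
awk 'BEGIN{FS="="} NR==FNR{O[$1]++; next} !($1 in O)' olddict.txt newdict.txt > additions.txt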
Please excuse my stupidity.
Best regards and many thanks to all who helped me out.
GIMLEY
 
