Median and max of duplicate rows

# 1  
Old 07-31-2013

Hi all,

Please help me with this: I want to extract the duplicate rows (keyed on column 1) in a file where the key repeats at least 4 times, and then summarize each group by its max, mean, median and min. The file is sorted by column 1, so all the repeated rows appear together.

If the number of elements is odd, the median is the middle one, e.g. the 4th element among 7 sorted numbers: element number (n+1)/2.
If the number of elements is even, it is the average of the middle two, e.g. the average of the 4th and 5th elements for a set of 8 sorted numbers: the average of elements n/2 and n/2 + 1.
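
In awk, that rule can be written as a small function (a minimal sketch, assuming the values are already sorted in an array b[1..n]):

Code:
awk '
# median of the sorted array b[1..n]
function median(b, n) {
    if (n % 2)                            # odd count: the single middle element
        return b[(n + 1) / 2]
    return (b[n / 2] + b[n / 2 + 1]) / 2  # even count: average of the two middle ones
}
BEGIN { split("1 2 3 100", b, " "); print median(b, 4) }  # prints 2.5
'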

Code:
Input

R1 1
R1 2
R1 3
R2 1
R2 2
R2 3
R2 100
R3 5


Output

R2 100 26.5 2.5 1

I figured out that the uniq -d option will give me the duplicate lines, but how do I require at least 4 repeats?
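
For the "at least 4" part, counting the sorted keys with uniq -c should work (a sketch; file stands for the input file):

Code:
# keep only keys whose count in column 1 is at least 4
cut -d' ' -f1 file | uniq -c | awk '$1 >= 4 {print $2}'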


Also, I tried to compute the mean and median with the code below, but I am getting errors.


Code:
sort -n file | awk ' { a[i++]=$2;  N[$1]++}
    END { for (key in i) {
                        avg = sum[key] / N[key];}
x=int((i+1)/2); 
if (x < (i+1)/2)
 print (a[x-1]+a[x])/2 " " avg; 
else print a[x-1] " " avg; }'

# 2  
Old 07-31-2013
In your END block, for (key in i) tries to iterate over i, which is a scalar, and sum[] is never filled in, so the script errors out. Is it this that you are after?

Code:
sort file -k1,1 -k2,2n | awk '
{nbr[$1]++; a[$1]= a[$1] ? a[$1]"@"$2 : $2; sum[$1]+=$2}

END {
    for (key in a) {
        split(a[key], b, "@")
        len = length(b)
        for (i=1;i<=len;i++) {
            avg = sum[key] / nbr[key];
            if (nbr[key]%2) {
                median = b[(nbr[key]+1)/2]
            } else {
                median = (b[(nbr[key]/2)+1] + b[nbr[key]/2])/2
            }
        }
        printf "%s %s %s %s %s\n", key, b[len], avg, median, b[1]
    }
}
'
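
Running that on the sample input should give one summary line per key (max, mean, median, min; note that the order of for (key in a) is unspecified in awk):

Code:
R1 3 2 2 1
R2 100 26.5 2.5 1
R3 5 5 5 5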


# 3  
Old 07-31-2013
This works well for all rows, but how do I print only the rows whose key repeats at least 4 times?
I tried the following modification but it prints out gibberish:

Code:
sort file -k1,1 -k2,2n | awk '
{nbr[$1]++; a[$1]= a[$1] ? a[$1]"@"$2 : $2; sum[$1]+=$2}

END {
    for (key in a) {
        split(a[key], b, "@")
        len = length(b)
        for (i=1;i<=len;i++) {
            avg = sum[key] / nbr[key];
            if (nbr[key]%2) {
                median = b[(nbr[key]+1)/2]
            } else {
                median = (b[(nbr[key]/2)+1] + b[nbr[key]/2])/2
            }
        }
        if (len > 3) {
            printf "%s %s %s %s %s\n", key, b[len], avg, median, b[1]
        }
    }
}
'

Also, my original files are quite large, e.g. 500 MB each. Is there a way to speed this up? Right now it takes forever to run.


Update: this seems to run fine, but if anything can be done to speed it up, please let me know.

Code:
sort testmed.txt -k1,1 -k2,2n | awk '
{nbr[$1]++; a[$1]= a[$1] ? a[$1]"@"$2 : $2; sum[$1]+=$2}

END {
    for (key in a) {
        split(a[key], b, "@")
        len = length(b)
        for (i=1;i<=len;i++) {
            avg = sum[key] / nbr[key];
            if (nbr[key]%2) {
                median = b[(nbr[key]+1)/2]
            } else {
                median = (b[(nbr[key]/2)+1] + b[nbr[key]/2])/2
            }
        }
        if ( len > 3)
        {
        printf "%s %s %s %s %s\n", key, b[len], avg, median, b[1]
        }
    }
}
'
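
Assuming testmed.txt holds the sample input from post #1, this now prints only the key with at least 4 rows:

Code:
R2 100 26.5 2.5 1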

# 4  
Old 07-31-2013
Place your length condition higher in the code, and derive the length from nbr[key] instead of calling split() first. Only a marginal speed increase is to be expected, though.

Code:
sort f -k1,1 -k2,2n | awk '
{nbr[$1]++; a[$1]= a[$1] ? a[$1]"@"$2 : $2; sum[$1]+=$2}

END {
  for (key in a) {
    len = nbr[key]
    if ( len > 3 ) {
      split(a[key], b, "@")
      for (i=1;i<=len;i++) {
        avg = sum[key] / nbr[key];
        if (nbr[key]%2) {
          median = b[(nbr[key]+1)/2]
        } else {
          median = (b[(nbr[key]/2)+1] + b[nbr[key]/2])/2
        }
      }
      printf "%s %s %s %s %s\n", key, b[len], avg, median, b[1]
    }
  }
}
'
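
Since the file arrives sorted and grouped on column 1, the bigger win would be a single pass that summarizes each group as soon as the key changes, so only the current group is held in memory instead of the whole file. A sketch along those lines (untested on large files; same two-column input assumed):

Code:
sort testmed.txt -k1,1 -k2,2n | awk '
# emit max, mean, median, min for the group just finished, if it has > 3 rows
function flush() {
    if (n > 3) {
        if (n % 2) median = v[(n + 1) / 2]
        else median = (v[n / 2] + v[n / 2 + 1]) / 2
        printf "%s %s %s %s %s\n", key, v[n], sum / n, median, v[1]
    }
}
$1 != key { flush(); key = $1; n = 0; sum = 0 }  # key changed: flush previous group
{ v[++n] = $2; sum += $2 }                       # values arrive sorted within each key
END { flush() }                                  # flush the final group
'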

# 5  
Old 07-31-2013
Thanks, ripat!! But there seems to be an error in calculating the median: it should be 0.00056 but shows 134.79100. Also, the min should be 0.

Code:
cat testiso_GRMZM2G074386

GRMZM2G074386 0.00000
GRMZM2G074386 0.00000
GRMZM2G074386 0.00000
GRMZM2G074386 0.00056
GRMZM2G074386 2.63247
GRMZM2G074386 112.58600
GRMZM2G074386 134.79100

 awk '
> {nbr[$1]++; a[$1]= a[$1] ? a[$1]"@"$2 : $2; sum[$1]+=$2}
>
> END {
>     for (key in a) {
>         split(a[key], b, "@")
>         len = length(b)
>         for (i=1;i<=len;i++) {
>             avg = sum[key] / nbr[key];
>             if (nbr[key]%2) {
>                 median = b[(nbr[key]+1)/2]
>             } else {
>                 median = (b[(nbr[key]/2)+1] + b[nbr[key]/2])/2
>             }
>         }
>         if ( len > 3)
>         {
>         printf "%s %s %s %s %s\n", key, b[len], avg, median, b[1]
>         }
>     }
> }
> ' testiso_GRMZM2G074386
GRMZM2G074386 134.79100 35.7157 134.79100 0.00056

# 6  
Old 07-31-2013
OK, I see where the problem is. The ternary test a[$1] ? ... : ... checks the numeric truth of the stored value, so an entry like 0.00000 evaluates as false and the array element keeps getting re-initialised, silently dropping the zero rows. Testing against the empty string instead fixes it.

Try this:
Code:
sort file -k1,1 -k2,2n | awk '
{nbr[$1]++; a[$1]= (a[$1]!="") ? a[$1]"@"$2 : $2; sum[$1]+=$2} # NEW: test against the empty string


END {
  for (key in a) {
    len = nbr[key]
    if ( len > 3 ) {
      split(a[key], b, "@")
      for (i=1;i<=len;i++) {
        avg = sum[key] / nbr[key];
        if (nbr[key]%2) {
          median = b[(nbr[key]+1)/2]
        } else {
          median = (b[(nbr[key]/2)+1] + b[nbr[key]/2])/2
        }
      }
      printf "%s %s %s %s %s\n", key, b[len], avg, median, b[1]
    }
  }
}
'
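
With the test file above, the corrected version now sees all seven values and should print:

Code:
GRMZM2G074386 134.79100 35.7157 0.00056 0.00000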

