

Remove duplicates in a dataframe (table) keeping all the different cells of just one of the columns


# 1  

Hello all,
I need to filter a dataframe with several columns of data to remove duplicate rows according to one of the columns. I managed that part with pandas. At the same time, I need the last column, which holds distinct (non-redundant) data, to be preserved in the output, like this:
Code:
A     B     C     D
a1    b1    c1    d1
a2    b2    c2    d2

output:
Code:
A     B     C     D
ad    bd    cd    d1,d2

where ad, bd and cd form the deduplicated output row, and column D collects, for each unique row, all of the original D values in a single comma-separated cell.
# 2  
You may want to try explaining that again.
I, for one, do not see how you get from that 3-line example to 2 lines.
# 3  
Basically, I have a tabular file with 4 columns (A, B, C, D) and several rows (1, 2, 3, ...).
Considering column A, the data are redundant, like:
Code:
A         B     C       D
apple     15    aaa     agcacagcagc
apple     25    bbb     acgacgacgcga
banana    12    cccc    acagcgaagccga
cherry    36    ddd     actgctgtcgagtag
berry     55    eee     gactgatgctgtcgtc
banana    36    ffff    cacacgtgtgct

I need the output to look like:
Code:
A         B     C       D
apple     25    aaa     agcacagcagc;acgacgacgcga
banana    36    cccc    acagcgaagccga;cacacgtgtgct
cherry    36    ddd     actgctgtcgagtag
berry     55    eee     gactgatgctgtcgtc

I don't really care about column C, so whatever ends up in the output is fine. For column B I keep the higher value (I managed that with pandas, but I can't do the same trick on column D).

thanks
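Since the question mentions pandas, here is a minimal pandas sketch of the whole requirement (an illustrative example, not the poster's actual script): group on A, take the maximum of B, keep the first C seen, and join all D values with ";". The inline `data` string simply reproduces the sample table above.

```python
import io

import pandas as pd

data = """A B C D
apple 15 aaa agcacagcagc
apple 25 bbb acgacgacgcga
banana 12 cccc acagcgaagccga
cherry 36 ddd actgctgtcgagtag
berry 55 eee gactgatgctgtcgtc
banana 36 ffff cacacgtgtgct
"""

df = pd.read_csv(io.StringIO(data), sep=r"\s+")

# One output row per unique A: max of B, first C seen, all D values joined by ';'.
# sort=False keeps the groups in first-appearance order.
out = (df.groupby("A", sort=False)
         .agg(B=("B", "max"), C=("C", "first"), D=("D", ";".join))
         .reset_index())
print(out)
```

Named aggregation (`B=("B", "max")`, etc.) lets each output column use a different function over the same groups, which is exactly the B-max / D-join split asked for here.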
# 4  
Code:
awk '                                       # needs gawk 4+ (true multidimensional arrays)
($1 in A)       { if ($2+0 > A[$1][2]+0) A[$1][2] = $2   # keep the larger B, numerically
                        A[$1][4] = A[$1][4] ";" $4       # collect every D value
                        next
                }
                { nf = split($0, M)                      # first sight of a key:
                  for (n = nf; n; n--) A[$1][n] = M[n]   # store all its fields
                }
END             { for (i in A) {                         # note: key order is unspecified
                        for (j = 1; j <= nf; j++) printf "%s ", A[i][j]
                                print ""
                        }
                }' file
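The gawk solution above relies on gawk's arrays of arrays. As a hedged alternative (a sketch, assuming a whitespace-delimited file with a header row; the path `/tmp/fruit.txt` is just for the demo), the same idea works in portable POSIX awk while also preserving the order in which each key first appears:

```shell
# Sample data (assumed layout; any whitespace-delimited file works).
cat > /tmp/fruit.txt <<'EOF'
A B C D
apple 15 aaa agcacagcagc
apple 25 bbb acgacgacgcga
banana 12 cccc acagcgaagccga
cherry 36 ddd actgctgtcgagtag
berry 55 eee gactgatgctgtcgtc
banana 36 ffff cacacgtgtgct
EOF

awk '
NR == 1 { print; next }                       # pass the header through
!($1 in maxb) {                               # first time this key is seen
    maxb[$1] = $2; c[$1] = $3; d[$1] = $4
    order[++n] = $1                           # remember first-appearance order
    next
}
{
    if ($2 + 0 > maxb[$1] + 0) maxb[$1] = $2  # keep the larger B, numerically
    d[$1] = d[$1] ";" $4                      # append every further D value
}
END {
    for (i = 1; i <= n; i++) {
        k = order[i]
        print k, maxb[k], c[k], d[k]
    }
}' /tmp/fruit.txt
```

The output is single-space separated; pipe it through `column -t` if aligned columns are wanted.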

# 5  
Moderator's Comments:
Mod Comment The title of this thread has been changed from:
Remove duplicates in a dataframe (table) keepping all the different cells of just one of the columns
to:
Remove duplicates in a dataframe (table) keeping all the different cells of just one of the columns
to make searches more likely to find desired threads.
# 6  
Hello pedro88,

Could you please try the following too? It reads Input_file twice, and the output keeps the same order in which $1 first appears in Input_file.

Code:
awk '
FNR==NR{                              # 1st pass: find the max $2 per key
  a[$1]=(a[$1]+0>$2+0)?a[$1]:$2       # +0 forces a numeric comparison
  b[$1]=(a[$1]==$2)?$0:b[$1]          # remember the row holding that max
  next
}
($1 in a){                            # 2nd pass: print each key once,
  print b[$1]                         # in first-appearance order
  delete a[$1]
}
'   Input_file  Input_file

Output will be as follows.

Code:
A         B     C       D
apple     25    bbb     acgacgacgcga
banana    36    ffff    cacacgtgtgct
cherry    36    ddd     actgctgtcgagtag
berry     55    eee     gactgatgctgtcgtc

Thanks,
R. Singh