CSV Split field to check multiple codes


 
# 8  
Old 01-05-2016
Quote:
Originally Posted by SDohmen
Sorry, I totally forgot to add those.

Here are the examples and the output as I would want it.

File 1
Code:
Barcode;Sku
4960999865300;testnderp
1230000000001;ND001
1230000000002;ND002
1230000000003;ND003
1230000000008;ND008
1230000000009, 1240000000012;ND009
1230000000010, 1240000000011;ND010

File 2
Code:
4260236270233,4260236270677,4260236271766;8654951;;10;555.28;Pyr
4960999865300;8935214;0023030102;555;342.70;REV
4006341672544;7345028;0023490;2;555.98;REV
0688334022518;11123553;99263999;555;.01;Pac
0688334022457;123502;;555;.01;Pac
1230000555010, 1240000000010;12345;99263999;555;33.01;Pac

Output file 3:
Code:
Barcode;Art_ours;Art_Sup;SKU;stock;price;manufacturer
4960999865300;testnderp;8935214;0023030102;555;342.70;REV
1230000000010, 1240000000010;ND010;12345;99263999;555;33.01;Pac

I hope this clarifies the combining I am trying to achieve.

I just tested this code and below is the output it creates:

Code:
4960999865300;testnderp
;YVVSZN23;21355379;10
1230000000009;ND009
;YVVGZNDG;72051;2
1230000000010;ND010
;YVUVZN01;IOLO-SR-BOX;1

The following was in file 1:

Code:
Barcode;Sku
4960999865300;testnderp
1230000000001;ND001
1230000000009, 1240000000012;ND009
1230000000010, 1240000000011;ND010

And this was in file 2:

Code:
4960999865300;YVVSZN23;21355379;10;41,31;Symantec
1230000000009;YVVGZNDG;72051;2;43,10;TEST
1230000000010;YVUVZN01;IOLO-SR-BOX;1;24,79;Diverse

It seems to add the lines from file 2 on a new line instead of appending them to the original lines, but I am not sure how to fix that. Could you check that out?
Code:
perldoc -f chomp

I did mention it was untested
This User Gave Thanks to Skrynesaver For This Post:
# 9  
Old 01-05-2016
Quote:
Originally Posted by RudiC
Why does the second data line appear in the output? There's no match between the files.

---------- Post updated at 11:01 ---------- Previous update was at 10:57 ----------

Try
Code:
awk -F";" '
BEGIN           {print "Barcode;Art_ours;Art_Sup;SKU;stock;price;manufacturer"
                }
NR == FNR       {T[$1] = $2
                 next
                }
$1 in T         {$2 = T[$1] OFS $2
                 print
                }
' OFS=";" file[12]

There are 3 lines that contain the same EAN code in both files, so those should be joined together.

---------- Post updated at 11:28 AM ---------- Previous update was at 11:24 AM ----------

Quote:
Originally Posted by RavinderSingh13
Hello SDohmen,

I am a bit confused by your sample output; here are my comments on it.
Code:
Barcode;Art_ours;Art_Sup;SKU;stock;price;manufacturer
4960999865300;testnderp;8935214;0023030102;555;342.70;REV
### In the above line you are comparing the first columns of File1 and File2.
1230000000010, 1240000000010;ND010;12345;99263999;555;33.01;Pac
### In the above line you are NOT comparing the first columns of File1 and File2?

If you always want to compare the first columns of both Input_files, then the following may help you.
Code:
awk -F"[,|;]" 'FNR==NR{A[$1]=$0;next} ($1 in A){q=$1;sub($1,X);print A[q] $0}' Input_file1 Input_file2

The output of the above command will be as follows.
Code:
 4960999865300;testnderp;8935214;0023030102;555;11.70;REV

If your query has other conditions, please let us know the complete details; that will make it easier for us to help you. Hope this helps.

Thanks,
R. Singh
Sorry that it looked confusing; I had to retype it manually from the shell. The first column is the one it should join on, as long as one of the EAN codes is present in both files.

I tested your awk one-liner and it works exactly as I needed. Thank you for that.


Quote:
Originally Posted by Skrynesaver
Code:
perldoc -f chomp

I did mention it was untested
I know, but I am no Perl guru. Sorry!

---------- Post updated at 01:07 PM ---------- Previous update was at 11:28 AM ----------

It seems that the awk line still has an error in it.

I just tested it with real data, for which there were supposed to be 3 results in the merged file.

File 1

Code:
Barcode;Sku;Fabrikant
123, 456, 567;microsoftnatural;Microsoft
123456789;testnderp;Acer
1230000000001;ND001;TestMerk
1230000000002;ND002;TestMerk
1230000000003;ND003;TestMerk
1230000000004;ND004;TestMerk
1230000000005;ND005;TestMerk
1230000000006, 9789062387410;ND006;TestMerk
1230000000007;ND007;TestMerk
1230000000008;ND008;TestMerk
0071020493219, 0112040014968, 3609740140679, 4716659214199, 4716659214359, 4716659214373, 8862272143272, 9865745929723;90-IGY7002M00-3PA0;Asus
0886227648320, 0886227648351, 0886227648412, 4053162744332, 4716659648352, 4716659648406, 4716659648413, 8862276483206;90IG00W0-BM3G20;Asus
0681495007080;glohdtest;Kobo
5703887123507;nd1235458;Nilfisk
0889894100573;TST1234;Lenovo
0889233045763;tst1111;Lenovo
PARTNERREF001;PARTNERREF001;Acer
32115464163654;9789062387410;AMD

File 2
Code:
4716659648413;49200;90IG00W0-BM3G20;8;267,23;ASUS
4716659214199;74277;90-IGY7002M01-3PA0-;7;217,56;ASUS
5703887123507;93631;128470701;5;209,16;Nilfisk

Result
Code:
5703887123507;nd1235458;Nilfisk;93631;128470701;5;209,16;Nilfisk

The strange thing is that one of the lines works fine but the others don't. I am guessing that is because of the multiple EAN codes in file 1: when I put the EAN code in the first position it worked fine, but in any position other than the first it does not work.
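
A quick way to see why only first-position codes match with the -F"[,|;]" approach is to print the fields that separator produces for one of the multi-code lines. A minimal sketch, using the ND006 line from the sample above:

Code:
echo "1230000000006, 9789062387410;ND006;TestMerk" | awk -F"[,|;]" '{for (i = 1; i <= NF; i++) printf "$%d = [%s]\n", i, $i}'

This should print something like:

Code:
$1 = [1230000000006]
$2 = [ 9789062387410]
$3 = [ND006]
$4 = [TestMerk]

Only $1 ever becomes an index of array A, so a barcode in any position after the first never gets a chance to match.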
# 10  
Old 01-05-2016
Code:
awk  '     
BEGIN           {FS = OFS = ";"
                 print "Barcode;Art_ours;Art_Sup;SKU;stock;price;manufacturer"
                }
NR == FNR       {gsub (/ /, "", $1)
                 for (n = split($1, T, ","); n > 0; n--) S[T[n]]=$2
                 next
                }
$1 in S         {$2 = S[$1] OFS $2
                 print
                } 
' file[12]
Barcode;Art_ours;Art_Sup;SKU;stock;price;manufacturer
4716659648413;90IG00W0-BM3G20;49200;90IG00W0-BM3G20;8;267,23;ASUS
4716659214199;90-IGY7002M00-3PA0;74277;90-IGY7002M01-3PA0-;7;217,56;ASUS
5703887123507;nd1235458;93631;128470701;5;209,16;Nilfisk

This User Gave Thanks to RudiC For This Post:
# 11  
Old 01-05-2016
Quote:
Originally Posted by RudiC
Code:
awk  '     
BEGIN           {FS = OFS = ";"
                 print "Barcode;Art_ours;Art_Sup;SKU;stock;price;manufacturer"
                }
NR == FNR       {gsub (/ /, "", $1)
                 for (n = split($1, T, ","); n > 0; n--) S[T[n]]=$2
                 next
                }
$1 in S         {$2 = S[$1] OFS $2
                 print
                } 
' file[12]
Barcode;Art_ours;Art_Sup;SKU;stock;price;manufacturer
4716659648413;90IG00W0-BM3G20;49200;90IG00W0-BM3G20;8;267,23;ASUS
4716659214199;90-IGY7002M00-3PA0;74277;90-IGY7002M01-3PA0-;7;217,56;ASUS
5703887123507;nd1235458;93631;128470701;5;209,16;Nilfisk

Thank you for the fast reply. I tested it and it seems to work superbly, for which you have my thanks. One question remains though: could you perhaps explain the code so I know how it works?
# 12  
Old 01-05-2016
Hello SDohmen,

The following may help you with that.
Code:
awk  '                                      
BEGIN           {FS = OFS = ";"                                                   ###### Set both the input and the output field separator to ";".
                 print "Barcode;Art_ours;Art_Sup;SKU;stock;price;manufacturer"    ###### Print the header line once, before any input is read.
                }
NR == FNR       {gsub (/ /, "", $1)                                               ###### NR==FNR is TRUE only while the first file is read: FNR is reset to 1 for every new input file, whereas NR keeps counting across all files.
                 for (n = split($1, T, ","); n > 0; n--) S[T[n]]=$2               ###### split($1, T, ",") breaks the first field into array T on the commas and returns the number of elements n. The loop then stores file1's second field (the SKU, $2) in array S, indexed by every single EAN code.
                 next                                                             ###### Stop processing this line and read the next record.
                }
$1 in S         {$2 = S[$1] OFS $2                                                ###### Executed while the second file (file2) is read: if its first field is an index of array S, prepend the stored SKU to the second field.
                 print                                                            ###### Print the complete line with the new value of $2.
                }
' file[12]                                                                        ###### file[12] is a shell glob that expands to the input files file1 and file2.

Thanks,
R. Singh
These 2 Users Gave Thanks to RavinderSingh13 For This Post:
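
To make the "array S indexed by every EAN" step more concrete, here is a minimal sketch, assuming the first small File 1 sample posted above is saved as file1; it builds the same lookup and then dumps it:

Code:
awk -F";" '
        {gsub (/ /, "", $1)                               # strip the blanks after the commas
         for (n = split($1, T, ","); n > 0; n--) S[T[n]] = $2
        }
END     {for (ean in S) print ean, "->", S[ean]           # dump the finished lookup table
        }
' file1

For the line "1230000000009, 1240000000012;ND009" this stores ND009 under both S["1230000000009"] and S["1240000000012"], so a match on either code in file2 pulls in the same SKU. The header line "Barcode;Sku" also lands in the array, exactly as in the original script; that is harmless because "Barcode" never occurs as a first field in file2. Note that the order of a for-in loop over an awk array is unspecified, so the dump order may vary.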
# 13  
Old 01-05-2016
Code:
awk  '
BEGIN           {FS = OFS = ";"                 # set in- and output separator
                 print "Barcode;..."            # print header                
                }
NR == FNR       {gsub (/ /, "", $1)             # remove spaces from first field
                 for (n = split($1, T, ","); n > 0; n--) S[T[n]]=$2
                                                # split SKUs into S array indexed by EANs
                 next                           # skip processing; goto next line
                }
$1 in S         {$2 = S[$1] OFS $2              # if EAN found in S arr: insert SKU into field 2
                 print                          # print modified line
                }
' file[12]

This User Gave Thanks to RudiC For This Post:
# 14  
Old 01-05-2016
Thank you both for the excellent information. Even though the code is still a bit garbled to me, I am beginning to understand what it does.
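
For anyone who wants to see the final script run end to end, here is a self-contained test of RudiC's command from post # 10, using two of the sample lines posted earlier written to temporary files (the /tmp paths are purely illustrative):

Code:
cat > /tmp/file1 <<'EOF'
Barcode;Sku;Fabrikant
0071020493219, 0112040014968, 3609740140679, 4716659214199, 4716659214359, 4716659214373, 8862272143272, 9865745929723;90-IGY7002M00-3PA0;Asus
5703887123507;nd1235458;Nilfisk
EOF

cat > /tmp/file2 <<'EOF'
4716659214199;74277;90-IGY7002M01-3PA0-;7;217,56;ASUS
5703887123507;93631;128470701;5;209,16;Nilfisk
EOF

awk '
BEGIN           {FS = OFS = ";"
                 print "Barcode;Art_ours;Art_Sup;SKU;stock;price;manufacturer"
                }
NR == FNR       {gsub (/ /, "", $1)
                 for (n = split($1, T, ","); n > 0; n--) S[T[n]] = $2
                 next
                }
$1 in S         {$2 = S[$1] OFS $2
                 print
                }
' /tmp/file1 /tmp/file2

The output should match two of the three result lines shown in post # 10:

Code:
Barcode;Art_ours;Art_Sup;SKU;stock;price;manufacturer
4716659214199;90-IGY7002M00-3PA0;74277;90-IGY7002M01-3PA0-;7;217,56;ASUS
5703887123507;nd1235458;93631;128470701;5;209,16;Nilfisk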