OK, I will explain what I want to achieve.
I have two data files, and I want to generate the result by grouping and sorting the data in them.
I will explain the output a little. For the row that starts with "A", "B,C,D,F,H" are extracted from all the lists that contain "A", and they are sorted by how frequently they appear, in descending order.
I have already written code that generates the desired output, but I have noticed that the split function I used in the program is TOO slow when it comes to splitting a string that contains over 30,000 fields (comma-separated). So I am seeking a way to make it faster.
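The original script isn't shown in this dump, but the usual way around a slow split() on one huge string is to let awk do the splitting itself at read time via FS. This is an illustrative sketch only, assuming the thread's "list1:A,B,C" input format (my guess from the description), not the poster's actual code:

```shell
# Hypothetical sketch: with lines such as "list1:A,B,C", an FS that
# matches both ":" and "," makes awk split the record as it reads it,
# so no explicit split() call on a 30,000-field string is needed.
printf 'list1:A,B,C\nlist2:B,C,H\n' |
awk -F'[:,]' '{ for (i = 2; i <= NF; i++) print $1, $i }'
# -> list1 A
#    list1 B
#    list1 C
#    list2 B
#    list2 C
#    list2 H
```

Reading NF fields that awk has already split is typically much cheaper than building one giant string and calling split() on it.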
This is the code I used to generate the output:
---------- Post updated at 08:39 PM ---------- Previous update was at 07:23 AM ----------
bump...
Please, experts... how can I make this script faster?
That code is incorrect. Since it occurs within double quotes, $1 is replaced by the shell with the value of its first positional parameter before AWK ever sees it. If, instead, you want it to refer to the first field in AWK, you need to escape it: \$1.
The only reason I can think of for why you're getting the expected result is that the shell's first positional parameter, $1, is empty, and so AWK is only seeing "{ print }". Since lines split on the comma yield one-field records within AWK, in this specific case "print" (equivalent to "print $0") is equivalent to "print $1", and hence everything seems okay.
If I'm correct, setting $1 in the shell to a non-null value not equal to a literal '$0' or a literal '$1' will break the code.
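The quoting behaviour can be reproduced with a hypothetical one-liner (not the original script from the thread):

```shell
# Demonstrating the shell-vs-awk $1 issue described above.
set -- hello                          # the shell's $1 is now "hello"
echo 'a,b,c' | awk "{ print \$1 }"    # escaped: awk sees $1  -> prints "a,b,c"
echo 'a,b,c' | awk '{ print $1 }'     # single quotes: same   -> prints "a,b,c"
echo 'a,b,c' | awk "{ print $1 }"     # unescaped: the shell substitutes "hello",
                                      # awk runs { print hello }, an uninitialized
                                      # variable -> prints an empty line
```

With the default FS there is no comma splitting, so $1 is the whole line; the point is only how the shell rewrites the program text before awk parses it.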
Regards,
Alister
---------- Post updated 05-21-10 at 12:19 AM ---------- Previous update was 05-20-10 at 10:01 PM ----------
Quote:
Originally Posted by kevintse
I will explain a little bit for the output. For the row that starts with "A", "B,C,D,F,H" are extracted from all those lists that contain "A", and they are sorted by their appearing frequency in descending order.
I have already written the code that can generate the desired output.
That output is incorrect. If you take a close look at the desired output that you provided, the lines are incorrectly sorted. The easiest to spot is "H:A,B,C,F". H occurs with F twice, and with the others once, so that line should be "H:F,A,B,C". As a matter of fact, with the exception of the C and D lines, they are all wrong. A quick look at your code suggests that the problem lies in:
Quote:
Originally Posted by kevintse
Specifically, 'for (i in arr)' is not guaranteed to return the array elements in any specific order. That pipeline sorts with the sort command, but that order is discarded when the "in" operator is used.
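To make the unordered-iteration point concrete, here is a small illustrative sketch (not code from the thread): instead of relying on "for (k in arr)" order, emit "count key" pairs and let sort(1) impose the descending-frequency order.

```shell
# Count occurrences in awk, then sort once: count descending, key
# ascending as a tiebreaker. The "for (k in cnt)" order is irrelevant
# because sort reorders everything afterwards.
printf 'F\nA\nF\nB\nC\n' |
awk '{ cnt[$1]++ } END { for (k in cnt) print cnt[k], k }' |
sort -k1,1rn -k2,2
# -> 2 F
#    1 A
#    1 B
#    1 C
```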
Try this awk program, which doesn't use the split function:
Input file 1 (kevin1.dat) :
Input file 2 (kevin2.dat) :
Output:
The problem with that script is that we run a sort command for every book.
The following solution uses only one sort command:
With the same input files, the output is the same but the times are better:
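The script itself didn't survive in this dump, so the following is only a hedged guess at what a "one sort" pipeline could look like, assuming the thread's "list1:A,B,C" input format; it is not aigles' actual code:

```shell
# One awk pass counts every co-occurrence pair and prints
# "book cobook count"; sort(1) runs exactly once (book asc, count desc,
# cobook asc); a second awk pass reassembles one line per book.
printf 'list1:A,B,C\nlist2:A,C,H\nlist3:C,H\n' |
awk -F'[:,]' '
  {
    for (i = 2; i <= NF; i++)
      for (j = 2; j <= NF; j++)
        if (i != j) pair[$i " " $j]++
  }
  END { for (p in pair) print p, pair[p] }
' |
sort -k1,1 -k3,3rn -k2,2 |
awk '
  $1 != prev { if (prev != "") print prev ":" line; prev = $1; line = $2; next }
  { line = line "," $2 }
  END { if (prev != "") print prev ":" line }
'
# -> A:C,B,H
#    B:A,C
#    C:A,H,B
#    H:C,A
```

The point of the design is that sort's startup cost is paid once for the whole data set instead of once per book.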
Jean-Pierre.
Last edited by aigles; 05-21-2010 at 05:59 PM..
I have been going through your posts, and I don't quite understand a few things in your data files.
Quote:
Originally Posted by kevintse
...I have two data files, I want to generate the result through grouping and sorting the data in these two files.
I will explain a little bit for the output. For the row that starts with "A", "B,C,D,F,H" are extracted from all those lists that contain "A", and they are sorted by their appearing frequency in descending order.
...
The portion in red color is a repetition of the portion immediately above it in your file data1.txt.
(1) Can your actual file have data like that?
(2) If yes, then can the "second" set be different? For example, is it possible to have two lines like so -
(3) If yes, then can there be more than two sets in data1.txt? Like so -
(4) Do you collect a unique set of "lists", for each character on the left, in such a case?
For example, for A => list1, list2, list3, list4, list5, list7 ?
(5) The character "C" is associated with 3 lists in data1.txt:
And these lists have the following set of characters:
So, the distinct set of characters associated with list1, list2 and list6 should be => (A, B, C, F, H)
Your desired output for "C" is like so -
Have you removed "C" from the right-hand-side list because it appears on both sides of the ":" character?
(6) Is that also the reason you've omitted "G:G" from your desired output ?
Quote:
...
This is the code I used to generate the output:
...
What's the current response time for your actual data "data1.txt" (the one that has more than 30,000 strings delimited by commas)?
And what is the acceptable response time for the same?
Hi, aigles
Thank you so much. Your script is much faster than mine.
And it helps me a lot; I have learned many things about AWK from your script (it surprises me that AWK can be written this way).
Thank you!
It seems to me that both files contain the same information, though in different formats. A simpler solution would be to use a different algorithm, which builds an internal list of book-pairs in one pass using one data file:
Test run:
A perl solution which is probably faster:
Test run, using the same data file as with the sh/awk/sort solution:
Note: It's been about 10 years since I've written anything more than a one-liner in perl, so perhaps a perl guru can slash that to a couple of lines.
Hi, Alister
I am really grateful for your succinct solutions.
The perl script is faster. But I still need some help from you: I have never learned perl, and I don't have spare time to learn it at the moment. I want to modify your script to print just the top 20 books associated with each book. How?
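The perl script isn't preserved in this dump, so as an aside, here is a hypothetical awk sketch (not the perl code) of the idea: when the "book cobook count" stream is already sorted, capping at the top 20 only needs a counter during reassembly.

```shell
# Generate a fake pre-sorted stream for book A with 25 co-books
# (counts 29 down to 5), then keep only the first 20 per book.
awk 'BEGIN { for (i = 1; i <= 25; i++) printf "A B%02d %d\n", i, 30 - i }' |
awk -v top=20 '
  $1 != prev { if (prev != "") print prev ":" line; prev = $1; line = $2; n = 1; next }
  n < top    { line = line "," $2; n++ }     # stop appending after "top" co-books
  END        { if (prev != "") print prev ":" line }
'
# -> A:B01,B02,...,B20  (B21 through B25 are dropped)
```

The same counter would translate to any language: once the per-book list holds 20 entries, skip the rest of that book's sorted pairs.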