There's no way it should be taking 16 hours and eating that much memory. Even one million items is only about 20 levels of recursion, since log2(1,000,000) is roughly 20. You must be hitting a rare condition in that code where it recurses in an infinite loop. With the complete lack of any useful comments anywhere, I'm left reverse-engineering your code, so I don't completely understand it yet. But I wonder if this code might serve you better. I wrote it way back in college.
Testing it on my example above, the list sort ends up about 5 times slower than the ordinary C qsort(), even counting the time it takes to copy every node into an array for qsort(). There's something to be said for being able to access your data in an indexed manner -- all that repeated list traversal wastes a huge amount of time.
Hello,
I have one file with more than 120 million records (35 GB in size). I have to extract some relevant data from the file based on some parameter and generate another output file.
What would be the best and fastest way to generate the new file?
Sample file format: ... (2 Replies)
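For a file that size, a single sequential pass is really the only workable approach. A minimal sketch, assuming comma-separated records and that "relevant" means an exact match on one column -- the column number, match value, and file names here are all hypothetical:

# one pass, no sorting, constant memory, so 35 GB only costs I/O time
awk -F',' '$4 == "RELEVANT_VALUE"' input.dat > output.dat

If the parameter can be expressed as a fixed string anywhere in the record, plain grep is usually faster still.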
Hi,
I have a huge file, say with 2,000,000 records. Each record has 42 fields. I would like to randomly pick 1000 records from this huge file. Can anyone tell me how to do this? (1 Reply)
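A couple of quick sketches, one exact and one approximate -- the file name is hypothetical, and shuf assumes GNU coreutils:

# exactly 1000 records, in random order (GNU shuf)
shuf -n 1000 hugefile > sample.txt

# roughly 1000 records, plain awk, one pass, keeps original order
awk 'BEGIN { srand() } rand() < 1000/2000000' hugefile > sample.txt

The awk version only approximates the count; if you need exactly 1000 and have no shuf, a reservoir sample in awk works but takes a few more lines.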
Hi Forum.
I tried to search the forum for the following scenario but could not find it.
Let's say I have a very large file with some bad data in it (for example, 0.0015 in the 12th column) and I would like to find the line number and remove that particular line.
What's the easiest... (3 Replies)
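A minimal awk sketch, assuming whitespace-separated columns and taking the bad value from the example; the file names are hypothetical:

# print the line number(s) where column 12 holds the bad value
awk '$12 == "0.0015" { print NR }' datafile

# write a copy with those lines removed
awk '$12 != "0.0015"' datafile > datafile.clean

Note this is a string comparison; use $12+0 == 0.0015 instead if the value might be written in another numeric form (0.00150, 1.5e-3).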
Hello gurus,
I am new to awk and am trying to break a large file of 4 million records into several output files of half a million records each, while keeping records with the same key together in one output file rather than scattered across files.
e.g. my data is like:
Row_Num,... (6 Replies)
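One way to sketch this, assuming the file is already grouped (or sorted) by the key and the key is the first comma-separated field; the output names are hypothetical. The trick is to rotate to a new output file only when the line count has passed half a million AND the key changes, so a key group is never split:

awk -F',' '
    NR == 1 { out = "part" ++p }
    n >= 500000 && $1 != prev { close(out); out = "part" ++p; n = 0 }
    { print > out; prev = $1; n++ }
' input.csv

Each chunk may overshoot 500,000 by the size of one key group, which is unavoidable if groups must stay intact.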
I would like to count the number of records in 2 files and divide the two counts:
awk 'END { print NR }' file1 > output1
awk 'END { print NR }' file2 > output2
paste output1 output2 > output
awk '{print $1/$2}' output > output_2
Is there a faster way? (8 Replies)
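There is: one awk invocation can count both files and do the division, with no temporary files. A sketch, assuming file2 is not empty:

# on the first line of file2, NR-1 is the record count of file1
awk 'FNR == 1 && NR > 1 { n1 = NR - 1 }
     END { print n1 / (NR - n1) }' file1 file2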
Hello All,
I have a large file, more than 50,000 lines, and I want to split it into even chunks of 5000 records, which I can do using:
sed '1d;$d;' <filename> | awk 'NR%5000==1{x="F"++i;}{print > x}'
Now I need to add one more condition: not to break the file at the 5000th record if the 5000th record... (20 Replies)
I have a file, named records.txt, containing a large number of records (around 0.5 million) in the format below:
28433005 1 1 3 2 2 2 2 2 2 2 2 2 2 2
28433004 0 2 3 2 2 2 2 2 2 1 2 2 2 2
...
Another file is a key file, named key.txt, which is a list of some of the numbers in the first column of... (5 Replies)
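The excerpt is cut off, but assuming the goal is to extract the records whose first column appears in key.txt, the standard awk two-file idiom fits well even at half a million records, since only the keys (not the records) are held in memory:

# the first file loads the keys into a hash; records matching on column 1 are printed
awk 'NR == FNR { keys[$1]; next } $1 in keys' key.txt records.txt > matched.txt

Invert the test, !($1 in keys), to extract the non-matching records instead.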