Improve the performance of my C++ code


 
# 8  
Old 01-15-2015
If you want to run faster,

1. Insert data in a way that it's always sorted, as Corona688 has already noted.
2. Don't use new/delete in a loop.
3. Don't use C++ I/O routines - use C open/read/close or other low-level routines.
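Point 3 can be sketched like this. A minimal example, assuming POSIX open/read/close are available; `slurp_file` is a hypothetical helper name, not anything from the original code:

```cpp
#include <fcntl.h>
#include <unistd.h>
#include <string>

// Read an entire file with low-level open/read/close in large chunks,
// avoiding the per-token overhead of C++ iostreams.
std::string slurp_file(const char *path)
{
    std::string out;
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return out;            // caller can treat an empty result as failure
    char buf[1 << 16];         // 64 KiB chunks
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)
        out.append(buf, (size_t)n);
    close(fd);
    return out;
}
```

You would then split the returned buffer into sequences yourself, which is usually much faster than `getline` on a stream for multi-gigabyte inputs.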

---------- Post updated at 04:34 PM ---------- Previous update was at 04:34 PM ----------

Quote:
Originally Posted by Corona688
It is UNIX text, not Windows text.
I don't always have access to a Unix box.
# 9  
Old 01-15-2015
You say you have a lot of memory, so I think you would be best off using a hash table. Store each entry along with its reverse complement.
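The suggestion above could look something like this. A sketch only, assuming the sequences fit in memory and using `std::unordered_set` as the hash table; `revcomp` and `insert_unique` are illustrative names:

```cpp
#include <string>
#include <unordered_set>

// Reverse complement of a DNA string (A<->T, C<->G).
std::string revcomp(const std::string &s)
{
    std::string r(s.rbegin(), s.rend());   // reversed copy
    for (char &c : r)
        switch (c) {
        case 'A': c = 'T'; break;
        case 'T': c = 'A'; break;
        case 'C': c = 'G'; break;
        case 'G': c = 'C'; break;
        }
    return r;
}

// Insert a sequence only if neither it nor its reverse complement
// has been seen already; returns true if the sequence was new.
bool insert_unique(std::unordered_set<std::string> &seen, const std::string &s)
{
    if (seen.count(s) || seen.count(revcomp(s)))
        return false;
    seen.insert(s);
    return true;
}
```

Lookups are O(1) on average, so deduplicating against both orientations stays fast even for large inputs.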
# 10  
Old 01-15-2015
My problem is the implementation. I want to try programming using some available "libraries". I would appreciate any code example on top of mine. This sounds lazy, but I'm self-learning by practice. Of course I googled for a while, but did not find any similar example. Thanks a lot!

Last edited by yifangt; 01-16-2015 at 11:30 AM..
# 11  
Old 01-16-2015
I would agree with achenle's points 1 (Corona688) and 3.


On top of that I would just use a simple dynamic list with a pointer to the next element, something like:
Code:
typedef struct sList sList;   /* leading-underscore-plus-capital names are reserved, so sList */
struct sList {
    sList    *next;        /* next node in the sorted list */
    int       SEQ_size;    /* length of this sequence */
    SEQ      *element;     /* pointer to the sequence data */
};

When reading elements from the file, once an element is complete, compare SEQ_size from the start of the list until you reach a point where the element size from the file is equal to or smaller than the element in the list. If it is smaller, add the element to the list (it is unique). If it is equal, memcmp(list_element, file_element, SEQ_size) until the list element is greater, equal, or the file element is greater: if equal, discard it as a duplicate; if the list element is greater, add the new element before it.
P.S. This is the case when sorting from small to big.

This way you fast-forward to the elements of the same length, and then compare only until you find an equal element or the storing spot.

In the end you will have a sorted list of unique elements.

If you need it even faster, then build more complex structures from which you could construct a graph (data tree). In this area only your imagination and the content of the elements limit further optimization. More complex structures usually pay off with larger amounts of data.
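The sorted insertion described above can be sketched as follows. A minimal example only: since SEQ is not defined in this thread, the element data is treated as raw bytes (`char *`), and `insert_sorted` is an illustrative name:

```cpp
#include <cstring>
#include <cstdlib>

typedef struct sList sList;
struct sList {
    sList *next;
    int    SEQ_size;
    char  *element;   // SEQ treated as raw bytes for this sketch
};

// Insert into a list kept sorted by (SEQ_size, memcmp order).
// Duplicates are skipped; returns the (possibly new) head.
sList *insert_sorted(sList *head, char *elem, int size)
{
    sList **pp = &head;
    while (*pp) {
        sList *cur = *pp;
        if (cur->SEQ_size < size) { pp = &cur->next; continue; }
        if (cur->SEQ_size == size) {
            int c = memcmp(cur->element, elem, (size_t)size);
            if (c < 0) { pp = &cur->next; continue; }
            if (c == 0) return head;        // duplicate: do not insert
        }
        break;                              // insert before cur
    }
    sList *node = (sList *)malloc(sizeof *node);
    node->next = *pp;
    node->SEQ_size = size;
    node->element = elem;
    *pp = node;
    return head;
}
```

The pointer-to-pointer walk avoids a special case for inserting at the head.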
# 12  
Old 01-16-2015
Quote:
Originally Posted by yifangt
My problem is the implementation. I want to step to "my second stage" of programming by using those available libraries.
"hash table" isn't exactly a library, it's different enough from other data structures it's often hand-rolled. Generalizing it too much would run the risk of poor performance, you need to pick the right algorithms for your application. It has a lot of restrictions as well (hard to iterate, deletion can cause something like fragmentation, and it can't be sorted). I've seen a few attempts at building a library for it, but nothing I ever liked very much.

In the end it's not that complicated. It's a big array with strict rules about what data gets put in which element. I'd suggest "open chaining" for your table -- basically an array full of lists -- with an index that's not really hashed at all, just converted from ACGT into binary, two bits per letter. Four letters would be 8 bits, for an array 256 entries long, for example. Then you could just look up the first four letters of your sequence, find that list, and speedily check every possible match without having to brute-force the whole set.
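That index can be sketched like this. A minimal example, assuming all sequences are at least four letters of plain ACGT; `bucket_index` and `SeqTable` are illustrative names:

```cpp
#include <string>
#include <vector>

// Map the first four bases to an 8-bit bucket index:
// A=00, C=01, G=10, T=11, two bits per letter.
unsigned bucket_index(const std::string &seq)
{
    unsigned idx = 0;
    for (int i = 0; i < 4; ++i) {
        unsigned b = 0;
        switch (seq[i]) {
        case 'A': b = 0; break;
        case 'C': b = 1; break;
        case 'G': b = 2; break;
        case 'T': b = 3; break;
        }
        idx = (idx << 2) | b;
    }
    return idx;            // 0..255
}

// Open-chaining table: 256 buckets, each holding a list of sequences.
struct SeqTable {
    std::vector<std::string> buckets[256];
    void add(const std::string &s) { buckets[bucket_index(s)].push_back(s); }
    bool contains(const std::string &s) const {
        for (const std::string &t : buckets[bucket_index(s)])
            if (t == s) return true;
        return false;
    }
};
```

A lookup only scans the one bucket that shares the first four letters, roughly 1/256 of the data on uniform input.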

Last edited by Corona688; 01-16-2015 at 11:24 AM..