1. Insert data in a way that it's always sorted, as Corona688 has already noted.
2. Don't use new/delete in a loop.
3. Don't use C++ I/O routines - use C open/read/close or other low-level routines.
---------- Post updated at 04:34 PM ---------- Previous update was at 04:34 PM ----------
My problem is the implementation. I want to try programming with some available "libraries". I'd appreciate any code example on top of mine. This may sound lazy, but I'm self-learning by practice. Of course I googled for a while, but didn't find any similar example. Thanks a lot!
I would agree with achenle's points 1 (as Corona688 noted) and 3.
On top of that I would just use a simple dynamic list where each element holds a pointer to the next, something like this:
When reading elements from the file, once an element is complete, start comparing SEQ_size from the start of the list until you reach an element whose size is equal to or larger than the one from the file. If it is larger, insert the file element there (it is unique). If the sizes are equal, memcmp(list_element, file_element, SEQ_size): keep walking while the list element compares smaller; if one compares equal, discard the file element as a duplicate; as soon as a list element compares greater, insert the file element before it.
P.S. This is the case when sorting from small to big.
This way you fast-forward past elements of a different length, and then compare only until you find an equal element or the insertion spot.
In the end you will have a sorted list of unique elements.
If you need it even faster, build more elaborate structures from which you can form a graph (a data tree). In this area only your imagination and the content of the elements limit further optimization. More complex structures usually pay off with larger amounts of data.
My problem is the implementation. I want to step to "my second stage" of programming by using those available libraries.
"hash table" isn't exactly a library, it's different enough from other data structures it's often hand-rolled. Generalizing it too much would run the risk of poor performance, you need to pick the right algorithms for your application. It has a lot of restrictions as well (hard to iterate, deletion can cause something like fragmentation, and it can't be sorted). I've seen a few attempts at building a library for it, but nothing I ever liked very much.
In the end it's not that complicated. It's a big array with strict rules about what data gets put in which element. I'd suggest "open chaining" for your table (basically an array full of lists), with an index that's not really hashed at all, just converted from ACGT into binary. Four letters at 2 bits each would be 8 bits, giving an array 256 entries long, for example. Then you could just look up the first four letters of your sequence, find that list, and speedily check everything that might contain your sequence without having to brute-force the whole set.
Last edited by Corona688; 01-16-2015 at 11:24 AM..