Quote:
Originally Posted by
frank_rizzo
sqlite? Berkeley Database?
sqlite doesn't seem appropriate: a relational database won't really take advantage of already-sorted data. Select a range of dates and it won't do a binary search to find the start and end of the range; it'll either do a table scan or consult some mammoth index.
I'm less familiar with Berkeley DB, but as a key-value store it doesn't appear to have any particular facility for sorted data either.
Quote:
Originally Posted by
methyl
How big is "huge"?
300 records a day doesn't sound like much, but that's roughly 100,000 records a year from a single logger, and ultimately there may be many loggers. Given enough time it could add up to hundreds of megabytes or several gigabytes of data, all of which should remain reasonably accessible.
Quote:
What Database Engines or High Level Languages do you have available?
I'm open to most open-source solutions. I've been using MySQL for most database tasks, but it, and relational databases in general, doesn't seem suited to large amounts of sorted data. Considering the complexity of the data (or rather, the lack of it), it seems like overkill in any case.
But, as I've said, I think I have this problem solved. I've written a fairly simple C application that partitions data across a configurable number of sorted flat files based on each record's first key; it can also select arbitrary ranges from them without grinding through a giant index.
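The idea can be sketched roughly as follows. This is not my actual program, just an illustration under some assumptions: fixed-width binary records, keys (timestamps) arriving in increasing order so each partition file stays sorted on disk, and made-up file names, bucket width, and partition count.

```c
/* Sketch: range-partition fixed-width records across NPART sorted flat
 * files by key, then answer range queries with a per-file binary search
 * instead of a table scan or a big index. Layout and names are assumed. */
#include <stdio.h>
#include <stdlib.h>

#define NPART 4            /* configurable number of partition files   */
#define SPAN  1000L        /* width of the key range each bucket holds */

struct rec { long key; double value; };

/* Which partition file a key belongs to. */
static int part_of(long key) { return (int)((key / SPAN) % NPART); }

/* Append one record; keys arrive in order, so each file stays sorted. */
static void append_rec(const struct rec *r)
{
    char name[32];
    snprintf(name, sizeof name, "part%02d.dat", part_of(r->key));
    FILE *f = fopen(name, "ab");
    if (!f) { perror(name); exit(1); }
    fwrite(r, sizeof *r, 1, f);
    fclose(f);
}

/* Binary-search one sorted file: index of first record with key >= want. */
static long seek_first(FILE *f, long want)
{
    struct rec r;
    fseek(f, 0, SEEK_END);
    long n = ftell(f) / (long)sizeof r, lo = 0, hi = n;
    while (lo < hi) {
        long mid = lo + (hi - lo) / 2;
        fseek(f, mid * (long)sizeof r, SEEK_SET);
        fread(&r, sizeof r, 1, f);
        if (r.key < want) lo = mid + 1; else hi = mid;
    }
    return lo;
}

/* Print every record with from <= key < to, one partition at a time. */
static void select_range(long from, long to)
{
    for (int p = 0; p < NPART; p++) {
        char name[32];
        snprintf(name, sizeof name, "part%02d.dat", p);
        FILE *f = fopen(name, "rb");
        if (!f) continue;               /* partition may not exist yet  */
        struct rec r;
        fseek(f, seek_first(f, from) * (long)sizeof r, SEEK_SET);
        while (fread(&r, sizeof r, 1, f) == 1 && r.key < to)
            printf("%ld %.2f\n", r.key, r.value);
        fclose(f);
    }
}
```

Because each file is sorted, `seek_first` lands on the start of the range in O(log n) seeks, and the sequential read stops at the first key past the range; only files that hold matching records contribute any output.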