Best way to dump metadata to file: when and by who?


 
# 1  
Old 06-29-2009

Hi,

my application (actually a library) indexes files of many GB, producing tables (arrays of offset and length of the indexed data) for later reuse. The tables produced are pretty big too; so big that I run out of memory in my process (3GB limit) when indexing more than 8GB of file or so. Although I could fork another process to work around the per-process memory limit, that would not really fix the problem, so I'd like to dump the tables to a file in order to free the memory, and also avoid re-indexing the same file more than once.

Bear in mind that currently the tables produced are kept in memory in a singly-linked list, shared with another thread that uses it to produce another list of filtered data, so I'd rather not change this scheme. The other thread only accesses the list once the whole file has been indexed.
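To make sure we're talking about the same thing, here is a minimal sketch of the data structures I described above (all names and sizes are illustrative, not the actual code):

```c
#include <stddef.h>
#include <stdint.h>

#define ENTRIES_PER_TABLE 256   /* 256 entries * 16 bytes = 4 KB per table */

/* One index entry: where a record lives in the big data file. */
typedef struct {
    uint64_t offset;   /* byte offset of the record */
    uint64_t length;   /* length of the record in bytes */
} index_entry;

/* A fixed-size table of entries, chained into a singly-linked list. */
typedef struct index_table {
    index_entry entries[ENTRIES_PER_TABLE];
    size_t count;                 /* entries actually used */
    struct index_table *next;     /* next table in the list, or NULL */
} index_table;
```

The indexing thread fills a table, and when it's full, allocates a new one and links it in.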

Now, the questions I'm asking myself are:

- When is the best time to dump the tables to a file, and how?

Dumping each table as soon as it gets full doesn't sound very efficient to me. Would I then keep nothing in memory, leaving the linked list always empty? If I decide to keep N tables in memory and dump every N, how do I avoid checking how many tables I have in memory on every cycle?

- Who should dump the produced metadata to file? A different thread? The same thread that indexes the data? I also wouldn't like to produce metadata files when the processed file is less than a gigabyte (the small-file case), but at the same time I wouldn't want to complicate the code of the indexer, which right now is pretty simple: parse, find the data, create a table entry, add it. If the table is full, create another one and add it to the linked list.

- Let's say I figure out (thanks to you) the best way, in my case, to dump the metadata. What policy should I use to load the data back, so that the other thread can filter the indexed data without radically changing the way it works now (i.e. through the linked list)?

One solution that comes to mind, and would avoid a drastic change to my scheme, is to create a "list manager" that provides an interface to add and retrieve elements from the list. This entity (either a thread or a process) would take care of keeping some data in memory (the linked list) and the rest in the file.
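Something like the following toy implementation, to show what I mean. It keeps a small window of tables in memory and transparently spills the oldest ones to a file; every name here is hypothetical, and the real thing would need proper error handling:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define ENTRIES_PER_TABLE 256

typedef struct { unsigned long long offset, length; } index_entry;
typedef struct { index_entry entries[ENTRIES_PER_TABLE]; size_t count; } index_table;

/* Toy "list manager": tables 0..n_spilled-1 live in the spill file,
 * the most recent ones in a small in-memory window. */
typedef struct {
    FILE *spill;        /* metadata file holding spilled tables */
    index_table *mem;   /* in-memory window of recent tables */
    size_t cap;         /* max tables kept in memory */
    size_t n_mem;       /* tables currently in memory */
    size_t n_spilled;   /* tables already written to the file */
} list_manager;

list_manager *lm_open(const char *path, size_t cap) {
    list_manager *lm = calloc(1, sizeof *lm);
    lm->spill = fopen(path, "w+b");
    lm->mem = malloc(cap * sizeof *lm->mem);
    lm->cap = cap;
    return lm;
}

/* Append a table; if the window is full, spill the oldest to disk. */
void lm_append(list_manager *lm, const index_table *t) {
    if (lm->n_mem == lm->cap) {
        fseek(lm->spill, 0, SEEK_END);
        fwrite(&lm->mem[0], sizeof *t, 1, lm->spill);
        memmove(lm->mem, lm->mem + 1, (lm->cap - 1) * sizeof *lm->mem);
        lm->n_mem--;
        lm->n_spilled++;
    }
    lm->mem[lm->n_mem++] = *t;
}

/* Fetch table i, reading spilled tables back from disk if needed. */
void lm_fetch(list_manager *lm, size_t i, index_table *out) {
    if (i < lm->n_spilled) {
        fseek(lm->spill, (long)(i * sizeof *out), SEEK_SET);
        fread(out, sizeof *out, 1, lm->spill);
    } else {
        *out = lm->mem[i - lm->n_spilled];
    }
}
```

The caller (either thread) just asks for table i and never knows whether it came from memory or from the file.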

Please share your skills and experience with me! :-)

Thanks in advance.

Regards,
S.
# 2  
Old 06-30-2009
Wow, what a question. Are you re-engineering a database system?
Quote:
- When and how it's best time to dump the tables to a file?
On slightly-less than gigabyte boundaries. Actually, 256 kB blocks also work very well.
Quote:
- Who should dump the metadata produced to file? Different thread?
If it's in a different thread, what's the point? You can't just free the memory if the other thread still has a lock on it.
Quote:
What policy should I use to load the data
I don't think that's answerable unless one really knows your existing software architecture.
# 3  
Old 07-08-2009
Quote:
Originally Posted by otheus
Wow, what a question. Are you re-engineering a database system?
Nope. I'm just trying to write an application that is as efficient as possible and needs to dump index tables, and I'd like to learn as much as possible from the experience.

Quote:
Originally Posted by otheus
On slightly-less than gigabyte boundaries. Actually, 256 kB blocks also work very well.
Do you mean executing an fwrite of a 256KB buffer? Currently I have a list where every element (table) is an array of N entries, for a total size of 4KB per array, and I dump each table at once with a single fwrite.
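If I understand the suggestion correctly, I wouldn't even have to change my per-table fwrite calls to get 256 KB writes; stdio can coalesce them via setvbuf. A sketch of what I have in mind (the buffer size is just the suggested value, not something I've measured):

```c
#include <stdio.h>

#define DUMP_BUF_SIZE (256 * 1024)   /* 256 KB, as suggested */

/* Open the metadata file with a 256 KB stdio buffer so that many
 * small 4 KB table dumps are coalesced into fewer, larger writes. */
FILE *open_metadata_file(const char *path) {
    FILE *f = fopen(path, "wb");
    if (!f)
        return NULL;
    /* Passing NULL lets stdio allocate the buffer itself. */
    if (setvbuf(f, NULL, _IOFBF, DUMP_BUF_SIZE) != 0) {
        fclose(f);
        return NULL;
    }
    return f;
}
```

That way the indexer keeps doing one fwrite per 4 KB table, and the actual write(2) calls go out in ~256 KB chunks.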

Quote:
Originally Posted by otheus
If it's in a different thread, what's the point? You can't just free the memory if the other thread still has a lock on it.

I don't think that's answerable unless one really knows your existing software architecture.
Basically one thread (A) indexes the file while another thread (B) waits for it to finish, in order to use the produced tables (which I used to keep in memory) to process the data in the file. The problem is that the files indexed are huge (~30GB) and produce more than 4GB of metadata, which I can't keep in memory (3GB limit per process), so at one point or another I have to dump the produced data to a file in order to free the memory.

The other thread (B), based on a flag, either reads the tables from the file or from the list in memory.
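Roughly, the dispatch in thread B looks like this (hypothetical names; it assumes thread A sets the flag before signalling completion, and the real filter replaces the stand-in handle_entry):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

typedef struct { unsigned long long offset, length; } index_entry;

/* Set by the indexing thread (A) before it signals completion. */
static bool tables_on_disk = false;

static size_t entries_seen = 0;
static void handle_entry(const index_entry *e) {   /* stand-in filter */
    (void)e;
    entries_seen++;
}

/* Thread B: walk every entry, wherever the tables ended up. */
static void process_entries(FILE *meta, const index_entry *mem, size_t n) {
    if (tables_on_disk) {
        index_entry e;
        rewind(meta);
        while (fread(&e, sizeof e, 1, meta) == 1)
            handle_entry(&e);
    } else {
        for (size_t i = 0; i < n; i++)
            handle_entry(&mem[i]);
    }
}
```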

Thanks for your help,
S.
# 4  
Old 07-08-2009
I cannot help other than to quote an old software design maxim:

Quote:
don't reinvent the wheel
# 5  
Old 07-08-2009
Quote:
Originally Posted by otheus
I cannot help other than to quote an old software design maxim:
You mean I should use a database for holding the tables, like sqlite ?
# 6  
Old 07-08-2009
Which database primarily depends on how many indexable and unique columns you have, and on the ratio of readers to writers. sqlite? LOL. I was thinking more along the lines of MySQL or BerkeleyDB/Sleepycat DB.
# 7  
Old 07-08-2009
Quote:
Originally Posted by otheus
Which database primarily depends on how many indexable and unique columns you have, and on the ratio of readers to writers. sqlite? LOL. I was thinking more along the lines of MySQL or BerkeleyDB/Sleepycat DB.
That's why I wouldn't want to use a database: the work involved, and the dependency introduced, is not worth it in my case (IMHO).

I only have one writer, and one reader.

Data is written sequentially and never modified: write once, read many.

An ad-hoc solution, I thought, would be my best way to go.

I appreciate your thought on this.

Thanks,
S.