Okay, so this one is a bit above my knowledge level so I'm hoping for some pointers.
Here's the scenario:
I have a backup system on my network that makes single-file images of the machines it's backing up and uses an SQLite database to keep track of everything. As is pretty typical for backup systems, it simply creates new backups and deletes the old ones. It also includes the date and time it ran as part of each file name. There is no way to alter this behavior.
I also have a cloud backup service that uses delta-blocking to only upload changed file blocks.
Now, because the local backup system is creating new files, if I simply point the cloud backup at those folders, it's going to try to upload the whole dang thing every time, since it sees each one as a completely new file (which it technically is). However, the new file has the same format as the old one, and after doing some tests I found that if I maintain a hard link with a single fixed name pointing to the most current backup image, the cloud backup service sees it as the same file and uses delta-blocking to upload only the data that differs between the new image and the previous one.
I am decent with SQL queries, so I've gotten as far as using the data in the DB and some fancy string manipulation to create the list of commands that would do this for me, i.e. the query output is several lines similar to:
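(with made-up machine names and paths standing in for my real ones)
Code:
ln -f /backups/server01/server01_2015-06-18_2300.img /backups/server01/server01_current.img
ln -f /backups/server02/server02_2015-06-18_2300.img /backups/server02/server02_current.img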
I'm stuck at creating a bash script or something that I can schedule to run that will rm the old links, run the query, and use the results to create the new links... unless you think there's a better or easier way. I'm only going in this direction because SQL is what I know. I was actually going to try to create a bash script that builds another bash script by echoing a few lines of commands, appending the query output somehow, and then executing what it just created... that's when I decided to post here.
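Roughly, the script-that-builds-a-script idea I was toying with looks like this (completely untested, and the DB path, table, and column names are made up just to show the shape of it):
Code:
#!/bin/bash
# Build a throwaway script: first clear the old links, then append the ln commands the query spits out
echo '#!/bin/bash' > /tmp/relink.sh
echo 'rm -f /backups/*_current.img' >> /tmp/relink.sh
sqlite3 /opt/backupsystem/backups.db \
  "SELECT 'ln ' || image_path || ' ' || link_path FROM latest_images;" >> /tmp/relink.sh
# Run what was just built
sh /tmp/relink.sh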
Question: doesn't the cloud storage have deduplication?
This should automatically find identical blocks in different files and avoid storing duplicate blocks on the storage devices. Of course, it must first upload the whole new file.
So, if the upload time/volume is the concern, your links can make sense.
On most platforms ln -f "overwrites" an existing link. So after each backup run you can simply relink file.upload to the newest image with ln -f. You can then check, by comparing inode numbers, which two files are hard-linked.
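A minimal example (the image file name here is just a placeholder for whatever the newest backup happens to be called):
Code:
ln -f /backups/machine1_2015-06-19_2300.img file.upload   # re-point file.upload at the newest image
ls -li /backups/machine1_2015-06-19_2300.img file.upload  # identical inode numbers = the two names are hard-linked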
Quote:
Originally Posted by MadeInGermany
Question: doesn't the cloud storage have deduplication?
This should automatically find identical blocks in different files and avoid storing duplicate blocks on the storage devices. Of course, it must first upload the whole new file.
So, if the upload time/volume is the concern, your links can make sense.
I assume they do, but yes, the time it takes to upload that amount of data is the issue I'm trying to solve.
Quote:
Originally Posted by MadeInGermany
On most platforms ln -f "overwrites" an existing link. So after each backup run you can simply relink file.upload to the newest image with ln -f. You can then check, by comparing inode numbers, which two files are hard-linked.
Awesome! That helps a lot. So now I've essentially got a query that creates the list of commands I need to run. Is this really as easy as making a 2-line bash script like so?
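Something along these lines is what I'm picturing (untested; the DB path, table, and column names are placeholders for my real ones):
Code:
#!/bin/bash
sqlite3 /opt/backupsystem/backups.db "SELECT 'ln -f ' || image_path || ' ' || link_path FROM latest_images;" | sh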