I have a script, shown below, that loads data from a txt file into a table,
but I have multiple input files (xyzload.txt, xyz1load.txt, xyz2load.txt, ...) in the unix folder.
Can we load these files into different tables (table1, table2, ...) in the load,
or
can we load these files into the same table with the file name as another column?
-----------------script-----------------
Code:
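For the second option (one table, with the file name as an extra column), a minimal shell sketch is to tag every row with its source file before the load. The `xyz*load.txt` pattern and the pipe delimiter are assumptions:

```shell
#!/bin/sh
# For every input file, emit its rows with the file name prepended as the
# first (pipe-delimited) column; the combined stream can then be loaded
# into one table whose first column holds the source file name.
for f in xyz*load.txt; do
    [ -f "$f" ] || continue
    awk -v fname="$f" '{ print fname "|" $0 }' "$f"
done > combined_load.txt
```

combined_load.txt would then be fed to the existing SQL*Loader/SQL*Plus load step, with one extra column added to the control file.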
I need some help from an Oracle and UNIX expertise point of view.
I have two tables, METADATA_A and METADATA_B, and I need to alternate loading them: if we load METADATA_A today, the following
week we would have to load METADATA_B.
There is a public synonym "METADATA" that sits on top of... (2 Replies)
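One way to alternate weekly is to derive the target table from the parity of the ISO week number and then repoint the synonym. This is only a sketch; the week-parity rule and the generated DDL are assumptions:

```shell
#!/bin/sh
# Even ISO weeks load METADATA_A, odd weeks METADATA_B, then the public
# synonym METADATA is repointed at whichever table was just loaded.
week=$(date +%V)
week=${week#0}   # strip a leading zero so "08"/"09" are not read as octal
if [ $((week % 2)) -eq 0 ]; then
    target=METADATA_A
else
    target=METADATA_B
fi
echo "CREATE OR REPLACE PUBLIC SYNONYM METADATA FOR $target;"
# After the load, the generated statement would be piped to sqlplus, e.g.:
# echo "CREATE OR REPLACE PUBLIC SYNONYM METADATA FOR $target;" | sqlplus -s user/pwd@DB
```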
Hi ,
Can you guys please help? I have a list of files (xaa, xab, xac, ..., xza, for example) and I need to load the 1st (xaa) and insert it into the table before proceeding to the 2nd, 3rd, and so forth.
In other words, before the 1st one has finished, the 2nd one shall not load and insert into the table, and so... (0 Replies)
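A plain loop already enforces this ordering, since each iteration only starts after the previous command returns. Here load_one is a placeholder for the real load step:

```shell
#!/bin/sh
# Process the split files strictly one at a time, stopping on the first failure.
load_one() {
    # placeholder: replace with the real load, e.g.
    # sqlldr user/pwd control=load.ctl data="$1"
    printf 'loaded %s\n' "$1"
}
for f in x??; do
    [ -f "$f" ] || continue
    load_one "$f" || { echo "load of $f failed, stopping" >&2; break; }
done
```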
Hi ,
I want to read the data from 9 tables in an Oracle DB into 9 different files in the same connection instance (session). I am able to get data from one table into one file with the code below:
X=`sqlplus -s user/pwd@DB <<eof
select col1 from table1;
EXIT;
eof`
echo "$X" > myfile
Can anyone... (2 Replies)
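SQL*Plus can SPOOL to a different file per query within a single session, so one generated script covers all 9 tables over one connection. The table and file names below are examples:

```shell
#!/bin/sh
# Generate one SQL*Plus script that spools each table to its own file;
# running it once keeps all 9 selects inside a single session.
{
    echo 'SET PAGESIZE 0 FEEDBACK OFF HEADING OFF'
    for i in 1 2 3 4 5 6 7 8 9; do
        echo "SPOOL myfile$i"
        echo "SELECT col1 FROM table$i;"
        echo 'SPOOL OFF'
    done
    echo 'EXIT'
} > extract_all.sql
# sqlplus -s user/pwd@DB @extract_all.sql
```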
I'm pretty new to the database world and I've run into a mental block of sorts. I've been unable to find the answer anywhere. Here's my problem: I have several tables and everything is as normalized as possible (as I've been led to understand normalization). Normalization has led to some... (1 Reply)
Say I have two tables like the ones below...
status
HId sName dName StartTime EndTime
1 E E 9:10 10:10
2 E F 9:15 10:15
3 G H 9:17 10:00
logic
Id devName capacity free Line
1 E 123 34 1
2 E 345 ... (3 Replies)
Hi everyone,
I once again got stuck with merging tables and was wondering if someone could help me out on that problem.
I have a number of tab-delimited tables which I need to merge into one big one. All tables have the same header but a different number of rows (this could be changed if... (6 Replies)
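A common awk idiom keeps the header from the first file and skips it in every later one. The file names and demo data below are stand-ins for the real tables:

```shell
#!/bin/sh
# Two tiny demo tables sharing a header (stand-ins for the real files):
printf 'name\tvalue\na\t1\n' > table1.tsv
printf 'name\tvalue\nb\t2\n' > table2.tsv
# FNR==1 is true on the first line of each input file; NR!=1 filters out
# every header except the very first, so merged.tsv keeps one header row.
awk 'FNR==1 && NR!=1 { next } { print }' table1.tsv table2.tsv > merged.tsv
cat merged.tsv
```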
multiple files to load into different tables,
I want to look up values from two different tables based on common columns and append them. The trick is that the column to be looked up is not fixed and varies, so it has to be detected from the header. How can I achieve this at once, for multiple data files, with the lookup tables fixed?
The two lookup... (5 Replies)
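Detecting the key column from the header and then joining against a fixed lookup table can be done in one awk pass per data file. The lookup file layout, the column name "id", and the demo data are assumptions:

```shell
#!/bin/sh
# Demo stand-ins: a fixed lookup table (key<TAB>value) and one data file
# whose key column ("id") may sit at any position in the header.
printf '1\tfoo\n2\tbar\n' > lookup.tsv
printf 'name\tid\nalpha\t1\nbeta\t2\ngamma\t9\n' > data1.tsv
awk -F'\t' -v OFS='\t' '
    NR==FNR { map[$1] = $2; next }                    # 1st file: load lookup
    FNR==1  { for (i = 1; i <= NF; i++) if ($i == "id") c = i
              print $0, "looked_up"; next }           # find key column
    { print $0, ($c in map ? map[$c] : "NA") }        # append matched value
' lookup.tsv data1.tsv
```

Running the awk step in a loop over several data files extends this to the multi-file case, since the lookup table is reloaded cheaply each time.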
Hi,
I need to load data from two files into a single table.
My requirement is that I get two files in which a few columns' data are mandatory.
These files are identified based on the file name.
For example, I have two files, ABCFile and BCDFile. ABCFile has mandatory data in columns 3 and 4... (0 Replies)
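One sketch of this is to pick the mandatory columns from the file name, then tag rows missing those columns before the load. ABCFile's columns 3 and 4 follow the post; the BCDFile columns and the demo data are assumptions:

```shell
#!/bin/sh
# Tag each tab-delimited row OK or REJECT depending on whether the
# file's mandatory columns are non-empty.
check_file() {
    case $1 in
        ABCFile*) cols='3 4' ;;   # per the post
        BCDFile*) cols='1 2' ;;   # assumed
        *)        cols='' ;;
    esac
    awk -F'\t' -v cols="$cols" '
        BEGIN { n = split(cols, c, " ") }
        { ok = 1
          for (i = 1; i <= n; i++) { j = c[i]; if ($j == "") ok = 0 }
          print (ok ? "OK" : "REJECT") "\t" $0 }
    ' "$1"
}
# demo file: the second row is missing column 4
printf 'a\tb\tc\td\na\tb\tc\t\n' > ABCFile.txt
check_file ABCFile.txt
```

Only the OK rows would then be passed on to the actual table load.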
Hello All,
I just wanted to export multiple tables from Oracle SQL to a csv file using a unix shell script, and the code below exports only the first table.
Can you please suggest why? Or any better idea?
export FILE="/abc/autom/file/geo_JOB.csv"
Export=`sqlplus -s dev01/password@dEV3... (16 Replies)
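A likely cause is spooling only once: each table needs its own SPOOL/SPOOL OFF pair. A sketch that generates one script with a SPOOL block per table (table names are examples; SET MARKUP CSV ON needs SQL*Plus 12.2+):

```shell
#!/bin/sh
# Build one SQL*Plus script with a SPOOL block per table so every table
# lands in its own CSV; spooling once captures only the first query.
{
    echo 'SET MARKUP CSV ON'
    echo 'SET FEEDBACK OFF PAGESIZE 0'
    for t in geo_job geo_loc geo_site; do
        echo "SPOOL /abc/autom/file/${t}.csv"
        echo "SELECT * FROM ${t};"
        echo 'SPOOL OFF'
    done
    echo 'EXIT'
} > export_all.sql
# sqlplus -s user/pwd@DB @export_all.sql
```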
Discussion started by: Hope
BULK_LOADER(1)                                                BULK_LOADER(1)

NAME
bulk_loader - PgQ consumer that loads urlencoded records to slow databases
SYNOPSIS
bulk_loader.py [switches] config.ini
DESCRIPTION
bulk_loader is a PgQ consumer that reads urlencoded records from a source queue and writes them into tables according to a configuration file.
It is targeted at slow databases that cannot handle applying each row as a separate statement. It was originally written for BizgresMPP/greenplumDB,
which have very high per-statement overhead, but it can also be used to load a regular PostgreSQL database that cannot manage regular
replication.
Behaviour properties:
- reads urlencoded "logutriga" records
- does not do partitioning, but optionally allows redirecting table events
- does not keep event order
- always loads data with COPY, either directly into the main table (INSERTs) or into temp tables (UPDATE/COPY), then applies from there
Events are usually produced by pgq.logutriga(). Logutriga adds all the data of the record into the event (also in the case of updates and
deletes).
QUICK-START
Basic bulk_loader setup and usage can be summarized by the following steps:
1. pgq and logutriga must be installed in the source databases. See the pgqadm man page for details. The target database must also have the
pgq_ext schema.
2. edit a bulk_loader configuration file, say bulk_loader_sample.ini
3. create source queue
$ pgqadm.py ticker.ini create <queue>
4. Tune source queue to have big batches:
$ pgqadm.py ticker.ini config <queue> ticker_max_count="10000" ticker_max_lag="10 minutes" ticker_idle_period="10 minutes"
5. create target database and tables in it.
6. launch bulk_loader in daemon mode
$ bulk_loader.py -d bulk_loader_sample.ini
7. start producing events (create logutriga triggers on tables):
CREATE OR REPLACE TRIGGER trig_bulk_replica AFTER INSERT OR UPDATE ON some_table
FOR EACH ROW EXECUTE PROCEDURE pgq.logutriga(<queue>)
CONFIG
Common configuration parameters
job_name
Name for the particular job the script does. The script will log under this name to logdb/logserver. The name is also used as the default for
the PgQ consumer name. It should be unique.
pidfile
Location for the pid file. If not given, the script is not allowed to daemonize.
logfile
Location for log file.
loop_delay
For a continuously running process, how long to sleep after each work loop, in seconds. Default: 1.
connection_lifetime
Close and reconnect database connections that are older than this.
log_count
Number of log files to keep. Default: 3
log_size
Max size for one log file. File is rotated if max size is reached. Default: 10485760 (10M)
use_skylog
If set, search for [./skylog.ini, ~/.skylog.ini, /etc/skylog.ini]. If found, the file is used as the config file for Python's logging
module. It allows setting up a fully customizable logging setup.
Common PgQ consumer parameters
pgq_queue_name
Queue name to attach to. No default.
pgq_consumer_id
Consumer ID to use when registering. Default: %(job_name)s
Config options specific to bulk_loader
src_db
Connect string for source database where the queue resides.
dst_db
Connect string for target database where the tables should be created.
remap_tables
Optional parameter for table redirection. Contains comma-separated list of <oldname>:<newname> pairs. Eg: oldtable1:newtable1,
oldtable2:newtable2.
load_method
Optional parameter for load method selection. Available options:
0
UPDATE as UPDATE from temp table. This is default.
1
UPDATE as DELETE+COPY from temp table.
2
merge INSERTs with UPDATEs, then do DELETE+COPY from temp table.
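Putting the options above together, a minimal bulk_loader_sample.ini might look like the fragment below; the connect strings, queue name, and file paths are placeholders:

```ini
[bulk_loader]
job_name       = bulk_loader_sample
src_db         = dbname=sourcedb
dst_db         = dbname=targetdb
pgq_queue_name = bulk_queue
load_method    = 0
logfile        = log/%(job_name)s.log
pidfile        = pid/%(job_name)s.pid
```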
LOGUTRIGA EVENT FORMAT
The PgQ trigger function pgq.logutriga() sends a table change event into the queue in the following format:
ev_type
(op || ":" || pkey_fields), where op is either "I", "U" or "D", corresponding to insert, update or delete, and pkey_fields is a
comma-separated list of the table's primary key fields. The operation type is always present, but the pkey_fields list can be empty if the
table has no primary keys. Example: I:col1,col2
ev_data
Urlencoded record of data. It uses db-specific urlencoding where the existence of = is meaningful: a missing = means NULL, a present = means
a literal value. Example: id=3&name=str&nullvalue&emptyvalue=
ev_extra1
Fully qualified table name.
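The NULL-vs-empty convention in ev_data can be illustrated with a quick awk one-liner; percent-decoding of %XX escapes is omitted here for brevity:

```shell
# Split an ev_data payload on "&": a field without "=" is NULL, while
# "name=" is an empty string, per the logutriga format above.
echo 'id=3&name=str&nullvalue&emptyvalue=' | awk -F'&' '{
    for (i = 1; i <= NF; i++) {
        if (index($i, "=") == 0) {
            printf "%s -> NULL\n", $i
        } else {
            p = index($i, "=")
            printf "%s -> \"%s\"\n", substr($i, 1, p-1), substr($i, p+1)
        }
    }
}'
# id -> "3"
# name -> "str"
# nullvalue -> NULL
# emptyvalue -> ""
```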
COMMAND LINE SWITCHES
The following switches are common to all skytools.DBScript-based Python programs.
-h, --help
show help message and exit
-q, --quiet
make program silent
-v, --verbose
make program more verbose
-d, --daemon
make the program go into the background
The following switches are used to control an already running process. The pidfile is read from the config, then the signal is sent to the
process id specified there.
-r, --reload
reload config (send SIGHUP)
-s, --stop
stop program safely (send SIGINT)
-k, --kill
kill program immediately (send SIGTERM)
03/13/2012 BULK_LOADER(1)