Creating script to read many files and load into database via single control file
Hi,
I have many files but with only 2 names. I want to load the data of those files into the database through sqlldr with a single control file. How can I do that?
Example:
Now these files should be loaded into the same database but into different tables.
Control file:
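The control file itself was trimmed from the post. As a hedged sketch (table names, column names, and the WHEN conditions are hypothetical, not from the post), a single SQL*Loader control file can route records from multiple input files into different tables with separate INTO TABLE clauses, each guarded by a WHEN condition:

```
LOAD DATA
INFILE 'file_a.csv'
INFILE 'file_b.csv'
APPEND
INTO TABLE table_a
  WHEN (1:1) = 'A'
  FIELDS TERMINATED BY ','
  (col1, col2)
INTO TABLE table_b
  WHEN (1:1) = 'B'
  FIELDS TERMINATED BY ','
  (col1, col2)
```

Each WHEN clause tests a field or column position of the incoming record, so records from the two files land in the table whose condition they match.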
Since I'm new to scripting I'm finding it difficult to code a script. The script has to be an executable with 2 parameters passed to it. The parameters are
1. The Control file name(.ctl file)
2. The Data file name(.csv file)
Does anybody have an idea about it? (3 Replies)
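A minimal sketch of such a wrapper (the `scott/tiger` connect string is a placeholder, not from the post): it validates the two arguments by extension and composes the sqlldr command line; the real invocation is left as a comment since credentials vary per site.

```shell
#!/bin/sh
# build_sqlldr_cmd CTL CSV -- validate the two arguments and compose the
# sqlldr command line (userid is a placeholder; substitute your connect string)
build_sqlldr_cmd() {
    ctl=$1
    csv=$2
    case $ctl in *.ctl) ;; *) echo "error: $ctl is not a .ctl file" >&2; return 1 ;; esac
    case $csv in *.csv) ;; *) echo "error: $csv is not a .csv file" >&2; return 1 ;; esac
    echo "sqlldr userid=scott/tiger control=$ctl data=$csv log=${csv%.csv}.log"
}

# In a real script you would run the command instead of echoing it:
#   eval "$(build_sqlldr_cmd "$1" "$2")"
build_sqlldr_cmd load.ctl input.csv
```

Printing the command first makes the script easy to dry-run before pointing it at a live database.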
Hi, can you guys help me out with this script?
Below is a portion of a text file; it contains these:
GEF001 000093625 MKL002510 000001 000000 000000 000000 000000 000000 000001
GEF001 000093625 MKL003604 000001 000000 000000 000000 000000 000000 000001
GEF001 000093625 MKL005675 000001... (1 Reply)
Need shell script to read two files at the same time and print output in a single file
Example I have two files 1) file1.txt 2) file2.txt
File1.txt contains
Aaa
Bbb
Ccc
Ddd
Eee
Fff
File2.txt contains
Zzz
Yyy
Xxx (10 Replies)
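For the two-file case above, paste(1) does exactly this: it reads both files in step and joins line N of each onto a single output line. A small sketch using the sample data from the post:

```shell
# recreate the two sample inputs from the post
printf 'Aaa\nBbb\nCcc\nDdd\nEee\nFff\n' > file1.txt
printf 'Zzz\nYyy\nXxx\n' > file2.txt

# paste reads both files in parallel; -d' ' joins each pair with a space
paste -d' ' file1.txt file2.txt > merged.txt

head -3 merged.txt
```

When one file is shorter (as here), paste simply emits empty fields for the exhausted file.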
I have to create a single file from three files, Please see below for samples:
day.txt
20090101
20090102
item.txt
123456789101
12345678910209
1234567891
str.txt
1
12
123
output.txt
20090101123456789101 1 0
2009010112345678910209 12 ... (2 Replies)
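From the sample output, each line looks like line N of day.txt concatenated with line N of item.txt, then line N of str.txt, then a constant 0 — but the excerpt is truncated, so that pairing and the meaning of the trailing 0 are assumptions. A hedged sketch under that reading (with equal-length sample files; the post's files differed in length):

```shell
# recreate trimmed, equal-length samples of the three inputs
printf '20090101\n20090102\n' > day.txt
printf '123456789101\n12345678910209\n' > item.txt
printf '1\n12\n' > str.txt

# pair line N of each file, concatenate day+item, then the str value,
# then a constant 0 (an assumption based on the sample output)
paste day.txt item.txt str.txt | awk '{print $1 $2, $3, 0}' > output.txt
```

awk's `$1 $2` concatenates the first two fields with no separator, matching the joined day+item value in the sample.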
Hi,
We have almost 45,000 data files created by a script daily. The file names are of the format ODS.POS.<pharmacyid>.<table name>.<timestamp>.dat. There will be one data file like this for each pharmacy and each table (around 45,000 in total).
The requirement is to create a control file for each... (2 Replies)
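Since the table name is embedded in the file name, one control file per data file can be generated in a loop (the control-file body below is a hypothetical sketch; real column lists would come from each table's definition):

```shell
# sample file matching the naming scheme, for the demo only
touch ODS.POS.123.CUSTOMER.20200101120000.dat

for f in ODS.POS.*.dat; do
    table=$(echo "$f" | cut -d. -f4)        # 4th dot-separated field = table name
    # emit a minimal control file next to each data file
    cat > "${f%.dat}.ctl" <<EOF
LOAD DATA
INFILE '$f'
APPEND INTO TABLE $table
FIELDS TERMINATED BY ','
EOF
done
```

The heredoc is unquoted on purpose so `$f` and `$table` expand into each generated control file.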
Hello everyone,
I am new to shell scripting and to loading data into a database.
I want to load data into an Oracle database using SQL*Loader. Can someone please explain why we need a Unix shell script to load the data into the database? Also, can someone please explain what has to be in that script?... (5 Replies)
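One common reason for the wrapper script is error handling and automation around the load: sqlldr reports its outcome through its exit status (0 success, 1 failure, 2 warning, 3 fatal, per Oracle's documentation), and the shell script can react to that, archive the data file, or mail the log. A hedged sketch of just the status handling:

```shell
# describe_sqlldr_exit STATUS -- translate an sqlldr exit status into a message
# (0=success, 1=failure, 2=warning, 3=fatal, per Oracle docs)
describe_sqlldr_exit() {
    case $1 in
        0) echo "load succeeded" ;;
        2) echo "load completed with warnings (check the log)" ;;
        *) echo "load failed (exit status $1)" ;;
    esac
}

# typical use in a wrapper (connect string is a placeholder):
#   sqlldr userid=scott/tiger control=load.ctl data=data.csv
#   describe_sqlldr_exit $?
describe_sqlldr_exit 0
```

Without a wrapper like this, a failed or partial load can go unnoticed until someone reads the log by hand.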
I need to read a text file that contains columns of data; I need to read the 1st column as a function to call, and the others are the data I need to get into a ksh script.
I am quite new to ksh scripting, and I am not very sure how to read each row line by line and the data in each column of that line, set... (3 Replies)
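The usual pattern (works in ksh and POSIX sh alike) is a `while read` loop: `read -r func args` puts the first word of each line into `func` and the remainder into `args`, and a `case` whitelist dispatches to the function. The function names and input below are hypothetical stand-ins:

```shell
# sample input: first column names the function, the rest are its arguments
printf 'step_one a b\nstep_two c\n' > commands.txt

step_one() { echo "step_one got: $*"; }
step_two() { echo "step_two got: $*"; }

# read each line; the first word goes into func, the remainder into args
while read -r func args; do
    case $func in
        step_one|step_two) "$func" $args ;;     # whitelist before dispatching
        *) echo "unknown function: $func" >&2 ;;
    esac
done < commands.txt > out.txt
```

The `case` whitelist matters: calling `"$func"` directly on untrusted input would let the data file run arbitrary commands.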
Hi,
I want your help to see if there is a way we can load a set of files lying on a Unix server into my database using SQL*Loader.
Also, the person who will be running SQL*Loader does not have access to the Unix system.
So is there a way I can provide an SQL*Loader script wherein... (1 Reply)
All,
I am trying to create a report on the duration of an ETL load, from file arrival to the final load into a database, for SLAs.
Does anyone have any guidance or ideas on how metadata can be extracted; information about a file: like file name, created timestamp, count of records, and load... (1 Reply)
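The file-side metadata is straightforward to gather with standard tools; a small sketch (the file name and pipe-separated report layout are hypothetical, and `date -r FILE` assumes GNU coreutils):

```shell
# sample data file standing in for an ETL feed
printf 'rec1\nrec2\nrec3\n' > sample.dat

# gather file name, modification timestamp, and record count
name=$(basename sample.dat)
mtime=$(date -r sample.dat '+%Y-%m-%d %H:%M:%S')   # GNU date: mtime of the file
count=$(wc -l < sample.dat | tr -d ' ')            # strip BSD wc's padding
echo "$name|$mtime|$count" > report.txt
```

Note this gives the file's modification time; a true "arrival" timestamp usually has to be recorded by whatever process delivers the file, since Unix does not keep a creation time portably.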
I have a requirement where, I need to create a control file which will have 3 columns in the header row as below:
Filename
Count
Checksum
The above control file has to contain the metadata described above... (2 Replies)
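Those three values can be computed with `wc` and `cksum`; a hedged sketch (the pipe-separated layout and file names are assumptions, since the post does not specify a delimiter):

```shell
# sample data file
printf 'a,1\nb,2\n' > data.csv

# emit a control file: header row, then one metadata row per data file
{
    echo "Filename|Count|Checksum"
    echo "data.csv|$(wc -l < data.csv | tr -d ' ')|$(cksum data.csv | cut -d' ' -f1)"
} > data.ctl
```

cksum prints "CRC byte-count filename", so the first space-separated field is the checksum; swap in `md5sum`/`sha256sum` if the consumer expects a cryptographic digest.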
Discussion started by: abhilashnair
2 Replies
LEARN ABOUT DEBIAN
bp_fast_load_gff
BP_FAST_LOAD_GFF(1p) User Contributed Perl Documentation BP_FAST_LOAD_GFF(1p)NAME
bp_fast_load_gff.pl - Fast-load a Bio::DB::GFF database from GFF files.
SYNOPSIS
% bp_fast_load_gff.pl -d testdb dna1.fa dna2.fa features1.gff features2.gff ...
DESCRIPTION
This script loads a Bio::DB::GFF database with the features contained in a list of GFF files and/or FASTA sequence files. You must use the
exact variant of GFF described in Bio::DB::GFF. Various command-line options allow you to control which database to load and whether to
allow an existing database to be overwritten.
This script is similar to load_gff.pl, but is much faster. However, it is hard-coded to use MySQL and probably only works on Unix
platforms due to its reliance on pipes. See bp_load_gff.pl for an incremental loader that works with all databases supported by
Bio::DB::GFF, and bp_bulk_load_gff.pl for a fast MySQL loader that supports all platforms.
NOTES
If the filename is given as "-" then the input is taken from standard input. Compressed files (.gz, .Z, .bz2) are automatically
uncompressed.
FASTA format files are distinguished from GFF files by their filename extensions. Files ending in .fa, .fasta, .fast, .seq, .dna and their
uppercase variants are treated as FASTA files. Everything else is treated as a GFF file. If you wish to load FASTA files from STDIN,
then use the -f command-line switch with an argument of '-', as in
gunzip -c my_data.fa.gz | bp_fast_load_gff.pl -d test -f -
The nature of the load requires that the database be on the local machine and that the indicated user have the "file" privilege to load the
tables and have enough room in /usr/tmp (or whatever is specified by the $TMPDIR environment variable), to hold the tables transiently.
If your MySQL is version 3.22.6 and was compiled using the "load local file" option, then you may be able to load remote databases with
local data using the --local option.
About maxfeature: the default value is 100,000,000 bases. If you have features that are close to or greater than 100 Mb in length, then the
value of maxfeature should be increased to 1,000,000,000. This value must be a power of 10.
If the list of GFF or fasta files exceeds the kernel limit for the maximum number of command-line arguments, use the --long_list
/path/to/files option.
The adaptor used is dbi::mysqlopt. There is currently no way to change this.
COMMAND-LINE OPTIONS
Command-line options can be abbreviated to single-letter options. e.g. -d instead of --database.
--database <dsn> Mysql database name
--create Reinitialize/create data tables without asking
--local Try to load a remote database using local data.
--user Username to log in as
--fasta File or directory containing fasta files to load
--password Password to use for authentication
--long_list Directory containing a very large number of
GFF and/or FASTA files
--maxfeature Set the value of the maximum feature size (default 100Mb; must be a power of 10)
--group A list of one or more tag names (comma or space separated)
to be used for grouping in the 9th column.
--gff3_munge Activate GFF3 name munging (see Bio::DB::GFF)
--summary Generate summary statistics for drawing coverage histograms.
This can be run on a previously loaded database or during
the load.
--Temporary Location of a writable scratch directory
SEE ALSO
Bio::DB::GFF, bulk_load_gff.pl, load_gff.pl
AUTHOR
Lincoln Stein, lstein@cshl.org
Copyright (c) 2002 Cold Spring Harbor Laboratory
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself. See DISCLAIMER.txt for
disclaimers of warranty.
perl v5.14.2 2012-03-02 BP_FAST_LOAD_GFF(1p)