05-21-2014
For speed, start at the DB and work back:
Teradata FastLoad - Wikipedia, the free encyclopedia
FastLoad works with flat files and empty tables; Perl can write flat files, and you can create an empty staging table. The big challenge is getting data into the DB; table-to-table moves are usually much faster and lower overhead. For streaming data, think mini-batches and be amazed how near real time it can be. For the lowest latency you could even do simple inserts over a plain connection, buffering input with another thread, until a buffer high water line is passed, and then switch to mini-batches until a low water line is passed. Writing the next file (in another thread) while the current one is being FastLoad'd and unstaged gives you high peak capacity with minimal latency. The higher the load, the bigger the batches and the latency get, but economy of scale softens the curve. If the buffering format is FastLoad-compatible, moving the buffer to a file is fast and easy.
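The insert-versus-mini-batch switch above can be sketched as a buffer with two water marks. A minimal sketch, assuming a single writer thread; the thresholds and the two loader callbacks are illustrative placeholders, not Teradata APIs:

```python
# Hedged sketch of the high/low water-mark switch: single inserts for low
# latency until the backlog passes the high water line, then mini-batches
# until it drops below the low water line again.
class WatermarkLoader:
    def __init__(self, insert_row, load_batch, high=1000, low=50):
        self.insert_row = insert_row   # low-latency path: one INSERT per row
        self.load_batch = load_batch   # high-throughput path: FastLoad a file
        self.high, self.low = high, low
        self.buffer = []
        self.batch_mode = False

    def feed(self, row):
        """Called by the receiving thread for every incoming row."""
        self.buffer.append(row)
        if len(self.buffer) >= self.high:
            self.batch_mode = True     # high water line passed

    def drain(self):
        """Called periodically by the writer thread."""
        if self.batch_mode and len(self.buffer) > self.low:
            # mini-batch mode: hand the whole backlog to FastLoad at once
            batch, self.buffer = self.buffer, []
            self.load_batch(batch)
        else:
            self.batch_mode = False    # low water line passed
            while self.buffer:
                self.insert_row(self.buffer.pop(0))
```

The bigger the backlog, the bigger the batch handed to `load_batch`, which is the economy-of-scale effect described above.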
10 More Discussions You Might Find Interesting
1. Shell Programming and Scripting
I want to connect to BTEQ from a Perl script, but I am unaware of how to do this.
In a shell script it is very simple:
bteq <<-END
.logon .....
...
.quit ...
END
but what is the syntax for Perl?
Please help me out.
Thanks
Kunal (1 Reply)
Discussion started by: kunal_dixit
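No answer is preserved above, but the heredoc maps directly onto piping the script text into the `bteq` binary; in Perl the analogue is `open(my $fh, '|-', 'bteq')` and printing the script to `$fh`. A hedged sketch of the same pipe idea in Python, with a placeholder `.logon` line and `cmd` parameterized so the pattern is visible without a Teradata install:

```python
import subprocess

def run_script(script_text, cmd="bteq"):
    """Pipe a BTEQ script into the given command, as the shell heredoc does."""
    result = subprocess.run([cmd], input=script_text,
                            capture_output=True, text=True)
    return result.stdout

# Hypothetical credentials; replace with your own .logon line.
script = """.logon tdpid/user,password;
SELECT DATE;
.quit;
"""
# output = run_script(script)
```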
2. Shell Programming and Scripting
Hi,
I want to write a shell script to compare two tables in Teradata. The tables are on different servers.
I want to connect to both servers in a single login in order to fetch and compare the data in one go.
Thanks (1 Reply)
Discussion started by: monika
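One hedged way to approach this: export each server's table to a flat file in its own session (for example with BTEQ `.EXPORT`), then the comparison reduces to a set difference on rows. A sketch of just the comparison step, assuming the two export files already exist:

```python
def compare_exports(path_a, path_b):
    """Return (rows only in A, rows only in B) for two exported flat files.
    Assumes one row per line in a comparable text format."""
    with open(path_a) as fa, open(path_b) as fb:
        rows_a, rows_b = set(fa), set(fb)
    return sorted(rows_a - rows_b), sorted(rows_b - rows_a)
```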
3. UNIX for Dummies Questions & Answers
Hi,
I am trying to run some SQL scripts on a UNIX server using BTEQ. When I try to create a log file, the file gets populated only up to the point where I log into BTEQ.
The log of the actual script run does not seem to be stored.
Would anyone know why this could be... (3 Replies)
Discussion started by: zsrinathz
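A log that stops at the logon can happen when output after the logon goes to a different stream than the one being redirected; that is a guess at this situation, not a confirmed diagnosis. A hedged sketch of capturing both stdout and stderr into one log file when driving bteq from a wrapper (`cmd` is parameterized so the pattern is testable without a Teradata install):

```python
import subprocess

def run_logged(script_text, log_path, cmd="bteq"):
    """Run the script, capturing stdout and stderr into a single log file
    so messages emitted after the logon are not lost on another stream."""
    with open(log_path, "w") as log:
        subprocess.run([cmd], input=script_text,
                       stdout=log, stderr=subprocess.STDOUT, text=True)
```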
4. Programming
I am trying to execute a SQL file from the script; the SQL file has the following code snippet, which throws the error given below
FOR C_FINELINE_LP AS CURSOR C_SLS FOR
SELECT * FROM WM_UTIL.FLT_DEP
WHERE LOAD_IND = 'N'
DO
.....
.....
....
END FOR;
FOR C_FLTSLS_STR_LP AS... (0 Replies)
Discussion started by: yschd
5. Programming
How do I connect from a C program to a Teradata database?
The C program is executed from a Unix shell script on AIX, and it runs some SQL against the Teradata database. (3 Replies)
Discussion started by: yschd
6. Shell Programming and Scripting
Hi,
I want a script that connects to Teradata and loads a file into a Teradata table.
Can you please help me out?
Thanks in advance. (1 Reply)
Discussion started by: victory
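A hedged sketch of what such a loader script could generate: a minimal BTEQ import script. The logon line, the two-column VARCHAR layout, and the comma delimiter are placeholders to adapt to the real file:

```python
def import_script(table, datafile):
    """Build a minimal BTEQ import script for a delimited flat file.
    All names and the column layout here are illustrative placeholders."""
    return f""".logon tdpid/user,password;
.import vartext ',' file={datafile};
.quiet on;
.repeat *
using (c1 varchar(100), c2 varchar(100))
insert into {table} values (:c1, :c2);
.quit;
"""
```

The generated text would then be piped into `bteq` exactly as a shell heredoc would be.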
7. Shell Programming and Scripting
I am using the code below to connect to Teradata and write the query result to a file. Now I want to use the same code for different tables; please tell me how to pass the table name as a parameter. I tried the call below, but it is not working.
bteq < /download/viv/dev/ops/Scripts/ter.sh FLTORGTKR_ORG_etc..
... (1 Reply)
Discussion started by: katakamvivek
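Redirecting a fixed file into bteq cannot carry an argument: stdin redirection ignores the trailing word. The usual fix is to substitute the table name into the script text before piping it in (in shell, a heredoc referencing `$1`). A sketch of the same idea in Python, with a placeholder logon and `cmd` parameterized for testing:

```python
import subprocess

def query_for(table):
    """The table name is interpolated into the script text before it is
    piped in; the logon line is a placeholder."""
    return f""".logon tdpid/user,password;
SELECT COUNT(*) FROM {table};
.quit;
"""

def main(argv, cmd="bteq"):
    table = argv[1]            # table name passed as the first argument
    result = subprocess.run([cmd], input=query_for(table),
                            capture_output=True, text=True)
    return result.stdout
```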
8. Shell Programming and Scripting
Hi, I am trying to use Teradata FastExport in ksh, but I am getting the error below:
temp1.ksh: line 7: syntax error at line 10: `newline' unexpected
below is my code:
#!/bin/ksh
LOGON_STR="TDDB/user,paswd;"
DATAFILE=/path/a.lst;
DEBUG=0
>$DATAFILE
fexp > /dev/null... (3 Replies)
Discussion started by: usrrenny
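That ksh message usually points at an unterminated or indented heredoc delimiter around the reported line. One way to sidestep heredoc quoting entirely is to assemble the FastExport script text in a program and pipe it to `fexp`; a hedged sketch, where the log table name and session count are placeholders from typical fexp examples:

```python
def fastexport_script(logon, query, outfile):
    """Assemble a minimal FastExport script as a plain string, avoiding
    shell heredoc pitfalls; names here are illustrative placeholders."""
    return f""".logtable scratch.fexp_log;
.logon {logon}
.begin export sessions 4;
.export outfile {outfile};
{query}
.end export;
.logoff;
"""
```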
9. Shell Programming and Scripting
Hi All,
I need to write a Unix shell script. To start with: I need to do some file checking on the Unix file system; then, based on file existence, I need to run different SQL in Teradata BTEQ. After that, depending on the results of the SQL, I need to do other shell scripting like moving the file, within the same... (4 Replies)
Discussion started by: Shilpi Gupta
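The flow described (check for the file, branch on its existence, run SQL, move the file) can be sketched as follows; the `run_sql` callback stands in for whatever wrapper submits a script to BTEQ, and the branch names are placeholders:

```python
import os
import shutil

def process(trigger_file, archive_dir, run_sql):
    """If the trigger file exists, run one branch of SQL and archive the
    file; otherwise run the other branch."""
    if os.path.exists(trigger_file):
        result = run_sql("present")                # SQL for the file-present case
        shutil.move(trigger_file, archive_dir)     # then move the file aside
    else:
        result = run_sql("absent")                 # SQL for the file-absent case
    return result
```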
10. Shell Programming and Scripting
I have the values below, for which the diff field gives an
"invalid time interval" error in Teradata.
It may be that the calculation stops working once the MINUTE(4) value is exceeded.
END_TS 2/2/2018 08:50:49.000000
START_TS 1/5/2018 17:30:02.000000
SLA_TIME 23:59:59.000000
select... (0 Replies)
Discussion started by: himanshupant
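For the timestamps shown, the gap is roughly 27.6 days, while an INTERVAL MINUTE(4) holds at most 9999 minutes (just under 7 days), which matches the overflow guess in the post; a wider type such as INTERVAL DAY(4) TO SECOND would be needed. A quick check of the arithmetic:

```python
from datetime import datetime

# The two timestamps from the post.
end_ts = datetime(2018, 2, 2, 8, 50, 49)
start_ts = datetime(2018, 1, 5, 17, 30, 2)

# Whole minutes between them: far beyond the 9999-minute ceiling of MINUTE(4).
minutes = int((end_ts - start_ts).total_seconds() // 60)
```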
LEARN ABOUT DEBIAN
uucpsend.ctl
UUCPSEND.CTL(5) Administration UUCPSEND.CTL(5)
NAME
uucpsend.ctl - list of sites to feed via uucpsend
DESCRIPTION
The file /etc/news/uucpsend.ctl specifies the default list of sites to be fed by uucpsend(8). The program is able to read site information
from other related configuration files as well.
Comments begin with a hash mark (``#'') and continue through the end of the line. Blank lines and comments are ignored. All other lines
should consist of six fields separated by a colon. Each line looks like
site:max_size:queue_size:header:compressor:args
The first field site is the name of the site as specified in the newsfeeds(5) file. This is also the name of the UUCP system connected to
this site.
The second field max_size describes the maximum size of all batches in kbytes that may be sent to this site. If this amount of batches is
reached, this site will not be batched in this run and a reason will be logged to the logfile. This test includes all UUCP jobs, not
only the ones sent to rnews (performing ``du -s'').
The third field queue_size specifies the maximum size in kbytes of one batch. This argument is passed directly to batcher(8).
The fourth field header defines the text that shall appear in the command header of every batch file. `#! ' is prefixed to each batch. Normally
you'll need cunbatch for compress, and gunbatch or zunbatch for gzip. This header is important since there is no standard way to handle
gzip'ed batches. Using this and the next argument you can also use any compressor you like, so uucpsend gives you a certain amount of
flexibility. If you don't want any compression, leave the field empty.
The fifth field compressor names a program that reads from stdin and writes to stdout. Normally it modifies the input stream by compressing it, such as compress(1) or gzip(1).
The sixth field args consists of additional arguments that are passed directly to uux when sending the batch.
One entry in the main configuration file is mandatory. There must exist a line containing the default values for all these variables. To
achieve this the pseudo site /default/ is used.
One default entry could look like this:
/default/:2000:200:cunbatch:compress:-r -n
This reflects a minimal setup. The maximum size that may be used by the UUCP spool directory is 2 MB, and each batch will be at most 200 kbytes.
The header of each batch will contain the string `cunbatch' and compress(1) is used to compress the batches. `-r -n' is passed to
uux(1), which means no notification will be sent if uux was successful, and uux won't start the uucico(8) program when spooling the file.
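The six-field format above can be parsed mechanically. A hedged sketch of a reader for this file, following the comment and blank-line handling from the DESCRIPTION; it assumes exactly six colon-separated fields per entry:

```python
def parse_uucpsend_ctl(text):
    """Parse uucpsend.ctl text into a dict of per-site settings."""
    entries = {}
    for line in text.splitlines():
        line = line.split('#', 1)[0].strip()   # comments run to end of line
        if not line:
            continue                           # skip blank lines and comments
        site, max_size, queue_size, header, compressor, args = line.split(':', 5)
        entries[site] = {
            'max_size': int(max_size),         # kbytes for all batches
            'queue_size': int(queue_size),     # kbytes per single batch
            'header': header,                  # e.g. cunbatch
            'compressor': compressor,          # e.g. compress
            'args': args,                      # extra uux arguments
        }
    return entries
```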
HISTORY
Written by Martin Schulze <joey@infodrom.org> for InterNetNews. Most of the work is derived from nncpsend.ctl(5) by Landon Curt Noll
<chongo@toad.com> for InterNetNews.
SEE ALSO
batcher(8), newsfeeds(5), uucpsend(8), uux(1).
Infodrom 21 November 2001 UUCPSEND.CTL(5)