10-12-2005
I had an issue like this.
I have to delete a few records from a file on the Unix box and then load the data into the database. Specifically, the records whose size is less than 250 characters. I don't think there is anything in SQL*Loader that offers this; I checked the WHEN clause etc.
What I am doing now is this: read the file and copy all the records with at least 250 characters to a temp file, then load the data from the temp file.
I thought a pipe would be useful in this case.
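A minimal sketch of that filtering step, assuming a newline-delimited data file (the file names `data.txt`, `keep.txt`, `ldr_pipe`, and `load.ctl` are placeholders, not names from the original post):

```shell
# Create a small sample file: one short record and one 260-character record.
printf 'short record\n' > data.txt
awk 'BEGIN { for (i = 0; i < 260; i++) printf "x"; print "" }' >> data.txt

# Keep only records of at least 250 characters (the ones to load);
# shorter records are silently dropped.
awk 'length($0) >= 250' data.txt > keep.txt

# To avoid the temp file entirely, the same filter can feed SQL*Loader
# through a named pipe instead (sqlldr invocation left as a comment,
# since the control file and credentials are site-specific):
#   mkfifo ldr_pipe
#   awk 'length($0) >= 250' data.txt > ldr_pipe &
#   sqlldr userid=... control=load.ctl data=ldr_pipe
```

The named-pipe variant works because SQL*Loader only reads the data file sequentially, so it cannot tell a FIFO from a regular file.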
Thanks
Ashok