How to print huge values in sqlplus variable in UNIX?

Hi,

I have a UNIX shell script that calls SQL*Plus as below.
Code:
# $sqlTxt holds the SQL passed in on the command line; the two bind variables
# capture a return code and any error text from the script
sqlTxt=$*
sqlReturn=`echo "set feedback off;
set heading off;
set term off;
set verify off;
set serveroutput on size unlimited
set lin 1000;
VARIABLE GV_return_code NUMBER;
VARIABLE GV_script_error varchar2(4000);
EXEC :GV_return_code := 0;
EXEC :GV_script_error := NULL;
WHENEVER SQLERROR EXIT ${FATAL}
$sqlTxt
/
print :GV_script_error;
exit :GV_return_code;
" | sqlplus -s ${connectStr}`

The GV_script_error bind variable is populated by another program, and the value of that character string is nearly 12000 characters long.

In SQL*Plus I cannot print more than 4000 characters from this bind variable, so when that huge value is passed into GV_script_error nothing is printed and the error below is raised, because the character size is greater than 4000:

"
Code:
PL/SQL: numeric or value error: character string buffer too small
"

So in this scenario I either need to split/merge the value across two variables, or find some other way to print all of the characters passed in GV_script_error, even when the value is longer than 4000 characters.
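One idea I am considering (a minimal sketch only, not yet tested) is to declare the bind variable as a CLOB instead of VARCHAR2(4000), since SQL*Plus can PRINT a CLOB in full when SET LONG is large enough. This assumes the SQL*Plus version in use supports CLOB bind variables and that the code which populates GV_script_error can assign into a CLOB; the variable names are just the ones from the script above.

Code:
sqlTxt=$*
sqlReturn=`echo "set feedback off;
set heading off;
set term off;
set verify off;
set long 20000
set longchunksize 20000
set lin 1000;
VARIABLE GV_return_code NUMBER;
VARIABLE GV_script_error CLOB;
EXEC :GV_return_code := 0;
WHENEVER SQLERROR EXIT ${FATAL}
$sqlTxt
/
print :GV_script_error;
exit :GV_return_code;
" | sqlplus -s ${connectStr}`

If a CLOB bind is not an option, the other route would be splitting the text across two VARCHAR2(4000) binds (say GV_script_error1 and GV_script_error2, hypothetical names) in the called code, printing both, and concatenating them afterwards in the shell.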

Can you please give me an idea for this?

Thanks in advance.

 

funidx(7)							SAORD Documentation							 funidx(7)

NAME
Funidx - Using Indexes to Filter Rows in a Table

SYNOPSIS
This document contains a summary of the user interface for filtering rows in binary tables with indexes.

DESCRIPTION
Funtools Table Filtering allows rows in a table to be selected based on the values of one or more columns in the row. Because the actual filter code is compiled on the fly, it is very efficient. However, for very large files (hundreds of Mb or larger), evaluating the filter expression on each row can take a long time. Therefore, funtools supports index files for columns, which are used automatically during filtering to reduce dramatically the number of row evaluations performed. The speed increase for indexed filtering can be an order of magnitude or more, depending on the size of the file.

The funindex program creates an index on one or more columns in a binary table. For example, to create an index for the column pi in the file huge.fits, use:

    funindex huge.fits pi

This will create an index named huge_pi.idx.

When a filter expression is initialized for row evaluation, funtools looks for an index file for each column in the filter expression. If found, and if the file modification date of the index file is later than that of the data file, then the index will be used to reduce the number of rows that are evaluated in the filter. When Spatial Region Filtering is part of the expression, the columns associated with the region are checked for index files.

If an index file is not available for a given column, then in general, all rows must be checked when that column is part of a filter expression. This is not true, however, when a non-indexed column is part of an AND expression. In this case, only the rows that pass the other part of the AND expression need to be checked. Thus, in some cases, filtering speed can increase significantly even if all columns are not indexed.

Also note that certain types of filter expression syntax cannot make use of indices. For example, calling functions with column names as arguments implies that all rows must be checked against the function value. Once again, however, if this function is part of an AND expression, then a significant improvement in speed still is possible if the other part of the AND expression is indexed.
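As a concrete sketch (assuming funtools is installed and that huge.fits is a local FITS binary table containing pi, x and y columns; the extra index file names are inferred from the naming convention described above), indexes are built once per column and then picked up automatically by later filters:

    # build an index for each column used in filters
    # (creates huge_pi.idx, and presumably huge_x.idx / huge_y.idx)
    funindex huge.fits pi
    funindex huge.fits x
    funindex huge.fits y

    # later filters use the indexes automatically, as long as the index
    # files are newer than huge.fits
    fundisp huge.fits'[pi=770&&cir 4000 4000 1]' "x y pi"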
For example, note below the dramatic speedup in searching a 1 Gb file using an AND filter, even when one of the columns (pha) has no index:

    time fundisp huge.fits'[idx_activate=0,idx_debug=1,pha=2348&&cir 4000 4000 1]' "x y pha"
             x           y        pha
    ---------- ----------- ----------
       3999.48     4000.47       2348
       3999.48     4000.47       2348
       3999.48     4000.47       2348
       3999.48     4000.47       2348
       3999.48     4000.47       2348
       3999.48     4000.47       2348
       3999.48     4000.47       2348
       3999.48     4000.47       2348
       3999.48     4000.47       2348
       3999.48     4000.47       2348
       3999.48     4000.47       2348
       3999.48     4000.47       2348
       3999.48     4000.47       2348
       3999.48     4000.47       2348
       3999.48     4000.47       2348
       3999.48     4000.47       2348
    42.36u 13.07s 6:42.89 13.7%

    time fundisp huge.fits'[idx_activate=1,idx_debug=1,pha=2348&&cir 4000 4000 1]' "x y pha"
             x           y        pha
    ---------- ----------- ----------
    idxeq: [INDEF]
    idxand sort: x[ROW 8037025:8070128] y[ROW 5757665:5792352]
    idxand(1): INDEF [IDX_OR_SORT]
    idxall(1): [IDX_OR_SORT]
       3999.48     4000.47       2348
       3999.48     4000.47       2348
       3999.48     4000.47       2348
       3999.48     4000.47       2348
       3999.48     4000.47       2348
       3999.48     4000.47       2348
       3999.48     4000.47       2348
       3999.48     4000.47       2348
       3999.48     4000.47       2348
       3999.48     4000.47       2348
       3999.48     4000.47       2348
       3999.48     4000.47       2348
       3999.48     4000.47       2348
       3999.48     4000.47       2348
       3999.48     4000.47       2348
       3999.48     4000.47       2348
    1.55u 0.37s 1:19.80 2.4%

When all columns are indexed, the increase in speed can be even more dramatic:

    time fundisp huge.fits'[idx_activate=0,idx_debug=1,pi=770&&cir 4000 4000 1]' "x y pi"
             x           y         pi
    ---------- ----------- ----------
       3999.48     4000.47        770
       3999.48     4000.47        770
       3999.48     4000.47        770
       3999.48     4000.47        770
       3999.48     4000.47        770
       3999.48     4000.47        770
       3999.48     4000.47        770
       3999.48     4000.47        770
       3999.48     4000.47        770
       3999.48     4000.47        770
       3999.48     4000.47        770
       3999.48     4000.47        770
       3999.48     4000.47        770
       3999.48     4000.47        770
       3999.48     4000.47        770
       3999.48     4000.47        770
    42.60u 12.63s 7:28.63 12.3%

    time fundisp huge.fits'[idx_activate=1,idx_debug=1,pi=770&&cir 4000 4000 1]' "x y pi"
             x           y         pi
    ---------- ----------- ----------
    idxeq: pi start=9473025,stop=9492240 => pi[ROW 9473025:9492240]
    idxand sort: x[ROW 8037025:8070128] y[ROW 5757665:5792352]
    idxor sort/merge: pi[ROW 9473025:9492240] [IDX_OR_SORT]
    idxmerge(5): [IDX_OR_SORT] pi[ROW]
    idxall(1): [IDX_OR_SORT]
       3999.48     4000.47        770
       3999.48     4000.47        770
       3999.48     4000.47        770
       3999.48     4000.47        770
       3999.48     4000.47        770
       3999.48     4000.47        770
       3999.48     4000.47        770
       3999.48     4000.47        770
       3999.48     4000.47        770
       3999.48     4000.47        770
       3999.48     4000.47        770
       3999.48     4000.47        770
       3999.48     4000.47        770
       3999.48     4000.47        770
       3999.48     4000.47        770
       3999.48     4000.47        770
    1.67u 0.30s 0:24.76 7.9%

The miracle of indexed filtering (and indeed, of any indexing) is the speed of the binary search on the index, which is of order log2(n) instead of n. (The funtools binary search method is taken from http://www.tbray.org/ongoing/When/200x/2003/03/22/Binary, to whom grateful acknowledgement is made.) This means that the larger the file, the better the performance. Conversely, it also means that for small files, using an index (and the overhead involved) can slow filtering down somewhat. Our tests indicate that on a file containing a few tens of thousands of rows, indexed filtering can be 10 to 20 percent slower than non-indexed filtering. Of course, your mileage will vary with conditions (disk access speed, amount of available memory, process load, etc.)

Any problem encountered during index processing will result in indexing being turned off, and replaced by filtering all rows.
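To put the log2(n) figure above in concrete terms (a rough back-of-the-envelope check, not output from funtools):

    # a binary search over ~10^9 indexed rows needs on the order of
    # log2(10^9) ~ 30 probes, versus ~10^9 row evaluations for a full scan
    awk 'BEGIN { n = 1e9; printf "%.1f\n", log(n)/log(2) }'    # prints 29.9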
You can turn filtering off manually by setting the idx_activate variable to 0 (in a filter expression) or the FILTER_IDX_ACTIVATE environment variable to 0 (in the global environment). Debugging output showing how the indexes are being processed can be displayed to stderr by setting the idx_debug variable to 1 (in a filter expression) or the FILTER_IDX_DEBUG environment variable to 1 (in the global environment).

Currently, indexed filtering only works with FITS binary tables and raw event files. It does not work with text files. This restriction might be removed in a future release.
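For example (a short sketch using the variables described above; huge.fits and the pi filter are just the running example from this page):

    # disable index use via the environment for a single invocation
    FILTER_IDX_ACTIVATE=0 fundisp huge.fits'[pi=770&&cir 4000 4000 1]' "x y pi"

    # or control it per filter expression, with index debugging written to stderr
    fundisp huge.fits'[idx_activate=0,idx_debug=1,pi=770&&cir 4000 4000 1]' "x y pi"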
SEE ALSO
See funtools(7) for a list of Funtools help pages

version 1.4.2                          January 2, 2008                                funidx(7)