The trick to SQL is thinking in sets and joins, not procedurally. That is a big leap for many procedural coders; I have seen too many loops that should have been joins.
Hi,
I'm developing a system which requires me to run a ksh script from within a CGI script. What syntax will I need to do this? I'm sure it's simple, but I can't find out how anywhere!
Thanks. (2 Replies)
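A CGI script is just a program the web server runs, so it can invoke ksh like any other command: print the HTTP header, a blank line, then the ksh script's output. A minimal sketch follows; the `report.ksh` script is a placeholder generated here only so the example is self-contained, and it falls back to sh on systems without ksh.

```shell
#!/bin/sh
# Sketch of a CGI script that runs a ksh script and returns its output.
# For the demo we generate a tiny ksh script in a temp dir; in a real
# setup you would point at your existing script instead.
tmpdir=$(mktemp -d)
cat > "$tmpdir/report.ksh" <<'EOF'
echo "hello from ksh"
EOF

# Prefer ksh, but fall back to sh on systems without it.
if command -v ksh >/dev/null 2>&1; then
    body=$(ksh "$tmpdir/report.ksh" 2>&1)
else
    body=$(sh "$tmpdir/report.ksh" 2>&1)
fi

# A CGI response is just a header, a blank line, then the body.
printf 'Content-Type: text/plain\n\n%s\n' "$body"
rm -rf "$tmpdir"
```

Capturing stderr with `2>&1` means any error from the ksh script shows up in the browser instead of vanishing into the server's error log.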
Hi guys,
I know how to run a single query using mysql embedded in a shell script as follows:
mysql -umyuser -pmypass --host myhost database <<SQL
${query};
quit
SQL
However, how would I be able to run several queries within the same connection?
The reason for this is I am creating... (3 Replies)
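Everything inside one here-document goes to a single client invocation, hence a single connection, so the answer is simply to put all the statements in one here-document. In the sketch below `MYSQL` defaults to `cat` as a stand-in so it runs anywhere; in practice you would set it to the real client, e.g. `MYSQL='mysql -umyuser -pmypass --host myhost database'`.

```shell
#!/bin/sh
# Several SQL statements in ONE client invocation share one connection:
# put them all inside a single here-document.
# MYSQL is a stand-in ("cat") so this sketch runs without a server.
MYSQL=${MYSQL:-cat}

query1="SELECT COUNT(*) FROM orders;"
query2="SELECT MAX(id) FROM customers;"

result=$($MYSQL <<SQL
${query1}
${query2}
SQL
)
echo "$result"
```

The queries and table names are invented for illustration; any number of statements can be listed between the heredoc markers.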
Hi
I am new to this scripting process and would like to know how I can write a ksh script that will call other ksh scripts and write the output to a file and/or email it.
For example
-------
Script ABC
-------
a.ksh
b.ksh
c.ksh
I need to call all three scripts, execute them, and... (2 Replies)
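A driver script for the layout above can loop over the three names, append each one's output to a single log, and optionally mail the log afterwards. This is a sketch: the script names are the poster's examples, and the `mailx` line is commented out since mail setup varies by system.

```shell
#!/bin/sh
# "Script ABC" driver (sketch): run a.ksh, b.ksh, c.ksh in turn,
# appending all output (stdout and stderr) to one log file.
LOG=$(mktemp)

for s in a.ksh b.ksh c.ksh; do
    if [ -x "$s" ]; then
        echo "== running $s ==" >> "$LOG"
        ./"$s" >> "$LOG" 2>&1
    else
        echo "== skipping $s (not found) ==" >> "$LOG"
    fi
done

# Optionally mail the log (mailx is common but not universal):
# mailx -s "ABC run results" you@example.com < "$LOG"
cat "$LOG"
```

The `-x` test keeps a missing or non-executable script from silently aborting the whole run.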
Hi, I have a script called test.sh; its content is ls >> crontest.txt.
If I run it manually it produces output, but if I schedule it in crontab it produces no output.
crontab entry:
02 * * * * /sms5/SMSHOME/eds_sh/test.sh >> /sms5/SMSHOME/eds_sh/testfile/logfile 2>&1
I am using ksh. Is there... (2 Replies)
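The usual culprit: cron runs jobs with a minimal environment (PATH is often just /usr/bin:/bin) and with $HOME as the working directory, so `crontest.txt` is being written somewhere unexpected, not nowhere. Setting PATH explicitly and anchoring the output to an absolute directory fixes it. In this sketch `WORKDIR` stands in for the poster's /sms5/SMSHOME/eds_sh and defaults to a temp dir so the example runs anywhere:

```shell
#!/bin/sh
# Cron-safe version of test.sh: set PATH explicitly and cd to an
# absolute directory before writing any relative-path output.
PATH=/usr/bin:/bin:/usr/sbin:/sbin
export PATH

WORKDIR=${WORKDIR:-$(mktemp -d)}   # stand-in for /sms5/SMSHOME/eds_sh
cd "$WORKDIR" || exit 1

ls -a >> crontest.txt              # relative name now anchored to WORKDIR
echo "wrote $WORKDIR/crontest.txt"
```

With the original crontab entry, checking the redirected logfile and looking for crontest.txt in $HOME are good first diagnostic steps.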
Hi,
I have a script where I make a sqlplus connection. In the script I have multiple SQL queries within that sqlplus connection. I want the result of each query stored in a shell variable declared earlier. I don't want to use procedures. Is there any other way?
Thanks in advance..
Cheers (6 Replies)
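One common pattern: keep the single sqlplus session, but make each query print one tagged line (e.g. `SELECT 'COUNT1=' || COUNT(*) FROM t1;`), capture the whole session's output once, and parse the tags into variables. The sketch below uses a fake function in place of sqlplus so it runs without Oracle; in practice the captured command would be something like `sqlplus -s user/pass@db <<EOF ... EOF`.

```shell
#!/bin/sh
# One session, several queries, each result in its own shell variable.
# fake_sqlplus stands in for the real sqlplus invocation and emits the
# kind of tagged lines the queries above would produce.
fake_sqlplus() {
    echo "COUNT1=42"
    echo "COUNT2=7"
}

out=$(fake_sqlplus)

# Parse each tagged line into its own variable.
count1=$(echo "$out" | sed -n 's/^COUNT1=//p')
count2=$(echo "$out" | sed -n 's/^COUNT2=//p')
echo "count1=$count1 count2=$count2"
```

Tagging the output makes the parsing immune to sqlplus banners and blank lines, which is why it is usually preferred over counting lines.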
I ran the below in ksh:
nohup <script> &
It is running in the background.
Now how do I see whether the above command succeeded?
I also need to bring the command to the foreground and view the run details.
Please advise how to do that... (1 Reply)
Hi programmers, say I have 4 files : file1.py,file2.py,file3.py,file4.py
How do I, in a Korn shell, create one file, run_all, that sequentially calls file1 through file4, but only if each completes without errors?
Something like:
#!/usr/bin/ksh
file1.py
/*......????*/
... (7 Replies)
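A loop with an explicit failure check does it: each script's exit status decides whether the next one runs. The sketch below generates four trivial .py files in a temp dir so it is self-contained; for the real run_all, drop that part and loop over the actual files. The python3/python fallback is an assumption about what interpreter is installed.

```shell
#!/bin/sh
# run_all (sketch): run file1.py .. file4.py in order, stopping at the
# first failure. Demo files are generated here for self-containment.
tmpdir=$(mktemp -d)
for i in 1 2 3 4; do
    printf 'print("file%s ok")\n' "$i" > "$tmpdir/file$i.py"
done

PY=$(command -v python3 || command -v python)

ran=0
for f in "$tmpdir"/file1.py "$tmpdir"/file2.py \
         "$tmpdir"/file3.py "$tmpdir"/file4.py; do
    # Stop the sequence as soon as one script exits non-zero.
    "$PY" "$f" || { echo "aborting: $f failed" >&2; break; }
    ran=$((ran + 1))
done
echo "ran $ran scripts"
rm -rf "$tmpdir"
```

An equivalent one-liner for four known names is `file1.py && file2.py && file3.py && file4.py`, but the loop scales better and lets you log which script failed.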
How do I store the count returned by each query in variables inside a file in a shell script?
My output:
filename
-------
variable1=result from 1st query
variable2=result from 2nd query
.
.
.
. (3 Replies)
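One way to get that output format is to run each query, then append one `variableN=result` line per query to the file. The `run_query` function below is a labeled stand-in that just echoes a number; in practice it would pipe the query text to your SQL client and capture the count.

```shell
#!/bin/sh
# Write one "variableN=result" line per query into a file.
run_query() {
    echo "$1"        # stand-in: a real version would invoke the SQL client
}

outfile=$(mktemp)
i=1
for q in 10 25 3; do                       # pretend query results
    echo "variable$i=$(run_query "$q")" >> "$outfile"
    i=$((i + 1))
done
cat "$outfile"
```

A nice side effect of the `name=value` format: the file can later be sourced with `. ./filename` to load every variable back into a shell session.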
How can I run SQL queries from a UNIX shell script and retrieve the data into text files on UNIX? :confused: (1 Reply)
Discussion started by: 24ajay
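The short answer is output redirection: the SQL client writes rows to stdout, and `>` sends stdout to a text file. Here `SQLCLIENT` defaults to `cat` as a stand-in so the sketch runs anywhere; in practice it would be something like `mysql -uuser -ppass db` or a `sqlplus -s` here-document, and the query is invented for illustration.

```shell
#!/bin/sh
# Run a query from a shell script and save the rows to a text file.
SQLCLIENT=${SQLCLIENT:-cat}      # stand-in for the real SQL client
query="SELECT name, salary FROM emp;"

outfile=$(mktemp)
echo "$query" | $SQLCLIENT > "$outfile"
echo "saved to $outfile"
```

With sqlplus specifically, the `SPOOL filename` / `SPOOL OFF` commands inside the session are an alternative to shell redirection.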
ANALYZE(7) SQL Commands ANALYZE(7)
NAME
ANALYZE - collect statistics about a database
SYNOPSIS
ANALYZE [ VERBOSE ] [ table [ (column [, ...] ) ] ]
INPUTS
VERBOSE
Enables display of progress messages.
table The name (possibly schema-qualified) of a specific table to analyze. Defaults to all tables in the current database.
column The name of a specific column to analyze. Defaults to all columns.
OUTPUTS
ANALYZE
The command is complete.
DESCRIPTION
ANALYZE collects statistics about the contents of PostgreSQL tables, and stores the results in the system table pg_statistic. Subsequently,
the query planner uses the statistics to help determine the most efficient execution plans for queries.
With no parameter, ANALYZE examines every table in the current database. With a parameter, ANALYZE examines only that table. It is further
possible to give a list of column names, in which case only the statistics for those columns are updated.
NOTES
It is a good idea to run ANALYZE periodically, or just after making major changes in the contents of a table. Accurate statistics will help
the planner to choose the most appropriate query plan, and thereby improve the speed of query processing. A common strategy is to run
VACUUM [vacuum(7)] and ANALYZE once a day during a low-usage time of day.
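That daily low-usage run is typically scheduled from cron. A sketch of such a crontab entry, assuming PostgreSQL's vacuumdb wrapper (which can issue VACUUM and ANALYZE from the command line) and a placeholder database name:

```shell
# crontab entry (sketch): VACUUM and ANALYZE nightly at 03:00, a
# low-usage hour. "mydb" is a placeholder database name.
0 3 * * * vacuumdb --analyze mydb >> /var/log/vacuumdb.log 2>&1
```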
Unlike VACUUM FULL, ANALYZE requires only a read lock on the target table, so it can run in parallel with other activity on the table.
For large tables, ANALYZE takes a random sample of the table contents, rather than examining every row. This allows even very large tables
to be analyzed in a small amount of time. Note however that the statistics are only approximate, and will change slightly each time ANALYZE
is run, even if the actual table contents did not change. This may result in small changes in the planner's estimated costs shown by
EXPLAIN.
The collected statistics usually include a list of some of the most common values in each column and a histogram showing the approximate
data distribution in each column. One or both of these may be omitted if ANALYZE deems them uninteresting (for example, in a unique-key
column, there are no common values) or if the column data type does not support the appropriate operators. There is more information about
the statistics in the User's Guide.
The extent of analysis can be controlled by adjusting the default_statistics_target parameter variable, or on a column-by-column basis by
setting the per-column statistics target with ALTER TABLE ALTER COLUMN SET STATISTICS (see ALTER TABLE [alter_table(7)]). The target value
sets the maximum number of entries in the most-common-value list and the maximum number of bins in the histogram. The default target value
is 10, but this can be adjusted up or down to trade off accuracy of planner estimates against the time taken for ANALYZE and the amount of
space occupied in pg_statistic. In particular, setting the statistics target to zero disables collection of statistics for that column. It
may be useful to do that for columns that are never used as part of the WHERE, GROUP BY, or ORDER BY clauses of queries, since the planner
will have no use for statistics on such columns.
The largest statistics target among the columns being analyzed determines the number of table rows sampled to prepare the statistics.
Increasing the target causes a proportional increase in the time and space needed to do ANALYZE.
COMPATIBILITY
SQL92
There is no ANALYZE statement in SQL92.
SQL - Language Statements 2002-11-22 ANALYZE(7)