Need help with a sh script to spool directory and modify the output (Oracle cnt file)
Posted in Shell Programming and Scripting by exm on Wednesday, 24 June 2009, 10:24 AM (Post 302328444)

Hi,

I'm writing a shell script that dynamically generates a CREATE CONTROLFILE script for recreating an Oracle database's control file. To do that, I need to read a cold-backup file system, list the datafiles it contains, and reformat that list.

Let's say, for argument's sake, the directory is /ebsprod_c/oradata and it contains:
/ebsprod_c/oradata/ctxd01.dbf
/ebsprod_c/oradata/discoverer01.dbf
/ebsprod_c/oradata/log01.dbf
/ebsprod_c/oradata/log02.dbf
/ebsprod_c/oradata/undo01.dbf
/ebsprod_c/oradata/undo02.dbf
/ebsprod_c/oradata/undo03.dbf
/ebsprod_c/oradata/undo04.dbf
/ebsprod_c/oradata/undo05.dbf

What I need to do is:
a. Exclude the log0* files.
b. Wrap each remaining path in apostrophes and append a comma, except on the last line, which should get only the apostrophes, so it looks something like this:
'/ebsprod_c/oradata/ctxd01.dbf',
'/ebsprod_c/oradata/discoverer01.dbf',
'/ebsprod_c/oradata/undo01.dbf',
'/ebsprod_c/oradata/undo02.dbf',
'/ebsprod_c/oradata/undo03.dbf',
'/ebsprod_c/oradata/undo04.dbf',
'/ebsprod_c/oradata/undo05.dbf'

I figured out how to spool the file names and add the apostrophes and the comma:
# append each datafile path, quoted and followed by a comma
for filename in /ebsprod_c/oradata/*.dbf
do
    echo "'$filename'," >> datafiles.tmp
done

But I can't figure out how to leave the comma off the last line, or how to exclude the log0* files.

Any help would be appreciated!

Thanks!
Mark
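
For reference, here is a minimal sketch of one way to handle both points, assuming a POSIX shell and a standard sed; DATADIR and OUT are illustrative names, and the paths are the example ones from this post.

#!/bin/sh
# Build a quoted, comma-separated datafile list, excluding the redo log files.
DATADIR=/ebsprod_c/oradata          # example directory from the post
OUT=datafiles.tmp

: > "$OUT"                          # start with an empty list on each run

for f in "$DATADIR"/*.dbf
do
    [ -e "$f" ] || continue                 # glob matched nothing
    case $f in
        "$DATADIR"/log0*) continue ;;       # skip log01.dbf, log02.dbf, ...
    esac
    echo "'$f'," >> "$OUT"
done

# Drop the comma from the last line only.
sed '$s/,$//' "$OUT" > "$OUT.new" && mv "$OUT.new" "$OUT"

Handling the trailing comma as a final sed pass keeps the loop simple: it never needs to know which file is last, and the same filter also works unchanged if the list is built some other way.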
 
