Dump Files
UNIX for Dummies Questions & Answers

Posted by Newer, 10-04-2012

Hi, I'd like to know the command line to create a dump file of a particular table, or of a whole database. I'm working in a Red Hat environment.
Any idea would be helpful.
Thanks.
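Since the post doesn't say which database engine is in use, here is a minimal sketch for the two most common ones on Red Hat; mydb, mytable, and dbuser are placeholder names, not taken from the post:

    # MySQL/MariaDB: dump a whole database, or a single table of it
    mysqldump -u dbuser -p mydb          > mydb.sql
    mysqldump -u dbuser -p mydb mytable  > mytable.sql

    # PostgreSQL equivalent with pg_dump
    pg_dump -U dbuser mydb               > mydb.sql
    pg_dump -U dbuser -t mytable mydb    > mytable.sql

The resulting .sql files are plain-text dumps that can be reloaded with mysql or psql respectively.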
 

7 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

help, what is the difference between core dump and panic dump?

help, what is the difference between core dump and panic dump? (1 Reply)
Discussion started by: aileen

2. UNIX for Dummies Questions & Answers

tarring and gzipping dump files

Say I want to transfer several dump files from a Solaris machine onto a Win2k machine for storage. It was suggested that I tar and gzip the dump files before doing so. Is it completely necessary to use both of these utilities, or is it sufficient to compress multiple dump files into one gzip... (4 Replies)
Discussion started by: PSC
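For reference, the two utilities do different jobs: gzip compresses each file individually and cannot combine files, while tar bundles them into a single archive first. A sketch with placeholder file names:

    # One archive containing all dumps, then compressed (single file to move)
    tar -cf dumps.tar dump1 dump2 dump3
    gzip dumps.tar                 # -> dumps.tar.gz

    # gzip alone: each dump becomes its own .gz file
    gzip dump1 dump2 dump3         # -> dump1.gz dump2.gz dump3.gz

So tar is not strictly necessary for compression, only for producing a single file to transfer.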

3. HP-UX

hp dump files

Does anyone know where HP-UX dump files get written to? When I do a system reset from the CM by issuing a TC, it will always do a system dump, but I'm not sure where the dump is located. (2 Replies)
Discussion started by: csaunders
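For what it's worth, on HP-UX the savecrash facility conventionally copies the dump from the dump device into /var/adm/crash at the next boot; that path is the usual default, not guaranteed for every configuration:

    # check for saved crash dumps after the TC reset
    ls -l /var/adm/crash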

4. UNIX for Dummies Questions & Answers

identifying core dump files.

I have come into a business environment problem, and it had been 10+ years since the last time I did any UNIX admin work. A long time ago some mainframe person created an app that talked to a mainframe on UNIX and wrote a C program with "core" in the file name to indicate that the file was the... (2 Replies)
Discussion started by: pcooke2002
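A quick way to separate genuine core dumps from application files that merely have "core" in their names is file(1), which recognizes real ELF core files:

    # a real dump reports as an ELF core file and names the faulting program
    for f in *core*; do
        file "$f"
    done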

5. Red Hat

Process does not dump any core files when crashed even if coredumpsize is unlimited

Hello, I'm using Red Hat and trying to debug my application; it crashes, and in strace I can also see it has problems, but I can't find any core dump. I configured all the limits (I'm using .cshrc) and they look like this: cputime unlimited, filesize unlimited, datasize unlimited... (8 Replies)
Discussion started by: umen
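Things commonly checked in this situation, as a sketch: the .cshrc limits only apply to shells that actually source it, and on Red Hat the kernel's core_pattern decides where (or whether) the core file appears:

    # csh/tcsh (matches the poster's .cshrc):
    limit coredumpsize unlimited

    # sh/bash equivalent, if the application is launched from another shell:
    ulimit -c unlimited

    # see where the kernel writes cores; a piped handler or unusual pattern
    # can make the file land somewhere unexpected
    sysctl kernel.core_pattern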

6. Shell Programming and Scripting

Split large zone file dump into multiple files

I have a large zone file dump that consists of:

    ; DNS record for the adomain.com domain
    data1 data2 data3 data4 data5
    CRLF CRLF CRLF
    ; DNS record for the anotherdomain.com domain
    data1 data2 data3 data4 data5 data6
    CRLF

(7 Replies)
Discussion started by: Bluemerlin
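Given that each record starts with a "; DNS record for ..." comment line, an awk sketch can start a new output file at every such line; zone.dump and the record_NNN.txt naming are assumptions, not from the post:

    awk '/^; DNS record for/ { if (out) close(out)
                               out = sprintf("record_%03d.txt", ++n) }
         out { print > out }' zone.dump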

7. Shell Programming and Scripting

Multiple .gz decompress files and dump other directory

I have the code below:

    for i in *.gz; do gzip -dc $i /home/vizion/Desktop/gzipfile/; done

and one more:

    for i in *.gz; do gunzip -dc $i /home/vizion/Desktop/gzipfile/; done

Both get the error: "gunzip: /home/vizion/Desktop/gzipfile/ is a directory -- ignored". I have the requirement below in... (3 Replies)
Discussion started by: Chenchireddy
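The error occurs because gzip -dc and gunzip -dc write to standard output, so the directory gets treated as a second input file. The target directory belongs in a redirection instead; a corrected sketch, keeping the poster's path:

    for i in *.gz; do
        # decompress to stdout, redirect into the target directory,
        # stripping the .gz suffix for the output file name
        gunzip -c "$i" > /home/vizion/Desktop/gzipfile/"${i%.gz}"
    done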
DB5.1_DUMP(1)						      General Commands Manual						     DB5.1_DUMP(1)

NAME
       db5.1_dump - Write database to flat-text format

SYNOPSIS
       db5.1_dump [-klNpRrV] [-d ahr] [-f output] [-h home] [-P password] [-s database] file

DESCRIPTION
       The db5.1_dump utility reads the database file file and writes it to the standard output using a portable
       flat-text format understood by the db5.1_load utility. The file argument must be a file produced using the
       Berkeley DB library functions.

OPTIONS
       -d     Dump the specified database in a format helpful for debugging the Berkeley DB library routines.

              a      Display all information.
              h      Display only page headers.
              r      Do not display the free-list or pages on the free list. This mode is used by the recovery
                     tests.

              The output format of the -d option is not standard and may change, without notice, between releases
              of the Berkeley DB library.

       -f     Write to the specified file instead of to the standard output.

       -h     Specify a home directory for the database environment; by default, the current working directory is
              used.

       -k     Dump record numbers from Queue and Recno databases as keys.

       -l     List the databases stored in the file.

       -N     Do not acquire shared region mutexes while running. Other problems, such as potentially fatal errors
              in Berkeley DB, will be ignored as well. This option is intended only for debugging errors, and
              should not be used under any other circumstances.

       -P     Specify an environment password. Although Berkeley DB utilities overwrite password strings as soon
              as possible, be aware there may be a window of vulnerability on systems where unprivileged users can
              see command-line arguments or where utilities are not able to overwrite the memory containing the
              command-line arguments.

       -p     If characters in either the key or data items are printing characters (as defined by isprint(3)),
              use printing characters in file to represent them. This option permits users to use standard text
              editors and tools to modify the contents of databases. Note: different systems may have different
              notions about what characters are considered printing characters, and databases dumped in this
              manner may be less portable to external systems.

       -R     Aggressively salvage data from a possibly corrupt file. The -R flag differs from the -r option in
              that it will return all possible data from the file at the risk of also returning already deleted or
              otherwise nonsensical items. Data dumped in this fashion will almost certainly have to be edited by
              hand or other means before the data is ready for reload into another database.

       -r     Salvage data from a possibly corrupt file. When used on an uncorrupted database, this option should
              return equivalent data to a normal dump, but most likely in a different order.

       -s     Specify a single database to dump. If no database is specified, all databases in the database file
              are dumped.

       -V     Write the library version number to the standard output, and exit.

       Dumping and reloading Hash databases that use user-defined hash functions will result in new databases that
       use the default hash function. Although using the default hash function may not be optimal for the new
       database, it will continue to work correctly.

       Dumping and reloading Btree databases that use user-defined prefix or comparison functions will result in
       new databases that use the default prefix and comparison functions. In this case, it is quite likely that
       the database will be damaged beyond repair, permitting neither record storage nor retrieval. The only
       available workaround for either case is to modify the sources for the db5.1_load utility to load the
       database using the correct hash, prefix, and comparison functions.

       The db5.1_dump utility output format is documented in the Dump Output Formats section of the Berkeley DB
       Reference Guide.

       The db5.1_dump utility may be used with a Berkeley DB environment (as described for the -h option, the
       environment variable DB_HOME, or because the utility was run in a directory containing a Berkeley DB
       environment). In order to avoid environment corruption when using a Berkeley DB environment, db5.1_dump
       should always be given the chance to detach from the environment and exit gracefully. To cause db5.1_dump
       to release all environment resources and exit cleanly, send it an interrupt signal (SIGINT).

       Even when using a Berkeley DB database environment, the db5.1_dump utility does not use any kind of
       database locking if it is invoked with the -d, -R, or -r arguments. If used with one of these arguments,
       the db5.1_dump utility may only be safely run on databases that are not being modified by any other
       process; otherwise, the output may be corrupt.

       The db5.1_dump utility exits 0 on success, and >0 if an error occurs.

ENVIRONMENT
       DB_HOME
              If the -h option is not specified and the environment variable DB_HOME is set, it is used as the
              path of the database home, as described in DB_ENV->open.

AUTHORS
       Sleepycat Software, Inc.

       This manual page was created based on the HTML documentation for db_dump from Sleepycat, by Thijs Kinkhorst
       <thijs@kinkhorst.com>, for the Debian system (but may be used by others).

                                                 28 January 2005                                    DB5.1_DUMP(1)
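As a quick illustration of the round trip the man page describes (file names are illustrative only):

    # dump a Berkeley DB file in printable format, then reload it
    db5.1_dump -p -f mydb.dump mydb.db
    db5.1_load -f mydb.dump restored.db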