Full Discussion: Copy large dump file
Posted by era in Shell Programming and Scripting on April 7, 2008 at 11:15 AM
If you are not already using scp's compression option (-C), enable it.

Maybe look at rsync as well. It runs over ssh as the transport, but adds enough intelligence to avoid retransferring data the destination already has, and to resume interrupted transfers.
 

dput.cf(5)                                                      File Formats Manual                                                     dput.cf(5)

NAME
       dput.cf - Debian package upload tool configuration file

DESCRIPTION
       This manpage gives a brief overview of dput's configuration file and the available options in it. dput is a tool to upload
       Debian packages to the archive.

FORMAT
       dput.cf consists of groups of configuration options, one for each host to which you want to be able to upload packages.
       Hosts are defined using an identifier header with a short name for the host, enclosed in square brackets. Note that if
       multiple headers with the same name are encountered, only the group following the last such header is considered. This is
       done to avoid confusion when overriding a global configuration file with a user-specific one.

       There is a special identifier, [DEFAULT], which holds default parameters for all hosts. The defaults can be overridden by
       redefining them in each host section. The available parameters are listed below:

       fqdn   The fully qualified domain name that will be used (can be specified as host:port for HTTP, HTTPS and FTP).

       login  Your login on the machine named above. A single asterisk (*) will cause the scp and rsync uploaders not to supply a
              login name when calling ssh, scp, and rsync.

       incoming
              The directory that you should upload the files to.

       method The method that you want to use for uploading the files. Currently, dput accepts the following values for method:

              ftp    the package will be uploaded via ftp, either anonymously or using a login/password. Note that ftp is
                     unencrypted, so you should not use password authentication with it.

              http and https
                     the package will be uploaded via http or https using the PUT method as specified in WebDAV. The upload
                     method will prompt for a password if necessary.

              scp    the package will be uploaded using ssh's scp. This transfers files over a secure ssh channel and needs an
                     account on the upload machine.

              rsync  the package will be uploaded using rsync over ssh. This is similar to scp, but can save some bandwidth if
                     the destination file already exists on the upload server. It also needs a login on the remote machine, as it
                     uses ssh.

              local  the package will be "uploaded" locally using /usr/bin/install. This transfers files to a local incoming
                     directory and needs appropriate permissions set on that directory.

       hash   The hash algorithm that should be used in calculating the checksum of the files before uploading them. Currently,
              dput accepts the following values for hash:

              md5    use the md5 algorithm for the calculation

              sha    use the sha algorithm for the calculation

       allow_unsigned_uploads
              Defines whether you are allowed to upload files without a GnuPG signature to this host.

       allow_dcut
              Defines whether you are allowed to upload a dcut changes file to the queue to remove or move files.

       distributions
              A comma-separated list of distributions that this host accepts, used to guess the host to use when none is given on
              the command line.

       allowed_distributions
              A regular expression (of Python re module syntax) that the distribution field must match, or dput will refuse the
              upload.

       delayed
              A numeric default parameter for delayed uploads (uploads to this queue will be delayed the specified number of
              days). Defaults to the empty string, meaning no delay. This only works with upload queues that support delayed
              uploads.

       run_lintian
              Defines whether lintian should be run before the package is uploaded. If the package is not lintian clean, the
              upload will not happen.

       run_dinstall
              Defines whether dinstall -n should be run after the package has been uploaded. This is an easy way to test whether
              your package would be installed into the archive.

       check_version
              Defines whether dput should check that the user has installed the package on his system for testing before putting
              it into the archive. If the user has not installed and tested it, dput will reject the upload.

       passive_ftp
              Defines whether dput should use passive or active ftp for uploading a package to one of the upload queues. By
              default, dput uses passive ftp connections. If you need active ftp connections, set passive_ftp to 0.

       progress_indicator
              This integer option defines whether dput should display a progress indicator for the upload (currently implemented
              for ftp only). Supported values: 0 (default) - no progress, 1 - rotating progress indicator, 2 - kilobyte counter.

       scp_compress
              Defines whether the scp upload to the host will be compressed. This option is only used for the scp upload method;
              compression has been found to decrease upload time on slow links and to increase upload time on faster links.

       ssh_config_options
              The arguments of this option should be ssh configuration options in the style documented in ssh_config(5). They are
              passed to all automatic invocations of ssh and scp by dput. Note that you can define multiline (dput) configuration
              options by indenting the continuation line with whitespace (similar to RFC822 header continuations).

       post_upload_command
              A command to be run by dput after a successful upload.

       pre_upload_command
              A command to be run by dput before an upload happens.

       default_host_main
              The default host for packages that are allowed to be uploaded to the main archive. This variable is used when
              guessing the host to upload to.

BUGS
       Please send bug reports to the author.

FILES
       /etc/dput.cf    global dput configuration file
       ~/.dput.cf      per-user dput configuration file

AUTHOR
       Christian Kurz. Updated by Thomas Viehmann <tv@beamnet.de>. Many other people have contributed to this code. See the
       Thanks file.

SEE ALSO
       dput(1), /usr/share/doc/dput

COMMENTS
       The author appreciates comments and suggestions from you, if any.

April 8, 2001                                                                                                          dput.cf(5)
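To make the format concrete, here is a minimal sketch of a ~/.dput.cf using the options described above. The section name, fqdn, login, and incoming directory are all placeholders, not real archive endpoints:

```ini
# Hypothetical ~/.dput.cf sketch; host name, fqdn, and paths are placeholders.
[DEFAULT]
login = uploader
hash = md5
allow_unsigned_uploads = 0

[myarchive]
fqdn = upload.example.org
method = scp
incoming = /srv/incoming
scp_compress = 1
run_lintian = 1
```

With this in place, `dput myarchive package.changes` would upload over scp with compression, after a lintian check.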