Full Discussion: Copy large dump file
Shell Programming and Scripting
Post 302182698 by in2nix4life, Monday 7 April 2008, 11:09:25 AM
Not sure what OS you're using, but you could try the 'split' command to break the file into smaller chunks, then use the 'cat' command on the other side to join them back together:

For example:
split -b 1m largefile FILE_

then to rejoin:

cat FILE_* > largefile
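
If you want to be sure nothing got mangled in transit, compare a checksum before and after. A rough sketch of the whole round trip (the 100m chunk size and the name 'largefile' are just placeholders; md5sum is assumed to be installed -- if your OS doesn't ship it, the POSIX 'cksum' command can be used the same way, comparing its output by hand):

# on the source machine: record a checksum, then split into 100 MB pieces
md5sum largefile > largefile.md5
split -b 100m largefile FILE_

# copy FILE_* and largefile.md5 to the destination, then rejoin and verify there
cat FILE_* > largefile
md5sum -c largefile.md5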

Hope this helps.
 

SPLIT(1)                  BSD General Commands Manual                  SPLIT(1)

NAME
     split -- split a file into pieces

SYNOPSIS
     split [-a suffix_length] [-b byte_count[k|m]] [-l line_count]
           [-p pattern] [file [name]]

DESCRIPTION
     The split utility reads the given file and breaks it up into files of
     1000 lines each.  If file is a single dash ('-') or absent, split reads
     from the standard input.

     The options are as follows:

     -a suffix_length
             Use suffix_length letters to form the suffix of the file name.

     -b byte_count[k|m]
             Create smaller files byte_count bytes in length.  If ``k'' is
             appended to the number, the file is split into byte_count
             kilobyte pieces.  If ``m'' is appended to the number, the file
             is split into byte_count megabyte pieces.

     -l line_count
             Create smaller files line_count lines in length.

     -p pattern
             The file is split whenever an input line matches pattern, which
             is interpreted as an extended regular expression.  The matching
             line will be the first line of the next output file.  This
             option is incompatible with the -b and -l options.

     If additional arguments are specified, the first is used as the name of
     the input file which is to be split.  If a second additional argument is
     specified, it is used as a prefix for the names of the files into which
     the file is split.  In this case, each file into which the file is split
     is named by the prefix followed by a lexically ordered suffix using
     suffix_length characters in the range ``a-z''.  If -a is not specified,
     two letters are used as the suffix.

     If the name argument is not specified, the file is split into lexically
     ordered files named with the prefix ``x'' and with suffixes as above.

ENVIRONMENT
     The LANG, LC_ALL, LC_CTYPE and LC_COLLATE environment variables affect
     the execution of split as described in environ(7).

EXIT STATUS
     The split utility exits 0 on success, and >0 if an error occurs.

SEE ALSO
     csplit(1), re_format(7)

STANDARDS
     The split utility conforms to IEEE Std 1003.1-2001 (``POSIX.1'').

HISTORY
     A split command appeared in Version 3 AT&T UNIX.

BUGS
     The maximum line length for matching patterns is 65536.

BSD                             August 21, 2005                             BSD
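
A couple of quick illustrations of the naming scheme described above (the file names and sizes here are only placeholders, not taken from the thread):

# split a dump into 500 MB pieces named dump_aa, dump_ab, ... (exp_full.dmp is a placeholder name)
split -b 500m exp_full.dmp dump_

# with no prefix argument, pieces are named xaa, xab, ... with 1000 lines each by default
split -l 1000 somefile.txt

# lexical ordering of the suffixes means a simple glob reassembles the pieces in order
cat dump_* > exp_full.dmp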