Full Discussion: File size exceeding 2GB
UNIX for Dummies Questions & Answers, Post 4659 by mod, Monday 30th of July 2001, 06:53 AM
If changing the filesystem to "largefiles" doesn't work, then try upgrading your gzip program to a newer version ... in particular, the gzip shipped with HP-UX 10.20 doesn't support files >2.0GB ... you can download a newer version (and also a newer tar) from

http://hpux.cs.utah.edu/
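For reference, here is a minimal sketch of both checks on HP-UX, assuming a VxFS filesystem mounted at /data (the mount point is a placeholder, adjust it to your own):

# fsadm -F vxfs /data                 # reports largefiles or nolargefiles for the mounted filesystem
# fsadm -F vxfs -o largefiles /data   # enables large files on the mounted filesystem (needs root)
$ which gzip tar                      # confirm which binaries you actually run
$ gzip --version                      # newer GNU gzip builds with large-file support handle files > 2 GB
$ tar --version                       # only GNU tar understands --version; HP-UX's own tar does not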

 

10 More Discussions You Might Find Interesting

1. Programming

C++ Problem, managing >2Gb file

My C++ program returns a 'Disk Full' message when I try to manage a file larger than 2Gb. The process is very simple: based on a TXT file, it combines the information into another temporary file (which triggers the error) that is used to fill up a database. My FS, during the process, reaches 40%...... (4 Replies)
Discussion started by: ASOliveira
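If the real limit there is a 32-bit off_t rather than a full disk, rebuilding with large-file support is often enough. A hedged sketch using the standard getconf LFS variables (myprog.cpp is a placeholder file name):

$ getconf LFS_CFLAGS                  # typically prints -D_FILE_OFFSET_BITS=64 on 32-bit platforms
$ getconf LFS_LDFLAGS
$ c++ $(getconf LFS_CFLAGS) -o myprog myprog.cpp $(getconf LFS_LDFLAGS)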

2. Solaris

SUN Solaris 9 - Is there a 2GB file size limit?

Hi, I am using SUN/Solaris 9 and I was told that some unix versions have a 2GB file size limit. Does this apply to SUN/Solaris 9? Thanks. (2 Replies)
Discussion started by: GMMike
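Solaris 9 itself supports large files on UFS; in practice the ceiling comes from the mount options, the per-process limit, or the application build. A few hedged checks, assuming the file lives under /export (a placeholder path):

$ getconf FILESIZEBITS /export        # 64 means the filesystem can hold files > 2 GB
$ ulimit -f                           # a finite value here caps the size any file can reach
$ mount -v | grep /export             # look for largefiles/nolargefiles among the UFS mount options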

3. Shell Programming and Scripting

efficiently split a 2GB text file into two

Can an expert kindly write an efficient Linux ksh script that will split a large 2 GB text file into two? Here are a couple of sample records from that text file: "field1","field2","field3",11,22,33,44 and "TG","field2b","field3b",1,2,3,4. The above rows are comma-delimited. This script is to... (2 Replies)
Discussion started by: ihot
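One hedged way to do this in ksh is to count the lines once and let split(1) cut the file in half on a record boundary (bigfile.txt and part_ are placeholder names):

#!/bin/ksh
# Split bigfile.txt into two roughly equal halves, keeping whole lines intact.
infile=bigfile.txt
lines=$(wc -l < "$infile")
half=$(( (lines + 1) / 2 ))           # round up so at most two output files are produced
split -l "$half" "$infile" part_      # writes part_aa and part_ab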

4. UNIX for Dummies Questions & Answers

MAX file size limited to 2GB

Hi All, we are running an HP rp7400 box with HP-UX 11iv1. Recently we changed 3 kernel parameters: a) msgseg from 32560 to 32767, b) msgmnb from 65536 to 65535, c) msgssz from 128 to 256. Then we noticed that every application debug file grows up to 2GB and then stops. So far we did not... (1 Reply)
Discussion started by: mhbd
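Those message-queue parameters do not normally cap file sizes; the 2GB ceiling usually comes from the per-process file size limit or a nolargefiles filesystem. Two hedged checks, assuming the debug files land on a VxFS filesystem mounted at /app (a placeholder path):

$ ulimit -f                           # per-process file size limit in 512-byte blocks; 4194304 blocks = 2 GB
# fsadm -F vxfs /app                  # reports largefiles or nolargefiles for that filesystem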

5. AIX

Creating > 2GB file

I am trying to execute a database dump to a file, but can't seem to get around the 2GB file size. I have tried setting the user limit to -1, but no luck. (4 Replies)
Discussion started by: markper
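On AIX the usual culprits are the fsize attribute in /etc/security/limits and, for JFS, a filesystem that was not created large-file enabled. A hedged sketch, with dbadmin as a placeholder user and /backup as a placeholder mount point:

$ ulimit -f                           # soft file size limit of the current shell
# chuser fsize=-1 dbadmin             # -1 means unlimited; takes effect at the user's next login
# lsuser -a fsize fsize_hard dbadmin  # verify both the soft and hard limits
# lsfs -q /backup                     # for JFS, check that the filesystem shows bf: true (large file enabled)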

6. Linux

unzipping file > 2gb

I am not able to unzip a file greater than 2GB. Any suggestions how to do that in Linux? Regards, Manoj (5 Replies)
Discussion started by: manoj.solaris
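Archives over 2GB (or with members that large) need Zip64 support, which Info-ZIP's unzip gained in version 6.0. A hedged check and a fallback (archive.zip is a placeholder name):

$ unzip -v | head -1                  # version 6.0 or later is Zip64-capable
$ unzip -v | grep -iE 'large|zip64'   # compile-time options should list LARGE_FILE_SUPPORT and ZIP64_SUPPORT
$ unzip archive.zip
$ jar xf archive.zip                  # fallback: Java's jar tool also extracts Zip64 archives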

7. UNIX for Advanced & Expert Users

How to create a file more than 2GB

Hi, I am executing a SQL query and the output is more than 2GB, so the process is failing. How can I create a file larger than 2GB? Thanks, Risshanth (1 Reply)
Discussion started by: risshanth
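If the database client is fine but the output file stops at 2GB, check the shell limit first; as a workaround, stream the output through split so no single file crosses the limit. A hedged sketch that assumes an Oracle sqlplus client and GNU split purely for illustration (query.sql and out_ are placeholders):

$ ulimit -f                                                 # make sure the shell is not capping file size
$ sqlplus -s user/pass @query.sql | split -b 1024m - out_   # writes out_aa, out_ab, ... of 1 GB each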

8. HP-UX

2GB file size limit

Greetings, I'm attempting to dump a filesystem from a RHEL5 Linux server to a VXFS filesystem on an HP-UX server. The VXFS filesystem is large file enabled and I've confirmed that I can copy/scp a file >2GB to the filesystem. # fsadm -F vxfs /os_dumps largefiles # mkfs -F vxfs -m... (12 Replies)
Discussion started by: bkimura
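Once the target filesystem is confirmed largefiles-capable, the 2GB ceiling is more likely imposed by the remote shell's limit or by an old tool in the pipeline. Two hedged checks, with hpux01 standing in for the HP-UX host:

$ ssh hpux01 'ulimit -f'                                                  # limit of the shell that actually writes the file
$ ssh hpux01 'dd if=/dev/zero of=/os_dumps/bigtest bs=1024k count=3000'   # can a remote write grow past 2 GB?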

9. UNIX for Dummies Questions & Answers

Delete the file which crossed 2GB

Hi, I want to create a bash script that deletes a specified file once it reaches 2GB, taking a backup before doing so. Please help me do the same; I use a RHEL5 server. (22 Replies)
Discussion started by: Rahulne25
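A minimal bash sketch, assuming the file to watch is /var/log/app.log and backups go under /backup (both placeholder paths); it compresses a copy before removing the original:

#!/bin/bash
# Back up and remove a file once it crosses 2 GB.
file=/var/log/app.log
backup_dir=/backup
limit=$((2 * 1024 * 1024 * 1024))            # 2 GB in bytes

size=$(stat -c %s "$file")                   # size in bytes (GNU stat, present on RHEL5)
if [ "$size" -gt "$limit" ]; then
    gzip -c "$file" > "$backup_dir/$(basename "$file").$(date +%Y%m%d%H%M%S).gz" \
        && rm -f "$file"                     # note: a process holding the file open keeps the space allocated
fi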

10. Fedora

/var/log/btmp size 2.2Gb daily

Hello, one Fedora server has an issue where /var/log/btmp grows to 2.2GB or more daily. I need your help to determine the cause and isolate it. Thank you! (6 Replies)
Discussion started by: feroccimx
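/var/log/btmp records failed login attempts, so growth on that scale usually means an SSH brute-force run; inspect it, then make sure it rotates. A hedged sketch:

# lastb | awk '{print $3}' | sort | uniq -c | sort -rn | head   # most frequent sources of failed logins
# grep -A5 btmp /etc/logrotate.conf                             # the stock stanza rotates /var/log/btmp monthly; tighten it if needed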
expand_dump(8)          System Manager's Manual          expand_dump(8)

NAME
expand_dump - Produces a non-compressed kernel crash dump file

SYNOPSIS
/usr/sbin/expand_dump input-file output-file

DESCRIPTION
By default, kernel crash dump files (vmzcore.#) are compressed during the crash dump. Compressed core files can be examined by the latest versions of debugging tools that have been recompiled to support compressed crash dump files. However, not all debugging tools may be upgraded on a given system, or you may want to examine a crash dump from a remote system using an older version of a tool. The expand_dump utility produces a file that can be read by tools that have not been upgraded to support compressed crash dump files. This non-compressed version can also be read by any upgraded tool.

This utility can only be used with compressed crash dump files and does not support any other form of compressed file. You cannot use other decompression tools such as compress, gzip, or zip on a compressed crash dump file.

Note that the non-compressed file requires significantly more disk storage space, since compression ratios of up to 60:1 are possible. Check the available disk space before running expand_dump and estimate the size of the non-compressed file as follows:
 - Run tests by halting your system and forcing a crash as described in the Kernel Debugging manual.
 - Use an upgraded debugger to determine the value of the variable dumpsize.
 - Multiply this value by the 8 KB page size to approximate the disk space required for the non-compressed crash dump.
 - Alternatively, run expand_dump with /dev/null as the output file and note the size that is printed when expand_dump completes.
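As a worked example of that estimate (the figure 262144 is illustrative only): if the debugger reports dumpsize as 262144 pages, the non-compressed dump needs roughly 262144 x 8 KB = 2 GB of free space. The /dev/null method looks like this:

# expand_dump vmzcore.4 /dev/null     # decompresses and discards the output, printing the size it would need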
RETURN VALUES
The exit status indicates one of the following conditions:
 - Successful completion of the decompression.
 - The user did not supply the correct number of command line arguments.
 - The input file could not be read.
 - The input file is not a compressed dump, or is corrupted.
 - The output file could not be created or opened for writing and truncated.
 - There was some problem writing to the output file (probably a full disk).
 - The input file is not formatted consistently. It is probably corrupted.
 - The input file could not be correctly decompressed. It is probably corrupted.

EXAMPLES
expand_dump vmzcore.4 vmcore.4

SEE ALSO
Commands: dbx(1), kdbx(8), ladebug(1), savecore(8)
Manuals: Kernel Debugging, System Administration