Greetings,
I'm attempting to dump a filesystem from a RHEL5 Linux server to a VXFS filesystem on an HP-UX server. The VXFS filesystem is large file enabled and I've confirmed that I can copy/scp a file >2GB to the filesystem.
However, when using the Linux dump utility, it fails 2GB into the dump:
I've confirmed that I can dump from the same RHEL5 server to another RHEL5 server (ext3 filesystem) without issue, so it doesn't seem to be a dump limitation. As mentioned, I've also confirmed that I can scp a large file (8GB) from the same Linux server to the VxFS filesystem without issue. There seems to be some issue between Linux dump and the HP-UX server/filesystem. This is really starting to drive me nuts. We were previously dumping (via the same script) to a Solaris server without issue. After re-pointing the script to HP-UX, we now have an issue.
Can someone please shed some light on this? I've tried various dump options and nothing seems to make a difference.
Thanks,
- Bill
Last edited by Scott; 11-09-2011 at 02:24 PM..
Reason: Code tags
Thanks for reaching out.
I'm primarily a Linux administrator and don't dabble much with HP-UX, so if you need additional info, please let me know.
The HP-UX server is attached to EMC storage. Our Linux servers were previously backing up to a legacy Sun Solaris server, but we've run out of space there, so I'm trying to shift the scripts to back up to the HP server instead. I've created the logical volume and filesystem from scratch. As mentioned, everything seems to be working as expected with the exception of using dump from Linux to this filesystem. The Linux servers are using the dump options "0uf"; I've tried "0auf" to no avail. Thanks again for reaching out.
- Bill
---------- Post updated at 12:11 PM ---------- Previous update was at 12:08 PM ----------
Keep in mind that the exact same script works flawlessly to both a Solaris server and another Linux server. As soon as I change one of the variables to point to the HP-UX server, it craps out after 2GB every time. The dump is over SSH. I've also tried RSH but got the same results. Thanks.
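For reference, the pipeline is shaped roughly like this (the device, user, host, and target path below are placeholders, not our real names):

```shell
# Sketch of the dump-over-SSH pipeline; every name below is a placeholder.
run_dump() {
  # dump to stdout and let dd on the remote side do the writing;
  # a large-file-capable remote dd with a big block size is worth trying
  dump -0uf - /dev/VolGroup00/LogVol00 \
    | ssh backupuser@hpux-server "dd of=/backup/linux/root.dump bs=1024k"
}
# not invoked here; it needs the real servers
echo "sketch ready"
```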
---------- Post updated at 12:20 PM ---------- Previous update was at 12:11 PM ----------
Proof that the filesystem in question does in fact support large files:
(I've also scp'd an 8GB file from the same Linux server to the filesystem)
It took me a while to answer (people in my office...) and I replied without having had a chance to see your last post...
Did you check the man page for dump on HP-UX? I remember it differs a little...
Hi,
I'm trying to run zip -r on a 2.4G directory and it is failing with the error below. I believe this is because of the 2GB limit of the zip program.
server101(oper01)/u01/temp$: date
Thu Mar 15 12:53:44 NZDT 2012
server101(oper01)/u01/temp$: ls -l
total 8
drwxr-x--x 4 oracle dba ... (1 Reply)
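If the installed zip predates Zip64 support, one workaround (an assumption about the toolchain, not a confirmed fix) is to archive with tar and gzip instead, which have no 2GB issue on a large-file-enabled filesystem. A tiny stand-in demo:

```shell
# Demo with a tiny directory standing in for the 2.4G one.
mkdir -p demo_dir
echo "sample data" > demo_dir/file.txt
# tar + gzip instead of zip; sidesteps an old zip's 2GB limitation
tar -cf - demo_dir | gzip > demo_dir.tar.gz
gzip -t demo_dir.tar.gz && echo "archive OK"
```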
Hi All,
I want to store a file of up to 32KB in an Oracle DB CLOB field. I am not able to insert more than 32KB into the CLOB, so I want to put a limit on the file size. I am using the Korn shell.
The file grows dynamically, and I want to check whether its size exceeds 32KB... (1 Reply)
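A minimal ksh/POSIX sketch of the size check (the file name is a stand-in):

```shell
# Check a file's size before attempting the CLOB insert.
LIMIT=32768                   # 32KB in bytes
FILE="demo.txt"               # stand-in file name
echo "some data" > "$FILE"    # demo content
SIZE=$(wc -c < "$FILE")
if [ "$SIZE" -gt "$LIMIT" ]; then
  echo "too big: $SIZE bytes"
else
  echo "ok to insert: $SIZE bytes"
fi
```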
Hi All,
We are running an HP rp7400 box with HP-UX 11i v1.
Recently, we changed 3 kernel parameters
a) msgseg from 32560 to 32767
b) msgmnb from 65536 to 65535
c) msgssz from 128 to 256
Then we noticed that every application debug file grows to 2GB and then stops. So far we did not... (1 Reply)
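One quick, portable sanity check is whether the filesystem holding the debug files can address files larger than 2GB at all (the directory below is a stand-in for the real debug directory):

```shell
# FILESIZEBITS of 64 means the filesystem can hold files >2GB;
# 32 would explain files stopping dead at the 2GB mark.
DIR="."   # stand-in for the debug-file directory
getconf FILESIZEBITS "$DIR"
```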
Any idea how to get around this limit? I have a 42GB database backup file (.dmp) taking up disk space because neither tar nor cpio is able to put it onto a tape. I am on Sun Solaris running SunOS 5.8. I would appreciate whatever help can be provided. Thanks! (9 Replies)
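One way around per-file archiver limits (sketched here on a tiny stand-in, not the 42GB dump) is to split the file into chunks the archiver can handle and reassemble them with cat on restore:

```shell
# Tiny stand-in for the 42GB .dmp file.
dd if=/dev/zero of=backup.dmp bs=1024 count=8 2>/dev/null
# Split into chunks (something like -b 1024m for a real 42GB file).
split -b 4k backup.dmp backup.part.
# On restore, concatenate the pieces back together and verify.
cat backup.part.* > restored.dmp
cmp backup.dmp restored.dmp && echo "verified"
```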
Hi,
I have a problem writing or copying a file 2GB or larger to either the second or third disk on my C8000. I've searched this forum and found some good information on this but still nothing to solve the problem.
I'm running HP-UX 11i, JFS 3.3 and disk version 4 (from fstyp) on all 3 disks.
... (2 Replies)
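One thing worth checking on each of the three disks is whether the JFS/VxFS filesystem was created or mounted with largefiles; fstyp -v shows the current state, and fsadm can turn it on. The device path below is a placeholder, and the commands are HP-UX-only, so they're wrapped in a function rather than run:

```shell
# HP-UX-only commands, wrapped in a function so this sketch stays
# runnable elsewhere; /dev/vg01/lvol3 is a placeholder device.
check_largefiles() {
  fstyp -v /dev/vg01/lvol3 | grep -i largefiles      # current state
  # To enable large files on an existing VxFS/JFS filesystem:
  # fsadm -F vxfs -o largefiles /dev/vg01/rlvol3
}
echo "sketch ready"
```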
Can anybody help me?
How do I increase the file size limit in AIX 5.2? I have already specified the following in the /etc/security/limits file:
default:
fsize = -1
core = 2097151
cpu = -1
data = -1
rss = -1
stack = -1
nofiles = 2000 (2 Replies)
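Edits to /etc/security/limits only apply to new login sessions, so after re-logging-in it's worth checking what the shell actually reports (a generic sanity check, not AIX-specific):

```shell
# Soft limit on file size as the current shell sees it;
# "unlimited" corresponds to fsize = -1.
ulimit -f
# Hard limit too, where the shell supports -H.
ulimit -H -f 2>/dev/null || true
```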
I want to have a permanent file created and limit the size that this file can grow to. I want a circular file.
i.e. the max size of the file is 10 MB, and if any new data is written, the oldest data is removed.
How can I do this?
I am on Solaris 9 x86 (3 Replies)
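There's no true circular file on a plain filesystem, but a close approximation is to trim the file back to its newest bytes after each append. A sketch, assuming a POSIX shell and a tail that supports -c, with a 200-byte cap standing in for the real 10 MB:

```shell
# Capped "circular" log: after each append, keep only the newest MAX bytes.
MAX=200           # cap in bytes (10485760 for the real 10 MB case)
LOG="app.log"
: > "$LOG"        # start fresh for the demo

append_capped() {
  printf '%s\n' "$1" >> "$LOG"
  if [ "$(wc -c < "$LOG")" -gt "$MAX" ]; then
    # drop the oldest bytes, keeping only the tail of the file
    tail -c "$MAX" "$LOG" > "$LOG.tmp" && mv "$LOG.tmp" "$LOG"
  fi
}

i=0
while [ "$i" -lt 50 ]; do
  append_capped "log line $i"
  i=$((i + 1))
done
wc -c < "$LOG"    # never exceeds MAX after a trim
```

The trade-off is that the trim rewrites the file on every overflow, which is fine for modest logs but costly at high write rates.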
I am working on HP-UX.
I have a 600 MB file in compressed form.
During decompression, when the file size reaches 2GB, the decompression aborts.
What should be done? (3 Replies)
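If the 2GB wall is on the output file, one approach that may help (demoed here with gzip standing in for the original compressed data) is never to materialize the full decompressed file: stream it straight into the consumer, or split the stream into sub-2GB pieces:

```shell
# Demo: a small file standing in for the 600 MB compressed one.
echo "payload" > data.txt
gzip -c data.txt > data.txt.gz
# Stream the decompressed bytes straight into the consumer (wc here)
# instead of writing one large file to disk:
gunzip -c data.txt.gz | wc -c
# For a genuinely >2GB result, the stream could instead be split:
#   gunzip -c big.Z | split -b 1024m - big.part.
```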