2GB file size limit


 
# 1  
Old 11-09-2011

Greetings,
I'm attempting to dump a filesystem from a RHEL5 Linux server to a VXFS filesystem on an HP-UX server. The VXFS filesystem is large file enabled and I've confirmed that I can copy/scp a file >2GB to the filesystem.
Code:
# fsadm -F vxfs /os_dumps
largefiles

# mkfs -F vxfs -m /dev/vg02/os_dumps
mkfs -F vxfs -o ninode=unlimited,bsize=1024,version=4,inosize=256,logsize=16384,largefiles /dev/vg02/os_dumps 2097152000

However, when using the Linux dump utility, it fails 2GB into the dump:
Code:
DUMP: write: File too large
  DUMP: write error 2097170 blocks into volume 1: File too large
  DUMP: Do you want to rewrite this volume?: ("yes" or "no")   DUMP: write: File too large
  DUMP: write: File too large

I've confirmed that I can dump from the same RHEL5 server to another RHEL5 server (ext3 filesystem) without issue, so it doesn't seem to be a dump limitation. As mentioned, I've also confirmed that I can scp a large file (8GB) from the same Linux server to the VXFS filesystem without issue. There seems to be some problem between Linux dump and the HP-UX server/filesystem, and it's really starting to drive me nuts. We were previously dumping (via the same script) to a Solaris server without issue; after re-pointing the script to HP-UX, we now have this problem.
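For what it's worth, the numbers in that error sit almost exactly on the signed 32-bit offset boundary. A quick sanity check (assuming dump's reported block count is in 1 KiB units, which I believe is its default):

```shell
# "write error 2097170 blocks into volume 1" -- at 1 KiB per block, that
# offset is only 18 KiB past 2^31 bytes (2 GiB), the classic ceiling for
# a writer opened without large-file (O_LARGEFILE) support
offset=$(( 2097170 * 1024 ))   # byte offset where the write failed
limit=2147483648               # 2^31 bytes = 2 GiB
echo "$offset $limit $(( offset - limit ))"
```

So something in the write path is behaving like a 32-bit writer, even though the filesystem itself is large-file enabled.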

Can someone please shed some light on this? I've tried various dump options and nothing seems to make a difference.

Thanks,
- Bill

Last edited by Scott; 11-09-2011 at 02:24 PM.. Reason: Code tags
# 2  
Old 11-09-2011
Perhaps if you told us a bit more about your HP-UX system and OS version, I could start thinking a bit...
# 3  
Old 11-09-2011
Can you also post the dump command line, with the options you are using, to dump this ext3 fs from RHEL to HP-UX?
# 4  
Old 11-09-2011
Thanks for reaching out.
Code:
# uname -a
HP-UX corvette B.11.11 U 9000/800 1756503870 unlimited-user license

I'm primarily a Linux administrator and don't dabble much with HP-UX so if you need additional info, please let me know.

The HP-UX server is attached to EMC storage. Our Linux servers were previously backing up to a legacy Sun Solaris server, but we've run out of space there, so I'm trying to shift the scripts to back up to the HP server instead. I've created the logical volume and filesystem from scratch. As mentioned, everything seems to be working as expected except for using dump from Linux to this filesystem. The Linux servers are using the dump options "0uf"; I've also tried "0auf" to no avail. Thanks again for reaching out.

- Bill

---------- Post updated at 12:11 PM ---------- Previous update was at 12:08 PM ----------

Keep in mind that the exact same script works flawlessly to both a Solaris server and another Linux server. As soon as I change one of the variables to point to the HP-UX server, it craps out after 2GB every time. The dump is over SSH. I've also tried RSH but got the same results. Thanks.

---------- Post updated at 12:20 PM ---------- Previous update was at 12:11 PM ----------

Proof that the filesystem in question does in fact support large files:
(I've also scp'd an 8GB file from the same Linux server to the filesystem)
Code:
[corvette]:/os_dumps/blades # dd if=/dev/zero of=8gb_file bs=8k count=1048576
1048576+0 records in
1048576+0 records out

[corvette]:/os_dumps/blades # ls -l
total 16809888
-rw-r-----   1 root       sys        8589934592 Nov  9 12:18 8gb_file

[corvette]:/os_dumps/blades # bdf .
Filesystem          kbytes    used   avail %used Mounted on
/dev/vg02/os_dumps 2097152000 8485800 2072348504    0% /os_dumps
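For anyone double-checking the arithmetic above, the dd parameters and the ls -l size agree exactly (nothing assumed beyond the posted output):

```shell
# 1048576 records of 8 KiB (bs=8k) is exactly the 8589934592 bytes that
# ls -l reports: an 8 GiB file, four times the supposed 2 GiB ceiling
size=$(( 1048576 * 8192 ))
echo "$size bytes = $(( size / 1024 / 1024 / 1024 )) GiB"
```
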


# 5  
Old 11-09-2011
Can you give us the output of the next commands:
Code:
model
vgdisplay vg02
lvdisplay /dev/vg02/os_dumps

# 6  
Old 11-09-2011
Code:
[corvette]:/os_dumps/blades # model
9000/800/rp8420

[corvette]:/os_dumps/blades # vgdisplay vg02
--- Volume groups ---
VG Name                     /dev/vg02
VG Write Access             read/write     
VG Status                   available                 
Max LV                      255    
Cur LV                      1      
Open LV                     1      
Max PV                      16     
Cur PV                      10     
Act PV                      10     
Max PE per PV               65535        
VGDA                        20  
PE Size (Mbytes)            128             
Total PE                    22981   
Alloc PE                    16000   
Free PE                     6981    
Total PVG                   0        
Total Spare PVs             0              
Total Spare PVs in use      0                     

[corvette]:/os_dumps/blades # lvdisplay /dev/vg02/os_dumps
--- Logical volumes ---
LV Name                     /dev/vg02/os_dumps
VG Name                     /dev/vg02
LV Permission               read/write   
LV Status                   available/syncd           
Mirror copies               0            
Consistency Recovery        MWC                 
Schedule                    parallel     
LV Size (Mbytes)            2048000         
Current LE                  16000     
Allocated PE                16000       
Stripes                     0       
Stripe Size (Kbytes)        0                   
Bad block                   on           
Allocation                  strict                    
IO Timeout (Seconds)        default


# 7  
Old 11-09-2011
I took a while answering (people in my office...) and replied without having had a chance to see your last post.
Have you done a man on dump on HP-UX? I remember it differs a little...