Operating Systems > Linux > File size limitation in Linux
Post 302930286 by jim mcnamara on Thursday 1st of January 2015 05:24:21 PM
Consider: sparse files.

A sparse file that shows 10 MB of used space may occupy far more space in the new file or in a tar archive when it is copied or tarred. Those things are the bane of backups.

I am not saying this applies here, but it should be considered whenever a backup of nn GB will not fit in a destination of nn GB plus a small margin.

Sparse files have "holes"; this page has nice diagrams:

Sparse file - Wikipedia, the free encyclopedia
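
A quick way to see this for yourself (the file names here are made up for illustration):

        # create a 100 MB sparse file -- no data blocks are allocated
        truncate -s 100M sparse.img

        # apparent size vs. blocks actually on disk
        ls -lh sparse.img     # reports ~100M (apparent size)
        du -h sparse.img      # reports ~0    (allocated blocks)

        # a naive copy can expand the holes; GNU cp can preserve them
        cp --sparse=always sparse.img copy.img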

Also note that you can mount filesystems in such a way as to "obscure" an underlying group of files. Example:
a directory tree with 2 GB total in the whole thing: /path/to/confusion
If you mount another filesystem on /path/to/confusion, all of the underlying files are still there and still consume disk space, but they are no longer visible to some tools. Disagreement between du and df is one result.

This may produce the same weird results being discussed, so you may want to consider it as well. Look in /etc/fstab for confirmation; a quick way to check is sketched below.
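
A minimal sketch of such a check (the paths are illustrative, and the bind mount needs root). Space used by files hidden under a mount point still shows up in df for the parent filesystem, but du cannot walk them:

        # usage as the parent filesystem reports it
        df -h /

        # usage as du can see it, staying on one filesystem
        du -shx /

        # bind-mount the parent filesystem elsewhere to inspect what
        # is hidden underneath the mount point
        mkdir -p /mnt/rootview
        mount --bind / /mnt/rootview
        du -sh /mnt/rootview/path/to/confusion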
 

10 More Discussions You Might Find Interesting

1. UNIX Desktop Questions & Answers

Size Limitation for a user directory

Hi all, I want to set a size limitation for some users in the system; for example, each user has only 5 MB of free space in the system and cannot use more than that. Is it possible to do this? Thanks! (1 Reply)
Discussion started by: felix_koo

2. HP-UX

HP-UX 11i - File Size Limitation And Number Of Folders Limitation

Hi All, Can anyone please clarify the following questions for me: 1. Is there any file size limitation in HP-UX 11i, such that I can create a file only up to a certain size (say 2 GB) and not more than that? 2. At most, how many files can we keep inside a folder? 3. How many... (2 Replies)
Discussion started by: sundeep_mohanty

3. Shell Programming and Scripting

File size limitation of unix sort command.

Hi, I am trying to sort millions of delimited records, and I cannot use the sort command beyond 60 million records. If I try, I get a message stating "File size limit exceeded". Is there any file size limit for using the sort command? How can I solve this problem? Thanks... (7 Replies)
Discussion started by: cskumar

4. Linux

File size limitation for rcp

Hi, I am trying to rcp a file from a Solaris box to Linux. When the file size is 2,205,255,047 bytes, the rcp fails with the message: Jan 10 01:11:53 hqsas167 rsh: pam_authenticate: error Authentication failed. However, when I rcp a smaller file - 9,434,477 bytes - the rcp completes with... (2 Replies)
Discussion started by: schoubal

5. Shell Programming and Scripting

Size limitation in Tar command

Hi everybody, I am new to this forum and this is my first post. I am a new user of Unix; is there any size limitation on files when creating a tar file? Thanks in advance. (4 Replies)
Discussion started by: Manvar Khan

6. Shell Programming and Scripting

fetchmail - log file size limitation

Hi, I am using fetchmail in my application to download mail from the mail server to the localhost where the application is hosted. Fetchmail is configured to run as a daemon, polling mail at an interval of 1 sec. My concern is that during each 2 sec it is writing two... (10 Replies)
Discussion started by: DILEEP410

7. UNIX for Advanced & Expert Users

Find command -size option limitation ?

Hi All, I ran code in a test environment to find files larger than 1 TB; given below is a snippet from the code: FILE_SYSTEM=/home/arun MAX_FILE_LIMIT=1099511627776 find $FILE_SYSTEM -type f -size +"$MAX_FILE_LIMIT"c -ls -xdev 2>/dev/null | while read fname do echo "File larger than... (3 Replies)
Discussion started by: Arunprasad

8. Solaris

How to extend 2 GB file size limitation

Hello All, I am using a SunOS machine. My application creates output files for downstream systems. However, output files are restricted to 2 GB in SunOS, because of which I am forced to create multiple files, which the downstream systems do not support due to some limitations. Is... (5 Replies)
Discussion started by: pasupuleti81

9. UNIX for Advanced & Expert Users

size for sum variable limitation on awk

Hello. First, truth be told, I'm not even close to being an advanced user. I'm posting here because my question may be complicated enough to need your expert help. I need to use awk (or nawk - I don't have gawk) to validate some files by computing the total sum of a large numeric variable. It... (1 Reply)
Discussion started by: cwitarsa

10. Linux

File size limitation in the EST 2012 x86_64 GNU/Linux

Hello Friends, I tried to take a tar backup on my server, but it ended with an error: /home/back/pallava_backup/fbackup_backup/stape_config /home/back/romam_new.tar.gz tar: /home/backup/back.tar.gz: Cannot write: No space left on device tar: Error is not recoverable: exiting... (10 Replies)
Discussion started by: siva3492
NTFSCLONE(8)						      System Manager's Manual						      NTFSCLONE(8)

NAME
       ntfsclone - Efficiently clone, image, restore or rescue an NTFS

SYNOPSIS
       ntfsclone [OPTIONS] SOURCE
       ntfsclone --save-image [OPTIONS] SOURCE
       ntfsclone --restore-image [OPTIONS] SOURCE
       ntfsclone --metadata [OPTIONS] SOURCE

DESCRIPTION
       ntfsclone will efficiently clone (copy, save, backup, restore) or rescue an NTFS filesystem to a sparse file, image, device (partition) or standard output. It works at the disk sector level and copies only the used data. Unused disk space becomes zero (cloning to a sparse file), encoded with control codes (saving in the special image format), left unchanged (cloning to a disk/partition) or filled with zeros (cloning to standard output).

       ntfsclone can be useful for making backups, for taking an exact snapshot of an NTFS filesystem to restore later on, or for developers who want to test NTFS read/write functionality or troubleshoot/investigate users' issues using the clone, without the risk of destroying the original filesystem.

       The clone, if not using the special image format, is an exact sector-by-sector copy of the original NTFS filesystem, thus it can also be mounted just like the original. For example, if you clone to a file and the kernel has loopback device and NTFS support, then the file can be mounted as

              mount -t ntfs -o loop ntfsclone.img /mnt/ntfsclone

   Windows Cloning
       If you want to copy, move or restore a system or boot partition to another computer, or to a different disk or partition (e.g. hda1->hda2, hda1->hdb1, or to a different disk sector offset), then you will need to take extra care. Usually, Windows will not be able to boot unless you copy, move or restore NTFS to the same partition, starting at the same sector, on the same type of disk, with the same BIOS legacy cylinder setting as the original partition and disk had. The ntfsclone utility guarantees to make an exact copy of NTFS, but it won't deal with booting issues. This is by design: ntfsclone is a filesystem utility, not a system utility. Its aim is only NTFS cloning, not Windows cloning. Hence ntfsclone can be used as a very fast and reliable building block for Windows cloning, but by itself it's not enough. You can find useful tips following the related links on the page below:

              http://wiki.linux-ntfs.org/doku.php?id=ntfsclone

   Sparse Files
       A file is sparse if it has unallocated blocks (holes). The reported size of such a file is always higher than the disk space it actually consumes. The du command can tell the real disk space used by a sparse file. The holes are always read as zeros. All major Linux filesystems, such as ext2, ext3, ReiserFS, Reiser4, JFS and XFS, support sparse files, but for example the ISO 9660 CD-ROM filesystem doesn't.

   Handling Large Sparse Files
       As of today, Linux provides inadequate support for managing (tar, cp, gzip, gunzip, bzip2, bunzip2, cat, etc.) large sparse files. The only major Linux filesystem with support for efficient sparse file handling is XFS, via the XFS_IOC_GETBMAPX ioctl(2), but none of the common utilities support it. This means that when you tar, cp, gzip, bzip2, etc. a large sparse file, these tools always read the entire file, even if you use the "sparse support" options.

       bzip2(1) compresses large sparse files much better than gzip(1), but it is also much slower. Moreover, neither of them handles large sparse files efficiently during uncompression, from a disk space usage point of view.

       At present the most efficient way, both speed- and space-wise, to compress and uncompress large sparse files with common tools is to use tar(1) with the options -S (handle sparse files "efficiently") and -j (filter the archive through bzip2).
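       A concrete sketch of that invocation (the file name is illustrative; see the -S data-loss caveat below before relying on it):

              # archive a sparse file, handling holes efficiently (-S) and
              # filtering through bzip2 (-j)
              tar -Scjf clone.tar.bz2 ntfsclone.img

              # on extraction GNU tar recreates the holes recorded in the archive
              tar -xjf clone.tar.bz2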
       Although tar still reads and analyzes the entire file, it doesn't pass the large blocks containing only zeros on to the filter, and it also avoids needlessly writing large amounts of zeros to disk. But since tar can't create an archive from standard input, you can't do this in-place by simply reading ntfsclone's standard output. Even more sadly, using the -S option has resulted in serious data loss since the end of 2004, and the GNU tar maintainers haven't released fixed versions to the present day.

   The Special Image Format
       It's also possible, and in fact recommended, to save an NTFS filesystem to a special image format. Instead of representing unallocated blocks as holes, they are encoded using control codes. Thus, the image saves space without requiring sparse file support. The image format is ideal for streaming filesystem images over a network and similar purposes, and can be used as a replacement for Ghost or Partition Image if it is combined with other tools. The downside is that you can't mount the image directly; you need to restore it first.

       To save an image using the special image format, use the -s or the --save-image option. To restore an image, use the -r or the --restore-image option. Note that you can restore images from standard input by using '-' as the SOURCE file.

   Metadata-only Cloning
       One of the features of ntfsclone is that it can also save only the NTFS metadata, using the option -m or --metadata, and the clone will still be mountable. In that case all non-metadata file content is lost, and reading it back will always return zeros. The metadata-only image compresses very well, usually to no more than 1-8 MB, so it is easy to transfer for investigation or troubleshooting.

       In this mode of ntfsclone, NONE of the user's data is saved, including resident user data embedded in the metadata; all of it is filled with zeros. Moreover, all the file timestamps, and the deleted and unused spaces inside the metadata, are filled with zeros. Thus this mode is inappropriate, for example, for forensic analyses. Please note that filenames are not wiped out. They might contain sensitive information, so think twice before sending such an image to anybody.

OPTIONS
       Below is a summary of all the options that ntfsclone accepts. Nearly all options have two equivalent names. The short name is preceded by - and the long name is preceded by --. Any single-letter options that don't take an argument can be combined into a single command, e.g. -fv is equivalent to -f -v. Long-named options can be abbreviated to any unique prefix of their name.

       -o, --output FILE
              Clone NTFS to the non-existent FILE. If FILE is '-' then clone to the standard output.

       -O, --overwrite FILE
              Clone NTFS to FILE, overwriting it if it exists.

       -s, --save-image
              Save to the special image format. This is the most efficient way, space- and speed-wise, if imaging is done to the standard output, e.g. for image compression, encryption or streaming through a network.

       -r, --restore-image
              Restore from the special image format specified by the SOURCE argument. If the SOURCE is '-' then the image is read from the standard input.

       --rescue
              Ignore disk read errors, so disks having bad sectors, e.g. dying disks, can be rescued in the most efficient way, with minimal stress on them. ntfsclone works at the lowest, sector level in this mode too, thus more data can be rescued. The contents of unreadable sectors are filled with the character '?' and the beginning of each such sector is marked by "BadSectoR".

       -m, --metadata
              Clone ONLY METADATA (for NTFS experts). Moreover, only cloning to a file is allowed. You can't metadata-only clone to a device, image or standard output.

       --ignore-fs-check
              Ignore the result of the filesystem consistency check. This option may be used only together with the --metadata option, for the safety of the user's data. The clusters which cause the inconsistency are saved too.

       -f, --force
              Forces ntfsclone to proceed if the filesystem is marked "dirty" for consistency check.

       -h, --help
              Show a list of options with a brief description of each one.

EXIT CODES
       The exit code is 0 on success, non-zero otherwise.

EXAMPLES
       Clone NTFS on /dev/hda1 to /dev/hdc1:

              ntfsclone --overwrite /dev/hdc1 /dev/hda1

       Save an NTFS to a file in the special image format:

              ntfsclone --save-image --output backup.img /dev/hda1

       Restore an NTFS from a special image file to its original partition:

              ntfsclone --restore-image --overwrite /dev/hda1 backup.img

       Save an NTFS into a compressed image file:

              ntfsclone --save-image -o - /dev/hda1 | gzip -c > backup.img.gz

       Restore an NTFS volume from a compressed image file:

              gunzip -c backup.img.gz | ntfsclone --restore-image --overwrite /dev/hda1 -

       Back up an NTFS volume to a remote host using ssh. Please note that ssh may ask for a password!

              ntfsclone --save-image --output - /dev/hda1 | gzip -c | ssh host 'cat > backup.img.gz'

       Restore an NTFS volume from a remote host via ssh. Please note that ssh may ask for a password!

              ssh host 'cat backup.img.gz' | gunzip -c | ntfsclone --restore-image --overwrite /dev/hda1 -

       Stream an image file from a web server and restore it to a partition:

              wget -qO - http://server/backup.img | ntfsclone --restore-image --overwrite /dev/hda1 -

       Clone an NTFS volume to a non-existent file:

              ntfsclone --output ntfs-clone.img /dev/hda1

       Pack NTFS metadata for NTFS experts. Please note that bzip2 runs for a very long time but usually produces archives at least 10 times smaller than gzip:

              ntfsclone --metadata --output ntfsmeta.img /dev/hda1
              bzip2 ntfsmeta.img

       Unpack NTFS metadata into a sparse file:

              bunzip2 -c ntfsmeta.img.bz2 | cp --sparse=always /proc/self/fd/0 ntfsmeta.img

KNOWN ISSUES
       There are no known problems with ntfsclone. If you think you have found a problem, then please send an email describing it to the development team: linux-ntfs-dev@lists.sourceforge.net

       Sometimes it might appear that ntfsclone has frozen if the clone is on ReiserFS, and even CTRL-C won't stop it. This is not a bug in ntfsclone; it is due to ReiserFS being extremely inefficient at creating large sparse files and not handling signals during this operation. This ReiserFS problem was improved in kernel 2.4.22. XFS, JFS and ext3 don't have this problem.

AUTHORS
       ntfsclone was written by Szabolcs Szakacsits with contributions from Per Olofsson (special image format support) and Anton Altaparmakov.

AVAILABILITY
       ntfsclone is part of the ntfsprogs package and is available at:

              http://www.linux-ntfs.org/content/view/19/37

       The latest manual pages are available at:

              http://man.linux-ntfs.org/

       Additional up-to-date information can furthermore be found at:

              http://wiki.linux-ntfs.org/doku.php?id=ntfsclone

SEE ALSO
       ntfsresize(8), ntfsprogs(8), xfs_copy(8), debugreiserfs(8), e2image(8)

ntfsprogs 1.13.1                        February 2006                        NTFSCLONE(8)