Copying of large files fail
Post 302417945 by bigearsbilly in Shell Programming and Scripting, Saturday 1st of May 2010 03:09:32 PM
Are you cp-ing them onto a local filesystem that cannot handle huge files?
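A quick way to test that theory (a minimal sketch, assuming a GNU userland for df -T -- on Solaris use fstyp or df -n instead; the destination path is a placeholder):

    #!/bin/sh
    # Placeholder destination -- substitute the directory the copy fails into.
    DEST=/destination/path

    # 1. Identify the filesystem type: vfat/FAT32 caps files at 4 GiB, and some
    #    older filesystems (or tools built without large-file support) stop at 2 GiB.
    df -T "$DEST"

    # 2. Probe directly: write one byte just past the 2 GiB boundary, creating a
    #    sparse test file. If this fails, the filesystem (or its mount options)
    #    cannot hold huge files.
    dd if=/dev/zero of="$DEST/largefile.test" bs=1 count=1 seek=2147483648 \
        && ls -l "$DEST/largefile.test"
    rm -f "$DEST/largefile.test"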
 

10 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

copying a large filesystem

Hi there. In my organisation we have a Solaris network with /home being automounted from /export/home on a central file server (usual stuff); however, the guy who originally set this up only allocated 3 GB to /export/home and now we are really struggling for space. I have a new 18 GB disk installed... (3 Replies)
Discussion started by: hcclnoodles

2. Filesystems, Disks and Memory

Strange difference in file size when copying LARGE file..

Hi, I'm trying to take a database backup. One of the files is 26 GB. I am using cp -pr to create a backup copy of the database. After the copying is complete, if I do du -hrs on the folders I see a difference of 2 GB. The weird fact is that the BACKUP folder was 2 GB more than the original one! ... (1 Reply)
Discussion started by: 0ktalmagik

3. UNIX for Dummies Questions & Answers

Copying large file problem on SVR4 Unix

We have 3 Unix servers, all running SVR4 Unix 1.4. I have no problems copying files to and from 2 of the servers using either the rcp command or ftp, but when I come to transfer large files to the third server the copy gives up part way through and crashes this server. Copying smaller files using RCP... (7 Replies)
Discussion started by: coatesd

4. UNIX for Advanced & Expert Users

copying of files by userB, dir & files owned by userA

I am userB and have a dir /temp1. This dir is owned by me. How do I recursively copy files from another user's dir, userA? I need to preserve the original user who created the files, original group information, original create date, mod date, etc. I tried cp -pr /home/userA/* . ... (2 Replies)
Discussion started by: Hangman2

5. UNIX for Dummies Questions & Answers

Copying a Large File

I have a large file that I append entries to the end of every few seconds. It's grown to >150 MB. It's basically a log file, but a Perl script is writing to it. I need to make a copy of it to a new directory. I realize the latest entries occurring while the copy is taking place will not be recorded... (1 Reply)
Discussion started by: lforum

6. Solaris

How to safely copy full filesystems with large files (10Gb files)

Hello everyone. I need some help copying a filesystem. The situation is this: I have an Oracle DB mounted on /u01 and need to copy it to /u02. /u01 is 500 GB and /u02 is 300 GB. The space used on /u01 is 187 GB. This is running on Solaris 9 and both filesystems are UFS. I have tried to do it using:... (14 Replies)
Discussion started by: dragonov7

7. Shell Programming and Scripting

Start copying large file while its still being restored from tape

Hello, I need to copy a 700 GB tape-image file over a network. I want to start the copy process before the tape image has finished being restored from the tape. The tape restore speed is about 78 Mbps and the file transfer speed over the network is about 45 Mbps. I don't want to use a pipe, since... (7 Replies)
Discussion started by: swamik

8. SCO

Need advice: Copying large CSV report files off SCO system

I have a SCO Unix server from 1999 running SCO 5.0.5 and some ancient accounting software called Real World. A report writer program on the system is used to generate CSV files from accounting, which we write with DOSCOPY commands to 3.5" floppies. In the next 60 days we will be decommissioning... (11 Replies)
Discussion started by: magnetman

9. Shell Programming and Scripting

Copying number by looking a large file

Hi All, I have a big file which looks like this (one word-value pair per line): abc 34.32, cdf 343.45, computer 1.34, ladder 2.3422. I have some 100000 .TXT files which look like this (one word per line): computer, cdf, align. I have to open each of the text files and read the words from the text files. Then I have to look into that... (2 Replies)
Discussion started by: shoaibjameel123

10. Shell Programming and Scripting

Copying large files in a bash script stops execution

Hello, I'm new to this forum and would like to first of all say hello to everyone. I've got a really annoying problem at the moment. I'm trying to rsync some files (about 200 MB, with one file of 120 MB) from a Raspberry Pi running Raspbian to a Debian server. This procedure is stored in a... (3 Replies)
Discussion started by: wex_storm
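Several of these threads (5, 7, and 10 in particular) come down to the same problem: moving a large file reliably while something else may interfere. For the rsync-in-a-script case of discussion 10, here is a minimal sketch with hypothetical host and path names, checking the exit status explicitly so a failed transfer doesn't silently halt the rest of the script:

    #!/bin/sh
    # Hypothetical source and destination -- substitute your own.
    SRC=/home/pi/data/
    DEST=backup@server.example.com:/srv/backup/data/

    # -a preserves permissions, times, and ownership; --partial keeps a
    # half-transferred file so a retry can resume it; --timeout avoids
    # hanging forever on a stalled link.
    rsync -a --partial --timeout=300 "$SRC" "$DEST"
    status=$?
    if [ "$status" -ne 0 ]; then
        echo "rsync exited with status $status" >&2
        exit "$status"
    fi
    echo "transfer complete"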
ALLOC_HUGEPAGES(2)					     Linux Programmer's Manual						ALLOC_HUGEPAGES(2)

NAME
       alloc_hugepages, free_hugepages - allocate or free huge pages

SYNOPSIS
       void *alloc_hugepages(int key, void *addr, size_t len, int prot, int flag);

       int free_hugepages(void *addr);

DESCRIPTION
       The system calls alloc_hugepages() and free_hugepages() were introduced in Linux 2.5.36 and removed
       again in 2.5.54.  They existed only on i386 and ia64 (when built with CONFIG_HUGETLB_PAGE).  In Linux
       2.4.20, the syscall numbers exist, but the calls fail with the error ENOSYS.

       On i386 the memory management hardware knows about ordinary pages (4 KiB) and huge pages (2 or 4 MiB).
       Similarly ia64 knows about huge pages of several sizes.  These system calls serve to map huge pages
       into the process's memory or to free them again.  Huge pages are locked into memory, and are not
       swapped.

       The key argument is an identifier.  When zero the pages are private, and not inherited by children.
       When positive the pages are shared with other applications using the same key, and inherited by child
       processes.

       The addr argument of free_hugepages() tells which page is being freed: it was the return value of a
       call to alloc_hugepages().  (The memory is first actually freed when all users have released it.)
       The addr argument of alloc_hugepages() is a hint, that the kernel may or may not follow.  Addresses
       must be properly aligned.

       The len argument is the length of the required segment.  It must be a multiple of the huge page size.

       The prot argument specifies the memory protection of the segment.  It is one of PROT_READ,
       PROT_WRITE, PROT_EXEC.

       The flag argument is ignored, unless key is positive.  In that case, if flag is IPC_CREAT, then a new
       huge page segment is created when none with the given key existed.  If this flag is not set, then
       ENOENT is returned when no segment with the given key exists.
RETURN VALUE
       On success, alloc_hugepages() returns the allocated virtual address, and free_hugepages() returns
       zero.  On error, -1 is returned, and errno is set appropriately.

ERRORS
       ENOSYS The system call is not supported on this kernel.

FILES
       /proc/sys/vm/nr_hugepages
              Number of configured hugetlb pages.  This can be read and written.

       /proc/meminfo
              Gives info on the number of configured hugetlb pages and on their size in the three variables
              HugePages_Total, HugePages_Free, Hugepagesize.

CONFORMING TO
       These calls are specific to Linux on Intel processors, and should not be used in programs intended to
       be portable.

NOTES
       These system calls are gone; they existed only in Linux 2.5.36 through to 2.5.54.  Now the hugetlbfs
       filesystem can be used instead.  Memory backed by huge pages (if the CPU supports them) is obtained by
       using mmap(2) to map files in this virtual filesystem.  The maximal number of huge pages can be
       specified using the hugepages= boot parameter.
COLOPHON
       This page is part of release 4.15 of the Linux man-pages project.  A description of the project,
       information about reporting bugs, and the latest version of this page, can be found at
       https://www.kernel.org/doc/man-pages/.

Linux                                           2017-09-15                            ALLOC_HUGEPAGES(2)
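As a footnote to the FILES and NOTES sections above, here is a minimal shell sketch of the hugetlbfs route, assuming root privileges and a kernel built with hugetlb support; the mount point /mnt/huge is an arbitrary choice:

    #!/bin/sh
    # Reserve 64 huge pages; the kernel may grant fewer if memory is fragmented,
    # so read back /proc/meminfo to see what was actually allocated.
    echo 64 > /proc/sys/vm/nr_hugepages
    grep -i huge /proc/meminfo    # HugePages_Total, HugePages_Free, Hugepagesize

    # Mount the hugetlbfs virtual filesystem; files created here and mapped
    # with mmap(2) are backed by huge pages.
    mkdir -p /mnt/huge
    mount -t hugetlbfs none /mnt/huge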