Full Discussion: Copy huge files system
Operating Systems > AIX, thread "Copy huge files system", Post 302511336 by bakunin on Wednesday, 6 April 2011, 02:32:01 PM
Quote:
Originally Posted by Mr.AIX
I have tried to use your suggested command, but it hung after some time.
You haven't mentioned that until now. If so, what exactly was the error? The method I described (tar) has worked for me countless times, including on this amount of data and more.
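
For reference, the tar-based copy usually takes roughly this form (a sketch only; /source_fs and /target_fs are placeholder mount points, not the original poster's paths):

    # Run as root so ownership and permissions are restored on extraction.
    cd /source_fs
    tar -cf - . | ( cd /target_fs && tar -xf - )

The first tar writes an archive of the current directory to stdout; the subshell changes into the target file system and unpacks the stream as it arrives, so no intermediate archive file is ever written to disk.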

Using "backup" and "restore", "cpio" or "savevg" will equally work and probably at roughly the same speed as "tar".

One thing, though: you can't cancel a job partway through and expect it to have finished the task. If you have to cancel it, it was still running, and if it was still running, it had not yet finished what it was doing.

I hope this helps.

bakunin
 

10 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

Difference between two huge files

Hi, as per my requirement, I need to take the difference between two big files (around 6.5 GB) and write the difference to an output file without any line numbers or '<' or '>' in front of each new line. As the DIFF command won't work for big files, I tried to use BDIFF instead. I am getting incorrect... (13 Replies)
Discussion started by: pyaranoid
13 Replies
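
For the question above, one common streaming approach (a sketch only; the file names are placeholders) is to sort both files and compare them with comm, which does not need to hold the data in memory and emits no '<'/'>' markers:

    sort file1 > file1.sorted
    sort file2 > file2.sorted
    # -3 suppresses lines common to both; what remains are the lines unique
    # to file1 (no indent) and the lines unique to file2 (indented by one tab).
    comm -3 file1.sorted file2.sorted > differences.txt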

2. UNIX for Advanced & Expert Users

Huge files manipulation

Hi, I need a fast way to delete duplicate entries from very huge files (>2 GB); these files are plain text. I tried all the usual methods (awk / sort / uniq / sed / grep ...) but it always ended with the same result (memory core dump). I'm using HP-UX large servers. Any advice will... (8 Replies)
Discussion started by: Klashxx
8 Replies
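
If line order does not matter, an external (disk-based) sort avoids the memory problem the poster describes; a minimal sketch, with placeholder file and scratch-directory names:

    # sort does a merge sort on temporary files rather than in memory;
    # -u drops duplicate lines, and -T (where the implementation supports it)
    # points the temporary files at a directory with enough free space.
    sort -u -T /var/tmp/bigsort infile > infile.dedup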

3. UNIX for Dummies Questions & Answers

copy and paste certain many lines of huge file in linux

Dear All, I am working on a Windows OS but remotely access a Linux machine. I wonder how to copy and paste some part of a huge file on the Linux machine. The content of the file looks as follows: ... dump annealling all custom 10 anneal_*.dat id type x y z q timestep 0.02 run 200000 Memory... (2 Replies)
Discussion started by: ariesto
2 Replies
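
To pull a specific range of lines out of a large file without opening it in an editor, something like this is a common sketch (the line numbers and file names are placeholders):

    # Print only lines 1000 through 2000, then stop reading the file.
    sed -n -e '1000,2000p' -e '2000q' bigfile.dat > extract.dat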

4. Shell Programming and Scripting

Compare 2 folders to find several missing files among huge amounts of files.

Hi, all: I've got two folders, say, "folder1" and "folder2". Under each there are thousands of files. It's quite obvious that there are some files missing in each; I just would like to find them. I believe this can be done with the "diff" command. However, if I change the above question a... (1 Reply)
Discussion started by: jiapei100
1 Reply
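
A minimal sketch for this kind of comparison (the folder names are taken from the post above):

    # Recursively compare the two trees without printing content diffs;
    # lines starting with "Only in" name files present in just one folder.
    diff -rq folder1 folder2 | grep '^Only in'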

5. UNIX for Dummies Questions & Answers

Copy huge data into vi editor

Hi All, HP-UX dev4 B.11.11 U 9000/800 3251073457 I need to copy huge data from a Windows text file into the vi editor. When I tried to copy the huge data, the format of the data was not preserved and it appeared scattered throughout vi, something like what is given below. Please let me know how I can correct this? ... (18 Replies)
Discussion started by: alok.behria
18 Replies
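
A frequent cause of this (an assumption here, since the thread is truncated) is vi re-indenting and re-wrapping the pasted lines; turning off autoindent and the wrap margin before pasting often helps:

    :set noautoindent wrapmargin=0

Transferring the file instead of pasting it (for example with ftp or scp) and opening it directly in vi avoids the problem entirely.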

6. Shell Programming and Scripting

Copy files with pattern from ext4 to cifs file system

Hi, I have a shell script to copy a pattern of files from Linux to a Windows filesystem. When I execute the below command cp -av TOUT_05-02-13* Windows/Folder `TOUT_05-02-13-19:02:37.tar.gz' -> `Windows/Folder/SYSOUT_05-02-13-19:02:37.tar.gz' cp: cannot create regular file... (5 Replies)
Discussion started by: rakeshkumar
5 Replies
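
The ':' characters in those file names are not allowed on Windows (CIFS) file systems, which is one likely reason for the "cannot create regular file" error (an assumption, as the message above is truncated). A sketch that renames the colons while copying, reusing the pattern and target directory from the post:

    for f in TOUT_05-02-13*; do
        cp -av "$f" "Windows/Folder/$(echo "$f" | tr ':' '-')"
    done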

7. Shell Programming and Scripting

Aggregation of Huge files

Hi Friends!! I am facing a hash-total issue while processing a set of very large files. Command used: tail -n +2 <File_Name> |nawk -F"|" -v '%.2f' qq='"' '{gsub(qq,"");sa+=($156<0)?-$156:$156}END{print sa}' OFMT='%.5f' The file is pipe delimited and column 156 is used for hash totalling.... (14 Replies)
Discussion started by: Ravichander
14 Replies
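
The command as archived looks garbled around the -v options; a cleaned-up reading of what it appears to do (strip quotes from pipe-delimited column 156 and sum the absolute values) might look like the following, with a placeholder file name and no claim that it matches the original exactly:

    tail -n +2 datafile.txt | nawk -F'|' '
        { gsub(/"/, "", $156); s += ($156 < 0) ? -$156 : $156 }
        END { printf "%.5f\n", s }
    '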

8. Solaris

The Fastest for copy huge data

Dear Experts, I would like to know the best method to copy around 3 million files (spread across a hundred folders, each file around 1 KB) between 2 servers. I have already tried rsync and tar, but both take too long. Please advise. Thanks Edy (11 Replies)
Discussion started by: edydsuranta
11 Replies
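
One commonly suggested variant for many tiny files is to stream a single tar archive through ssh, so the per-file overhead is paid only at the two ends; a sketch with a placeholder host and paths:

    cd /source/dir
    tar -cf - . | ssh user@remotehost 'cd /target/dir && tar -xf -'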

9. Solaris

Backup for NAS huge File system

Gents, I have a NAS file system mounted in Solaris as \Sysapp with a size of 8 TB. The problem is that once the backup has started, it impacts the performance of the OS. Do you have any idea how we can back up this FS quickly without impacting the OS? Backup type: NetBackup (3 Replies)
Discussion started by: AbuAliiiiiiiiii
3 Replies

10. Solaris

Split huge File System

Gents, I have a huge NAS file system, /sys, with a size of 10 TB, and I want to split it into separate 1 TB file systems to be mounted on the server. How can I do that without changing anything in the source? I appreciate your support. (1 Reply)
Discussion started by: AbuAliiiiiiiiii
1 Reply