Full Discussion: Copy huge file system
Operating Systems: AIX. Post 302511336 by bakunin, Wednesday, 6 April 2011, 02:32 PM.
Quote:
Originally Posted by Mr.AIX
I have tried to use your suggested command, but it hung after some time.
You haven't said so until now. If it hung, what exactly was the error? The method I described (tar) has worked for me countless times, including on this amount of data and more.

Using "backup" and "restore", "cpio" or "savevg" will equally work and probably at roughly the same speed as "tar".

One thing, though: you can't cancel a job partway through and expect it to have finished the task. If you had to cancel it, it was still running, and if it was still running, it had not yet finished what it was doing.
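
Also, if the "hang" was in fact your session dying (a telnet/ssh timeout, for instance), run the copy detached so it survives a disconnect. A sketch, with placeholder paths and log file:

    cd /source_fs
    nohup sh -c 'tar -cf - . | ( cd /target_fs && tar -xpf - )' > /tmp/copy.log 2>&1 &

You can then watch /tmp/copy.log or the process list (ps -ef | grep tar) instead of waiting in the foreground.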

I hope this helps.

bakunin
 

BACKUP_JOBS(8)						       AFS Command Reference						    BACKUP_JOBS(8)

NAME
       backup_jobs - Lists pending and running operations in interactive mode

SYNOPSIS
       jobs [-help]

       j [-h]

DESCRIPTION
       The backup jobs command lists the job ID number and status of each backup operation running or pending in the current interactive session.

       This command can be issued in interactive mode only. If the issuer of the backup interactive command included the -localauth flag, the
       -cell argument, or both, those settings apply to this command also.

       To terminate operations that appear in the output, issue the backup kill command and identify the operation to cancel with the job ID
       number from this command's output.

       To check the status of a Tape Coordinator, rather than of a certain operation, use the backup status command.
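
       For example, a hypothetical session that lists the queued jobs and then cancels job 3 might look like this (the job number and operation
       are illustrative):

	  backup> jobs
	  Job 3: Scantape
	  backup> kill 3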

OPTIONS
       -help
	   Prints the online help for this command. All other valid options are ignored.

OUTPUT
       The output always includes the expiration date and time of the tokens that the backup command interpreter is using during the current
       interactive session, in the following format:

	  <date>   <time>: TOKEN EXPIRATION

       If the execution date and time specified for a scheduled dump operation is later than <date time>, then its individual line (as described
       in the following paragraphs) appears below this line to indicate that the current tokens will not be available to it.

       If the issuer of the backup command included the -localauth flag when entering interactive mode, the line instead reads as follows:

	  :  TOKEN NEVER EXPIRES

       The entry for a scheduled dump operation has the following format:

	  Job <job_ID>:  <timestamp>:  dump  <volume_set>  <dump_level>

       where

       <job_ID>
	   Is a job identification number assigned by the Backup System.

       <timestamp>
	   Indicates the date and time the dump operation is to begin, in the format month/date/year hours:minutes (in 24-hour format).

       <volume_set>
	   Indicates the volume set to dump.

       <dump_level>
	   Indicates the dump level at which to perform the dump operation.
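
       For example, an entry for a dump of the volume set user at the dump level /sunday, scheduled to begin at 11:00 pm on 25 April 1999, might
       read as follows (the values are illustrative):

	  Job 4:  04/25/1999 23:00:  dump  user  /sunday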

       The line for a pending or running operation of any other type has the following format:

	  Job <job_ID>:  <operation>  <status>

       where

       <job_ID>
	   Is a job identification number assigned by the Backup System.

       <operation>
	   Identifies the operation the Tape Coordinator is performing, which is initiated by the indicated command:

	   Dump (dump name)
	       Initiated by the backup dump command. The dump name has the following format:

		   <volume_set_name>.<dump_level_name>

	   Restore
	       Initiated by the backup diskrestore, backup volrestore, or backup volsetrestore command.

	   Labeltape (tape_label)
	       Initiated by the backup labeltape command. The tape_label is the name specified by the backup labeltape command's -name or -pname
	       argument.

	   Scantape
	       Initiated by the backup scantape command.

	   SaveDb
	       Initiated by the backup savedb command.

	   RestoreDb
	       Initiated by the backup restoredb command.

       <status>
	   Indicates the job's current status in one of the following messages. If no message appears, the job is either still pending or has
	   finished.

	   number Kbytes, volume volume_name
	       For a running dump operation, indicates the number of kilobytes copied to tape or a backup data file so far, and the volume
	       currently being dumped.

	   number Kbytes, restore.volume
	       For a running restore operation, indicates the number of kilobytes copied into AFS from a tape or a backup data file so far.

	   [abort requested]
	       The backup kill command was issued, but the termination signal has yet to reach the Tape Coordinator.

	   [abort sent]
	       The operation is canceled by the backup kill command.  Once the Backup System removes an operation from the queue or stops it from
	       running, it no longer appears at all in the output from the command.

	   [butc contact lost]
	       The backup command interpreter cannot reach the Tape Coordinator. The message can mean either that the Tape Coordinator handling
	       the operation was terminated or failed while the operation was running, or that the connection to the Tape Coordinator timed out.

	   [done]
	       The Tape Coordinator has finished the operation.

	   [drive wait]
	       The operation is waiting for the specified tape drive to become free.

	   [operator wait]
	       The Tape Coordinator is waiting for the backup operator to insert a tape in the drive.

EXAMPLES
       The following example shows that two restore operations and one dump operation are running (presumably on different Tape Coordinators) and
       that the backup command interpreter's tokens expire on 22 April 1999 at 10:45 am:

	  backup> jobs
	  Job 1: Restore, 1306 Kbytes, restore.volume
	  Job 2: Dump (user.sunday1), 34 Kbytes, volume user.pat.backup
	  Job 3: Restore, 2498 Kbytes, restore.volume
		 04/22/1999 10:45: TOKEN EXPIRATION

PRIVILEGE REQUIRED
       None. However, queuing any operation requires privilege, and it is possible to issue this command only within the interactive session in
       which the jobs are queued.

SEE ALSO
       backup(8), backup_interactive(8), backup_kill(8), backup_quit(8)

COPYRIGHT
       IBM Corporation 2000. <http://www.ibm.com/> All Rights Reserved.

       This documentation is covered by the IBM Public License Version 1.0.  It was converted from HTML to POD by software written by Chas
       Williams and Russ Allbery, based on work by Alf Wachsmann and Elizabeth Cassell.

OpenAFS 							    2012-03-26							    BACKUP_JOBS(8)