Start copying large file while it's still being restored from tape


 
# 1  
Old 07-12-2011

Hello, I need to copy a 700 GB tape-image file over a network, and I want to start the copy before the image has finished being restored from the tape. The tape restore speed is about 78 MB/s and the network transfer speed about 45 MB/s. I don't want to use a pipe, since that would throttle the tape drive to 45 MB/s (and I want it to run at full speed). Ideally I would like some kind of temporary file used as a buffer, which I could write to and read from simultaneously.

Restoring the image from tape takes about 2 hrs 40 min, and copying the file over the network about 4 hrs, so running both operations one after the other takes 6 hrs 40 min. I'm looking at ways of reducing the total time by starting the copy process while the tape is still restoring the file.

Note: I don't want to restore the tape image directly over the network, for various reasons including permissions issues.

I have seen the mbuffer command, which seems to offer what I need on a Linux platform, but I'm using the shell on Mac OS X, which doesn't provide it. Any suggestions on how I could accomplish this would be much appreciated.

Kind Regards
Swami
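A rough back-of-the-envelope of what the overlap would buy, assuming the figures above hold steady:

Code:
# restore from tape:  ~2 hrs 40 min
# copy over network:  ~4 hrs
# sequential total:   2:40 + 4:00  =  ~6 hrs 40 min
# overlapped total:   bounded by the slower leg (the network copy), so ~4 hrs
# potential saving:   ~2 hrs 40 min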
# 2  
Old 07-12-2011
Assuming the file is text, try starting up tail -f as soon as the file starts to appear on the disk, i.e. almost right after starting the restore operation. Set up ssh keys on the remote node first.

Code:
tail -f /path/to/file_being_restored | ssh me@remote "cat > /path/to/new/file/newcopy_of_file"

# 3  
Old 07-13-2011
Thanks Jim. Actually, it is a binary file. I don't need ssh, since it's just transferring the data to a shared directory, which I can access just like a normal directory.

Here is my existing command:

Code:
taperead -b 1048576 > tape.img

Should I do something like this:

Code:
taperead -b 1048576 > tape.img &
tail -f tape.img > /volumes/shared/tape.img

What if the tape.img file doesn't yet exist when the tail command is executed? The tape drive takes a little time to configure itself before it starts restoring the data.
# 4  
Old 07-13-2011
I'd put tail in the background instead of your tape restore. You can just create an empty file to make sure tail doesn't throw an error.
Code:
: > localfile                           # Truncate or create zero-byte file
tail -f localfile > /path/to/nfsfile &  # follow the growing file in the background
restore_from_tape > localfile


The trouble comes from how to tell when tail is finished. It's binary-safe, I think, but it only writes entire lines -- inconvenient when your file may not actually end in a newline. You may have to append a newline onto your local file ( echo >> localfile ) to kick the last 'line' out of it, then wait for the file sizes to be equal, kill tail, and truncate both files one byte shorter.
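Roughly like this, as a sketch (restore_from_tape and the paths are the same placeholders as above; the truncation uses perl, which OS X ships with):

Code:
: > localfile                              # empty file so tail has something to open
tail -f localfile > /path/to/nfsfile &
tailpid=$!
restore_from_tape > localfile
echo >> localfile                          # extra newline flushes tail's last partial 'line'
while [ $(wc -c < /path/to/nfsfile) -lt $(wc -c < localfile) ]
do
    sleep 5                                # wait for the copy to catch up
done
kill "$tailpid"
size=$(( $(wc -c < localfile) - 1 ))       # drop the newline we appended
perl -e 'truncate $ARGV[0], $ARGV[1] or die $!' localfile         "$size"
perl -e 'truncate $ARGV[0], $ARGV[1] or die $!' /path/to/nfsfile  "$size"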
# 5  
Old 07-14-2011
Thanks for the reply - sounds like quite a hack to get it to work, though. Does anyone know if it's possible to compile mbuffer for Mac OS X? That would be a much nicer solution...
# 6  
Old 07-14-2011
Don't have a fully modern Mac to check that on right now, but I suspect not, since it uses clock_gettime, a POSIX feature which the older version of OS X I have available definitely doesn't have. It was developed on Linux and Solaris, so it may need GNU features too.
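
If someone with a newer machine wants to check, a quick link test along these lines should tell you whether clock_gettime is available (the /tmp paths are just for illustration):

Code:
cat > /tmp/cg_test.c <<'EOF'
#include <time.h>
int main(void)
{
    struct timespec ts;
    return clock_gettime(CLOCK_REALTIME, &ts);
}
EOF
cc /tmp/cg_test.c -o /tmp/cg_test && echo "clock_gettime links OK"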

I also checked in the fink repository, don't see it.

I suppose if the network is guaranteed to be slower than the tape, you could just cat it; it'll never catch up until the tape's done. cat shouldn't care about the file size changing, it goes until EOF, whatever that may be. Give the tape a head start to build up some steam.

Code:
restore-process > localfile &
sleep 10                        # head start so cat doesn't hit EOF immediately
cat < localfile > /path/to/remotefile

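And in case cat does hit EOF early (say the network briefly outpaces the tape), a follow-up step can append whatever bytes arrived after cat exited - a sketch, assuming a tail that takes a byte offset with -c +N (GNU and BSD both do):

Code:
restore-process > localfile &
restore_pid=$!
sleep 10
cat localfile > /path/to/remotefile
wait "$restore_pid"                           # make sure the restore has finished
copied=$(( $(wc -c < /path/to/remotefile) ))
total=$(( $(wc -c < localfile) ))
if [ "$copied" -lt "$total" ]
then
    # resume from the first byte cat missed (tail's byte offsets are 1-based)
    tail -c +$(( copied + 1 )) localfile >> /path/to/remotefile
fi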

# 7  
Old 07-15-2011
Thanks - yes I was also thinking of this. Much simpler!