Useful for "eye-balling" the status of a process or whatever. But there is no way to trigger a command automatically upon a condition, and do you really want to sit there and watch for files to come in? If you do, then it is fine.
Quote:
also in your while statement, when would the condition not be true?
Never. You'd have to kill the process. It's equivalent to removing the while loop and running the script as a cron job every 5 minutes.
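For reference, the cron equivalent would be a single crontab entry like the following (the script path is illustrative, not from the thread):

```
# Run the check-and-send script every 5 minutes instead of
# wrapping it in a while/sleep loop.
*/5 * * * * /path/to/check-and-send.sh
```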
Quote:
What is your if statement actually doing?
It's running the pipeline of ls and grep. The grep looks for any line at all. If ls finds no files, it outputs no lines, so the grep search fails; grep then exits with status 1, which in Bash logic means false, and the if condition fails. If grep finds at least one line, there are files, so the if condition succeeds and the THEN portion is executed.
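A minimal sketch of that pattern, with the test factored into a function (the directory path and the 5-minute interval are placeholders, not taken from the thread):

```shell
#!/bin/bash
# have_dat_files: succeeds (exit 0) only when at least one *.dat file
# exists in the given directory. grep -q . matches any non-empty line,
# so it fails exactly when ls produced no output.
have_dat_files() {
    ls "$1"/*.dat 2>/dev/null | grep -q .
}

# The polling loop itself (shown as a function, not started here):
poll_mailbox() {
    local dir=$1
    while true; do
        if have_dat_files "$dir"; then
            echo "files present in $dir, starting transfer"
            # sftp/processing commands would go here
        fi
        sleep 300    # wait 5 minutes between checks
    done
}
```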
Quote:
Originally Posted by nulinux
We do remove them after transfer. another note is that although the names will change as will the byte size, they will always end in *.dat.
Are there other files in the directory besides these?
Quote:
Originally Posted by nulinux
One of other problems is as you mention what if the files are detected before they are finished transferring to the host, before they are sent back out?
There are at least four solutions to this; in the worst case, you can use what ddreggors suggested. Here are three others to try:
Move-after-write. Modify (or configure) the process that places the incoming file so that it creates the file under a distinctive name (a different extension, a leading dot, a different path, etc.). After closing the file, it moves/renames it to the name your script expects.
Keep a control-log file. Modify (or configure) the process that places the incoming file so that, after closing each file, it appends the file's name to a control file kept inside the mailbox folder (say, "something.ctl"). Your script renames this control file and then reads it for the list of file names to sftp.
Use flock. This works only IF the process that places the incoming files uses flock(2) on the files it creates. Have your script run flock (the shell command) on each file before it is moved.
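A hedged sketch of the move-after-write handoff (the ".part" suffix and the helper names here are illustrative choices, not from the thread):

```shell
#!/bin/bash
# Writer side: create the file under a temporary dot-name, then rename
# it into place. rename(2) within one filesystem is atomic, so the
# reader never sees a half-written .dat file.
deliver() {
    local mailbox=$1 name=$2
    local tmp="$mailbox/.$name.part"
    cat > "$tmp"                  # write all the data first
    mv "$tmp" "$mailbox/$name"    # atomic publish under the final name
}

# Reader side: only *.dat names are ever matched, so in-progress
# ".part" files are invisible to the transfer script.
list_ready() {
    ls "$1"/*.dat 2>/dev/null
}
```

The same idea works for the control-log variant: the writer appends to "something.ctl" only after the mv, so every name in the control file is complete.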
ddreggors' solution is a bit resource-intensive and VERY Linux-specific, but it will work if the uploading process does not close and reopen the file between writes AND if it removes the file after a failure (that is, if the upload is interrupted and the file is not completely transferred). It will fail if any other benign process is reading the incoming files. Here is a re-write of that solution which is more efficient (i.e., doesn't use additional forks):
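The re-written code did not survive in this copy of the thread. As a rough, hedged reconstruction of the idea only (skip any file that some process still has open), one might write something like:

```shell
#!/bin/bash
# Hedged reconstruction, not the original re-write: process only those
# *.dat files that no process currently holds open. fuser -s exits 0
# while some process has the file open (this is the Linux-specific part).
transfer_ready_files() {
    local dir=$1 f
    for f in "$dir"/*.dat; do
        [ -e "$f" ] || continue
        if fuser -s "$f" 2>/dev/null; then
            echo "skipping $f: still being written" >&2
        else
            echo "would sftp $f"
            # sftp commands would go here
        fi
    done
}
```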
Quote:
it will work if the uploading process does not close/reopen the file in between writes AND if it removes the file after failure (if the upload process is interrupted and the file is not completely transferred).
Nice rewrite and great points, but I thought I might mention that in the quote above you make it seem like it will NOT work if "the upload process is interrupted and the file is not completely transferred". While that is not what one would want, the process will still work; you will just end up with a blank or partially written file. What I mean to say is that the loop will still run and do what it is meant to do, but it will perform its actions on a file that is not intact. This would happen with ANY process as far as I know: if the transfer fails, you have a bad file and no way to tell without looking inside manually (vi, nano, etc.) or diffing against the original (which you might not have).
Now on the other hand, it would be a very nice feature if the upload/transfer process failed and the file was removed (as you already mentioned).
As to the Linux-specific code... my bad. I am from a Linux world and should remember that *nix usually means more than Linux.
I also made the assumption that this user wants a quick, easy-to-create solution. I base this on very little knowledge, granted, but there seems to be an air of "quick dirty hack" (not to be taken as a bad thing, just a real-world thing) in the initial question.
You're right. I shouldn't have been so critical of your solution. The user clearly stated Linux, and flock (as a shell command) is distribution-specific.
Thanks to both of you guys; you brought up some really valid points. After reading them, I realized that a "quick dirty hack" might not be the best solution, as it is prone to numerous errors, mainly the sftp portion kicking off before the files have finished transferring in and are ready to be sent back out. So I am probably going to add as much logic as possible.
There will always be 4 files ending in *.dat, but the file names will change. They are the only files there, and after transferring, the directory can be totally cleaned. flock sounds like a good idea, but I'm not sure the host that initially sends the files supports flock. I think I can use lsof, and I was also looking into the incron and fileschanged utilities. Have you guys ever used these within a script?
Quote:
- The file or directory that you want to monitor must exist at program start.
I am using RHEL 4, so incron is not available. I have installed fileschanged, and it will monitor the creation of files (I have tested it), but then it comes back to the same question as before: what if the next sftp step starts to process before the files are finished?