Quote:
Originally Posted by
greenworld123
the process prints some garbage and then with proper text. it is not the process writing garbage i think.
I think you have already gotten very good advice about how to solve it, but it might help you to understand what is going on:
When a process "opens" a file it calls some OS function (on Unix the open() system call, usually via the C library's fopen()) and part of this "opening" is that the OS sets up an environment through which the process can access the file. Part of this is finding out how big (= how many bytes) the file is. The process also gets a "place" where it "stands" right now. This "place" can be moved forward, backwards, etc., but only within the limits of the length of the file.
Say, a program opens a file and is told that the file is 10 bytes long. Right now it "stands" on byte 1 and it can read it, which moves the place it stands on forward to byte 2, and so on. It can also do things like "go forward 3 bytes and then read (or write) 2 bytes from there". It can also add to the file, which increases the size so that it can now position its place on byte 11. But if it tries to read from beyond the current length, it gets an end-of-file indication instead of data, because the OS "knows" that the file is only as long as it is.
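A minimal sketch of this in Python, using the os-level calls that sit underneath fopen() and friends (note that the OS actually counts offsets from 0, not 1 - the paragraph above counts from 1 for readability):

```python
import os
import tempfile

# Create a 10-byte file and explore the "current place" (the file offset).
fd, path = tempfile.mkstemp()
os.write(fd, b"0123456789")      # file is now 10 bytes long, offset is 10

os.lseek(fd, 0, os.SEEK_SET)     # stand on the first byte (offset 0)
first = os.read(fd, 1)           # read it; the offset moves forward by 1

os.lseek(fd, 3, os.SEEK_CUR)     # "go forward 3 bytes..."
two = os.read(fd, 2)             # "...and read 2 bytes from there"

os.lseek(fd, 0, os.SEEK_END)     # go to the end of the file
appended = os.write(fd, b"X")    # appending grows the file to 11 bytes

os.lseek(fd, 50, os.SEEK_SET)    # seeking past the end is allowed, but...
past = os.read(fd, 5)            # ...reading there yields nothing (EOF)

os.close(fd)
os.unlink(path)
```

The offset bookkeeping lives in the OS, not in the file itself - which is exactly what makes the truncation problem below possible.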
All this works well as long as one process accesses a file. But in your case a process opened a file and wrote lots of bytes into it, making its length some big number in the "internal bookkeeping" of the OS. Now a second process (your shell command) truncated the file, but the first program still "stands" at the position it had reached before. When it writes its next log entry there, the OS extends the file again and fills the gap up to that position with NUL bytes (a so-called "hole"), because those bytes are not part of the file's real content any more. Anything that later prints the file shows this gap as garbage before the proper text - and the writing program never notices that anything happened.
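The effect can be reproduced in a few lines (a sketch: one file descriptor plays the log writer, and the truncation stands in for the shell command):

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"A" * 100)         # the "log writer" has written 100 bytes;
                                 # its remembered offset is now 100

os.truncate(path, 0)             # a second process truncates the file,
                                 # like  > logfile  in the shell

os.write(fd, b"new entry\n")     # the writer still "stands" at byte 100
os.close(fd)                     # and writes there, re-extending the file

data = open(path, "rb").read()   # bytes 0..99 are now a hole of NUL bytes -
os.unlink(path)                  # the "garbage" before the proper text
```

On a terminal those 100 NUL bytes (or whatever the gap contains) render as the garbage the original poster saw before the readable log lines.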
Log-writing processes should therefore NOT hold a log file open at a fixed position: either open and close the log for every write action separately, or open it in append mode (O_APPEND), which makes the OS reposition to the real end of the file before every single write.
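A sketch of the safe pattern (the log() helper and the path are made up for illustration; reopening in mode "a" - which uses O_APPEND underneath - means a truncation between writes does no harm):

```python
import os
import tempfile

path = tempfile.mkstemp()[1]     # stand-in for the real log file

def log(message):
    # Open, write, close for every entry; "a" (append mode) makes the
    # OS seek to the current end of file before the write happens.
    with open(path, "a") as f:
        f.write(message + "\n")

log("first entry")
os.truncate(path, 0)             # someone truncates the log in between
log("second entry")

content = open(path).read()      # no NUL-byte garbage this time
os.unlink(path)
```

Because the file is reopened per entry, the writer picks up the file's true length each time instead of trusting a stale offset.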
I hope this helps.
bakunin