I can see both sides of this. reborg has a valid point: if you cap a log file at n lines, you run the risk that line n+1 will be the critical information. On the other hand, several times a year I get dragged out of bed because a server crashed with /var full, due to something like a 5 GB /var/tmp/some_dumb_program.log. It's hard to carefully peruse a 5 GB text file, especially at 3 in the morning, and my sense is that capping the file at, say, half a gig would not have been a tragedy.
As for whether it can be done: yes, with some caveats. One idea is to create a separate filesystem, say /scratch. If /scratch is 1 GB and you put the file there, it can't exceed 1 GB. You may not have the resources to repartition for that, though one cheap workaround is sketched below.
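On Linux, for example, you can fake a small filesystem with a loopback file instead of a real partition. This is only a rough sketch; the size, paths, and filesystem type are made up, and on Darwin you would build a disk image with hdiutil create/attach instead:

    # create a fully allocated 1 GB backing file in a place that has room
    dd if=/dev/zero of=/var/scratch.img bs=1M count=1024
    # put a filesystem on it (mkfs will warn it isn't a block device; say yes)
    mkfs -t ext4 /var/scratch.img
    # mount it; anything written under /scratch can never total more than 1 GB
    mkdir -p /scratch
    mount -o loop /var/scratch.img /scratch

If the log fills /scratch, the program writing it gets write errors, but /var itself stays healthy.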
Another option is ulimit, but there is a danger here and a possible obstacle. Your command uses &> file, which is syntax I have never seen; it is close to >& file, which would be csh syntax. My csh does not have a ulimit command, but both ksh and bash do, and they need a ulimit facility in the kernel to back them up. I don't know whether Darwin has one. If it does, you will need to use a shell with ulimit support. Switching shells is attractive for other reasons anyway:
Csh Programming Considered Harmful

That's the obstacle: can you use ulimit?
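A quick way to find out, assuming bash or ksh is installed (the output shown is what I would expect, not something I have verified on Darwin):

    # If this prints a number or "unlimited", the shell and kernel support
    # file-size limits. The -f limit is measured in 512-byte blocks.
    $ bash -c 'ulimit -f'
    unlimited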
As for the danger, you don't want to simply toss a "ulimit -f 100" or whatever at the top of your script: rsync may legitimately need to write a file larger than that as it syncs. I'm not sure whether ulimit affects sockets, but my reading of "man write" suggests that it might; if so, even when the local rsync is only reading files and sending them to another system, its socket could be capped. So you want to limit only the log file. In ksh, this should be safe:
    rsync options 2>&1 | (ulimit -f 100 ; cat > rsync.log)
The ulimit happens in a subshell, which then runs cat; only cat is limited in the size of the file it can create, so rsync itself is unaffected.
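If you want to see the cap in action before trusting it with rsync, a throwaway test along these lines should show it. The exact message and whether the writer dies on SIGXFSZ or gets EFBIG varies by system, and the filename here is made up:

    # 100 blocks of 512 bytes = 51200 bytes maximum
    $ ( ulimit -f 100 ; yes > capped.log )
    File size limit exceeded
    $ ls -l capped.log
    -rw-r--r--  1 me  me  51200 ... capped.log

Keep in mind that when cat hits the limit it is killed and everything after that point is lost, which is exactly the trade-off reborg was warning about.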