>> = me
> = reborg
(Only one level of quoting seems to be supported on this forum, so I am reverting to old usenet style attribution.)
>>Personally, I think that it should always be an error to use an uninitialized variable.
>You are entitled to thinks so.
>I disagree but that is a question of opinion, by all means do that.
>I just said it was unnecessary in this case.
I feel more strongly that -n should always be on, or at least be the default behavior, than I do about -u. I can live with -u not being on if every shell implementation has a standard behavior that it must adhere to (e.g. that an uninitialized variable always has an empty string value, will never cause a null pointer exception, etc). The problems come when different implementations subtly differ, which seems to be a major problem in the unix world (in contrast to, say, Java, which is properly stringent about implementation consistency). In cases like this, my experience is that it is better to be safe than sorry.
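For concreteness, the two behaviors at issue with -u can be demonstrated with a minimal sketch (assuming a POSIX shell):

```shell
#!/bin/sh
# Without set -u, an unset variable silently expands to empty.
unset FOO
echo "without -u: [${FOO}]"                       # prints: without -u: []

# With set -u, the same expansion is a fatal error.
( set -u; echo "${FOO}" ) 2>/dev/null || echo "with -u: expansion error"
```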
>>If I had written sh, it would not even be an option.
>Sorry I didn't follow you here.
I would not have set -n as an option to sh; set -n would be the default behavior. Possibly I -might- make set +n available if someone could convince me that there was a real need (e.g. a performance increase) in some cases. As mentioned above, I would likewise have set -u be the default behavior unless the uninitialized behavior is consistent and safe enough that this is not an issue. In which case I recant.
>>First, why would I ever want to use the full path to a command? That makes it totally unportable!...
>The purpose of PATH is to tell the shell in which directories and in which order to look for commands.
>A properly written script would not normally depend on login environment to know that.
Of course your script has to rely on the login environment to know where commands are--what else can it do, guess?
In the examples that you provide, you assumed that find (or a suitable version of find) is always found in /usr/bin/, but what if it isn't? Then your script is broken and the user will have to debug the script and fix it by hand.
If you instead rely on the PATH, it is a) far likelier that the user will have gotten things like having find on his path straightened out long before he runs this script, and b) even if find is not there, he is much more likely to know how to do a generic unix task like fixing his path than to debug and edit a custom script.
>The script will currently break if the user has a different command
>(or different non-compatible version of a command) with the same name
>as one of the ones you use in their PATH before the one you want
Assuming that /usr/bin/find exists and is the suitable version likewise is open to failure. No win there.
>or has a command aliased to behave in a way you don't expect which would be worse.
Agreed: aliases are one (the only?) failure mode that hard coding the path to find inside the script will overcome.
Unless the user has done something amazing, like aliasing find to rm or something, the worst that should happen in this script if they have aliased find is that it does not work correctly.
Is there a good way to detect if a command is actually being aliased? And to unalias it, at least inside the script, and use the raw command?
Actually, now that I think more on this, if you really think that aliases are a dangerous failure mode, then you have to go to extreme lengths to cope with it. For instance, with the cs.sh script that I have presented so far, you not only need to worry about the find command being aliased, but you need to worry about every single other command in that script which could also be aliased: set, echo, while, read... It's basically impossible! Larry Wall was right about it being easier to write a new shell than to port (or guarantee the correct working of) a shell script.
>I'm guessing you mean non-recursive, since it recurses by default:
Right: I mistyped.
>find <dir> \( ! -name <dir> -prune \)
I think that the complete command that I want (for dir = "./") is more like
That -nowarn option is critical, else you always get this nasty warning message (at least with GNU find):
find: warning: Unix filenames usually don't contain slashes (though pathnames do). That means that '-name `./'' will probably evaluate to false all the time on this system. You might find the '-wholename' test more useful, or perhaps '-samefile'. Alternatively, if you are using GNU grep, you could use 'find ... -print0 | grep -FzZ `./''.
But it appears that -nowarn is not a POSIX option, so we are back to square one...
>It has little or nothing to do with laziness.
>It is more effort for the programmer to write the message
The "more effort for the programmer" is the laziness that I referred to.
And it's actually not all that much extra effort, assuming you are doing the right thing and argument-checking your code, because at that point you already have all the conditional logic in place; you simply need to notify the user when things go wrong. But if you have done no arg checking and just blindly proceed, then yes, it is more work.
>...I am not saying don't print the message that says what is wrong,
>just display the generic form as a reminder too or tell the user how to do so, this kind of thing:
I agree: decent behavior is to print the specific error as well as the proper usage. The version of the script file in my next post below does this. You were also correct in saying that it is better to actually print the proper usage somehow (I chose to write a function to do this) than to merely store it as program comments.
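A sketch of the kind of usage function I mean (the names and wording here are illustrative, not the actual cs.sh code):

```shell
#!/bin/sh
# Hypothetical usage() helper: report the specific problem (if any),
# then the generic synopsis, then exit with a nonzero status.
usage() {
    [ -n "${1:-}" ] && echo "Error: $1" >&2
    echo "Usage: cs.sh [directory]" >&2
    exit 2
}

# Example argument check (commented out so this sketch is inert):
# [ $# -le 1 ] || usage "too many arguments"
```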
>>Do people routinely write korn shell, c shell, etc script files
>>and use the same .sh file extension for all of them,
>>instead of using something sensible like .ksh, .csh, etc?
>Yes.
>>I do not even know if other shells support the -n syntax check option,
>>so I would just as soon ignore all script files except those purporting to be bourne shell scripts.
>bourne derived shells such as ksh and bash do.
The version of the script file presented in my next post below retains searching for .sh files and assumes that whatever shell is actually executing it (and hence will be doing the syntax checking) is also a shell that is compatible with all the other .sh files.
For instance, on both my cygwin and linux boxes, the shell is actually bash, and its syntax checker seems to have no problem with bash (but not bourne) constructs like [[ in .sh files.
At this point, I think that this is the most reasonable approach, even if it is not a cure all.
Writing a POSIX compatible find seems to be the hangup; my newbie scripting skills are not up to it.
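For reference, the syntax-only check under discussion looks like this (assuming bash is available; the throwaway script uses the bash-only [[ ]] construct mentioned above):

```shell
#!/bin/sh
# Create a throwaway script using the bash-only [[ ]] construct,
# then syntax-check it without executing it.
printf '%s\n' '[[ -f /etc/passwd ]] && echo found' > /tmp/t$$.sh
bash -n /tmp/t$$.sh && echo "bash -n: syntax OK"
rm -f /tmp/t$$.sh
```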
What is that supposed to mean? You only believe in tools supporting mechanisms and not rigid policies?
Assuming that that is the case, I half agree and half disagree with you.
The languages that best trade off among productivity, elegance, robustness, maintainability, and performance always make compromises between providing freedom of expression and implementing policies.
XML's runaway success, compared to SGML's languishing in obscurity (except for one child, HTML), is precisely because the rigidity of its policies (simple, but must-be-correct, syntax) was wisely chosen. It gave enough freedom of expression to do anything that you want, but limits that expression to a form that is unambiguous and easily parsed.
Java has won out over C++ for many of the same reasons.
Back to shell scripts: I think that robust and safe behavior should be the default, and higher performance options that conflict with this should not. Especially with today's computers. Maybe the choices made 30 years ago with the slow machines of that time were necessary, however.
Quote:
So what was that you said about portability?
I would like this script to be portable, and I think that I have been rationally considering all proposals (e.g. from reborg) to fix it, even if I do not agree with everything he has said so far. I am willing to be educated!
Quote:
Normally they
(a) print usage
(b) say what option was offending
(c) do not try and read your mind.
All of which I agree with. I was talking about experiences with unix that I have had where (b) was left out, and I was left staring for a long time at my input trying to determine why it did not match (a). That's what I have a problem with.
ok, don't think I'm beating up on you here, I am not. I'm only trying to pass on what experience has taught me, and wouldn't bother if I thought I was wasting my time, and certainly wouldn't do it just to be confrontational.
Never make assumptions of user competence or how a user likes to customize a shell, I have seen that flawed logic break more scripts than I care to remember. Anyway experience will teach you this.
In Solaris, for example, you have BSD compatibility versions of some binaries in /usr/ucb, and I know a number of people who like the Berkeley versions of various commands and put these in the PATH before the standard ones. The output of some of these is different from the standard ones, for example 'ps'. Likewise, in /usr/sfw/bin you have a number of GNU versions of commands; while these are less problematic, since they are generally a superset of the POSIX versions and are usually prefixed by 'g', e.g. 'gawk' and 'gtar', they are nonetheless not as predictable as would be desired.
You can never discount the possibility that someone has an alias or function in their environment with the same name as a standard command which they either don't know about or don't use; I have seen that many times.
You can unalias each command you use before using it and redirect the error output to /dev/null, but be aware of shell functions in login shells too, and also that a true Bourne shell does not support aliases, so set -e will kill the script if you do this.
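A sketch of that defensive pattern (assuming a POSIX-ish shell; the command name is just an example):

```shell
# "type" reports whether a name is an alias, function, builtin, or file.
type find
# Drop any alias quietly; the stderr redirect hides the "not found"
# complaint, and "|| :" keeps a missing alias from aborting a script
# that is running under set -e.
unalias find 2>/dev/null || :
```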
Quote:
Actually, now that I think more on this, if you really think that aliases are a dangerous failure mode, then you have to go to extreme lengths to cope with it. For instance, with the cs.sh script that I have presented so far, you not only need to worry about the find command being aliased, but you need to worry about every single other command in that script which could also be aliased: set, echo, while, read... It's basically impossible!
Yes, that's why you use full paths. It is much easier to port a path to a command than an entire script, especially if you define all the commands at the start of the script, and if you really want to, you can use "uname -s" to determine which platform you are on and set the path for any OS you know about. In a true Bourne shell you don't have to worry about aliases; in bash or korn or POSIX sh you do.
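A sketch of the style being described; the paths below are common locations but are my assumptions, not taken from the original post:

```shell
#!/bin/sh
# Define every external command once, at the top, per platform.
case "`uname -s`" in
    SunOS)          FIND=/usr/xpg4/bin/find ;;   # POSIX find on Solaris
    Linux|CYGWIN*)  FIND=/usr/bin/find ;;
    *)              FIND=find ;;                 # fall back to PATH lookup
esac

$FIND . -prune    # prints "." -- proves the chosen binary runs
```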
And that is not the full story. Don't forget security and abuse. By giving the full path you make sure you know what you are running; for that reason alone your script would fail a code review in many companies.
Quote:
I think that the complete command that I want (for dir = "./") is more like
That -nowarn option is critical, else you always get this nasty warning message,
find: warning: Unix filenames usually don't contain slashes (though pathnames do). That means that '-name `./'' will probably evaluate to false all the time on this system. You might find the '-wholename' test more useful, or perhaps '-samefile'. Alternatively, if you are using GNU grep, you could use 'find ... -print0 | grep -FzZ `./''.
But it appears that -nowarn is not a POSIX option, so we are back to square one...
No, again, why bother with non-POSIX semantics when the POSIX version works fine? The nowarn in GNU find is one of those GNU options that I personally think was pointless; the same thing is simply accomplished with stderr redirection.
or if you really want the / for some reason:
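The command examples that were lost from the post above were presumably along these lines: a sketch of the POSIX non-recursive idiom, with any GNU warnings discarded by redirecting stderr instead of using -nowarn (the '*.sh' pattern is my assumption, matching the cs.sh script discussed here):

```shell
# Non-recursive listing of .sh files using POSIX find options only;
# any chatty warnings go to /dev/null rather than needing -nowarn.
find . \( ! -name . -prune \) -name '*.sh' -print 2>/dev/null
```

For the starting directory "." itself, `! -name .` is false, so it is never pruned and its children are visited; every entry one level down is pruned before find can descend into it.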
Quote:
>It has little or nothing to do with laziness.
>It is more effort for the programmer to write the message
The "more effort for the programmer" is the laziness that I referred to.
That's a bit of an oxymoron; I really don't follow the logic, but no matter, it's a moot point.
Quote:
I agree: decent behavior is to print the specific error as well as the proper usage. The version of the script file in my next post below does this. You were also correct in saying that it is better to actually print the proper usage somehow (I chose to write a function to do this) than to merely store it as program comments.
Better!
Quote:
For instance, on both my cygwin and linux boxes, the shell is actually bash, and its syntax checker seems to have no problem with bash (but not bourne) constructs like [[ in .sh files.
That's bad, not good, in a lot of ways. It will blow up on a system with a true old Bourne shell (Solaris). I always try to write for the lowest common denominator if I want compatibility; however, you should know on this point that a "true" Bourne shell is not POSIX, and most of those features are in the POSIX definition, so I leave that to your judgement.
Quote:
Writing a POSIX compatible find seems to be the hangup; my newbie scripting skills are not up to it.
It's not really that difficult, see above. And aside from making the script portable, it makes your skill set portable, which is more important. My number one piece of advice to anyone learning scripting would be to use the standard behaviour of a utility, and learn how to do things without GNU extensions first, then add them when you are happy that you know the long way, so that you are capable of writing code for a more restrictive system.
On another note about writing portable code, if you are interested, it's always a good idea to have a few different OSes to try on. Cygwin will give you more or less Linux behavior, so I would generally count that as only one platform. A multiboot system or VMware with a few different OSes is always nice for testing scripts, and as a general rule, if your script works on Solaris using the default implementations of commands, it will port fairly easily, because Solaris tends to be conservative in the functionality its commands implement, some even less than POSIX, with POSIX versions being available in /usr/xpg4/bin. So my recommendation would be to add FreeBSD and Solaris to the mix.
Quote:
ok, don't think I'm beating up on you here, I am not. I'm only trying to pass on what experience has taught me, and wouldn't bother if I thought I was wasting my time, and certainly wouldn't do it just to be confrontational.
No worries: I have never felt that you are beating me up on anything. I solicited feedback, and you generously shared your knowledge with me. I appreciate that. My delay in responding is merely because I have been so busy for so long.
Speaking of overwork, I have no more time to invest in this shell script, but I have these final points to make:
1) I now think that aliases for this script (e.g. an alias for find) are no problem at all. Reason: aliases defined on the shell before executing this script are not inherited by the script when the user runs it (at least on cygwin and linux--is this true of all unixes?). Execute
on the command line; the first line should have the output
while the third should have output
Then put this as the contents of the file t.sh:
and on the command line execute
I get as its output
which proves that shell script files do not inherit aliases from their parent shell, at least on my systems.
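The commands and outputs referred to above were lost from the post; here is a sketch of an equivalent experiment (the alias definition and file name are my own invention):

```shell
#!/bin/sh
# In an *interactive* shell you would first run:
#   alias find='echo ALIASED'
#   find .          # the alias fires and prints "ALIASED ."
# A child script, however, never sees that alias:
echo 'find . -prune' > t$$.sh
sh t$$.sh           # runs the real find: prints "."
rm -f t$$.sh
```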
Since my cs.sh file does not use any internal aliases, either explicitly or implicitly (e.g. by sourcing .bashrc or something), it should be immune from aliases.
If it were vulnerable to aliases, then the way that I would temporarily suppress them is by encasing all commands inside single quotes (e.g. 'find'), an idea that I got from the Wikipedia article Alias (Unix shell).
2) I have been struggling for a while now to understand why you think that hardcoding full command paths inside a script--which will require a decent percentage of users to open the shell file, understand it, and then modify it--is easier than having the user modify their path if there are issues with, say, the find command.
I stand by my approach, particularly for who my target is with this shell script (primarily script developers, not the general user).
But I now think that I can see maybe where you are coming from: are you a sys admin at a large company? Or at least a sys admin on a machine with many different user accounts but whose system level configuration (e.g. where the commands are installed) you control? Or maybe you simply always use the same unix variant so that you reliably know what paths to use. In this case, if you are responsible for writing a shell script that has to work for all your different users, then I can see why you use hard coded paths.
3) Security:
Quote:
And that is not the full story. Don't forget security and abuse. By giving the full path you make sure you know what you are running; for that reason alone your script would fail a code review in many companies.
That sounds crazy! I use relative paths, for example, in all kinds of programming applications and they are utterly invaluable (e.g. in having a configuration that always works regardless of where the user installs my package). I cannot imagine that too many companies have a policy that anything other than full paths is a security hazard.
Quote:
No worries: I have never felt that you are beating me up on anything. I solicited feedback, and you generously shared your knowledge with me. I appreciate that. My delay in responding is merely because I have been so busy for so long.
Speaking of overwork, I have no more time to invest in this shell script, but I have these final points to make:
1) I now think that aliases for this script (e.g. an alias for find) are no problem at all. Reason: aliases defined on the shell before executing this script are not inherited by the script when the user runs it
You are correct here.
Quote:
2) I have been struggling for a while now to understand why you think that hardcoding full command paths inside a script--which will require a decent percentage of users to open the shell file, understand it, and then modify it--is easier than having the user modify their path if there are issues with, say, the find command.
Not easier, better. I am not suggesting that people randomly update the script, simply that it is ported by someone on a new platform.
Quote:
But I now think that I can see maybe where you are coming from: are you a sys admin at a large company? Or at least a sys admin on a machine with many different user accounts but whose system level configuration (e.g. where the commands are installed) you control? Or maybe you simply always use the same unix variant so that you reliably know what paths to use. In this case, if you are responsible for writing a shell script that has to work for all your different users, then I can see why you use hard coded paths.
Both a large company, and large user systems on multiple platforms. Though I have not been a true sys-admin for several years, I have been working as an enterprise product integration specialist and system architect for a number of years with responsibility for a shell code base of hundreds of scripts and countless thousands of lines of code. In theory your approach sounds easier, but in practice it does not scale well and makes code more difficult to port.
Quote:
That sounds crazy! I use relative paths, for example, in all kinds of programming applications and they are utterly invaluable (e.g. in having a configuration that always works regardless of where the user installs my package). I cannot imagine that too many companies have a policy that anything other than full paths is a security hazard.
Not at all crazy, in fact it's a pretty basic and sensible precaution, and is very common practice. The package location is one of the cases for using a defined environment variable which the user sets.
eg.
then in the script you do something like:
If you need a relative behavior you should anchor the script and work relative to the anchor point.
Let us assume that you have a simple layout like this:
Your script depends on script.conf for some information to do its job.
The relative approach:
Now you have to be in the directory and run the script as ./script.sh, otherwise the relative path will not work.
On the other hand if you do:
You can run the script from anywhere, and you have some degree of control over what is being run.
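The code examples lost from the post above were presumably along these lines; a sketch of the anchor-variable pattern, where SCRIPT_HOME and script.conf are illustrative names (in real use the user exports SCRIPT_HOME, e.g. in .profile, rather than the script creating a demo layout):

```shell
#!/bin/sh
# Demo layout standing in for the user's real installation directory.
SCRIPT_HOME=`mktemp -d`
echo 'GREETING="hello from script.conf"' > "$SCRIPT_HOME/script.conf"

# Relative approach (fragile): ". ./script.conf" works only when the
# current directory happens to be the script's own directory.

# Anchored approach: works no matter where the script is run from.
. "$SCRIPT_HOME/script.conf"
echo "$GREETING"    # prints: hello from script.conf
rm -rf "$SCRIPT_HOME"
```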