What's your most useful shell?


# 190  
Old 02-28-2011
Quote:
Originally Posted by tetsujin
My point is that adding scoped lifetime management to file descriptors in a shell wouldn't fall nearly as far outside the current model as you suggest.
Noted. I don't agree.
Quote:
read x < $f #Is that quite simple enough?
We already use $. This'd change the meaning of plenty of existing code where $f would be a file name. Unless you decide to do both, in which case this is a line of code that does two completely different things depending on what type of variable it is. You're not just adding anonymous files to your shell, you're adding operator overloading!
Quote:
Alternately, "read x <&$f" would make it plain that $f is an open file and not a filename.
$ means string, period. Pick something else.
Quote:
some_command $f # It's actually not too hard to make this do what the user intends...
How do you know that's what the user wants? How do you know the command even reads from stdin? How much code would this break, in what bizarre ways, when someone opens a file by accident? If you want implicit redirections, you redirect a code block to get everything inside it redirected.

This line will also do completely different things under different circumstances -- string or file -- even though it isn't a branch or control statement. New kinds of spaghetti logic can be made out of it. And now you need more shell extensions to tell the difference between file and string to avoid these weird corner cases.
Quote:
First, the shell could handle it like process redirection: assign the file $f some numeric FD in the new process, then in place of the argument $f, substitute /dev/fd/(some number)
If you still think the shell assigns these numbers, I don't know what to tell you.

Yes, that mode of anonymous redirection increases the code block level, that's unavoidable -- how else are you going to define scope but code blocks?

For global but anonymous file descriptors, how about this for a syntax:
Code:
# must be a function, alias, or builtin
open FILENAME FILEDES "r"

cat <&$FILEDES

That even works, if you write an 'open' function, since the second line isn't a new syntax.
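
A rough sketch of such an 'open' as a plain function, leaning on bash 4.1's {varname} redirection (the mode letters and ./some-file are just illustrative):
Code:
open() {    # usage: open FILENAME VARNAME MODE
    local file=$1 var=$2 mode=$3 fd
    case $mode in
        r)  exec {fd}<"$file"  ;;
        w)  exec {fd}>"$file"  ;;
        rw) exec {fd}<>"$file" ;;
        *)  echo "open: unknown mode '$mode'" >&2; return 1 ;;
    esac
    printf -v "$var" '%s' "$fd"    # hand the chosen FD number back via the named variable
}

open ./some-file FILEDES r
cat <&$FILEDES
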
# 191  
Old 02-28-2011
Corona688, tetsujin,

What does your discussion have to do with the question "What is Your Most Useful Shell?"

If we created a new thread with your posts, what would the title of that new thread be?
# 192  
Old 03-01-2011
Sorry for going off-topic. "Features of a new shell" might do as a topic if you split it.
# 193  
Old 03-17-2011
Quote:
Originally Posted by Neo
Corona688, tetsujin,

What does your discussion have to do with the question "What is Your Most Useful Shell?"
I don't limit my answer to that question to the shells that have already been written. Any programmer has the ability to write a new shell, and thus at least the potential to effect some change to what a shell is. So I approach the question in terms of where we could go next.

If you feel like moving the discussion, by all means...

Quote:
Originally Posted by Corona688
Quote:
NESTED QUOTES ARE USEFUL!
read x < $f #Is that quite simple enough?
Noted. I don't agree. We already use $. This'd change the meaning of plenty of existing code where $f would be a file name.
This is one reason why I suggested "read x <&$f" as an alternative form - it makes it very clear to anyone reading the code that the caller is reading from an open file descriptor (regardless of whether $f is a plain, numeric FD, or some kind of special object representing an open file), rather than specifying a filename for redirection.

Quote:
$ means string, period.
Well, $ generally means some substitution is happening. It is always a string at present - in part because the shell can't really deal in anything else.

But it could... There's no reason it couldn't.

$ also generally means a variable's value is being taken. That's the sense in which I'm using it.

Quote:
Quote:
some_command $f # It's actually not too hard to make this do what the user intends...
How do you know that's what the user wants? How do you know the command even reads from stdin?
Who said anything about the command reading from stdin?

Probably what the user wants, if they pass an open file descriptor as an argument to a command they're running, is to have the command operate on that file. To accomplish this, you need to attach that file descriptor to the new process, and you need to tell the new process where that file descriptor is.

In simple cases, you don't need this. You can just attach the files to the new process's stdin and stdout (and maybe stderr, though I think it would be poor practice to put anything but error messages out on stderr) and rely on the program to use those file descriptors in the usual way, or even implement specialized behavior (i.e. detect if stdin is a terminal, or a pipe, or a file, see if it's open read-only or read-write, see if it's seekable - and switch behaviors accordingly) - that works fine if you're only dealing with one or two files, and especially if the way you're using them fits the idea of stdin/stdout. (That is, it's kind of poor style to attach a file that's primarily used for input to stdout... And you wouldn't want to clobber a new program's stdin and stdout just to attach two open files to it, if you need that stdin/stdout for something else. Simple redirection should just be used where it makes sense, IMO.)
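
(A script can do that kind of stdin sniffing itself, too - a minimal sketch; the -p test against /dev/stdin is Linux-flavored:)
Code:
if [ -t 0 ]; then
    echo "stdin is a terminal" >&2
elif [ -p /dev/stdin ]; then
    echo "stdin is a pipe" >&2
else
    echo "stdin is a regular file (or something else)" >&2
fi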

So you can get around that by attaching the open file to another file descriptor in the new process. Fortunately it doesn't matter too much where the file winds up - the new process just needs to know where that file is so it can operate on it. So if you had a program that took numeric FDs as arguments:

Code:
exec {fd}<>$filename
cmd --file-fd=$fd   # Current shells export all open FDs to new child processes...  not really appropriate IMO

If the command expects a filename, you could form a /dev/fd path:

Code:
exec {fd}<>$filename
cmd --filename=/dev/fd/$fd
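
(Incidentally, that already works with real commands today - a quick sketch, with /etc/hostname standing in for any readable file and wc for any command that takes a filename:)
Code:
exec {fd}</etc/hostname     # the shell picks a free FD and stores the number in $fd
wc -c /dev/fd/$fd           # wc treats the /dev/fd path like any other filename
exec {fd}<&-                # close it when done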

One thing I'm suggesting here is that new child processes shouldn't get that file descriptor by default - so the first example would look more like this:
Code:
# If we suppose cmd doesn't get the file at ($fd) by default, then we need to use redirection syntax to provide it:
cmd --file-fd=$fd $fd<&$fd

But if $fd is something that knows it's a file descriptor, then specifying it as an argument is enough to tell the shell that the command should have that file descriptor left open across the exec() call. And since most programs take filenames, not numeric file descriptors, as arguments, the shell can handle forming that /dev/fd/ path itself:
Code:
# Some special syntax to make $fd an "open file descriptor object" goes here...
cmd --filename=$fd
# The shell knows $fd is an open file,
# so the substitution expands to /dev/fd/<whatever>,
# and the command that got $fd as an argument also gets the file itself in its file descriptor table.

This form is analogous to bash process substitution:
Code:
cmd_a <(cmd_b)

This runs cmd_b and cmd_a concurrently: the output of cmd_b is attached to a pipe, the other end of the pipe is bound to an open file descriptor in cmd_a's process, and "<(cmd_b)" is replaced with a filename from /dev/fd identifying that file descriptor. In this way, if cmd_a takes a filename as an argument, an open file handle can be specified in place of that argument. It's kind of a funky way of doing things, but it works: it's a viable way of passing open file descriptors to new processes that take filename arguments.

It works with a lot of software that already exists (i.e. software that takes filenames), and it's useful for cases where you can't simply bind the file to one of the "standard" file descriptors.
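
A familiar concrete case (plain bash; the file names are just placeholders):
Code:
diff <(sort file_a) <(sort file_b)   # diff is handed two /dev/fd/NN paths fed by the sorts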

Quote:
Quote:
First, the shell could handle it like process redirection: assign the file $f some numeric FD in the new process, then in place of the argument $f, substitute /dev/fd/(some number)
If you still think the shell assigns these numbers, I don't know what to tell you.
I'm talking about process substitution, not command substitution or normal redirection. I guess I used the wrong terminology there, sorry.

Yes, the shell does assign those numbers. It has to. It's the only way process substitution can work.
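
(Easy enough to see it picking the number, too - the exact FD will vary:)
Code:
$ echo <(true)
/dev/fd/63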

Quote:
Yes, that mode of anonymous redirection increases the code block level, that's unavoidable -- how else are you going to define scope but code blocks?
There are various little reasons I'm not too thrilled with that approach: like having to specify the filenames at the end of the block, instead of wherever it makes sense within the block... What if you've got an "if" statement that tests whether a file exists, and you want the file to be opened only if the condition is true? (Your "then" clause winds up with an extra, unnecessary level of nesting, if you want the open file to be scoped...) What if there are multiple different conditions that could lead to different files being opened for the same purpose? (I guess if you did the regular scoped file-open and then re-opened the same fd, the new file would be open but the scoping would be intact...)

Also, the scoping mechanism appears to be broken in my installed version of bash... (4.1.5) - the file stays open. I don't know offhand if this is considered a bug.
Code:
$ bash --version   #reports 4.1.5
$ while false; do read x <&$f; echo $x; done {f}<./some-file
# goes nowhere, does nothing, of course.  But...
$ read x <&$f; echo $x   #Guess what?  $f is still open!

Quote:
For global but anonymous file descriptors, how about this for a syntax:
Code:
# must be a function, alias, or builtin
open FILENAME FILEDES "r"

cat <&$FILEDES

That even works, if you write an 'open' function, since the second line isn't a new syntax.
In fact, we already have this, just with a different syntax:

Code:
exec {FILEDES}<FILENAME
cat <&$FILEDES
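
(And the matching close, for completeness - standard bash as well:)
Code:
exec {FILEDES}<&-    # close the dynamically assigned descriptor again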

Frankly, it's taken me a while to get up to speed with all the features of Bash (especially the "relatively new" ones - non-Posix stuff introduced in the last 10-15 years that most people don't use). It's hard to find information on some of these features, which is a shame, because it's interesting stuff and it addresses some important issues. I think this feature does address the majority of what I had seen as the problem. Specifically, it solves the issue of the user having to manually select numeric FDs for newly-opened files (and ensure that these do not conflict with other open files).
# 194  
Old 03-17-2011
Quote:
Originally Posted by tetsujin
This is one reason why I suggested "read x <&$f" as an alternative form - it makes it very clear to anyone reading the code that the caller is reading from an open file descriptor (regardless of whether $f is a plain, numeric FD, or some kind of special object representing an open file), rather than specifying a filename for redirection.
That's already valid syntax that already does something completely different.

Code:
FD=0
cat <&$FD

Shoehorning your solution in there will break existing code. Pick something else.
Quote:
Well, $ generally means some substitution is happening. It is always a string at present - in part because the shell can't really deal in anything else.

But it could... There's no reason it couldn't.
...aside from the fact that you want this to be a shell language, that is. Among other things that means weak typing. If you don't want a shell language, don't use a shell language.
Quote:
There are various little reasons I'm not too thrilled with that approach: like having to specify the filenames at the end of the block
No you don't.

Code:
<file_in >file_out cat

[edit] that doesn't work for shell builtins like while. You can bodge it in with <file cat | while but that's not elegant. The ability to do so could be a useful addition.
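
For a compound command like while, the redirection still has to trail the block in today's syntax, i.e.:
Code:
while read -r line; do
    printf '%s\n' "$line"
done <file_in >file_out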

Quote:
...instead of wherever it makes sense within the block... What if you've got an "if" statement that tests whether a file exists, and you want the file to be opened only if the condition is true?
Then you check if the file exists, and open it. You have all those features already. Nothing's forcing you to redirect into code blocks except that it frequently makes sense to do so; if you don't want to use this feature sensibly, don't use a shell language.

Writing shell code frequently means different decisions about how you lay out your code because of the way code blocks work. If you don't want that, don't use a shell.
Quote:
Also, the scoping mechanism appears to be broken in my installed version of bash... (4.1.5) - the file stays open. I don't know offhand if this is considered a bug.
I'd presume not. Why should {f} close just because it's on the same line? It's not like stdin closes.

Last edited by Corona688; 03-17-2011 at 04:10 PM..
# 195  
Old 03-18-2011
(regarding "cmd <&$fd")

Quote:
Originally Posted by Corona688
That's already valid syntax that already does something completely different.
It redirects from an open file descriptor. It's conceptually the same operation whether $fd is a numeric file descriptor or some "special file descriptor object".

Quote:
Code:
FD=0
cat <&$FD

Shoehorning your solution in there will break existing code. Pick something else.
You know, I'm not placing too high a priority on backward compatibility here. If people have existing code that works with an existing shell, they can go on using that code with the shell it was written for. I wouldn't expect code written for bash to run perfectly in ksh, either. Any extensions either shell provides over baseline Bourne shell are very likely not going to work.

That said, I don't see how my idea breaks anything. It could coexist with explicitly numbered FDs as long as the mechanisms for opening the file are kept distinct.

But explicitly numbered FDs for files opened in the shell aren't really that valuable a feature anyway. If someone opens a file in the shell, they have no real need to know the number of that file descriptor. (Assuming, as I generally do, that the shell shouldn't indiscriminately export all open file handles to every job it runs.) At most, people need to know which numeric file descriptor an open file will occupy in a job, if they want the job to have access to that FD - and even that is a fairly rare case. I'd just as soon break compatibility if it leads to a superior overall design.

Quote:
...aside from the fact that you want this to be a shell language, that is. Among other things that means weak typing.
Traditionally, it means weak typing. There's no reason a new shell would have to follow the same pattern. From my experience working with PHP I'm not a huge fan of weak typing, honestly.

Anyway, what I've described is already a form of weak typing. It's weak typing in the same sense that integer variables can only take integer values, but can be inserted into strings without explicit conversion.

Code:
# Assume $fd is a special object representing an open file:
$ some_cmd $fd    # I've already described how this could work, assuming the dominant case where the command takes filenames as input...
$ some_cmd < $fd  # And how this could work...  Possibly with "<&" instead of "<"
$ some_cmd 5< $fd  # And this could be made to work as well

The important principle of weak typing, that you can put whatever wherever and have a reasonable (but not foolproof) expectation that it'll do what you want, is preserved.
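
(bash already has a mild precedent for attribute-typed variables that still expand as ordinary strings - declare -i, for instance; ksh's typeset -i is similar:)
Code:
declare -i count=3     # integer attribute
count=count+2          # the right-hand side is evaluated arithmetically
echo "total: $count"   # expands like any other string: prints "total: 5"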

Quote:
Quote:
("If you redirect a block, you have to put the redirection at the end of the block...")
No you don't.

Code:
<file_in >file_out cat

[edit] that doesn't work for shell builtins like while.
Well, shell builtins like while were exactly what we were discussing... What you described here isn't redirecting a block at all.

But, as you suggest - there's no apparent reason this couldn't be added to bash or similar shells... So it's not like it's something that would require revolutionary change to address the issue - unless I'm missing some subtle syntactic issue that makes it so.

Quote:
You can bodge it in with <file cat | while but that's not elegant.
It also only gets you one file... If you dup the file to another FD inside the block, the open file's lifetime is no longer tied to the scope of the block.
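
For instance (./some-file is a placeholder again):
Code:
{
    exec 5<&0          # dup the block's redirected stdin to FD 5
    read -r first      # reads the first line of ./some-file
} < ./some-file
read -r second <&5     # FD 5 is still open out here, past the end of the block
exec 5<&-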

Quote:
Quote:
...instead of wherever it makes sense within the block... What if you've got an "if" statement that tests whether a file exists, and you want the file to be opened only if the condition is true?
Then you check if the file exists, and open it. You have all those features already. Nothing's forcing you to redirect into code blocks except that it frequently makes sense to do so
But the whole point of redirecting into code blocks (for the purposes of this discussion) was to gain scoped lifetime for open files... If it can't be used effectively then we're kind of back where we started on the whole "scoped file" issue, and it's something current shells don't usefully provide.

Quote:
Quote:
Also, the scoping mechanism appears to be broken in my installed version of bash... (4.1.5) - the file stays open. I don't know offhand if this is considered a bug.
Code:
$ bash --version   #reports 4.1.5
$ while false; do read x <&$f; echo $x; done {f}<./some-file
# goes nowhere, does nothing, of course.  But...
$ read x <&$f; echo $x   #Guess what?  $f is still open!

I'd presume not. Why should {f} close just because it's on the same line? It's not like stdin closes.
If you were redirecting stdin for the block, yes that stdin would close, and you'd get your previous stdin back - even though the builtin runs in the shell's own process.

Though I guess what I described in bash is only the case if you're making a dynamic FD assignment as part of the redirection:

Code:
$ some_cmd 7<./some_file
# ./some_file is closed when some_cmd terminates.
# The assignment of this particular file to this particular FD affects only this one job.
# This works whether some_cmd is a builtin or an external command

$ some_cmd {f}<./some_file
# If some_cmd is an external command:
# * The assignment of $f will only apply to the job (and then only if $f is exported)
# * The file at file descriptor $f will be open for the job only
# If some_cmd is a builtin:
# * The assignment of $f will be applied to the shell process
# * The file at the file descriptor given by $f will remain open after some_cmd terminates.

So I guess I was wrong there - "scoped file open" isn't broken in Bash 4, it's just incompatible with dynamically assigned file descriptors... It's an unfortunate disparity and it plays out as one more limitation of the current implementation of scoped-lifetime open files in the shell. But it's probably just a bug and not an intended behavior. And, of course, it only applies to bash...

Quote:
Writing shell code frequently means different decisions about how you lay out your code because of the way code blocks work.
But the way code blocks work can be changed. If an alternate design would make the shell a nicer environment to work in, then such change is a good thing. Programming languages evolve over time. Even shells have gained lots of new features over the years. Numeric variables, expanded substitution and history recall syntax, arrays, regexes, coprocs...

(side note, I was pleasantly surprised when I found out, just the other day, that bash 4 includes coprocs! Seems like a nice implementation, too... the FDs for communicating with the new coproc are dynamically assigned and stored into an array variable...)
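
(A tiny sketch of what that looks like - "BCALC" is just a name I picked, and it assumes bc is installed:)
Code:
coproc BCALC { bc -l; }
echo "2^10" >&"${BCALC[1]}"      # write to the coprocess's stdin
read -r answer <&"${BCALC[0]}"   # read its stdout
echo "$answer"                   # prints 1024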

Quote:
If you don't want a shell language, don't use a shell language
if you don't want to use this feature sensibly, don't use a shell language.
If you don't want that, don't use a shell.
And again, and again, and again with this crap.

Look, the fact that what I come up with doesn't fit what you're used to from a shell doesn't make it not a shell. It could be not a Posix shell, or not a Bourne-compatible shell - but "shell" doesn't mean exactly and only what the usual Unix implementations provide.

To me the defining characteristics of a shell that set it apart from other programming languages are:
1: Its primary mode of operation is to run other programs, provide them with input and do useful things with their output.
2: It's well-suited for interactive use, but the way it's used interactively also translates nicely to non-interactive use (scripting).

This is a pretty broad definition, of course, but a pretty essential one. It leaves a lot of room for different approaches to shell design - many of which wouldn't nicely fit with Unix shell tradition. I think it can be worthwhile to break from tradition - or if nothing else to consider it. Status quo shouldn't blind us to the potential offered by other possibilities. But to be clear what I want is a shell. I just see no reason for this dichotomy... It doesn't have to be "The Unix shell as it's presently implemented" or "not a shell at all".

(EDIT): Yeah, I guess I'd agree that it's probably time for this whole discussion to be extricated from this particular thread. I might go with "speculative shell features" as a title - but whatever works, really...

Last edited by tetsujin; 03-18-2011 at 04:03 PM..
# 196  
Old 03-24-2011
Quote:
Originally Posted by tetsujin
It's conceptually the same operation whether $fd is a numeric file descriptor or some "special file descriptor object".
Maybe it's "conceptually" the same, but it's not actually the same. That's kind of more important.
Quote:
You know, I'm not placing too high a priority on backward compatibility here. If people have existing code that works with an existing shell, they can go on using that code with the shell it was written for. I wouldn't expect code written for bash to run perfectly in ksh, either.
But code written for the Bourne shell ought to and does work in any of them -- but might not work in your "shell" because your extensions break compatibility with basic, basic, basic Bourne shell features. Pick something else.
Quote:
That said, I don't see how my idea breaks anything.
It changes the meaning of existing code and existing variables, that's how. Even worse, it does it implicitly. It also completely changes the language from a weakly typed one into some bizarre mixture of weak and strong types.

Perl has a lot of advantages, but the way it handles variables can be quite annoying. Try to print something and you get ARRAY(0x00000000) all over the place because you forgot what type your variable was, or because some fancy library's documentation forgot to tell you what to expect; and because the language was derived from the weakly-typed AWK, there's nothing in place to bonk you on the head when you use a list as an array, or a hash as a reference, and so on. Imagine that in a shell. Current shells do a fairly good job of combining everything under the umbrella of "string". If you start adding variables that have no sensible meaning as strings, a Perl-like mess is what you get.
Quote:
...But explicitly numbered FDs for files opened in the shell aren't really that valuable a feature anyway.
It's one of the most important features of the shell and a fundamental part of how inter-process communication works in a shell. It's very powerful even if you don't think you need it. If you don't want it, you don't have to use a shell language.

I don't think you're really reading what I'm writing here.

Last edited by Corona688; 03-24-2011 at 11:23 AM..