What's your most useful shell? - Post 302505677 by tetsujin, Thursday 17th of March 2011, 02:43 PM
Quote:
Originally Posted by Neo
Corona688, tetsujin,

What does your discussion have to do with the question "What is Your Most Useful Shell?"
I don't limit my answer to that question to the shells that have already been written. Any programmer has the ability to write a new shell, and thus at least the potential to effect some change to what a shell is. So I approach the question in terms of where we could go next.

If you feel like moving the discussion, by all means...

Quote:
Originally Posted by Corona688
Quote:
read x < $f #Is that quite simple enough?
Noted. I don't agree. We already use $. This'd change the meaning of plenty of existing code where $f would be a file name.
This is one reason why I suggested "read x <&$f" as an alternative form - it makes it very clear to anyone reading the code that the caller is reading from an open file descriptor (regardless of whether $f is a plain, numeric FD, or some kind of special object representing an open file), rather than specifying a filename for redirection.
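
For comparison, here's roughly how the two readings look in today's bash, with $f holding either a file name or a shell-chosen numeric FD (notes.txt is just a made-up name):

Code:
f=./notes.txt             # $f is a file name here...
read x < $f               # ...so this opens and reads the file named by $f

exec {f}<./notes.txt      # now $f holds a numeric FD the shell opened for us...
read x <&$f               # ...and this reads from that already-open descriptor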

Quote:
$ means string, period.
Well, $ generally means some substitution is happening. It is always a string at present - in part because the shell can't really deal in anything else.

But it could... There's no reason it couldn't.

$ also generally means a variable's value is being taken. That's the sense in which I'm using it.

Quote:
Quote:
some_command $f # It's actually not too hard to make this do what the user intends...
How do you know that's what the user wants? How do you know the command even reads from stdin?
Who said anything about the command reading from stdin?

Probably what the user wants, if they pass an open file descriptor as an argument to a command they're running, is to have the command operate on that file. To accomplish this, you need to attach that file descriptor to the new process, and you need to tell the new process where that file descriptor is.

In simple cases, you don't need this. You can just attach the files to the new process's stdin and stdout (and maybe stderr, though I think it's poor practice to put anything but error messages on stderr) and rely on the program to use those file descriptors in the usual way, or even to implement specialized behavior (e.g. detect whether stdin is a terminal, a pipe, or a file, whether it's open read-only or read-write, whether it's seekable - and switch behaviors accordingly). That works fine if you're only dealing with one or two files, and especially if the way you're using them fits the idea of stdin/stdout. (It's kind of poor style to attach a file that's primarily used for input to stdout, for instance... And you wouldn't want to clobber a new program's stdin and stdout just to attach two open files to it if you need that stdin/stdout for something else. Simple redirection should be used where it makes sense, IMO.)
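
(As an aside, here's a rough sketch of what that kind of detection looks like from a script's point of view - this assumes a Linux-style /dev/stdin:)

Code:
if [ -t 0 ]; then                 # fd 0 is a terminal
    echo "stdin is a terminal"
elif [ -p /dev/stdin ]; then      # fd 0 is a pipe (FIFO)
    echo "stdin is a pipe"
elif [ -f /dev/stdin ]; then      # fd 0 is a regular file
    echo "stdin is a regular file"
else
    echo "stdin is something else (socket, device, ...)"
fi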

So you can get around that by attaching the open file to another file descriptor in the new process. Fortunately it doesn't matter too much where the file winds up - the new process just needs to know where that file is so it can operate on it. So if you had a program that took numeric FDs as arguments:

Code:
exec {fd}<>$filename
cmd --file-fd=$fd   # Current shells export all open FDs to new child processes...  not really appropriate IMO

If the command expects a filename, you could form a /dev/fd path:

Code:
exec {fd}<>$filename
cmd --filename=/dev/fd/$fd
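
The /dev/fd trick already works today with tools that only take file names. For instance (just a sketch - old.txt and new.txt are made-up names):

Code:
exec {fd_a}<old.txt {fd_b}<new.txt   # the shell picks both descriptor numbers
diff /dev/fd/$fd_a /dev/fd/$fd_b     # diff only ever sees two "file names"
exec {fd_a}<&- {fd_b}<&-             # close both when done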

One thing I'm suggesting here is that new child processes shouldn't get that file descriptor by default - so the first example would look more like this:
Code:
# If we suppose cmd doesn't get the file at ($fd) by default, then we need to use redirection syntax to provide it:
cmd --file-fd=$fd $fd<&$fd
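# (Note: in today's bash the number to the left of <& has to be a literal digit string
#  or a {name} form; "$fd<&$fd" would just pass $fd as an extra argument and redirect
#  stdin from fd $fd, so the line above is proposed syntax rather than current behavior.)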

But if $fd is something that knows it's a file descriptor, then specifying it as an argument is enough to tell the shell that the command should have that file descriptor left open across the exec() call. And since most programs take filenames, not numeric file descriptors, as arguments, the shell can handle forming that /dev/fd/ path itself:
Code:
# Some special syntax to make $fd an "open file descriptor object" goes here...
cmd --filename=$fd
# The shell knows $fd is an open file, so the substitution expands to the
# corresponding /dev/fd/<N> path, and the command that got $fd as an argument
# also gets the file itself in its file descriptor table.

This form is analogous to bash process substitution:
Code:
cmd_a <(cmd_b)

This runs cmd_b and cmd_a concurrently: the output of cmd_b is attached to a pipe, the other end of the pipe is bound to an open file descriptor in cmd_a's process, and "<(cmd_b)" is replaced with a filename from /dev/fd identifying that file descriptor. So if cmd_a takes a filename as an argument, an open file handle can be supplied in place of that argument. It's kind of a funky way of doing things, but it works.

It's an effective mechanism for passing open FDs to new processes: it works with a lot of software that already exists (i.e. software that takes filenames), and it's useful for cases where you can't simply bind the file to one of the "standard" file descriptors.
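
A familiar concrete use of the same trick is comparing two command outputs by handing diff a pair of process substitutions (the file names here are just placeholders):

Code:
# Each <(sort ...) becomes a /dev/fd path wired to the output of its sort.
diff <(sort list-a.txt) <(sort list-b.txt)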

Quote:
Quote:
First, the shell could handle it like process redirection: assign the file $f some numeric FD in the new process, then in place of the argument $f, substitute /dev/fd/(some number)
If you still think the shell assigns these numbers, I don't know what to tell you.
I'm talking about process substitution, not command substitution or normal redirection. I guess I used the wrong terminology there, sorry.

Yes, the shell does assign those numbers. It has to. It's the only way process substitution can work.
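
You can watch it pick the number just by expanding a process substitution as an argument to echo (on a Linux box; the exact fd varies, but the choice is the shell's, not the user's):

Code:
$ echo <(true)
/dev/fd/63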

Quote:
Yes, that mode of anonymous redirection increases the code block level, that's unavoidable -- how else are you going to define scope but code blocks?
There are various little reasons I'm not too thrilled with that approach: like having to specify the filenames at the end of the block, instead of wherever it makes sense within the block... What if you've got an "if" statement that tests whether a file exists, and you want the file to be opened only if the condition is true? (Your "then" clause winds up with an extra, unnecessary level of nesting if you want the open file to be scoped - sketched below...) What if there are multiple different conditions that could lead to different files being opened for the same purpose? (I guess if you did the regular scoped file-open and then re-opened the same fd, the new file would be open but the scoping would stay intact...)
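
Here's a rough sketch of the extra nesting I mean, using bash's {name} redirection on a brace group (./some-file stands in for whatever file you'd open):

Code:
if [ -e ./some-file ]; then
    {
        read x <&"$f"
        echo "$x"
    } {f}<./some-file    # the open is scoped to this inner group - hence the extra level
fi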

Also, the scoping mechanism appears to be broken in my installed version of bash... (4.1.5) - the file stays open. I don't know offhand if this is considered a bug.
Code:
$ bash --version   #reports 4.1.5
$ while false; do read x <&$f; echo $x; done {f}<./some-file
# goes nowhere, does nothing, of course.  But...
$ read x <&$f; echo $x   #Guess what?  $f is still open!

Quote:
For global, but anonymous filedescriptors, How about this for a syntax:
Code:
# must be a function, alias, or builtin
open FILENAME FILEDES "r"

cat <&$FILEDES

That even works, if you write an 'open' function, since the second line isn't a new syntax.
In fact, we already have this, just with a different syntax:

Code:
exec {FILEDES}<FILENAME
cat <&$FILEDES

Frankly, it's taken me a while to get up to speed with all the features of Bash - especially the relatively new ones, the non-POSIX stuff introduced in the last 10-15 years that most people don't use. It's hard to find information on some of these features, which is a shame, because it's interesting stuff and it addresses some important issues. Anyway, I think this feature does address the majority of what I had seen as the problem. Specifically, it solves the issue of the user having to manually select numeric FDs for newly-opened files (and ensure that those don't conflict with other open files).
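
For what it's worth, a quick demonstration of that automatic allocation (the file path is just an example):

Code:
exec {fd}</etc/hostname       # bash picks a free descriptor (typically 10 or above) and stores the number in fd
echo "bash allocated fd $fd"
while IFS= read -r line; do echo "$line"; done <&"$fd"
exec {fd}<&-                  # close it when we're done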
 
