The Lounge › What is on Your Mind? › Speculative Shell Feature Brainstorming — Post 302509404 by tetsujin, Wednesday 30th of March 2011, 05:32:13 PM
Quote:
Originally Posted by Corona688
Quote:
If two wgets (with no /dev/tty) are run concurrently with shared stdout, the display winds up corrupted as the two processes write to stdout simultaneously:
Naturally. I'm not sure it's the shell's job to fix this, though.
In the design I've suggested here, the shell doesn't take responsibility for fixing that. Rather, it merely provides a mechanism that allows an external program to fix that... The shell creates the PTYs for each thread (if the user requests TTY sharing) and connects them appropriately, but the job of determining how those PTY masters are used is left up to the program specified by the user.

And if someone doesn't want that, they don't specify the TTY multiplexer option when they write their loop. They can just specify the "-j" option to thread their loop iterations and not sweat the details of how output is interleaved or how TTY access is managed. The end result there is that the user gets a jumbled display if their loop iterations write to the TTY at the same time - but that's their choice.

Quote:
Taking over the terminal that way also means there's no longer one place to type input and one place to read output. That's fine in screen when you created the windows and know what they are, but when they create and destroy themselves in droves, the interface might seem just as bad a scramble as you were trying to fix.
In the design I described, "screen" windows don't get created and destroyed during the loop.

Rather, when writing the loop, if the user has explicitly requested a terminal-sharing mechanism be used to synchronize display between the multiple loop iterations being run concurrently, then there will be one terminal created for each thread, not one for each iteration.

This doesn't create a perfect display, because as each loop iteration ends, a new one takes its place on the display. There's no display of history, basically. But it's a quick & dirty way for the user to get a display that's at least readable...

Quote:
I suppose you could reserve the lower half of the screen for normal console-like I/O but you'd have to emulate it with a pty, which'd turn your shell into a hacking tool -- people could spoof password logins by default.
Could you explain the password spoofing issue to me? Depending on the nature of the issue it could obviously be serious...

Quote:
Not that it wouldn't be useful but maybe these things really do deserve to be separate.
Well, they are separate... What I'm describing here is a mechanism that would allow someone to easily provide complicated behavior via external utilities. The added syntax and functionality just makes it a lot easier to hook up the needed pipes and PTYs.

Quote:
Again remember that you're writing a programming language, not a GUI.
I consider a shell to be both a programming language and a UI. To me, that dual nature is pretty much the defining characteristic of a shell.

Quote:
People might put it to uses you don't expect, or might not be able to because you didn't give them means to do what they "usually" don't need to.
Considering the "usual" case is still useful, though, for deciding what things should be well-supported and convenient in the syntax.

Quote:
Holding too strictly to things which prettify the terminal could make your language hard to use without one. There's a lot of different ways programs could print output. Do you want to add special modes for all of them, or just give the programmer a way to get at what's there?
In this case I'm proposing only that the shell provide the mechanism a programmer would need to easily "prettify" the output himself. Strictly speaking, it's a capability that's mostly already present in the shell - it's just not something that's easy to do.

For instance, to do the equivalent of a multi-threaded loop in a current version of bash, the simplest approach is to run each loop iteration as a background subshell job.
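That simplest version might look like this (a minimal sketch; the loop body is just an illustrative echo):

```shell
#!/bin/bash
# Each iteration becomes a background subshell job.
for f in a b c; do
    ( echo "processing $f" ) &   # runs concurrently; output may interleave
done
wait   # don't fall off the end of the script while iterations still run
```

Note that without the final `wait`, the script could exit while iterations are still running.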

If you wanted to limit how many of them run at once (after all, running far more jobs than you have CPU cores at best works around some I/O blocking, and at worst slows you down through VM thrashing) - that's more complicated. When the loop starts, the first four iterations go into the background; then on each subsequent iteration you have to wait for an already-running step to terminate before launching the next one in the background. This could be done with the "wait" builtin (honestly, I've never gotten it to work reliably with subshells run in the background) - or, if that were unusable for whatever reason, one could create a pipe (mkfifo wouldn't work unless you could open both ends of the fifo before the loop starts) and have each loop iteration write a byte to the pipe when it's done and read a byte from the pipe before it starts. ("wait"ing is probably the better approach.)
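The "wait" approach can be sketched in today's bash roughly like this. This is an illustration, not part of the proposal: `work` is a hypothetical stand-in for the loop body, and `wait -n` (wait for any one job) only exists in bash 4.3 and later, so the sketch falls back to a plain `wait` elsewhere:

```shell
#!/bin/bash
# Throttle a loop to at most $maxjobs concurrent background iterations.
work() { sleep 0.1; echo "done $1"; }   # stand-in loop body
maxjobs=4

for i in $(seq 1 10); do
    # Once $maxjobs iterations are in flight, block until one finishes.
    while [ "$(jobs -rp | wc -l)" -ge "$maxjobs" ]; do
        wait -n 2>/dev/null || wait   # wait -n needs bash >= 4.3; else wait for all
    done
    work "$i" &
done
wait   # let the final batch drain
```

The `jobs -rp | wc -l` count is the crude part: it polls the job table rather than being told which slot opened up, which is exactly the "identity" problem discussed below.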

Then, if you wanted to distribute input to the stdin of the individual loop iterations, or combine their output into a single stdout stream (implementing some kind of useful synchronization method to produce the desired kind of output) - first, the individual loop steps need to know not just when one of those four "slots" is open but which one: the separate "threads" have to have an identity so they can attach to distinct points on the multiplexer/demultiplexer. Then, for an output multiplexer, you need to create a pipe for the stdout of each "thread", feed the "downstream" end of each of those pipes to the multiplexer program, and redirect each loop iteration to the proper "upstream" end when you run it...
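The per-thread-pipe plumbing can be approximated with named pipes. In this sketch all the names (the temp directory, the fifo names, the thread count) are illustrative, and a trivial per-fifo `sed` labeler stands in for the real multiplexer program, which would read from all the pipes at once:

```shell
#!/bin/bash
# One fifo per "thread"; each iteration writes to its thread's fifo, and a
# reader labels the lines so the merged stream stays legible.
dir=$(mktemp -d)
nthreads=3

for i in $(seq 0 $((nthreads - 1))); do
    mkfifo "$dir/thread$i"
    sed "s/^/[thread$i] /" < "$dir/thread$i" &   # stand-in multiplexer leg
done

# Each "thread" writes to its own fifo instead of the shared stdout.
for i in $(seq 0 $((nthreads - 1))); do
    echo "iteration output from $i" > "$dir/thread$i" &
done
wait
rm -rf "$dir"
```

This is exactly the "threads need identity" point: each writer has to know which fifo is its own.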

Then, if you want to do TTY sharing, you need to create a set of PTYs (open /dev/ptmx - assuming the system has it - then get the slave name and use it to open the slave end; notably, I think the shell is missing a "ttyname" facility) and distribute them to the respective "threads" as with the I/O pipes (you'd also need a means of making that PTY the controlling terminal for the job), then feed the PTY master filenames or file descriptors to the command that takes responsibility for managing the display... And then, I guess, rely on the terminal-sharing program (screen or whatever) to propagate signals to the individual jobs... (There are other ways you could do this, too - like running multiple instances of a program under screen, and sending those instances instructions for each piece of work you want your loop to do.)

So not much of what I describe is beyond current shells' capabilities - it's just not an easy thing to do. The idea is to think about what kinds of facilities the shell can reasonably provide that will make it easier for people to do whatever they want to do.

Of course, even just implementing the "-j" option without any of the multiplexing stuff would be useful for a lot of cases.
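For comparison, the throttling half of a "-j" option already exists outside the shell in xargs, whose -P flag limits how many commands run at once (-P is a GNU/BSD extension rather than old POSIX, so this assumes one of those implementations):

```shell
# Run up to 4 of the commands concurrently, one per input line.
printf '%s\n' one two three four |
    xargs -P 4 -I{} sh -c 'echo "got {}"'
```

What xargs doesn't give you is the rest of the proposal: loop syntax, per-thread identity, or any of the output/TTY multiplexing.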


Going back to the issue of shell threading:
I hadn't thought about the interaction between fork() and threads. But then, "threads" in an interpreted language don't have to map onto actual OS execution threads: an interpreter can schedule "green" threads internally, the way some language runtimes do. From the perspective of the OS such threads don't exist (they're not separate entities in the scheduler), but within the context of the language itself they work like any other threads implementation.

That could complicate the implementation of built-ins if those built-ins might block on input or output - so I'd have to think about that one, and weigh how much complication it'd introduce to the implementation of built-ins against the impact of using real threads: synchronizing access to the environment and dealing with the fork() issue you describe...

If the implementation did use real threads, another option for dealing with forking would be to fork off a process that does nothing but listen to a pipe that tells it what to fork and run, and a Unix domain socket that feeds it file descriptors to attach to the new processes... Of course it'd also have to communicate back information about when jobs terminate... Apart from the fact that it solves the thread problem pretty handily, it seems like kind of an ugly solution, really.
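The control-pipe half of that helper-process idea can even be sketched at the shell level. This is only an illustration of the shape of it: the fd-passing over a Unix domain socket and the termination reporting can't be expressed in plain shell, so this shows just a runner that reads command lines from a fifo and executes them:

```shell
#!/bin/bash
# A helper forked once, up front, whose only job is to run what it's told to.
fifo=$(mktemp -u); mkfifo "$fifo"

while read -r cmd; do
    eval "$cmd"
done < "$fifo" &
runner=$!

# Clients write commands into the pipe instead of forking themselves.
exec 3> "$fifo"            # hold the write end open across multiple requests
echo 'echo "ran job 1"' >&3
echo 'echo "ran job 2"' >&3
exec 3>&-                  # closing the write end ends the runner's read loop
wait "$runner"
rm -f "$fifo"
```

Passing whole command strings like this is also why it feels ugly: the real design would hand the helper file descriptors, not text.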

As for the other impacts of threading - parens could still be used specifically to specify a subshell context (after all, both bash and ksh provide curly braces as a way to group commands without creating a subshell context) - the main impact then would be that things would not be implicitly shuffled off to subshell context as a result of their position in a pipeline. (Whether that's better than, say, ksh's approach of establishing the convention that only the last part of a pipeline is in the current job is debatable, I guess. I think it'd be preferable. "Don't subshell anything unless I say so" instead of "accept as a necessity that all but one of the commands on a pipeline must be run in a separate process and therefore a separate environment")
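The pipeline-subshell behavior being discussed is easy to demonstrate in today's bash (ksh's convention, and bash 4.2's later `shopt -s lastpipe`, exist precisely to soften this):

```shell
#!/bin/bash
# bash runs every element of a pipeline in a subshell, so variable changes
# made inside the loop are lost when the pipeline ends.
count=0
printf 'a\nb\nc\n' | while read -r line; do
    count=$((count + 1))
done
echo "count=$count"   # prints count=0: the increments happened in a subshell
```

Under the "don't subshell anything unless I say so" model, `count` would be 3 here unless the loop were explicitly parenthesized.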
 
