The Lounge What is on Your Mind? Speculative Shell Feature Brainstorming Post 302509407 by Corona688 on Wednesday 30th of March 2011 06:41:37 PM
Quote:
Originally Posted by tetsujin
That allows you to use object lifetime to control resource allocation - including allocation of resources the shell may not have been explicitly designed to manage.
I'm actually not against that. As long as a variable always acts like a variable, the special cases I keep harping on about don't exist. That's the whole point of polymorphism, yes? It lets you use different things the same way?

Perl does a really bad job of polymorphism. It has specialized itself into a big mess of special types and operators that are all context-sensitive. Depending on what kind of variable it is, print $variable, "\n" could print a string, a blank line, a mess of junk, the length of something, a pointer dump like ARRAY(0x123851023), a syntax error, or a host of other things. You can spend hours just trying to figure out what kind of variable someone's Perl module hands off to you, let alone how to use it.

The shell has avoided this by keeping the meaning of its operators consistent and having only one interface for variables: strings. ${something} means the string. ${#something} means its length. ${*} means all arguments. "${@}" means all arguments, each preserved as a separate word instead of being split on IFS. They work alone or in combination, on arrays or strings or environment variables or local variables. The shell knows the difference and that's enough. In the few cases where you're not allowed to combine them, you always get a syntax error, not a silent substitution of nothing, not a length you didn't ask for, not a garbled mess, not a pointer dump. You can always expect to access a shell variable the same way no matter what it is, and the interface is flexible enough to encompass most possibilities. That's polymorphism.
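A quick sketch of that consistency in bash (arrays need bash/ksh): the same expansion syntax works on a plain string, an array, and the positional parameters alike.

```shell
#!/bin/bash
# One interface for every kind of variable: the string expansion syntax.
str="hello world"
arr=(one two three)

echo "${str}"        # the string itself: hello world
echo "${#str}"       # its length: 11
echo "${arr[1]}"     # one array element: two
echo "${#arr[@]}"    # number of elements: 3

set -- a "b c" d
printf '<%s>\n' "${@}"   # each argument as its own word: <a> <b c> <d>
```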

That's also why I think expanding the concept of a variable would work better than tacking the new concept of scoped files onto the side of it. Let the variables be accessed the same way, as strings: it's already convenient for the programmer, fits nicely with the existing syntax for redirecting files, and wouldn't need huge amounts of new syntax. All that needs to be different about the variable is the ability to close a file when it goes out of scope. The scoping doesn't have to be something the programmer needs new syntax to use, any more than a C programmer uses a stack variable differently than a global one.

---------- Post updated at 04:41 PM ---------- Previous update was at 03:52 PM ----------

Quote:
Originally Posted by tetsujin
In the design I've suggested here, the shell doesn't take responsibility for fixing that. Rather, it merely provides a mechanism that allows an external program to fix that... The shell creates the PTYs for each thread (if the user requests TTY sharing) and connects them appropriately, but the job of determining how those PTY masters are used is left up to the program specified by the user.
Does that have to be done as part of the parallel loop? That sounds like something you might want to do in general, inside or outside a loop. Let the parallel loop be the parallel loop and the TTY sharer be the TTY sharer.
Quote:
In the design I described, "screen" windows don't get created and destroyed during the loop.

Rather, when writing the loop, if the user has explicitly requested a terminal-sharing mechanism be used to synchronize display between the multiple loop iterations being run concurrently, then there will be one terminal created for each thread, not one for each iteration.
That could be confusing if the threads end up doing different things on each iteration.
Quote:
This doesn't create a perfect display, because as each loop iteration ends, a new one takes its place on the display. There's no display of history, basically. But it's a quick & dirty way for the user to get a display that's at least readable...
I agree it could be useful but I'm not sure it should be part of the shell script. You end up with shell scripts that require expect to run, and start spewing garbage to stdout if you try to disentangle them.
Quote:
Could you explain the password spoofing issue to me? Depending on the nature of the issue it could obviously be serious...
Utilities like ssh and su require password input to come from a terminal, but PTYs count. Something that gives you quick and easy access to PTYs, let alone in a high-performance parallel manner, isn't the sort of tool you want to just leave lying around -- another reason expect is a last resort.
Quote:
I consider a shell to be a programming language and a UI. To me, it's pretty much the unique defining characteristic of a shell.
True, but take a look at the way these features are made. A shell script isn't going to blow up because someone disabled tab completion -- shell scripts don't use tab completion, that's a user thing. Nor are they going to press PgUp to recall the last command and blow up when history features turn out to be unavailable; that's a user thing too. You cannot write scripts depending on these features, and that's intentional -- the shell might care whether it's in a terminal or not, but the program usually doesn't.

That's what I mean by not building it in: make it something like a debug flag that changes the behavior to pop up these little windows. The script doesn't have to know or care whether you're snooping on its threads' outputs or not; that can be left up to the shell. After all, what would know the current terminal better than the shell?
Quote:
Considering the "usual" case is still useful, though, for deciding what things should be well-supported and convenient in the syntax.
It doesn't have to be all-or-nothing. I think it's possible to make a syntax that's both convenient and general-purpose.
Quote:
If you wanted to limit how many of them run at once (after all, running a substantially higher number of jobs than your number of CPU cores at best is going to get you around some I/O blocking, at worst it's going to slow you down via VM thrashing) - that's more complicated.
Not THAT complicated.

Code:
#!/bin/bash

THREADS=5       # maximum concurrent jobs
WAIT=0          # jobs reaped so far
END=0           # jobs launched so far

# Wait for the oldest job still running.  Needs bash/ksh arrays.
function waitnext
{
        wait "${PIDS[$(( (WAIT++) % THREADS ))]}"
}

for ((N=0; N<100; N++))
do
        # If THREADS jobs are already in flight, reap the oldest first.
        [ "$((END-WAIT))" -ge "$THREADS" ] && waitnext
        sleep 1 & PIDS[$(( (END++) % THREADS ))]=$!
done
# Wait for ALL remaining processes.
wait

There's a shell feature missing that'd make this better and easier: the ability to wait for any one process without saying which one. Right now you have a choice between waiting for one specific process and waiting for all of them. [edit] Or trapping SIGCHLD. That'd work, but it would work better if the shell had something like semaphores.
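Lacking a wait-for-any, one workaround is to poll with kill -0. This is only a sketch, not bulletproof: it assumes bash, which collects exited background jobs as it runs (so kill -0 stops succeeding once one dies), and PID reuse could in principle confuse it.

```shell
#!/bin/bash
# Hypothetical helper: block until any one of the given background
# PIDs has finished, then reap it with wait to fetch its status.
waitany() {
        local pid
        while :; do
                for pid in "$@"; do
                        if ! kill -0 "$pid" 2>/dev/null; then
                                wait "$pid"   # bash remembers the status
                                return
                        fi
                done
                sleep 0.1     # GNU sleep accepts fractional seconds
        done
}

sleep 1 & slow=$!
sleep 0.2 & fast=$!
waitany "$slow" "$fast"   # returns when the 0.2-second sleep exits
kill "$slow" 2>/dev/null
wait 2>/dev/null
```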

Quote:
Going back to the issue of shell threading:
I hadn't thought about the interaction between fork() and threads - But, then, "threads" in an interpreted language don't have to be interpreted as actual execution threads: Python (the C implementation) for instance, implements threads internally. From the perspective of the OS these threads don't exist (they're not separate entities in the scheduler) but within the context of the language itself they work as any other threads implementation would.
That's just timesharing. If you want actual benefits from threading, you need real OS threads and/or multiple processes.
Quote:
If the implementation did use real threads, another option for dealing with forking would be to fork off a process that does nothing but listen to a pipe that tells it what to fork and run, and a Unix domain socket that feeds it file descriptors to attach to the new processes...
That's roughly what I said, but in more detail.
Quote:
Of course it'd also have to communicate back information about when jobs terminate... Apart from the fact that it solves the thread problem pretty handily, it seems like kind of an ugly solution, really.
It is, and it could cause other problems. If the parent closes a file descriptor, this child-launcher will have to follow its lead somehow.

There's a better way to do this but I don't quite remember it. Something to do with fork callbacks (pthread_atfork, perhaps).
Quote:
As for the other impacts of threading - parens could still be used specifically to specify a subshell context (after all, both bash and ksh provide curly braces as a way to group commands without creating a subshell context)
...What? Really? :)

That's perfect for scoping your files! Allow local variables in braces like that; your files could be some of those variables, containing FD numbers. When they go out of scope, the files close.
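For what it's worth, redirections already behave a bit like that when applied to a brace group in bash/ksh: the FD is opened on entry and closed on exit, and since no subshell is created, variables set inside survive. A minimal sketch:

```shell
#!/bin/bash
tmp=$(mktemp)
printf 'line1\nline2\n' > "$tmp"

# FD 3 is scoped to the brace group: opened on entry, closed on exit.
# No subshell is forked, so $first is still set afterwards.
{
        read -r first <&3
} 3< "$tmp"

echo "first=$first"      # first=line1
# FD 3 is closed again here; 'read -r x <&3' would now fail.

rm -f "$tmp"
```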

Last edited by Corona688; 03-30-2011 at 08:01 PM..
 
