I was testing how daemon restarts react to a shortage of mysql connections: I drop the limit, kill a daemon, wait for new messages, then raise the limit, wait for the message count to quiesce, and move on to the next daemon. It works fine except that bash blows junk from $() out onto the console. I guess I am singularly cursed.
Maybe it comes from the ENV?
I never heard of the input-generating shell caring that it was feeding mysql. I suspect the messages are on stderr or the tty, since they cannot be piping through mysql! It is a very obnoxious thing for a shell to do. Maybe I have to take it to bash-dev-land.
Yes, I too had a theory that the last command was sitting in a buffer but I couldn't account for the selective repetition, the overlap and the fact that it stops.
Imho it's something to do with MySQL, because the issue starts when MySQL gets its first program line. I suspect that it is bypassing unix I/O mechanisms and messing with the raw device (and upsetting the shell in the process).
If it doesn't need a terminal context, maybe try backgrounding the MySQL program.
I seem to remember you advising me that the following ksh construct doesn't work in bash (but if it does, it's worth a try):
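The snippet did not survive the quoting here, but the classic ksh idiom in question is presumably something like this:

```shell
# In ksh, the last element of a pipeline runs in the current shell,
# so this read sets x and y in the parent shell. In bash and most
# other Bourne-style shells the read happens in a subshell, and x
# is empty afterwards.
echo "hello world" | read x y
echo "x=$x"
```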
Nothing wrong with that construct. ksh just runs the last command of a pipeline in the current shell, so a read at the end of a pipe chain sets values in a subshell under a Bourne-style shell, but in the parent shell under ksh.
Yes, bash is less handy at
which has to be recoded, often to:
The 'read x' saves calling 'line', if your Linux/UNIX even has 'line' installed; otherwise it saves a 'head -1'. I suspect 'head -1' is not fully equivalent to 'line', as when many commands read the same fd, but it is equivalent to 'read x' at the cost of an extra fork and exec (both read stdin through a FILE* that reads ahead, which is fine for throughput and reduced overhead but no good for reading the same fd with many commands). Bash is mostly just as handy with
as long as you do not, later, want variables set in there, including x, since it runs in a subshell. Sometimes a bash script can be fixed with just a more explicit subshell where the variable is read and used (only):
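The stripped-out snippets were presumably along these lines (`some_cmd` is a stand-in for the real command):

```shell
some_cmd() { printf 'one\ntwo\n'; }    # stand-in for the real command

# ksh only:   some_cmd | read x        # sets x in the parent shell

# bash recoding 1: command substitution grabs the first line
x=$(some_cmd | head -1)
echo "x=$x"                            # x=one

# bash recoding 2: confine the read and every use of x to one
# explicit subshell
some_cmd | ( read x; echo "inside the subshell, x=$x" )
```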
It is an article of faith in bash that any ksh thing that malfunctions is a feature.
If I could make the $(...) echoing '...' out happen on demand, I could take a shot at seeing what it is writing and where, but alas, it just comes and goes. I am not buying the mysql-reaches-up-the-stdin-pipe theory. Pipes have a very few and bland set of behaviors, thank goodness.
I have been wishing for a revised, unified stdio FILE* library that would flush all output FILE* when blocking on any, and read ahead on input when blocked on other FILE*. Then there would always be low latency, and fflush() or buffering restrictions like line-buffered or unbuffered would not be necessary for most programs: good buffering during high throughput, yet low latency during a data drought, all with no code. You would only need such controls when many threads/processes write the same fd or file, to avoid splitting/mixing lines/packets.
Quote:
Originally Posted by DGPickett
I am not buying the mysql reaches up the stdin pipe theory.
Try this:
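The suggested test got lost in the quoting; a minimal way to check where the noise goes is to capture both standard streams, with `noisy` here standing in for the mysql invocation (log paths are illustrative):

```shell
# noisy() stands in for the mysql invocation; it writes to both streams.
noisy() { echo "to stdout"; echo "to stderr" >&2; }

# Capture stdout and stderr separately. Anything that STILL appears
# on the terminal after this must be going to /dev/tty directly,
# not to either standard stream.
noisy > /tmp/noisy-out.log 2> /tmp/noisy-err.log
```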
Command-line mysql features a full-fledged line editor complete with arrow keys and history recall. It can and does mess with your terminal. It doesn't have to "reach up stdin" to do this -- it can open /dev/tty and mess with it directly. Any interactive program has the potential to do this, really. How else would 'more' work when stuck on the end of a pipe chain?
Quote:
Originally Posted by DGPickett
Yes, bash is less handy at
which has to be recoded, often to:
You can also do
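The snippet got eaten here; the usual bash alternative meant in this spot is probably process substitution, which keeps the read in the current shell:

```shell
# Process substitution: the command runs in a subshell, but read
# runs in the current shell, so x survives the pipeline.
read x < <(echo "hello world")
echo "x=$x"                    # x=hello world
```

(bash 4.2 and later also offer `shopt -s lastpipe`, which makes the trailing command of a plain pipeline run in the current shell in non-interactive scripts.)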
Quote:
It is an article of faith in bash that any ksh thing that malfunctions is a feature.
It's not a BASH problem, or at least not just a BASH problem. Lots of non-bash Bourne shells have the same shortcoming. ksh's behavior is an extension.
Quote:
Originally Posted by DGPickett
I have been wishing for a revised stdio, unified FILE* library that would flush all output FILE* when blocking any, and read input ahead when blocking on other FILE*. Then, there would always be low latency, and fflush() or buffering restrictions like line or none would not be necessary for most programs, so you get good buffering during high throughput and yet low latency during data drought, all with no code.
Like the memory buffer used for pipes, and the caching+read-ahead Linux and UNIX do for files? That's handled decently well at the kernel level. (I've noticed some changes lately, actually, in pipes being flushed more often.) The trouble I've found is how to tell the kernel when not to do that. Any operation on a large file fills the file cache with junk that'll probably not be reused, and you need to posix_fadvise every step of the way to prevent it...
I think the default line-buffering for printf() and such is to make situations like this more efficient:
With buffering, that's only one context switch, made when printf("\n") finally calls write(). Turn off all buffering and it's 11 context switches.
I suppose a FILE* could be double buffered and use aio to move the data! Maybe I will write a lib! Double buffering (doing I/O in one buffer area while you write or read memory in another) got seriously neglected in UNIX except within TCP sockets.
I have never used 'cmd <<< word'; it is pretty close to 'echo word | cmd' or to 'cmd <<!
word
!', and in this '<<<$(...)' case it takes data from stdout a to stdin b, which is normally a pipe's simple job.
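For comparison, the three forms mentioned, which all deliver the same bytes on the command's stdin:

```shell
# Three ways to put the word on a command's stdin:
tr a-z A-Z <<< "word"          # here-string (ksh93/zsh/bash extension)
echo "word" | tr a-z A-Z       # plain pipe
tr a-z A-Z <<!
word
!
```

Each of the three prints WORD; the here-string, like echo and the here-document, supplies a trailing newline.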
I suppose that if all non-ENV shell variables of a session were in an mmap'd file, all sub-shells could see and change all variables. Of course, it might get a bit tricky with the locking and moving as things expand, soon becoming another heap in need of GC.