There's no need to use an external program when it can be done many times faster with bash (or any POSIX shell).
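For example, two decimal numbers can be compared with nothing but parameter expansion and `test`. This is only a sketch, assuming non-negative numbers with at most one decimal point; num_gt is a name made up for this post, not a standard function:

```shell
#!/bin/sh
# num_gt A B: succeed (exit 0) if decimal A > B, using only shell builtins.
# Sketch only: assumes non-negative numbers like 1.25, 3, 0.7.
num_gt() {
    a_int=${1%%.*}; a_frac=${1#"$a_int"}; a_frac=${a_frac#.}
    b_int=${2%%.*}; b_frac=${2#"$b_int"}; b_frac=${b_frac#.}

    # Compare the integer parts first
    [ "${a_int:-0}" -gt "${b_int:-0}" ] && return 0
    [ "${a_int:-0}" -lt "${b_int:-0}" ] && return 1

    # Integer parts equal: right-pad the fractional parts with zeros
    # to the same length, then compare them as integers
    while [ ${#a_frac} -lt ${#b_frac} ]; do a_frac=${a_frac}0; done
    while [ ${#b_frac} -lt ${#a_frac} ]; do b_frac=${b_frac}0; done
    [ "${a_frac:-0}" -gt "${b_frac:-0}" ]
}

num_gt 2.5 2.05 && echo "2.5 is greater than 2.05"
num_gt 1.25 1.3 || echo "1.25 is not greater than 1.3"
```

No new process is created anywhere in that function, which is the whole point of the argument above.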
The idea of not using an external program to do things the shell can't do any better is a bit illogical. Can you show an example where using the shell to do what the OP wants, compared to an external program such as awk (or others), puts the shell at a speed advantage? If possible, show how much faster it can get.
On the other hand, what the OP is doing might be part of a bigger project and not just comparing two numbers. In that kind of scenario, it's not even wise to use just the shell; in fact, a better programming language should be used. One other thing you neglected in terms of "how much faster" is the development time of creating the script and coping with compatibility issues. Imagine how much time one would waste coming up with the solution you proposed, especially comparing the decimal part. It takes so much less time when the simple syntax a > b is enough to do the job (and it's simpler to understand, too).
Just $0.02
Last edited by ghostdog74; 11-08-2009 at 10:19 PM.
Quote:
The idea of not using an external program to do things the shell can't do any better is a bit illogical. Can you show an example where using the shell to do what the OP wants, compared to an external program such as awk (or others), puts the shell at a speed advantage? If possible, show how much faster it can get.
When dealing with a string, an external program is many, many times slower than using shell internals.
In the time it takes to create a new process, a hundred (give or take) lines of shell code can be executed.
I have an 80-line shell function, fpmul, which multiplies floating point numbers.
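fpmul itself isn't reproduced in this post, but the core idea (strip the decimal points, multiply as integers, then re-insert the point) can be sketched in POSIX sh. fp_mul is a made-up name for the sketch, which assumes non-negative inputs and none of the real function's sign or rounding handling:

```shell
#!/bin/sh
# fp_mul A B: multiply two non-negative decimals with shell arithmetic only.
fp_mul() {
    ai=${1%%.*}; af=${1#"$ai"}; af=${af#.}
    bi=${2%%.*}; bf=${2#"$bi"}; bf=${bf#.}
    scale=$(( ${#af} + ${#bf} ))        # total decimal places in the result

    # Strip leading zeros so arithmetic expansion can't read them as octal
    x=$ai$af; while [ "$x" != "${x#0}" ] && [ "$x" != 0 ]; do x=${x#0}; done
    y=$bi$bf; while [ "$y" != "${y#0}" ] && [ "$y" != 0 ]; do y=${y#0}; done

    prod=$(( x * y ))
    if [ "$scale" -eq 0 ]; then echo "$prod"; return; fi

    # div = 10^scale, built with a loop (no external bc/expr)
    div=1; i=0
    while [ "$i" -lt "$scale" ]; do div=$(( div * 10 )); i=$(( i + 1 )); done

    int=$(( prod / div )); frac=$(( prod % div ))
    # Left-pad the fractional part back out to 'scale' digits
    while [ ${#frac} -lt "$scale" ]; do frac=0$frac; done
    echo "$int.$frac"
}

fp_mul 1.5 2.5    # → 3.75
fp_mul 12.34 5.6  # → 69.104
```

Every call runs entirely in the current shell; nothing is forked.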
Awk is much slower:
Quote:
On the other hand, what the OP is doing might be part of a bigger project and not just comparing 2 numbers. In that kind of scenario, its not even wise to use just the shell.
That is exactly when a more efficient use of the shell is important.
Quote:
In fact, a better programming language should be used.
On the contrary, complicated programs can be written in the shell that perform faster than those using another language.
There are times when an external program is better, especially dealing with large files.
Calling an external command to deal with a single string is the most inefficient code possible.
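A familiar illustration (the path here is hypothetical): splitting one small string with external commands versus with builtins:

```shell
#!/bin/sh
path=/usr/local/bin/script.sh      # hypothetical path

# Two new processes just to split one small string:
dir=$(dirname "$path")
base=$(basename "$path")

# The same result with parameter expansion; no process is created:
dir=${path%/*}
base=${path##*/}

echo "$dir $base"   # → /usr/local/bin script.sh
```

Done once it hardly matters; done inside a loop over thousands of lines, the process-creation cost dominates everything else.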
Quote:
One other thing you neglected in terms of "how much faster" is the development time of creating the script and coping with "compatibility issues".
Write for the POSIX shell and there are no compatibility issues.
Write for bash or ksh when extra efficiency is needed for specific tasks.
Since most people will have some familiarity with the shell, it is faster to learn to use it well than to learn a new programming language.
Quote:
When dealing with a string, an external program is many, many times slower than using shell internals.
Not many times slower... you are using echo to pipe to awk, and the extra pipe process increases the time.
It's just 0.003s, not that big a difference. Besides, for an apples-to-apples comparison it's not even accurate: the awk statement above already has the 2nd field stored in memory, ready to use at any time, whereas the shell version does not.
Quote:
I have an 80-line shell function, fpmul, which multiplies floating point numbers.
Who would want to write and maintain 80 lines of code instead of one line? Your timing and my timing may differ, since we may have different processors. Here's my timing:
Same with bc.
In the end, it doesn't matter a great deal. The only big deal is that you wrote 80 lines of code to gain a mere 0.003s over a one-liner.
Quote:
On the contrary, complicated programs can be written in the shell that perform faster than those using another language.
That's not true. Complicated programs should not be written in the shell, period. They should be written in languages like Python, or Perl with modules. There are simply many things the shell can't do very well.
Quote:
There are times when an external program is better, especially dealing with large files.
A tool that can process small and large files efficiently is still better than one that only works well on small files.
Quote:
Since most people will have some familiarity with the shell, it is faster to learn to use it well than to learn a new programming language.
Not really true. The shell's syntax is not exactly friendly to use. A programming language such as Python has easy-to-read syntax, and in that respect reading and deciphering code is "faster", and therefore easier to learn than the shell.
Nowadays it's not about speed of program execution, but about speed of script development and ease of code maintenance.
Quote:
Not many times slower... you are using echo to pipe to awk, and the extra pipe process increases the time.
Not much different from the pipe; a subshell adds very little time compared with creating a new process.
Quote:
It's just 0.003s, not that big a difference. Besides, for an apples-to-apples comparison it's not even accurate: the awk statement above already has the 2nd field stored in memory, ready to use at any time, whereas the shell version does not.
Quote:
Who would want to write and maintain 80 lines of code instead of one line?
What maintenance? It's a black box function that was written once, many years ago and hasn't been touched since.
Its use is a one-liner.
Quote:
Your timing and my timing may differ, since we may have different processors. Here's my timing:
Same with bc.
In the end, it doesn't matter a great deal. The only big deal is that you wrote 80 lines of code to gain a mere 0.003s over a one-liner.
No, I now write one line of code: fpmul ...
Quote:
That's not true. Complicated programs should not be written in the shell, period.
Absolute nonsense.
The shell is a very good programming language. It is the only one I need.
Quote:
They should be written in languages like Python, or Perl with modules. There are simply many things the shell can't do very well.
There are also things that Python and Perl don't do well.
File globbing and external commands are seamless in the shell; not so in other languages.
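A small, made-up example of that seamlessness (the directory and file names are hypothetical): a glob and an external-style command combine with no glue code at all:

```shell
#!/bin/sh
# Set up a throwaway directory with a couple of files to work on
d=$(mktemp -d) && cd "$d" || exit 1
: > a.log
: > b.log

for f in *.log; do            # the shell expands the pattern itself
    [ -e "$f" ] || continue   # guard against a pattern that matched nothing
    mv "$f" "$f.done"         # a command drops straight in; no quoting gymnastics
done
```

In most other languages the same task needs an explicit glob or listdir call, a loop, and a wrapped system or rename call.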
Quote:
A tool that can process small and large files efficiently is still better than one that only works well on small files.
But it is ridiculously inefficient to use them on a single string, which is how I often see them used.
How often have you seen something like:
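(The original example didn't survive the quoting; a typical instance of the pattern, with made-up data, might be:)

```shell
#!/bin/sh
line="2009-11-08 22:19 login ok"      # made-up record

# One new awk process per field extracted from a single string:
date=$(echo "$line" | awk '{print $1}')
status=$(echo "$line" | awk '{print $4}')

# The same extraction with no external processes:
set -- $line                          # let the shell split on whitespace
date=$1 status=$4

echo "$date $status"   # → 2009-11-08 ok
```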
Such coding can slow a script to a crawl.
Quote:
Not really true. The shell's syntax is not exactly friendly to use. A programming language such as Python has easy-to-read syntax, and in that respect reading and deciphering code is "faster", and therefore easier to learn than the shell.
Nowadays it's not about speed of program execution, but about speed of script development and ease of code maintenance.
Speed of execution is still very important.
The difference between a command that executes immediately and one that takes a second or two is the difference between a good user experience and a bad one.
Development of shell programs can be just as fast as writing perl or python, and just as legible.