On a hosted Linux environment which I have very little control over, I have a PHP script that takes in X floats, performs Y simple recursive arithmetic calculations, and produces some output for display to the user.
When I first created the script, X and Y were both in the single digits, and execution time wasn't really important because I was in the microsecond range, so that simple PHP script was good enough.
Today, the numbers of variables and calculations are rapidly increasing. I'm still in the sub-second range, but as time goes on I may need to start paying attention to performance.
My site is not revenue-producing, so I need to keep it on a simple, cheap hosted environment. If I were to rewrite those simple calculations in some compiled language, would I necessarily see substantial performance gains? Or would I see better results looking for a host with better resources (more cores, more memory)? PHP doesn't really do multithreading well, so I imagine a heavily recursive application would benefit from a rewrite?
I realize this is a very newbie question. Thanks for any advice.
Without seeing the code, we can't possibly say if it's slow, or why.
Without seeing the code, we can't possibly optimize it.
Without seeing the code, we can't possibly say how difficult it could be to replace it with a compiled language.
Without seeing the code, we can't possibly tell whether there'd be any benefit to doing so.
I was hoping for a more generalized discussion. My code is full of so many oddball things I'd have to try to explain... at its root it's just simple calculations, like:
pseudocode:
It's nothing groundbreaking, just simple multiplications. In my case it's just... a lot of them. Currently thousands of multiplications per request, which takes less than a second.
As more variables are thrown in, each request will grow to hundreds of thousands of calculations per request, at which point processing time will begin to become a concern.
I suppose my question is this: knowing PHP reasonably well, is there a way I can optimize the current code to run faster, maybe by tapping multiple cores via some type of multithreading? Would I be better served finding a faster provider? Or should I suck it up and learn Java or C++ or whatever else would let me send in some values and perform a million calculations as fast as possible on a simple, generic Linux hosting account?
General discussion can only tell you general things. In short, not too helpful.
How to optimize also depends on the code to be optimized. A general conversation can't tell you much.
That function, for instance, can be replaced with a table lookup. You can only calculate factorials so high anyway before your integers or even your floats overflow. Just make a big array, so that instead of making 17 function calls to calculate one number, you make one array lookup.
If you insist on calculating it, there's no point recursing here, just loop.
C can make things like this much faster, yes. I think I once wrote a loop to calculate Fibonacci numbers that was 2 or 3 instructions long. But arrays beat even that: one array lookup in C can amount to a single instruction.
But C isn't the best thing in the world for dealing with CGI, something PHP excels at. What you gain by using C might be lost by having to write PHP's easy features from scratch.
Bottom line: compiled code will execute faster than interpreted code, if that is your question. And if you are really doing factorials (or anything as deterministic as that), consider table lookups. An example of that is a primality test on numbers < 2^31.
A bsearch on an ordered table of numbers completes in very few operations compared with sieving.
The other thing: if you are actually doing something mathy, download and use the GNU Scientific Library (GSL). You can use it in your hosted environment.