Tim Bass
Wed, 21 Nov 2007 10:39:54 +0000
Opher Etzion has kindly asked me to write a paragraph or two on commercial off-the-shelf (COTS) software versus (hard) coding software in event processing applications.
My thoughts on this topic are similar to my earlier blog musings, Latency Takes a Back Seat to Accuracy in CEP Applications.
If you buy an EP engine (today) because it permits you to run some quick-and-dirty (rule-based) analytics against a stream of incoming events, you can do this quickly without spending considerable software development costs, and the learning and implementation curves for the COTS product are relatively low, this could be a good business decision, obviously. Having a software license for an EP engine that permits you to quickly develop and change analytics, variables, and parameters on-the-fly is useful.
On the other hand, the current state of many CEP platforms, and their declarative programming and modelling capabilities, is that they focus on If-Then-Else, Event-Condition-Action (ECA), rule-based analytics. Sophisticated processing requires more functionality than just ECA rules, because most advanced detection-oriented applications are not just ECA solutions.
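To make concrete what I mean by ECA-style processing, here is a minimal sketch in Python (purely illustrative, not any vendor's actual API; the rule, event fields, and threshold are all hypothetical) of an If-Then-Else, Event-Condition-Action rule running against a stream of incoming events:

```python
# Minimal Event-Condition-Action (ECA) sketch: each rule pairs a
# condition (a predicate over one event) with an action to fire.

def make_rule(condition, action):
    """Bundle a condition predicate and an action callback into one rule."""
    return (condition, action)

def process_stream(events, rules):
    """For each incoming event, fire the action of every rule whose
    condition matches -- the essence of If-Then-Else, ECA processing."""
    fired = []
    for event in events:
        for condition, action in rules:
            if condition(event):
                fired.append(action(event))
    return fired

# A hypothetical detection-style rule: flag any trade over a threshold.
rules = [
    make_rule(
        lambda e: e["type"] == "trade" and e["amount"] > 10_000,
        lambda e: f"ALERT: large trade {e['id']} for {e['amount']}",
    )
]

events = [
    {"type": "trade", "id": "t1", "amount": 5_000},
    {"type": "trade", "id": "t2", "amount": 25_000},
]

print(process_stream(events, rules))  # -> ["ALERT: large trade t2 for 25000"]
```

The point is that a single stateless rule like this is trivial to express in an ECA engine; it is the correlation of many events over time, and the statistical or probabilistic scoring behind advanced detection, that quickly outgrows this pattern.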
For many classes of EP applications today, writing code may still be the best way to achieve the results (accuracy, confidence) you are looking for, because CEP software platforms have not yet evolved into plug-and-play analytical platforms, providing a wide range of sophisticated analytics in combination with quality modelling tools for all business users and their advanced detection requirements.
For this reason, and others which I don't have time to write about today, I don't think that we can make blanket statements such as "CEP is about using engines versus writing programs or hard coding procedures." Event processing, in the context of specific business problems, is the "what," and using a CEP/EP modelling tool and execution engine is only one of the possible "hows" in an event processing architecture.
As we know, CEP/EP engines, and the marketplace for using them, are still evolving and maturing; hence, there are many CEP/EP applications, today and in the foreseeable future, that require hard coding to meet performance objectives, when performance is measured by actual business-decision results (accuracy).
Furthermore, as many of our friends point out, if you truly want the fastest, lowest latency possible, you need to be as close to the "metal" as possible, so C and C++ will always be faster than Java bytecode running in a sandbox written in C or C++.
And, as you (Opher) correctly point out, along with most of the end users we talk to, they do not process megaevents per second on a single platform, so this is a marketing red herring. This brings us back to our discussions on the future of distributed object caching, grid computing, and virtualization - so I'll stop and go out for some fried rice and shrimp, and some cold Singha beer.