Our firm started and maintained the
Linux Benchmarks for many years (and then retired them). During that time we received benchmarks for all kinds of UNIX and Linux systems (including HP-UX, Sun, and more): thousands of Byte UNIX Benchmark results (see
http://linux.silkroad.com for the 'retirement page').
Before running the Benchmarks, I was a firm believer that SCSI outperforms IDE. After reviewing and ranking over a thousand benchmarks, it became obvious that, in general, SCSI-based UNIX or Linux systems DO NOT outperform EIDE-based systems. Yes, the raw numbers show that SCSI is faster, but the difference does not show up in any noticeable (useful) benchmark result.
In fact, given the same OS, motherboard, CPU, memory, and so on, EIDE systems sometimes seem to outperform SCSI.
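If you want to sanity-check this kind of claim on your own hardware, a crude sequential-throughput test needs nothing more than dd. This is a minimal sketch, not one of the actual Byte UNIX Benchmarks; the scratch-file path is illustrative, and for a real comparison you would point it at a filesystem on the EIDE disk and then at one on the SCSI disk.

```shell
# Crude sequential write/read timing with dd (not a real benchmark suite).
# /tmp/disk_test is a hypothetical scratch file on the disk under test.
dd if=/dev/zero of=/tmp/disk_test bs=1M count=64 2>&1 | tail -1  # write pass
sync                                                             # flush pending writes
dd if=/tmp/disk_test of=/dev/null bs=1M 2>&1 | tail -1           # read pass
rm -f /tmp/disk_test                                             # clean up scratch file
```

Note that the read pass may be served partly from the page cache, so treat the numbers as rough indicators, which is exactly why the ranked benchmark results above are more telling than raw throughput figures.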
SCSI also tends to 'crap out': one bad or misconfigured terminator and your machine is dead. EIDE does not have this problem.
So, unless you are the very rare person who needs to attach seven devices to a host adapter instead of two, or who needs the (perhaps unrealizable) speed of a SCSI(n) interface, any gain in speed (perhaps none) is outweighed by the negatives: lower reliability and much higher cost.
On the other hand, if you are running a configuration like HP-UX ServiceGuard, or a similar system that can fail over CPUs while sharing a single disk, you must use a SCSI bus. To my knowledge, this cannot be done with an EIDE bus.
For 99.9 percent of the users in the world, EIDE offers comparable performance at much lower cost and with much less complexity.
The bottom line:
After running the Linux Benchmarks for a few years, I retired all the SCSI host adapters and disks on our home office systems. So, for the average home UNIX or Linux user: go EIDE. Big businesses with other requirements are another story, but the original poster (with an Intel PC) is more than likely a user who does not need SCSI.