Indeed, as a rule, the combination of LVM striping with high-end SAN storage (double or even triple striping) is to be avoided, for reasons of both simplicity and performance.
Perhaps remarkably, I have come across situations where LVM striping actually made a serious performance difference (an improvement) with high-end SAN storage and sequential read IO, but only with a narrow stripe width (say 4-8) and small stripe sizes (128 KiB-256 KiB).
This was because there was a front-end bottleneck, and at the same time it was difficult to increase IO queue sizes, which matters all the more because of the serial nature of Fibre Channel SANs (the bottleneck could even be observed when the cache-hit ratio at the front-end storage level was 100%).
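To illustrate what I mean by the queue-size limitation: on Linux the effective queue depth of a SCSI/FC LUN can be inspected (and, within the limits imposed by the HBA driver and multipath setup, adjusted) through sysfs. This is only a sketch; the device name sdX is a placeholder, and whether raising these values actually helps depends entirely on the HBA and array in question.

    # current SCSI queue depth of one LUN
    cat /sys/block/sdX/device/queue_depth

    # block-layer request queue size for the same device
    cat /sys/block/sdX/queue/nr_requests

    # raise the block-layer queue size; the SCSI queue_depth itself is
    # often capped by the HBA driver, so this alone may not be enough
    echo 256 > /sys/block/sdX/queue/nr_requests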
By using narrow striping it was possible to increase the effective IO queue size while not confusing the prefetch algorithms of the SAN storage. The LUNs in the narrow LVM stripe had to come from different physical disk sets at the backend SAN storage level (in the case of a storage array architecture where this makes a difference). At the SAN storage, this translated into a nice, even spread of backend usage, without hot spots.
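For the record, a narrow stripe of that kind can be set up with the standard LVM commands. The volume group name, LUN device names and sizes below are only examples; the essential points are the stripe width (--stripes 4) and the small stripe size (--stripesize 128k), with each LUN ideally coming from a different backend disk set.

    # four LUNs, assumed to map to different backend disk sets
    pvcreate /dev/mapper/lun1 /dev/mapper/lun2 /dev/mapper/lun3 /dev/mapper/lun4
    vgcreate vg_db /dev/mapper/lun1 /dev/mapper/lun2 /dev/mapper/lun3 /dev/mapper/lun4

    # narrow stripe: width 4, stripe size 128 KiB
    lvcreate --stripes 4 --stripesize 128k --size 500G --name lv_data vg_db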
Situations where this mattered were databases with quite a bit of sequential read IO. I saw it with Oracle databases that were never fully optimized because the standard query specifications kept changing, which in my experience happens often. Another case is when, out of necessity, reports or other batch jobs have to run during on-line usage.
Conversely, I have come across a situation where a large stripe size (4 MiB) was combined with a large stripe width (16), and that really confused the storage: the prefetch algorithms were thwarted, all IO was done in small sizes, and sequential read IO slowed to a crawl while the storage processors were working overtime.
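For comparison, that problematic layout would have been created with something like the following (again with example names); it is exactly the same command, only with the wide width and large stripe size that defeated the array's prefetch in my case.

    # wide stripe with a large stripe size -- the layout to avoid here
    lvcreate --stripes 16 --stripesize 4m --size 2T --name lv_data vg_db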
So as usual in performance tuning: "it depends...."