Some clarifications first:
How does "striping" work?
A disk (that is: one from the last 20 years) has a small (some MBs) buffer memory. Whenever a cylinder is read, its data is copied to this buffer and is then available relatively fast, but the disk needs some "downtime" afterwards to read and provide the next cylinder. If you have several disks and allocate space in a round-robin way, one disk after the other is queried, and while one disk spends its time preparing the next cylinder, the other disks are serving requests. The system as a whole looks faster from the outside than any single disk could be.
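A toy model may make this concrete. The sketch below is purely illustrative (the timing numbers and the `read_time` helper are invented, not measured): each disk delivers one chunk "fast" out of its buffer, then needs a recovery pause before the next one, and reading round-robin over several disks hides that pause behind the other disks' transfers.

```python
# Toy model of round-robin striping; numbers are made up for illustration.
def read_time(chunks, disks, transfer=1.0, recovery=4.0):
    """Total time to read `chunks` cylinders striped over `disks` disks."""
    ready = [0.0] * disks          # earliest time each disk can serve again
    clock = 0.0
    for i in range(chunks):
        d = i % disks              # round-robin: next disk in turn
        clock = max(clock, ready[d]) + transfer
        ready[d] = clock + recovery    # disk d now needs its "downtime"
    return clock

single = read_time(20, 1)   # one disk: pays the recovery before every chunk
striped = read_time(20, 5)  # five disks: recovery overlaps other transfers
print(single, striped)      # striped run finishes much sooner
```

With five disks the recovery time of each disk is completely hidden behind the transfers of the other four, so the striped run is limited only by the transfer time.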
What does "inter-policy maximum" mean?
It means that the maximum possible number of PVs (basically, hdisk devices belonging to one VG) is used to allocate the PPs for a LV. If you have 5 PVs (disks) in a VG and allocate 5 PPs to a LV, "maximum" would distribute them over all these PVs, while "minimum" would allocate them sequentially on one disk (provided, in each case, that the disks have enough free PPs).
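As a sketch of the difference (not real LVM code; the `allocate` helper and the numbers are hypothetical): "maximum" spreads a LV's PPs over as many PVs as possible, while "minimum" fills one PV before touching the next.

```python
# Toy sketch of inter-physical-volume allocation policies (not actual LVM code).
def allocate(num_pps, free_per_pv, policy="maximum"):
    """Return the PV index chosen for each PP of the new LV."""
    free = list(free_per_pv)
    placement = []
    for _ in range(num_pps):
        if policy == "maximum":
            # pick the PV with the most free PPs -> spreads over all PVs
            pv = max(range(len(free)), key=lambda i: free[i])
        else:
            # "minimum": stay on the first PV that still has room
            pv = next(i for i, f in enumerate(free) if f > 0)
        free[pv] -= 1
        placement.append(pv)
    return placement

# 5 PVs with 10 free PPs each, allocating 5 PPs for a LV:
print(allocate(5, [10] * 5, "maximum"))   # -> [0, 1, 2, 3, 4]
print(allocate(5, [10] * 5, "minimum"))   # -> [0, 0, 0, 0, 0]
```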
How does this relate to "PP striping"?
In fact: mostly not at all nowadays. The first reason is that "disks" (what AIX knows as hdisk devices) are rarely physical disks any more but typically LUNs from a storage system, or at least RAID sets behind a (caching) controller. Such storage systems already allocate the space they present to the outside in the way described above. You can't gain any speed from striping twice, but - because of a possible Moiré pattern - there is a small chance of a worsening effect (all the disk I/O effectively landing on the same physical disk).
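A contrived model of this aliasing effect (the layout is made up for illustration): suppose the LVM stripes chunks round-robin over 4 LUNs, and the storage array stripes every LUN round-robin over the same 4 physical disks, each LUN starting at disk 0 with a matching stripe width. Then a sequential run of 4 "striped" chunks ends up on one and the same spindle:

```python
# Toy illustration of the "Moiré" effect when striping twice (made-up layout).
NLUNS = NDISKS = 4

def physical_disk(chunk):
    lun = chunk % NLUNS        # LVM-level round robin over the LUNs
    offset = chunk // NLUNS    # position of the chunk inside its LUN
    return offset % NDISKS     # array-level round robin over physical disks

# The two round-robins cancel out: 4 consecutive chunks hit one disk.
print([physical_disk(i) for i in range(8)])  # -> [0, 0, 0, 0, 1, 1, 1, 1]
```

Instead of the load being spread over four spindles, every burst of four chunks queues up on a single one, which is exactly the worsening effect mentioned above.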
The second reason is: many years ago there typically were PP sizes of 4 or 8MB. This is a size which fits well into the buffer memory of the hdisk. Nowadays we have way bigger PP sizes (see the example above; the 128MB in vbe's posting is rather at the lower end) and these big PPs won't fit into the drive's buffer memory. Therefore you will gain only a little speed, because the disk, after having provided the "fast data", will have to provide some "slow data" before I/O moves on to another disk.
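As a back-of-the-envelope calculation (the 8MB buffer is an assumed figure at the upper end of "some MBs"): with a 128MB PP, only a small fraction of each PP can come out of the drive's buffer at all.

```python
# Assumed buffer size (upper end of "some MBs") vs. the 128MB PP size above.
buffer_mb = 8
pp_mb = 128
fast_fraction = buffer_mb / pp_mb   # share of each PP that is "fast data"
print(fast_fraction)                # the rest is "slow data" from the platters
```

So well over 90% of every PP has to come from the platters before I/O can switch to the next disk, which is why the old buffering advantage disappears.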
There is even a third reason: modern controllers have huge amounts of cache, typically one to several GB. These big caches make striping obsolete, because any effect striping could still have would not be noticeable anyway.
A final aspect: to benefit most from striping, the access pattern should be sequential, because this makes sure the accesses are distributed evenly across all disks. You mention that you use an RDBMS (Oracle), so the overwhelming majority of disk accesses will not be sequential at all but random. Striping will not help you speed up such random access (or only very little); the most you will get out of some cache. Whether you prefer the cache to be in hardware (caching controller, etc.) or in software (SGA, Unix buffering I/O via "file memory", etc.) is rather a matter of taste. I guess there won't be all too much difference between equal amounts of cache memory coming from any of these sources.
I hope this helps.
bakunin