Thank you for your replies.
Quote:
Originally Posted by
Corona688
ext4 uses generic_file_llseek for lseek, and I find this implementation for that in fs/read_write.c:
(...)
So really, nothing to it, and the only thing that could be blocking is that mutex...
I think you've saturated the kernel with so many simultaneous system calls to the same inode that they're competing for i_mutex.
(...)
I'm trying to wrap my mind around this... The mutex should be released right after the lseek, correct? Or is the mutex also held while writing? Otherwise the behaviour explained below wouldn't make sense to me: either lseek would be just as slow during reads, or the mutex should be released quickly enough not to matter... :S
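For reference, this is roughly what the locking in generic_file_llseek looks like in the older kernels I've seen (reproduced from memory, so treat it as a sketch rather than an exact copy of fs/read_write.c):

Code:
loff_t generic_file_llseek(struct file *file, loff_t offset, int origin)
{
        loff_t rval;

        /* the inode mutex is taken only around the seek itself */
        mutex_lock(&file->f_dentry->d_inode->i_mutex);
        rval = generic_file_llseek_unlocked(file, offset, origin);
        mutex_unlock(&file->f_dentry->d_inode->i_mutex);

        return rval;
}

If the write path takes the same i_mutex and holds it for the whole duration of each write, then an lseek on the same inode would have to wait behind in-flight writes but not behind reads. That would at least be consistent with my numbers below, but I'd like to confirm that this is actually what happens.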
Quote:
Originally Posted by
fpmurphy
(...)
However, the behavior you see is what I would expect. Writes by their very nature are going to take longer than reads. Reads can come from cache. Writes cannot.
I doubt this statement holds in general, since writes can be asynchronous: a buffered write() normally just copies the data into the page cache and returns, and the actual disk I/O happens later. But that is another story.
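As a quick illustration of what I mean (a toy example I put together, not part of the benchmark): the write() itself usually returns almost immediately, and only the fsync() pays for the disk access.

Code:
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

static double now_ms(void)
{
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec * 1000.0 + tv.tv_usec / 1000.0;
}

int main(void)
{
        char buf[4096];
        memset(buf, 'x', sizeof(buf));

        int fd = open("testfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }

        double t0 = now_ms();
        write(fd, buf, sizeof(buf));   /* usually just a copy into the page cache */
        double t1 = now_ms();
        fsync(fd);                     /* the actual disk write is paid for here */
        double t2 = now_ms();

        printf("write: %.3f ms   fsync: %.3f ms\n", t1 - t0, t2 - t1);
        close(fd);
        return 0;
}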
The point is that I see huge lseek latencies in a benchmark where 100 threads write randomly into files, while lseek is practically free when 100 threads read randomly from the same files (the per-thread loop is sketched below the numbers):
a) read, lseek, read, lseek, read, lseek,...
mean read latency: ~4ms
mean lseek latency: ~0.001ms
b) write, lseek, write, lseek, ...
mean write latency: ~10ms
mean lseek latency: ~8ms
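Each writer thread in case b) does essentially the following (a simplified sketch, not the exact benchmark code: the thread count, file size, number of operations and the data/fileNNN paths are placeholders, and the read case is the same loop with read() instead of write()):

Code:
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

#define NTHREADS   100
#define FILE_SIZE  (1024L * 1024 * 1024)   /* placeholder: 1 GiB per file */
#define BLOCK      4096
#define OPS        10000

static double now_ms(void)
{
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec * 1000.0 + tv.tv_usec / 1000.0;
}

/* one writer thread: lseek to a random offset, write one block,
 * and time the two calls separately */
static void *writer(void *arg)
{
        char path[64], buf[BLOCK];
        unsigned int seed = (unsigned int)(long)arg;
        double seek_ms = 0, write_ms = 0;

        memset(buf, 'x', sizeof(buf));
        snprintf(path, sizeof(path), "data/file%03ld", (long)arg);

        int fd = open(path, O_WRONLY);  /* files are pre-created before the run */
        if (fd < 0) { perror("open"); return NULL; }

        for (int i = 0; i < OPS; i++) {
                off_t off = (off_t)(rand_r(&seed) % (FILE_SIZE / BLOCK)) * BLOCK;

                double t0 = now_ms();
                lseek(fd, off, SEEK_SET);
                double t1 = now_ms();
                write(fd, buf, sizeof(buf));
                double t2 = now_ms();

                seek_ms  += t1 - t0;
                write_ms += t2 - t1;
        }
        printf("%s: mean lseek %.3f ms, mean write %.3f ms\n",
               path, seek_ms / OPS, write_ms / OPS);
        close(fd);
        return NULL;
}

int main(void)
{
        pthread_t tid[NTHREADS];

        for (long i = 0; i < NTHREADS; i++)
                pthread_create(&tid[i], NULL, writer, (void *)i);
        for (long i = 0; i < NTHREADS; i++)
                pthread_join(tid[i], NULL);
        return 0;
}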