News Linux dev delivers 6% file system performance increase – says ‘it was literally a five minute job’

Status
Not open for further replies.
'Nobody really needs' is a recipe for finding the people who need it. Who wants to bet on this being rolled back when they find out whose need they are ignoring?
 
'Nobody really needs' is a recipe for finding the people who need it. Who wants to bet on this being rolled back when they find out whose need they are ignoring?
When syscalls take at least several hundred nanoseconds, it's hard to see a case where someone is going to nitpick over a few ns; it's below the noise floor. They really could've just truncated it to microsecond precision and it would've been fine.
 
"File system performance" is a little misleading. What Jens has done is optimize the overhead of I/O submission. So it matters if you have a workload which is issuing a large number of I/Os, and if the storage device is fast enough that the overhead of recording time information for iostats dominates. For example, on a hard drive or a USB thumb drive, this change is probably not going to be noticeable. Even for a consumer-grade, SATA-attached SSD, it's probably not going to be that great of an improvement. Furthermore, it's only going to matter for a high-IOPS workload, such as 4k random reads or random writes. If your workload is a streaming read or streaming write, again, it's not going to be that big of a deal.
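To make "high-IOPS workload" concrete, here's a rough sketch of a 4k random-read loop. The file size and read count are arbitrary, and without O_DIRECT this mostly hits the page cache, so it exercises the submission path (where this kind of per-I/O overhead lives) rather than the device:

```python
import os
import random
import tempfile
import time

BLOCK = 4096          # 4k reads, as in a typical random-read benchmark
FILE_SIZE = 1 << 20   # 1 MiB scratch file (arbitrary size for the sketch)
N_READS = 1000        # number of random reads (arbitrary count)

# Build a scratch file to read from.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(FILE_SIZE))
    path = f.name

fd = os.open(path, os.O_RDONLY)
offsets = [random.randrange(FILE_SIZE // BLOCK) * BLOCK for _ in range(N_READS)]

start = time.monotonic()
for off in offsets:
    data = os.pread(fd, BLOCK, off)   # one submission per 4k block
elapsed = time.monotonic() - start

os.close(fd)
os.unlink(path)

print(f"{N_READS / elapsed:,.0f} reads/s (page-cache hot, submission cost only)")
```

At millions of submissions per second, shaving even tens of nanoseconds of per-I/O accounting overhead adds up; at streaming block sizes it's lost in the transfer time.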
 
"File system performance" is a little misleading. What Jens has done is optimize the overhead of I/O submission. So it matters if you have a workload which is issuing a large number of I/Os, and if the storage device is fast enough that the overhead of recording time information for iostats dominates.
Thanks for the clarification.

I do wonder what's the purpose of sampling the clock. Is it just for updating the file metadata?

Also, it sounds as though it applies to all file I/O in Linux - not just io_uring. Can you confirm?
 
'Nobody really needs' is a recipe for finding the people who need it. Who wants to bet on this being rolled back when they find out whose need they are ignoring?
High-frequency trading firms do need nanosecond-level logging, and they optimize performance around that information.
 
High-frequency trading firms do need nanosecond-level logging, and they optimize performance around that information.
They're not talking about globally caching the high resolution timer. This is just for file I/O, as far as I understand.

I'm totally with you on the general need for nanosecond resolution (or at least sub-microsecond precision). It's definitely useful for synchronizing entries from different logfiles and for detailed performance analysis.
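A trivial sketch of why that helps (hypothetical log entries): with nanosecond stamps, a plain sort recovers the global event order across separate logs even when events land within the same microsecond.

```python
import time

# Two hypothetical per-service logs, each entry stamped with time.time_ns().
log_a = [(time.time_ns(), "service-a: request received")]
log_b = [(time.time_ns(), "service-b: request forwarded")]

# Merge by timestamp; finer stamps mean fewer ambiguous ties to break.
merged = sorted(log_a + log_b, key=lambda entry: entry[0])

for ts, msg in merged:
    print(ts, msg)
```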

If we go back and look at specific filesystems, I'm not sure if they all even support nanosecond resolution. I believe XFS only added this somewhat recently.
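You can check what a given filesystem actually stores by inspecting `st_mtime_ns` on a fresh file; whether the low digits are meaningful depends on the filesystem's on-disk timestamp format. A quick sketch:

```python
import os
import tempfile

# Create a fresh file and inspect its modification time in nanoseconds.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"probe")
    path = f.name

st = os.stat(path)
print("st_mtime_ns:", st.st_mtime_ns)
# On filesystems that only store coarser granularity (whole seconds,
# 100 ns units, etc.), the trailing digits here will always be zero.

os.unlink(path)
```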
 