Revision 78076bb64aa8ba5b7207c38b2660a9e10ffa8cc7 authored by Jens Axboe on 05 December 2019, 02:56:40 UTC, committed by Jens Axboe on 05 December 2019, 03:12:58 UTC
We recently changed this from a single list to an rbtree, but for some
real life workloads, the rbtree slows down the submission/insertion
case enough so that it's the top cycle consumer on the io_uring side.
In testing, a hash table is a better-rounded compromise: it is fast for
insertion, and as long as it is sized appropriately, it works well for the
cancellation case too. Running TAO with a lot of network sockets, this
change stops io_poll_req_insert() from consuming 2% of the CPU cycles.

Reported-by: Dan Melnic <dmm@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>