CFQ ioscheduler tunables
========================

slice_idle
----------
This specifies how long CFQ should idle for the next request on certain cfq
queues (for sequential workloads) and service trees (for random workloads)
before the queue is expired and CFQ selects the next queue to dispatch from.

By default slice_idle is a non-zero value, which means that by default we
idle on queues/service trees. This can be very helpful on highly seeky media
like single-spindle SATA/SAS disks, where we can cut down on the overall
number of seeks and see improved throughput.

Setting slice_idle to 0 will remove all idling at the queue/service-tree
level, and one should see overall improved throughput on faster storage
devices like multiple SATA/SAS disks in a hardware RAID configuration. The
downside is that the isolation provided from WRITES also goes down and the
notion of IO priority becomes weaker.

So depending on the storage and workload, it might be useful to set
slice_idle=0. In general, for SATA/SAS disks and software RAID of SATA/SAS
disks, keeping slice_idle enabled should be useful. For any configuration
where there are multiple spindles behind a single LUN (host-based hardware
RAID controller or storage arrays), setting slice_idle=0 might result in
better throughput and acceptable latencies.
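
slice_idle is exposed through sysfs when CFQ is the active I/O scheduler for
a device. As a minimal sketch, assuming the device is sda and CFQ is already
selected for it (the default value shown is only illustrative and can differ
between kernel versions), idling can be disabled at run time like this:

  # cat /sys/block/sda/queue/scheduler
  noop deadline [cfq]
  # cat /sys/block/sda/queue/iosched/slice_idle
  8
  # echo 0 > /sys/block/sda/queue/iosched/slice_idle

The setting applies only to that device and does not survive a reboot, so it
is typically made persistent via an init script or udev rule if needed.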

CFQ IOPS Mode for group scheduling
==================================
The basic CFQ design is to provide priority-based time slices. A higher
priority process gets a bigger time slice and a lower priority process gets
a smaller time slice. Measuring time becomes harder if the storage is fast
and supports NCQ, and it would be better to dispatch multiple requests from
multiple cfq queues in the request queue at a time. In such a scenario it is
not possible to accurately measure the time consumed by a single queue.

What is possible, though, is to measure the number of requests dispatched
from a single queue and also to allow dispatch from multiple cfq queues at
the same time. This effectively becomes fairness in terms of IOPS (IO
operations per second).

If one sets slice_idle=0 and the storage supports NCQ, CFQ internally
switches to IOPS mode and starts providing fairness in terms of the number
of requests dispatched. Note that this mode switching takes effect only for
group scheduling. For non-cgroup users nothing should change.
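
A rough sketch of IOPS-based group scheduling, assuming a kernel built with
CONFIG_CFQ_GROUP_IOSCHED, the legacy (v1) blkio cgroup controller mounted at
/sys/fs/cgroup/blkio, an NCQ-capable device sda using CFQ, and two group
names (grp1, grp2) made up for illustration:

  # echo 0 > /sys/block/sda/queue/iosched/slice_idle
  # mkdir /sys/fs/cgroup/blkio/grp1 /sys/fs/cgroup/blkio/grp2
  # echo 800 > /sys/fs/cgroup/blkio/grp1/blkio.weight
  # echo 200 > /sys/fs/cgroup/blkio/grp2/blkio.weight
  # echo $$ > /sys/fs/cgroup/blkio/grp1/tasks

With this setup, tasks in grp1 and grp2 should see service in roughly a 4:1
ratio measured in dispatched requests rather than in disk time.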