.. SPDX-License-Identifier: GPL-2.0

===============
Core Scheduling
===============

Core scheduling support allows userspace to define groups of tasks that can
share a core. These groups can be specified either for security use cases (one
group of tasks does not trust another), or for performance use cases (some
workloads may benefit from running on the same core as they do not need the
same hardware resources of the shared core, or may prefer different cores if
they do share hardware resource needs). This document only describes the
security use case.

Security use case
-----------------
A cross-HT attack involves the attacker and victim running on different Hyper
Threads of the same core. MDS and L1TF are examples of such attacks. The only
full mitigation of cross-HT attacks is to disable Hyper Threading (HT). Core
scheduling is a scheduler feature that can mitigate some (not all) cross-HT
attacks. It allows HT to be turned on safely by ensuring that only tasks in a
user-designated trusted group can share a core. This increase in core sharing
can also improve performance; however, it is not guaranteed that performance
will always improve, though that has been seen to be the case with a number of
real world workloads. In theory, core scheduling aims to perform at least as
well as when Hyper Threading is disabled. In practice, this is mostly the case,
though not always: synchronizing scheduling decisions across two or more CPUs
in a core involves additional overhead, especially when the system is lightly
loaded. When ``total_threads <= N_CPUS/2``, where N_CPUS is the total number of
CPUs, the extra overhead may cause core scheduling to perform more poorly than
with SMT disabled. Always measure the performance of your workloads.

Usage
-----
Core scheduling support is enabled via the ``CONFIG_SCHED_CORE`` config option.
Using this feature, userspace defines groups of tasks that can be co-scheduled
on the same core. The core scheduler uses this information to make sure that
tasks that are not in the same group never run simultaneously on a core, while
doing its best to satisfy the system's scheduling requirements.

Core scheduling can be enabled via the ``PR_SCHED_CORE`` prctl interface.
This interface provides support for the creation of core scheduling groups, as
well as admission and removal of tasks from created groups::

    #include <sys/prctl.h>

    int prctl(int option, unsigned long arg2, unsigned long arg3,
              unsigned long arg4, unsigned long arg5);

option:
    ``PR_SCHED_CORE``

arg2:
    Command for operation, must be one of:

    - ``PR_SCHED_CORE_GET`` -- get core_sched cookie of ``pid``.
    - ``PR_SCHED_CORE_CREATE`` -- create a new unique cookie for ``pid``.
    - ``PR_SCHED_CORE_SHARE_TO`` -- push core_sched cookie to ``pid``.
    - ``PR_SCHED_CORE_SHARE_FROM`` -- pull core_sched cookie from ``pid``.

arg3:
    ``pid`` of the task for which the operation applies.

arg4:
    ``pid_type`` for which the operation applies. It is one of the
    ``PR_SCHED_CORE_SCOPE_``-prefixed macro constants. For example, if arg4
    is ``PR_SCHED_CORE_SCOPE_THREAD_GROUP``, then the operation of this command
    will be performed for all tasks in the task group of ``pid``.

arg5:
    userspace pointer to an unsigned long for storing the cookie returned by
    the ``PR_SCHED_CORE_GET`` command. Should be 0 for all other commands.

In order for a process to push a cookie to, or pull a cookie from, another
process, it is required to have the ptrace access mode
``PTRACE_MODE_READ_REALCREDS`` to that process.
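
As an illustration, the sketch below creates a new cookie for the calling
process and reads it back. It is a minimal illustrative example: it assumes a
``<linux/prctl.h>`` recent enough to provide the ``PR_SCHED_CORE*`` constants
and a kernel built with ``CONFIG_SCHED_CORE``, and keeps error handling to a
minimum::

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/prctl.h>
    #include <linux/prctl.h>

    int main(void)
    {
            unsigned long cookie = 0;

            /* Create a new unique cookie covering all threads of this process. */
            if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_CREATE, getpid(),
                      PR_SCHED_CORE_SCOPE_THREAD_GROUP, 0)) {
                    perror("PR_SCHED_CORE_CREATE");
                    return 1;
            }

            /* Read the cookie back; PR_SCHED_CORE_GET operates on a single task. */
            if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_GET, getpid(),
                      PR_SCHED_CORE_SCOPE_THREAD, (unsigned long)&cookie)) {
                    perror("PR_SCHED_CORE_GET");
                    return 1;
            }

            printf("core scheduling cookie: %#lx\n", cookie);
            return 0;
    }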

Building hierarchies of tasks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The simplest way to build hierarchies of threads/processes which share a
cookie and thus a core is to rely on the fact that the core-sched cookie is
inherited across forks/clones and execs, thus setting a cookie for the
'initial' script/executable/daemon will place every spawned child in the
same core-sched group.
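
For example, a small wrapper along the lines of the sketch below (hypothetical,
with error handling omitted) could tag itself before exec'ing a workload, so
that the workload and all of its descendants end up sharing one cookie::

    #include <unistd.h>
    #include <sys/prctl.h>
    #include <linux/prctl.h>

    int main(int argc, char *argv[])
    {
            if (argc < 2)
                    return 1;

            /*
             * Tag this process (pid 0 means the calling task). The cookie
             * survives the exec below and is inherited by every task that
             * the workload subsequently forks or clones.
             */
            if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_CREATE, 0,
                      PR_SCHED_CORE_SCOPE_THREAD_GROUP, 0))
                    return 1;

            execvp(argv[1], &argv[1]);
            return 1; /* only reached if exec failed */
    }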

Cookie Transferral
~~~~~~~~~~~~~~~~~~
Transferring a cookie between the current and other tasks is possible using
``PR_SCHED_CORE_SHARE_FROM`` and ``PR_SCHED_CORE_SHARE_TO`` to inherit a cookie
from a specified task or to share a cookie with a task. In combination this
allows a simple helper program to pull a cookie from a task in an existing core
scheduling group and share it with already running tasks.
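
Such a helper might look roughly like the sketch below (a hypothetical
illustration, not an existing tool): it pulls the cookie from a donor pid into
itself with ``PR_SCHED_CORE_SHARE_FROM`` and then pushes it onto a target pid
with ``PR_SCHED_CORE_SHARE_TO``. The helper needs ``PTRACE_MODE_READ_REALCREDS``
access to both tasks, as described above::

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/prctl.h>
    #include <linux/prctl.h>

    int main(int argc, char *argv[])
    {
            pid_t donor, target;

            if (argc != 3) {
                    fprintf(stderr, "usage: %s <donor-pid> <target-pid>\n", argv[0]);
                    return 1;
            }
            donor = atoi(argv[1]);
            target = atoi(argv[2]);

            /* Pull the donor's cookie into this helper ... */
            if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_SHARE_FROM, donor,
                      PR_SCHED_CORE_SCOPE_THREAD, 0)) {
                    perror("PR_SCHED_CORE_SHARE_FROM");
                    return 1;
            }

            /* ... then push it onto the target process and all of its threads. */
            if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_SHARE_TO, target,
                      PR_SCHED_CORE_SCOPE_THREAD_GROUP, 0)) {
                    perror("PR_SCHED_CORE_SHARE_TO");
                    return 1;
            }

            return 0;
    }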

Design/Implementation
---------------------
Each task that is tagged is assigned a cookie internally in the kernel. As
mentioned in `Usage`_, tasks with the same cookie value are assumed to trust
each other and share a core.

The basic idea is that every schedule event tries to select tasks for all the
siblings of a core such that all the selected tasks running on a core are
trusted (same cookie) at any point in time. Kernel threads are assumed trusted.
The idle task is considered special, as it trusts everything and everything
trusts it.

During a schedule() event on any sibling of a core, the highest priority task on
the sibling's core is picked and assigned to the sibling calling schedule(), if
the sibling has the task enqueued. For the rest of the siblings in the core, the
highest priority task with the same cookie is selected if there is one runnable
in their individual run queues. If a task with the same cookie is not available,
the idle task is selected. The idle task is globally trusted.
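
Purely as an illustration of this rule (a simplified sketch, not the kernel's
actual selection code; the helper name is invented and the field is shown only
to mirror the description above), the compatibility check applied when pairing
a candidate task with the core-wide pick can be thought of as::

    /* Illustrative only: simplified trust rule applied during the core-wide pick. */
    static bool cookies_compatible(struct task_struct *a, struct task_struct *b)
    {
            /* The idle task trusts everything and is trusted by everything. */
            if (is_idle_task(a) || is_idle_task(b))
                    return true;

            /* Otherwise two tasks may share a core only with identical cookies. */
            return a->core_cookie == b->core_cookie;
    }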

Once a task has been selected for all the siblings in the core, an IPI is sent to
siblings for whom a new task was selected. Siblings on receiving the IPI will
switch to the new task immediately. If an idle task is selected for a sibling,
then the sibling is considered to be in a `forced idle` state. I.e., it may
have tasks in its own runqueue to run, however it will still have to run idle.
More on this in the next section.

Forced-idling of hyperthreads
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The scheduler tries its best to find tasks that trust each other such that all
tasks selected to be scheduled are of the highest priority in a core. However,
it is possible that some runqueues have tasks that are incompatible with the
highest priority ones in the core. Favoring security over fairness, one or more
siblings could be forced to select a lower priority task if the highest
priority task is not trusted with respect to the core-wide highest priority
task. If a sibling does not have a trusted task to run, it will be forced idle
by the scheduler (the idle thread is scheduled to run).

When the highest priority task is selected to run, a reschedule-IPI is sent to
the sibling to force it into idle. This results in four cases which need to be
considered depending on whether a VM or a regular usermode process was running
on either HT::

          HT1 (attack)            HT2 (victim)
    A     idle -> user space      user space -> idle
    B     idle -> user space      guest -> idle
    C     idle -> guest           user space -> idle
    D     idle -> guest           guest -> idle

Note that for better performance, we do not wait for the destination CPU
(victim) to enter idle mode. This is because the sending of the IPI would bring
the destination CPU immediately into kernel mode from user space, or cause a
VMEXIT in the case of guests. At best, this would only leak some scheduler
metadata which may not be worth protecting. It is also possible that the IPI is
received too late on some architectures, but this has not been observed in the
case of x86.

Trust model
~~~~~~~~~~~
Core scheduling maintains trust relationships amongst groups of tasks by
assigning them a tag that is the same cookie value.

When a system with core scheduling boots, all tasks are considered to trust
each other. This is because the core scheduler does not have information about
trust relationships until userspace uses the above mentioned interfaces to
communicate them. In other words, all tasks have a default cookie value of 0
and are considered system-wide trusted. The forced-idling of siblings running
cookie-0 tasks is also avoided.

Once userspace uses the above mentioned interfaces to group sets of tasks, tasks
within such groups are considered to trust each other, but do not trust those
outside. Tasks outside the group also don't trust tasks within.

Limitations of core-scheduling
------------------------------
Core scheduling tries to guarantee that only trusted tasks run concurrently on a
core. But there could be a small window of time during which untrusted tasks run
concurrently, or the kernel could be running concurrently with a task not
trusted by the kernel.

IPI processing delays
~~~~~~~~~~~~~~~~~~~~~
Core scheduling selects only trusted tasks to run together. An IPI is used to
notify the siblings to switch to the new task. But there could be hardware
delays in receiving the IPI on some architectures (on x86, this has not been
observed). This may cause an attacker task to start running on a CPU before its
siblings receive the IPI. Even though the cache is flushed on entry to user
mode, victim tasks on siblings may populate data in the cache and
microarchitectural buffers after the attacker starts to run, and this opens up
a possibility of data leakage.

Open cross-HT issues that core scheduling does not solve
---------------------------------------------------------
1. For MDS
~~~~~~~~~~
Core scheduling cannot protect against MDS attacks between the siblings
running in user mode and the others running in kernel mode. Even though all
siblings run tasks which trust each other, when the kernel is executing
code on behalf of a task, it cannot trust the code running in the
sibling. Such attacks are possible for any combination of sibling CPU modes
(host or guest mode).

2. For L1TF
~~~~~~~~~~~
Core scheduling cannot protect against an L1TF guest attacker exploiting a
guest or host victim. This is because the guest attacker can craft invalid
PTEs which are not inverted due to a vulnerable guest kernel. The only
solution is to disable EPT (Extended Page Tables).

For both MDS and L1TF, if the guest vCPUs are configured to not trust each
other (by tagging them separately), then guest-to-guest attacks would go away.
Or it could be a system admin policy which considers guest-to-guest attacks as
a guest problem.

Another approach to resolve these would be to make every untrusted task on the
system not trust every other untrusted task. While this could reduce
parallelism of the untrusted tasks, it would still solve the above issues while
allowing system processes (trusted tasks) to share a core.

3. Protecting the kernel (IRQ, syscall, VMEXIT)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Unfortunately, core scheduling does not protect kernel contexts running on
sibling hyperthreads from one another. Prototypes of mitigations have been posted
to LKML to solve this, but it is debatable whether such windows are practically
exploitable, and whether the performance overhead of the prototypes is worth
it (not to mention the added code complexity).

Other Use cases
---------------
The main use case for core scheduling is mitigating the cross-HT vulnerabilities
with SMT enabled. There are other use cases where this feature could be used:

- Isolating tasks that need a whole core: Examples include realtime tasks and
  tasks that use SIMD instructions, etc.

- Gang scheduling: Requirements for a group of tasks that need to be scheduled
  together could also be realized using core scheduling. One example is vCPUs of
  a VM.