Files
RedBear-OS/local/patches/kernel/P9-numa-topology.patch
vasilito 34360e1e4f feat: P0-P6 kernel scheduler + relibc threading comprehensive implementation
P0-P2: Barrier SMP, sigmask/pthread_kill races, robust mutexes, RT scheduling, POSIX sched API
P3: PerCpuSched struct, per-CPU wiring, work stealing, load balancing, initial placement
P4: 64-shard futex table, REQUEUE, PI futexes (LOCK_PI/UNLOCK_PI/TRYLOCK_PI), robust futexes, vruntime tracking, min-vruntime SCHED_OTHER selection
P5: setpriority/getpriority, pthread_setaffinity_np, pthread_setname_np, pthread_setschedparam (Redox)
P6: Cache-affine scheduling (last_cpu + vruntime bonus), NUMA topology kernel hints + numad userspace daemon

Stability fixes: make_consistent stores 0 (dead TID fix), cond.rs error propagation, SPIN_COUNT adaptive spinning, Sys::open &str fix, PI futex CAS race, proc.rs lock ordering, barrier destroy

Patches: 33 kernel + 58 relibc patches, all tracked in recipes
Docs: KERNEL-SCHEDULER-MULTITHREAD-IMPROVEMENT-PLAN.md updated, SCHEDULER-REVIEW-FINAL.md created
Architecture: NUMA topology parsing stays userspace (numad daemon), kernel stores lightweight NumaTopology hints
2026-04-30 18:21:48 +01:00
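The P6 "cache-affine scheduling (last_cpu + vruntime bonus)" idea from the commit message can be illustrated standalone. This is a hedged sketch, not the kernel's actual code: `Task`, `CpuId`, `node_of`, and the bonus constants are simplified stand-ins invented for illustration; the principle shown is only that a task which last ran on the picking CPU (or on the same NUMA node) gets its vruntime discounted so it tends to win selection and keep warm caches.

```rust
// Illustrative values only; the real kernel's bonuses and types differ.
const LAST_CPU_BONUS: u64 = 2_000_000; // strong bonus: warm L1/L2 caches
const SAME_NODE_BONUS: u64 = 500_000; // weaker bonus: same NUMA node memory

#[derive(Clone, Copy, PartialEq)]
struct CpuId(u8);

struct Task {
    vruntime: u64,
    last_cpu: Option<CpuId>,
}

/// Hypothetical CPU-to-node mapping: four CPUs per node.
fn node_of(cpu: CpuId) -> u8 {
    cpu.0 / 4
}

/// Effective vruntime as seen from `this_cpu`; lower wins the pick.
fn effective_vruntime(task: &Task, this_cpu: CpuId) -> u64 {
    let bonus = match task.last_cpu {
        Some(last) if last == this_cpu => LAST_CPU_BONUS,
        Some(last) if node_of(last) == node_of(this_cpu) => SAME_NODE_BONUS,
        _ => 0,
    };
    task.vruntime.saturating_sub(bonus)
}

/// Min-vruntime selection over a run queue, biased by cache affinity.
fn pick_next<'a>(queue: &'a [Task], this_cpu: CpuId) -> Option<&'a Task> {
    queue.iter().min_by_key(|t| effective_vruntime(t, this_cpu))
}
```

With this bias, a task at vruntime 101 that last ran on the picking CPU beats a cold task at vruntime 100, which is the tie-breaking behavior the commit message describes.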

Diff

diff --git a/src/numa.rs b/src/numa.rs
new file mode 100644
index 0000000..40c5a06
--- /dev/null
+++ b/src/numa.rs
@@ -0,0 +1,64 @@
+/// NUMA topology hints for the kernel scheduler.
+/// NUMA discovery (SRAT/SLIT parsing) is performed by a userspace daemon
+/// (numad) via /scheme/acpi/, then pushed to the kernel via scheme:numa.
+/// The kernel stores a lightweight copy for O(1) scheduling lookups.
+use crate::cpu_set::{LogicalCpuId, LogicalCpuSet};
+use core::sync::atomic::{AtomicBool, Ordering};
+
+const MAX_NUMA_NODES: usize = 8;
+
+#[derive(Clone, Debug)]
+pub struct NumaHint {
+    pub node_id: u8,
+    pub cpus: LogicalCpuSet,
+}
+
+pub struct NumaTopology {
+    pub nodes: [Option<NumaHint>; MAX_NUMA_NODES],
+    pub initialized: AtomicBool,
+}
+
+impl NumaTopology {
+    pub const fn new() -> Self {
+        const NONE: Option<NumaHint> = None;
+        Self {
+            nodes: [NONE; MAX_NUMA_NODES],
+            initialized: AtomicBool::new(false),
+        }
+    }
+
+    pub fn node_for_cpu(&self, cpu: LogicalCpuId) -> Option<u8> {
+        for node in self.nodes.iter().flatten() {
+            if node.cpus.contains(cpu) {
+                return Some(node.node_id);
+            }
+        }
+        None
+    }
+
+    pub fn same_node(&self, cpu1: LogicalCpuId, cpu2: LogicalCpuId) -> bool {
+        self.node_for_cpu(cpu1) == self.node_for_cpu(cpu2)
+    }
+}
+
+static mut NUMA_TOPOLOGY: NumaTopology = NumaTopology::new();
+
+pub fn topology() -> &'static NumaTopology {
+    // Avoid taking `&` of a `static mut` directly (undefined-behavior-prone
+    // and rejected by newer Rust editions); go through a raw pointer instead.
+    unsafe { &*core::ptr::addr_of!(NUMA_TOPOLOGY) }
+}
+
+pub fn init_default() {
+    let topo = topology();
+    if topo.initialized.swap(true, Ordering::AcqRel) {
+        return;
+    }
+    unsafe {
+        let topo_mut = &mut *core::ptr::addr_of_mut!(NUMA_TOPOLOGY);
+        topo_mut.nodes[0] = Some(NumaHint {
+            node_id: 0,
+            cpus: LogicalCpuSet::all(),
+        });
+    }
+}