feat: kirigami builds (QML gate cleared)

- QNetworkReply stub header for Redox cross-build
- GuiPrivate + Network in find_package
- QElapsedTimer include fix
- networkAccessManager null stub in icon.cpp
- Primitives target links Qt6::Network for headers
# Red Bear OS — CPU/DMA/IRQ/MSI/Scheduler Fix Plan
**Date**: 2026-05-04
**Status**: Proposed
**Source of truth**: Linux kernel 7.0 (local/reference/linux-7.0/)
## 1. Problem Statement
Five critical integration gaps in the microkernel architecture:
| Gap | Severity | Impact |
|-----|----------|--------|
| MSI absent from kernel | CRITICAL | All NVMe/GPU/NIC on legacy INTx |
| DMA/IOMMU not integrated | CRITICAL | DMA buffers unprotected |
| PIT tick (148Hz) vs LAPIC (1000Hz) | HIGH | Scheduler 6x slower than Linux |
| Global scheduler lock | HIGH | Serializes all context switches |
| Thread creation (3 IPC hops) | HIGH | 3x slower than Linux clone() |
## 2. Phase 1: MSI/MSI-X in Kernel (Weeks 1-3)
### T1.1: MSI Capability Parsing
- File: kernel arch/x86_shared/device/msi.rs (new)
- Linux ref: arch/x86/kernel/apic/msi.c (391 lines)
- Parse MSI/MSI-X capability structures from PCI config
- Extract: Message Address, Message Data, Mask Bits, Pending Bits
- Support per-vector masking via MSI-X Table
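A minimal sketch of the capability walk, assuming a `read_cfg` closure that reads a 32-bit dword from the device's PCI config space (the accessor is hypothetical; the offsets and capability IDs are from the PCI spec):

```rust
/// PCI capability IDs from the PCI Local Bus spec.
const CAP_ID_MSI: u8 = 0x05;
const CAP_ID_MSIX: u8 = 0x11;

pub struct MsiCaps {
    pub msi_offset: Option<u8>,
    pub msix_offset: Option<u8>,
}

/// Walk the capability list looking for MSI/MSI-X structures.
pub fn find_msi_caps(read_cfg: impl Fn(u8) -> u32) -> MsiCaps {
    let mut caps = MsiCaps { msi_offset: None, msix_offset: None };
    // Status register bit 4 (dword 0x04, bit 20) = capability list present.
    if (read_cfg(0x04) >> 16) & (1 << 4) == 0 {
        return caps;
    }
    // Capability pointer lives at config offset 0x34; low 2 bits reserved.
    let mut ptr = (read_cfg(0x34) & 0xFC) as u8;
    while ptr != 0 {
        let header = read_cfg(ptr); // byte 0: cap ID, byte 1: next pointer
        match (header & 0xFF) as u8 {
            CAP_ID_MSI => caps.msi_offset = Some(ptr),
            CAP_ID_MSIX => caps.msix_offset = Some(ptr),
            _ => {}
        }
        ptr = ((header >> 8) & 0xFC) as u8;
    }
    caps
}
```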
### T1.2: MSI Message Composition
- Linux ref: __irq_msi_compose_msg()
- Compose APIC destination + vector into address/data pair
- Handle: dest mode (phys/logical), redirection hint
- Support: interrupt remapping (DMAR) for IOMMU
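A sketch of the composition for the common fixed-delivery, edge-triggered case (field layout per the Intel SDM; the remapped-format messages needed for the DMAR case differ and are omitted):

```rust
pub struct MsiMsg {
    pub address: u64, // written to the device's Message Address register
    pub data: u32,    // written to the device's Message Data register
}

/// Compose an address/data pair the way Linux's __irq_msi_compose_msg()
/// does for the non-remapped case: fixed delivery mode, edge triggered.
pub fn compose_msi_msg(dest_apic_id: u8, vector: u8, logical: bool, rh: bool) -> MsiMsg {
    // All MSI writes target the 0xFEEx_xxxx window decoded by the LAPIC.
    let mut address: u64 = 0xFEE0_0000;
    address |= (dest_apic_id as u64) << 12; // destination ID, bits 19:12
    if logical {
        address |= 1 << 2; // DM = logical destination mode
    }
    if rh {
        address |= 1 << 3; // RH = redirection hint (lowest priority)
    }
    // Data: vector in bits 7:0; delivery mode 000 (fixed) in bits 10:8;
    // trigger-mode bit 15 left 0 for edge.
    MsiMsg { address, data: vector as u32 }
}
```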
### T1.3: Vector Allocation Matrix
- File: kernel arch/x86_shared/device/vector.rs (new)
- Linux ref: arch/x86/kernel/apic/vector.c (1387 lines)
- Dynamic per-CPU vector allocation (replace static irq+32)
- Track: allocated/free per CPU, reserved system vectors
- Vector migration on CPU hotplug
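A toy version of the matrix, assuming dynamic vectors start at 0x30 and 0xFF is the spurious vector; the reserved ranges must be matched against the kernel's actual IDT layout:

```rust
const MAX_CPUS: usize = 64;
const FIRST_DYNAMIC: usize = 0x30; // below this: exceptions + fixed vectors
const SPURIOUS: usize = 0xFF;      // reserved, never handed out

/// 256-bit allocation bitmap per CPU, like arch/x86/kernel/apic/vector.c.
pub struct VectorMatrix {
    used: [[u64; 4]; MAX_CPUS],
}

impl VectorMatrix {
    pub const fn new() -> Self {
        VectorMatrix { used: [[0; 4]; MAX_CPUS] }
    }

    fn is_used(&self, cpu: usize, vec: usize) -> bool {
        self.used[cpu][vec / 64] & (1u64 << (vec % 64)) != 0
    }

    /// Pick the least-loaded CPU (fewest used vectors) and hand out its
    /// first free dynamic vector; replaces the static `irq + 32` scheme.
    pub fn alloc(&mut self) -> Option<(usize, u8)> {
        let cpu = (0..MAX_CPUS)
            .min_by_key(|&c| self.used[c].iter().map(|w| w.count_ones()).sum::<u32>())?;
        for vec in FIRST_DYNAMIC..SPURIOUS {
            if !self.is_used(cpu, vec) {
                self.used[cpu][vec / 64] |= 1u64 << (vec % 64);
                return Some((cpu, vec as u8));
            }
        }
        None
    }
}
```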
### T1.4: MSI IRQ Domain
- Modify: kernel scheme/irq.rs
- Register MSI IRQs via new scheme operations
- Dispatch through existing interrupt handler path
- Wire LAPIC timer to scheduler tick (partially done)
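A sketch of the dispatch side, assuming a per-vector handler table that the irq scheme fills in at registration time (the type and method names are hypothetical, not existing scheme operations):

```rust
/// Per-vector handler table; the vector allocated in T1.3 is the index.
pub struct MsiDomain {
    handlers: [Option<fn(u8)>; 256],
}

impl MsiDomain {
    pub const fn new() -> Self {
        MsiDomain { handlers: [None; 256] }
    }

    /// Called by the irq scheme when a driver daemon registers an MSI.
    pub fn register(&mut self, vector: u8, handler: fn(u8)) {
        self.handlers[vector as usize] = Some(handler);
    }

    /// Called from the low-level interrupt entry for any dynamic vector;
    /// rides the same path as legacy INTx, followed by a LAPIC EOI.
    pub fn dispatch(&self, vector: u8) {
        if let Some(handler) = self.handlers[vector as usize] {
            handler(vector);
        }
    }
}
```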
### T1.5: Userspace MSI Consumer
- File: redox-driver-sys source/src/irq.rs
- Expose MSI allocation/enable to driver daemons
- Quirk-aware fallback: FORCE_LEGACY, NO_MSI, NO_MSIX
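The fallback order could look like the following sketch; the quirk flag names match the bullets above, everything else is hypothetical:

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
pub enum IrqMode {
    MsiX,
    Msi,
    Intx,
}

pub const FORCE_LEGACY: u32 = 1 << 0;
pub const NO_MSI: u32 = 1 << 1;
pub const NO_MSIX: u32 = 1 << 2;

/// Prefer MSI-X, fall back to MSI, then legacy INTx, honouring quirks.
pub fn pick_irq_mode(quirks: u32, has_msix: bool, has_msi: bool) -> IrqMode {
    if quirks & FORCE_LEGACY != 0 {
        return IrqMode::Intx;
    }
    if has_msix && quirks & NO_MSIX == 0 {
        IrqMode::MsiX
    } else if has_msi && quirks & NO_MSI == 0 {
        IrqMode::Msi
    } else {
        IrqMode::Intx
    }
}
```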
## 3. Phase 2: DMA/IOMMU Integration (Weeks 3-5)
### T2.1: Coherent DMA API
- File: kernel memory/dma.rs (new)
- Linux ref: kernel/dma/mapping.c (1016 lines)
- dma_alloc_coherent(size, phys) -> vaddr
- dma_free_coherent(vaddr, size, phys)
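A signature sketch for the two calls, with `PhysAddr`/`VirtAddr` standing in for the kernel's own address types; the bodies are left to the frame allocator:

```rust
pub type PhysAddr = usize;
pub type VirtAddr = usize;

#[derive(Debug)]
pub struct DmaError;

/// Allocate physically contiguous memory, mapped coherent (uncached on
/// architectures that need it); `phys` receives the address the device
/// is programmed with.
pub fn dma_alloc_coherent(size: usize, phys: &mut PhysAddr) -> Result<VirtAddr, DmaError> {
    // 1. Grab contiguous frames from the frame allocator.
    // 2. Map them in kernel space with coherent attributes.
    // 3. Record the mapping so dma_free_coherent can undo both steps.
    todo!("backed by the frame allocator, per T2.1")
}

pub fn dma_free_coherent(vaddr: VirtAddr, size: usize, phys: PhysAddr) {
    todo!("unmap and return the frames")
}
```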
### T2.2: Streaming DMA API
- dma_map_single(cpu_addr, size, dir) -> dma_addr_t
- dma_unmap_single(dma_addr, size, dir)
- Cache coherence per architecture
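The per-architecture cache rules reduce to a small table; this sketch encodes the usual policy for non-coherent CPUs (x86 is cache-coherent for DMA and needs none of it):

```rust
#[derive(Clone, Copy)]
pub enum DmaDirection {
    ToDevice,
    FromDevice,
    Bidirectional,
}

pub enum CacheOp {
    Clean,      // write dirty lines back to memory
    Invalidate, // drop (possibly stale) lines
    None,
}

/// Cache maintenance a non-coherent architecture performs inside
/// dma_map_single (at_map = true) and dma_unmap_single (at_map = false).
pub fn cache_op(dir: DmaDirection, at_map: bool) -> CacheOp {
    match (dir, at_map) {
        (DmaDirection::ToDevice, true) => CacheOp::Clean,         // CPU wrote, device will read
        (DmaDirection::FromDevice, true) => CacheOp::Invalidate,  // drop lines before device writes
        (DmaDirection::FromDevice, false) => CacheOp::Invalidate, // drop again before CPU reads
        (DmaDirection::Bidirectional, true) => CacheOp::Clean,
        (DmaDirection::Bidirectional, false) => CacheOp::Invalidate,
        _ => CacheOp::None, // ToDevice at unmap: nothing to do
    }
}
```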
### T2.3: Scatter-Gather DMA
- Linux ref: lib/scatterlist.c
- dma_map_sg / dma_unmap_sg
- Discontiguous physical pages
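A minimal scatterlist sketch after lib/scatterlist.c; without an IOMMU the device address is just the physical address, while the T2.4 path would allocate IOVAs here instead:

```rust
/// One physically contiguous run of a larger, discontiguous buffer.
pub struct ScatterEntry {
    pub phys: u64,     // physical start of the run
    pub length: u32,   // bytes in the run
    pub dma_addr: u64, // device-visible address, filled in by dma_map_sg
}

/// Map every entry; returns the number of DMA segments the device
/// should use (an IOMMU may merge entries and return fewer).
pub fn dma_map_sg(sg: &mut [ScatterEntry]) -> usize {
    for entry in sg.iter_mut() {
        // Direct-mapped case: device address == physical address.
        entry.dma_addr = entry.phys;
    }
    sg.len()
}
```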
### T2.4: IOMMU DMA Remapping
- File: iommu daemon dma_remap.rs (new)
- Wire dma_map_* through IOMMU page tables
- IOVA allocation, page table programming, TLB invalidation
- Integrate with existing 4411-line iommu daemon
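The IOVA side could start as simply as this bump-allocator sketch; a real allocator needs free/reuse, per-domain state, and an IOTLB invalidation after every unmap:

```rust
const PAGE_MASK: u64 = 0xFFF; // 4 KiB IOMMU pages assumed

/// Hands out device-visible addresses from a fixed per-domain window.
pub struct IovaAllocator {
    next: u64,
    end: u64,
}

impl IovaAllocator {
    pub fn new(start: u64, end: u64) -> Self {
        IovaAllocator { next: start, end }
    }

    /// Allocate `size` bytes of IOVA space, rounded up to page granularity;
    /// the caller then programs IOMMU page tables for iova..iova+size.
    pub fn alloc(&mut self, size: u64) -> Option<u64> {
        let size = (size + PAGE_MASK) & !PAGE_MASK;
        if self.end - self.next < size {
            return None;
        }
        let iova = self.next;
        self.next += size;
        Some(iova)
    }
}
```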
### T2.5: SWIOTLB Fallback
- Linux ref: kernel/dma/swiotlb.c
- Bounce buffers for devices that can only address <4GB
- DMA_TO_DEVICE / DMA_FROM_DEVICE copy
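The copy rules are the whole trick, as in kernel/dma/swiotlb.c; this sketch spells them out for a single bounce slot (the slot pool below 4GB is assumed to exist):

```rust
#[derive(Clone, Copy)]
pub enum DmaDirection {
    ToDevice,
    FromDevice,
    Bidirectional,
}

/// Copy between the original buffer and its bounce slot. ToDevice data
/// moves into the slot at map time; FromDevice data moves back out at
/// unmap time; Bidirectional does both.
pub fn swiotlb_bounce(orig: &mut [u8], slot: &mut [u8], dir: DmaDirection, at_map: bool) {
    match (dir, at_map) {
        (DmaDirection::ToDevice, true) | (DmaDirection::Bidirectional, true) => {
            slot.copy_from_slice(orig); // device will read the slot
        }
        (DmaDirection::FromDevice, false) | (DmaDirection::Bidirectional, false) => {
            orig.copy_from_slice(slot); // CPU reads what the device wrote
        }
        _ => {} // FromDevice at map / ToDevice at unmap: no copy needed
    }
}
```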
## 4. Phase 3: Scheduler Improvements (Weeks 4-6)
### T3.1: LAPIC Timer as Primary Tick
- Calibrate LAPIC timer against PIT (one-time)
- Set Periodic mode at 1000Hz (1ms tick)
- PIT fallback if LAPIC fails
- Already partially done: timer enabled, IDT entry added
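A calibration sketch, with `lapic_read`/`lapic_write`/`pit_wait_10ms` standing in for the kernel's LAPIC MMIO accessors and a PIT busy-wait (all three are assumptions; register offsets are the xAPIC MMIO layout from the Intel SDM):

```rust
// LAPIC timer register offsets (xAPIC MMIO layout).
const LVT_TIMER: u32 = 0x320;
const TIMER_INIT_COUNT: u32 = 0x380;
const TIMER_CUR_COUNT: u32 = 0x390;
const TIMER_DIVIDE: u32 = 0x3E0;
const LVT_PERIODIC: u32 = 1 << 17;

/// One-time calibration: let the LAPIC timer free-run for a PIT-measured
/// 10 ms window, then derive the initial count for a 1 ms (1000Hz) tick.
pub fn calibrate_lapic_1khz(
    lapic_read: impl Fn(u32) -> u32,
    lapic_write: impl Fn(u32, u32),
    pit_wait_10ms: impl Fn(),
) -> u32 {
    lapic_write(TIMER_DIVIDE, 0b0011);       // divide-by-16
    lapic_write(TIMER_INIT_COUNT, u32::MAX); // start counting down
    pit_wait_10ms();                         // busy-wait on the PIT
    let ticks_per_10ms = u32::MAX - lapic_read(TIMER_CUR_COUNT);
    ticks_per_10ms / 10
}

/// Arm the periodic 1000Hz tick with the calibrated count.
pub fn start_periodic_tick(lapic_write: impl Fn(u32, u32), vector: u8, count_per_ms: u32) {
    lapic_write(LVT_TIMER, vector as u32 | LVT_PERIODIC);
    lapic_write(TIMER_INIT_COUNT, count_per_ms);
}
```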
### T3.2: Per-CPU Scheduler Locks
- Replace global CONTEXT_SWITCH_LOCK with per-CPU spinlock
- Lock-free runqueue manipulation
- Cross-CPU lock only during load balancing
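The shape of the change, sketched with std types for brevity (the kernel would use its own spinlock and an intrusive list):

```rust
use std::collections::VecDeque;
use std::sync::Mutex;

pub type ThreadId = usize;

/// One runqueue per CPU; the hot path locks only its own queue, so two
/// CPUs context-switching simultaneously never contend.
pub struct RunQueue {
    queue: Mutex<VecDeque<ThreadId>>,
}

impl RunQueue {
    pub fn new() -> Self {
        RunQueue { queue: Mutex::new(VecDeque::new()) }
    }

    /// Local context switch: no cross-CPU lock involved.
    pub fn pick_next(&self) -> Option<ThreadId> {
        self.queue.lock().unwrap().pop_front()
    }

    /// Remote wakeup or balancing takes the *target* CPU's lock only.
    pub fn enqueue(&self, thread: ThreadId) {
        self.queue.lock().unwrap().push_back(thread);
    }
}
```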
### T3.3: Load Balancing
- Linux ref: kernel/sched/fair.c load_balance()
- Idle CPUs steal work from overloaded CPUs
- Per-CPU load average, nr_running
- IPI-based context pull
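An idle-steal sketch over per-CPU queues like the ones above; locks are taken in index order so two CPUs stealing from each other cannot deadlock:

```rust
use std::collections::VecDeque;
use std::sync::Mutex;

/// Pull up to half of the busiest remote queue onto `my_cpu`'s queue;
/// an IPI would then kick the pulled work onto the CPU.
pub fn steal_work(my_cpu: usize, queues: &[Mutex<VecDeque<usize>>]) {
    // Crude load metric: queue length (a real balancer tracks load averages).
    let Some(victim) = (0..queues.len())
        .filter(|&c| c != my_cpu)
        .max_by_key(|&c| queues[c].lock().unwrap().len())
    else {
        return;
    };
    // Lock in index order to avoid an AB-BA deadlock.
    let (lo, hi) = (my_cpu.min(victim), my_cpu.max(victim));
    let first = queues[lo].lock().unwrap();
    let second = queues[hi].lock().unwrap();
    let (mut mine, mut theirs) = if my_cpu < victim { (first, second) } else { (second, first) };
    for _ in 0..theirs.len() / 2 {
        if let Some(thread) = theirs.pop_back() {
            mine.push_back(thread);
        }
    }
}
```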
### T3.4: RT Scheduling Class
- Linux ref: kernel/sched/rt.c
- FIFO and Round-Robin classes
- Priority inheritance
- RT throttling: cap RT execution at 95% CPU per second
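The cap mirrors Linux's defaults (sched_rt_runtime_us = 950000 against sched_rt_period_us = 1000000); the accounting hook feeding this check is an assumption:

```rust
/// Linux's default RT bandwidth: 950 ms of RT execution per 1000 ms period.
const RT_PERIOD_US: u64 = 1_000_000;
const RT_RUNTIME_US: u64 = 950_000;

/// Called from the tick: once RT classes have burned their budget for the
/// current period, they are parked until the period rolls over, so normal
/// threads always get at least 5% of the CPU.
pub fn rt_throttled(rt_exec_this_period_us: u64) -> bool {
    debug_assert!(RT_RUNTIME_US <= RT_PERIOD_US);
    rt_exec_this_period_us >= RT_RUNTIME_US
}
```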
### T3.5: TSC-Deadline Timer
- Use IA32_TSC_DEADLINE MSR for precise tick
- True tickless operation
- TSC calibration via HPET or PIT
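An arming sketch, assuming the TSC frequency in kHz has already been calibrated; IA32_TSC_DEADLINE is MSR 0x6E0, and the LVT timer entry must have its TSC-deadline mode bit set beforehand:

```rust
const IA32_TSC_DEADLINE: u32 = 0x6E0;

#[cfg(target_arch = "x86_64")]
unsafe fn wrmsr(msr: u32, value: u64) {
    let lo = value as u32;
    let hi = (value >> 32) as u32;
    core::arch::asm!("wrmsr", in("ecx") msr, in("eax") lo, in("edx") hi);
}

/// Arm the next tick `ns` nanoseconds from now. With one deadline per
/// tick the kernel can skip idle ticks entirely (true tickless operation).
#[cfg(target_arch = "x86_64")]
pub unsafe fn arm_tick(ns: u64, tsc_khz: u64) {
    let now = core::arch::x86_64::_rdtsc();
    let delta = ns * tsc_khz / 1_000_000; // kHz * ns / 1e6 = TSC ticks
    wrmsr(IA32_TSC_DEADLINE, now + delta);
}
```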
## 5. Phase 4: Thread Creation (Weeks 6-7)
### T4.1: Batched Thread Creation
- Batch new-thread requests (reduce IPC)
- Pre-allocate stack pages during fork
### T4.2: Kernel Thread Pool
- Pre-create idle kernel threads
- Reuse via object pool
### T4.3: Shared Memory IPC
- Use shm for proc scheme bulk ops
- Avoid data copy through IPC channel
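A sketch of the shared ring that T4.1 and T4.3 combine into: the runtime queues requests locally and flushes the whole batch with one IPC call. Layout and names are hypothetical:

```rust
/// One thread-creation request; lives in a page shared with the proc scheme.
#[repr(C)]
pub struct NewThreadReq {
    pub entry: usize,     // initial instruction pointer
    pub stack_top: usize, // pre-allocated during fork, per T4.1
    pub tls: usize,
}

const RING_SLOTS: usize = 64;

/// Single-producer/single-consumer ring in shared memory: the runtime
/// produces, the proc scheme consumes, and the request payloads never
/// cross the IPC channel itself (T4.3).
#[repr(C)]
pub struct ReqRing {
    pub head: u32, // consumer position (proc scheme)
    pub tail: u32, // producer position (runtime)
    pub slots: [NewThreadReq; RING_SLOTS],
}

impl ReqRing {
    /// Queue a request; `false` means the ring is full and the producer
    /// should flush (one IPC call covering the whole batch) and retry.
    pub fn push(&mut self, req: NewThreadReq) -> bool {
        let next = (self.tail + 1) % RING_SLOTS as u32;
        if next == self.head {
            return false;
        }
        self.slots[self.tail as usize] = req;
        self.tail = next;
        true
    }
}
```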
## 6. Dependencies
Phase 1 (MSI): T1.1 -> T1.2 -> T1.3 -> T1.4 -> T1.5
Phase 2 (DMA): T2.1 -> T2.2 -> T2.3 -> T2.4 -> T2.5
Phase 3 (Sched): T3.1 -> T3.5 -> T3.2 -> T3.3 -> T3.4
Phase 4 (Thread): T4.1 -> T4.2 -> T4.3
Phases 1 and 2 are independent and can run in parallel, except that T2.4 depends on T1.3.
T3.1 is already partially done and can start immediately.
## 7. Timeline
| Phase | Duration | Cumulative |
|-------|----------|------------|
| Phase 1 (MSI) | 3 weeks | Week 3 |
| Phase 2 (DMA/IOMMU) | 3 weeks | Week 5 |
| Phase 3 (Scheduler) | 3 weeks | Week 6 |
| Phase 4 (Threads) | 2 weeks | Week 7 |
Total: 7 weeks, with two developers running Phases 1 and 2 in parallel.
## 8. Success Metrics
| Metric | Before | After |
|--------|--------|-------|
| Scheduler tick | 148Hz (PIT) | 1000Hz (LAPIC) |
| NVMe throughput | INTx shared | MSI-X 4+ queues |
| Scheduling latency (one tick) | ~6.75ms | ~1ms |
| Thread create | 3 IPC hops | 2 IPC hops |
| DMA safety | Unprotected | IOMMU-mapped |