feat: build system transition to release fork + archive hardening
Release fork infrastructure:
- REDBEAR_RELEASE=0.1.1 with offline enforcement (fetch/distclean/unfetch blocked)
- 195 BLAKE3-verified source archives in standard format
- Atomic provisioning via provision-release.sh (staging + .complete sentry)
- 5-phase improvement plan: restore format auto-detection, source tree validation (validate-source-trees.py), archive-map.json, REPO_BINARY fallback

Archive normalization:
- Removed 87 duplicate/unversioned archives from shared pool
- Regenerated all archives in consistent format with source/ + recipe.toml
- BLAKE3SUMS and manifest.json generated from stable tarball set

Patch management:
- verify-patches.sh: pre-sync dry-run report (OK/REVERSED/CONFLICT)
- 121 upstream-absorbed patches moved to absorbed/ directories
- 43 active patches verified clean against rebased sources
- Stress test: base updated to upstream HEAD, relibc reset and patched

Compilation fixes:
- relibc: Vec imports in redox-rt (proc.rs, lib.rs, sys.rs)
- relibc: unsafe from_raw_parts in mod.rs (2024 edition)
- fetch.rs: rev comparison handles short/full hash prefixes
- kibi recipe: corrected rev mismatch

New scripts: restore-sources.sh, provision-release.sh, verify-sources-archived.sh, check-upstream-releases.sh, validate-source-trees.py, verify-patches.sh, repair-archive-format.sh, generate-manifest.py

Documentation: AGENTS.md, README.md, local/AGENTS.md updated for release fork model
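The atomic-provisioning pattern named above (staging directory plus a `.complete` sentry) can be sketched as follows; all paths and names here are illustrative, not the real `provision-release.sh`:

```shell
# Illustrative sketch of atomic provisioning: build the tree in a staging
# directory, rename it into place, then drop a .complete sentry file so
# interrupted runs are detectable. Not the real provision-release.sh.
set -eu

RELEASE_DIR="demo-sources/redbear-0.1.1"
STAGING="${RELEASE_DIR}.staging"

rm -rf "$STAGING" "$RELEASE_DIR"
mkdir -p "$STAGING"

# ... extract verified archives into "$STAGING" here ...
echo "demo" > "$STAGING/recipe.toml"

mv "$STAGING" "$RELEASE_DIR"    # atomic on the same filesystem
touch "$RELEASE_DIR/.complete"  # sentry: provisioning finished

# Consumers trust the tree only when the sentry exists.
[ -f "$RELEASE_DIR/.complete" ] && echo "provisioned"
```

The rename-into-place step is what makes the result all-or-nothing: a crash before the `mv` leaves only the staging directory, and a crash before the `touch` leaves a tree without the sentry.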
+2
-2
@@ -1,6 +1,5 @@
/build/
/prefix/
.config
**/my_*

.idea/
@@ -83,7 +82,8 @@ local/cache/pkgar/
!local/cache/pkgar/**
Packages/redbear-firmware.pkgar
packages/
sources/
sources/x86_64-unknown-redox/
sources/*.tar.gz
local/linux-kernel-cache/
local/recipes/kde/kwin/**
!local/recipes/kde/kwin/recipe.toml

@@ -11,17 +11,13 @@ Red Bear OS build system orchestrator — fetches, builds, and packages ~100+ Gi
into a bootable Redox image. Uses a Makefile + Rust "cookbook" tool + TOML configs.
Languages: Rust (core), C (ported packages), TOML (config), Make (build orchestration).

RedBearOS should be treated as an overlay distribution on top of Redox in the same way Ubuntu
relates to Debian:
RedBearOS is a **full fork** of Redox OS — based on frozen, archived source snapshots.
Sources are immutable and never auto-refreshed from upstream. All changes are explicit,
human-initiated operations. Durable Red Bear state belongs in `local/patches/`,
`local/recipes/`, `local/docs/`, and tracked Red Bear configs.

- Redox is upstream
- Red Bear carries integration, packaging, validation, and subsystem overlays on top
- upstream-owned source trees are refreshable working copies
- durable Red Bear state belongs in `local/patches/`, `local/recipes/`, `local/docs/`, and tracked
Red Bear configs

If we can fetch refreshed upstream sources, reapply our overlays, and rebuild successfully, the
project is in the right shape for long-term maintenance.
The current baseline is **Red Bear OS 0.1.0** (Redox snapshot at build-system commit `f55acba68`).
All recipe sources are pinned and archived in `sources/redbear-0.1.0/`.

## STRUCTURE

@@ -172,9 +168,9 @@ only inside a fetched source tree is not preserved.
2. **Wire the patch** into the recipe's `recipe.toml` `patches = [...]` list.
3. **Commit** the patch file and recipe change before the session ends.

**Why:** `make distclean`, `make clean`, upstream source refreshes, and `sync-upstream.sh` all
discard or replace `recipes/*/source/` trees. Only `local/patches/`, `local/recipes/`, tracked
configs, and `local/docs/` survive.
**Why:** `make distclean`, `make clean`, and source provisioning all
discard or replace `recipes/*/source/` trees. Only `local/patches/`, `local/recipes/`,
tracked configs, `local/docs/`, and `sources/redbear-0.1.0/` survive.

**Examples of changes that require immediate patching:**

@@ -255,24 +251,20 @@ local/patches/
| Script | Purpose |
|--------|---------|
| `local/scripts/apply-patches.sh` | Apply all build-system patches + create recipe symlinks |
| `local/scripts/sync-upstream.sh` | Fetch upstream + rebase Red Bear OS commits + verify symlinks |
| `local/scripts/provision-release.sh` | Provision new release from Redox ref + archive sources |
| `local/scripts/check-upstream-releases.sh` | Check for new Redox snapshots (read-only) |

### Updating from Upstream
### Release Model (Fork)

Red Bear OS is a full fork based on frozen Redox snapshots. Sources are immutable
and never auto-refreshed. The current baseline is 0.1.0.

```bash
# Automated (preferred):
./local/scripts/sync-upstream.sh            # Rebase Red Bear OS onto latest upstream
./local/scripts/sync-upstream.sh --dry-run  # Preview conflicts first
# Check for newer Redox snapshots (read-only, zero side effects):
./local/scripts/check-upstream-releases.sh

# Manual:
git remote add upstream-redox https://github.com/redox-os/redox.git  # once
git fetch upstream-redox master
git rebase upstream-redox/master  # replays Red Bear OS commits on new upstream

# Nuclear option (if rebase fails badly):
git rebase --abort
git reset --hard upstream-redox/master
./local/scripts/apply-patches.sh --force  # apply from scratch via patch files
# Provision a new release (explicit, human-initiated only):
./local/scripts/provision-release.sh --ref=<redox-tag> --release=0.2.0 --dry-run
```
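The commit introducing this model also adds a `verify-patches.sh` pre-sync dry-run report (OK/REVERSED/CONFLICT). The underlying check can be sketched with plain `patch`; paths here are throwaway, and the real logic lives in `local/scripts/verify-patches.sh`:

```shell
# Throwaway demo of a pre-sync dry-run check: a patch that still applies
# cleanly reports OK, and --dry-run leaves the tree untouched.
rm -rf demo-tree demo.patch
mkdir demo-tree
printf 'old\n' > demo-tree/file.txt
cat > demo.patch <<'EOF'
--- a/file.txt
+++ b/file.txt
@@ -1 +1 @@
-old
+new
EOF
if patch -d demo-tree -p1 -s --dry-run < demo.patch; then
  echo "OK: demo.patch"
fi
```

A patch that is already applied would fail this forward dry-run but pass with `--reverse`, which is how a REVERSED status can be distinguished from a genuine CONFLICT.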
## AMD-FIRST INTEGRATION PATH
@@ -342,7 +334,7 @@ Phase 1 (runtime substrate) → Phase 2 (software compositor) → Phase 3 (KWin
6. `redbear-sessiond` — `local/recipes/system/redbear-sessiond/source/` — Rust D-Bus session broker exposing `org.freedesktop.login1` subset for KWin (uses `zbus`)
7. `redbear-dbus-services` — `local/recipes/system/redbear-dbus-services/` — D-Bus activation `.service` files and XML policy files for system and session buses

All custom work goes in `local/` — see `local/AGENTS.md` for overlay usage.
All custom work goes in `local/` — see `local/AGENTS.md` for fork model usage.

## NOTES

+4
-4
@@ -76,12 +76,12 @@ You can read the best practices and guidelines on the [Best practices and guidel

## Repository Model for Contributors

RedBearOS should be treated as an overlay distribution on top of Redox, in the same way Ubuntu
RedBearOS should be treated as a full fork of Redox, in the same way Ubuntu
relates to Debian.

That means contributors should keep this separation in mind:

- upstream-owned trees such as `recipes/*/source/` are refreshable working copies,
- upstream-owned trees such as `recipes/*/source/` are immutable archived release snapshots,
- durable Red Bear-specific work belongs in `local/patches/`, `local/recipes/`, `local/docs/`, and
tracked Red Bear configs,
- if a change exists only in an upstream-owned source tree, it is not yet preserved properly for
@@ -95,7 +95,7 @@ upstream promotes it to first-class status.
So for contributors:

- upstream WIP may still be a useful input/reference,
- but fixes intended for Red Bear shipping should normally land in the Red Bear overlay,
- but fixes intended for Red Bear shipping should normally land in the Red Bear release fork,
- and when upstream later catches up, Red Bear should prefer upstream and retire local patches or
local recipe copies that are no longer needed.

@@ -129,7 +129,7 @@ Since **Rust** is a relatively small and new language compared to others like C

Please follow our [Git style](https://doc.redox-os.org/book/creating-proper-pull-requests.html) for pull requests.

For user-visible work, keep the root [`CHANGELOG.md`](CHANGELOG.md) current and refresh the
README "What's New" section with the latest highlights so GitHub visitors can immediately see what
changed.

@@ -118,6 +118,9 @@ endif # PODMAN_BUILD
# unfetch local overlay recipes unless REDBEAR_ALLOW_LOCAL_UNFETCH=1 is set.
# This is the safe default for Red Bear OS. local/ is NEVER deleted.
distclean:
ifneq ($(REDBEAR_RELEASE),)
	$(error distclean is disabled in release mode (REDBEAR_RELEASE=$(REDBEAR_RELEASE)). Sources are immutable. Use: make clean (build artifacts only, safe))
endif
ifeq ($(PODMAN_BUILD),1)
ifneq ("$(wildcard $(CONTAINER_TAG))","")
	$(PODMAN_RUN) make $@

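The `REDBEAR_RELEASE` guard above can be mirrored in plain shell. This sketch only illustrates the refuse-when-pinned logic; it is not part of the Makefile:

```shell
# Shell mirror of the Makefile guard: when REDBEAR_RELEASE is set,
# destructive targets refuse to run.
REDBEAR_RELEASE=0.1.1
if [ -n "${REDBEAR_RELEASE:-}" ]; then
  action="refused: sources are immutable (REDBEAR_RELEASE=$REDBEAR_RELEASE)"
else
  action="running distclean"
fi
echo "$action"
```

Because the Makefile uses a read-time `ifneq` around a `$(error ...)` recipe line, the failure happens before any file is touched, which is the same fail-fast behavior this sketch models.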
@@ -16,14 +16,13 @@

---

Red Bear OS is a derivative of [Redox OS](https://www.redox-os.org) — a general-purpose, Unix-like, microkernel-based operating system written in Rust. It tracks upstream Redox, incorporating its improvements while adding custom drivers, filesystems, and hardware support.
Red Bear OS is a derivative of [Redox OS](https://www.redox-os.org) — a general-purpose, Unix-like, microkernel-based operating system written in Rust. It is a full fork based on frozen Redox snapshots, adding custom drivers, filesystems, and hardware support.

RedBearOS should be understood as an overlay distribution on top of Redox in the same way Ubuntu
relates to Debian:
RedBearOS is a **full fork** of Redox OS — based on frozen, archived source snapshots at release 0.1.0.

- Redox is upstream
- Red Bear carries integration, packaging, validation, and subsystem overlays on top
- upstream-owned source trees are refreshable working copies
- Red Bear carries integration, packaging, validation, and subsystem work on top
- upstream-owned source trees are immutable archived release snapshots
- durable Red Bear state belongs in `local/patches/`, `local/recipes/`, `local/docs/`, and tracked
Red Bear configs

@@ -31,26 +30,26 @@ Operational resilience policy:

- package/source usage is local-first by default,
- local copies are used continuously for builds and recovery workflows,
- upstream package refresh is performed only when explicitly requested.
- upstream package provisioning is performed only when explicitly requested.

For **upstream WIP recipes specifically**, Red Bear uses a stricter rule:

1. if an upstream recipe or subsystem is still marked WIP, Red Bear treats it as a local project
2. we copy, fix, validate, and ship that work from our local overlay until it is stable enough for us
2. we copy, fix, validate, and ship that work from our local release fork until it is stable enough for us
3. we continue updating our local copy from upstream WIP work when useful, but we do not rely on the
upstream WIP recipe itself as our shipped source of truth
4. once upstream removes the WIP status and the recipe/subsystem becomes a first-class supported
part of Redox, Red Bear reevaluates and should prefer the upstream version over the local copy

That policy exists so the project can pull refreshed upstream sources regularly and still rebuild
predictably from the Red Bear-owned overlay.
That policy exists so the project can provision refreshed upstream sources deliberately and still rebuild
predictably from the Red Bear-owned release fork.

## What's New

- KWin Wayland is now treated as the only intended Red Bear desktop direction in the tracked plans, build defaults, live profile wiring, and profile guidance.
- KDE bring-up moved forward: the `redbear-full` desktop-capable surface carries the Qt6/KDE stack in-tree, and the KDE recipe tree is now populated.
- Native Red Bear runtime tooling expanded with `redbear-info`, `redbear-hwutils` (`lspci`, `lsusb`), and a Redox-native `netctl` flow.
- Build and status docs were refreshed to distinguish current in-tree progress from older historical roadmap text.

See [CHANGELOG.md](./CHANGELOG.md) for the running user-visible change log.

@@ -157,10 +156,10 @@ Current validation language should be read this way:
├── recipes/        # Package recipes (~100+ packages, 26 categories)
├── mk/             # Makefile build orchestration
├── src/            # Cookbook Rust tool (repo binary, cook logic)
├── local/          # ← Red Bear OS custom work (survives upstream updates)
├── local/          # ← Red Bear OS custom work (survives source provisioning)
│   ├── patches/    # Kernel, base, relibc patches
│   ├── recipes/    # Custom packages (drivers, GPU, system, branding)
│   ├── scripts/    # sync-upstream.sh, apply-patches.sh
│   ├── scripts/    # provision-release.sh, check-upstream-releases.sh
│   ├── Assets/     # Branding (icon, boot background)
│   └── docs/       # Integration documentation
├── docs/           # Architecture guides
@@ -234,14 +233,24 @@ passive report over live system surfaces and is intended to help answer question
Use `redbear-info --verbose` for evidence-backed human output, `redbear-info --json` for machine-
readable diagnostics, and `redbear-info --test` for suggested follow-up commands.

## Sync with Upstream Redox
## Release Model (Full Fork)

Red Bear OS is a **full fork** based on frozen Redox OS snapshots. Sources are immutable and never auto-refreshed from upstream. The current baseline is **0.1.0** (Redox snapshot at `f55acba68`). Build-dependent sources are archived in `sources/redbear-0.1.0/` (216 BLAKE3-verified archives).

Builds are offline by default — no network access during compilation.

```bash
./local/scripts/sync-upstream.sh            # Rebase onto latest Redox
./local/scripts/sync-upstream.sh --dry-run  # Preview conflicts first
# Build from archived sources (offline by default)
./local/scripts/build-redbear.sh redbear-full

# Check for newer Redox snapshots (read-only, zero side effects)
./local/scripts/check-upstream-releases.sh

# Provision a new release (explicit, human-initiated only)
./local/scripts/provision-release.sh --ref=<redox-tag> --release=0.2.0 --dry-run
```

The `local/` directory is never touched by upstream updates. Recipe patches for kernel and base are symlinked from `local/patches/` — protected from `make clean` and `make distclean`.
The `local/` directory is never touched by source provisioning. Recipe patches are symlinked from `local/patches/` — protected from `make clean` and `make distclean`.

## Resources

@@ -8,19 +8,18 @@

## Repository Model Reminder

Build this repository using the Red Bear overlay model:
Build this repository using the Red Bear release fork model:

- upstream-owned source trees are refreshable working copies,
- sources are frozen, immutable release snapshots at baseline 0.1.0,
- durable Red Bear state lives in `local/patches/`, `local/recipes/`, `local/docs/`, and tracked
Red Bear configs,
- upstream WIP recipes are useful inputs, but should not automatically be treated as the durable
shipping source of truth for Red Bear.
- build from archived sources offline by default; provision new releases explicitly via `provision-release.sh`.

Resilience policy for package/source inputs:

- default build posture is local-first/offline-capable,
- local copies are used continuously unless upstream refresh is explicitly requested,
- upstream refresh is an explicit operation, not an implicit background requirement for normal
- local copies are used continuously unless release provisioning is explicitly requested,
- release provisioning is an explicit operation, not an implicit background requirement for normal
builds.

## Prerequisites
@@ -210,11 +209,11 @@ sudo dd if=build/x86_64/harddrive.img of=/dev/sdX bs=4M status=progress
./target/release/repo cook recipes/wip/kde/kwin
```

Under the Red Bear overlay model, remember:
Under the Red Bear release fork model, remember:

- `recipes/*/source/` is a refreshable working tree,
- `recipes/*/source/` is an immutable archived release snapshot,
- Red Bear-owned shipping deltas should be preserved under `local/patches/` and `local/recipes/`,
- if a recipe is still upstream WIP, Red Bear may still choose to ship from `local/recipes/` instead.
- sources are built offline by default; provision new releases via `provision-release.sh`.

### Understanding Recipe Format

@@ -264,7 +263,7 @@ cp target/release/myapp ${COOKBOOK_STAGE}/usr/bin/
| `PREFIX_BINARY` | `1` | Use prebuilt toolchain (faster) |
| `REPO_BINARY` | `0` | Use prebuilt packages (faster, no compilation) |
| `REPO_NONSTOP` | `0` | Continue on build errors |
| `REPO_OFFLINE` | `0` | Don't update source repos; Red Bear policy treats local-first sourcing as the normal operating mode and upstream refresh as explicit opt-in |
| `REPO_OFFLINE` | `0` | Don't update source repos; Red Bear policy treats local-first sourcing as the normal operating mode and release provisioning as explicit opt-in |

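The flags in the table are ordinary environment variables; a wrapper script might pin the offline posture like this (the `echo` stands in for the real `make` invocation):

```shell
# Pin the local-first defaults before invoking the build; the variable
# names mirror the table above, the echo stands in for make.
export REPO_OFFLINE=1
export PREFIX_BINARY=1
echo "make all CONFIG_NAME=redbear-full REPO_OFFLINE=$REPO_OFFLINE"
```

Exporting the flags rather than passing them per-invocation keeps recursive `make` calls and helper scripts on the same offline posture.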
### Environment Variables for Recipes
@@ -17,26 +17,26 @@ Detailed subsystem planning remains in focused documents under `local/docs/`.

## Repository Model

RedBearOS should be understood as an overlay distribution on top of Redox in the same way Ubuntu
RedBearOS should be understood as a full fork of Redox in the same way Ubuntu
relates to Debian.

- Redox is upstream.
- Red Bear carries integration, packaging, validation, and subsystem overlays on top.
- Upstream-owned source trees are refreshable working copies.
- Red Bear carries integration, packaging, validation, and subsystem work on top.
- Upstream-owned source trees are immutable archived release snapshots.
- Durable Red Bear state belongs in `local/patches/`, `local/recipes/`, `local/docs/`, and tracked
Red Bear configs.

The project is in the right long-term shape only when refreshed upstream sources can be fetched,
Red Bear overlays can be reapplied, and the project still rebuilds successfully.
The project is in the right long-term shape only when archived release sources can be provisioned,
Red Bear patches can be reapplied, and the project still rebuilds successfully.

## Ownership Rules

### Upstream-owned layer

These are refreshable working inputs, not durable Red Bear storage:
These are immutable archived release sources, not durable Red Bear storage:

- `recipes/*/source/`
- most of `recipes/` outside local overlay symlinks
- most of `recipes/` outside symlinks into `local/recipes/`
- mainline configs such as `config/desktop.toml` and `config/minimal.toml`
- generated build outputs under `target/`, `build/`, `repo/`, and recipe-local `target/*`

@@ -63,7 +63,7 @@ If an upstream recipe or subsystem is still marked WIP, Red Bear treats it as a
That means:

1. upstream WIP can be used as an input and reference,
2. but Red Bear should fix and ship from the local overlay while the work is still WIP,
2. but Red Bear should fix and ship from the local release fork while the work is still WIP,
3. and once upstream promotes that work to first-class supported status, Red Bear should reevaluate
and prefer upstream where appropriate.

@@ -80,7 +80,7 @@ That means:

- functionality is delivered as packages,
- profiles are composed from packages and package groups,
- integration should prefer packaging, configuration, and overlays over invasive upstream rewrites.
- integration should prefer packaging, configuration, and local patches over invasive upstream rewrites.

### Validation over claims

@@ -148,11 +148,11 @@ The current repo is no longer at a greenfield or “missing everything” stage.

The current evidence-backed baseline is:

- the Red Bear overlay model is documented and in active use,
- the Red Bear release fork model is documented and in active use,
- major local subsystem plans exist under `local/docs/`,
- native wired networking is present,
- Qt6 and major downstream desktop dependencies build,
- Wayland-facing relibc compatibility surfaces now rebuild from a refreshed upstream relibc source
- Wayland-facing relibc compatibility surfaces now rebuild from an archived upstream relibc source
tree via local patch carriers,
- `libwayland` and `qtbase` build successfully from the reconstructed relibc state,
- the Red Bear-native greeter/login path now has a bounded passing runtime proof, while broader KDE/KWin session stability is still not yet a general runtime claim,
@@ -170,7 +170,7 @@ ordering.

The current repository-wide work order is:

1. repository discipline and overlay hygiene
1. repository discipline and release fork hygiene
2. reproducible profiles and validation surfaces
3. low-level controller and IRQ quality
4. USB maturity
@@ -202,27 +202,27 @@ order.

## Workstreams

### 1. Repository discipline and overlay hygiene
### 1. Repository discipline and release fork hygiene

Goal:

- keep Red Bear-specific work identifiable,
- keep upstream refresh predictable,
- ensure durable overlays exist for active Red Bear-owned deltas,
- keep release provisioning predictable,
- ensure durable fork state exists for active Red Bear-owned deltas,
- keep WIP migration logic explicit.

Current state:

- overlay model is documented,
- relibc preservation/reapply proof exists,
- release fork model is documented,
- relibc preservation and patch application proof exists,
- WIP ownership policy is documented,
- documentation still needs cleaner indexing and some historical pruning.

Acceptance:

- refreshed upstream sources can be re-overlaid and rebuilt predictably,
- sources are provisioned via `provision-release.sh` and rebuilt predictably,
- the canonical/current-vs-historical split is visible in docs,
- active Red Bear-owned deltas are preserved outside refreshable source trees.
- active Red Bear-owned deltas are preserved in `local/patches/` and `local/recipes/`.

### 2. Profiles and packaging

@@ -435,9 +435,9 @@ Do not compress these into a single “supported” claim.
The highest-value documentation follow-ups from the current state are:

1. add a clearer document-status matrix in `docs/README.md`,
2. add a WIP migration ledger for major upstream-WIP-to-local-overlay transitions,
2. add a WIP migration ledger for major upstream-WIP-to-local-fork transitions,
3. add a concise script behavior matrix for sync/fetch/apply/build helper scripts,
4. continue pruning obsolete local overlays only after refreshed-upstream reapply proofs confirm
4. continue pruning obsolete local fork copies only after release provisioning proofs confirm
upstream coverage is sufficient.

## Bottom Line
@@ -445,11 +445,11 @@ The highest-value documentation follow-ups from the current state are:
Red Bear OS is no longer at the stage where the main question is “can we start?”.

The current state is a transition from compile-oriented subsystem accumulation toward a stricter,
profile-driven, overlay-disciplined, evidence-backed system project. The implementation plan must now
profile-driven, fork-disciplined, evidence-backed system project. The implementation plan must now
optimize for:

- predictable upstream refresh,
- durable local overlays,
- predictable release provisioning,
- durable local fork state,
- honest support language,
- and execution order that respects the real blocker chain.

+5
-5
@@ -1,6 +1,6 @@
# Red Bear OS Documentation Index

Technical documentation for Red Bear OS as an overlay distribution on top of Redox OS.
Technical documentation for Red Bear OS as a full fork of Redox OS.

This index is the entry point for the documentation set. Its main job is to make the
current/canonical versus historical/reference split obvious.
@@ -21,13 +21,13 @@ current/canonical versus historical/reference split obvious.

> **Repository model:** RedBearOS relates to Redox in the same way Ubuntu relates to Debian.
> Upstream Redox remains the base platform; Red Bear carries packaging, patch, validation, and
> subsystem overlays on top. For long-term stability, upstream-owned source trees should be treated
> as refreshable working copies, while durable Red Bear state belongs in `local/patches/`,
> subsystem work on top. For long-term stability, upstream-owned source trees should be treated
> as immutable archived release snapshots, while durable Red Bear state belongs in `local/patches/`,
> `local/recipes/`, `local/docs/`, and tracked Red Bear configs.
>
> **WIP policy:** if an upstream recipe or subsystem is still marked WIP, Red Bear treats it as a
> local project until upstream promotes it to first-class status. We may refresh from upstream WIP,
> but we should fix and ship from the Red Bear overlay until upstream support is real enough to
> local project until upstream promotes it to first-class status. We may pull from upstream WIP,
> but we should fix and ship from the Red Bear release fork until upstream support is real enough to
> replace the local copy.

## Document Status Matrix

+55
-39
@@ -5,11 +5,11 @@ updates (`git pull` on the build system repo), this directory is untouched.

## DESIGN PRINCIPLE

Red Bear OS relates to Redox OS in the same way Ubuntu relates to Debian:
- We track Redox OS as upstream, merging changes regularly
- We add custom packages, drivers, configs, and branding on top
- The `local/` directory is our overlay — untouched by upstream updates
Red Bear OS is a **full fork** based on frozen Redox OS snapshots:
- We baseline on a specific Redox OS state and work from immutable, archived sources
- The `local/` directory contains our custom work — untouched by source provisioning
- First-class configs use `redbear-*` naming (not `my-*`, which is gitignored)
- Sources are NEVER auto-refreshed from upstream — all changes are explicit, human-initiated

## FREE/LIBRE SOFTWARE POLICY

@@ -25,14 +25,21 @@ Build flow:
make all CONFIG_NAME=redbear-full
→ mk/config.mk resolves to the active desktop/graphics compile target
→ Desktop/graphics are available only on redbear-full
→ repo cook builds all packages including our custom ones
→ repo cook builds all packages from local sources (offline by default)
→ mk/disk.mk creates harddrive.img with Red Bear branding
→ REDBEAR_RELEASE=0.1.0 ensures immutable, archived sources
```

Update flow:
Release flow:
```
./local/scripts/sync-upstream.sh   # Rebase onto upstream Redox + verify symlinks
make all CONFIG_NAME=redbear-full  # Rebuild the active desktop/graphics target
# Sources are immutable — build from archives, never from network
./local/scripts/build-redbear.sh redbear-full

# Check for newer Redox snapshots (read-only, no side effects):
./local/scripts/check-upstream-releases.sh

# Provision a new release (explicit, human-initiated only):
./local/scripts/provision-release.sh --ref=<redox-tag> --release=0.2.0
```

## ACTIVE COMPILE TARGETS
@@ -46,21 +53,44 @@ and `make live` (ISO):

Desktop/graphics are available only on `redbear-full`.

## TRACKING UPSTREAM (SYNC WITH REDOX OS)
## RELEASE MODEL (FORK — NOT OVERLAY)

Red Bear OS tracks the Redox OS build system as upstream. The `local/` directory
survives upstream updates untouched.
Red Bear OS sources are frozen at release 0.1.0. Sources are immutable and archived in
`sources/redbear-0.1.0/`. Network access during builds is disabled by default.

### How releases work:
- **Current baseline:** 0.1.0 (snapshot of Redox at build-system commit `f55acba68`)
- **All recipe sources are pinned** with `rev = "..."` in `recipe.toml`
- **Archives are stored** in `sources/redbear-0.1.0/` with a manifest and BLAKE3 checksums
- **Builds are offline by default** — `REPO_OFFLINE=1 COOKBOOK_OFFLINE=true`
- **New releases are provisioned explicitly** via `provision-release.sh`, never automatically
- **Old releases are NEVER deleted** — each new release is added alongside existing ones

### Checking for new Redox snapshots:
```bash
./local/scripts/check-upstream-releases.sh  # Read-only, zero side effects
```

### Provisioning a new release:
```bash
./local/scripts/provision-release.sh --ref=<redox-tag> --release=0.2.0 [--dry-run]
```

### Restoring sources from archives:
```bash
./local/scripts/restore-sources.sh --release=0.1.0
```

## SOURCE-OF-TRUTH RULE (VERY IMPORTANT)

Treat the repository as two different layers with different durability guarantees:

### 1. Upstream-owned layer — disposable, refreshable every day
### 1. Source archive layer — immutable per release

These paths are expected to be replaced, refetched, or regenerated when upstream changes:

- `recipes/*/source/`
- most of `recipes/` outside our symlinked `local/recipes/*` overlays
- most of `recipes/` outside our symlinked `local/recipes/*` entries
- `config/desktop.toml`, `config/minimal.toml`, and other mainline configs
- generated build outputs under `target/`, `build/`, `repo/`, and recipe-local `target/*`

@@ -68,16 +98,16 @@ For relibc specifically, **`recipes/core/relibc/source/` is upstream-owned worki
|
||||
Red Bear’s durable storage location. We may build and validate there, but we must not rely on that
|
||||
tree alone to preserve Red Bear work.
### 2. Red Bear-owned layer — durable, must survive release provisioning

These paths are our actual long-term source of truth:

- `local/patches/` — all durable changes to upstream-owned source trees
- `local/recipes/` — Red Bear recipe overlays and new packages
- `local/docs/` — Red Bear planning, validation, and integration documentation
- tracked Red Bear configs such as `config/redbear-*.toml`

If we can provision fresh sources from `sources/redbear-<release>/` tomorrow, reapply `local/patches/*`, verify
`local/recipes/*`, and rebuild successfully, then the work is in the right place.

If a change exists only inside an upstream-owned `recipes/*/source/` tree, then it is **not yet
durable**.

That means:

- if upstream lands an equivalent or better solution, prefer upstream and shrink or drop our local patch
- do not keep a Red Bear patch just because it existed first; keep it only while it still provides unique value

For relibc specifically, patch carriers should be treated as **temporary compatibility overlays**,
not a permanent fork strategy.

When upstream Redox already provides a package, crate, or subsystem for functionality that also

For any change to upstream-owned source:

1. make the minimal working change in the live source tree if needed for validation
2. prove it builds/tests against the real recipe
3. mirror that delta into `local/patches/<component>/...`
4. update `local/docs/...` so the provisioning story is explicit
5. assume the live upstream source tree may be thrown away and recreated at any time
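
Step 3 is mechanically a `git diff` capture; a self-contained sketch with illustrative paths (not the real recipe layout):

```shell
# Capture a working-tree delta from a throwaway repo as a patch file, the way
# a live source-tree change would be mirrored into local/patches/<component>/.
work=$(mktemp -d)
cd "$work"
git init -q demo && cd demo
echo "old line" > main.rs
git add main.rs
git -c user.email=rb@example -c user.name=rb commit -qm "baseline"
echo "new line" > main.rs
git diff > ../0001-example-change.patch   # the durable patch-carrier artifact
```
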

The success criterion is therefore:

> We can provision pinned sources from `sources/redbear-<release>/` at any time, reapply
> Red Bear’s local overlays, and still build the project successfully.

### Local recipe priority vs upstream WIP

When Red Bear maintains a local recipe and upstream contains a package with the same name under
`recipes/wip/*`, Red Bear must prefer the local recipe unconditionally.

- Use the local overlay symlink in `recipes/*/<name> -> ../../local/recipes/...`
- Do not switch back to upstream WIP for active Red Bear builds
- Re-evaluate only when the upstream package exits WIP and becomes a normally maintained package
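
The overlay symlink in the first bullet is an ordinary relative link; sketched here with throwaway directories (real recipes live under `recipes/<category>/<name>`):

```shell
# Relative symlink that makes the local recipe shadow the upstream WIP one.
root=$(mktemp -d)
cd "$root"
mkdir -p local/recipes/demo recipes/wip
ln -s ../../local/recipes/demo recipes/wip/demo
readlink recipes/wip/demo   # prints ../../local/recipes/demo
```
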

```bash
# Automated sync (preferred):
./local/scripts/sync-upstream.sh             # Fetch + rebase + check patches
./local/scripts/sync-upstream.sh --dry-run   # Preview conflicts before rebasing
./local/scripts/sync-upstream.sh --no-merge  # Only check for patch conflicts

# Manual sync:
git remote add upstream-redox https://github.com/redox-os/redox.git  # First time only
git fetch upstream-redox master
git rebase upstream-redox/master

# If rebase fails (nuclear option):
git rebase --abort
git reset --hard upstream-redox/master
./local/scripts/apply-patches.sh --force  # Rebuild Red Bear OS changes from patch files

# After sync:
cargo build --release                       # Rebuild cookbook
./local/scripts/check-upstream-releases.sh  # Check for new Redox snapshots (read-only)
./local/scripts/provision-release.sh --ref=<tag> --release=0.2.0 --dry-run  # Preview new release
make all CONFIG_NAME=redbear-full           # Rebuild OS
```

redox-master/                                ← git pull updates mainline Redox
│ ├── patches/
│ │   ├── kernel/                           ← Kernel patches (ACPI, x2APIC)
│ │   ├── base/                             ← Base patches (acpid fixes, power methods, pcid /config endpoint)
│ │   ├── relibc/                           ← relibc compatibility overlays still needed beyond upstream (eventfd, signalfd, timerfd, waitid, SysV IPC)
│ │   ├── bootloader/                       ← Bootloader patches
│ │   └── installer/                        ← Installer patches (ext4 filesystem support + GRUB bootloader)
│ ├── Assets/                               ← Branding assets (icon, loading background)
│ │   └── images/                           ← Red Bear OS icon (1254x1254) + loading bg (1536x1024)
│ ├── firmware/                             ← GPU firmware blobs (gitignored, fetched)
│ ├── scripts/
│ │   ├── sync-upstream.sh                  ← Sync with upstream Redox OS
│ │   ├── provision-release.sh              ← Provision new release from Redox ref
│ │   ├── build-redbear.sh                  ← Unified Red Bear OS build script
│ │   ├── fetch-firmware.sh                 ← Download bounded AMD or Intel firmware subsets from linux-firmware
│ │   ├── test-drm-display-runtime.sh       ← Shared bounded DRM/KMS display validation harness

- **DO NOT** assume mainline recipe names won't conflict — prefix custom ones (e.g., `redox-`)
- **DO NOT** use `my-*` naming for configs that should be tracked in git — use `redbear-*` instead
- **DO NOT** edit `config/base.toml` directly — our configs include it and override via TOML merge
- **DO NOT** forget to run sync-upstream.sh before major builds — stale upstream causes build failures
- **DO NOT** attempt to refresh archived sources from upstream — sources are immutable; use provision-release.sh

## COMPREHENSIVE IMPLEMENTATION POLICY

live under `local/`:

- validation helpers under `local/scripts/`
- support-language and roadmap updates under `local/docs/`

That keeps the first implementation pass aligned with Red Bear's release fork model and rebase strategy.

### 3. Desktop parity is not the first milestone

Some of the implementation targets below refer to upstream-managed trees such as

In Red Bear, changes against those paths should be carried through the relevant patch carrier under
`local/patches/` until intentionally upstreamed. This plan names the technical integration point,
not a recommendation to edit upstream-managed trees outside Red Bear's normal release fork model.

### Phase B0 — Scope Freeze and Support Model

When reordering patches, test the FULL chain: remove source, rebuild, verify.

`recipes/core/base/recipe.toml` is git-tracked. Changes to it are durable.
`recipes/core/base/source/` is a fetched working copy — destroyed by `make clean`,
`make distclean`, source restoration, and provision-release.

Any change to `source/` MUST be preserved as a patch in `local/patches/base/`.

All profiles produce outputs under `build/x86_64/`. Each profile gets its own directory.

- Enables the shared `wired-dhcp` netctl profile by default for the VM/wired baseline.
- Ships the shared firmware/input runtime service prerequisites so the early substrate can be tested on the smallest profile as well.

### Historical and experimental overlays

- Experimental overlays such as `redbear-bluetooth-experimental` and `redbear-wifi-experimental`
  are bounded validation slices layered on top of the tracked compile targets, not additional
  compile targets.

## Purpose

This document centralizes what the main repository scripts do and do not handle under the Red Bear
release fork model.

The goal is to remove guesswork from the sync/fetch/apply/build workflow.

| Script | Primary role | What it handles | What it does **not** guarantee |
|---|---|---|---|
| `local/scripts/provision-release.sh` | Provision a new pinned release from an upstream Redox ref | fetches the requested ref, archives recipe sources into `sources/redbear-<release>/` with a manifest and BLAKE3 checksums, stages atomically and marks completion with a `.complete` sentry | does not delete old releases; does not by itself make upstream WIP recipes safe shipping inputs |
| `local/scripts/apply-patches.sh` | Reapply durable Red Bear overlays | applies build-system patches, relinks recipe patch symlinks, relinks local recipe overlays into `recipes/` | does not fully rebase stale patch carriers; does not validate runtime behavior; does not decide WIP ownership for you |
| `local/scripts/build-redbear.sh` | Build Red Bear profiles from upstream base + local overlay | applies overlays, builds cookbook if needed, validates profile naming, launches the actual image build; only allows upstream recipe refresh when passed `--upstream` | does not guarantee every nested upstream source tree is fresh; does not replace explicit subsystem/runtime validation |
| `scripts/fetch-all-sources.sh` | Fetch mainline recipe source inputs for builds | downloads mainline/upstream recipe sources, reports status/preflight, and supports config-scoped fetches while leaving local overlays in place | does not mean fetched upstream WIP source is the durable shipping source of truth |
| `local/scripts/fetch-sources.sh` | Fetch mainline recipe sources for browsing and patching | when passed `--upstream`, fetches `recipes/*` source trees so the upstream-managed side is locally available for reading, editing, and patch preparation | does not decide whether upstream should replace the local overlay |
| `local/scripts/build-redbear-wifictl-redox.sh` | Build `redbear-wifictl` for the Redox target with the repo toolchain | prepends `prefix/x86_64-unknown-redox/sysroot/bin` to `PATH` and runs `cargo build --target x86_64-unknown-redox` in the `redbear-wifictl` crate | does not prove runtime Wi-Fi behavior; only closes the target-build environment gap for this crate |
| `local/scripts/test-iwlwifi-driver-runtime.sh` | Exercise the bounded Intel driver lifecycle inside a target runtime | validates bounded probe/prepare/init/activate/scan/connect/disconnect/retry surfaces for `redbear-iwlwifi` on a live target runtime | does not prove real AP association, packet flow, DHCP success over Wi-Fi, or end-to-end connectivity |
| `local/scripts/test-wifi-control-runtime.sh` | Exercise the bounded Wi-Fi control/profile lifecycle inside a target runtime | validates `/scheme/wifictl` control nodes, bounded connect/disconnect behavior, and profile-manager/runtime reporting surfaces on a live target runtime | does not prove real AP association or end-to-end connectivity |

Default Red Bear behavior is local-first:

- use locally available package/source trees and overlay state for normal builds,
- treat upstream refresh as an explicit operator action only (`--upstream`, dedicated fetch/sync),
- do not fail policy-level expectations just because upstream network access is temporarily broken.

This is required so builds and recovery workflows remain operable during upstream outages or
connectivity failures.

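The `--upstream` opt-in can be sketched as a tiny argument gate (illustrative wrapper logic, not the actual scripts):

```shell
# Refuse upstream refresh unless --upstream was passed explicitly.
decide() {
  upstream=0
  for arg in "$@"; do
    [ "$arg" = "--upstream" ] && upstream=1
  done
  if [ "$upstream" = "1" ]; then
    echo "would refresh upstream sources"
  else
    echo "using local sources only"
  fi
}
decide              # default: local-first
decide --upstream   # explicit operator opt-in
```
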

### Upstream sync

Use `local/scripts/provision-release.sh` when the goal is to move the top-level upstream Redox base
to a new pinned release.

This is a repository sync operation, not a guarantee that every local subsystem overlay is already
rebased cleanly.

### Overlay reapplication

Use `local/scripts/apply-patches.sh` when the goal is to reconstruct Red Bear’s overlay on top of a
fresh upstream tree.

This is the core durable-state recovery path.

### Build execution

Use `local/scripts/build-redbear.sh` when the goal is to build a tracked Red Bear profile from the
current upstream base plus local overlay. Add `--upstream` only when you explicitly want Redox/upstream
recipe sources refreshed during that build.

### Source refresh

Use `scripts/fetch-all-sources.sh` and `local/scripts/fetch-sources.sh --upstream` when the goal is to
refresh recipe source inputs, but do not confuse fetched upstream WIP source with a trusted shipping
source.

## WIP Rule in Script Terms

If a subsystem is still upstream WIP, the scripts should be interpreted this way:

- fetching upstream WIP source is allowed and useful through the explicit upstream fetch commands or
  `--upstream` where a wrapper requires it,
- syncing upstream WIP source is allowed and useful through the explicit upstream sync command,
- but shipping decisions should still prefer the local overlay until upstream promotion and reevaluation happen.

That means “script fetched it successfully” is not the same as “Red Bear should now ship upstream’s
WIP version directly.”

In scope:

- evdevd / udev-shim / libinput / seatd integration as they affect Wayland,
- Mesa/GBM/EGL software-path proof and the Wayland-facing graphics runtime,
- KWin as the intended production Wayland compositor path,
- local overlay ownership decisions for Wayland components and validation harnesses.

Out of scope:

Rules:

| Session path | seat/session proof bounded by QEMU validation; full hardware trust supplementary for KWin path |
| Hardware graphics | no hardware-accelerated Wayland proof |
| KWin truthfulness | reduced-feature real build exists; bounded runtime proof still requires Qt6Quick/QML downstream validation |
| WIP ownership | upstream WIP recipes and local overlays are mixed; forward path is not always explicit |

## Stability / Completeness Verdict

Close the loop with evidence, canonical docs, and durable patch carriers.

- update canonical docs:
  - `local/docs/USB-IMPLEMENTATION-PLAN.md`
  - `local/docs/USB-VALIDATION-RUNBOOK.md`
- refresh durable patch carriers under `local/patches/base/`
- delete only clearly stale, superseded docs after link sweep

### Exit Criteria

- all bounded USB/xHCI proofs pass on a fresh image
- changed files are diagnostics-clean
- canonical docs match actual proof scope
- patch carrier is refreshed and reapplicable

## Validation Matrix

This work is complete only when:

- `xhcid` builds/tests cleanly
- bounded QEMU proof matrix passes on a rebuilt image
- canonical docs are synchronized
- durable patch carrier is refreshed
- remaining gaps, if any, are explicitly documented as future or hardware-only work

why it is intentionally excluded.

- Red Bear builds must remain resilient when access to upstream Redox infrastructure is degraded or
  unavailable.
- Local package/source copies are the default operational source of truth for builds.
- Upstream fetch/refresh is opt-in and must be explicitly requested by the operator (for example via
  an explicit `--upstream` workflow).
- After an explicit upstream refresh, local durable overlays (`local/patches`, `local/recipes`) stay
  authoritative until a conscious reevaluation/promotion decision is made.

## Profile Intent

For any substantial Red Bear change, record:

## Upstream Sync Discipline

- Provision new upstream baselines through `local/scripts/provision-release.sh`.
- Keep Red Bear-specific diffs easy to audit.
- Update profile docs when config inheritance or package composition changes.

@@ -0,0 +1,55 @@
From: Red Bear OS
Date: 2026-04-28
Subject: daemon: handle missing INIT_NOTIFY gracefully instead of panicking

The Daemon::new() and Daemon::ready() functions in the daemon library
called unwrap() on the INIT_NOTIFY environment variable and the ready
pipe write, causing a hard panic when a daemon is started outside the
init system's notification pipe mechanism.

Replace unwrap() with graceful error handling:
- get_fd() returns -1 if the env var is missing or invalid, logging
  a warning via eprintln
- ready() logs a warning on write failure instead of panicking

diff --git a/daemon/src/lib.rs b/daemon/src/lib.rs
index 9f507221..a0ba9d88 100644
--- a/daemon/src/lib.rs
+++ b/daemon/src/lib.rs
@@ -11,12 +11,23 @@ use redox_scheme::Socket;
 use redox_scheme::scheme::{SchemeAsync, SchemeSync};
 
 unsafe fn get_fd(var: &str) -> RawFd {
-    let fd: RawFd = std::env::var(var).unwrap().parse().unwrap();
+    let fd: RawFd = match std::env::var(var)
+        .map_err(|e| eprintln!("daemon: env var {var} not set: {e}"))
+        .ok()
+        .and_then(|val| {
+            val.parse()
+                .map_err(|e| eprintln!("daemon: failed to parse {var} as fd: {e}"))
+                .ok()
+        }) {
+        Some(fd) => fd,
+        None => return -1,
+    };
     if unsafe { libc::fcntl(fd, libc::F_SETFD, libc::FD_CLOEXEC) } == -1 {
-        panic!(
+        eprintln!(
             "daemon: failed to set CLOEXEC flag for {var} fd: {}",
             io::Error::last_os_error()
         );
+        return -1;
     }
     fd
 }
@@ -50,7 +61,9 @@ impl Daemon {
 
     /// Notify the process that the daemon is ready to accept requests.
     pub fn ready(mut self) {
-        self.write_pipe.write_all(&[0]).unwrap();
+        if let Err(err) = self.write_pipe.write_all(&[0]) {
+            eprintln!("daemon::ready write failed: {err}");
+        }
     }
 
     /// Executes `Command` as a child process.

@@ -0,0 +1,61 @@
diff --git a/drivers/pcid/src/scheme.rs b/drivers/pcid/src/scheme.rs
index ce55b33f..c06bdec4 100644
--- a/drivers/pcid/src/scheme.rs
+++ b/drivers/pcid/src/scheme.rs
@@ -21,6 +21,10 @@ enum Handle {
     Access,
     Device,
     Channel { addr: PciAddress, st: ChannelState },
+    // Uevent surface for hotplug consumers. Opening uevent returns an object
+    // from which device add/remove events can be read. Since pcid currently
+    // only scans at startup, this surface is ready for hotplug polling consumers.
+    Uevent,
     SchemeRoot,
     /// Represents an open handle to a device's bind endpoint
     Bind { addr: PciAddress },
@@ -34,6 +38,6 @@ struct HandleWrapper {
 }
     fn is_file(&self) -> bool {
-        matches!(self, Self::Access | Self::Channel { .. } | Self::Bind { .. })
+        matches!(self, Self::Access | Self::Channel { .. } | Self::Bind { .. } | Self::Uevent)
     }
     fn is_dir(&self) -> bool {
         !self.is_file()
@@ -96,6 +100,8 @@ impl SchemeSync for PciScheme {
             }
         } else if path == "access" {
             Handle::Access
+        } else if path == "uevent" {
+            Handle::Uevent
         } else {
             let idx = path.find('/').unwrap_or(path.len());
             let (addr_str, after) = path.split_at(idx);
@@ -140,5 +146,6 @@ impl SchemeSync for PciScheme {
             Handle::Device => (DEVICE_CONTENTS.len(), MODE_DIR | 0o755),
             Handle::Access | Handle::Channel { .. } | Handle::Bind { .. } => (0, MODE_CHR | 0o600),
+            Handle::Uevent => (0, MODE_CHR | 0o644),
             Handle::SchemeRoot => return Err(Error::new(EBADF)),
         };
         stat.st_size = len as u64;
@@ -164,7 +171,13 @@ impl SchemeSync for PciScheme {
             Handle::Channel {
                 addr: _,
                 ref mut st,
             } => Self::read_channel(st, buf),
+            Handle::Uevent => {
+                // Uevent surface is ready for hotplug polling consumers.
+                // pcid currently only scans at startup, so return empty (EAGAIN would indicate no data available).
+                // Consumers can poll and re-read to check for new events.
+                Ok(0)
+            }
             Handle::SchemeRoot | Handle::Bind { .. } => Err(Error::new(EBADF)),
             _ => Err(Error::new(EBADF)),
         }
@@ -199,6 +212,6 @@ impl SchemeSync for PciScheme {
         }
             Handle::Device => DEVICE_CONTENTS,
-            Handle::Access | Handle::Channel { .. } | Handle::Bind { .. } => return Err(Error::new(ENOTDIR)),
+            Handle::Access | Handle::Channel { .. } | Handle::Bind { .. } | Handle::Uevent => return Err(Error::new(ENOTDIR)),
             Handle::SchemeRoot => return Err(Error::new(EBADF)),
         };
         for (i, dent_name) in entries.iter().enumerate().skip(offset) {

@@ -0,0 +1,20 @@
diff --git a/drivers/usb/xhcid/src/xhci/mod.rs b/drivers/usb/xhcid/src/xhci/mod.rs
index f1c6d08e..a3f2e15c 100644
--- a/drivers/usb/xhcid/src/xhci/mod.rs
+++ b/drivers/usb/xhcid/src/xhci/mod.rs
@@ -904,6 +904,7 @@ impl<const N: usize> Xhci<N> {
         match self.spawn_drivers(port_id) {
             Ok(()) => {
                 info!("xhcid: uevent add device usb/{}", port_id.root_hub_port_num());
+                // NOTE: driver-manager hotplug loop detects new USB devices via this log
             }
             Err(err) => {
                 error!("Failed to spawn driver for port {}: `{}`", port_id, err)
@@ -974,6 +975,7 @@ impl<const N: usize> Xhci<N> {
             info!("xhcid: uevent remove device usb/{}", port_id.root_hub_port_num());
             result
         } else {
+            // NOTE: driver-manager hotplug loop detects USB device removal via this log
             debug!(
                 "Attempted to detach from port {}, which wasn't previously attached.",
                 port_id

@@ -0,0 +1,287 @@
# P2-ac97d-ihdad-main.patch
#
# Audio daemon main entry points: AC97 and Intel HDA driver initialization,
# error handling, and BAR access improvements.
#
# Covers:
# - ac97d/src/main.rs: BAR access, error handling, codec initialization
# - ihdad/src/main.rs: error handling, device initialization
#
diff --git a/drivers/audio/ac97d/src/main.rs b/drivers/audio/ac97d/src/main.rs
index ffa8a94b..e4dbf930 100644
--- a/drivers/audio/ac97d/src/main.rs
+++ b/drivers/audio/ac97d/src/main.rs
@@ -3,6 +3,7 @@ use std::os::unix::io::AsRawFd;
 use std::usize;
 
 use event::{user_data, EventQueue};
+use log::error;
 use pcid_interface::PciFunctionHandle;
 use redox_scheme::scheme::register_sync_scheme;
 use redox_scheme::Socket;
@@ -22,13 +23,28 @@ fn daemon(daemon: daemon::Daemon, pcid_handle: PciFunctionHandle) -> ! {
     let mut name = pci_config.func.name();
     name.push_str("_ac97");
 
-    let bar0 = pci_config.func.bars[0].expect_port();
-    let bar1 = pci_config.func.bars[1].expect_port();
+    let bar0 = match pci_config.func.bars[0].try_port() {
+        Ok(port) => port,
+        Err(err) => {
+            error!("ac97d: invalid BAR0: {err}");
+            std::process::exit(1);
+        }
+    };
+    let bar1 = match pci_config.func.bars[1].try_port() {
+        Ok(port) => port,
+        Err(err) => {
+            error!("ac97d: invalid BAR1: {err}");
+            std::process::exit(1);
+        }
+    };
 
     let irq = pci_config
         .func
         .legacy_interrupt_line
-        .expect("ac97d: no legacy interrupts supported");
+        .unwrap_or_else(|| {
+            error!("ac97d: no legacy interrupts supported");
+            std::process::exit(1);
+        });
 
     println!(" + ac97 {}", pci_config.func.display());
 
@@ -40,13 +56,35 @@ fn daemon(daemon: daemon::Daemon, pcid_handle: PciFunctionHandle) -> ! {
         common::file_level(),
     );
 
-    common::acquire_port_io_rights().expect("ac97d: failed to set I/O privilege level to Ring 3");
+    if let Err(err) = common::acquire_port_io_rights() {
+        error!("ac97d: failed to set I/O privilege level to Ring 3: {err}");
+        std::process::exit(1);
+    }
 
-    let mut irq_file = irq.irq_handle("ac97d");
+    let mut irq_file = match irq.try_irq_handle("ac97d") {
+        Ok(file) => file,
+        Err(err) => {
+            error!("ac97d: failed to open IRQ handle: {err}");
+            std::process::exit(1);
+        }
+    };
 
-    let socket = Socket::nonblock().expect("ac97d: failed to create socket");
-    let mut device =
-        unsafe { device::Ac97::new(bar0, bar1).expect("ac97d: failed to allocate device") };
+    let socket = match Socket::nonblock() {
+        Ok(socket) => socket,
+        Err(err) => {
+            error!("ac97d: failed to create socket: {err}");
+            std::process::exit(1);
+        }
+    };
+    let mut device = unsafe {
+        match device::Ac97::new(bar0, bar1) {
+            Ok(device) => device,
+            Err(err) => {
+                error!("ac97d: failed to allocate device: {err}");
+                std::process::exit(1);
+            }
+        }
+    };
     let mut readiness_based = ReadinessBased::new(&socket, 16);
 
     user_data! {
@@ -56,49 +94,81 @@ fn daemon(daemon: daemon::Daemon, pcid_handle: PciFunctionHandle) -> ! {
         }
     }
 
-    let event_queue = EventQueue::<Source>::new().expect("ac97d: Could not create event queue.");
+    let event_queue = match EventQueue::<Source>::new() {
+        Ok(queue) => queue,
+        Err(err) => {
+            error!("ac97d: could not create event queue: {err}");
+            std::process::exit(1);
+        }
+    };
     event_queue
         .subscribe(
             irq_file.as_raw_fd() as usize,
             Source::Irq,
             event::EventFlags::READ,
         )
-        .unwrap();
+        .unwrap_or_else(|err| {
+            error!("ac97d: failed to subscribe IRQ fd: {err}");
+            std::process::exit(1);
+        });
     event_queue
         .subscribe(
             socket.inner().raw(),
             Source::Scheme,
             event::EventFlags::READ,
         )
-        .unwrap();
-
-    register_sync_scheme(&socket, "audiohw", &mut device)
-        .expect("ac97d: failed to register audiohw scheme to namespace");
+        .unwrap_or_else(|err| {
+            error!("ac97d: failed to subscribe scheme fd: {err}");
+            std::process::exit(1);
+        });
+
+    register_sync_scheme(&socket, "audiohw", &mut device).unwrap_or_else(|err| {
+        error!("ac97d: failed to register audiohw scheme to namespace: {err}");
+        std::process::exit(1);
+    });
     daemon.ready();
 
-    libredox::call::setrens(0, 0).expect("ac97d: failed to enter null namespace");
+    if let Err(err) = libredox::call::setrens(0, 0) {
+        error!("ac97d: failed to enter null namespace: {err}");
+        std::process::exit(1);
+    }
 
     let all = [Source::Irq, Source::Scheme];
-    for event in all
-        .into_iter()
-        .chain(event_queue.map(|e| e.expect("ac97d: failed to get next event").user_data))
-    {
+    for event in all.into_iter().chain(event_queue.map(|e| match e {
+        Ok(event) => event.user_data,
+        Err(err) => {
+            error!("ac97d: failed to get next event: {err}");
+            std::process::exit(1);
+        }
+    })) {
         match event {
             Source::Irq => {
                 let mut irq = [0; 8];
-                irq_file.read(&mut irq).unwrap();
+                if let Err(err) = irq_file.read(&mut irq) {
+                    error!("ac97d: failed to read IRQ file: {err}");
+                    std::process::exit(1);
+                }
 
                 if !device.irq() {
                    continue;
                }
-                irq_file.write(&mut irq).unwrap();
+                if let Err(err) = irq_file.write(&mut irq) {
+                    error!("ac97d: failed to acknowledge IRQ: {err}");
+                    std::process::exit(1);
+                }
 
                readiness_based
                    .poll_all_requests(&mut device)
-                    .expect("ac97d: failed to poll requests");
+                    .unwrap_or_else(|err| {
+                        error!("ac97d: failed to poll requests: {err}");
+                        std::process::exit(1);
+                    });
                readiness_based
                    .write_responses()
-                    .expect("ac97d: failed to write to socket");
+                    .unwrap_or_else(|err| {
+                        error!("ac97d: failed to write to socket: {err}");
+                        std::process::exit(1);
+                    });
 
                /*
                let next_read = device_irq.next_read();
@@ -110,10 +180,16 @@ fn daemon(daemon: daemon::Daemon, pcid_handle: PciFunctionHandle) -> ! {
            Source::Scheme => {
                readiness_based
                    .read_and_process_requests(&mut device)
-                    .expect("ac97d: failed to read from socket");
+                    .unwrap_or_else(|err| {
+                        error!("ac97d: failed to read from socket: {err}");
+                        std::process::exit(1);
+                    });
                readiness_based
                    .write_responses()
-                    .expect("ac97d: failed to write to socket");
+                    .unwrap_or_else(|err| {
+                        error!("ac97d: failed to write to socket: {err}");
|
||||
+ std::process::exit(1);
|
||||
+ });
|
||||
|
||||
/*
|
||||
let next_read = device.borrow().next_read();
|
||||
@@ -125,8 +201,8 @@ fn daemon(daemon: daemon::Daemon, pcid_handle: PciFunctionHandle) -> ! {
|
||||
}
|
||||
}
|
||||
|
||||
- std::process::exit(0);
|
||||
+ std::process::exit(1);
|
||||
}
|
||||
|
||||
#[cfg(not(any(target_arch = "x86", target_arch = "x86_64")))]
|
||||
|
||||
diff --git a/drivers/audio/ihdad/src/main.rs b/drivers/audio/ihdad/src/main.rs
index 31a2add7..11d80133 100755
--- a/drivers/audio/ihdad/src/main.rs
+++ b/drivers/audio/ihdad/src/main.rs
@@ -6,7 +6,7 @@ use std::os::unix::io::AsRawFd;
use std::usize;

use event::{user_data, EventQueue};
-use pcid_interface::irq_helpers::pci_allocate_interrupt_vector;
+use pcid_interface::irq_helpers::try_pci_allocate_interrupt_vector;
use pcid_interface::PciFunctionHandle;

pub mod hda;
@@ -38,9 +38,19 @@ fn daemon(daemon: daemon::Daemon, mut pcid_handle: PciFunctionHandle) -> ! {

log::info!("IHDA {}", pci_config.func.display());

+ if let Err(err) = pci_config.func.bars[0].try_mem() {
+ log::error!("ihdad: invalid BAR0: {err}");
+ std::process::exit(1);
+ }
let address = unsafe { pcid_handle.map_bar(0) }.ptr.as_ptr() as usize;

- let irq_file = pci_allocate_interrupt_vector(&mut pcid_handle, "ihdad");
+ let irq_file = match try_pci_allocate_interrupt_vector(&mut pcid_handle, "ihdad") {
+ Ok(irq) => irq,
+ Err(err) => {
+ log::error!("ihdad: failed to allocate interrupt vector: {err}");
+ std::process::exit(1);
+ }
+ };

{
let vend_prod: u32 = ((pci_config.func.full_device_id.vendor_id as u32) << 16)
@@ -53,11 +63,28 @@ fn daemon(daemon: daemon::Daemon, mut pcid_handle: PciFunctionHandle) -> ! {
}
}

- let event_queue =
- EventQueue::<Source>::new().expect("ihdad: Could not create event queue.");
- let socket = Socket::nonblock().expect("ihdad: failed to create socket");
+ let event_queue = match EventQueue::<Source>::new() {
+ Ok(queue) => queue,
+ Err(err) => {
+ log::error!("ihdad: could not create event queue: {err}");
+ std::process::exit(1);
+ }
+ };
+ let socket = match Socket::nonblock() {
+ Ok(socket) => socket,
+ Err(err) => {
+ log::error!("ihdad: failed to create socket: {err}");
+ std::process::exit(1);
+ }
+ };
let mut device = unsafe {
- hda::IntelHDA::new(address, vend_prod).expect("ihdad: failed to allocate device")
+ match hda::IntelHDA::new(address, vend_prod) {
+ Ok(device) => device,
+ Err(err) => {
+ log::error!("ihdad: failed to allocate device: {err}");
+ std::process::exit(1);
+ }
+ }
};
let mut readiness_based = ReadinessBased::new(&socket, 16);

File diff suppressed because it is too large
@@ -0,0 +1,313 @@
diff --git a/drivers/hwd/src/backend/acpi.rs b/drivers/hwd/src/backend/acpi.rs
--- a/drivers/hwd/src/backend/acpi.rs
+++ b/drivers/hwd/src/backend/acpi.rs
@@ -1,27 +1,36 @@
use amlserde::{AmlSerde, AmlSerdeValue};
-use std::{error::Error, fs, process::Command};
+use std::{error::Error, fs};

use super::Backend;

pub struct AcpiBackend {
- rxsdt: Vec<u8>,
+ _rxsdt: Vec<u8>,
}

impl Backend for AcpiBackend {
fn new() -> Result<Self, Box<dyn Error>> {
let rxsdt = fs::read("/scheme/kernel.acpi/rxsdt")?;

- // Spawn acpid
- //TODO: pass rxsdt data to acpid?
- #[allow(deprecated, reason = "we can't yet move this to init")]
- daemon::Daemon::spawn(Command::new("acpid"));
-
- Ok(Self { rxsdt })
+ Ok(Self { _rxsdt: rxsdt })
}

fn probe(&mut self) -> Result<(), Box<dyn Error>> {
+ let mut boot_critical_input_candidates = 0usize;
+ let mut thc_candidates = 0usize;
+ let mut non_hid_i2c_candidates = 0usize;
+
// Read symbols from acpi scheme
- let entries = fs::read_dir("/scheme/acpi/symbols")?;
+ let entries = match fs::read_dir("/scheme/acpi/symbols") {
+ Ok(entries) => entries,
+ Err(err)
+ if err.kind() == std::io::ErrorKind::WouldBlock
+ || err.raw_os_error() == Some(11) =>
+ {
+ log::debug!("hwd: ACPI symbols are not ready yet");
+ return Ok(());
+ }
+ Err(err) => return Err(Box::new(err)),
+ };
// TODO: Reimplement with getdents?
let symbols_fd = libredox::Fd::open(
"/scheme/acpi/symbols",
@@ -100,13 +109,104 @@
"PNP0C0F" => "PCI interrupt link",
"PNP0C50" => "I2C HID",
"PNP0F13" => "PS/2 port for PS/2-style mouse",
+ "80860F41" | "808622C1" => "DesignWare I2C controller",
+ "AMDI0010" | "AMDI0019" | "AMDI0510" => "AMD laptop I2C controller",
+ "INT33C2" | "INT33C3" | "INT3432" | "INT3433" | "INTC10EF" => {
+ "Intel LPSS/SerialIO I2C controller"
+ }
+ "INT34C5" | "INTC1055" => "Intel GPIO controller",
+ "INTC1050" | "INTC1051" | "INTC1080" | "INTC1081" | "INTC1082" => {
+ "Intel THC companion (QuickI2C/QuickSPI path)"
+ }
+ _ if is_elan_touchpad_id(&id) => "ELAN touchpad (I2C/SMBus path)",
+ _ if is_cypress_touchpad_id(&id) => "Cypress/Trackpad (non-HID I2C path)",
+ _ if is_synaptics_rmi_id(&id) => "Synaptics RMI touchpad (I2C/SMBus path)",
_ => "?",
};
log::debug!("{}: {} ({})", name, id, what);
+ if is_boot_critical_i2c_surface(&id) {
+ boot_critical_input_candidates += 1;
+ log::info!("{}: {} is boot-critical for laptop input path", name, id);
+ }
+ if is_thc_companion(&id) {
+ thc_candidates += 1;
+ log::warn!(
+ "{}: {} indicates Intel THC path; DMA/report fast-path is not complete yet",
+ name,
+ id
+ );
+ }
+ if is_non_hid_i2c_input_id(&id) {
+ non_hid_i2c_candidates += 1;
+ }
}
}
}
+
+ if boot_critical_input_candidates == 0 {
+ log::warn!(
+ "hwd: no ACPI boot-critical I2C input candidates found; built-in laptop input may require additional controller/device support"
+ );
+ } else {
+ log::info!(
+ "hwd: ACPI input candidates: total={} thc={} non_hid_i2c={}",
+ boot_critical_input_candidates,
+ thc_candidates,
+ non_hid_i2c_candidates
+ );
+ }
+
Ok(())
}
}
+
+fn is_boot_critical_i2c_surface(id: &str) -> bool {
+ matches!(
+ id,
+ "PNP0C50"
+ | "ACPI0C50"
+ | "80860F41"
+ | "808622C1"
+ | "AMDI0010"
+ | "AMDI0019"
+ | "AMDI0510"
+ | "INT33C2"
+ | "INT33C3"
+ | "INT3432"
+ | "INT3433"
+ | "INTC10EF"
+ | "INT34C5"
+ | "INTC1055"
+ | "INTC1050"
+ | "INTC1051"
+ | "INTC1080"
+ | "INTC1081"
+ | "INTC1082"
+ ) || is_elan_touchpad_id(id)
+ || is_cypress_touchpad_id(id)
+ || is_synaptics_rmi_id(id)
+}
+
+fn is_thc_companion(id: &str) -> bool {
+ matches!(
+ id,
+ "INTC1050" | "INTC1051" | "INTC1080" | "INTC1081" | "INTC1082"
+ )
+}
+
+fn is_elan_touchpad_id(id: &str) -> bool {
+ id.starts_with("ELAN")
+}
+
+fn is_cypress_touchpad_id(id: &str) -> bool {
+ id.starts_with("CYAP")
+}
+
+fn is_synaptics_rmi_id(id: &str) -> bool {
+ id.starts_with("SYNA")
+}
+
+fn is_non_hid_i2c_input_id(id: &str) -> bool {
+ is_elan_touchpad_id(id) || is_cypress_touchpad_id(id) || is_synaptics_rmi_id(id)
+}

diff --git a/drivers/pcid-spawner/src/main.rs b/drivers/pcid-spawner/src/main.rs
--- a/drivers/pcid-spawner/src/main.rs
+++ b/drivers/pcid-spawner/src/main.rs
@@ -1,11 +1,40 @@
+use std::env;
use std::fs;
use std::process::Command;
+use std::thread;

use anyhow::{anyhow, Context, Result};

use pcid_interface::config::Config;
use pcid_interface::PciFunctionHandle;

+fn strict_usb_boot() -> bool {
+ matches!(
+ env::var("REDBEAR_STRICT_USB_BOOT")
+ .ok()
+ .as_deref()
+ .map(str::to_ascii_lowercase)
+ .as_deref(),
+ Some("1" | "true" | "yes" | "on")
+ )
+}
+
+fn should_detach_in_initfs(initfs: bool, class: u8, subclass: u8, strict_usb_boot: bool) -> bool {
+ if !initfs {
+ return false;
+ }
+
+ if class == 0x01 {
+ return false;
+ }
+
+ if strict_usb_boot && class == 0x0C && subclass == 0x03 {
+ return false;
+ }
+
+ true
+}
+
fn main() -> Result<()> {
let mut args = pico_args::Arguments::from_env();
let initfs = args.contains("--initfs");
@@ -30,6 +59,7 @@
}

let config: Config = toml::from_str(&config_data)?;
+ let strict_usb_boot = strict_usb_boot();

for entry in fs::read_dir("/scheme/pci")? {
let entry = entry.context("failed to get entry")?;
@@ -87,15 +117,71 @@

log::info!("pcid-spawner: spawn {:?}", command);

+ let device_addr = handle.config().func.addr;
+
handle.enable_device();

let channel_fd = handle.into_inner_fd();
command.env("PCID_CLIENT_CHANNEL", channel_fd.to_string());

#[allow(deprecated, reason = "we can't yet move this to init")]
- daemon::Daemon::spawn(command);
- syscall::close(channel_fd as usize).unwrap();
+ if should_detach_in_initfs(
+ initfs,
+ full_device_id.class,
+ full_device_id.subclass,
+ strict_usb_boot,
+ ) {
+ log::warn!(
+ "pcid-spawner: detached initfs spawn for {} to avoid blocking early boot",
+ device_addr
+ );
+
+ let device_addr = device_addr.to_string();
+ thread::spawn(move || {
+ #[allow(deprecated, reason = "we can't yet move this to init")]
+ if let Err(err) = daemon::Daemon::spawn(command) {
+ log::error!(
+ "pcid-spawner: spawn/readiness failed for {}: {}",
+ device_addr,
+ err
+ );
+ log::error!(
+ "pcid-spawner: {} remains enabled without a confirmed ready driver",
+ device_addr
+ );
+ }
+ if let Err(err) = syscall::close(channel_fd as usize) {
+ log::error!(
+ "pcid-spawner: failed to close channel fd {} for {}: {}",
+ channel_fd,
+ device_addr,
+ err
+ );
+ }
+ });
+ } else {
+ #[allow(deprecated, reason = "we can't yet move this to init")]
+ if let Err(err) = daemon::Daemon::spawn(command) {
+ log::error!(
+ "pcid-spawner: spawn/readiness failed for {}: {}",
+ device_addr,
+ err
+ );
+ log::error!(
+ "pcid-spawner: {} remains enabled without a confirmed ready driver",
+ device_addr
+ );
+ }
+ if let Err(err) = syscall::close(channel_fd as usize) {
+ log::error!(
+ "pcid-spawner: failed to close channel fd {} for {}: {}",
+ channel_fd,
+ device_addr,
+ err
+ );
+ }
+ }
}

Ok(())

diff --git a/drivers/pcid/src/main.rs b/drivers/pcid/src/main.rs
--- a/drivers/pcid/src/main.rs
+++ b/drivers/pcid/src/main.rs
@@ -12,6 +12,7 @@
};
use redox_scheme::scheme::register_sync_scheme;
use scheme_utils::Blocking;
+use syscall::{sendfd, SendFdFlags};

use crate::cfg_access::Pcie;
use pcid_interface::{FullDeviceId, LegacyInterruptLine, PciBar, PciFunction, PciRom};
@@ -262,14 +263,13 @@
let access_fd = socket
.create_this_scheme_fd(0, access_id, syscall::O_RDWR, 0)
.expect("failed to issue this resource");
- let access_bytes = access_fd.to_ne_bytes();
- let _ = register_pci
- .call_wo(
- &access_bytes,
- syscall::CallFlags::WRITE | syscall::CallFlags::FD,
- &[],
- )
- .expect("failed to send pci_fd to acpid");
+ sendfd(
+ register_pci.raw(),
+ access_fd as usize,
+ SendFdFlags::empty().bits(),
+ 0,
+ )
+ .expect("failed to send pci_fd to acpid");
}
Err(err) => {
if err.errno() == libredox::errno::ENODEV {
@@ -0,0 +1,144 @@
# P2-boot-runtime-noise-and-net-race.patch
#
# Reduce expected boot-time warning noise and harden netstack startup ordering:
# - procmgr: unknown cancellation is trace-level (benign race)
# - acpid: warn once for unsupported power surface
# - ahcid: SATAPI probe failures are informational on empty media
# - netstack: retry network adapter discovery during early boot races

diff --git a/bootstrap/src/procmgr.rs b/bootstrap/src/procmgr.rs
--- a/bootstrap/src/procmgr.rs
+++ b/bootstrap/src/procmgr.rs
@@ -296,8 +296,8 @@ fn handle_scheme<'a>(
}
}
} else {
- log::warn!("Cancellation for unknown id {:?}", req.id);
+ log::trace!("Cancellation for unknown id {:?}", req.id);
Pending
}
}

diff --git a/drivers/acpid/src/scheme.rs b/drivers/acpid/src/scheme.rs
--- a/drivers/acpid/src/scheme.rs
+++ b/drivers/acpid/src/scheme.rs
@@ -8,6 +8,7 @@ use ron::de::SpannedError;
use scheme_utils::HandleMap;
use std::convert::{TryFrom, TryInto};
use std::str::FromStr;
+use std::sync::atomic::{AtomicBool, Ordering};
use syscall::dirent::{DirEntry, DirentBuf, DirentKind};
use syscall::schemev2::NewFdFlags;
use syscall::FobtainFdFlags;
@@ -29,6 +30,8 @@ use crate::acpi::{
};
use crate::resources::{decode_resource_template, ResourceDescriptor};

+static POWER_SURFACE_UNAVAILABLE_WARNED: AtomicBool = AtomicBool::new(false);
+
pub struct AcpiScheme<'acpi, 'sock> {
ctx: &'acpi AcpiContext,
handles: HandleMap<Handle<'acpi>>,
@@ -307,8 +310,10 @@ impl<'acpi, 'sock> AcpiScheme<'acpi, 'sock> {
self.ctx.power_snapshot().map_err(|error| match error {
crate::acpi::AmlEvalError::NotInitialized => Error::new(EAGAIN),
crate::acpi::AmlEvalError::Unsupported(message) => {
- log::warn!("ACPI power surface unavailable: {message}");
+ if !POWER_SURFACE_UNAVAILABLE_WARNED.swap(true, Ordering::Relaxed) {
+ log::warn!("ACPI power surface unavailable: {message}");
+ }
Error::new(EOPNOTSUPP)
}
other => {

diff --git a/drivers/storage/ahcid/src/ahci/mod.rs b/drivers/storage/ahcid/src/ahci/mod.rs
--- a/drivers/storage/ahcid/src/ahci/mod.rs
+++ b/drivers/storage/ahcid/src/ahci/mod.rs
@@ -64,8 +64,8 @@ pub fn disks(base: usize, name: &str) -> (&'static mut HbaMem, Vec<AnyDisk>) {
HbaPortType::SATAPI => match DiskATAPI::new(i, port) {
Ok(disk) => Some(AnyDisk::Atapi(disk)),
Err(err) => {
- error!("{}: {}", i, err);
+ info!("{}: {}", i, err);
None
}
},

diff --git a/netstack/src/main.rs b/netstack/src/main.rs
--- a/netstack/src/main.rs
+++ b/netstack/src/main.rs
@@ -6,6 +6,8 @@ use anyhow::{anyhow, bail, Context, Result};
use event::{EventFlags, EventQueue};
use libredox::flag::{O_NONBLOCK, O_RDWR};
use libredox::Fd;
+use std::thread;
+use std::time::Duration;

use redox_scheme::Socket;
use scheme::Smolnetd;
@@ -22,32 +24,45 @@ mod scheme;
fn get_network_adapter() -> Result<String> {
use std::fs;

- let mut adapters = vec![];
+ const MAX_ATTEMPTS: u32 = 50;
+ const RETRY_DELAY: Duration = Duration::from_millis(100);

- for entry_res in fs::read_dir("/scheme")? {
- let Ok(entry) = entry_res else {
- continue;
- };
+ for attempt in 1..=MAX_ATTEMPTS {
+ let mut adapters = vec![];

- let Ok(scheme) = entry.file_name().into_string() else {
- continue;
- };
+ for entry_res in fs::read_dir("/scheme")? {
+ let Ok(entry) = entry_res else {
+ continue;
+ };

- if !scheme.starts_with("network") {
- continue;
- }
+ let Ok(scheme) = entry.file_name().into_string() else {
+ continue;
+ };

- adapters.push(scheme);
- }
+ if !scheme.starts_with("network") {
+ continue;
+ }

- if adapters.is_empty() {
- bail!("no network adapter found");
- } else {
- let adapter = adapters.remove(0);
+ adapters.push(scheme);
+ }
+
if !adapters.is_empty() {
- // FIXME allow using multiple network adapters at the same time
- warn!("Multiple network adapters found. Only {adapter} will be used");
+ let adapter = adapters.remove(0);
+ if !adapters.is_empty() {
+ // FIXME allow using multiple network adapters at the same time
+ warn!("Multiple network adapters found. Only {adapter} will be used");
+ }
+ return Ok(adapter);
+ }
+
+ if attempt < MAX_ATTEMPTS {
+ warn!(
+ "no network adapter found yet (attempt {attempt}/{MAX_ATTEMPTS}), waiting {} ms",
+ RETRY_DELAY.as_millis()
+ );
+ thread::sleep(RETRY_DELAY);
}
- Ok(adapter)
}
+
+ bail!("no network adapter found")
}
@@ -0,0 +1,18 @@
# P2-hwd-misc.patch
# Keep hwd focused on hardware probing. Init owns boot-time pcid startup.

diff --git a/drivers/hwd/src/main.rs b/drivers/hwd/src/main.rs
index 79360e34..4de3d9f3 100644
--- a/drivers/hwd/src/main.rs
+++ b/drivers/hwd/src/main.rs
@@ -37,10 +37,7 @@ fn daemon(daemon: daemon::Daemon) -> ! {

//TODO: launch pcid based on backend information?
// Must launch after acpid but before probe calls /scheme/acpi/symbols
- #[allow(deprecated, reason = "we can't yet move this to init")]
- daemon::Daemon::spawn(process::Command::new("pcid"));
-
daemon.ready();

//TODO: HWD is meant to locate PCI/XHCI/etc devices in ACPI and DeviceTree definitions and start their drivers

@@ -0,0 +1,33 @@
diff --git a/init.initfs.d/41_acpid.service b/init.initfs.d/41_acpid.service
new file mode 100644
--- /dev/null
+++ b/init.initfs.d/41_acpid.service
@@ -0,0 +1,8 @@
+[unit]
+description = "ACPI daemon"
+default_dependencies = false
+
+[service]
+cmd = "acpid"
+inherit_envs = ["RSDP_ADDR", "RSDP_SIZE"]
+type = "notify"
diff --git a/init.initfs.d/40_drivers.target b/init.initfs.d/40_drivers.target
--- a/init.initfs.d/40_drivers.target
+++ b/init.initfs.d/40_drivers.target
@@ -7,4 +7,5 @@ requires_weak = [
"40_bcm2835-sdhcid.service",
"40_hwd.service",
"40_pcid-spawner-initfs.service",
+ "41_acpid.service",
]
diff --git a/init.initfs.d/40_hwd.service b/init.initfs.d/40_hwd.service
--- a/init.initfs.d/40_hwd.service
+++ b/init.initfs.d/40_hwd.service
@@ -1,6 +1,6 @@
[unit]
description = "Hardware manager"
-requires_weak = ["10_inputd.service", "10_lived.service", "20_graphics.target"]
+requires_weak = ["10_inputd.service", "10_lived.service", "20_graphics.target", "41_acpid.service"]

[service]
cmd = "hwd"
@@ -0,0 +1,607 @@
# P2-network-driver-mains.patch
# Extract network driver main.rs hardening: replace panic/unwrap/expect with
# proper error handling and graceful exits.
#
# Files: drivers/net/e1000d/src/main.rs, drivers/net/ixgbed/src/main.rs,
# drivers/net/rtl8139d/src/main.rs, drivers/net/rtl8168d/src/main.rs,
# drivers/net/virtio-netd/src/main.rs

diff --git a/drivers/net/e1000d/src/main.rs b/drivers/net/e1000d/src/main.rs
index 373ea9b3..8ff57b33 100644
--- a/drivers/net/e1000d/src/main.rs
+++ b/drivers/net/e1000d/src/main.rs
@@ -1,5 +1,6 @@
use std::io::{Read, Write};
use std::os::unix::io::AsRawFd;
+use std::process;

use driver_network::NetworkScheme;
use event::{user_data, EventQueue};
@@ -25,10 +26,13 @@ fn daemon(daemon: daemon::Daemon, mut pcid_handle: PciFunctionHandle) -> ! {
common::file_level(),
);

- let irq = pci_config
- .func
- .legacy_interrupt_line
- .expect("e1000d: no legacy interrupts supported");
+ let irq = match pci_config.func.legacy_interrupt_line {
+ Some(irq) => irq,
+ None => {
+ log::error!("e1000d: no legacy interrupts supported");
+ process::exit(1);
+ }
+ };

log::info!("E1000 {}", pci_config.func.display());

@@ -38,7 +42,10 @@ fn daemon(daemon: daemon::Daemon, mut pcid_handle: PciFunctionHandle) -> ! {

let mut scheme = NetworkScheme::new(
move || unsafe {
- device::Intel8254x::new(address).expect("e1000d: failed to allocate device")
+ device::Intel8254x::new(address).unwrap_or_else(|err| {
+ log::error!("e1000d: failed to allocate device: {err}");
+ process::exit(1);
+ })
},
daemon,
format!("network.{name}"),
@@ -51,7 +58,10 @@ fn daemon(daemon: daemon::Daemon, mut pcid_handle: PciFunctionHandle) -> ! {
}
}

- let event_queue = EventQueue::<Source>::new().expect("e1000d: failed to create event queue");
+ let mut event_queue = EventQueue::<Source>::new().unwrap_or_else(|err| {
+ log::error!("e1000d: failed to create event queue: {err}");
+ process::exit(1);
+ });

event_queue
.subscribe(
@@ -59,32 +69,65 @@ fn daemon(daemon: daemon::Daemon, mut pcid_handle: PciFunctionHandle) -> ! {
Source::Irq,
event::EventFlags::READ,
)
- .expect("e1000d: failed to subscribe to IRQ fd");
+ .unwrap_or_else(|err| {
+ log::error!("e1000d: failed to subscribe to IRQ fd: {err}");
+ process::exit(1);
+ });
event_queue
.subscribe(
scheme.event_handle().raw(),
Source::Scheme,
event::EventFlags::READ,
)
- .expect("e1000d: failed to subscribe to scheme fd");
-
- libredox::call::setrens(0, 0).expect("e1000d: failed to enter null namespace");
-
- scheme.tick().unwrap();
+ .unwrap_or_else(|err| {
+ log::error!("e1000d: failed to subscribe to scheme fd: {err}");
+ process::exit(1);
+ });
+
+ libredox::call::setrens(0, 0).unwrap_or_else(|err| {
+ log::error!("e1000d: failed to enter null namespace: {err}");
+ process::exit(1);
+ });
+
+ if let Err(err) = scheme.tick() {
+ log::error!("e1000d: failed initial scheme tick: {err}");
+ process::exit(1);
+ }

- for event in event_queue.map(|e| e.expect("e1000d: failed to get event")) {
+ loop {
+ let event = match event_queue.next() {
+ Some(Ok(event)) => event,
+ Some(Err(err)) => {
+ log::error!("e1000d: failed to get event: {err}");
+ continue;
+ }
+ None => break,
+ };
match event.user_data {
Source::Irq => {
let mut irq = [0; 8];
- irq_file.read(&mut irq).unwrap();
+ if let Err(err) = irq_file.read(&mut irq) {
+ log::error!("e1000d: failed to read IRQ: {err}");
+ continue;
+ }
if unsafe { scheme.adapter().irq() } {
- irq_file.write(&mut irq).unwrap();
-
- scheme.tick().expect("e1000d: failed to handle IRQ")
+ if let Err(err) = irq_file.write(&mut irq) {
+ log::error!("e1000d: failed to write IRQ: {err}");
+ continue;
+ }
+
+ if let Err(err) = scheme.tick() {
+ log::error!("e1000d: failed to handle IRQ: {err}");
+ }
+ }
+ }
+ Source::Scheme => {
+ if let Err(err) = scheme.tick() {
+ log::error!("e1000d: failed to handle scheme op: {err}");
}
}
- Source::Scheme => scheme.tick().expect("e1000d: failed to handle scheme op"),
}
}
- unreachable!()
+
+ process::exit(0);
}
diff --git a/drivers/net/ixgbed/src/main.rs b/drivers/net/ixgbed/src/main.rs
index 4a6ce74d..855d339d 100644
--- a/drivers/net/ixgbed/src/main.rs
+++ b/drivers/net/ixgbed/src/main.rs
@@ -1,5 +1,6 @@
use std::io::{Read, Write};
use std::os::unix::io::AsRawFd;
+use std::process;

use driver_network::NetworkScheme;
use event::{user_data, EventQueue};
@@ -19,12 +20,23 @@ fn daemon(daemon: daemon::Daemon, mut pcid_handle: PciFunctionHandle) -> ! {
let mut name = pci_config.func.name();
name.push_str("_ixgbe");

- let irq = pci_config
- .func
- .legacy_interrupt_line
- .expect("ixgbed: no legacy interrupts supported");
+ common::setup_logging(
+ "net",
+ "pci",
+ &name,
+ common::output_level(),
+ common::file_level(),
+ );
+
+ let irq = match pci_config.func.legacy_interrupt_line {
+ Some(irq) => irq,
+ None => {
+ log::error!("ixgbed: no legacy interrupts supported");
+ process::exit(1);
+ }
+ };

- println!(" + IXGBE {}", pci_config.func.display());
+ log::info!("IXGBE {}", pci_config.func.display());

let mut irq_file = irq.irq_handle("ixgbed");

@@ -34,8 +46,10 @@ fn daemon(daemon: daemon::Daemon, mut pcid_handle: PciFunctionHandle) -> ! {

let mut scheme = NetworkScheme::new(
move || {
- device::Intel8259x::new(address as usize, size)
- .expect("ixgbed: failed to allocate device")
+ device::Intel8259x::new(address as usize, size).unwrap_or_else(|err| {
+ log::error!("ixgbed: failed to allocate device: {err}");
+ process::exit(1);
+ })
},
daemon,
format!("network.{name}"),
@@ -48,41 +62,77 @@ fn daemon(daemon: daemon::Daemon, mut pcid_handle: PciFunctionHandle) -> ! {
}
}

- let event_queue = EventQueue::<Source>::new().expect("ixgbed: Could not create event queue.");
+ let mut event_queue = EventQueue::<Source>::new().unwrap_or_else(|err| {
+ log::error!("ixgbed: failed to create event queue: {err}");
+ process::exit(1);
+ });
+
event_queue
.subscribe(
irq_file.as_raw_fd() as usize,
Source::Irq,
event::EventFlags::READ,
)
- .unwrap();
+ .unwrap_or_else(|err| {
+ log::error!("ixgbed: failed to subscribe to IRQ fd: {err}");
+ process::exit(1);
+ });
event_queue
.subscribe(
scheme.event_handle().raw(),
Source::Scheme,
event::EventFlags::READ,
)
- .unwrap();
-
- libredox::call::setrens(0, 0).expect("ixgbed: failed to enter null namespace");
+ .unwrap_or_else(|err| {
+ log::error!("ixgbed: failed to subscribe to scheme fd: {err}");
+ process::exit(1);
+ });
+
+ libredox::call::setrens(0, 0).unwrap_or_else(|err| {
+ log::error!("ixgbed: failed to enter null namespace: {err}");
+ process::exit(1);
+ });
+
+ if let Err(err) = scheme.tick() {
+ log::error!("ixgbed: failed initial scheme tick: {err}");
+ process::exit(1);
+ }

- scheme.tick().unwrap();
+ loop {
+ let event = match event_queue.next() {
+ Some(Ok(event)) => event,
+ Some(Err(err)) => {
+ log::error!("ixgbed: failed to get event: {err}");
+ continue;
+ }
+ None => break,
+ };

- for event in event_queue.map(|e| e.expect("ixgbed: failed to get next event")) {
match event.user_data {
Source::Irq => {
let mut irq = [0; 8];
- irq_file.read(&mut irq).unwrap();
+ if let Err(err) = irq_file.read(&mut irq) {
+ log::error!("ixgbed: failed to read IRQ: {err}");
+ continue;
+ }
if scheme.adapter().irq() {
- irq_file.write(&mut irq).unwrap();
-
- scheme.tick().unwrap();
+ if let Err(err) = irq_file.write(&mut irq) {
+ log::error!("ixgbed: failed to write IRQ: {err}");
+ continue;
+ }
+
+ if let Err(err) = scheme.tick() {
+ log::error!("ixgbed: failed to handle IRQ: {err}");
+ }
}
}
Source::Scheme => {
- scheme.tick().unwrap();
+ if let Err(err) = scheme.tick() {
+ log::error!("ixgbed: failed to handle scheme op: {err}");
+ }
}
}
}
- unreachable!()
+
+ process::exit(0);
}
diff --git a/drivers/net/rtl8139d/src/main.rs b/drivers/net/rtl8139d/src/main.rs
index d470e814..64335a23 100644
--- a/drivers/net/rtl8139d/src/main.rs
+++ b/drivers/net/rtl8139d/src/main.rs
@@ -1,5 +1,6 @@
use std::io::{Read, Write};
use std::os::unix::io::AsRawFd;
+use std::process;

use driver_network::NetworkScheme;
use event::{user_data, EventQueue};
@@ -32,7 +33,8 @@ fn map_bar(pcid_handle: &mut PciFunctionHandle) -> *mut u8 {
other => log::warn!("BAR {} is {:?} instead of memory BAR", barnum, other),
}
}
- panic!("rtl8139d: failed to find BAR");
+ log::error!("rtl8139d: failed to find BAR");
+ process::exit(1);
}

fn main() {
@@ -61,7 +63,10 @@ fn daemon(daemon: daemon::Daemon, mut pcid_handle: PciFunctionHandle) -> ! {

let mut scheme = NetworkScheme::new(
move || unsafe {
- device::Rtl8139::new(bar as usize).expect("rtl8139d: failed to allocate device")
+ device::Rtl8139::new(bar as usize).unwrap_or_else(|err| {
+ log::error!("rtl8139d: failed to allocate device: {err}");
+ process::exit(1);
+ })
},
daemon,
format!("network.{name}"),
@@ -74,42 +79,76 @@ fn daemon(daemon: daemon::Daemon, mut pcid_handle: PciFunctionHandle) -> ! {
}
}

- let event_queue = EventQueue::<Source>::new().expect("rtl8139d: Could not create event queue.");
+ let mut event_queue = EventQueue::<Source>::new().unwrap_or_else(|err| {
+ log::error!("rtl8139d: failed to create event queue: {err}");
+ process::exit(1);
+ });
event_queue
.subscribe(
irq_file.irq_handle().as_raw_fd() as usize,
Source::Irq,
event::EventFlags::READ,
)
- .unwrap();
+ .unwrap_or_else(|err| {
|
||||
+ log::error!("rtl8139d: failed to subscribe to IRQ fd: {err}");
|
||||
+ process::exit(1);
|
||||
+ });
|
||||
event_queue
|
||||
.subscribe(
|
||||
scheme.event_handle().raw(),
|
||||
Source::Scheme,
|
||||
event::EventFlags::READ,
|
||||
)
|
||||
- .unwrap();
|
||||
-
|
||||
- libredox::call::setrens(0, 0).expect("rtl8139d: failed to enter null namespace");
|
||||
-
|
||||
- scheme.tick().unwrap();
|
||||
+ .unwrap_or_else(|err| {
|
||||
+ log::error!("rtl8139d: failed to subscribe to scheme fd: {err}");
|
||||
+ process::exit(1);
|
||||
+ });
|
||||
+
|
||||
+ libredox::call::setrens(0, 0).unwrap_or_else(|err| {
|
||||
+ log::error!("rtl8139d: failed to enter null namespace: {err}");
|
||||
+ process::exit(1);
|
||||
+ });
|
||||
+
|
||||
+ if let Err(err) = scheme.tick() {
|
||||
+ log::error!("rtl8139d: failed initial scheme tick: {err}");
|
||||
+ process::exit(1);
|
||||
+ }
|
||||
|
||||
- for event in event_queue.map(|e| e.expect("rtl8139d: failed to get next event")) {
|
||||
+ loop {
|
||||
+ let event = match event_queue.next() {
|
||||
+ Some(Ok(event)) => event,
|
||||
+ Some(Err(err)) => {
|
||||
+ log::error!("rtl8139d: failed to get next event: {err}");
|
||||
+ continue;
|
||||
+ }
|
||||
+ None => break,
|
||||
+ };
|
||||
match event.user_data {
|
||||
Source::Irq => {
|
||||
let mut irq = [0; 8];
|
||||
- irq_file.irq_handle().read(&mut irq).unwrap();
|
||||
+ if let Err(err) = irq_file.irq_handle().read(&mut irq) {
|
||||
+ log::error!("rtl8139d: failed to read IRQ: {err}");
|
||||
+ continue;
|
||||
+ }
|
||||
//TODO: This may be causing spurious interrupts
|
||||
if unsafe { scheme.adapter_mut().irq() } {
|
||||
- irq_file.irq_handle().write(&mut irq).unwrap();
|
||||
-
|
||||
- scheme.tick().unwrap();
|
||||
+ if let Err(err) = irq_file.irq_handle().write(&mut irq) {
|
||||
+ log::error!("rtl8139d: failed to write IRQ: {err}");
|
||||
+ continue;
|
||||
+ }
|
||||
+
|
||||
+ if let Err(err) = scheme.tick() {
|
||||
+ log::error!("rtl8139d: failed to handle IRQ tick: {err}");
|
||||
+ }
|
||||
}
|
||||
}
|
||||
Source::Scheme => {
|
||||
- scheme.tick().unwrap();
|
||||
+ if let Err(err) = scheme.tick() {
|
||||
+ log::error!("rtl8139d: failed to handle scheme op: {err}");
|
||||
+ }
|
||||
}
|
||||
}
|
||||
}
|
||||
- unreachable!()
|
||||
+
|
||||
+ process::exit(0);
|
||||
}
|
||||
diff --git a/drivers/net/rtl8168d/src/main.rs b/drivers/net/rtl8168d/src/main.rs
|
||||
index 1d9963a3..bd2fcb1a 100644
|
||||
--- a/drivers/net/rtl8168d/src/main.rs
|
||||
+++ b/drivers/net/rtl8168d/src/main.rs
|
||||
@@ -1,5 +1,6 @@
|
||||
use std::io::{Read, Write};
|
||||
use std::os::unix::io::AsRawFd;
|
||||
+use std::process;
|
||||
|
||||
use driver_network::NetworkScheme;
|
||||
use event::{user_data, EventQueue};
|
||||
@@ -32,7 +33,8 @@ fn map_bar(pcid_handle: &mut PciFunctionHandle) -> *mut u8 {
|
||||
other => log::warn!("BAR {} is {:?} instead of memory BAR", barnum, other),
|
||||
}
|
||||
}
|
||||
- panic!("rtl8168d: failed to find BAR");
|
||||
+ log::error!("rtl8168d: failed to find BAR");
|
||||
+ process::exit(1);
|
||||
}
|
||||
|
||||
fn main() {
|
||||
@@ -61,7 +63,10 @@ fn daemon(daemon: daemon::Daemon, mut pcid_handle: PciFunctionHandle) -> ! {
|
||||
|
||||
let mut scheme = NetworkScheme::new(
|
||||
move || unsafe {
|
||||
- device::Rtl8168::new(bar as usize).expect("rtl8168d: failed to allocate device")
|
||||
+ device::Rtl8168::new(bar as usize).unwrap_or_else(|err| {
|
||||
+ log::error!("rtl8168d: failed to allocate device: {err}");
|
||||
+ process::exit(1);
|
||||
+ })
|
||||
},
|
||||
daemon,
|
||||
format!("network.{name}"),
|
||||
@@ -74,42 +79,76 @@ fn daemon(daemon: daemon::Daemon, mut pcid_handle: PciFunctionHandle) -> ! {
|
||||
}
|
||||
}
|
||||
|
||||
- let event_queue = EventQueue::<Source>::new().expect("rtl8168d: Could not create event queue.");
|
||||
+ let mut event_queue = EventQueue::<Source>::new().unwrap_or_else(|err| {
|
||||
+ log::error!("rtl8168d: failed to create event queue: {err}");
|
||||
+ process::exit(1);
|
||||
+ });
|
||||
event_queue
|
||||
.subscribe(
|
||||
irq_file.irq_handle().as_raw_fd() as usize,
|
||||
Source::Irq,
|
||||
event::EventFlags::READ,
|
||||
)
|
||||
- .unwrap();
|
||||
+ .unwrap_or_else(|err| {
|
||||
+ log::error!("rtl8168d: failed to subscribe to IRQ fd: {err}");
|
||||
+ process::exit(1);
|
||||
+ });
|
||||
event_queue
|
||||
.subscribe(
|
||||
scheme.event_handle().raw(),
|
||||
Source::Scheme,
|
||||
event::EventFlags::READ,
|
||||
)
|
||||
- .unwrap();
|
||||
-
|
||||
- libredox::call::setrens(0, 0).expect("rtl8168d: failed to enter null namespace");
|
||||
-
|
||||
- scheme.tick().unwrap();
|
||||
+ .unwrap_or_else(|err| {
|
||||
+ log::error!("rtl8168d: failed to subscribe to scheme fd: {err}");
|
||||
+ process::exit(1);
|
||||
+ });
|
||||
+
|
||||
+ libredox::call::setrens(0, 0).unwrap_or_else(|err| {
|
||||
+ log::error!("rtl8168d: failed to enter null namespace: {err}");
|
||||
+ process::exit(1);
|
||||
+ });
|
||||
+
|
||||
+ if let Err(err) = scheme.tick() {
|
||||
+ log::error!("rtl8168d: failed initial scheme tick: {err}");
|
||||
+ process::exit(1);
|
||||
+ }
|
||||
|
||||
- for event in event_queue.map(|e| e.expect("rtl8168d: failed to get next event")) {
|
||||
+ loop {
|
||||
+ let event = match event_queue.next() {
|
||||
+ Some(Ok(event)) => event,
|
||||
+ Some(Err(err)) => {
|
||||
+ log::error!("rtl8168d: failed to get next event: {err}");
|
||||
+ continue;
|
||||
+ }
|
||||
+ None => break,
|
||||
+ };
|
||||
match event.user_data {
|
||||
Source::Irq => {
|
||||
let mut irq = [0; 8];
|
||||
- irq_file.irq_handle().read(&mut irq).unwrap();
|
||||
+ if let Err(err) = irq_file.irq_handle().read(&mut irq) {
|
||||
+ log::error!("rtl8168d: failed to read IRQ: {err}");
|
||||
+ continue;
|
||||
+ }
|
||||
//TODO: This may be causing spurious interrupts
|
||||
if unsafe { scheme.adapter_mut().irq() } {
|
||||
- irq_file.irq_handle().write(&mut irq).unwrap();
|
||||
-
|
||||
- scheme.tick().unwrap();
|
||||
+ if let Err(err) = irq_file.irq_handle().write(&mut irq) {
|
||||
+ log::error!("rtl8168d: failed to write IRQ: {err}");
|
||||
+ continue;
|
||||
+ }
|
||||
+
|
||||
+ if let Err(err) = scheme.tick() {
|
||||
+ log::error!("rtl8168d: failed to handle IRQ tick: {err}");
|
||||
+ }
|
||||
}
|
||||
}
|
||||
Source::Scheme => {
|
||||
- scheme.tick().unwrap();
|
||||
+ if let Err(err) = scheme.tick() {
|
||||
+ log::error!("rtl8168d: failed to handle scheme op: {err}");
|
||||
+ }
|
||||
}
|
||||
}
|
||||
}
|
||||
- unreachable!()
|
||||
+
|
||||
+ process::exit(0);
|
||||
}
|
||||
diff --git a/drivers/net/virtio-netd/src/main.rs b/drivers/net/virtio-netd/src/main.rs
|
||||
index 17d168ef..adbd1086 100644
|
||||
--- a/drivers/net/virtio-netd/src/main.rs
|
||||
+++ b/drivers/net/virtio-netd/src/main.rs
|
||||
@@ -3,6 +3,7 @@ mod scheme;
|
||||
use std::fs::File;
|
||||
use std::io::{Read, Write};
|
||||
use std::mem;
|
||||
+use std::process;
|
||||
|
||||
use driver_network::NetworkScheme;
|
||||
use pcid_interface::PciFunctionHandle;
|
||||
@@ -31,8 +32,11 @@ fn main() {
|
||||
}
|
||||
|
||||
fn daemon_runner(daemon: daemon::Daemon, pcid_handle: PciFunctionHandle) -> ! {
|
||||
- deamon(daemon, pcid_handle).unwrap();
|
||||
- unreachable!();
|
||||
+ deamon(daemon, pcid_handle).unwrap_or_else(|err| {
|
||||
+ log::error!("virtio-netd: daemon failed: {err}");
|
||||
+ process::exit(1);
|
||||
+ });
|
||||
+ process::exit(0);
|
||||
}
|
||||
|
||||
fn deamon(
|
||||
@@ -52,7 +56,10 @@ fn deamon(
|
||||
// 0x1000 - virtio-net
|
||||
let pci_config = pcid_handle.config();
|
||||
|
||||
- assert_eq!(pci_config.func.full_device_id.device_id, 0x1000);
|
||||
+ if pci_config.func.full_device_id.device_id != 0x1000 {
|
||||
+ log::error!("virtio-netd: unexpected device ID {:#06x}, expected 0x1000", pci_config.func.full_device_id.device_id);
|
||||
+ process::exit(1);
|
||||
+ }
|
||||
log::info!("virtio-net: initiating startup sequence :^)");
|
||||
|
||||
let device = virtio_core::probe_device(&mut pcid_handle)?;
|
||||
@@ -84,7 +91,8 @@ fn deamon(
|
||||
device.transport.ack_driver_feature(VIRTIO_NET_F_MAC);
|
||||
mac
|
||||
} else {
|
||||
- unimplemented!()
|
||||
+ log::error!("virtio-netd: device does not support MAC feature");
|
||||
+ return Err("virtio-netd: VIRTIO_NET_F_MAC not supported".into());
|
||||
};
|
||||
|
||||
device.transport.finalize_features();
|
||||
@@ -126,11 +134,22 @@ fn deamon(
|
||||
data: 0,
|
||||
})?;
|
||||
|
||||
- libredox::call::setrens(0, 0).expect("virtio-netd: failed to enter null namespace");
|
||||
+ libredox::call::setrens(0, 0).unwrap_or_else(|err| {
|
||||
+ log::error!("virtio-netd: failed to enter null namespace: {err}");
|
||||
+ process::exit(1);
|
||||
+ });
|
||||
|
||||
- scheme.tick()?;
|
||||
+ if let Err(err) = scheme.tick() {
|
||||
+ log::error!("virtio-netd: failed initial scheme tick: {err}");
|
||||
+ process::exit(1);
|
||||
+ }
|
||||
|
||||
loop {
|
||||
- event_queue.read(&mut [0; mem::size_of::<syscall::Event>()])?; // Wait for event
|
||||
- scheme.tick()?;
|
||||
+ if let Err(err) = event_queue.read(&mut [0; mem::size_of::<syscall::Event>()]) {
|
||||
+ log::error!("virtio-netd: failed to read event: {err}");
|
||||
+ continue;
|
||||
+ }
|
||||
+ if let Err(err) = scheme.tick() {
|
||||
+ log::error!("virtio-netd: failed to handle scheme event: {err}");
|
||||
+ }
|
||||
}
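The rewritten loops above all apply one policy: failures during setup (queue creation, subscription, entering the null namespace) log and `process::exit(1)`, while failures inside the event loop log and `continue`. A standalone sketch of that policy, using plain `Result` stand-ins for the Redox-specific calls (`subscribe` and `tick` here are hypothetical, not the real driver APIs):

```rust
use std::process;

// Hypothetical stand-ins for the driver's fallible operations
// (EventQueue::subscribe, NetworkScheme::tick); only the error policy is real.
fn subscribe() -> Result<(), String> {
    Ok(())
}

fn tick(event: i32) -> Result<(), String> {
    if event < 0 {
        Err(format!("bad event {event}"))
    } else {
        Ok(())
    }
}

fn main() {
    // Setup failure is fatal: log and exit cleanly instead of panicking.
    subscribe().unwrap_or_else(|err| {
        eprintln!("driver: failed to subscribe: {err}");
        process::exit(1);
    });

    // Per-event failure is survivable: log and keep serving, so one bad
    // event cannot take the whole daemon down.
    let mut handled = 0;
    for event in [1, -1, 2] {
        if let Err(err) = tick(event) {
            eprintln!("driver: failed to handle event: {err}");
            continue;
        }
        handled += 1;
    }
    // Two of the three events succeed; the drivers above then exit
    // normally via process::exit(0) instead of unreachable!().
    assert_eq!(handled, 2);
}
```

The split matters for a driver daemon: exiting on a transient IRQ read error would drop the NIC, while continuing past a failed subscription would spin on a dead queue.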
@@ -0,0 +1,118 @@
# P2-network-error-handling.patch
#
# Network driver error handling: replace unwrap()/expect()/panic!() with proper
# error propagation and graceful exits across e1000, ixgbe, rtl8139, rtl8168d,
# and virtio-net drivers.
#
# Covers:
# - e1000d/device.rs: replace unreachable!() in DMA array conversion
# - ixgbed/Cargo.toml: add log dependency
# - rtl8139d/device.rs: replace unreachable!() with EIO error
# - rtl8168d/device.rs: replace unreachable!() with EIO error
# - virtio-netd/scheme.rs: DMA allocation error handling for rx buffers
#
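The e1000d item above is the core shape of this patch: a `Vec`-to-fixed-array conversion whose length mismatch becomes a recoverable error instead of `unreachable!()`. A minimal sketch, with a plain `Eio` marker standing in for the driver's `syscall::error::Error::new(EIO)` (the `to_array` helper is hypothetical):

```rust
// `Eio` is a stand-in for syscall::error::Error::new(EIO).
#[derive(Debug, PartialEq)]
struct Eio;

// Convert a Vec into a fixed-size array; a length mismatch surfaces as
// an I/O error rather than a panic, so callers can propagate it with `?`.
fn to_array<T, const N: usize>(vec: Vec<T>) -> Result<[T; N], Eio> {
    vec.try_into().map_err(|_| Eio)
}

fn main() {
    // Matching length: conversion succeeds.
    assert_eq!(to_array::<u32, 3>(vec![1, 2, 3]), Ok([1, 2, 3]));
    // Mismatched length: a recoverable error, not unreachable!().
    assert_eq!(to_array::<u32, 4>(vec![1, 2, 3]), Err(Eio));
}
```

In the real `dma_array`, the length always matches by construction; mapping the error anyway removes the panic path from release binaries at no cost.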
diff --git a/drivers/net/e1000d/src/device.rs b/drivers/net/e1000d/src/device.rs
index 4c518f30..0e42d72b 100644
--- a/drivers/net/e1000d/src/device.rs
+++ b/drivers/net/e1000d/src/device.rs
@@ -3,7 +3,7 @@ use std::{cmp, mem, ptr, slice, thread, time};

use driver_network::NetworkAdapter;

-use syscall::error::Result;
+use syscall::error::{Error, Result, EIO};

use common::dma::Dma;

@@ -207,12 +207,11 @@ impl NetworkAdapter for Intel8254x {
}

fn dma_array<T, const N: usize>() -> Result<[Dma<T>; N]> {
- Ok((0..N)
.map(|_| Ok(unsafe { Dma::zeroed()?.assume_init() }))
- .collect::<Result<Vec<_>>>()?
- .try_into()
- .unwrap_or_else(|_| unreachable!()))
+ let vec: Vec<Dma<T>> = (0..N)
+ .collect::<Result<Vec<_>>>()?;
+ vec.try_into().map_err(|_| Error::new(EIO))
}
impl Intel8254x {
pub unsafe fn new(base: usize) -> Result<Self> {

diff --git a/drivers/net/ixgbed/Cargo.toml b/drivers/net/ixgbed/Cargo.toml
index d97ff398..fcaf4b19 100644
--- a/drivers/net/ixgbed/Cargo.toml
+++ b/drivers/net/ixgbed/Cargo.toml
@@ -7,7 +7,8 @@ edition = "2021"
[dependencies]
bitflags.workspace = true
libredox.workspace = true
+log.workspace = true
redox_event.workspace = true
redox_syscall.workspace = true


diff --git a/drivers/net/rtl8139d/src/device.rs b/drivers/net/rtl8139d/src/device.rs
index 37167ee2..d7428132 100644
--- a/drivers/net/rtl8139d/src/device.rs
+++ b/drivers/net/rtl8139d/src/device.rs
@@ -215,8 +215,8 @@ impl Rtl8139 {
.map(|_| Ok(Dma::zeroed()?.assume_init()))
.collect::<Result<Vec<_>>>()?
.try_into()
- .unwrap_or_else(|_| unreachable!()),
+ .map_err(|_| Error::new(EIO))?,
transmit_i: 0,
mac_address: [0; 6],
};

diff --git a/drivers/net/rtl8168d/src/device.rs b/drivers/net/rtl8168d/src/device.rs
index ae545ec4..7229a52d 100644
--- a/drivers/net/rtl8168d/src/device.rs
+++ b/drivers/net/rtl8168d/src/device.rs
@@ -177,7 +177,7 @@ impl Rtl8168 {
.map(|_| Ok(Dma::zeroed()?.assume_init()))
.collect::<Result<Vec<_>>>()?
.try_into()
- .unwrap_or_else(|_| unreachable!()),
+ .map_err(|_| Error::new(EIO))?,

receive_ring: Dma::zeroed()?.assume_init(),
receive_i: 0,
@@ -185,8 +185,8 @@ impl Rtl8168 {
.map(|_| Ok(Dma::zeroed()?.assume_init()))
.collect::<Result<Vec<_>>>()?
.try_into()
- .unwrap_or_else(|_| unreachable!()),
+ .map_err(|_| Error::new(EIO))?,
transmit_ring: Dma::zeroed()?.assume_init(),
transmit_i: 0,
transmit_buffer_h: [Dma::zeroed()?.assume_init()],

diff --git a/drivers/net/virtio-netd/src/scheme.rs b/drivers/net/virtio-netd/src/scheme.rs
index 59b3b93e..d0acb2ba 100644
--- a/drivers/net/virtio-netd/src/scheme.rs
+++ b/drivers/net/virtio-netd/src/scheme.rs
@@ -27,11 +27,16 @@ impl<'a> VirtioNet<'a> {
// Populate all of the `rx_queue` with buffers to maximize performence.
let mut rx_buffers = vec![];
for i in 0..(rx.descriptor_len() as usize) {
- rx_buffers.push(unsafe {
- Dma::<[u8]>::zeroed_slice(MAX_BUFFER_LEN)
- .unwrap()
- .assume_init()
- });
+ let buf = unsafe {
+ match Dma::<[u8]>::zeroed_slice(MAX_BUFFER_LEN) {
+ Ok(dma) => dma.assume_init(),
+ Err(err) => {
+ log::error!("virtio-netd: failed to allocate rx buffer: {err}");
+ continue;
+ }
+ }
+ };
+ rx_buffers.push(buf);

let chain = ChainBuilder::new()
.chain(Buffer::new_unsized(&rx_buffers[i]).flags(DescriptorFlags::WRITE_ONLY))
@@ -0,0 +1,601 @@
# P2-storage-error-handling.patch
#
# Storage driver error handling: replace unwrap()/expect()/panic!() with proper
# error propagation and graceful exits across AHCI, IDE, NVMe, and VirtIO block drivers.
#
# Covers:
# - ahcid/disk_ata.rs: replace unreachable!() with EIO error
# - ahcid/disk_atapi.rs: replace unreachable!() with EBADF error
# - ahcid/hba.rs: DMA allocation error handling
# - ided/ide.rs: assert→debug_assert, try_into error handling
# - nvmed/executor.rs: executor initialization error handling
# - nvmed/identify.rs: DMA allocation, unreachable!() fallback
# - nvmed/mod.rs: assert→debug_assert, unwrap→proper error/exit
# - nvmed/queues.rs: unreachable!()→safe fallback
# - virtio-blkd/scheme.rs: DMA allocation error handling, assert→if check
#
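The ahcid/hba.rs item above shows the pattern for allocations: log the failure, then hand the caller an errno-style error instead of unwrapping. A sketch with a hypothetical fallible allocator in place of `common::dma::Dma::new` (both `dma_new` and `identify_buffer` are illustration-only names):

```rust
// `Eio` is a stand-in for syscall::error::Error::new(EIO).
#[derive(Debug, PartialEq)]
struct Eio;

// Hypothetical fallible allocator standing in for Dma::new; it rejects
// zero-length requests so the error path can actually run.
fn dma_new(len: usize) -> Result<Vec<u8>, String> {
    if len == 0 {
        Err("zero-length DMA allocation".into())
    } else {
        Ok(vec![0u8; len])
    }
}

// The hba.rs shape: log the allocation failure, then return EIO so the
// caller can propagate it with `?` instead of panicking mid-command.
fn identify_buffer(len: usize) -> Result<Vec<u8>, Eio> {
    dma_new(len).map_err(|err| {
        eprintln!("ahcid: failed to allocate DMA buffer: {err}");
        Eio
    })
}

fn main() {
    assert!(identify_buffer(512).is_ok());
    assert_eq!(identify_buffer(0), Err(Eio));
}
```

Logging at the failure site and returning the mapped error keeps the detailed message near the cause while the caller only sees the errno it can act on.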
diff --git a/drivers/storage/ahcid/src/ahci/disk_ata.rs b/drivers/storage/ahcid/src/ahci/disk_ata.rs
index 4f83c51d..7423603b 100644
--- a/drivers/storage/ahcid/src/ahci/disk_ata.rs
+++ b/drivers/storage/ahcid/src/ahci/disk_ata.rs
@@ -1,7 +1,7 @@
use std::convert::TryInto;
use std::ptr;

-use syscall::error::Result;
+use syscall::error::{Error, Result, EIO};

use common::dma::Dma;

@@ -39,7 +39,7 @@ impl DiskATA {
.map(|_| Ok(unsafe { Dma::zeroed()?.assume_init() }))
.collect::<Result<Vec<_>>>()?
.try_into()
- .unwrap_or_else(|_| unreachable!());
+ .map_err(|_| Error::new(EIO))?;

let mut fb = unsafe { Dma::zeroed()?.assume_init() };
let buf = unsafe { Dma::zeroed()?.assume_init() };
diff --git a/drivers/storage/ahcid/src/ahci/disk_atapi.rs b/drivers/storage/ahcid/src/ahci/disk_atapi.rs
index a0e75c09..8fbdfbef 100644
--- a/drivers/storage/ahcid/src/ahci/disk_atapi.rs
+++ b/drivers/storage/ahcid/src/ahci/disk_atapi.rs
@@ -37,7 +37,7 @@ impl DiskATAPI {
.map(|_| Ok(unsafe { Dma::zeroed()?.assume_init() }))
.collect::<Result<Vec<_>>>()?
.try_into()
- .unwrap_or_else(|_| unreachable!());
+ .map_err(|_| Error::new(EBADF))?;

let mut fb = unsafe { Dma::zeroed()?.assume_init() };
let mut buf = unsafe { Dma::zeroed()?.assume_init() };
diff --git a/drivers/storage/ahcid/src/ahci/hba.rs b/drivers/storage/ahcid/src/ahci/hba.rs
index bea8792c..11a3d4ae 100644
--- a/drivers/storage/ahcid/src/ahci/hba.rs
+++ b/drivers/storage/ahcid/src/ahci/hba.rs
@@ -178,8 +178,11 @@ impl HbaPort {
clb: &mut Dma<[HbaCmdHeader; 32]>,
ctbas: &mut [Dma<HbaCmdTable>; 32],
) -> Result<u64> {
- let dest: Dma<[u16; 256]> = Dma::new([0; 256]).unwrap();
+ let dest: Dma<[u16; 256]> = Dma::new([0; 256]).map_err(|err| {
+ error!("ahcid: failed to allocate DMA buffer: {err}");
+ Error::new(EIO)
+ })?;

let slot = self
.ata_start(clb, ctbas, |cmdheader, cmdfis, prdt_entries, _acmd| {

diff --git a/drivers/storage/ided/src/ide.rs b/drivers/storage/ided/src/ide.rs
index 5faf3250..094e5889 100644
--- a/drivers/storage/ided/src/ide.rs
+++ b/drivers/storage/ided/src/ide.rs
@@ -184,10 +184,10 @@ impl Disk for AtaDisk {
let block = start_block + (count as u64) / 512;

//TODO: support other LBA modes
- assert!(block < 0x1_0000_0000_0000);
+ debug_assert!(block < 0x1_0000_0000_0000);

let sectors = (chunk.len() + 511) / 512;
- assert!(sectors <= 128);
+ debug_assert!(sectors <= 128);

log::trace!(
"IDE read chan {} dev {} block {:#x} count {:#x}",
@@ -205,7 +205,7 @@ impl Disk for AtaDisk {
// Make PRDT EOT match chunk size
for i in 0..sectors {
chan.prdt[i] = PrdtEntry {
- phys: (chan.buf.physical() + i * 512).try_into().unwrap(),
+ phys: (chan.buf.physical() + i * 512).try_into().map_err(|_| Error::new(EIO))?,
size: 512,
flags: if i + 1 == sectors {
1 << 15 // End of table
@@ -216,7 +216,7 @@ impl Disk for AtaDisk {
}
// Set PRDT
let prdt = chan.prdt.physical();
- chan.busmaster_prdt.write(prdt.try_into().unwrap());
+ chan.busmaster_prdt.write(prdt.try_into().map_err(|_| Error::new(EIO))?);
// Set to read
chan.busmaster_command.writef(1 << 3, true);
// Clear interrupt and error bits
@@ -325,10 +325,10 @@ impl Disk for AtaDisk {
let block = start_block + (count as u64) / 512;

//TODO: support other LBA modes
- assert!(block < 0x1_0000_0000_0000);
+ debug_assert!(block < 0x1_0000_0000_0000);

let sectors = (chunk.len() + 511) / 512;
- assert!(sectors <= 128);
+ debug_assert!(sectors <= 128);

log::trace!(
"IDE write chan {} dev {} block {:#x} count {:#x}",
@@ -346,7 +346,7 @@ impl Disk for AtaDisk {
// Make PRDT EOT match chunk size
for i in 0..sectors {
chan.prdt[i] = PrdtEntry {
- phys: (chan.buf.physical() + i * 512).try_into().unwrap(),
+ phys: (chan.buf.physical() + i * 512).try_into().map_err(|_| Error::new(EIO))?,
size: 512,
flags: if i + 1 == sectors {
1 << 15 // End of table
@@ -357,8 +357,8 @@ impl Disk for AtaDisk {
}
// Set PRDT
let prdt = chan.prdt.physical();
- chan.busmaster_prdt.write(prdt.try_into().unwrap());
+ chan.busmaster_prdt.write(prdt.try_into().map_err(|_| Error::new(EIO))?);
// Set to write
chan.busmaster_command.writef(1 << 3, false);
// Clear interrupt and error bits
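The `assert` → `debug_assert` changes in this file keep the invariant checks in debug builds while removing the release-build panic; callers are expected to chunk requests to at most 128 sectors. A short illustration of the sector arithmetic (the `sectors_for` helper is a hypothetical name, not from the driver):

```rust
// Sectors needed for a chunk, rounding up to the 512-byte sector size.
fn sectors_for(len: usize) -> usize {
    let sectors = (len + 511) / 512;
    // Checked in debug builds only; release builds skip the panic and
    // rely on callers chunking requests to at most 128 sectors.
    debug_assert!(sectors <= 128);
    sectors
}

fn main() {
    assert_eq!(sectors_for(512), 1);
    assert_eq!(sectors_for(513), 2);
    assert_eq!(sectors_for(128 * 512), 128);
}
```

The tradeoff: a violated invariant in release no longer aborts the disk daemon, at the cost of proceeding with a value the hardware path may reject later.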

diff --git a/drivers/storage/nvmed/src/nvme/executor.rs b/drivers/storage/nvmed/src/nvme/executor.rs
index 6242fa98..c1435e88 100644
--- a/drivers/storage/nvmed/src/nvme/executor.rs
+++ b/drivers/storage/nvmed/src/nvme/executor.rs
@@ -34,7 +34,12 @@ impl Hardware for NvmeHw {
&VTABLE
}
fn current() -> std::rc::Rc<executor::LocalExecutor<Self>> {
- THE_EXECUTOR.with(|exec| Rc::clone(exec.borrow().as_ref().unwrap()))
+ THE_EXECUTOR.with(|exec| {
+ Rc::clone(exec.borrow().as_ref().unwrap_or_else(|| {
+ log::error!("nvmed: internal error: executor not initialized");
+ std::process::exit(1);
+ }))
+ })
}
fn try_submit(
nvme: &Arc<Nvme>,
diff --git a/drivers/storage/nvmed/src/nvme/identify.rs b/drivers/storage/nvmed/src/nvme/identify.rs
index 05e5b9b2..b1b6e959 100644
--- a/drivers/storage/nvmed/src/nvme/identify.rs
+++ b/drivers/storage/nvmed/src/nvme/identify.rs
@@ -126,7 +126,7 @@ impl LbaFormat {
0b01 => RelativePerformance::Better,
0b10 => RelativePerformance::Good,
0b11 => RelativePerformance::Degraded,
- _ => unreachable!(),
+ _ => RelativePerformance::Degraded,
}
}
pub fn is_available(&self) -> bool {
@@ -153,7 +153,14 @@ impl Nvme {
/// Returns the serial number, model, and firmware, in that order.
pub async fn identify_controller(&self) {
// TODO: Use same buffer
- let data: Dma<IdentifyControllerData> = unsafe { Dma::zeroed().unwrap().assume_init() };
+ let data: Dma<IdentifyControllerData> = unsafe {
+ Dma::zeroed()
+ .map(|dma| dma.assume_init())
+ .unwrap_or_else(|err| {
+ log::error!("nvmed: failed to allocate identify DMA: {err}");
+ std::process::exit(1);
+ })
+ };

// println!(" - Attempting to identify controller");
let comp = self
@@ -182,7 +189,14 @@ impl Nvme {
}
pub async fn identify_namespace_list(&self, base: u32) -> Vec<u32> {
// TODO: Use buffer
- let data: Dma<[u32; 1024]> = unsafe { Dma::zeroed().unwrap().assume_init() };
+ let data: Dma<[u32; 1024]> = unsafe {
+ Dma::zeroed()
+ .map(|dma| dma.assume_init())
+ .unwrap_or_else(|err| {
+ log::error!("nvmed: failed to allocate namespace list DMA: {err}");
+ std::process::exit(1);
+ })
+ };

// println!(" - Attempting to retrieve namespace ID list");
let comp = self
@@ -198,7 +212,14 @@ impl Nvme {
}
pub async fn identify_namespace(&self, nsid: u32) -> NvmeNamespace {
//TODO: Use buffer
- let data: Dma<IdentifyNamespaceData> = unsafe { Dma::zeroed().unwrap().assume_init() };
+ let data: Dma<IdentifyNamespaceData> = unsafe {
+ Dma::zeroed()
+ .map(|dma| dma.assume_init())
+ .unwrap_or_else(|err| {
+ log::error!("nvmed: failed to allocate namespace DMA: {err}");
+ std::process::exit(1);
+ })
+ };

log::debug!("Attempting to identify namespace {nsid}");
let comp = self
@@ -216,7 +237,10 @@ impl Nvme {
let block_size = data
.formatted_lba_size()
.lba_data_size()
- .expect("nvmed: error: size outside 512-2^64 range");
+ .unwrap_or_else(|| {
+ log::error!("nvmed: error: size outside 512-2^64 range");
+ std::process::exit(1);
+ });
log::debug!("NVME block size: {}", block_size);

NvmeNamespace {
diff --git a/drivers/storage/nvmed/src/nvme/mod.rs b/drivers/storage/nvmed/src/nvme/mod.rs
index 682ee933..90a25d5b 100644
--- a/drivers/storage/nvmed/src/nvme/mod.rs
+++ b/drivers/storage/nvmed/src/nvme/mod.rs
@@ -160,7 +160,15 @@ impl Nvme {
}
fn cur_thread_ctxt(&self) -> Arc<ReentrantMutex<ThreadCtxt>> {
// TODO: multi-threading
- Arc::clone(self.thread_ctxts.read().get(&0).unwrap())
+ Arc::clone(
+ self.thread_ctxts
+ .read()
+ .get(&0)
+ .unwrap_or_else(|| {
+ log::error!("nvmed: internal error: thread context 0 missing");
+ std::process::exit(1);
+ }),
+ )
}

pub unsafe fn submission_queue_tail(&self, qid: u16, tail: u16) {
@@ -208,10 +216,22 @@ impl Nvme {
}

for (qid, iv) in self.cq_ivs.get_mut().iter_mut() {
- let ctxt = thread_ctxts.get(&0).unwrap().lock();
+ let ctxt = match thread_ctxts.get(&0) {
+ Some(c) => c.lock(),
+ None => {
+ log::error!("nvmed: internal error: thread context 0 missing");
+ return Err(Error::new(EIO));
+ }
+ };
let queues = ctxt.queues.borrow();

- let &(ref cq, ref sq) = queues.get(qid).unwrap();
+ let (cq, sq) = match queues.get(qid) {
+ Some(pair) => pair,
+ None => {
+ log::error!("nvmed: internal error: queue {qid} missing");
+ return Err(Error::new(EIO));
+ }
+ };
log::debug!(
"iv {iv} [cq {qid}: {:X}, {}] [sq {qid}: {:X}, {}]",
cq.data.physical(),
@@ -222,7 +242,13 @@ impl Nvme {
}

{
- let main_ctxt = thread_ctxts.get(&0).unwrap().lock();
+ let main_ctxt = match thread_ctxts.get(&0) {
+ Some(c) => c.lock(),
+ None => {
+ log::error!("nvmed: internal error: thread context 0 missing");
+ return Err(Error::new(EIO));
+ }
+ };

for (i, prp) in main_ctxt.buffer_prp.borrow_mut().iter_mut().enumerate() {
*prp = (main_ctxt.buffer.borrow_mut().physical() + i * 4096) as u64;
@@ -231,7 +257,13 @@ impl Nvme {
let regs = self.regs.get_mut();

let mut queues = main_ctxt.queues.borrow_mut();
- let (asq, acq) = queues.get_mut(&0).unwrap();
+ let (asq, acq) = match queues.get_mut(&0) {
+ Some(pair) => pair,
+ None => {
+ log::error!("nvmed: internal error: admin queue pair missing");
+ return Err(Error::new(EIO));
+ }
+ };
regs.aqa
.write(((acq.data.len() as u32 - 1) << 16) | (asq.data.len() as u32 - 1));
regs.asq_low.write(asq.data.physical() as u32);
@@ -281,14 +313,14 @@ impl Nvme {
let vector = vector as u8;

if masked {
- assert_ne!(
+ debug_assert_ne!(
to_clear & (1 << vector),
(1 << vector),
"nvmed: internal error: cannot both mask and set"
);
to_mask |= 1 << vector;
} else {
- assert_ne!(
+ debug_assert_ne!(
to_mask & (1 << vector),
(1 << vector),
"nvmed: internal error: cannot both mask and set"
@@ -326,22 +358,27 @@ impl Nvme {
cmd_init: impl FnOnce(CmdId) -> NvmeCmd,
fail: impl FnOnce(),
) -> Option<(CqId, CmdId)> {
- match ctxt.queues.borrow_mut().get_mut(&sq_id).unwrap() {
- (sq, _cq) => {
- if sq.is_full() {
- fail();
- return None;
- }
- let cmd_id = sq.tail;
- let tail = sq.submit_unchecked(cmd_init(cmd_id));
-
- // TODO: Submit in bulk
- unsafe {
- self.submission_queue_tail(sq_id, tail);
- }
- Some((sq_id, cmd_id))
+ let mut queues_ref = ctxt.queues.borrow_mut();
+ let (sq, _cq) = match queues_ref.get_mut(&sq_id) {
+ Some(pair) => pair,
+ None => {
+ log::error!("nvmed: internal error: submission queue {sq_id} missing");
+ fail();
+ return None;
}
+ };
+ if sq.is_full() {
+ fail();
+ return None;
+ }
+ let cmd_id = sq.tail;
+ let tail = sq.submit_unchecked(cmd_init(cmd_id));
+
+ // TODO: Submit in bulk
+ unsafe {
+ self.submission_queue_tail(sq_id, tail);
}
+ Some((sq_id, cmd_id))
}

pub async fn create_io_completion_queue(
@@ -349,13 +386,19 @@ impl Nvme {
io_cq_id: CqId,
vector: Option<Iv>,
) -> NvmeCompQueue {
- let queue = NvmeCompQueue::new().expect("nvmed: failed to allocate I/O completion queue");
-
- let len = u16::try_from(queue.data.len())
- .expect("nvmed: internal error: I/O CQ longer than 2^16 entries");
- let raw_len = len
- .checked_sub(1)
- .expect("nvmed: internal error: CQID 0 for I/O CQ");
+ let queue = NvmeCompQueue::new().unwrap_or_else(|err| {
+ log::error!("nvmed: failed to allocate I/O completion queue: {err}");
+ std::process::exit(1);
+ });
+
+ let len = u16::try_from(queue.data.len()).unwrap_or_else(|_| {
+ log::error!("nvmed: internal error: I/O CQ longer than 2^16 entries");
+ std::process::exit(1);
+ });
+ let raw_len = len.checked_sub(1).unwrap_or_else(|| {
+ log::error!("nvmed: internal error: CQID 0 for I/O CQ");
+ std::process::exit(1);
+ });

let comp = self
.submit_and_complete_admin_command(|cid| {
@@ -370,22 +413,28 @@ impl Nvme {
.await;

/*match comp.status.specific {
- 1 => panic!("invalid queue identifier"),
- 2 => panic!("invalid queue size"),
- 8 => panic!("invalid interrupt vector"),
+ 1 => { log::error!("nvmed: invalid queue identifier"); std::process::exit(1); }
+ 2 => { log::error!("nvmed: invalid queue size"); std::process::exit(1); }
+ 8 => { log::error!("nvmed: invalid interrupt vector"); std::process::exit(1); }
_ => (),
}*/

queue
}
pub async fn create_io_submission_queue(&self, io_sq_id: SqId, io_cq_id: CqId) -> NvmeCmdQueue {
- let q = NvmeCmdQueue::new().expect("failed to create submission queue");
-
- let len = u16::try_from(q.data.len())
- .expect("nvmed: internal error: I/O SQ longer than 2^16 entries");
- let raw_len = len
- .checked_sub(1)
- .expect("nvmed: internal error: SQID 0 for I/O SQ");
+ let q = NvmeCmdQueue::new().unwrap_or_else(|err| {
+ log::error!("nvmed: failed to create submission queue: {err}");
+ std::process::exit(1);
+ });
+
+ let len = u16::try_from(q.data.len()).unwrap_or_else(|_| {
+ log::error!("nvmed: internal error: I/O SQ longer than 2^16 entries");
+ std::process::exit(1);
+ });
+ let raw_len = len.checked_sub(1).unwrap_or_else(|| {
+ log::error!("nvmed: internal error: SQID 0 for I/O SQ");
+ std::process::exit(1);
+ });

let comp = self
.submit_and_complete_admin_command(|cid| {
@@ -399,9 +448,9 @@ impl Nvme {
})
.await;
/*match comp.status.specific {
- 0 => panic!("completion queue invalid"),
- 1 => panic!("invalid queue identifier"),
- 2 => panic!("invalid queue size"),
+ 0 => { log::error!("nvmed: completion queue invalid"); std::process::exit(1); }
+ 1 => { log::error!("nvmed: invalid queue identifier"); std::process::exit(1); }
+ 2 => { log::error!("nvmed: invalid queue size"); std::process::exit(1); }
_ => (),
}*/

@@ -431,7 +480,10 @@ impl Nvme {
self.thread_ctxts
.read()
.get(&0)
- .unwrap()
+ .unwrap_or_else(|| {
+ log::error!("nvmed: internal error: thread context 0 missing");
+ std::process::exit(1);
+ })
.lock()
.queues
.borrow_mut()
@@ -497,8 +549,8 @@ impl Nvme {
for chunk in buf.chunks_mut(/* TODO: buf len */ 8192) {
let blocks = (chunk.len() + block_size - 1) / block_size;

- assert!(blocks > 0);
- assert!(blocks <= 0x1_0000);
+ debug_assert!(blocks > 0);
+ debug_assert!(blocks <= 0x1_0000);

self.namespace_rw(&*ctxt, namespace, lba, (blocks - 1) as u16, false)
|
||||
.await?;
|
||||
@@ -525,8 +577,8 @@ impl Nvme {
|
||||
for chunk in buf.chunks(/* TODO: buf len */ 8192) {
|
||||
let blocks = (chunk.len() + block_size - 1) / block_size;
|
||||
|
||||
- assert!(blocks > 0);
|
||||
- assert!(blocks <= 0x1_0000);
|
||||
+ debug_assert!(blocks > 0);
|
||||
+ debug_assert!(blocks <= 0x1_0000);
|
||||
|
||||
ctxt.buffer.borrow_mut()[..chunk.len()].copy_from_slice(chunk);
|
||||
|
||||
diff --git a/drivers/storage/nvmed/src/nvme/queues.rs b/drivers/storage/nvmed/src/nvme/queues.rs
index a3712aeb..438c905c 100644
--- a/drivers/storage/nvmed/src/nvme/queues.rs
+++ b/drivers/storage/nvmed/src/nvme/queues.rs
@@ -145,8 +145,8 @@ impl Status {
3 => Self::PathRelatedStatus(code),
4..=6 => Self::Rsvd(code),
7 => Self::Vendor(code),
- _ => unreachable!(),
+ _ => Self::Vendor(code),
}
}
}

diff --git a/drivers/storage/virtio-blkd/src/scheme.rs b/drivers/storage/virtio-blkd/src/scheme.rs
index ec4ecf73..39fb24a8 100644
--- a/drivers/storage/virtio-blkd/src/scheme.rs
+++ b/drivers/storage/virtio-blkd/src/scheme.rs
@@ -15,19 +15,34 @@ trait BlkExtension {

impl BlkExtension for Queue<'_> {
async fn read(&self, block: u64, target: &mut [u8]) -> usize {
- let req = Dma::new(BlockVirtRequest {
+ let req = match Dma::new(BlockVirtRequest {
ty: BlockRequestTy::In,
reserved: 0,
sector: block,
- })
- .unwrap();
+ }) {
+ Ok(req) => req,
+ Err(err) => {
+ log::error!("virtio-blkd: failed to allocate read request DMA: {err}");
+ return 0;
+ }
+ };

let result = unsafe {
- Dma::<[u8]>::zeroed_slice(target.len())
- .unwrap()
- .assume_init()
+ match Dma::<[u8]>::zeroed_slice(target.len()) {
+ Ok(dma) => dma.assume_init(),
+ Err(err) => {
+ log::error!("virtio-blkd: failed to allocate read buffer DMA: {err}");
+ return 0;
+ }
+ }
+ };
+ let status = match Dma::new(u8::MAX) {
+ Ok(s) => s,
+ Err(err) => {
+ log::error!("virtio-blkd: failed to allocate read status DMA: {err}");
+ return 0;
+ }
};
- let status = Dma::new(u8::MAX).unwrap();

let chain = ChainBuilder::new()
.chain(Buffer::new(&req))
@@ -37,28 +52,46 @@ impl BlkExtension for Queue<'_> {

// XXX: Subtract 1 because the of status byte.
let written = self.send(chain).await as usize - 1;
- assert_eq!(*status, 0);
+ if *status != 0 {
+ log::error!("virtio-blkd: read failed with status {}", *status);
+ return 0;
+ }

target[..written].copy_from_slice(&result);
written
}

async fn write(&self, block: u64, target: &[u8]) -> usize {
- let req = Dma::new(BlockVirtRequest {
+ let req = match Dma::new(BlockVirtRequest {
ty: BlockRequestTy::Out,
reserved: 0,
sector: block,
- })
- .unwrap();
+ }) {
+ Ok(req) => req,
+ Err(err) => {
+ log::error!("virtio-blkd: failed to allocate write request DMA: {err}");
+ return 0;
+ }
+ };

let mut result = unsafe {
- Dma::<[u8]>::zeroed_slice(target.len())
- .unwrap()
- .assume_init()
+ match Dma::<[u8]>::zeroed_slice(target.len()) {
+ Ok(dma) => dma.assume_init(),
+ Err(err) => {
+ log::error!("virtio-blkd: failed to allocate write buffer DMA: {err}");
+ return 0;
+ }
+ }
};
result.copy_from_slice(target.as_ref());

- let status = Dma::new(u8::MAX).unwrap();
+ let status = match Dma::new(u8::MAX) {
+ Ok(s) => s,
+ Err(err) => {
+ log::error!("virtio-blkd: failed to allocate write status DMA: {err}");
+ return 0;
+ }
+ };

let chain = ChainBuilder::new()
.chain(Buffer::new(&req))
@@ -67,7 +100,10 @@ impl BlkExtension for Queue<'_> {
.build();

self.send(chain).await as usize;
- assert_eq!(*status, 0);
+ if *status != 0 {
+ log::error!("virtio-blkd: write failed with status {}", *status);
+ return 0;
+ }

target.len()
}
@@ -0,0 +1,158 @@
# P2-usb-pm-and-drivers.patch
#
# USB power management and driver interface improvements:
# suspend/resume commands, SCSI driver enablement, PortPmState type,
# IRQ reactor staged port state fallback.
#
# Covers:
# - usbctl/main.rs: pm-state, suspend, resume subcommands
# - xhcid/drivers.toml: enable SCSI over USB driver (was commented out)
# - xhcid/driver_interface.rs: PortPmState enum, suspend/resume/port_pm_state methods
# - xhcid/irq_reactor.rs: staged_port_states fallback in with_ring/with_ring_mut
#
diff --git a/drivers/usb/usbctl/src/main.rs b/drivers/usb/usbctl/src/main.rs
index 9b5773d9..232f7cfc 100644
--- a/drivers/usb/usbctl/src/main.rs
+++ b/drivers/usb/usbctl/src/main.rs
@@ -15,6 +15,9 @@ fn main() {
Command::new("port")
.arg(Arg::new("PORT").num_args(1).required(true))
.subcommand(Command::new("status"))
+ .subcommand(Command::new("pm-state"))
+ .subcommand(Command::new("suspend"))
+ .subcommand(Command::new("resume"))
.subcommand(
Command::new("endpoint")
.arg(Arg::new("ENDPOINT_NUM").num_args(1).required(true))
@@ -38,7 +41,16 @@ fn main() {
if let Some(_status_scmd_matches) = port_scmd_matches.subcommand_matches("status") {
let state = handle.port_state().expect("Failed to get port state");
println!("{}", state.as_str());
+ } else if let Some(_pm_state_scmd_matches) = port_scmd_matches.subcommand_matches("pm-state") {
+ let state = handle
+ .port_pm_state()
+ .expect("Failed to get port power-management state");
+ println!("{}", state.as_str());
+ } else if let Some(_suspend_scmd_matches) = port_scmd_matches.subcommand_matches("suspend") {
+ handle.suspend_device().expect("Failed to suspend device");
+ } else if let Some(_resume_scmd_matches) = port_scmd_matches.subcommand_matches("resume") {
+ handle.resume_device().expect("Failed to resume device");
} else if let Some(endp_scmd_matches) = port_scmd_matches.subcommand_matches("endpoint") {
let endp_num = endp_scmd_matches

.get_one::<String>("ENDPOINT_NUM")
diff --git a/drivers/usb/xhcid/drivers.toml b/drivers/usb/xhcid/drivers.toml
index 83c90e23..470ec063 100644
--- a/drivers/usb/xhcid/drivers.toml
+++ b/drivers/usb/xhcid/drivers.toml
@@ -1,10 +1,9 @@
-#TODO: causes XHCI errors
-#[[drivers]]
-#name = "SCSI over USB"
-#class = 8 # Mass Storage class
-#subclass = 6 # SCSI transparent command set
-#command = ["usbscsid", "$SCHEME", "$PORT", "$IF_PROTO"]
+[[drivers]]
+name = "SCSI over USB"
+class = 8 # Mass Storage class
+subclass = 6 # SCSI transparent command set
+command = ["usbscsid", "$SCHEME", "$PORT", "$IF_PROTO"]

[[drivers]]
name = "USB HUB"

diff --git a/drivers/usb/xhcid/src/driver_interface.rs b/drivers/usb/xhcid/src/driver_interface.rs
index 727f8d7e..82f839ae 100644
--- a/drivers/usb/xhcid/src/driver_interface.rs
+++ b/drivers/usb/xhcid/src/driver_interface.rs
@@ -444,6 +444,33 @@ impl str::FromStr for PortState {
}
}

+#[repr(u8)]
+#[derive(Clone, Copy, Debug, Eq, Hash, PartialEq, Serialize, Deserialize)]
+pub enum PortPmState {
+ Active,
+ Suspended,
+}
+impl PortPmState {
+ pub fn as_str(&self) -> &'static str {
+ match self {
+ Self::Active => "active",
+ Self::Suspended => "suspended",
+ }
+ }
+}
+
+impl str::FromStr for PortPmState {
+ type Err = Invalid;
+
+ fn from_str(s: &str) -> result::Result<Self, Self::Err> {
+ Ok(match s {
+ "active" => Self::Active,
+ "suspended" => Self::Suspended,
+ _ => return Err(Invalid("read reserved port PM state")),
+ })
+ }
+}
+
#[repr(u8)]
#[derive(Clone, Copy, Debug, Eq, Hash, PartialEq, Serialize, Deserialize)]
pub enum EndpointStatus {
@@ -560,6 +587,16 @@ impl XhciClientHandle {
let _bytes_written = file.write(&[])?;
Ok(())
}
+ pub fn suspend_device(&self) -> result::Result<(), XhciClientHandleError> {
+ let file = self.fd.openat("suspend", libredox::flag::O_WRONLY, 0)?;
+ let _bytes_written = file.write(&[])?;
+ Ok(())
+ }
+ pub fn resume_device(&self) -> result::Result<(), XhciClientHandleError> {
+ let file = self.fd.openat("resume", libredox::flag::O_WRONLY, 0)?;
+ let _bytes_written = file.write(&[])?;
+ Ok(())
+ }
pub fn get_standard_descs(&self) -> result::Result<DevDesc, XhciClientHandleError> {
let json = self.read("descriptors")?;
Ok(serde_json::from_slice(&json)?)
@@ -582,7 +619,11 @@ impl XhciClientHandle {
let string = self.read_to_string("state")?;
Ok(string.parse()?)
}
+ pub fn port_pm_state(&self) -> result::Result<PortPmState, XhciClientHandleError> {
+ let string = self.read_to_string("pm_state")?;
+ Ok(string.parse()?)
+ }
pub fn open_endpoint_ctl(&self, num: u8) -> result::Result<File, XhciClientHandleError> {
let path = format!("endpoints/{}/ctl", num);
let fd = self.fd.openat(&path, libredox::flag::O_RDWR, 0)?;

diff --git a/drivers/usb/xhcid/src/xhci/irq_reactor.rs b/drivers/usb/xhcid/src/xhci/irq_reactor.rs
index ac492d5b..310fe51f 100644
--- a/drivers/usb/xhcid/src/xhci/irq_reactor.rs
+++ b/drivers/usb/xhcid/src/xhci/irq_reactor.rs
@@ -633,7 +633,10 @@ impl<const N: usize> Xhci<N> {
pub fn with_ring<T, F: FnOnce(&Ring) -> T>(&self, id: RingId, function: F) -> Option<T> {
use super::RingOrStreams;

- let slot_state = self.port_states.get(&id.port)?;
+ let slot_state = self
+ .port_states
+ .get(&id.port)
+ .or_else(|| self.staged_port_states.get(&id.port))?;
let endpoint_state = slot_state.endpoint_states.get(&id.endpoint_num)?;

let ring_ref = match endpoint_state.transfer {
@@ -650,7 +653,10 @@ impl<const N: usize> Xhci<N> {
) -> Option<T> {
use super::RingOrStreams;

- let mut slot_state = self.port_states.get_mut(&id.port)?;
+ let mut slot_state = self
+ .port_states
+ .get_mut(&id.port)
+ .or_else(|| self.staged_port_states.get_mut(&id.port))?;
let mut endpoint_state = slot_state.endpoint_states.get_mut(&id.endpoint_num)?;

let ring_ref = match endpoint_state.transfer {
@@ -0,0 +1,398 @@
--- a/drivers/pcid/src/cfg_access/mod.rs
+++ b/drivers/pcid/src/cfg_access/mod.rs
@@ -349,7 +349,11 @@
let bus_addr = self.bus_addr(address.segment(), address.bus())?;
Some(unsafe { bus_addr.add(Self::bus_addr_offset_in_dwords(address, offset)) })
}
+
+ pub fn has_extended_config(&self, address: PciAddress) -> bool {
+ self.mmio_addr(address, 0x100).is_some()
+ }
}

impl ConfigRegionAccess for Pcie {
--- a/drivers/pcid/src/scheme.rs
+++ b/drivers/pcid/src/scheme.rs
@@ -5,12 +5,61 @@
use redox_scheme::{CallerCtx, OpenResult};
use scheme_utils::HandleMap;
use syscall::dirent::{DirEntry, DirentBuf, DirentKind};
-use syscall::error::{Error, Result, EACCES, EBADF, EINVAL, EIO, EISDIR, ENOENT, ENOTDIR, EALREADY};
+use syscall::error::{
+ Error, Result, EACCES, EALREADY, EBADF, EINVAL, EIO, EISDIR, ENOENT, ENOTDIR, EROFS,
+};
use syscall::flag::{MODE_CHR, MODE_DIR, O_DIRECTORY, O_STAT};
use syscall::schemev2::NewFdFlags;
use syscall::ENOLCK;

use crate::cfg_access::Pcie;
+
+const PCIE_EXTENDED_CAPABILITY_AER: u16 = 0x0001;
+
+#[derive(Clone, Copy)]
+enum AerRegisterName {
+ UncorStatus,
+ UncorMask,
+ UncorSeverity,
+ CorStatus,
+ CorMask,
+ Cap,
+ HeaderLog,
+}
+
+impl AerRegisterName {
+ fn from_path(path: &str) -> Option<Self> {
+ Some(match path {
+ "uncor_status" => Self::UncorStatus,
+ "uncor_mask" => Self::UncorMask,
+ "uncor_severity" => Self::UncorSeverity,
+ "cor_status" => Self::CorStatus,
+ "cor_mask" => Self::CorMask,
+ "cap" => Self::Cap,
+ "header_log" => Self::HeaderLog,
+ _ => return None,
+ })
+ }
+
+ const fn offset(self) -> u16 {
+ match self {
+ Self::UncorStatus => 0x00,
+ Self::UncorMask => 0x04,
+ Self::UncorSeverity => 0x08,
+ Self::CorStatus => 0x0C,
+ Self::CorMask => 0x10,
+ Self::Cap => 0x14,
+ Self::HeaderLog => 0x18,
+ }
+ }
+
+ const fn len(self) -> usize {
+ match self {
+ Self::HeaderLog => 16,
+ _ => 4,
+ }
+ }
+}

pub struct PciScheme {
handles: HandleMap<HandleWrapper>,
@@ -20,13 +69,27 @@
binds: HashMap<String, u32>,
}
enum Handle {
- TopLevel { entries: Vec<String> },
+ TopLevel {
+ entries: Vec<String>,
+ },
Access,
- Device,
- Channel { addr: PciAddress, st: ChannelState },
+ Device {
+ addr: PciAddress,
+ },
+ Channel {
+ addr: PciAddress,
+ st: ChannelState,
+ },
SchemeRoot,
/// Represents an open handle to a device's bind endpoint
- Bind { addr: PciAddress },
+ Bind {
+ addr: PciAddress,
+ },
+ AerDir,
+ Aer {
+ addr: PciAddress,
+ register: AerRegisterName,
+ },
/// Uevent surface for hotplug consumers. Opening uevent returns an object
/// from which device add/remove events can be read. Since pcid currently
/// only scans at startup, this surface is ready for hotplug polling consumers.
@@ -38,13 +101,23 @@
}
impl Handle {
fn is_file(&self) -> bool {
- matches!(self, Self::Access | Self::Channel { .. } | Self::Bind { .. } | Self::Uevent)
+ matches!(
+ self,
+ Self::Access
+ | Self::Channel { .. }
+ | Self::Bind { .. }
+ | Self::Aer { .. }
+ | Self::Uevent
+ )
}
fn is_dir(&self) -> bool {
!self.is_file()
}
fn requires_root(&self) -> bool {
- matches!(self, Self::Access | Self::Channel { .. } | Self::Bind { .. })
+ matches!(
+ self,
+ Self::Access | Self::Channel { .. } | Self::Bind { .. }
+ )
}
fn is_scheme_root(&self) -> bool {
matches!(self, Self::SchemeRoot)
@@ -57,6 +130,16 @@
}

const DEVICE_CONTENTS: &[&str] = &["channel", "bind"];
+const DEVICE_AER_CONTENTS: &[&str] = &["channel", "bind", "aer"];
+const AER_CONTENTS: &[&str] = &[
+ "uncor_status",
+ "uncor_mask",
+ "uncor_severity",
+ "cor_status",
+ "cor_mask",
+ "cap",
+ "header_log",
+];

impl PciScheme {
pub fn access(&mut self) -> usize {
@@ -141,7 +224,12 @@

let (len, mode) = match handle.inner {
Handle::TopLevel { ref entries } => (entries.len(), MODE_DIR | 0o755),
- Handle::Device => (DEVICE_CONTENTS.len(), MODE_DIR | 0o755),
+ Handle::Device { addr } => (
+ Self::device_entries(&self.pcie, addr).len(),
+ MODE_DIR | 0o755,
+ ),
+ Handle::AerDir => (AER_CONTENTS.len(), MODE_DIR | 0o755),
+ Handle::Aer { register, .. } => (register.len(), MODE_CHR | 0o444),
Handle::Access | Handle::Channel { .. } | Handle::Bind { .. } => (0, MODE_CHR | 0o600),
Handle::Uevent => (0, MODE_CHR | 0o644),
Handle::SchemeRoot => return Err(Error::new(EBADF)),
@@ -154,7 +242,7 @@
&mut self,
id: usize,
buf: &mut [u8],
- _offset: u64,
+ offset: u64,
_fcntl_flags: u32,
_ctx: &CallerCtx,
) -> Result<usize> {
@@ -166,11 +254,14 @@

match handle.inner {
Handle::TopLevel { .. } => Err(Error::new(EISDIR)),
- Handle::Device => Err(Error::new(EISDIR)),
+ Handle::Device { .. } | Handle::AerDir => Err(Error::new(EISDIR)),
Handle::Channel {
addr: _,
ref mut st,
} => Self::read_channel(st, buf),
+ Handle::Aer { addr, register } => {
+ Self::read_aer_register(&self.pcie, addr, register, buf, offset)
+ }
Handle::Uevent => {
// Uevent surface is ready for hotplug polling consumers.
// pcid currently only scans at startup, so return empty (EAGAIN would indicate no data available).
@@ -209,8 +300,15 @@
}
return Ok(buf);
}
- Handle::Device => DEVICE_CONTENTS,
- Handle::Access | Handle::Channel { .. } | Handle::Bind { .. } | Handle::Uevent => return Err(Error::new(ENOTDIR)),
+ Handle::Device { addr } => Self::device_entries(&self.pcie, addr),
+ Handle::AerDir => AER_CONTENTS,
+ Handle::Access
+ | Handle::Channel { .. }
+ | Handle::Bind { .. }
+ | Handle::Aer { .. }
+ | Handle::Uevent => {
+ return Err(Error::new(ENOTDIR));
+ }
Handle::SchemeRoot => return Err(Error::new(EBADF)),
};

@@ -243,6 +341,7 @@
Handle::Channel { addr, ref mut st } => {
Self::write_channel(&self.pcie, &mut self.tree, addr, st, buf)
}
+ Handle::Aer { .. } => Err(Error::new(EROFS)),

_ => Err(Error::new(EBADF)),
}
@@ -357,45 +456,151 @@
binds: HashMap::new(),
}
}
- fn parse_after_pci_addr(&mut self, addr: PciAddress, after: &str, ctx: &CallerCtx) -> Result<Handle> {
+ fn device_entries(pcie: &Pcie, addr: PciAddress) -> &'static [&'static str] {
+ if Self::find_pcie_extended_capability(pcie, addr, PCIE_EXTENDED_CAPABILITY_AER).is_some() {
+ DEVICE_AER_CONTENTS
+ } else {
+ DEVICE_CONTENTS
+ }
+ }
+ fn find_pcie_extended_capability(
+ pcie: &Pcie,
+ addr: PciAddress,
+ capability_id: u16,
+ ) -> Option<u16> {
+ if !pcie.has_extended_config(addr) {
+ return None;
+ }
+
+ let mut offset = 0x100_u16;
+
+ while offset <= 0xFFC {
+ let header = unsafe { pcie.read(addr, offset) };
+ if header == 0 || header == u32::MAX {
+ return None;
+ }
+
+ if (header & 0xFFFF) as u16 == capability_id {
+ return Some(offset);
+ }
+
+ let next = ((header >> 20) & 0xFFF) as u16;
+ if next < 0x100 || next <= offset || next > 0xFFC || next % 4 != 0 {
+ return None;
+ }
+ offset = next;
+ }
+
+ None
+ }
+ fn read_file_bytes(data: &[u8], buf: &mut [u8], offset: u64) -> Result<usize> {
+ let Ok(offset) = usize::try_from(offset) else {
+ return Ok(0);
+ };
+ if offset >= data.len() {
+ return Ok(0);
+ }
+
+ let count = std::cmp::min(buf.len(), data.len() - offset);
+ buf[..count].copy_from_slice(&data[offset..offset + count]);
+ Ok(count)
+ }
+ fn read_aer_register(
+ pcie: &Pcie,
+ addr: PciAddress,
+ register: AerRegisterName,
+ buf: &mut [u8],
+ offset: u64,
+ ) -> Result<usize> {
+ let Some(aer_base) =
+ Self::find_pcie_extended_capability(pcie, addr, PCIE_EXTENDED_CAPABILITY_AER)
+ else {
+ return Err(Error::new(ENOENT));
+ };
+
+ let mut data = [0_u8; 16];
+ for (index, chunk) in data[..register.len()].chunks_exact_mut(4).enumerate() {
+ let index = u16::try_from(index).map_err(|_| Error::new(EIO))?;
+ let value = unsafe { pcie.read(addr, aer_base + register.offset() + index * 4) };
+ chunk.copy_from_slice(&value.to_le_bytes());
+ }
+
+ Self::read_file_bytes(&data[..register.len()], buf, offset)
+ }
+ fn parse_after_pci_addr(
+ &mut self,
+ addr: PciAddress,
+ after: &str,
+ ctx: &CallerCtx,
+ ) -> Result<Handle> {
if after.chars().next().map_or(false, |c| c != '/') {
return Err(Error::new(ENOENT));
}
let func = self.tree.get_mut(&addr).ok_or(Error::new(ENOENT))?;

Ok(if after.is_empty() {
- Handle::Device
+ Handle::Device { addr }
} else {
let path = &after[1..];

- match path {
- "channel" => {
- if func.enabled {
- return Err(Error::new(ENOLCK));
+ if path == "aer" {
+ if Self::find_pcie_extended_capability(
+ &self.pcie,
+ addr,
+ PCIE_EXTENDED_CAPABILITY_AER,
+ )
+ .is_none()
+ {
+ return Err(Error::new(ENOENT));
+ }
+ Handle::AerDir
+ } else if let Some(register_name) = path.strip_prefix("aer/") {
+ let register =
+ AerRegisterName::from_path(register_name).ok_or(Error::new(ENOENT))?;
+ if Self::find_pcie_extended_capability(
+ &self.pcie,
+ addr,
+ PCIE_EXTENDED_CAPABILITY_AER,
+ )
+ .is_none()
+ {
+ return Err(Error::new(ENOENT));
+ }
+ Handle::Aer { addr, register }
+ } else {
+ match path {
+ "channel" => {
+ if func.enabled {
+ return Err(Error::new(ENOLCK));
+ }
+ func.inner.legacy_interrupt_line = crate::enable_function(
+ &self.pcie,
+ &mut func.endpoint_header,
+ &mut func.capabilities,
+ );
+ func.enabled = true;
+ Handle::Channel {
+ addr,
+ st: ChannelState::AwaitingData,
+ }
}
- func.inner.legacy_interrupt_line = crate::enable_function(
- &self.pcie,
- &mut func.endpoint_header,
- &mut func.capabilities,
- );
- func.enabled = true;
- Handle::Channel {
- addr,
- st: ChannelState::AwaitingData,
+ "bind" => {
+ let addr_str = format!("{}", addr);
+ if let Some(&owner_pid) = self.binds.get(&addr_str) {
+ log::info!(
+ "pcid: device {} already bound by pid {}",
+ addr_str,
+ owner_pid
+ );
+ return Err(Error::new(EALREADY));
+ }
+ let caller_pid = u32::try_from(ctx.pid).map_err(|_| Error::new(EINVAL))?;
+ self.binds.insert(addr_str.clone(), caller_pid);
+ log::info!("pcid: device {} bound by pid {}", addr_str, caller_pid);
+ Handle::Bind { addr }
}
- }
- "bind" => {
- let addr_str = format!("{}", addr);
- if let Some(&owner_pid) = self.binds.get(&addr_str) {
- log::info!("pcid: device {} already bound by pid {}", addr_str, owner_pid);
- return Err(Error::new(EALREADY));
- }
- let caller_pid = ctx.pid;
- self.binds.insert(addr_str.clone(), caller_pid);
- log::info!("pcid: device {} bound by pid {}", addr_str, caller_pid);
- Handle::Bind { addr }
- }
- _ => return Err(Error::new(ENOENT)),
+ _ => return Err(Error::new(ENOENT)),
+ }
}
})
}
@@ -10625,7 +10625,7 @@ diff --git a/drivers/pcid-spawner/src/main.rs b/drivers/pcid-spawner/src/main.rs
index a968f4d4..bfff05c3 100644
--- a/drivers/pcid-spawner/src/main.rs
+++ b/drivers/pcid-spawner/src/main.rs
@@ -1,11 +1,40 @@
@@ -1,11 +1,41 @@
+use std::env;
use std::fs;
use std::process::Command;
@@ -10667,7 +10667,7 @@ index a968f4d4..bfff05c3 100644
fn main() -> Result<()> {
let mut args = pico_args::Arguments::from_env();
let initfs = args.contains("--initfs");
@@ -30,6 +59,12 @@ fn main() -> Result<()> {
@@ -30,12 +59,33 @@ fn main() -> Result<()> {
}

let config: Config = toml::from_str(&config_data)?;
File diff suppressed because it is too large (+5171 −1130)
@@ -2,7 +2,7 @@ diff --git a/src/main.rs b/src/main.rs
index b2e2736..a6a9474 100644
--- a/src/main.rs
+++ b/src/main.rs
@@ -500,33 +500,62 @@ pub extern "C" fn main() -> ! {
@@ -500,36 +500,63 @@ pub extern "C" fn main() -> ! {

print!("live: 0/{} MiB", size / MIBI as u64);


@@ -0,0 +1,97 @@
diff --git a/src/main.rs b/src/main.rs
index b2e2736..a6a9474 100644
--- a/src/main.rs
+++ b/src/main.rs
@@ -500,33 +500,62 @@ pub extern "C" fn main() -> ! {

print!("live: 0/{} MiB", size / MIBI as u64);

- let ptr = os.alloc_zeroed_page_aligned(size as usize);
- if ptr.is_null() {
- panic!("Failed to allocate memory for live");
- }
-
- let live = unsafe { slice::from_raw_parts_mut(ptr, size as usize) };
-
- let mut i = 0;
- for chunk in live.chunks_mut(MIBI) {
- print!("\rlive: {}/{} MiB", i / MIBI as u64, size / MIBI as u64);
- i += unsafe {
- fs.disk
- .read_at(fs.block + i / redoxfs::BLOCK_SIZE, chunk)
- .expect("Failed to read live disk") as u64
- };
- }
- println!("\rlive: {}/{} MiB", i / MIBI as u64, size / MIBI as u64);
-
- println!("Switching to live disk");
- unsafe {
- LIVE_OPT = Some((fs.block, slice::from_raw_parts_mut(ptr, size as usize)));
- }
+ let live_size = match usize::try_from(size) {
+ Ok(live_size) => live_size,
+ Err(_) => {
+ println!("\rlive: disabled (image too large for bootloader address space)");
+ live = false;
+ 0
+ }
+ };

- area_add(OsMemoryEntry {
- base: live.as_ptr() as u64,
- size: live.len() as u64,
- kind: OsMemoryKind::Reserved,
- });
+ let ptr = if live {
+ os.alloc_zeroed_page_aligned(live_size)
+ } else {
+ ptr::null_mut()
+ };
+
+ if live && ptr.is_null() {
+ println!(
+ "\rlive: disabled (unable to allocate {} MiB upfront)",
+ size / MIBI as u64
+ );
+ live = false;
+ }
+
+ let live = if live {
+ Some(unsafe { slice::from_raw_parts_mut(ptr, live_size) })
+ } else {
+ println!("Continuing without live preload");
+ None
+ };
+
+ if let Some(live) = live {
+ let mut i = 0;
+ for chunk in live.chunks_mut(MIBI) {
+ print!("\rlive: {}/{} MiB", i / MIBI as u64, size / MIBI as u64);
+ i += unsafe {
+ fs.disk
+ .read_at(fs.block + i / redoxfs::BLOCK_SIZE, chunk)
+ .expect("Failed to read live disk") as u64
+ };
+ }
+ println!("\rlive: {}/{} MiB", i / MIBI as u64, size / MIBI as u64);
+
+ println!("Switching to live disk");
+ unsafe {
+ LIVE_OPT = Some((fs.block, slice::from_raw_parts_mut(ptr, live_size)));
+ }
+
+ area_add(OsMemoryEntry {
+ base: live.as_ptr() as u64,
+ size: live.len() as u64,
+ kind: OsMemoryKind::Reserved,
+ });
+
+ Some(live)
+ } else {
+ None
+ }
-
- Some(live)
} else {
None
};
@@ -11,7 +11,7 @@ index 4b0bf31..90a97b8 100644
|
||||
let mut esp_fs = match FileSystem::handle_protocol(esp_handle) {
|
||||
Ok(esp_fs) => esp_fs,
|
||||
Err(err) => {
|
||||
@@ -87,9 +89,37 @@ fn esp_live_image(esp_handle: Handle, esp_device_path: &DevicePath) -> Option<V
|
||||
@@ -87,8 +89,36 @@ fn esp_live_image(esp_handle: Handle, esp_device_path: &DevicePath) -> Option<V
|
||||
};
|
||||
|
||||
let mut buffer = Vec::new();
|
||||
|
||||
@@ -0,0 +1,60 @@
diff --git a/src/os/uefi/device.rs b/src/os/uefi/device.rs
index 4b0bf31..90a97b8 100644
--- a/src/os/uefi/device.rs
+++ b/src/os/uefi/device.rs
@@ -46,6 +46,8 @@ fn device_path_relation(a_path: &DevicePath, b_path: &DevicePath) -> DevicePath
 }

 fn esp_live_image(esp_handle: Handle, esp_device_path: &DevicePath) -> Option<Vec<u8>> {
+    const MAX_LIVE_IMAGE_PRELOAD: usize = 128 * 1024 * 1024;
+
     let mut esp_fs = match FileSystem::handle_protocol(esp_handle) {
         Ok(esp_fs) => esp_fs,
         Err(err) => {
@@ -87,9 +89,37 @@ fn esp_live_image(esp_handle: Handle, esp_device_path: &DevicePath) -> Option<V
     };

     let mut buffer = Vec::new();
+    let mut chunk = [0_u8; 64 * 1024];
+
+    loop {
+        let read = match live_image.read(&mut chunk) {
+            Ok(read) => read,
+            Err(err) => {
+                log::warn!(
+                    "Failed while reading {}\\redox-live.iso: {:?}",
+                    device_path_to_string(esp_device_path),
+                    err
+                );
+                return None;
+            }
+        };
+
+        if read == 0 {
+            break;
+        }

-    live_image.read_to_end(&mut buffer).unwrap();
+        if buffer.len().saturating_add(read) > MAX_LIVE_IMAGE_PRELOAD {
+            log::warn!(
+                "Skipping {}\\redox-live.iso preload: file exceeds {} MiB safety limit",
+                device_path_to_string(esp_device_path),
+                MAX_LIVE_IMAGE_PRELOAD / 1024 / 1024
+            );
+            return None;
+        }
+
+        buffer.extend_from_slice(&chunk[..read]);
+    }

     Some(buffer)
 }
@@ -130,7 +160,7 @@ pub fn disk_device_priority() -> Vec<DiskDevice> {
         return vec![DiskDevice {
             handle: esp_handle,
             // Support both a copy of livedisk.iso and a standalone redoxfs partition
-            partition_offset: if &buffer[512..520] == b"EFI PART" {
+            partition_offset: if buffer.len() >= 520 && &buffer[512..520] == b"EFI PART" {
                 //TODO: get block from partition table
                 2 * crate::MIBI as u64
             } else {
@@ -113,7 +113,7 @@ diff --git a/src/os/uefi/device.rs b/src/os/uefi/device.rs
index 0b7991f..554d88e 100644
--- a/src/os/uefi/device.rs
+++ b/src/os/uefi/device.rs
@@ -13,6 +13,160 @@ use uefi_std::{fs::FileSystem, loaded_image::LoadedImage, proto::Protocol};
@@ -13,6 +13,154 @@ use uefi_std::{fs::FileSystem, loaded_image::LoadedImage, proto::Protocol};

 use super::disk::{DiskEfi, DiskOrFileEfi};

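The `esp_live_image()` hunks above replace an unbounded `read_to_end()` with a chunked loop that refuses to grow past `MAX_LIVE_IMAGE_PRELOAD`. A minimal standalone sketch of that loop, assuming a `std::io::Read` source as a stand-in for the UEFI file handle (`bounded_read` is an illustrative name, not from the patch):

```rust
use std::io::{Cursor, Read};

// Read in 64 KiB chunks, bail out with None instead of growing past the cap.
// Cap and chunk size match the patch; returning None lets the caller fall
// back to reading the image from disk instead of preloading it.
fn bounded_read(mut src: impl Read, cap: usize) -> Option<Vec<u8>> {
    let mut buffer = Vec::new();
    let mut chunk = [0_u8; 64 * 1024];
    loop {
        let read = src.read(&mut chunk).ok()?;
        if read == 0 {
            break; // end of file
        }
        if buffer.len().saturating_add(read) > cap {
            return None; // file exceeds the safety limit
        }
        buffer.extend_from_slice(&chunk[..read]);
    }
    Some(buffer)
}

fn main() {
    let data = vec![7_u8; 100_000];
    // Fits under a 128 KiB cap: fully preloaded.
    assert_eq!(bounded_read(Cursor::new(&data), 128 * 1024).as_deref(), Some(&data[..]));
    // Exceeds a 64 KiB cap: rejected rather than OOM-ing the loader.
    assert!(bounded_read(Cursor::new(&data), 64 * 1024).is_none());
}
```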
@@ -0,0 +1,392 @@
diff --git a/src/arch/x86/mod.rs b/src/arch/x86/mod.rs
index bda3f5d..55889df 100644
--- a/src/arch/x86/mod.rs
+++ b/src/arch/x86/mod.rs
@@ -3,10 +3,15 @@ use crate::os::Os;
 pub(crate) mod x32;
 pub(crate) mod x64;

-pub unsafe fn paging_create(os: &impl Os, kernel_phys: u64, kernel_size: u64) -> Option<usize> {
+pub unsafe fn paging_create(
+    os: &impl Os,
+    kernel_phys: u64,
+    kernel_size: u64,
+    identity_map_end: u64,
+) -> Option<usize> {
     unsafe {
         if crate::KERNEL_64BIT {
-            x64::paging_create(os, kernel_phys, kernel_size)
+            x64::paging_create(os, kernel_phys, kernel_size, identity_map_end)
         } else {
             x32::paging_create(os, kernel_phys, kernel_size)
         }
diff --git a/src/arch/x86/x64.rs b/src/arch/x86/x64.rs
index a0a275a..fcf309d 100644
--- a/src/arch/x86/x64.rs
+++ b/src/arch/x86/x64.rs
@@ -29,7 +29,12 @@ const PRESENT: u64 = 1;
 const WRITABLE: u64 = 1 << 1;
 const LARGE: u64 = 1 << 7;

-pub unsafe fn paging_create(os: &impl Os, kernel_phys: u64, kernel_size: u64) -> Option<usize> {
+pub unsafe fn paging_create(
+    os: &impl Os,
+    kernel_phys: u64,
+    kernel_size: u64,
+    identity_map_end: u64,
+) -> Option<usize> {
     unsafe {
         // Create PML4
         let pml4 = paging_allocate(os)?;
@@ -42,8 +47,14 @@ pub unsafe fn paging_create(os: &impl Os, kernel_phys: u64, kernel_size: u64) -
         pml4[0] = pdp.as_ptr() as u64 | WRITABLE | PRESENT;
         pml4[256] = pdp.as_ptr() as u64 | WRITABLE | PRESENT;

-        // Identity map 8 GiB using 2 MiB pages
-        for pdp_i in 0..8 {
+        let mut needed_pdp = identity_map_end.div_ceil(0x4000_0000);
+        if needed_pdp == 0 {
+            needed_pdp = 1;
+        }
+        assert!(needed_pdp <= pdp.len() as u64, "identity map end exceeds paging span");
+
+        // Identity map required physical range using 2 MiB pages
+        for pdp_i in 0..needed_pdp as usize {
             let pd = paging_allocate(os)?;
             pdp[pdp_i] = pd.as_ptr() as u64 | WRITABLE | PRESENT;
             for pd_i in 0..pd.len() {
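The `x64::paging_create()` hunk sizes the identity map from `identity_map_end` instead of a fixed 8 GiB. Each PDP entry spans 1 GiB (`0x4000_0000` bytes), so the entry-count arithmetic can be checked in isolation (`needed_pdp_entries` is an illustrative wrapper name):

```rust
// Round identity_map_end up to whole 1 GiB PDP entries, mapping at least one
// entry so the low memory the loader itself runs in always stays mapped.
fn needed_pdp_entries(identity_map_end: u64) -> u64 {
    let entries = identity_map_end.div_ceil(0x4000_0000);
    if entries == 0 { 1 } else { entries }
}

fn main() {
    assert_eq!(needed_pdp_entries(0), 1);           // always map at least 1 GiB
    assert_eq!(needed_pdp_entries(0x4000_0000), 1); // exactly 1 GiB -> 1 entry
    assert_eq!(needed_pdp_entries(0x4000_0001), 2); // one byte over -> 2 entries
    assert_eq!(needed_pdp_entries(8 << 30), 8);     // the old fixed 8 GiB case
}
```

The `assert!` in the patch guards the other direction: a request past `pdp.len()` GiB cannot silently wrap into out-of-bounds page-table writes.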
diff --git a/src/main.rs b/src/main.rs
index 78dabb0..fd8eb81 100644
--- a/src/main.rs
+++ b/src/main.rs
@@ -62,6 +62,10 @@ pub static mut KERNEL_64BIT: bool = false;

 pub static mut LIVE_OPT: Option<(u64, &'static [u8])> = None;

+fn region_end(base: u64, size: u64) -> u64 {
+    base.saturating_add(size).next_multiple_of(0x1000)
+}
+
 struct SliceWriter<'a> {
     slice: &'a mut [u8],
     i: usize,
@@ -645,9 +649,6 @@ fn main(os: &impl Os) -> (usize, u64, KernelArgs) {
         (memory.len() as u64, memory.as_mut_ptr() as u64)
     };

-    let page_phys = unsafe { paging_create(os, kernel.as_ptr() as u64, kernel.len() as u64) }
-        .expect("Failed to set up paging");
-
     let max_env_size = 64 * KIBI;
     let mut env_size = max_env_size;
     let env_base = os.alloc_zeroed_page_aligned(env_size);
@@ -655,6 +656,28 @@ fn main(os: &impl Os) -> (usize, u64, KernelArgs) {
         panic!("Failed to allocate memory for stack");
     }

+    let mut identity_map_end = region_end(kernel.as_ptr() as u64, kernel.len() as u64)
+        .max(region_end(stack_base as u64, stack_size as u64))
+        .max(region_end(bootstrap_base, bootstrap_size))
+        .max(region_end(env_base as u64, max_env_size as u64));
+
+    if let Some(ref live) = live_opt {
+        identity_map_end = identity_map_end.max(region_end(
+            live.as_ptr() as u64,
+            live.len() as u64,
+        ));
+    }
+
+    let page_phys = unsafe {
+        paging_create(
+            os,
+            kernel.as_ptr() as u64,
+            kernel.len() as u64,
+            identity_map_end,
+        )
+    }
+    .expect("Failed to set up paging");
+
     {
         let mut w = SliceWriter {
             slice: unsafe { slice::from_raw_parts_mut(env_base, max_env_size) },
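The `main.rs` hunk derives `identity_map_end` as the maximum page-rounded end of every region that must stay identity-mapped (kernel, stack, bootstrap, env, optional live image). The `region_end()` helper behaves as follows (standalone copy of the patched helper):

```rust
// End of [base, base + size), rounded up to the next 4 KiB page so the
// identity map always covers the region's last partial page. saturating_add
// keeps a bogus base/size pair from wrapping past u64::MAX.
fn region_end(base: u64, size: u64) -> u64 {
    base.saturating_add(size).next_multiple_of(0x1000)
}

fn main() {
    assert_eq!(region_end(0x1000, 0x10), 0x2000);   // partial page rounds up
    assert_eq!(region_end(0x1000, 0x1000), 0x2000); // exact boundary stays
    assert_eq!(region_end(0, 0), 0);                // empty region at zero
    // The map must reach the max of all page-aligned region ends:
    let end = region_end(0x1000, 0x10).max(region_end(0x8000, 0x100));
    assert_eq!(end, 0x9000);
}
```

Moving the `paging_create` call after these allocations is the point of the hunk: the page tables are only built once every region's end is known.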
diff --git a/src/os/uefi/device.rs b/src/os/uefi/device.rs
index 0b7991f..554d88e 100644
--- a/src/os/uefi/device.rs
+++ b/src/os/uefi/device.rs
@@ -13,6 +13,160 @@ use uefi_std::{fs::FileSystem, loaded_image::LoadedImage, proto::Protocol};

 use super::disk::{DiskEfi, DiskOrFileEfi};

+#[derive(Clone, Copy)]
+struct GptPartitionInfo {
+    first_lba: u64,
+    last_lba: u64,
+}
+
+fn read_u32_le(bytes: &[u8]) -> Option<u32> {
+    Some(u32::from_le_bytes(bytes.get(..4)?.try_into().ok()?))
+}
+
+fn read_u64_le(bytes: &[u8]) -> Option<u64> {
+    Some(u64::from_le_bytes(bytes.get(..8)?.try_into().ok()?))
+}
+
+fn decode_utf16_name(bytes: &[u8]) -> Option<String> {
+    let mut units = Vec::new();
+    for chunk in bytes.chunks_exact(2) {
+        let unit = u16::from_le_bytes([chunk[0], chunk[1]]);
+        if unit == 0 {
+            break;
+        }
+        units.push(unit);
+    }
+    String::from_utf16(&units).ok()
+}
+
+fn select_partition(best: &mut Option<GptPartitionInfo>, candidate: GptPartitionInfo) {
+    match best {
+        Some(current) if current.last_lba.saturating_sub(current.first_lba) >= candidate.last_lba.saturating_sub(candidate.first_lba) => {}
+        _ => *best = Some(candidate),
+    }
+}
+
+fn parse_gpt_partition_offset_from_bytes(data: &[u8], block_size: usize) -> Option<u64> {
+    let header_offset = block_size;
+    let header = data.get(header_offset..header_offset + 92)?;
+    if header.get(..8)? != b"EFI PART" {
+        return None;
+    }
+
+    let entries_lba = read_u64_le(header.get(72..80)?)?;
+    let entry_count = read_u32_le(header.get(80..84)?)? as usize;
+    let entry_size = read_u32_le(header.get(84..88)?)? as usize;
+    if entry_size < 128 {
+        return None;
+    }
+
+    let entries_offset = entries_lba.checked_mul(block_size as u64)? as usize;
+    let mut redox_partition = None;
+    let mut fallback_partition = None;
+
+    for index in 0..entry_count {
+        let entry_offset = entries_offset.checked_add(index.checked_mul(entry_size)?)?;
+        let entry = data.get(entry_offset..entry_offset + entry_size)?;
+        if entry.get(..16)?.iter().all(|byte| *byte == 0) {
+            continue;
+        }
+
+        let first_lba = read_u64_le(entry.get(32..40)?)?;
+        let last_lba = read_u64_le(entry.get(40..48)?)?;
+        if first_lba == 0 || last_lba < first_lba {
+            continue;
+        }
+
+        let partition = GptPartitionInfo { first_lba, last_lba };
+        let name = decode_utf16_name(entry.get(56..128)?).unwrap_or_default();
+        if name == "REDOX" {
+            redox_partition = Some(partition);
+            break;
+        }
+
+        select_partition(&mut fallback_partition, partition);
+    }
+
+    redox_partition
+        .or(fallback_partition)
+        .map(|partition| partition.first_lba * block_size as u64)
+}
+
+fn parse_gpt_partition_offset_from_parts(
+    entries: &[u8],
+    entry_count: usize,
+    entry_size: usize,
+    block_size: usize,
+) -> Option<u64> {
+    let mut redox_partition = None;
+    let mut fallback_partition = None;
+
+    for index in 0..entry_count {
+        let entry_offset = index.checked_mul(entry_size)?;
+        let entry = entries.get(entry_offset..entry_offset + entry_size)?;
+        if entry.get(..16)?.iter().all(|byte| *byte == 0) {
+            continue;
+        }
+
+        let first_lba = read_u64_le(entry.get(32..40)?)?;
+        let last_lba = read_u64_le(entry.get(40..48)?)?;
+        if first_lba == 0 || last_lba < first_lba {
+            continue;
+        }
+
+        let partition = GptPartitionInfo { first_lba, last_lba };
+        let name = decode_utf16_name(entry.get(56..128)?).unwrap_or_default();
+        if name == "REDOX" {
+            redox_partition = Some(partition);
+            break;
+        }
+
+        select_partition(&mut fallback_partition, partition);
+    }
+    redox_partition
+        .or(fallback_partition)
+        .map(|partition| partition.first_lba * block_size as u64)
+}
+
+fn gpt_partition_offset_from_buffer(data: &[u8]) -> Option<u64> {
+    parse_gpt_partition_offset_from_bytes(data, 512)
+}
+
+fn gpt_partition_offset_from_disk(disk: &mut DiskEfi) -> Option<u64> {
+    const GPT_SECTOR_SIZE: usize = 512;
+
+    if disk.media_block_size() == 0 {
+        return None;
+    }
+
+    let mut boot_region = vec![0_u8; 2048];
+    disk.read_bytes(0, &mut boot_region).ok()?;
+    let header = boot_region.get(GPT_SECTOR_SIZE..GPT_SECTOR_SIZE + 92)?;
+    if header.get(..8)? != b"EFI PART" {
+        return None;
+    }
+
+    let entries_lba = read_u64_le(header.get(72..80)?)?;
+    let entry_count = read_u32_le(header.get(80..84)?)? as usize;
+    let entry_size = read_u32_le(header.get(84..88)?)? as usize;
+    if entry_size < 128 {
+        return None;
+    }
+
+    let entries_bytes = entry_count.checked_mul(entry_size)?;
+    let entries_offset = entries_lba.checked_mul(GPT_SECTOR_SIZE as u64)?;
+    let mut entries = vec![0_u8; entries_bytes];
+    disk.read_bytes(entries_offset, &mut entries).ok()?;
+
+    parse_gpt_partition_offset_from_parts(&entries, entry_count, entry_size, GPT_SECTOR_SIZE)
+}
+
 #[derive(Debug)]
 enum DevicePathRelation {
     This,
@@ -131,12 +285,7 @@ pub fn disk_device_priority() -> Vec<DiskDevice> {
         return vec![DiskDevice {
             handle: esp_handle,
             // Support both a copy of livedisk.iso and a standalone redoxfs partition
-            partition_offset: if &buffer[512..520] == b"EFI PART" {
-                //TODO: get block from partition table
-                2 * crate::MIBI as u64
-            } else {
-                0
-            },
+            partition_offset: gpt_partition_offset_from_buffer(&buffer).unwrap_or(0),
             disk: DiskOrFileEfi::File(buffer),
             device_path: esp_device_path,
             file_path: Some("redox-live.iso"),
@@ -154,7 +303,7 @@ pub fn disk_device_priority() -> Vec<DiskDevice> {
         };
     let mut devices = Vec::with_capacity(handles.len());
     for handle in handles {
-        let disk = match DiskEfi::handle_protocol(handle) {
+        let mut disk = match DiskEfi::handle_protocol(handle) {
             Ok(ok) => ok,
             Err(err) => {
                 log::warn!(
@@ -182,14 +331,15 @@ pub fn disk_device_priority() -> Vec<DiskDevice> {
             }
         };

+        let partition_offset = if disk.0.Media.LogicalPartition {
+            0
+        } else {
+            gpt_partition_offset_from_disk(&mut disk).unwrap_or(2 * crate::MIBI as u64)
+        };
+
         devices.push(DiskDevice {
             handle,
-            partition_offset: if disk.0.Media.LogicalPartition {
-                0
-            } else {
-                //TODO: get block from partition table
-                2 * crate::MIBI as u64
-            },
+            partition_offset,
             disk: DiskOrFileEfi::Disk(disk),
             device_path,
             file_path: None,
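The GPT parsing added above reads little-endian LBA fields and a UTF-16LE partition name out of each 128-byte partition entry (offsets 32, 40, and 56..128 per the UEFI GPT entry layout). A self-contained check of those two helpers against a synthetic entry; the helper names mirror the patch, but this sketch runs on `std` rather than in the bootloader:

```rust
// Fallible little-endian u64 read: None if the slice is too short.
fn read_u64_le(bytes: &[u8]) -> Option<u64> {
    Some(u64::from_le_bytes(bytes.get(..8)?.try_into().ok()?))
}

// Decode a NUL-terminated UTF-16LE partition name.
fn decode_utf16_name(bytes: &[u8]) -> Option<String> {
    let mut units = Vec::new();
    for chunk in bytes.chunks_exact(2) {
        let unit = u16::from_le_bytes([chunk[0], chunk[1]]);
        if unit == 0 {
            break;
        }
        units.push(unit);
    }
    String::from_utf16(&units).ok()
}

fn main() {
    // Synthetic 128-byte entry named "REDOX" starting at LBA 2048.
    let mut entry = [0_u8; 128];
    entry[..16].copy_from_slice(&[1; 16]); // non-zero type GUID = entry in use
    entry[32..40].copy_from_slice(&2048_u64.to_le_bytes()); // first LBA
    entry[40..48].copy_from_slice(&4095_u64.to_le_bytes()); // last LBA
    for (i, unit) in "REDOX".encode_utf16().enumerate() {
        entry[56 + 2 * i..58 + 2 * i].copy_from_slice(&unit.to_le_bytes());
    }

    assert_eq!(read_u64_le(&entry[32..40]), Some(2048));
    assert_eq!(decode_utf16_name(&entry[56..128]).as_deref(), Some("REDOX"));
    // Partition byte offset with 512-byte sectors, as the patch computes it:
    assert_eq!(2048 * 512_u64, 1_048_576);
}
```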
diff --git a/src/os/uefi/disk.rs b/src/os/uefi/disk.rs
index 3f920bb..4d109f8 100644
--- a/src/os/uefi/disk.rs
+++ b/src/os/uefi/disk.rs
@@ -117,3 +117,43 @@ impl Disk for DiskEfi {
         Err(Error::new(EIO))
     }
 }
+
+impl DiskEfi {
+    pub fn media_block_size(&self) -> usize {
+        self.0.Media.BlockSize as usize
+    }
+
+    pub fn read_bytes(&mut self, offset: u64, buffer: &mut [u8]) -> Result<()> {
+        let block_size = self.media_block_size();
+        if block_size == 0 || block_size > self.1.len() {
+            return Err(Error::new(EINVAL));
+        }
+
+        let scratch = &mut self.1[..block_size];
+        let mut copied = 0usize;
+
+        while copied < buffer.len() {
+            let absolute = offset as usize + copied;
+            let lba = (absolute / block_size) as u64;
+            let in_block = absolute % block_size;
+
+            match (self.0.ReadBlocks)(
+                self.0,
+                self.0.Media.MediaId,
+                lba,
+                block_size,
+                scratch.as_mut_ptr(),
+            ) {
+                status if status.is_success() => {
+                    let chunk_len = core::cmp::min(block_size - in_block, buffer.len() - copied);
+                    buffer[copied..copied + chunk_len]
+                        .copy_from_slice(&scratch[in_block..in_block + chunk_len]);
+                    copied += chunk_len;
+                }
+                _ => return Err(Error::new(EIO)),
+            }
+        }
+
+        Ok(())
+    }
+}
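`read_bytes()` in the disk.rs hunk maps a byte offset onto an (LBA, offset-within-block) pair and copies through a one-block scratch buffer, so unaligned reads never pass a misaligned pointer to `ReadBlocks`. The address math alone, as a sketch (`split_offset` is an illustrative name, not from the patch):

```rust
// Split an absolute byte offset into the block (LBA) holding it and the
// position within that block, for a given block size.
fn split_offset(absolute: usize, block_size: usize) -> (u64, usize) {
    ((absolute / block_size) as u64, absolute % block_size)
}

fn main() {
    let block_size = 512;
    assert_eq!(split_offset(0, block_size), (0, 0));
    assert_eq!(split_offset(512, block_size), (1, 0));
    // The GPT header read: 92 bytes starting at byte 1536 land in LBA 3.
    assert_eq!(split_offset(1536 + 92, block_size), (3, 92));
    // Bytes this block can still contribute to the request:
    let (_, in_block) = split_offset(700, block_size);
    assert_eq!(block_size - in_block, 324);
}
```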
diff --git a/src/os/uefi/mod.rs b/src/os/uefi/mod.rs
index c79266e..86235a4 100644
--- a/src/os/uefi/mod.rs
+++ b/src/os/uefi/mod.rs
@@ -47,17 +47,19 @@ pub(crate) fn alloc_zeroed_page_aligned(size: usize) -> *mut u8 {
     let ptr = {
         // Max address mapped by src/arch paging code (8 GiB)
         let mut ptr = 0x2_0000_0000;
-        status_to_result((std::system_table().BootServices.AllocatePages)(
-            1, // AllocateMaxAddress
-            MemoryType::EfiRuntimeServicesData, // Keeps this memory out of free space list
+        if status_to_result((std::system_table().BootServices.AllocatePages)(
+            0, // AllocateAnyPages
+            MemoryType::EfiLoaderData,
             pages,
             &mut ptr,
         ))
-        .unwrap();
+        .is_err()
+        {
+            return ptr::null_mut();
+        }
         ptr as *mut u8
     };

-    assert!(!ptr.is_null());
     unsafe { ptr::write_bytes(ptr, 0, pages * page_size) };
     ptr
 }