milestone: desktop path Phases 1-5

Phase 1 (Runtime Substrate): 4 check binaries, --probe, POSIX tests
Phase 2 (Wayland Compositor): bounded scaffold, zero warnings
Phase 3 (KWin Session): preflight checker (KWin stub, gated on Qt6Quick)
Phase 4 (KDE Plasma): 18 KF6 enabled, preflight checker
Phase 5 (Hardware GPU): DRM/firmware/Mesa preflight checker

Build: zero warnings, all scripts syntax-clean. Oracle-verified.
2026-04-29 09:54:06 +01:00
parent b23714f542
commit 8acc73d774
508 changed files with 76526 additions and 396 deletions
@@ -0,0 +1,63 @@
# Community Hardware
This document tracks devices owned by developers and community members that still need a Redox driver.
This document exists because, unfortunately, we can't know the most sold device models in the world to set our device porting priorities, so we use our community's data instead. If you find a "device model users" survey (similar to the [Debian Popularity Contest](https://popcon.debian.org/) and the [Steam Hardware/Software Survey](https://store.steampowered.com/hwsurvey/Steam-Hardware-Software-Survey-Welcome-to-Steam)), please let us know.
If you want to contribute to this table, install [pciutils](https://mj.ucw.cz/sw/pciutils/) on your Linux or Unix-like distribution (it is likely packaged by your distribution), run the `lspci -v` command to list your hardware devices and their kernel drivers, and report the following items for each device:
- The first field (each device has a unique name for this item)
- Kernel driver
- Kernel module
If you are unsure what to do, you can talk with us on the [chat](https://doc.redox-os.org/book/chat.html).
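The three fields above can be extracted from `lspci -v` output mechanically: each device starts with an unindented slot line, followed by indented `Kernel driver in use:` and `Kernel modules:` lines. A minimal sketch (our own illustration, not a Redox tool; the sample text is illustrative):

```rust
// Sketch: collect (device name, kernel driver, kernel modules) rows for the
// table below from `lspci -v` output. "No" is kept when a field is absent,
// matching the table convention.
fn parse_lspci(output: &str) -> Vec<(String, String, String)> {
    let mut devices: Vec<(String, String, String)> = Vec::new();
    for line in output.lines() {
        if !line.trim().is_empty() && !line.starts_with(char::is_whitespace) {
            // Unindented slot line, e.g. "00:1f.4 SMBus: Intel ..."; drop the slot.
            let name = line.splitn(2, ' ').nth(1).unwrap_or(line).to_string();
            devices.push((name, "No".to_string(), "No".to_string()));
        } else if let Some(dev) = devices.last_mut() {
            let line = line.trim();
            if let Some(driver) = line.strip_prefix("Kernel driver in use: ") {
                dev.1 = driver.to_string();
            } else if let Some(modules) = line.strip_prefix("Kernel modules: ") {
                dev.2 = modules.to_string();
            }
        }
    }
    devices
}

fn main() {
    let sample = "00:1f.4 SMBus: Intel Corporation Alder Lake PCH-P SMBus Host Controller\n\
                  \tKernel driver in use: i801_smbus\n\
                  \tKernel modules: i2c_i801\n";
    // Print each device as a ready-to-paste table row.
    for (name, driver, modules) in parse_lspci(sample) {
        println!("| {} | {} | {} | No |", name, driver, modules);
    }
}
```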
## Template
Use this template to insert your devices into the table.
```
| | | | No |
```
- Remove the `#` characters from port numbers in device names, so that GitLab does not wrongly link them as issue references
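For example (a hypothetical row, not a real survey entry), a device that `lspci` names `Intel Alder Lake PCH Serial IO I2C Controller #0` would be inserted as:
```
| Intel Alder Lake PCH Serial IO I2C Controller 0 | intel-lpss | intel_lpss_pci | No |
```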
## Devices
| **Device model** | **Kernel driver?** | **Kernel module?** | **There's a Redox driver?** |
|------------------|--------------------|--------------------|-----------------------------|
| Realtek RTL8821CE 802.11ac (Wi-Fi) | rtw_8821ce | rtw88_8821ce | No |
| Intel Ice Lake-LP SPI Controller | intel-spi | spi_intel_pci | No |
| Intel Ice Lake-LP SMBus Controller | i801_smbus | i2c_i801 | No |
| Intel Ice Lake-LP Smart Sound Technology Audio Controller | snd_hda_intel | snd_hda_intel, snd_sof_pci_intel_icl | No |
| Intel Ice Lake-LP Serial IO SPI Controller | intel-lpss | No | No |
| Intel Ice Lake-LP Serial IO UART Controller | intel-lpss | No | No |
| Intel Ice Lake-LP Serial IO I2C Controller | intel-lpss | No | No |
| Ice Lake-LP USB 3.1 xHCI Host Controller | xhci_hcd | No | No |
| Intel Processor Power and Thermal Controller | proc_thermal | processor_thermal_device_pci_legacy | No |
| Intel Device 8a02 | icl_uncore | No | No |
| Iris Plus Graphics G1 (Ice Lake) | i915 | i915 | No |
| Intel Corporation Raptor Lake-P 6p+8e cores Host Bridge/DRAM Controller | No | No | No |
| Intel Corporation Raptor Lake PCI Express 5.0 Graphics Port (PEG010) (prog-if 00 [Normal decode]) | pcieport | No | No |
| Intel Corporation Raptor Lake-P [UHD Graphics] (rev 04) (prog-if 00 [VGA controller]) | i915 | i915 | No |
| Intel Corporation Raptor Lake Dynamic Platform and Thermal Framework Processor Participant | proc_thermal_pci | processor_thermal_device_pci | No |
| Intel Corporation Raptor Lake PCIe 4.0 Graphics Port (prog-if 00 [Normal decode]) | pcieport | No | No |
| Intel Corporation Raptor Lake-P Thunderbolt 4 PCI Express Root Port 0 (prog-if 00 [Normal decode]) | pcieport | No | No |
| Intel Corporation GNA Scoring Accelerator module | No | No | No |
| Intel Corporation Raptor Lake-P Thunderbolt 4 USB Controller (prog-if 30 [XHCI]) | xhci_hcd | xhci_pci | No |
| Intel Corporation Raptor Lake-P Thunderbolt 4 NHI 0 (prog-if 40 [USB4 Host Interface]) | thunderbolt | thunderbolt | No |
| Intel Corporation Raptor Lake-P Thunderbolt 4 NHI 1 (prog-if 40 [USB4 Host Interface]) | thunderbolt | thunderbolt | No |
| Intel Corporation Alder Lake PCH USB 3.2 xHCI Host Controller (rev 01) (prog-if 30 [XHCI]) | xhci_hcd | xhci_pci | No |
| Intel Corporation Alder Lake PCH Shared SRAM (rev 01) | No | No | No |
| Intel Corporation Raptor Lake PCH CNVi WiFi (rev 01) | iwlwifi | iwlwifi | No |
| Intel Corporation Alder Lake PCH Serial IO I2C Controller #0 (rev 01) | intel-lpss | intel_lpss_pci | No |
| Intel Corporation Alder Lake PCH HECI Controller (rev 01) | mei_me | mei_me | No |
| Intel Corporation Device 51b8 (rev 01) (prog-if 00 [Normal decode]) | pcieport | No | No |
| Intel Corporation Alder Lake-P PCH PCIe Root Port 6 (rev 01) (prog-if 00 [Normal decode]) | pcieport | No | No |
| Intel Corporation Raptor Lake LPC/eSPI Controller (rev 01) | No | No | No |
| Intel Corporation Raptor Lake-P/U/H cAVS (rev 01) (prog-if 80) | sof-audio-pci-intel-tgl | snd_hda_intel, snd_sof_pci_intel_tgl | No |
| Intel Corporation Alder Lake PCH-P SMBus Host Controller | i801_smbus | i2c_i801 | No |
| Intel Corporation Alder Lake-P PCH SPI Controller (rev 01) | intel-spi | spi_intel_pci | No |
| NVIDIA Corporation GA107GLM [RTX A1000 6GB Laptop GPU] (rev a1) | nvidia | nouveau, nvidia_drm, nvidia | No |
| SK hynix Platinum P41/PC801 NVMe Solid State Drive (prog-if 02 [NVM Express]) | nvme | nvme | No |
| Realtek Semiconductor Co., Ltd. RTS5261 PCI Express Card Reader (rev 01) | rtsx_pci | rtsx_pci | No |
@@ -0,0 +1,160 @@
# Drivers
- [Libraries](#libraries)
- [Services](#services)
- [Hardware Interfaces](#hardware-interfaces)
- [Devices](#devices)
- [CPU](#cpu)
- [Controllers](#controllers)
- [Storage](#storage)
- [Graphics](#graphics)
- [Input](#input)
- [Sound](#sound)
- [Networking](#networking)
- [Virtualization](#virtualization)
- [System Interfaces](#system-interfaces)
- [System Calls](#system-calls)
- [Schemes](#schemes)
- [Contribution Details](#contribution-details)
## Libraries
- amlserde - Library to provide serialization/deserialization of the AML symbol table from ACPI
- common - Library with shared driver code
- executor - Library to run Rust futures and integrate the executor into an interrupt+queue model without a separate reactor thread
- [graphics/console-draw](graphics/console-draw/) - Library with shared terminal drawing code
- [graphics/driver-graphics](graphics/driver-graphics/) - Library with shared graphics code
- [graphics/graphics-ipc](graphics/graphics-ipc/) - Library with graphics IPC shared code
- [net/driver-network](net/driver-network/) - Library with shared networking code
- [storage/partitionlib](storage/partitionlib/) - Library with MBR and GPT code
- [storage/driver-block](storage/driver-block/) - Library with shared storage code
- virtio-core - VirtIO driver library
## Services
- [graphics/fbbootlogd](graphics/fbbootlogd/) - Daemon for boot log drawing
- [graphics/fbcond](graphics/fbcond/) - Terminal daemon
- hwd - Daemon that handles ACPI and DeviceTree booting
- inputd - Multiplexes input from multiple input drivers and provides that to Orbital
- pcid-spawner - Daemon that spawns PCI-based device drivers
- [storage/lived](storage/lived/) - Daemon for live disk
- redoxerd - Daemon that sends/receives terminal text between the host system and QEMU
## Hardware Interfaces
- acpid - ACPI interface driver
- pcid - PCI and PCI Express driver
## Devices
### CPU
- rtcd - x86 Real Time Clock driver
### Controllers
- [usb/xhcid](usb/xhcid/) - xHCI USB controller driver
### Storage
- [storage/ahcid](storage/ahcid/) - AHCI (SATA) driver
- [storage/bcm2835-sdhcid](storage/bcm2835-sdhcid/) - BCM2835 storage driver
- [storage/ided](storage/ided/) - PATA (IDE) driver
- [storage/nvmed](storage/nvmed/) - NVMe driver
- [storage/virtio-blkd](storage/virtio-blkd/) - VirtIO block device driver
- [storage/usbscsid](storage/usbscsid/) - USB SCSI driver
### Graphics
- [graphics/ihdgd](graphics/ihdgd/) - Intel graphics driver
- [graphics/vesad](graphics/vesad/) - VESA video driver
- [graphics/virtio-gpud](graphics/virtio-gpud/) - VirtIO-GPU device driver
### Input
- [input/ps2d](input/ps2d/) - PS/2 interface driver
- [input/usbhidd](input/usbhidd/) - USB HID driver
- [usb/usbhubd](usb/usbhubd/) - USB Hub driver
- [usb/usbctl](usb/usbctl/) - TODO
### Sound
- [audio/ac97d](audio/ac97d/) - AC'97 codec driver
- [audio/ihdad](audio/ihdad/) - Intel HD Audio chipset driver
- [audio/sb16d](audio/sb16d/) - Sound Blaster sound card driver
### Networking
- [net/e1000d](net/e1000d/) - Intel Gigabit ethernet driver
- [net/ixgbed](net/ixgbed/) - Intel 10 Gigabit ethernet driver
- [net/rtl8139d](net/rtl8139d/), [net/rtl8168d](net/rtl8168d/) - Realtek ethernet drivers
- [net/virtio-netd](net/virtio-netd/) - VirtIO network device driver
### Virtualization
- vboxd - VirtualBox driver
Some drivers are work-in-progress and incomplete; see [this tracking issue](https://gitlab.redox-os.org/redox-os/base/-/issues/56) for their status.
## System Interfaces
This section explains the system interfaces used by drivers.
### System Calls
- `iopl` : system call that sets the I/O privilege level. x86 has four privilege rings (0/1/2/3); the kernel runs in ring 0 and userspace in ring 3. For obvious security reasons, IOPL can only be changed by the kernel, and the Redox kernel requires root to set it. It is set per process. Processes with IOPL=3 can access I/O ports, and the kernel can always access them.
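As a conceptual model of the rule above (our own illustration, not the kernel's actual data structures): x86 permits port I/O from privilege ring `cpl` when `cpl <= IOPL`, so ring 0 always qualifies and a ring-3 driver only qualifies after the kernel raises its IOPL to 3 via `iopl`:

```rust
// Conceptual sketch of the IOPL check, not Redox kernel code.
#[derive(Clone, Copy, PartialEq, PartialOrd)]
struct Ring(u8); // 0 = kernel, 3 = userspace

struct Process {
    cpl: Ring,  // current privilege level the code runs at
    iopl: Ring, // per-process I/O privilege level, set only by the kernel
}

impl Process {
    // Port I/O is permitted when the current ring is at least as
    // privileged as (numerically <=) the process IOPL.
    fn may_use_io_ports(&self) -> bool {
        self.cpl <= self.iopl
    }
}

fn main() {
    let kernel = Process { cpl: Ring(0), iopl: Ring(0) };
    let driver = Process { cpl: Ring(3), iopl: Ring(3) }; // after iopl(3)
    let ordinary = Process { cpl: Ring(3), iopl: Ring(0) };
    println!(
        "kernel: {}, driver: {}, ordinary: {}",
        kernel.may_use_io_ports(),
        driver.may_use_io_ports(),
        ordinary.may_use_io_ports()
    );
}
```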
### Schemes
- `/scheme/memory/physical` : Allows mapping physical memory frames to driver-accessible virtual memory pages, with various available memory types:
- `/scheme/memory/physical` : Default memory type (currently writeback)
- `/scheme/memory/physical@wb` : Writeback cached memory
- `/scheme/memory/physical@uc` : Uncacheable memory
- `/scheme/memory/physical@wc` : Write-combining memory
- `/scheme/irq` : Allows getting events from interrupts. It is used primarily by listening for its file descriptors using the `/scheme/event` scheme.
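The memory-type suffixes above compose with the base scheme path; a small sketch (the enum and helper are ours for illustration, not Redox API types):

```rust
// Sketch: build the physical-memory scheme path for a given memory type,
// following the @wb/@uc/@wc suffixes listed above.
enum PhysMemType {
    Default,        // currently writeback
    Writeback,      // @wb
    Uncacheable,    // @uc
    WriteCombining, // @wc
}

fn physical_scheme_path(ty: PhysMemType) -> String {
    let base = "/scheme/memory/physical";
    match ty {
        PhysMemType::Default => base.to_string(),
        PhysMemType::Writeback => format!("{}@wb", base),
        PhysMemType::Uncacheable => format!("{}@uc", base),
        PhysMemType::WriteCombining => format!("{}@wc", base),
    }
}

fn main() {
    // A driver wanting write-combining memory (e.g. for a framebuffer)
    // would open this path:
    println!("{}", physical_scheme_path(PhysMemType::WriteCombining));
}
```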
## Contribution Details
### Driver Design
A device driver on Redox is a user-space daemon that uses system calls and schemes to do its work, whereas drivers in operating systems with monolithic kernels use internal kernel APIs instead of common program APIs.
If you want to port a driver from a monolithic operating system to Redox, you will need to rewrite it, reverse-engineering the code logic, because that logic is tied to internal kernel APIs (a hard task if the device is complex; datasheets are much easier to work from).
### Write a Driver
Datasheets are preferable (and much easier to work from, depending on device complexity) when they are freely available. Be aware that datasheets are often provided under a [Non-Disclosure Agreement](https://en.wikipedia.org/wiki/Non-disclosure_agreement) by hardware vendors, which can affect the ability to create an MIT-licensed driver.
If datasheets aren't available, you will need to reverse-engineer BSD or Linux drivers. If you want to use a Linux driver (or a BSD driver not licensed under a BSD license) as a reference for your Redox driver, please ask in the [Chat](https://doc.redox-os.org/book/chat.html) before starting the implementation, so that you know and satisfy the license requirements and don't waste your time.
### Libraries
You should use the [redox-scheme](https://crates.io/crates/redox-scheme) and [redox_event](https://crates.io/crates/redox_event) libraries to create your drivers. You can also read the [example driver](https://gitlab.redox-os.org/redox-os/exampled) or the code of other drivers for the same type of device.
Before testing your changes be aware of [this](https://doc.redox-os.org/book/coding-and-building.html#how-to-update-initfs).
### References
If you want to reverse-engineer existing drivers, you can access the BSD code using these links:
- [FreeBSD drivers](https://github.com/freebsd/freebsd-src/tree/main/sys/dev)
- [NetBSD drivers](https://github.com/NetBSD/src/tree/trunk/sys/dev)
- [OpenBSD drivers](https://github.com/openbsd/src/tree/master/sys/dev)
## How To Contribute
To learn how to contribute to this system component, read the following document:
- [CONTRIBUTING.md](https://gitlab.redox-os.org/redox-os/redox/-/blob/master/CONTRIBUTING.md)
## Development
To learn how to do development with this system component inside the Redox build system, read the [Build System](https://doc.redox-os.org/book/build-system-reference.html) and [Coding and Building](https://doc.redox-os.org/book/coding-and-building.html) pages.
### How To Build
To build this system component you need to download the Redox build system; you can learn how on the [Building Redox](https://doc.redox-os.org/book/podman-build.html) page.
This is necessary because system components only work with cross-compilation to a Redox virtual machine or real hardware, though you can do some testing from Linux.
[Back to top](#drivers)
@@ -0,0 +1,33 @@
[package]
name = "acpid"
description = "ACPI daemon"
version = "0.1.0"
authors = ["4lDO2 <4lDO2@protonmail.com>"]
edition = "2018"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
acpi.workspace = true
arrayvec = "0.7.6"
log.workspace = true
num-derive = "0.3"
num-traits = "0.2"
parking_lot.workspace = true
plain.workspace = true
redox_syscall.workspace = true
redox_event.workspace = true
rustc-hash = "1.1.0"
thiserror.workspace = true
ron.workspace = true
serde.workspace = true
amlserde = { path = "../amlserde" }
common = { path = "../common" }
daemon = { path = "../../daemon" }
libredox.workspace = true
redox-scheme.workspace = true
scheme-utils = { path = "../../scheme-utils" }
[lints]
workspace = true
@@ -0,0 +1,873 @@
use acpi::aml::object::{Object, WrappedObject};
use acpi::aml::op_region::{RegionHandler, RegionSpace};
use rustc_hash::FxHashMap;
use std::convert::{TryFrom, TryInto};
use std::error::Error;
use std::ops::Deref;
use std::str::FromStr;
use std::sync::{Arc, Mutex};
use std::{fmt, mem};
use syscall::PAGE_SIZE;
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
use common::io::{Io, Pio};
use parking_lot::{RwLock, RwLockReadGuard, RwLockWriteGuard};
use thiserror::Error;
use acpi::{
aml::{namespace::AmlName, AmlError, Interpreter},
platform::AcpiPlatform,
AcpiTables,
};
use amlserde::aml_serde_name::aml_to_symbol;
use amlserde::{AmlSerde, AmlSerdeValue};
#[cfg(target_arch = "x86_64")]
pub mod dmar;
use crate::aml_physmem::{AmlPageCache, AmlPhysMemHandler};
/// The raw SDT header struct, as defined by the ACPI specification.
#[derive(Copy, Clone, Debug)]
#[repr(C, packed)]
pub struct SdtHeader {
pub signature: [u8; 4],
pub length: u32,
pub revision: u8,
pub checksum: u8,
pub oem_id: [u8; 6],
pub oem_table_id: [u8; 8],
pub oem_revision: u32,
pub creator_id: u32,
pub creator_revision: u32,
}
unsafe impl plain::Plain for SdtHeader {}
impl SdtHeader {
pub fn signature(&self) -> SdtSignature {
SdtSignature {
signature: self.signature,
oem_id: self.oem_id,
oem_table_id: self.oem_table_id,
}
}
pub fn length(&self) -> usize {
self.length
.try_into()
.expect("expected usize to be at least 32 bits")
}
}
#[derive(Clone, Copy, Debug, Eq, Hash, Ord, PartialEq, PartialOrd)]
pub struct SdtSignature {
pub signature: [u8; 4],
pub oem_id: [u8; 6],
pub oem_table_id: [u8; 8],
}
impl fmt::Display for SdtSignature {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(
f,
"{}-{}-{}",
String::from_utf8_lossy(&self.signature),
String::from_utf8_lossy(&self.oem_id),
String::from_utf8_lossy(&self.oem_table_id)
)
}
}
#[derive(Debug, Error)]
pub enum TablePhysLoadError {
// TODO: Make syscall::Error implement std::error::Error, when enabling a Cargo feature.
#[error("i/o error: {0}")]
Io(#[from] std::io::Error),
#[error("invalid SDT: {0}")]
Validity(#[from] InvalidSdtError),
}
#[derive(Debug, Error)]
pub enum InvalidSdtError {
#[error("invalid size")]
InvalidSize,
#[error("invalid checksum")]
BadChecksum,
}
struct PhysmapGuard {
virt: *const u8,
size: usize,
}
impl PhysmapGuard {
fn map(page: usize, page_count: usize) -> std::io::Result<Self> {
let size = page_count * PAGE_SIZE;
let virt = unsafe {
common::physmap(page, size, common::Prot::RO, common::MemoryType::default())
.map_err(|error| std::io::Error::from_raw_os_error(error.errno()))?
};
Ok(Self {
virt: virt as *const u8,
size,
})
}
}
impl Deref for PhysmapGuard {
type Target = [u8];
fn deref(&self) -> &Self::Target {
unsafe { std::slice::from_raw_parts(self.virt as *const u8, self.size) }
}
}
impl Drop for PhysmapGuard {
fn drop(&mut self) {
unsafe {
let _ = libredox::call::munmap(self.virt as *mut (), self.size);
}
}
}
#[derive(Clone)]
pub struct Sdt(Arc<[u8]>);
impl Sdt {
pub fn new(slice: Arc<[u8]>) -> Result<Self, InvalidSdtError> {
let header = match plain::from_bytes::<SdtHeader>(&slice) {
Ok(header) => header,
Err(plain::Error::TooShort) => return Err(InvalidSdtError::InvalidSize),
Err(plain::Error::BadAlignment) => panic!(
"plain::from_bytes failed due to alignment, but SdtHeader is #[repr(packed)]!"
),
};
if header.length() != slice.len() {
return Err(InvalidSdtError::InvalidSize);
}
let checksum = slice
.iter()
.copied()
.fold(0_u8, |current_sum, item| current_sum.wrapping_add(item));
if checksum != 0 {
return Err(InvalidSdtError::BadChecksum);
}
Ok(Self(slice))
}
pub fn load_from_physical(physaddr: usize) -> Result<Self, TablePhysLoadError> {
let physaddr_start_page = physaddr / PAGE_SIZE * PAGE_SIZE;
let physaddr_page_offset = physaddr % PAGE_SIZE;
// Begin by reading and validating the header first. The SDT header is always 36 bytes
// long, and can thus span either one or two page frames.
let needs_extra_page = (PAGE_SIZE - physaddr_page_offset)
.checked_sub(mem::size_of::<SdtHeader>())
.is_none();
let page_table_count = 1 + if needs_extra_page { 1 } else { 0 };
let pages = PhysmapGuard::map(physaddr_start_page, page_table_count)?;
assert!(pages.len() >= mem::size_of::<SdtHeader>());
let sdt_mem = &pages[physaddr_page_offset..];
let sdt = plain::from_bytes::<SdtHeader>(&sdt_mem[..mem::size_of::<SdtHeader>()])
.expect("either alignment is wrong, or the length is too short, both of which are already checked for");
let total_length = sdt.length();
let base_length = std::cmp::min(total_length, sdt_mem.len());
let extended_length = total_length - base_length;
let mut loaded = sdt_mem[..base_length].to_owned();
loaded.reserve(extended_length);
const SIMULTANEOUS_PAGE_COUNT: usize = 4;
let mut left = extended_length;
let mut offset = physaddr_start_page + page_table_count * PAGE_SIZE;
let length_per_iteration = PAGE_SIZE * SIMULTANEOUS_PAGE_COUNT;
while left > 0 {
let to_copy = std::cmp::min(left, length_per_iteration);
let additional_pages = PhysmapGuard::map(offset, to_copy.div_ceil(PAGE_SIZE))?;
loaded.extend(&additional_pages[..to_copy]);
left -= to_copy;
offset += to_copy;
}
assert_eq!(left, 0);
Self::new(loaded.into()).map_err(Into::into)
}
pub fn as_slice(&self) -> &[u8] {
&self.0
}
}
impl Deref for Sdt {
type Target = SdtHeader;
fn deref(&self) -> &Self::Target {
plain::from_bytes::<SdtHeader>(&self.0)
.expect("expected already validated Sdt to be able to get its header")
}
}
impl Sdt {
pub fn data(&self) -> &[u8] {
&self.0[mem::size_of::<SdtHeader>()..]
}
}
impl fmt::Debug for Sdt {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("Sdt")
.field("header", &*self as &SdtHeader)
.field("extra_len", &self.data().len())
.finish()
}
}
pub struct Dsdt(Sdt);
pub struct Ssdt(Sdt);
// Current AML implementation builds the aml_context.namespace at startup,
// but the cache for symbols is lazy-loaded when someone
// reads from the acpi:/symbols scheme.
// If you dynamically add an SDT, you can add to the namespace, but you
// must empty the cache so it is rebuilt.
// If you modify an SDT, you must discard the aml_context and rebuild it.
pub struct AmlSymbols {
aml_context: Option<Interpreter<AmlPhysMemHandler>>,
// k = name, v = description
symbol_cache: FxHashMap<String, String>,
page_cache: Arc<Mutex<AmlPageCache>>,
aml_region_handlers: Vec<(RegionSpace, Box<dyn RegionHandler>)>,
}
impl AmlSymbols {
pub fn new(aml_region_handlers: Vec<(RegionSpace, Box<dyn RegionHandler>)>) -> Self {
Self {
aml_context: None,
symbol_cache: FxHashMap::default(),
page_cache: Arc::new(Mutex::new(AmlPageCache::default())),
aml_region_handlers,
}
}
pub fn init(&mut self, pci_fd: Option<&libredox::Fd>) -> Result<(), Box<dyn Error>> {
if self.aml_context.is_some() {
return Err("AML interpreter already initialized".into());
}
let format_err = |err| format!("{:?}", err);
let handler = AmlPhysMemHandler::new(pci_fd, Arc::clone(&self.page_cache));
//TODO: use these parsed tables for the rest of acpid
let rsdp_address = usize::from_str_radix(&std::env::var("RSDP_ADDR")?, 16)?;
let tables =
unsafe { AcpiTables::from_rsdp(handler.clone(), rsdp_address).map_err(format_err)? };
let platform = AcpiPlatform::new(tables, handler).map_err(format_err)?;
let interpreter = Interpreter::new_from_platform(&platform).map_err(format_err)?;
for (region, handler) in self.aml_region_handlers.drain(..) {
interpreter.install_region_handler(region, handler);
}
self.aml_context = Some(interpreter);
Ok(())
}
pub fn aml_context_mut(
&mut self,
pci_fd: Option<&libredox::Fd>,
) -> Result<&mut Interpreter<AmlPhysMemHandler>, AmlEvalError> {
if self.aml_context.is_none() {
match self.init(pci_fd) {
Ok(()) => (),
Err(err) => {
log::error!("failed to initialize AML context: {}", err);
}
}
}
self.aml_context
.as_mut()
.ok_or(AmlEvalError::NotInitialized)
}
pub fn symbols_cache(&self) -> &FxHashMap<String, String> {
&self.symbol_cache
}
pub fn lookup(&self, symbol: &str) -> Option<String> {
if let Some(description) = self.symbol_cache.get(symbol) {
log::trace!("Found symbol in cache, {}, {}", symbol, description);
return Some(description.to_owned());
}
None
}
pub fn build_cache(&mut self, pci_fd: Option<&libredox::Fd>) {
let Ok(aml_context) = self.aml_context_mut(pci_fd) else {
return;
};
let mut symbol_list: Vec<(AmlName, String)> = Vec::with_capacity(5000);
if aml_context
.namespace
.lock()
.traverse(|level_aml_name, level| {
for (child_seg, handle) in level.values.iter() {
if let Ok(aml_name) =
AmlName::from_name_seg(child_seg.to_owned()).resolve(level_aml_name)
{
let name = aml_to_symbol(&aml_name);
symbol_list.push((aml_name, name));
} else {
log::error!(
"AmlName resolve failed, {:?}:{:?}",
level_aml_name,
child_seg
);
}
}
Ok(true)
})
.is_err()
{
log::error!("Namespace traverse failed");
return;
}
let mut symbol_cache: FxHashMap<String, String> = FxHashMap::default();
for (aml_name, name) in &symbol_list {
// create an empty entry, in case something goes wrong with serialization
symbol_cache.insert(name.to_owned(), "".to_owned());
if let Some(ser_value) = AmlSerde::from_aml(aml_context, aml_name) {
if let Ok(ser_string) = ron::ser::to_string_pretty(&ser_value, Default::default()) {
// replace the empty entry
symbol_cache.insert(name.to_owned(), ser_string);
}
}
}
// Cache the new list
log::trace!("Updating symbols list");
self.symbol_cache = symbol_cache;
}
}
#[derive(Debug, Error)]
pub enum AmlEvalError {
#[error("AML error")]
AmlError(AmlError),
#[error("Failed to serialize argument")]
SerializationError,
#[error("Failed to deserialize")]
DeserializationError,
#[error("AML not initialized")]
NotInitialized,
}
impl From<AmlError> for AmlEvalError {
fn from(value: AmlError) -> Self {
AmlEvalError::AmlError(value)
}
}
pub struct AcpiContext {
tables: Vec<Sdt>,
dsdt: Option<Dsdt>,
fadt: Option<Fadt>,
aml_symbols: RwLock<AmlSymbols>,
// TODO: The kernel ACPI code seemed to use load_table quite ubiquitously, however ACPI 5.1
// states that DDBHandles can only be obtained when loading XSDT-pointed tables. So, we'll
// generate an index only for those.
sdt_order: RwLock<Vec<Option<SdtSignature>>>,
pub next_ctx: RwLock<u64>,
}
impl AcpiContext {
pub fn aml_eval(
&self,
symbol: AmlName,
args: Vec<AmlSerdeValue>,
) -> Result<AmlSerdeValue, AmlEvalError> {
let mut symbols = self.aml_symbols.write();
let interpreter = symbols.aml_context_mut(None)?;
interpreter.acquire_global_lock(16)?;
let args = args
.into_iter()
.map(|aml_serde_value| {
aml_serde_value
.to_aml_object()
.map(Object::wrap)
.ok_or(AmlEvalError::DeserializationError)
})
.collect::<Result<Vec<WrappedObject>, AmlEvalError>>()?;
let result = interpreter.evaluate(symbol, args);
interpreter
.release_global_lock()
.expect("Failed to release GIL!"); //TODO: check if this should panic
result
.map_err(AmlEvalError::from)
.map(|object| {
AmlSerdeValue::from_aml_value(object.deref())
.ok_or(AmlEvalError::SerializationError)
})
.flatten()
}
pub fn init(
rxsdt_physaddrs: impl Iterator<Item = u64>,
ec: Vec<(RegionSpace, Box<dyn RegionHandler>)>,
) -> Self {
let tables = rxsdt_physaddrs
.map(|physaddr| {
let physaddr: usize = physaddr
.try_into()
.expect("expected ACPI addresses to be compatible with the current word size");
log::trace!("TABLE AT {:#>08X}", physaddr);
Sdt::load_from_physical(physaddr).expect("failed to load physical SDT")
})
.collect::<Vec<Sdt>>();
let mut this = Self {
tables,
dsdt: None,
fadt: None,
// Temporary values
aml_symbols: RwLock::new(AmlSymbols::new(ec)),
next_ctx: RwLock::new(0),
sdt_order: RwLock::new(Vec::new()),
};
for table in &this.tables {
this.new_index(&table.signature());
}
Fadt::init(&mut this);
//TODO (hangs on real hardware): Dmar::init(&this);
this
}
pub fn dsdt(&self) -> Option<&Dsdt> {
self.dsdt.as_ref()
}
pub fn ssdts(&self) -> impl Iterator<Item = Ssdt> + '_ {
self.find_multiple_sdts(*b"SSDT")
.map(|sdt| Ssdt(sdt.clone()))
}
fn find_single_sdt_pos(&self, signature: [u8; 4]) -> Option<usize> {
let count = self
.tables
.iter()
.filter(|sdt| sdt.signature == signature)
.count();
if count > 1 {
log::warn!(
"Expected only a single SDT of signature `{}` ({:?}), but there were {}",
String::from_utf8_lossy(&signature),
signature,
count
);
}
self.tables
.iter()
.position(|sdt| sdt.signature == signature)
}
pub fn find_multiple_sdts<'a>(&'a self, signature: [u8; 4]) -> impl Iterator<Item = &'a Sdt> {
self.tables
.iter()
.filter(move |sdt| sdt.signature == signature)
}
pub fn take_single_sdt(&self, signature: [u8; 4]) -> Option<Sdt> {
self.find_single_sdt_pos(signature)
.map(|pos| self.tables[pos].clone())
}
pub fn fadt(&self) -> Option<&Fadt> {
self.fadt.as_ref()
}
pub fn sdt_from_signature(&self, signature: &SdtSignature) -> Option<&Sdt> {
self.tables.iter().find(|sdt| {
sdt.signature == signature.signature
&& sdt.oem_id == signature.oem_id
&& sdt.oem_table_id == signature.oem_table_id
})
}
pub fn get_signature_from_index(&self, index: usize) -> Option<SdtSignature> {
self.sdt_order.read().get(index).copied().flatten()
}
pub fn get_index_from_signature(&self, signature: &SdtSignature) -> Option<usize> {
self.sdt_order
.read()
.iter()
.rposition(|sig| sig.map_or(false, |sig| &sig == signature))
}
pub fn tables(&self) -> &[Sdt] {
&self.tables
}
pub fn new_index(&self, signature: &SdtSignature) {
self.sdt_order.write().push(Some(*signature));
}
pub fn aml_lookup(&self, symbol: &str) -> Option<String> {
if let Ok(aml_symbols) = self.aml_symbols(None) {
aml_symbols.lookup(symbol)
} else {
None
}
}
pub fn aml_symbols(
&self,
pci_fd: Option<&libredox::Fd>,
) -> Result<RwLockReadGuard<'_, AmlSymbols>, AmlError> {
// return the cached value if it exists
let symbols = self.aml_symbols.read();
if !symbols.symbols_cache().is_empty() {
return Ok(symbols);
}
// free the read lock
drop(symbols);
// List has not been initialized, we have to build it
log::trace!("Creating symbols list");
let mut aml_symbols = self.aml_symbols.write();
aml_symbols.build_cache(pci_fd);
// return the cached value
Ok(RwLockWriteGuard::downgrade(aml_symbols))
}
/// Discard any cached symbols list. To be called if the AML namespace changes.
pub fn aml_symbols_reset(&self) {
let mut aml_symbols = self.aml_symbols.write();
aml_symbols.symbol_cache = FxHashMap::default();
}
/// Set Power State
/// See https://uefi.org/sites/default/files/resources/ACPI_6_1.pdf
/// - search for PM1a
/// See https://forum.osdev.org/viewtopic.php?t=16990 for practical details
pub fn set_global_s_state(&self, state: u8) {
if state != 5 {
return;
}
let fadt = match self.fadt() {
Some(fadt) => fadt,
None => {
log::error!("Cannot set global S-state due to missing FADT.");
return;
}
};
let port = fadt.pm1a_control_block as u16;
let mut val = 1 << 13;
let aml_symbols = self.aml_symbols.read();
let s5_aml_name = match acpi::aml::namespace::AmlName::from_str("\\_S5") {
Ok(aml_name) => aml_name,
Err(error) => {
log::error!("Could not build AmlName for \\_S5, {:?}", error);
return;
}
};
let s5 = match &aml_symbols.aml_context {
Some(aml_context) => match aml_context.namespace.lock().get(s5_aml_name) {
Ok(s5) => s5,
Err(error) => {
log::error!("Cannot set S-state, missing \\_S5, {:?}", error);
return;
}
},
None => {
log::error!("Cannot set S-state, AML context not initialized");
return;
}
};
let package = match s5.deref() {
acpi::aml::object::Object::Package(package) => package,
_ => {
log::error!("Cannot set S-state, \\_S5 is not a package");
return;
}
};
let slp_typa = match package[0].deref() {
acpi::aml::object::Object::Integer(i) => i.to_owned(),
_ => {
log::error!("typa is not an Integer");
return;
}
};
let slp_typb = match package[1].deref() {
acpi::aml::object::Object::Integer(i) => i.to_owned(),
_ => {
log::error!("typb is not an Integer");
return;
}
};
log::trace!("Shutdown SLP_TYPa {:X}, SLP_TYPb {:X}", slp_typa, slp_typb);
val |= slp_typa as u16;
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
{
log::warn!("Shutdown with ACPI outw(0x{:X}, 0x{:X})", port, val);
Pio::<u16>::new(port).write(val);
}
// TODO: Handle SLP_TYPb
#[cfg(not(any(target_arch = "x86", target_arch = "x86_64")))]
{
log::error!(
"Cannot shutdown with ACPI outw(0x{:X}, 0x{:X}) on this architecture",
port,
val
);
}
loop {
core::hint::spin_loop();
}
}
}
#[repr(C, packed)]
#[derive(Clone, Copy, Debug)]
pub struct FadtStruct {
pub header: SdtHeader,
pub firmware_ctrl: u32,
pub dsdt: u32,
// field used in ACPI 1.0; no longer in use, for compatibility only
reserved: u8,
pub preferred_power_management: u8,
pub sci_interrupt: u16,
pub smi_command_port: u32,
pub acpi_enable: u8,
pub acpi_disable: u8,
pub s4_bios_req: u8,
pub pstate_control: u8,
pub pm1a_event_block: u32,
pub pm1b_event_block: u32,
pub pm1a_control_block: u32,
pub pm1b_control_block: u32,
pub pm2_control_block: u32,
pub pm_timer_block: u32,
pub gpe0_block: u32,
pub gpe1_block: u32,
pub pm1_event_length: u8,
pub pm1_control_length: u8,
pub pm2_control_length: u8,
pub pm_timer_length: u8,
pub gpe0_length: u8,
pub gpe1_length: u8,
pub gpe1_base: u8,
pub c_state_control: u8,
pub worst_c2_latency: u16,
pub worst_c3_latency: u16,
pub flush_size: u16,
pub flush_stride: u16,
pub duty_offset: u8,
pub duty_width: u8,
pub day_alarm: u8,
pub month_alarm: u8,
pub century: u8,
// reserved in ACPI 1.0; used since ACPI 2.0+
pub boot_architecture_flags: u16,
reserved2: u8,
pub flags: u32,
}
unsafe impl plain::Plain for FadtStruct {}
#[repr(C, packed)]
#[derive(Clone, Copy, Debug, Default)]
pub struct GenericAddressStructure {
address_space: u8,
bit_width: u8,
bit_offset: u8,
access_size: u8,
address: u64,
}
#[repr(C, packed)]
#[derive(Clone, Copy, Debug)]
pub struct FadtAcpi2Struct {
// 12 byte structure; see below for details
pub reset_reg: GenericAddressStructure,
pub reset_value: u8,
reserved3: [u8; 3],
// 64bit pointers - Available on ACPI 2.0+
pub x_firmware_control: u64,
pub x_dsdt: u64,
pub x_pm1a_event_block: GenericAddressStructure,
pub x_pm1b_event_block: GenericAddressStructure,
pub x_pm1a_control_block: GenericAddressStructure,
pub x_pm1b_control_block: GenericAddressStructure,
pub x_pm2_control_block: GenericAddressStructure,
pub x_pm_timer_block: GenericAddressStructure,
pub x_gpe0_block: GenericAddressStructure,
pub x_gpe1_block: GenericAddressStructure,
}
unsafe impl plain::Plain for FadtAcpi2Struct {}
#[derive(Clone)]
pub struct Fadt(Sdt);
impl Fadt {
pub fn acpi_2_struct(&self) -> Option<&FadtAcpi2Struct> {
let bytes = &self.0 .0[mem::size_of::<FadtStruct>()..];
match plain::from_bytes::<FadtAcpi2Struct>(bytes) {
Ok(fadt2) => Some(fadt2),
Err(plain::Error::TooShort) => None,
Err(plain::Error::BadAlignment) => unreachable!(
"plain::from_bytes reported bad alignment, but FadtAcpi2Struct is #[repr(packed)]"
),
}
}
}
impl Deref for Fadt {
type Target = FadtStruct;
fn deref(&self) -> &Self::Target {
plain::from_bytes::<FadtStruct>(&self.0 .0)
.expect("expected FADT length to have been validated in Fadt::new")
}
}
impl Fadt {
pub fn new(sdt: Sdt) -> Option<Fadt> {
if sdt.signature != *b"FACP" || sdt.length() < mem::size_of::<FadtStruct>() {
return None;
}
Some(Fadt(sdt))
}
pub fn init(context: &mut AcpiContext) {
let fadt_sdt = context
.take_single_sdt(*b"FACP")
.expect("expected ACPI to always have a FADT");
let fadt = match Fadt::new(fadt_sdt) {
Some(fadt) => fadt,
None => {
log::error!("Failed to find FADT");
return;
}
};
let dsdt_ptr = match fadt.acpi_2_struct() {
Some(fadt2) => usize::try_from(fadt2.x_dsdt).unwrap_or_else(|_| {
usize::try_from(fadt.dsdt).expect("expected any given u32 to fit within usize")
}),
None => usize::try_from(fadt.dsdt).expect("expected any given u32 to fit within usize"),
};
log::debug!("DSDT at {:X}", dsdt_ptr);
let dsdt_sdt = match Sdt::load_from_physical(dsdt_ptr) {
Ok(dsdt) => dsdt,
Err(error) => {
log::error!("Failed to load DSDT: {}", error);
return;
}
};
context.fadt = Some(fadt.clone());
context.dsdt = Some(Dsdt(dsdt_sdt.clone()));
context.tables.push(dsdt_sdt);
}
}
pub enum PossibleAmlTables {
Dsdt(Dsdt),
Ssdt(Ssdt),
}
impl PossibleAmlTables {
pub fn try_new(inner: Sdt) -> Option<Self> {
match &inner.signature {
b"DSDT" => Some(Self::Dsdt(Dsdt(inner))),
b"SSDT" => Some(Self::Ssdt(Ssdt(inner))),
_ => None,
}
}
}
impl AmlContainingTable for PossibleAmlTables {
fn aml(&self) -> &[u8] {
match self {
Self::Dsdt(dsdt) => dsdt.aml(),
Self::Ssdt(ssdt) => ssdt.aml(),
}
}
fn header(&self) -> &SdtHeader {
match self {
Self::Dsdt(dsdt) => dsdt.header(),
Self::Ssdt(ssdt) => ssdt.header(),
}
}
}
pub trait AmlContainingTable {
fn aml(&self) -> &[u8];
fn header(&self) -> &SdtHeader;
}
impl<T> AmlContainingTable for &T
where
T: AmlContainingTable,
{
fn aml(&self) -> &[u8] {
T::aml(*self)
}
fn header(&self) -> &SdtHeader {
T::header(*self)
}
}
impl AmlContainingTable for Dsdt {
fn aml(&self) -> &[u8] {
self.0.data()
}
fn header(&self) -> &SdtHeader {
&*self.0
}
}
impl AmlContainingTable for Ssdt {
fn aml(&self) -> &[u8] {
self.0.data()
}
fn header(&self) -> &SdtHeader {
&*self.0
}
}
@@ -0,0 +1,128 @@
use std::ops::{Deref, DerefMut};
use common::io::Mmio;
// TODO: Only wrap with Mmio where there are hardware-registers. (Some of these structs seem to be
// ring buffer entries, which are not to be treated the same way).
pub struct DrhdPage {
virt: *mut Drhd,
}
impl DrhdPage {
pub fn map(base_phys: usize) -> syscall::Result<Self> {
assert_eq!(
base_phys % crate::acpi::PAGE_SIZE,
0,
"DRHD registers must be page-aligned"
);
// TODO: Uncachable? Can reads have side-effects?
let virt = unsafe {
common::physmap(
base_phys,
crate::acpi::PAGE_SIZE,
common::Prot::RO,
common::MemoryType::default(),
)?
} as *mut Drhd;
Ok(Self { virt })
}
}
impl Deref for DrhdPage {
type Target = Drhd;
fn deref(&self) -> &Self::Target {
unsafe { &*self.virt }
}
}
impl DerefMut for DrhdPage {
fn deref_mut(&mut self) -> &mut Self::Target {
unsafe { &mut *self.virt }
}
}
impl Drop for DrhdPage {
fn drop(&mut self) {
unsafe {
let _ = libredox::call::munmap(self.virt.cast(), crate::acpi::PAGE_SIZE);
}
}
}
#[repr(C, packed)]
pub struct DrhdFault {
pub sts: Mmio<u32>,
pub ctrl: Mmio<u32>,
pub data: Mmio<u32>,
pub addr: [Mmio<u32>; 2],
_rsv: [Mmio<u64>; 2],
pub log: Mmio<u64>,
}
#[repr(C, packed)]
pub struct DrhdProtectedMemory {
pub en: Mmio<u32>,
pub low_base: Mmio<u32>,
pub low_limit: Mmio<u32>,
pub high_base: Mmio<u64>,
pub high_limit: Mmio<u64>,
}
#[repr(C, packed)]
pub struct DrhdInvalidation {
pub queue_head: Mmio<u64>,
pub queue_tail: Mmio<u64>,
pub queue_addr: Mmio<u64>,
_rsv: Mmio<u32>,
pub cmpl_sts: Mmio<u32>,
pub cmpl_ctrl: Mmio<u32>,
pub cmpl_data: Mmio<u32>,
pub cmpl_addr: [Mmio<u32>; 2],
}
#[repr(C, packed)]
pub struct DrhdPageRequest {
pub queue_head: Mmio<u64>,
pub queue_tail: Mmio<u64>,
pub queue_addr: Mmio<u64>,
_rsv: Mmio<u32>,
pub sts: Mmio<u32>,
pub ctrl: Mmio<u32>,
pub data: Mmio<u32>,
pub addr: [Mmio<u32>; 2],
}
#[repr(C, packed)]
pub struct DrhdMtrrVariable {
pub base: Mmio<u64>,
pub mask: Mmio<u64>,
}
#[repr(C, packed)]
pub struct DrhdMtrr {
pub cap: Mmio<u64>,
pub def_type: Mmio<u64>,
pub fixed: [Mmio<u64>; 11],
pub variable: [DrhdMtrrVariable; 10],
}
#[repr(C, packed)]
pub struct Drhd {
pub version: Mmio<u32>,
_rsv: Mmio<u32>,
pub cap: Mmio<u64>,
pub ext_cap: Mmio<u64>,
pub gl_cmd: Mmio<u32>,
pub gl_sts: Mmio<u32>,
pub root_table: Mmio<u64>,
pub ctx_cmd: Mmio<u64>,
_rsv1: Mmio<u32>,
pub fault: DrhdFault,
_rsv2: Mmio<u32>,
pub pm: DrhdProtectedMemory,
pub invl: DrhdInvalidation,
_rsv3: Mmio<u64>,
pub intr_table: Mmio<u64>,
pub page_req: DrhdPageRequest,
pub mtrr: DrhdMtrr,
}
@@ -0,0 +1,528 @@
//! DMA Remapping Table -- `DMAR`. This is Intel's implementation of IOMMU functionality, known as
//! VT-d.
//!
//! To understand what all of these structs mean, refer to the "Intel(R) Virtualization
//! Technology for Directed I/O" specification.
// TODO: Move this code to a separate driver as well?
use std::convert::TryFrom;
use std::ops::Deref;
use std::{fmt, mem};
use common::io::Io as _;
use num_derive::FromPrimitive;
use num_traits::FromPrimitive;
use self::drhd::DrhdPage;
use crate::acpi::{AcpiContext, Sdt, SdtHeader};
pub mod drhd;
#[repr(C, packed)]
pub struct DmarStruct {
pub sdt_header: SdtHeader,
pub host_addr_width: u8,
pub flags: u8,
pub _rsvd: [u8; 10],
// This header is followed by N remapping structures.
}
unsafe impl plain::Plain for DmarStruct {}
/// The DMA Remapping Table
#[derive(Debug)]
pub struct Dmar(Sdt);
impl Dmar {
fn remmapping_structs_area(&self) -> &[u8] {
&self.0.as_slice()[mem::size_of::<DmarStruct>()..]
}
}
impl Deref for Dmar {
type Target = DmarStruct;
fn deref(&self) -> &Self::Target {
plain::from_bytes(self.0.as_slice())
.expect("expected Dmar struct to already have checked the length, and alignment issues should be impossible due to #[repr(packed)]")
}
}
impl Dmar {
// TODO: Again, perhaps put this code into a different driver, and read the table the regular
// way via the acpi scheme?
pub fn init(acpi_ctx: &AcpiContext) {
let dmar_sdt = match acpi_ctx.take_single_sdt(*b"DMAR") {
Some(dmar_sdt) => dmar_sdt,
None => {
log::warn!("Unable to find `DMAR` ACPI table.");
return;
}
};
let dmar = match Dmar::new(dmar_sdt) {
Some(dmar) => dmar,
None => {
log::error!("Failed to parse DMAR table, possibly malformed.");
return;
}
};
log::info!("Found DMAR: host address width {}, flags {:#x}", dmar.host_addr_width, dmar.flags);
log::debug!("DMAR: {:?}", dmar);
for dmar_entry in dmar.iter() {
log::debug!("DMAR entry: {:?}", dmar_entry);
match dmar_entry {
DmarEntry::Drhd(dmar_drhd) => {
let drhd = dmar_drhd.map();
log::debug!("VER: {:X}", drhd.version.read());
log::debug!("CAP: {:X}", drhd.cap.read());
log::debug!("EXT_CAP: {:X}", drhd.ext_cap.read());
log::debug!("GCMD: {:X}", drhd.gl_cmd.read());
log::debug!("GSTS: {:X}", drhd.gl_sts.read());
log::debug!("RT: {:X}", drhd.root_table.read());
}
_ => (),
}
}
}
fn new(sdt: Sdt) -> Option<Dmar> {
assert_eq!(
sdt.signature, *b"DMAR",
"signature already checked against `DMAR`"
);
if sdt.length() < mem::size_of::<DmarStruct>() {
log::error!(
"The DMAR table was too small ({} B < {} B).",
sdt.length(),
mem::size_of::<DmarStruct>()
);
return None;
}
// No need to check alignment for #[repr(packed)] structs.
Some(Dmar(sdt))
}
pub fn iter(&self) -> DmarIter<'_> {
DmarIter(DmarRawIter {
bytes: self.remmapping_structs_area(),
})
}
}
/// DMAR DMA Remapping Hardware Unit Definition
#[derive(Clone, Copy, Debug)]
#[repr(C, packed)]
pub struct DmarDrhdHeader {
pub kind: u16,
pub length: u16,
pub flags: u8,
pub _rsv: u8,
pub segment: u16,
pub base: u64,
}
unsafe impl plain::Plain for DmarDrhdHeader {}
#[derive(Clone, Copy, Debug)]
#[repr(C, packed)]
pub struct DeviceScopeHeader {
pub ty: u8,
pub len: u8,
pub _rsvd: u16,
pub enumeration_id: u8,
pub start_bus_num: u8,
// The variable-sized path comes after.
}
unsafe impl plain::Plain for DeviceScopeHeader {}
pub struct DeviceScope(Box<[u8]>);
impl DeviceScope {
pub fn try_new(raw: &[u8]) -> Option<Self> {
// TODO: Check ty.
let header_bytes = match raw.get(..mem::size_of::<DeviceScopeHeader>()) {
Some(bytes) => bytes,
None => return None,
};
let header = plain::from_bytes::<DeviceScopeHeader>(header_bytes)
.expect("length already checked, and alignment 1 (#[repr(packed)]) should suffice");
let len = usize::from(header.len);
if len > raw.len() {
log::warn!("Device scope len field larger than the available bytes.");
return None;
}
Some(Self(raw.into()))
}
}
impl fmt::Debug for DeviceScope {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("DeviceScope")
.field("header", &*self as &DeviceScopeHeader)
.field("path", &self.path())
.finish()
}
}
impl Deref for DeviceScope {
type Target = DeviceScopeHeader;
fn deref(&self) -> &Self::Target {
plain::from_bytes(&self.0)
.expect("expected length to be sufficient, and alignment 1 (due to #[repr(packed)])")
}
}
impl DeviceScope {
pub fn path(&self) -> &[u8] {
&self.0[mem::size_of::<DeviceScopeHeader>()..]
}
}
pub struct DmarDrhd(Box<[u8]>);
impl DmarDrhd {
pub fn try_new(raw: &[u8]) -> Option<Self> {
if raw.len() < mem::size_of::<DmarDrhdHeader>() {
return None;
}
Some(Self(raw.into()))
}
pub fn device_scope_area(&self) -> &[u8] {
&self.0[mem::size_of::<DmarDrhdHeader>()..]
}
pub fn map(&self) -> DrhdPage {
let base = usize::try_from(self.base).expect("expected u64 to fit within usize");
DrhdPage::map(base).expect("failed to map DRHD registers")
}
}
impl Deref for DmarDrhd {
type Target = DmarDrhdHeader;
fn deref(&self) -> &Self::Target {
plain::from_bytes::<DmarDrhdHeader>(&self.0[..mem::size_of::<DmarDrhdHeader>()])
.expect("length is already checked, and alignment 1 (#[repr(packed)]) should suffice")
}
}
impl fmt::Debug for DmarDrhd {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("DmarDrhd")
.field("header", &*self as &DmarDrhdHeader)
// TODO: print out device scopes
.finish()
}
}
/// DMAR Reserved Memory Region Reporting
#[derive(Clone, Copy, Debug)]
#[repr(C, packed)]
pub struct DmarRmrrHeader {
pub kind: u16,
pub length: u16,
pub _rsv: u16,
pub segment: u16,
pub base: u64,
pub limit: u64,
// The device scopes come after.
}
unsafe impl plain::Plain for DmarRmrrHeader {}
pub struct DmarRmrr(Box<[u8]>);
impl DmarRmrr {
pub fn try_new(raw: &[u8]) -> Option<Self> {
if raw.len() < mem::size_of::<DmarRmrrHeader>() {
return None;
}
Some(Self(raw.into()))
}
}
impl Deref for DmarRmrr {
type Target = DmarRmrrHeader;
fn deref(&self) -> &Self::Target {
plain::from_bytes(&self.0[..mem::size_of::<DmarRmrrHeader>()])
.expect("length already checked, and with #[repr(packed)] alignment should be okay")
}
}
impl fmt::Debug for DmarRmrr {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("DmarRmrr")
.field("header", &*self as &DmarRmrrHeader)
// TODO: print out device scopes
.finish()
}
}
/// DMAR Root Port ATS Capability Reporting
#[derive(Clone, Copy, Debug)]
#[repr(C, packed)]
pub struct DmarAtsrHeader {
kind: u16,
length: u16,
flags: u8,
_rsv: u8,
segment: u16,
// The device scopes come after.
}
unsafe impl plain::Plain for DmarAtsrHeader {}
pub struct DmarAtsr(Box<[u8]>);
impl DmarAtsr {
pub fn try_new(raw: &[u8]) -> Option<Self> {
if raw.len() < mem::size_of::<DmarAtsrHeader>() {
return None;
}
Some(Self(raw.into()))
}
}
impl Deref for DmarAtsr {
type Target = DmarAtsrHeader;
fn deref(&self) -> &Self::Target {
plain::from_bytes(&self.0[..mem::size_of::<DmarAtsrHeader>()])
.expect("length already checked, and with #[repr(packed)] alignment should be okay")
}
}
impl fmt::Debug for DmarAtsr {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("DmarAtsr")
.field("header", &*self as &DmarAtsrHeader)
// TODO: print out device scopes
.finish()
}
}
/// DMAR Remapping Hardware Static Affinity
#[derive(Clone, Copy, Debug)]
#[repr(C, packed)]
pub struct DmarRhsa {
pub kind: u16,
pub length: u16,
pub _rsv: u32,
pub base: u64,
pub domain: u32,
}
unsafe impl plain::Plain for DmarRhsa {}
impl DmarRhsa {
pub fn try_new(raw: &[u8]) -> Option<Self> {
let bytes = raw.get(..mem::size_of::<DmarRhsa>())?;
let this = plain::from_bytes(bytes)
.expect("length is already checked, and alignment 1 should suffice (#[repr(packed)])");
Some(*this)
}
}
/// DMAR ACPI Name-space Device Declaration
#[derive(Clone, Copy, Debug)]
#[repr(C, packed)]
pub struct DmarAnddHeader {
pub kind: u16,
pub length: u16,
pub _rsv: [u8; 3],
pub acpi_dev: u8,
// The device scopes come after.
}
unsafe impl plain::Plain for DmarAnddHeader {}
pub struct DmarAndd(Box<[u8]>);
impl DmarAndd {
pub fn try_new(raw: &[u8]) -> Option<Self> {
if raw.len() < mem::size_of::<DmarAnddHeader>() {
return None;
}
Some(Self(raw.into()))
}
}
impl Deref for DmarAndd {
type Target = DmarAnddHeader;
fn deref(&self) -> &Self::Target {
plain::from_bytes(&self.0[..mem::size_of::<DmarAnddHeader>()])
.expect("length already checked, and with #[repr(packed)] alignment should be okay")
}
}
impl fmt::Debug for DmarAndd {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("DmarAndd")
.field("header", &*self as &DmarAnddHeader)
// TODO: print out device scopes
.finish()
}
}
/// DMAR SoC Integrated Address Translation Cache Reporting
#[derive(Clone, Copy, Debug)]
#[repr(C, packed)]
pub struct DmarSatcHeader {
pub kind: u16,
pub length: u16,
pub flags: u8,
pub _rsvd: u8,
pub seg_num: u16,
// The device scopes come after.
}
unsafe impl plain::Plain for DmarSatcHeader {}
pub struct DmarSatc(Box<[u8]>);
impl DmarSatc {
pub fn try_new(raw: &[u8]) -> Option<Self> {
if raw.len() < mem::size_of::<DmarSatcHeader>() {
return None;
}
Some(Self(raw.into()))
}
}
impl Deref for DmarSatc {
type Target = DmarSatcHeader;
fn deref(&self) -> &Self::Target {
plain::from_bytes(&self.0[..mem::size_of::<DmarSatcHeader>()])
.expect("length already checked, and with #[repr(packed)] alignment should be okay")
}
}
impl fmt::Debug for DmarSatc {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("DmarSatc")
.field("header", &*self as &DmarSatcHeader)
// TODO: print out device scopes
.finish()
}
}
/// The list of different "Remapping Structure Types".
///
/// Refer to section 8.2 in the VT-d spec (as of revision 3.2).
#[derive(Clone, Copy, Debug, FromPrimitive)]
#[repr(u16)]
pub enum EntryType {
Drhd = 0,
Rmrr = 1,
Atsr = 2,
Rhsa = 3,
Andd = 4,
Satc = 5,
}
/// DMAR Entries
#[derive(Debug)]
pub enum DmarEntry {
Drhd(DmarDrhd),
Rmrr(DmarRmrr),
Atsr(DmarAtsr),
Rhsa(DmarRhsa),
Andd(DmarAndd),
// TODO: "SoC Integrated Address Translation Cache Reporting Structure".
Satc(DmarSatc),
TooShort(EntryType),
Unknown(u16),
}
struct DmarRawIter<'sdt> {
bytes: &'sdt [u8],
}
impl<'sdt> Iterator for DmarRawIter<'sdt> {
type Item = (u16, &'sdt [u8]);
fn next(&mut self) -> Option<Self::Item> {
let type_bytes = match self.bytes.get(..2) {
Some(bytes) => bytes,
None => {
if !self.bytes.is_empty() {
log::warn!("DMAR table ended between two entries.");
}
return None;
}
};
let len_bytes = match self.bytes.get(2..4) {
Some(bytes) => bytes,
None => {
log::warn!("DMAR table ended between two entries.");
return None;
}
};
let type_bytes = <[u8; 2]>::try_from(type_bytes)
.expect("expected a 2-byte slice to be convertible to [u8; 2]");
let len_bytes = <[u8; 2]>::try_from(len_bytes)
.expect("expected a 2-byte slice to be convertible to [u8; 2]");
let ty = u16::from_ne_bytes(type_bytes);
let len = u16::from_ne_bytes(len_bytes);
let len = usize::try_from(len).expect("expected u16 to fit within usize");
if len < 4 || len > self.bytes.len() {
log::warn!("DMAR remapping structure length field was larger than the remaining length of the table.");
return None;
}
let (current, residue) = self.bytes.split_at(len);
self.bytes = residue;
Some((ty, current))
}
}
pub struct DmarIter<'sdt>(DmarRawIter<'sdt>);
impl Iterator for DmarIter<'_> {
type Item = DmarEntry;
fn next(&mut self) -> Option<Self::Item> {
let (raw_type, raw) = self.0.next()?;
// NOTE: If any of these entries look incorrect, we should simply continue the iterator,
// and instead print a warning.
let entry_type = match EntryType::from_u16(raw_type) {
Some(ty) => ty,
None => {
log::warn!(
"Encountered invalid entry type {} (length {})",
raw_type,
raw.len()
);
return Some(DmarEntry::Unknown(raw_type));
}
};
let item_opt = match entry_type {
EntryType::Drhd => DmarDrhd::try_new(raw).map(DmarEntry::Drhd),
EntryType::Rmrr => DmarRmrr::try_new(raw).map(DmarEntry::Rmrr),
EntryType::Atsr => DmarAtsr::try_new(raw).map(DmarEntry::Atsr),
EntryType::Rhsa => DmarRhsa::try_new(raw).map(DmarEntry::Rhsa),
EntryType::Andd => DmarAndd::try_new(raw).map(DmarEntry::Andd),
EntryType::Satc => DmarSatc::try_new(raw).map(DmarEntry::Satc),
};
let item = item_opt.unwrap_or(DmarEntry::TooShort(entry_type));
Some(item)
}
}
@@ -0,0 +1,430 @@
use acpi::{aml::AmlError, Handle, PciAddress, PhysicalMapping};
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
use common::io::{Io, Pio};
use num_traits::PrimInt;
use rustc_hash::FxHashMap;
use std::fmt::LowerHex;
use std::mem::size_of;
use std::ptr::NonNull;
use std::sync::{Arc, Mutex};
use syscall::PAGE_SIZE;
const PAGE_MASK: usize = !(PAGE_SIZE - 1);
const OFFSET_MASK: usize = PAGE_SIZE - 1;
struct MappedPage {
phys_page: usize,
virt_page: usize,
}
impl MappedPage {
fn new(phys_page: usize) -> std::io::Result<Self> {
let virt_page = unsafe {
common::physmap(
phys_page,
PAGE_SIZE,
common::Prot::RW,
common::MemoryType::default(),
)
.map_err(|error| std::io::Error::from_raw_os_error(error.errno()))?
} as usize;
Ok(Self {
phys_page,
virt_page,
})
}
}
impl Drop for MappedPage {
fn drop(&mut self) {
log::trace!("Drop page {:#x}", self.phys_page);
if let Err(e) = unsafe { libredox::call::munmap(self.virt_page as *mut (), PAGE_SIZE) } {
log::error!("munmap (phys): {:?}", e);
}
}
}
#[derive(Default)]
pub struct AmlPageCache {
page_cache: FxHashMap<usize, MappedPage>,
}
impl AmlPageCache {
/// get a virtual address for the given physical page
fn get_page(&mut self, phys_target: usize) -> std::io::Result<&MappedPage> {
let phys_page = phys_target & PAGE_MASK;
if self.page_cache.contains_key(&phys_page) {
log::trace!("re-using cached page {:#x}", phys_page);
Ok(self
.page_cache
.get(&phys_page)
.expect("could not get page after contains=true"))
} else {
let mapped_page = MappedPage::new(phys_page)?;
log::trace!("adding page {:#x} to cache", mapped_page.phys_page);
self.page_cache.insert(phys_page, mapped_page);
Ok(self
.page_cache
.get(&phys_page)
.expect("can't find page that was just inserted"))
}
}
/// The offset into the virtual slice of T that matches the physical target
fn sized_index<T>(phys_target: usize) -> usize {
assert_eq!(
phys_target & !(size_of::<T>() - 1),
phys_target,
"address {} is not aligned",
phys_target
);
(phys_target & OFFSET_MASK) / size_of::<T>()
}
/// Read from the given physical address
fn read_from_phys<T: PrimInt + LowerHex>(&mut self, phys_target: usize) -> std::io::Result<T> {
let mapped_page = self.get_page(phys_target)?;
let page_as_slice = unsafe {
std::slice::from_raw_parts(
mapped_page.virt_page as *const T,
PAGE_SIZE / size_of::<T>(),
)
};
// for debugging only
let _virt_ptr = page_as_slice[Self::sized_index::<T>(phys_target)..].as_ptr() as usize;
let val = page_as_slice[Self::sized_index::<T>(phys_target)];
log::trace!(
"read {:#x}, virt {:#x}, val {:#x}",
phys_target,
_virt_ptr,
val
);
Ok(val)
}
/// Write to the given physical address
fn write_to_phys<T: PrimInt + LowerHex>(
&mut self,
phys_target: usize,
val: T,
) -> std::io::Result<()> {
let mapped_page = self.get_page(phys_target)?;
let page_as_slice = unsafe {
std::slice::from_raw_parts_mut(
mapped_page.virt_page as *mut T,
PAGE_SIZE / size_of::<T>(),
)
};
// for debugging only
let _virt_ptr = page_as_slice[Self::sized_index::<T>(phys_target)..].as_ptr() as usize;
page_as_slice[Self::sized_index::<T>(phys_target)] = val;
log::trace!(
"write {:#x}, virt {:#x}, val {:#x}",
phys_target,
_virt_ptr,
val
);
Ok(())
}
pub fn clear(&mut self) {
log::trace!("Clear page cache");
self.page_cache.clear();
}
}
#[derive(Clone)]
pub struct AmlPhysMemHandler {
page_cache: Arc<Mutex<AmlPageCache>>,
pci_fd: Arc<Option<libredox::Fd>>,
}
// The generic read_/write_ helpers below read from or write to a physical
// address; the generic parameter must be u8, u16, u32 or u64.
impl AmlPhysMemHandler {
pub fn new(pci_fd_opt: Option<&libredox::Fd>, page_cache: Arc<Mutex<AmlPageCache>>) -> Self {
let pci_fd = if let Some(pci_fd) = pci_fd_opt {
Some(libredox::Fd::new(pci_fd.raw()))
} else {
log::error!("pci_fd is not registered");
None
};
Self {
page_cache,
pci_fd: Arc::new(pci_fd),
}
}
fn pci_call_metadata(kind: u8, addr: PciAddress, off: u16) -> [u64; 2] {
// Segment: u16, at 28 bits
// Bus: u8, 8 bits, 256 total, at 20 bits
// Device: u8, 5 bits, 32 total, at 15 bits
// Function: u8, 3 bits, 8 total, at 12 bits
// Offset: u16, 12 bits, 4096 total, at 0 bits
[
kind.into(),
(u64::from(addr.segment()) << 28)
| (u64::from(addr.bus()) << 20)
| (u64::from(addr.device()) << 15)
| (u64::from(addr.function()) << 12)
| u64::from(off),
]
}
fn read_pci(&self, addr: PciAddress, off: u16, value: &mut [u8]) {
let metadata = Self::pci_call_metadata(1, addr, off);
match &*self.pci_fd {
Some(pci_fd) => match pci_fd.call_ro(value, syscall::CallFlags::empty(), &metadata) {
Ok(_) => {}
Err(err) => {
log::error!("read pci {addr}@{off:04X}:{:02X}: {}", value.len(), err);
}
},
None => {
log::error!(
"read pci {addr}@{off:04X}:{:02X}: pci access not available",
value.len()
);
}
}
}
fn write_pci(&self, addr: PciAddress, off: u16, value: &[u8]) {
let metadata = Self::pci_call_metadata(2, addr, off);
match &*self.pci_fd {
Some(pci_fd) => match pci_fd.call_wo(value, syscall::CallFlags::empty(), &metadata) {
Ok(_) => {}
Err(err) => {
log::error!("write pci {addr}@{off:04X}={value:02X?}: {}", err);
}
},
None => {
log::error!("write pci {addr}@{off:04X}={value:02X?}: pci access not available");
}
}
}
}
impl acpi::Handler for AmlPhysMemHandler {
unsafe fn map_physical_region<T>(&self, phys: usize, size: usize) -> PhysicalMapping<Self, T> {
let phys_page = phys & PAGE_MASK;
let offset = phys & OFFSET_MASK;
let pages = (offset + size + PAGE_SIZE - 1) / PAGE_SIZE;
let map_size = pages * PAGE_SIZE;
let virt_page = common::physmap(
phys_page,
map_size,
common::Prot::RW,
common::MemoryType::default(),
)
.expect("failed to map physical region") as usize;
PhysicalMapping {
physical_start: phys,
virtual_start: NonNull::new((virt_page + offset) as *mut T).unwrap(),
region_length: size,
mapped_length: map_size,
handler: self.clone(),
}
}
fn unmap_physical_region<T>(region: &PhysicalMapping<Self, T>) {
let virt_page = region.virtual_start.addr().get() & PAGE_MASK;
unsafe {
libredox::call::munmap(virt_page as *mut (), region.mapped_length)
.expect("failed to unmap physical region")
}
}
fn read_u8(&self, address: usize) -> u8 {
log::trace!("read u8 {:X}", address);
if let Ok(mut page_cache) = self.page_cache.lock() {
if let Ok(value) = page_cache.read_from_phys::<u8>(address) {
return value;
}
}
log::error!("failed to read u8 {:#x}", address);
0
}
fn read_u16(&self, address: usize) -> u16 {
log::trace!("read u16 {:X}", address);
if let Ok(mut page_cache) = self.page_cache.lock() {
if let Ok(value) = page_cache.read_from_phys::<u16>(address) {
return value;
}
}
log::error!("failed to read u16 {:#x}", address);
0
}
fn read_u32(&self, address: usize) -> u32 {
log::trace!("read u32 {:X}", address);
if let Ok(mut page_cache) = self.page_cache.lock() {
if let Ok(value) = page_cache.read_from_phys::<u32>(address) {
return value;
}
}
log::error!("failed to read u32 {:#x}", address);
0
}
fn read_u64(&self, address: usize) -> u64 {
log::trace!("read u64 {:X}", address);
if let Ok(mut page_cache) = self.page_cache.lock() {
if let Ok(value) = page_cache.read_from_phys::<u64>(address) {
return value;
}
}
log::error!("failed to read u64 {:#x}", address);
0
}
fn write_u8(&self, address: usize, value: u8) {
log::trace!("write u8 {:X} = {:X}", address, value);
if let Ok(mut page_cache) = self.page_cache.lock() {
if page_cache.write_to_phys::<u8>(address, value).is_ok() {
return;
}
}
log::error!("failed to write u8 {:#x}", address);
}
fn write_u16(&self, address: usize, value: u16) {
log::trace!("write u16 {:X} = {:X}", address, value);
if let Ok(mut page_cache) = self.page_cache.lock() {
if page_cache.write_to_phys::<u16>(address, value).is_ok() {
return;
}
}
log::error!("failed to write u16 {:#x}", address);
}
fn write_u32(&self, address: usize, value: u32) {
log::trace!("write u32 {:X} = {:X}", address, value);
if let Ok(mut page_cache) = self.page_cache.lock() {
if page_cache.write_to_phys::<u32>(address, value).is_ok() {
return;
}
}
log::error!("failed to write u32 {:#x}", address);
}
fn write_u64(&self, address: usize, value: u64) {
log::trace!("write u64 {:X} = {:X}", address, value);
if let Ok(mut page_cache) = self.page_cache.lock() {
if page_cache.write_to_phys::<u64>(address, value).is_ok() {
return;
}
}
log::error!("failed to write u64 {:#x}", address);
}
// Pio must be enabled via syscall::iopl
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
fn read_io_u8(&self, port: u16) -> u8 {
Pio::<u8>::new(port).read()
}
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
fn read_io_u16(&self, port: u16) -> u16 {
Pio::<u16>::new(port).read()
}
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
fn read_io_u32(&self, port: u16) -> u32 {
Pio::<u32>::new(port).read()
}
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
fn write_io_u8(&self, port: u16, value: u8) {
Pio::<u8>::new(port).write(value)
}
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
fn write_io_u16(&self, port: u16, value: u16) {
Pio::<u16>::new(port).write(value)
}
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
fn write_io_u32(&self, port: u16, value: u32) {
Pio::<u32>::new(port).write(value)
}
#[cfg(not(any(target_arch = "x86", target_arch = "x86_64")))]
fn read_io_u8(&self, port: u16) -> u8 {
log::error!("cannot read u8 from port 0x{port:04X}");
0
}
#[cfg(not(any(target_arch = "x86", target_arch = "x86_64")))]
fn read_io_u16(&self, port: u16) -> u16 {
log::error!("cannot read u16 from port 0x{port:04X}");
0
}
#[cfg(not(any(target_arch = "x86", target_arch = "x86_64")))]
fn read_io_u32(&self, port: u16) -> u32 {
log::error!("cannot read u32 from port 0x{port:04X}");
0
}
#[cfg(not(any(target_arch = "x86", target_arch = "x86_64")))]
fn write_io_u8(&self, port: u16, value: u8) {
log::error!("cannot write 0x{value:02X} to port 0x{port:04X}");
}
#[cfg(not(any(target_arch = "x86", target_arch = "x86_64")))]
fn write_io_u16(&self, port: u16, value: u16) {
log::error!("cannot write 0x{value:04X} to port 0x{port:04X}");
}
#[cfg(not(any(target_arch = "x86", target_arch = "x86_64")))]
fn write_io_u32(&self, port: u16, value: u32) {
log::error!("cannot write 0x{value:08X} to port 0x{port:04X}");
}
fn read_pci_u8(&self, addr: PciAddress, off: u16) -> u8 {
let mut value = [0u8];
self.read_pci(addr, off, &mut value);
value[0]
}
fn read_pci_u16(&self, addr: PciAddress, off: u16) -> u16 {
let mut value = [0u8; 2];
self.read_pci(addr, off, &mut value);
u16::from_le_bytes(value)
}
fn read_pci_u32(&self, addr: PciAddress, off: u16) -> u32 {
let mut value = [0u8; 4];
self.read_pci(addr, off, &mut value);
u32::from_le_bytes(value)
}
fn write_pci_u8(&self, addr: PciAddress, off: u16, value: u8) {
self.write_pci(addr, off, &[value]);
}
fn write_pci_u16(&self, addr: PciAddress, off: u16, value: u16) {
self.write_pci(addr, off, &value.to_le_bytes());
}
fn write_pci_u32(&self, addr: PciAddress, off: u16, value: u32) {
self.write_pci(addr, off, &value.to_le_bytes());
}
fn nanos_since_boot(&self) -> u64 {
let ts = libredox::call::clock_gettime(libredox::flag::CLOCK_MONOTONIC)
.expect("failed to get time");
(ts.tv_sec as u64) * 1_000_000_000 + (ts.tv_nsec as u64)
}
fn stall(&self, microseconds: u64) {
let start = std::time::Instant::now();
while start.elapsed().as_micros() < microseconds.into() {
std::hint::spin_loop();
}
}
fn sleep(&self, milliseconds: u64) {
std::thread::sleep(std::time::Duration::from_millis(milliseconds));
}
fn create_mutex(&self) -> Handle {
log::debug!("TODO: Handler::create_mutex");
Handle(0)
}
fn acquire(&self, _mutex: Handle, _timeout: u16) -> Result<(), AmlError> {
log::debug!("TODO: Handler::acquire");
Ok(())
}
fn release(&self, _mutex: Handle) {
log::debug!("TODO: Handler::release");
}
}
@@ -0,0 +1,256 @@
use std::time::Duration;
use acpi::aml::{
op_region::{OpRegion, RegionHandler, RegionSpace},
AmlError,
};
use common::{
io::{Io, Pio},
timeout::Timeout,
};
use log::*;
const EC_DATA: u16 = 0x62;
const EC_SC: u16 = 0x66;
const OBF: u8 = 1 << 0; // output full / data ready for host <> empty
const IBF: u8 = 1 << 1; // input full / data ready for ec <> empty
const CMD: u8 = 1 << 3; // byte in data reg is command <> data
const BURST: u8 = 1 << 4; // burst mode <> normal mode
const SCI_EVT: u8 = 1 << 5; // sci event pending <> not
const SMI_EVT: u8 = 1 << 6; // smi event pending <> not
const RD_EC: u8 = 0x80;
const WR_EC: u8 = 0x81;
const BE_EC: u8 = 0x82;
const BD_EC: u8 = 0x83;
const QR_EC: u8 = 0x84;
const BURST_ACK: u8 = 0x90;
pub const DEFAULT_EC_TIMEOUT: Duration = Duration::from_millis(10);
#[repr(transparent)]
pub struct ScBits(u8);
#[allow(dead_code)]
impl ScBits {
const fn obf(&self) -> bool {
(self.0 & OBF) != 0
}
const fn ibf(&self) -> bool {
(self.0 & IBF) != 0
}
const fn cmd(&self) -> bool {
(self.0 & CMD) != 0
}
const fn burst(&self) -> bool {
(self.0 & BURST) != 0
}
const fn sci_evt(&self) -> bool {
(self.0 & SCI_EVT) != 0
}
const fn smi_evt(&self) -> bool {
(self.0 & SMI_EVT) != 0
}
}
#[derive(Debug, Clone, Copy)]
pub struct Ec {
sc: u16,
data: u16,
timeout: Duration,
}
impl Ec {
pub fn new() -> Self {
Self {
sc: EC_SC,
data: EC_DATA,
timeout: DEFAULT_EC_TIMEOUT,
}
}
#[allow(dead_code)]
pub fn with_address(sc: u16, data: u16, timeout: Duration) -> Self {
Self { sc, data, timeout }
}
#[inline]
fn read_reg_sc(&self) -> ScBits {
ScBits(Pio::<u8>::new(self.sc).read())
}
#[inline]
fn read_reg_data(&self) -> u8 {
Pio::<u8>::new(self.data).read()
}
#[inline]
fn write_reg_sc(&self, value: u8) {
Pio::<u8>::new(self.sc).write(value);
}
#[inline]
fn write_reg_data(&self, value: u8) {
Pio::<u8>::new(self.data).write(value);
}
#[inline]
fn wait_for_write_ready(&self) -> Option<()> {
let timeout = Timeout::new(self.timeout);
loop {
if !self.read_reg_sc().ibf() {
return Some(());
}
timeout.run().ok()?;
}
}
#[inline]
fn wait_for_read_ready(&self) -> Option<()> {
let timeout = Timeout::new(self.timeout);
loop {
if self.read_reg_sc().obf() {
return Some(());
}
timeout.run().ok()?;
}
}
// https://uefi.org/htmlspecs/ACPI_Spec_6_4_html/12_ACPI_Embedded_Controller_Interface_Specification/embedded-controller-command-set.html
pub fn read(&self, address: u8) -> Option<u8> {
trace!("ec read addr: {:x}", address);
self.wait_for_write_ready()?;
self.write_reg_sc(RD_EC);
self.wait_for_write_ready()?;
self.write_reg_data(address);
self.wait_for_read_ready()?;
let val = self.read_reg_data();
trace!("got: {:x}", val);
Some(val)
}
pub fn write(&self, address: u8, value: u8) -> Option<()> {
trace!("ec write addr: {:x}, with: {:x}", address, value);
self.wait_for_write_ready()?;
self.write_reg_sc(WR_EC);
self.wait_for_write_ready()?;
self.write_reg_data(address);
self.wait_for_write_ready()?;
self.write_reg_data(value);
trace!("done");
Some(())
}
// The EC drops out of burst mode if these timing constraints are not met:
// - First access: within 400 microseconds
// - Subsequent accesses: within 50 microseconds each
// - Total burst time: 1 millisecond
// Accesses should be responded to within 50 microseconds.
#[allow(dead_code)]
fn enable_burst(&self) -> bool {
trace!("ec burst enable");
if self.wait_for_write_ready().is_none() {
return false;
}
self.write_reg_sc(BE_EC);
if self.wait_for_read_ready().is_none() {
return false;
}
let res = self.read_reg_data() == BURST_ACK;
trace!("success: {}", res);
res
}
#[allow(dead_code)]
fn disable_burst(&self) {
trace!("ec burst disable");
self.wait_for_write_ready();
self.write_reg_sc(BD_EC);
trace!("done");
}
    // The OSPM driver sends this command when the SCI_EVT flag in the EC_SC register is set.
#[allow(dead_code)]
fn queue_query(&mut self) -> u8 {
trace!("ec query");
self.wait_for_write_ready();
self.write_reg_sc(QR_EC);
self.wait_for_read_ready();
let val = self.read_reg_data();
trace!("got: {}", val);
val
}
}
impl RegionHandler for Ec {
fn read_u8(
&self,
region: &acpi::aml::op_region::OpRegion,
offset: usize,
) -> Result<u8, acpi::aml::AmlError> {
assert_eq!(region.space, RegionSpace::EmbeddedControl);
self.read(offset as u8).ok_or(AmlError::MutexAcquireTimeout) // TODO proper error type
}
fn write_u8(
&self,
region: &OpRegion,
offset: usize,
value: u8,
) -> Result<(), acpi::aml::AmlError> {
assert_eq!(region.space, RegionSpace::EmbeddedControl);
self.write(offset as u8, value)
.ok_or(AmlError::MutexAcquireTimeout) // TODO proper error type
}
fn read_u16(&self, _region: &OpRegion, _offset: usize) -> Result<u16, acpi::aml::AmlError> {
warn!("Got u16 EC read from AML!");
Err(acpi::aml::AmlError::NoHandlerForRegionAccess(
RegionSpace::EmbeddedControl,
)) // TODO proper error type
}
fn read_u32(&self, _region: &OpRegion, _offset: usize) -> Result<u32, acpi::aml::AmlError> {
warn!("Got u32 EC read from AML!");
Err(acpi::aml::AmlError::NoHandlerForRegionAccess(
RegionSpace::EmbeddedControl,
)) // TODO proper error type
}
fn read_u64(&self, _region: &OpRegion, _offset: usize) -> Result<u64, acpi::aml::AmlError> {
warn!("Got u64 EC read from AML!");
Err(acpi::aml::AmlError::NoHandlerForRegionAccess(
RegionSpace::EmbeddedControl,
)) // TODO proper error type
}
fn write_u16(
&self,
_region: &OpRegion,
_offset: usize,
_value: u16,
) -> Result<(), acpi::aml::AmlError> {
warn!("Got u16 EC write from AML!");
Err(acpi::aml::AmlError::NoHandlerForRegionAccess(
RegionSpace::EmbeddedControl,
)) // TODO proper error type
}
fn write_u32(
&self,
_region: &OpRegion,
_offset: usize,
_value: u32,
) -> Result<(), acpi::aml::AmlError> {
warn!("Got u32 EC write from AML!");
Err(acpi::aml::AmlError::NoHandlerForRegionAccess(
RegionSpace::EmbeddedControl,
)) // TODO proper error type
}
fn write_u64(
&self,
_region: &OpRegion,
_offset: usize,
_value: u64,
) -> Result<(), acpi::aml::AmlError> {
warn!("Got u64 EC write from AML!");
Err(acpi::aml::AmlError::NoHandlerForRegionAccess(
RegionSpace::EmbeddedControl,
)) // TODO proper error type
}
}
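The command flow in `Ec::read` above (wait for IBF clear, write `RD_EC`, wait again, write the address, wait for OBF, read the data) can be sketched with a hypothetical in-memory mock of the status handshake. None of these mock names exist in the driver; only the `RD_EC` opcode value (0x80) comes from the ACPI EC command set.

```rust
// Hypothetical mock of the EC status handshake driven by `Ec::read`:
// IBF (input buffer full) must be clear before each host write,
// OBF (output buffer full) must be set before the host reads data.
const IBF: u8 = 1 << 1;
const OBF: u8 = 1 << 0;
const RD_EC: u8 = 0x80; // read command, per the ACPI EC command set

struct MockEc {
    status: u8,
    ram: [u8; 256],
    data_out: u8,
}

impl MockEc {
    fn write_cmd(&mut self, cmd: u8) {
        assert_eq!(self.status & IBF, 0); // wait_for_write_ready
        assert_eq!(cmd, RD_EC);
    }
    fn write_data(&mut self, addr: u8) {
        assert_eq!(self.status & IBF, 0); // wait_for_write_ready
        self.data_out = self.ram[addr as usize];
        self.status |= OBF; // the EC has produced a byte
    }
    fn read_data(&mut self) -> u8 {
        assert_ne!(self.status & OBF, 0); // wait_for_read_ready
        self.status &= !OBF;
        self.data_out
    }
}

fn main() {
    let mut ec = MockEc { status: 0, ram: [0; 256], data_out: 0 };
    ec.ram[0x42] = 0xA5;
    ec.write_cmd(RD_EC);
    ec.write_data(0x42);
    assert_eq!(ec.read_data(), 0xA5);
    println!("ok");
}
```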
@@ -0,0 +1,143 @@
use std::convert::TryFrom;
use std::fs::File;
use std::mem;
use std::ops::ControlFlow;
use std::os::unix::io::AsRawFd;
use std::sync::Arc;
use ::acpi::aml::op_region::{RegionHandler, RegionSpace};
use event::{EventFlags, RawEventQueue};
use redox_scheme::{scheme::register_sync_scheme, Socket};
use scheme_utils::Blocking;
mod acpi;
mod aml_physmem;
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
mod ec;
mod scheme;
fn daemon(daemon: daemon::Daemon) -> ! {
common::setup_logging(
"misc",
"acpi",
"acpid",
common::output_level(),
common::file_level(),
);
log::info!("acpid start");
let rxsdt_raw_data: Arc<[u8]> = std::fs::read("/scheme/kernel.acpi/rxsdt")
.expect("acpid: failed to read `/scheme/kernel.acpi/rxsdt`")
.into();
if rxsdt_raw_data.is_empty() {
log::info!("System doesn't use ACPI");
daemon.ready();
std::process::exit(0);
}
let sdt = self::acpi::Sdt::new(rxsdt_raw_data).expect("acpid: failed to parse [RX]SDT");
let mut thirty_two_bit;
let mut sixty_four_bit;
let physaddrs_iter = match &sdt.signature {
b"RSDT" => {
thirty_two_bit = sdt
.data()
.chunks(mem::size_of::<u32>())
// TODO: With const generics, the compiler has some way of doing this for static sizes.
.map(|chunk| <[u8; mem::size_of::<u32>()]>::try_from(chunk).unwrap())
.map(|chunk| u32::from_le_bytes(chunk))
.map(u64::from);
&mut thirty_two_bit as &mut dyn Iterator<Item = u64>
}
b"XSDT" => {
sixty_four_bit = sdt
.data()
.chunks(mem::size_of::<u64>())
.map(|chunk| <[u8; mem::size_of::<u64>()]>::try_from(chunk).unwrap())
.map(|chunk| u64::from_le_bytes(chunk));
&mut sixty_four_bit as &mut dyn Iterator<Item = u64>
}
_ => panic!("acpid: expected [RX]SDT from kernel to be either of those"),
};
let region_handlers: Vec<(RegionSpace, Box<dyn RegionHandler + 'static>)> = vec![
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
(RegionSpace::EmbeddedControl, Box::new(ec::Ec::new())),
];
let acpi_context = self::acpi::AcpiContext::init(physaddrs_iter, region_handlers);
// TODO: I/O permission bitmap?
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
common::acquire_port_io_rights().expect("acpid: failed to set I/O privilege level to Ring 3");
let shutdown_pipe = File::open("/scheme/kernel.acpi/kstop")
.expect("acpid: failed to open `/scheme/kernel.acpi/kstop`");
let mut event_queue = RawEventQueue::new().expect("acpid: failed to create event queue");
let socket = Socket::nonblock().expect("acpid: failed to create disk scheme");
let mut scheme = self::scheme::AcpiScheme::new(&acpi_context, &socket);
let mut handler = Blocking::new(&socket, 16);
event_queue
.subscribe(shutdown_pipe.as_raw_fd() as usize, 0, EventFlags::READ)
.expect("acpid: failed to register shutdown pipe for event queue");
event_queue
.subscribe(socket.inner().raw(), 1, EventFlags::READ)
.expect("acpid: failed to register scheme socket for event queue");
register_sync_scheme(&socket, "acpi", &mut scheme)
.expect("acpid: failed to register acpi scheme to namespace");
daemon.ready();
libredox::call::setrens(0, 0).expect("acpid: failed to enter null namespace");
let mut mounted = true;
while mounted {
let Some(event) = event_queue
.next()
.transpose()
.expect("acpid: failed to read event file")
else {
break;
};
if event.fd == socket.inner().raw() {
loop {
match handler
.process_requests_nonblocking(&mut scheme)
.expect("acpid: failed to process requests")
{
ControlFlow::Continue(()) => {}
ControlFlow::Break(()) => break,
}
}
} else if event.fd == shutdown_pipe.as_raw_fd() as usize {
log::info!("Received shutdown request from kernel.");
mounted = false;
} else {
log::debug!("Received request to unknown fd: {}", event.fd);
continue;
}
}
drop(shutdown_pipe);
drop(event_queue);
acpi_context.set_global_s_state(5);
unreachable!("System should have shut down before this is entered");
}
fn main() {
common::init();
daemon::Daemon::new(daemon);
}
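The RSDT/XSDT branch in `daemon` above differs only in pointer width: an RSDT packs 32-bit physical addresses while an XSDT packs 64-bit ones, and both are widened to `u64` before iteration. A standalone sketch of that decoding (the function names are illustrative, not acpid APIs):

```rust
use std::convert::TryInto;

// Decode RSDT payload: little-endian 32-bit physical addresses, widened to u64.
fn decode_rsdt(data: &[u8]) -> Vec<u64> {
    data.chunks(4)
        .map(|c| u32::from_le_bytes(c.try_into().unwrap()) as u64)
        .collect()
}

// Decode XSDT payload: little-endian 64-bit physical addresses.
fn decode_xsdt(data: &[u8]) -> Vec<u64> {
    data.chunks(8)
        .map(|c| u64::from_le_bytes(c.try_into().unwrap()))
        .collect()
}

fn main() {
    let rsdt = [0x78, 0x56, 0x34, 0x12, 0x21, 0x43, 0x65, 0x07];
    assert_eq!(decode_rsdt(&rsdt), vec![0x1234_5678, 0x0765_4321]);
    let xsdt = [0x78, 0x56, 0x34, 0x12, 0, 0, 0, 0];
    assert_eq!(decode_xsdt(&xsdt), vec![0x1234_5678]);
    println!("ok");
}
```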
@@ -0,0 +1,485 @@
use acpi::aml::namespace::AmlName;
use amlserde::aml_serde_name::to_aml_format;
use amlserde::AmlSerdeValue;
use core::str;
use libredox::Fd;
use parking_lot::RwLockReadGuard;
use redox_scheme::scheme::SchemeSync;
use redox_scheme::{CallerCtx, OpenResult, SendFdRequest, Socket};
use ron::de::SpannedError;
use scheme_utils::HandleMap;
use std::convert::{TryFrom, TryInto};
use std::str::FromStr;
use syscall::dirent::{DirEntry, DirentBuf, DirentKind};
use syscall::schemev2::NewFdFlags;
use syscall::FobtainFdFlags;
use syscall::data::Stat;
use syscall::error::{Error, Result};
use syscall::error::{EACCES, EBADF, EBADFD, EINVAL, EIO, EISDIR, ENOENT, ENOTDIR};
use syscall::flag::{MODE_DIR, MODE_FILE};
use syscall::flag::{O_ACCMODE, O_DIRECTORY, O_RDONLY, O_STAT, O_SYMLINK};
use syscall::{EOVERFLOW, EPERM};
use crate::acpi::{AcpiContext, AmlSymbols, SdtSignature};
pub struct AcpiScheme<'acpi, 'sock> {
ctx: &'acpi AcpiContext,
handles: HandleMap<Handle<'acpi>>,
pci_fd: Option<Fd>,
socket: &'sock Socket,
}
struct Handle<'a> {
kind: HandleKind<'a>,
stat: bool,
allowed_to_eval: bool,
}
enum HandleKind<'a> {
TopLevel,
Tables,
Table(SdtSignature),
Symbols(RwLockReadGuard<'a, AmlSymbols>),
Symbol { name: String, description: String },
SchemeRoot,
RegisterPci,
}
impl HandleKind<'_> {
fn is_dir(&self) -> bool {
match self {
Self::TopLevel => true,
Self::Tables => true,
Self::Table(_) => false,
Self::Symbols(_) => true,
Self::Symbol { .. } => false,
Self::SchemeRoot => false,
Self::RegisterPci => false,
}
}
fn len(&self, acpi_ctx: &AcpiContext) -> Result<usize> {
Ok(match self {
// Files
Self::Table(signature) => acpi_ctx
.sdt_from_signature(signature)
.ok_or(Error::new(EBADFD))?
.length(),
Self::Symbol { description, .. } => description.len(),
// Directories
Self::TopLevel | Self::Symbols(_) | Self::Tables => 0,
Self::SchemeRoot | Self::RegisterPci => return Err(Error::new(EBADF)),
})
}
}
impl<'acpi, 'sock> AcpiScheme<'acpi, 'sock> {
pub fn new(ctx: &'acpi AcpiContext, socket: &'sock Socket) -> Self {
Self {
ctx,
handles: HandleMap::new(),
pci_fd: None,
socket,
}
}
}
fn parse_hex_digit(hex: u8) -> Option<u8> {
let hex = hex.to_ascii_lowercase();
if hex >= b'a' && hex <= b'f' {
Some(hex - b'a' + 10)
} else if hex >= b'0' && hex <= b'9' {
Some(hex - b'0')
} else {
None
}
}
fn parse_hex_2digit(hex: &[u8]) -> Option<u8> {
parse_hex_digit(hex[0])
.and_then(|most_significant| Some((most_significant << 4) | parse_hex_digit(hex[1])?))
}
fn parse_oem_id(hex: [u8; 12]) -> Option<[u8; 6]> {
Some([
parse_hex_2digit(&hex[0..2])?,
parse_hex_2digit(&hex[2..4])?,
parse_hex_2digit(&hex[4..6])?,
parse_hex_2digit(&hex[6..8])?,
parse_hex_2digit(&hex[8..10])?,
parse_hex_2digit(&hex[10..12])?,
])
}
fn parse_oem_table_id(hex: [u8; 16]) -> Option<[u8; 8]> {
Some([
parse_hex_2digit(&hex[0..2])?,
parse_hex_2digit(&hex[2..4])?,
parse_hex_2digit(&hex[4..6])?,
parse_hex_2digit(&hex[6..8])?,
parse_hex_2digit(&hex[8..10])?,
parse_hex_2digit(&hex[10..12])?,
parse_hex_2digit(&hex[12..14])?,
parse_hex_2digit(&hex[14..16])?,
])
}
fn parse_table(table: &[u8]) -> Option<SdtSignature> {
let signature_part = table.get(..4)?;
let first_hyphen = table.get(4)?;
let oem_id_part = table.get(5..17)?;
let second_hyphen = table.get(17)?;
let oem_table_part = table.get(18..34)?;
if *first_hyphen != b'-' {
return None;
}
if *second_hyphen != b'-' {
return None;
}
if table.len() > 34 {
return None;
}
Some(SdtSignature {
signature: <[u8; 4]>::try_from(signature_part)
.expect("expected 4-byte slice to be convertible into [u8; 4]"),
oem_id: {
let hex = <[u8; 12]>::try_from(oem_id_part)
.expect("expected 12-byte slice to be convertible into [u8; 12]");
parse_oem_id(hex)?
},
oem_table_id: {
let hex = <[u8; 16]>::try_from(oem_table_part)
.expect("expected 16-byte slice to be convertible into [u8; 16]");
parse_oem_table_id(hex)?
},
})
}
impl SchemeSync for AcpiScheme<'_, '_> {
fn scheme_root(&mut self) -> Result<usize> {
Ok(self.handles.insert(Handle {
stat: false,
kind: HandleKind::SchemeRoot,
allowed_to_eval: false,
}))
}
fn openat(
&mut self,
dirfd: usize,
path: &str,
flags: usize,
_fcntl_flags: u32,
ctx: &CallerCtx,
) -> Result<OpenResult> {
let handle = self.handles.get(dirfd)?;
let path = path.trim_start_matches('/');
let flag_stat = flags & O_STAT == O_STAT;
let flag_dir = flags & O_DIRECTORY == O_DIRECTORY;
let kind = match handle.kind {
HandleKind::SchemeRoot => {
// TODO: arrayvec
let components = {
let mut v = arrayvec::ArrayVec::<&str, 3>::new();
let it = path.split('/');
for component in it.take(3) {
v.push(component);
}
v
};
match &*components {
[""] => HandleKind::TopLevel,
["register_pci"] => HandleKind::RegisterPci,
["tables"] => HandleKind::Tables,
["tables", table] => {
let signature = parse_table(table.as_bytes()).ok_or(Error::new(ENOENT))?;
HandleKind::Table(signature)
}
["symbols"] => {
if let Ok(aml_symbols) = self.ctx.aml_symbols(self.pci_fd.as_ref()) {
HandleKind::Symbols(aml_symbols)
} else {
return Err(Error::new(EIO));
}
}
["symbols", symbol] => {
if let Some(description) = self.ctx.aml_lookup(symbol) {
HandleKind::Symbol {
name: (*symbol).to_owned(),
description,
}
} else {
return Err(Error::new(ENOENT));
}
}
_ => return Err(Error::new(ENOENT)),
}
}
HandleKind::Symbols(ref aml_symbols) => {
if let Some(description) = aml_symbols.lookup(path) {
HandleKind::Symbol {
name: (*path).to_owned(),
description,
}
} else {
return Err(Error::new(ENOENT));
}
}
_ => return Err(Error::new(EACCES)),
};
if kind.is_dir() && !flag_dir && !flag_stat {
return Err(Error::new(EISDIR));
} else if !kind.is_dir() && flag_dir && !flag_stat {
return Err(Error::new(ENOTDIR));
}
let allowed_to_eval = if flags & O_ACCMODE == O_RDONLY || flag_stat {
false
} else if ctx.uid == 0 {
true
} else {
return Err(Error::new(EINVAL));
};
if flags & O_SYMLINK == O_SYMLINK && !flag_stat {
return Err(Error::new(EINVAL));
}
let fd = self.handles.insert(Handle {
stat: flag_stat,
kind,
allowed_to_eval,
});
Ok(OpenResult::ThisScheme {
number: fd,
flags: NewFdFlags::POSITIONED,
})
}
fn fstat(&mut self, id: usize, stat: &mut Stat, _ctx: &CallerCtx) -> Result<()> {
let handle = self.handles.get(id)?;
stat.st_size = handle
.kind
.len(self.ctx)?
.try_into()
            .unwrap_or(u64::MAX);
if handle.kind.is_dir() {
stat.st_mode = MODE_DIR;
} else {
stat.st_mode = MODE_FILE;
}
Ok(())
}
fn read(
&mut self,
id: usize,
buf: &mut [u8],
offset: u64,
_fcntl: u32,
_ctx: &CallerCtx,
) -> Result<usize> {
let offset: usize = offset.try_into().map_err(|_| Error::new(EINVAL))?;
let handle = self.handles.get_mut(id)?;
if handle.stat {
return Err(Error::new(EBADF));
}
let src_buf = match &handle.kind {
HandleKind::Table(ref signature) => self
.ctx
.sdt_from_signature(signature)
.ok_or(Error::new(EBADFD))?
.as_slice(),
HandleKind::Symbol { description, .. } => description.as_bytes(),
_ => return Err(Error::new(EINVAL)),
};
let offset = std::cmp::min(src_buf.len(), offset);
let src_buf = &src_buf[offset..];
let to_copy = std::cmp::min(src_buf.len(), buf.len());
buf[..to_copy].copy_from_slice(&src_buf[..to_copy]);
Ok(to_copy)
}
fn getdents<'buf>(
&mut self,
id: usize,
mut buf: DirentBuf<&'buf mut [u8]>,
opaque_offset: u64,
) -> Result<DirentBuf<&'buf mut [u8]>> {
let handle = self.handles.get_mut(id)?;
match &handle.kind {
HandleKind::TopLevel => {
const TOPLEVEL_ENTRIES: &[&str] = &["tables", "symbols"];
for (idx, name) in TOPLEVEL_ENTRIES
.iter()
.enumerate()
.skip(opaque_offset as usize)
{
buf.entry(DirEntry {
inode: 0,
next_opaque_id: idx as u64 + 1,
name,
kind: DirentKind::Directory,
})?;
}
}
HandleKind::Symbols(aml_symbols) => {
for (idx, (symbol_name, _value)) in aml_symbols
.symbols_cache()
.iter()
.enumerate()
.skip(opaque_offset as usize)
{
buf.entry(DirEntry {
inode: 0,
next_opaque_id: idx as u64 + 1,
name: symbol_name.as_str(),
kind: DirentKind::Regular,
})?;
}
}
HandleKind::Tables => {
for (idx, table) in self
.ctx
.tables()
.iter()
.enumerate()
.skip(opaque_offset as usize)
{
let utf8_or_eio = |bytes| str::from_utf8(bytes).map_err(|_| Error::new(EIO));
let mut name = String::new();
name.push_str(utf8_or_eio(&table.signature[..])?);
name.push('-');
for byte in table.oem_id.iter() {
std::fmt::write(&mut name, format_args!("{:>02X}", byte)).unwrap();
}
name.push('-');
for byte in table.oem_table_id.iter() {
std::fmt::write(&mut name, format_args!("{:>02X}", byte)).unwrap();
}
buf.entry(DirEntry {
inode: 0,
next_opaque_id: idx as u64 + 1,
name: &name,
kind: DirentKind::Regular,
})?;
}
}
_ => return Err(Error::new(EIO)),
}
Ok(buf)
}
fn call(
&mut self,
id: usize,
payload: &mut [u8],
_metadata: &[u64],
_ctx: &CallerCtx,
) -> Result<usize> {
let handle = self.handles.get_mut(id)?;
if !handle.allowed_to_eval {
return Err(Error::new(EPERM));
}
let Ok(args): Result<Vec<AmlSerdeValue>, SpannedError> = ron::de::from_bytes(payload)
else {
return Err(Error::new(EINVAL));
};
let HandleKind::Symbol { name, .. } = &handle.kind else {
return Err(Error::new(EBADF));
};
let Ok(aml_name) = AmlName::from_str(&to_aml_format(name)) else {
log::error!("Failed to convert symbol name: \"{name}\" to aml name!");
return Err(Error::new(EBADF));
};
let Ok(result) = self.ctx.aml_eval(aml_name, args) else {
return Err(Error::new(EINVAL));
};
let Ok(serialized_result) = ron::ser::to_string(&result) else {
log::error!("Failed to serialize aml result!");
return Err(Error::new(EINVAL));
};
let byte_result = serialized_result.as_bytes();
let result_len = byte_result.len();
if result_len > payload.len() {
return Err(Error::new(EOVERFLOW));
}
payload[..result_len].copy_from_slice(byte_result);
Ok(result_len)
}
fn on_sendfd(&mut self, sendfd_request: &SendFdRequest) -> Result<usize> {
let id = sendfd_request.id();
let num_fds = sendfd_request.num_fds();
let handle = self.handles.get(id)?;
if !matches!(handle.kind, HandleKind::RegisterPci) {
return Err(Error::new(EACCES));
}
if num_fds == 0 {
return Ok(0);
}
if num_fds > 1 {
return Err(Error::new(EINVAL));
}
let mut new_fd = usize::MAX;
        sendfd_request.obtain_fd(
            &self.socket,
            FobtainFdFlags::UPPER_TBL,
            std::slice::from_mut(&mut new_fd),
        )?;
let new_fd = libredox::Fd::new(new_fd);
if self.pci_fd.is_some() {
return Err(Error::new(EINVAL));
} else {
self.pci_fd = Some(new_fd);
}
Ok(num_fds)
}
fn on_close(&mut self, id: usize) {
self.handles.remove(id);
}
}
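The `tables/<name>` paths accepted by `parse_table` above use a fixed 34-byte layout: a 4-byte signature, a hyphen, 12 hex digits for the OEM ID, another hyphen, and 16 hex digits for the OEM table ID. A standalone sketch of the same two-hex-digit decoding (helper names are illustrative re-implementations, not the scheme's functions):

```rust
// Decode one hex digit, case-insensitively.
fn hex(b: u8) -> Option<u8> {
    match b.to_ascii_lowercase() {
        c @ b'0'..=b'9' => Some(c - b'0'),
        c @ b'a'..=b'f' => Some(c - b'a' + 10),
        _ => None,
    }
}

// Decode a two-digit hex pair into one byte, as parse_hex_2digit does.
fn hex2(pair: &[u8]) -> Option<u8> {
    Some((hex(pair[0])? << 4) | hex(pair[1])?)
}

fn main() {
    // "FACP" + "-" + 12 hex digits + "-" + 16 hex digits = 34 bytes
    let name = b"FACP-424F43485320-4258504320202020";
    assert_eq!(name.len(), 34);
    let oem_id: Vec<u8> = name[5..17].chunks(2).map(|p| hex2(p).unwrap()).collect();
    assert_eq!(&oem_id, b"BOCHS ");
    println!("ok");
}
```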
@@ -0,0 +1,14 @@
[package]
name = "amlserde"
description = "Library for serializing AML symbols"
version = "0.0.1"
authors = ["Ron Williams"]
repository = "https://gitlab.redox-os.org/redox-os/drivers"
categories = ["hardware-support"]
license = "MIT/Apache-2.0"
edition = "2021"
[dependencies]
acpi.workspace = true
serde.workspace = true
toml.workspace = true
@@ -0,0 +1,484 @@
use acpi::{
aml::{
namespace::AmlName,
object::{
FieldAccessType, FieldFlags, FieldUnit, FieldUnitKind, FieldUpdateRule, MethodFlags,
Object, ReferenceKind, WrappedObject,
},
op_region::{OpRegion, RegionSpace},
Interpreter,
},
Handle, Handler,
};
use serde::{Deserialize, Serialize};
use std::{
ops::{Deref, Shl},
str::FromStr,
sync::{
atomic::{AtomicU64, Ordering},
Arc,
},
};
#[derive(Debug, Serialize, Deserialize)]
pub struct AmlSerde {
pub name: String,
pub value: AmlSerdeValue,
}
#[derive(Debug, Serialize, Deserialize)]
pub enum AmlSerdeValue {
Uninitialized,
Integer(u64),
String(String),
OpRegion {
region: AmlSerdeRegionSpace,
offset: u64,
length: u64,
parent_device: String,
},
Field {
kind: AmlSerdeFieldKind,
flags: AmlSerdeFieldFlags,
offset: u64,
length: u64,
},
Device,
Event(u64),
Method {
arg_count: usize,
serialize: bool,
sync_level: u8,
},
Buffer(Vec<u8>),
BufferField {
offset: u64,
length: u64,
data: Box<AmlSerdeValue>,
},
Processor {
id: u8,
pblk_address: u32,
pblk_len: u8,
},
Mutex {
mutex: u32,
sync_level: u8,
},
Reference {
kind: AmlSerdeReferenceKind,
inner: Box<AmlSerdeValue>,
},
Package {
contents: Vec<AmlSerdeValue>,
},
PowerResource {
system_level: u8,
resource_order: u16,
},
RawDataBuffer,
ThermalZone,
Debug,
}
#[derive(Debug, Serialize, Deserialize)]
pub enum AmlSerdeRegionSpace {
SystemMemory,
SystemIo,
PciConfig,
EmbeddedControl,
SMBus,
SystemCmos,
PciBarTarget,
IPMI,
GeneralPurposeIo,
GenericSerialBus,
Pcc,
OemDefined(u8),
}
#[derive(Debug, Serialize, Deserialize)]
pub enum AmlSerdeFieldKind {
Normal {
region: Box<AmlSerdeValue>,
},
Bank {
region: Box<AmlSerdeValue>,
bank: Box<AmlSerdeValue>,
bank_value: u64,
},
Index {
index: Box<AmlSerdeValue>,
data: Box<AmlSerdeValue>,
},
}
#[derive(Debug, Serialize, Deserialize)]
pub struct AmlSerdeFieldFlags {
pub access_type: AmlSerdeFieldAccessType,
pub lock_rule: bool, // bit 4
pub update_rule: AmlSerdeFieldUpdateRule,
}
impl From<AmlSerdeFieldFlags> for u8 {
    fn from(flags: AmlSerdeFieldFlags) -> u8 {
        // bits 0..4
        (flags.access_type as u8) +
        // bit 4
        (flags.lock_rule as u8).shl(4) +
        // bits 5..7
        (flags.update_rule as u8).shl(5)
    }
}
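The bit layout produced by the flags conversion above (access type in bits 0..4, lock rule in bit 4, update rule in bits 5..7) can be checked with a tiny standalone sketch; the `pack` helper is hypothetical and only mirrors the arithmetic:

```rust
// Hypothetical mirror of the AmlSerdeFieldFlags -> u8 packing:
// access type in bits 0..4, lock rule in bit 4, update rule in bits 5..7.
fn pack(access_type: u8, lock_rule: bool, update_rule: u8) -> u8 {
    access_type + ((lock_rule as u8) << 4) + (update_rule << 5)
}

fn main() {
    // DWord access (3), locked, WriteAsOnes (1) -> 0b0011_0011
    assert_eq!(pack(3, true, 1), 0b0011_0011);
    // Any access (0), unlocked, Preserve (0) -> 0
    assert_eq!(pack(0, false, 0), 0);
    println!("ok");
}
```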
#[derive(Debug, Serialize, Deserialize)]
#[repr(u8)]
pub enum AmlSerdeFieldAccessType {
Any = 0,
Byte = 1,
Word = 2,
DWord = 3,
QWord = 4,
Buffer = 5,
}
#[derive(Debug, Serialize, Deserialize)]
#[repr(u8)]
pub enum AmlSerdeFieldUpdateRule {
Preserve = 0,
WriteAsOnes = 1,
WriteAsZeros = 2,
}
#[derive(Debug, Serialize, Deserialize)]
pub enum AmlSerdeReferenceKind {
RefOf,
Local,
Arg,
Index,
Named,
Unresolved,
}
impl AmlSerde {
pub fn default() -> Self {
Self {
name: "name".to_owned(),
value: AmlSerdeValue::String(String::default()),
}
}
    pub fn from_aml<H: Handler>(aml_context: &Interpreter<H>, aml_name: &AmlName) -> Option<Self> {
        //TODO: why does namespace.get not take a reference to aml_name
        let aml_value = aml_context.namespace.lock().get(aml_name.clone()).ok()?;
        let value = AmlSerdeValue::from_aml_value(aml_value.deref())?;
        Some(AmlSerde {
            name: aml_name.to_string(),
            value,
        })
    }
}
impl AmlSerdeValue {
pub fn default() -> Self {
AmlSerdeValue::String("".to_owned())
}
pub fn from_aml_value(aml_value: &Object) -> Option<Self> {
Some(match aml_value {
Object::Uninitialized => AmlSerdeValue::Uninitialized,
Object::Integer(n) => AmlSerdeValue::Integer(n.to_owned()),
Object::String(s) => AmlSerdeValue::String(s.to_owned()),
Object::OpRegion(region) => AmlSerdeValue::OpRegion {
region: match region.space {
RegionSpace::SystemMemory => AmlSerdeRegionSpace::SystemMemory,
RegionSpace::SystemIO => AmlSerdeRegionSpace::SystemIo,
RegionSpace::PciConfig => AmlSerdeRegionSpace::PciConfig,
RegionSpace::EmbeddedControl => AmlSerdeRegionSpace::EmbeddedControl,
RegionSpace::SmBus => AmlSerdeRegionSpace::SMBus,
RegionSpace::SystemCmos => AmlSerdeRegionSpace::SystemCmos,
RegionSpace::PciBarTarget => AmlSerdeRegionSpace::PciBarTarget,
RegionSpace::Ipmi => AmlSerdeRegionSpace::IPMI,
RegionSpace::GeneralPurposeIo => AmlSerdeRegionSpace::GeneralPurposeIo,
RegionSpace::GenericSerialBus => AmlSerdeRegionSpace::GenericSerialBus,
RegionSpace::Pcc => AmlSerdeRegionSpace::Pcc,
RegionSpace::Oem(n) => AmlSerdeRegionSpace::OemDefined(n.to_owned()),
},
offset: region.base,
length: region.length,
parent_device: region.parent_device_path.to_string(),
},
Object::FieldUnit(field) => AmlSerdeValue::Field {
kind: match &field.kind {
FieldUnitKind::Normal { region } => AmlSerdeFieldKind::Normal {
region: AmlSerdeValue::from_aml_value(region.deref()).map(Box::new)?,
},
FieldUnitKind::Bank {
region,
bank,
bank_value,
} => AmlSerdeFieldKind::Bank {
region: AmlSerdeValue::from_aml_value(region.deref()).map(Box::new)?,
bank: AmlSerdeValue::from_aml_value(bank.deref()).map(Box::new)?,
bank_value: bank_value.to_owned(),
},
FieldUnitKind::Index { index, data } => AmlSerdeFieldKind::Index {
index: AmlSerdeValue::from_aml_value(index.deref()).map(Box::new)?,
data: AmlSerdeValue::from_aml_value(data.deref()).map(Box::new)?,
},
},
flags: AmlSerdeFieldFlags {
access_type: match field.flags.access_type() {
Ok(FieldAccessType::Any) => AmlSerdeFieldAccessType::Any,
Ok(FieldAccessType::Byte) => AmlSerdeFieldAccessType::Byte,
Ok(FieldAccessType::Word) => AmlSerdeFieldAccessType::Word,
Ok(FieldAccessType::DWord) => AmlSerdeFieldAccessType::DWord,
Ok(FieldAccessType::QWord) => AmlSerdeFieldAccessType::QWord,
Ok(FieldAccessType::Buffer) => AmlSerdeFieldAccessType::Buffer,
_ => return None,
},
lock_rule: field.flags.lock_rule(),
update_rule: match field.flags.update_rule() {
FieldUpdateRule::Preserve => AmlSerdeFieldUpdateRule::Preserve,
FieldUpdateRule::WriteAsOnes => AmlSerdeFieldUpdateRule::WriteAsOnes,
FieldUpdateRule::WriteAsZeros => AmlSerdeFieldUpdateRule::WriteAsZeros,
},
},
offset: field.bit_index as u64,
length: field.bit_length as u64,
},
Object::Device => AmlSerdeValue::Device,
Object::Event(event) => AmlSerdeValue::Event(event.load(Ordering::Relaxed)),
Object::Method { flags, code: _ } => AmlSerdeValue::Method {
arg_count: flags.arg_count(),
serialize: flags.serialize(),
sync_level: flags.sync_level(),
},
//TODO: distinguish from Method?
Object::NativeMethod { f: _, flags } => AmlSerdeValue::Method {
arg_count: flags.arg_count(),
serialize: flags.serialize(),
sync_level: flags.sync_level(),
},
Object::Buffer(buffer_data) => AmlSerdeValue::Buffer(buffer_data.to_owned()),
Object::BufferField {
buffer,
offset,
length,
} => AmlSerdeValue::BufferField {
offset: offset.to_owned() as u64,
length: length.to_owned() as u64,
data: AmlSerdeValue::from_aml_value(buffer.deref()).map(Box::new)?,
},
Object::Processor {
proc_id,
pblk_address,
pblk_length,
} => AmlSerdeValue::Processor {
id: proc_id.to_owned(),
pblk_address: pblk_address.to_owned(),
pblk_len: pblk_length.to_owned(),
},
Object::Mutex { mutex, sync_level } => AmlSerdeValue::Mutex {
mutex: mutex.0,
sync_level: sync_level.to_owned(),
},
Object::Reference { kind, inner } => AmlSerdeValue::Reference {
kind: match kind {
ReferenceKind::RefOf => AmlSerdeReferenceKind::RefOf,
ReferenceKind::Local => AmlSerdeReferenceKind::Local,
ReferenceKind::Arg => AmlSerdeReferenceKind::Arg,
ReferenceKind::Index => AmlSerdeReferenceKind::Index,
ReferenceKind::Named => AmlSerdeReferenceKind::Named,
ReferenceKind::Unresolved => AmlSerdeReferenceKind::Unresolved,
},
inner: AmlSerdeValue::from_aml_value(inner.deref()).map(Box::new)?,
},
Object::Package(aml_contents) => AmlSerdeValue::Package {
contents: aml_contents
.iter()
.filter_map(|item| AmlSerdeValue::from_aml_value(item))
.collect(),
},
Object::PowerResource {
system_level,
resource_order,
} => AmlSerdeValue::PowerResource {
system_level: system_level.to_owned(),
resource_order: resource_order.to_owned(),
},
Object::RawDataBuffer => AmlSerdeValue::RawDataBuffer,
Object::ThermalZone => AmlSerdeValue::ThermalZone,
Object::Debug => AmlSerdeValue::Debug,
})
}
pub fn to_aml_object(self) -> Option<Object> {
Some(match self {
AmlSerdeValue::Uninitialized => Object::Uninitialized,
AmlSerdeValue::Integer(n) => Object::Integer(n),
AmlSerdeValue::String(s) => Object::String(s),
AmlSerdeValue::OpRegion {
region,
offset,
length,
parent_device,
} => Object::OpRegion(OpRegion {
space: match region {
AmlSerdeRegionSpace::PciConfig => RegionSpace::PciConfig,
AmlSerdeRegionSpace::EmbeddedControl => RegionSpace::EmbeddedControl,
AmlSerdeRegionSpace::SMBus => RegionSpace::SmBus,
AmlSerdeRegionSpace::SystemCmos => RegionSpace::SystemCmos,
AmlSerdeRegionSpace::PciBarTarget => RegionSpace::PciBarTarget,
AmlSerdeRegionSpace::IPMI => RegionSpace::Ipmi,
AmlSerdeRegionSpace::GeneralPurposeIo => RegionSpace::GeneralPurposeIo,
AmlSerdeRegionSpace::GenericSerialBus => RegionSpace::GenericSerialBus,
AmlSerdeRegionSpace::SystemMemory => RegionSpace::SystemMemory,
AmlSerdeRegionSpace::SystemIo => RegionSpace::SystemIO,
AmlSerdeRegionSpace::Pcc => RegionSpace::Pcc,
AmlSerdeRegionSpace::OemDefined(n) => RegionSpace::Oem(n),
},
base: offset,
length,
                parent_device_path: AmlName::from_str(&parent_device).ok()?, // TODO: error value hidden
}),
AmlSerdeValue::Field {
kind,
flags,
offset,
length,
} => Object::FieldUnit(FieldUnit {
kind: match kind {
AmlSerdeFieldKind::Normal { region } => FieldUnitKind::Normal {
region: region.to_aml_object()?.wrap(),
},
AmlSerdeFieldKind::Bank {
region,
bank,
bank_value,
} => FieldUnitKind::Bank {
region: region.to_aml_object()?.wrap(),
bank: bank.to_aml_object()?.wrap(),
bank_value: bank_value.to_owned(),
},
AmlSerdeFieldKind::Index { index, data } => FieldUnitKind::Index {
index: index.to_aml_object()?.wrap(),
data: data.to_aml_object()?.wrap(),
},
},
flags: FieldFlags(flags.into()),
bit_index: offset as usize,
bit_length: length as usize,
}),
AmlSerdeValue::Device => Object::Device,
AmlSerdeValue::Event(event) => Object::Event(Arc::new(AtomicU64::new(event))),
AmlSerdeValue::Method {
arg_count,
serialize,
sync_level,
} => Object::Method {
code: (return None), //TODO figure out what to do here
//TODO check specs to see if all bit patterns are allowed
flags: MethodFlags(
(arg_count as u8).clamp(0, 7)
+ (serialize as u8).shl(3)
+ sync_level.clamp(0, 15).shl(4),
),
},
//TODO: handle native method?
AmlSerdeValue::Buffer(buffer_data) => Object::Buffer(buffer_data),
AmlSerdeValue::BufferField {
data,
offset,
length,
} => Object::BufferField {
offset: offset as usize,
length: length as usize,
buffer: data.to_aml_object()?.wrap(),
},
AmlSerdeValue::Processor {
id,
pblk_address,
pblk_len,
} => Object::Processor {
proc_id: id,
pblk_address,
pblk_length: pblk_len,
},
            AmlSerdeValue::Mutex { mutex, sync_level } => Object::Mutex {
                mutex: Handle(mutex),
                sync_level,
            },
AmlSerdeValue::Reference { kind, inner } => Object::Reference {
kind: match kind {
AmlSerdeReferenceKind::RefOf => ReferenceKind::RefOf,
AmlSerdeReferenceKind::Local => ReferenceKind::Local,
AmlSerdeReferenceKind::Arg => ReferenceKind::Arg,
AmlSerdeReferenceKind::Index => ReferenceKind::Index,
AmlSerdeReferenceKind::Named => ReferenceKind::Named,
AmlSerdeReferenceKind::Unresolved => ReferenceKind::Unresolved,
},
inner: inner.to_aml_object()?.wrap(),
},
AmlSerdeValue::Package { contents } => Object::Package(
contents
.into_iter()
.map(|item| item.to_aml_object().map(Object::wrap)) // TODO: see if errors should be ignored here
.collect::<Option<Vec<WrappedObject>>>()?,
),
AmlSerdeValue::PowerResource {
system_level,
resource_order,
} => Object::PowerResource {
system_level: system_level.to_owned(),
resource_order: resource_order.to_owned(),
},
AmlSerdeValue::RawDataBuffer => Object::RawDataBuffer,
AmlSerdeValue::ThermalZone => Object::ThermalZone,
AmlSerdeValue::Debug => Object::Debug,
})
}
}
pub mod aml_serde_name {
use acpi::aml::namespace::AmlName;
    /// Add a leading backslash to make the name a valid
    /// AML namespace reference.
    pub fn to_aml_format(pretty_name: &str) -> String {
        format!("\\{}", pretty_name)
    }
    /// Convert a string from AML namespace style to
    /// ACPI symbol style.
    pub fn to_symbol(aml_style_name: &str) -> String {
        // remove the leading backslash
        let mut name = aml_style_name.trim_start_matches('\\').to_owned();
        // remove padding underscores before each path separator
        while let Some(index) = name.find("_.") {
            name.remove(index);
        }
        // remove trailing padding underscores
        while name.ends_with('_') {
            name.pop();
        }
        name.shrink_to_fit();
        name
    }
/// Convert to string and remove
/// trailing underscores from each name segment
pub fn aml_to_symbol(aml_name: &AmlName) -> String {
to_symbol(&aml_name.as_string())
}
}
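The name conversion in `aml_serde_name` above strips the leading backslash, drops the padding underscore before each path separator, and trims trailing padding underscores. A hypothetical standalone re-implementation of that logic, to make the transformation concrete:

```rust
// Illustrative mirror of aml_serde_name::to_symbol: AML names pad each
// 4-character segment with '_', which the symbol form drops.
fn to_symbol(aml_style_name: &str) -> String {
    let mut name = aml_style_name.trim_start_matches('\\').to_owned();
    // remove the padding underscore immediately before each '.'
    while let Some(index) = name.find("_.") {
        name.remove(index);
    }
    // remove trailing padding underscores
    while name.ends_with('_') {
        name.pop();
    }
    name
}

fn main() {
    assert_eq!(to_symbol("\\_SB_.PCI0.LPCB.EC0_.BAT0"), "_SB.PCI0.LPCB.EC0.BAT0");
    assert_eq!(to_symbol("\\GPE_"), "GPE");
    println!("ok");
}
```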
@@ -0,0 +1,21 @@
[package]
name = "ac97d"
description = "AC'97 driver"
version = "0.1.0"
edition = "2021"
[dependencies]
common = { path = "../../common" }
libredox.workspace = true
log.workspace = true
redox_event.workspace = true
redox_syscall.workspace = true
spin.workspace = true
daemon = { path = "../../../daemon" }
pcid = { path = "../../pcid" }
redox-scheme.workspace = true
scheme-utils = { path = "../../../scheme-utils" }
[lints]
workspace = true
@@ -0,0 +1,5 @@
[[drivers]]
name = "AC97 Audio"
class = 0x04
subclass = 0x01
command = ["ac97d"]
@@ -0,0 +1,333 @@
use common::io::Pio;
use redox_scheme::scheme::SchemeSync;
use redox_scheme::CallerCtx;
use redox_scheme::OpenResult;
use scheme_utils::{FpathWriter, HandleMap};
use syscall::error::{Error, Result, EACCES, EBADF, EINVAL, ENOENT};
use syscall::schemev2::NewFdFlags;
use syscall::EWOULDBLOCK;
use common::{
dma::Dma,
io::{Io, Mmio},
};
use spin::Mutex;
const NUM_SUB_BUFFS: usize = 32;
const SUB_BUFF_SIZE: usize = 2048;
enum Handle {
Todo,
SchemeRoot,
}
#[allow(dead_code)]
struct MixerRegs {
/* 0x00 */ reset: Pio<u16>,
/* 0x02 */ master_volume: Pio<u16>,
/* 0x04 */ aux_out_volume: Pio<u16>,
/* 0x06 */ mono_volume: Pio<u16>,
/* 0x08 */ master_tone: Pio<u16>,
/* 0x0A */ pc_beep_volume: Pio<u16>,
/* 0x0C */ phone_volume: Pio<u16>,
/* 0x0E */ mic_volume: Pio<u16>,
/* 0x10 */ line_in_volume: Pio<u16>,
/* 0x12 */ cd_volume: Pio<u16>,
/* 0x14 */ video_volume: Pio<u16>,
/* 0x16 */ aux_in_volume: Pio<u16>,
/* 0x18 */ pcm_out_volume: Pio<u16>,
/* 0x1A */ record_select: Pio<u16>,
/* 0x1C */ record_gain: Pio<u16>,
/* 0x1E */ record_gain_mic: Pio<u16>,
/* 0x20 */ general_purpose: Pio<u16>,
/* 0x22 */ control_3d: Pio<u16>,
/* 0x24 */ audio_int_paging: Pio<u16>,
/* 0x26 */ powerdown: Pio<u16>,
/* 0x28 */ extended_id: Pio<u16>,
/* 0x2A */ extended_ctrl: Pio<u16>,
/* 0x2C */ vra_pcm_front: Pio<u16>,
}
impl MixerRegs {
fn new(bar0: u16) -> Self {
Self {
reset: Pio::new(bar0 + 0x00),
master_volume: Pio::new(bar0 + 0x02),
aux_out_volume: Pio::new(bar0 + 0x04),
mono_volume: Pio::new(bar0 + 0x06),
master_tone: Pio::new(bar0 + 0x08),
pc_beep_volume: Pio::new(bar0 + 0x0A),
phone_volume: Pio::new(bar0 + 0x0C),
mic_volume: Pio::new(bar0 + 0x0E),
line_in_volume: Pio::new(bar0 + 0x10),
cd_volume: Pio::new(bar0 + 0x12),
video_volume: Pio::new(bar0 + 0x14),
aux_in_volume: Pio::new(bar0 + 0x16),
pcm_out_volume: Pio::new(bar0 + 0x18),
record_select: Pio::new(bar0 + 0x1A),
record_gain: Pio::new(bar0 + 0x1C),
record_gain_mic: Pio::new(bar0 + 0x1E),
general_purpose: Pio::new(bar0 + 0x20),
control_3d: Pio::new(bar0 + 0x22),
audio_int_paging: Pio::new(bar0 + 0x24),
powerdown: Pio::new(bar0 + 0x26),
extended_id: Pio::new(bar0 + 0x28),
extended_ctrl: Pio::new(bar0 + 0x2A),
vra_pcm_front: Pio::new(bar0 + 0x2C),
}
}
}
#[allow(dead_code)]
struct BusBoxRegs {
/// Buffer descriptor list base address
/* 0x00 */
bdbar: Pio<u32>,
/// Current index value
/* 0x04 */
civ: Pio<u8>,
/// Last valid index
/* 0x05 */
lvi: Pio<u8>,
/// Status
/* 0x06 */
sr: Pio<u16>,
/// Position in current buffer
/* 0x08 */
picb: Pio<u16>,
/// Prefetched index value
/* 0x0A */
piv: Pio<u8>,
/// Control
/* 0x0B */
cr: Pio<u8>,
}
impl BusBoxRegs {
fn new(base: u16) -> Self {
Self {
bdbar: Pio::new(base + 0x00),
civ: Pio::new(base + 0x04),
lvi: Pio::new(base + 0x05),
sr: Pio::new(base + 0x06),
picb: Pio::new(base + 0x08),
piv: Pio::new(base + 0x0A),
cr: Pio::new(base + 0x0B),
}
}
}
#[allow(dead_code)]
struct BusRegs {
/// PCM in register box
/* 0x00 */
pi: BusBoxRegs,
/// PCM out register box
/* 0x10 */
po: BusBoxRegs,
/// Microphone register box
/* 0x20 */
mc: BusBoxRegs,
}
impl BusRegs {
fn new(bar1: u16) -> Self {
Self {
pi: BusBoxRegs::new(bar1 + 0x00),
po: BusBoxRegs::new(bar1 + 0x10),
mc: BusBoxRegs::new(bar1 + 0x20),
}
}
}
#[repr(C, packed)]
pub struct BufferDescriptor {
/* 0x00 */ addr: Mmio<u32>,
/* 0x04 */ samples: Mmio<u16>,
/* 0x06 */ flags: Mmio<u16>,
}
pub struct Ac97 {
mixer: MixerRegs,
bus: BusRegs,
bdl: Dma<[BufferDescriptor; NUM_SUB_BUFFS]>,
buf: Dma<[u8; NUM_SUB_BUFFS * SUB_BUFF_SIZE]>,
handles: Mutex<HandleMap<Handle>>,
}
impl Ac97 {
pub unsafe fn new(bar0: u16, bar1: u16) -> Result<Self> {
let mut module = Ac97 {
mixer: MixerRegs::new(bar0),
bus: BusRegs::new(bar1),
bdl: Dma::zeroed(
//TODO: PhysBox::new_in_32bit_space(bdl_size)?
)?
.assume_init(),
buf: Dma::zeroed(
//TODO: PhysBox::new_in_32bit_space(buf_size)?
)?
.assume_init(),
handles: Mutex::new(HandleMap::new()),
};
module.init()?;
Ok(module)
}
fn init(&mut self) -> Result<()> {
//TODO: support other sample rates, or just the default of 48000 Hz
{
// Check if VRA is supported
if !self.mixer.extended_id.readf(1 << 0) {
println!("ac97d: VRA not supported and is currently required");
return Err(Error::new(ENOENT));
}
// Enable VRA
self.mixer.extended_ctrl.writef(1 << 0, true);
// Attempt to set sample rate for PCM front to 44100 Hz
let desired_sample_rate = 44100;
self.mixer.vra_pcm_front.write(desired_sample_rate);
// Read back real sample rate
let real_sample_rate = self.mixer.vra_pcm_front.read();
println!("ac97d: set sample rate to {}", real_sample_rate);
// Error if we cannot set the sample rate as desired
if real_sample_rate != desired_sample_rate {
println!(
"ac97d: sample rate is {} but only {} is supported",
real_sample_rate, desired_sample_rate
);
return Err(Error::new(ENOENT));
}
}
// Ensure PCM out is stopped
self.bus.po.cr.writef(1, false);
// Reset PCM out
self.bus.po.cr.writef(1 << 1, true);
while self.bus.po.cr.readf(1 << 1) {
// Spinning on resetting PCM out
//TODO: relax
}
// Initialize BDL for PCM out
for i in 0..NUM_SUB_BUFFS {
self.bdl[i]
.addr
.write((self.buf.physical() + i * SUB_BUFF_SIZE) as u32);
self.bdl[i]
.samples
.write((SUB_BUFF_SIZE / 2/* Each sample is i16 or 2 bytes */) as u16);
self.bdl[i]
.flags
.write(1 << 15 /* Interrupt on completion */);
}
self.bus.po.bdbar.write(self.bdl.physical() as u32);
// Enable interrupt on completion
self.bus.po.cr.writef(1 << 4, true);
// Start bus master
self.bus.po.cr.writef(1 << 0, true);
        // Set master volume to 0 dB attenuation (loudest output, DANGER!)
        self.mixer.master_volume.write(0);
        // Set PCM output volume to mid-range attenuation (0x8 steps per channel)
        self.mixer.pcm_out_volume.write(0x808);
Ok(())
}
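The two volume writes above assume the standard AC'97 stereo volume layout: attenuation steps in the left/right fields and a mute bit at the top. This standalone sketch replicates that encoding (assuming the 5-bit attenuation fields of the PCM-out register; `ac97_volume` is a hypothetical helper, not part of the driver):

```rust
// Sketch of the AC'97 stereo volume register layout assumed above:
// bits 12:8 = left attenuation steps, bits 4:0 = right, bit 15 = mute.
fn ac97_volume(left_steps: u16, right_steps: u16, mute: bool) -> u16 {
    ((mute as u16) << 15) | ((left_steps & 0x1F) << 8) | (right_steps & 0x1F)
}

fn main() {
    assert_eq!(ac97_volume(0, 0, false), 0x0000); // 0 dB attenuation, loudest
    assert_eq!(ac97_volume(8, 8, false), 0x0808); // the PCM-out value written in init()
    assert_eq!(ac97_volume(0, 0, true), 0x8000); // muted
    println!("ok");
}
```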
pub fn irq(&mut self) -> bool {
let ints = self.bus.po.sr.read() & 0b11100;
if ints != 0 {
self.bus.po.sr.write(ints);
true
} else {
false
}
}
}
impl SchemeSync for Ac97 {
fn scheme_root(&mut self) -> Result<usize> {
Ok(self.handles.lock().insert(Handle::SchemeRoot))
}
fn openat(
&mut self,
dirfd: usize,
_path: &str,
_flags: usize,
_fcntl_flags: u32,
ctx: &CallerCtx,
) -> Result<OpenResult> {
{
let handles = self.handles.lock();
let handle = handles.get(dirfd)?;
if !matches!(handle, Handle::SchemeRoot) {
return Err(Error::new(EACCES));
}
}
if ctx.uid == 0 {
let id = self.handles.lock().insert(Handle::Todo);
Ok(OpenResult::ThisScheme {
number: id,
flags: NewFdFlags::empty(),
})
} else {
Err(Error::new(EACCES))
}
}
fn write(
&mut self,
id: usize,
buf: &[u8],
_offset: u64,
_flags: u32,
_ctx: &CallerCtx,
) -> Result<usize> {
{
let mut handles = self.handles.lock();
let handle = handles.get_mut(id)?;
if !matches!(handle, Handle::Todo) {
return Err(Error::new(EBADF));
}
}
if buf.len() != SUB_BUFF_SIZE {
return Err(Error::new(EINVAL));
}
let civ = self.bus.po.civ.read() as usize;
let mut lvi = self.bus.po.lvi.read() as usize;
if lvi == (civ + 3) % NUM_SUB_BUFFS {
// Block if we already are 3 buffers ahead
Err(Error::new(EWOULDBLOCK))
} else {
// Fill next buffer
lvi = (lvi + 1) % NUM_SUB_BUFFS;
            let off = lvi * SUB_BUFF_SIZE;
            self.buf[off..off + SUB_BUFF_SIZE].copy_from_slice(buf);
self.bus.po.lvi.write(lvi as u8);
Ok(SUB_BUFF_SIZE)
}
}
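The backpressure check in `write` compares the hardware's current index (CIV) against the last valid index (LVI) modulo the ring size. This standalone sketch replicates that modular arithmetic with the same ring size (`would_block` is a hypothetical helper for illustration):

```rust
// Refuse to queue when the last-valid index is already 3 sub-buffers
// ahead of the hardware's current index (modulo the ring size).
const NUM_SUB_BUFFS: usize = 32;

fn would_block(civ: usize, lvi: usize) -> bool {
    lvi == (civ + 3) % NUM_SUB_BUFFS
}

fn main() {
    assert!(would_block(0, 3)); // 3 ahead: block
    assert!(!would_block(0, 2)); // 2 ahead: room for one more
    assert!(would_block(30, 1)); // wrap-around case: (30 + 3) % 32 == 1
    println!("ok");
}
```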
fn fpath(&mut self, _id: usize, buf: &mut [u8], _ctx: &CallerCtx) -> Result<usize> {
FpathWriter::with(buf, "audiohw", |_| Ok(()))
}
fn on_close(&mut self, id: usize) {
self.handles.lock().remove(id);
}
}
@@ -0,0 +1,134 @@
use std::io::{Read, Write};
use std::os::unix::io::AsRawFd;
use event::{user_data, EventQueue};
use pcid_interface::PciFunctionHandle;
use redox_scheme::scheme::register_sync_scheme;
use redox_scheme::Socket;
use scheme_utils::ReadinessBased;
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
pub mod device;
fn main() {
pcid_interface::pci_daemon(daemon);
}
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
fn daemon(daemon: daemon::Daemon, pcid_handle: PciFunctionHandle) -> ! {
let pci_config = pcid_handle.config();
let mut name = pci_config.func.name();
name.push_str("_ac97");
let bar0 = pci_config.func.bars[0].expect_port();
let bar1 = pci_config.func.bars[1].expect_port();
let irq = pci_config
.func
.legacy_interrupt_line
.expect("ac97d: no legacy interrupts supported");
println!(" + ac97 {}", pci_config.func.display());
common::setup_logging(
"audio",
"pci",
&name,
common::output_level(),
common::file_level(),
);
common::acquire_port_io_rights().expect("ac97d: failed to set I/O privilege level to Ring 3");
let mut irq_file = irq.irq_handle("ac97d");
let socket = Socket::nonblock().expect("ac97d: failed to create socket");
let mut device =
unsafe { device::Ac97::new(bar0, bar1).expect("ac97d: failed to allocate device") };
let mut readiness_based = ReadinessBased::new(&socket, 16);
user_data! {
enum Source {
Irq,
Scheme,
}
}
let event_queue = EventQueue::<Source>::new().expect("ac97d: Could not create event queue.");
event_queue
.subscribe(
irq_file.as_raw_fd() as usize,
Source::Irq,
event::EventFlags::READ,
)
.unwrap();
event_queue
.subscribe(
socket.inner().raw(),
Source::Scheme,
event::EventFlags::READ,
)
.unwrap();
register_sync_scheme(&socket, "audiohw", &mut device)
.expect("ac97d: failed to register audiohw scheme to namespace");
daemon.ready();
libredox::call::setrens(0, 0).expect("ac97d: failed to enter null namespace");
let all = [Source::Irq, Source::Scheme];
for event in all
.into_iter()
.chain(event_queue.map(|e| e.expect("ac97d: failed to get next event").user_data))
{
match event {
Source::Irq => {
let mut irq = [0; 8];
irq_file.read(&mut irq).unwrap();
if !device.irq() {
continue;
}
                irq_file.write(&irq).unwrap();
readiness_based
.poll_all_requests(&mut device)
.expect("ac97d: failed to poll requests");
readiness_based
.write_responses()
.expect("ac97d: failed to write to socket");
/*
let next_read = device_irq.next_read();
if next_read > 0 {
return Ok(Some(next_read));
}
*/
}
Source::Scheme => {
readiness_based
.read_and_process_requests(&mut device)
.expect("ac97d: failed to read from socket");
readiness_based
.write_responses()
.expect("ac97d: failed to write to socket");
/*
let next_read = device.borrow().next_read();
if next_read > 0 {
return Ok(Some(next_read));
}
*/
}
}
}
std::process::exit(0);
}
#[cfg(not(any(target_arch = "x86", target_arch = "x86_64")))]
fn daemon(_daemon: daemon::Daemon, _pcid_handle: PciFunctionHandle) -> ! {
    unimplemented!()
}
@@ -0,0 +1,22 @@
[package]
name = "ihdad"
description = "Intel HD Audio chipset driver"
version = "0.1.0"
edition = "2021"
[dependencies]
bitflags.workspace = true
libredox.workspace = true
log.workspace = true
redox_event.workspace = true
redox_syscall.workspace = true
spin.workspace = true
common = { path = "../../common" }
daemon = { path = "../../../daemon" }
pcid = { path = "../../pcid" }
redox-scheme.workspace = true
scheme-utils = { path = "../../../scheme-utils" }
[lints]
workspace = true
@@ -0,0 +1,5 @@
[[drivers]]
name = "Intel HD Audio"
class = 0x04
subclass = 0x03
command = ["ihdad"]
@@ -0,0 +1,501 @@
use common::dma::Dma;
use common::io::{Io, Mmio};
use common::timeout::Timeout;
use syscall::error::{Error, Result, EIO};
use super::common::*;
// CORBCTL
const CMEIE: u8 = 1 << 0; // 1 bit
const CORBRUN: u8 = 1 << 1; // 1 bit
// CORBSIZE
const CORBSZCAP: (u8, u8) = (4, 4);
const CORBSIZE: (u8, u8) = (0, 2);
// CORBRP
const CORBRPRST: u16 = 1 << 15;
// RIRBWP
const RIRBWPRST: u16 = 1 << 15;
// RIRBCTL
const RINTCTL: u8 = 1 << 0; // 1 bit
const RIRBDMAEN: u8 = 1 << 1; // 1 bit
const CORB_OFFSET: usize = 0x00;
const RIRB_OFFSET: usize = 0x10;
const ICMD_OFFSET: usize = 0x20;
// ICS
const ICB: u16 = 1 << 0;
const IRV: u16 = 1 << 1;
// CORB and RIRB offset
const COMMAND_BUFFER_OFFSET: usize = 0x40;
const CORB_BUFF_MAX_SIZE: usize = 1024;
struct CommandBufferRegs {
corblbase: Mmio<u32>,
corbubase: Mmio<u32>,
corbwp: Mmio<u16>,
corbrp: Mmio<u16>,
corbctl: Mmio<u8>,
corbsts: Mmio<u8>,
corbsize: Mmio<u8>,
rsvd5: Mmio<u8>,
rirblbase: Mmio<u32>,
rirbubase: Mmio<u32>,
rirbwp: Mmio<u16>,
rintcnt: Mmio<u16>,
rirbctl: Mmio<u8>,
rirbsts: Mmio<u8>,
rirbsize: Mmio<u8>,
rsvd6: Mmio<u8>,
}
struct CorbRegs {
corblbase: Mmio<u32>,
corbubase: Mmio<u32>,
corbwp: Mmio<u16>,
corbrp: Mmio<u16>,
corbctl: Mmio<u8>,
corbsts: Mmio<u8>,
corbsize: Mmio<u8>,
rsvd5: Mmio<u8>,
}
struct Corb {
regs: &'static mut CorbRegs,
corb_base: *mut u32,
corb_base_phys: usize,
corb_count: usize,
}
impl Corb {
pub fn new(regs_addr: usize, corb_buff_phys: usize, corb_buff_virt: *mut u32) -> Corb {
unsafe {
Corb {
regs: &mut *(regs_addr as *mut CorbRegs),
corb_base: corb_buff_virt,
corb_base_phys: corb_buff_phys,
corb_count: 0,
}
}
}
//Intel 4.4.1.3
pub fn init(&mut self) -> Result<()> {
self.stop()?;
//Determine CORB and RIRB size and allocate buffer
//3.3.24
let corbsize_reg = self.regs.corbsize.read();
let corbszcap = (corbsize_reg >> 4) & 0xF;
        let mut corbsize: u8 = 0;
        if (corbszcap & 4) == 4 {
            // 256 entries (1024 bytes)
            corbsize = 2;
            self.corb_count = 256;
        } else if (corbszcap & 2) == 2 {
            // 16 entries (64 bytes)
            corbsize = 1;
            self.corb_count = 16;
        } else if (corbszcap & 1) == 1 {
            // 2 entries (8 bytes)
            corbsize = 0;
            self.corb_count = 2;
        }
assert!(self.corb_count != 0);
let addr = self.corb_base_phys;
self.set_address(addr);
self.regs.corbsize.write((corbsize_reg & 0xFC) | corbsize);
self.reset_read_pointer()?;
let old_wp = self.regs.corbwp.read();
self.regs.corbwp.write(old_wp & 0xFF00);
Ok(())
}
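The CORBSZCAP logic above selects the largest ring the controller advertises. This standalone sketch mirrors that decision (`pick_corb_size` is a hypothetical helper returning the CORBSIZE encoding and entry count):

```rust
// Pick the largest advertised CORB ring: bit 2 = 256 entries,
// bit 1 = 16 entries, bit 0 = 2 entries.
fn pick_corb_size(szcap: u8) -> (u8, usize) {
    if szcap & 4 != 0 {
        (2, 256)
    } else if szcap & 2 != 0 {
        (1, 16)
    } else {
        (0, 2)
    }
}

fn main() {
    assert_eq!(pick_corb_size(0b100), (2, 256));
    assert_eq!(pick_corb_size(0b011), (1, 16)); // prefers 16 over 2
    assert_eq!(pick_corb_size(0b001), (0, 2));
    println!("ok");
}
```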
pub fn start(&mut self) {
self.regs.corbctl.writef(CORBRUN, true);
}
#[inline(never)]
pub fn stop(&mut self) -> Result<()> {
let timeout = Timeout::from_secs(1);
while self.regs.corbctl.readf(CORBRUN) {
self.regs.corbctl.writef(CORBRUN, false);
timeout.run().map_err(|()| {
log::error!("timeout on clearing CORBRUN");
Error::new(EIO)
})?;
}
Ok(())
}
pub fn set_address(&mut self, addr: usize) {
self.regs.corblbase.write((addr & 0xFFFFFFFF) as u32);
self.regs.corbubase.write(((addr as u64) >> 32) as u32);
}
pub fn reset_read_pointer(&mut self) -> Result<()> {
// 3.3.21
self.stop()?;
// Set CORBRPRST to 1
log::trace!("CORBRP {:X}", self.regs.corbrp.read());
self.regs.corbrp.writef(CORBRPRST, true);
log::trace!("CORBRP {:X}", self.regs.corbrp.read());
{
// Wait for it to become 1
let timeout = Timeout::from_secs(1);
while !self.regs.corbrp.readf(CORBRPRST) {
self.regs.corbrp.writef(CORBRPRST, true);
timeout.run().map_err(|()| {
log::error!("timeout on setting CORBRPRST");
Error::new(EIO)
})?;
}
}
// Clear the bit again
self.regs.corbrp.writef(CORBRPRST, false);
{
// Read back the bit until zero to verify that it is cleared.
let timeout = Timeout::from_secs(1);
loop {
if !self.regs.corbrp.readf(CORBRPRST) {
break;
}
self.regs.corbrp.writef(CORBRPRST, false);
timeout.run().map_err(|()| {
log::error!("timeout on clearing CORBRPRST");
Error::new(EIO)
})?;
}
}
Ok(())
}
fn send_command(&mut self, cmd: u32) -> Result<()> {
{
// wait for the commands to finish
let timeout = Timeout::from_secs(1);
while (self.regs.corbwp.read() & 0xff) != (self.regs.corbrp.read() & 0xff) {
timeout.run().map_err(|()| {
log::error!("timeout on CORB command");
Error::new(EIO)
})?;
}
}
let write_pos: usize = ((self.regs.corbwp.read() as usize & 0xFF) + 1) % self.corb_count;
unsafe {
*self.corb_base.offset(write_pos as isize) = cmd;
}
self.regs.corbwp.write(write_pos as u16);
log::trace!("Corb: {:08X}", cmd);
Ok(())
}
}
struct RirbRegs {
rirblbase: Mmio<u32>,
rirbubase: Mmio<u32>,
rirbwp: Mmio<u16>,
rintcnt: Mmio<u16>,
rirbctl: Mmio<u8>,
rirbsts: Mmio<u8>,
rirbsize: Mmio<u8>,
rsvd6: Mmio<u8>,
}
struct Rirb {
regs: &'static mut RirbRegs,
rirb_base: *mut u64,
rirb_base_phys: usize,
rirb_rp: u16,
rirb_count: usize,
}
impl Rirb {
pub fn new(regs_addr: usize, rirb_buff_phys: usize, rirb_buff_virt: *mut u64) -> Rirb {
unsafe {
Rirb {
regs: &mut *(regs_addr as *mut RirbRegs),
rirb_base: rirb_buff_virt,
rirb_rp: 0,
rirb_base_phys: rirb_buff_phys,
rirb_count: 0,
}
}
}
//Intel 4.4.1.3
pub fn init(&mut self) -> Result<()> {
self.stop()?;
let rirbsize_reg = self.regs.rirbsize.read();
let rirbszcap = (rirbsize_reg >> 4) & 0xF;
        let mut rirbsize: u8 = 0;
        if (rirbszcap & 4) == 4 {
            // 256 entries (2048 bytes)
            rirbsize = 2;
            self.rirb_count = 256;
        } else if (rirbszcap & 2) == 2 {
            // 16 entries (128 bytes; each RIRB entry is 8 bytes)
            rirbsize = 1;
            self.rirb_count = 16;
        } else if (rirbszcap & 1) == 1 {
            // 2 entries (16 bytes)
            rirbsize = 0;
            self.rirb_count = 2;
        }
        assert!(self.rirb_count != 0);
        let addr = self.rirb_base_phys;
        self.set_address(addr);
        // Program the chosen ring size (mirrors Corb::init)
        self.regs.rirbsize.write((rirbsize_reg & 0xFC) | rirbsize);
self.reset_write_pointer();
self.rirb_rp = 0;
self.regs.rintcnt.write(1);
Ok(())
}
pub fn start(&mut self) {
self.regs.rirbctl.writef(RIRBDMAEN | RINTCTL, true);
}
pub fn stop(&mut self) -> Result<()> {
let timeout = Timeout::from_secs(1);
while self.regs.rirbctl.readf(RIRBDMAEN) {
self.regs.rirbctl.writef(RIRBDMAEN, false);
timeout.run().map_err(|()| {
log::error!("timeout on clearing RIRBDMAEN");
Error::new(EIO)
})?;
}
Ok(())
}
pub fn set_address(&mut self, addr: usize) {
self.regs.rirblbase.write((addr & 0xFFFFFFFF) as u32);
self.regs.rirbubase.write(((addr as u64) >> 32) as u32);
}
pub fn reset_write_pointer(&mut self) {
self.regs.rirbwp.writef(RIRBWPRST, true);
}
fn read_response(&mut self) -> Result<u64> {
{
// wait for response
let timeout = Timeout::from_secs(1);
while (self.regs.rirbwp.read() & 0xff) == (self.rirb_rp & 0xff) {
timeout.run().map_err(|()| {
log::error!("timeout on RIRB response");
Error::new(EIO)
})?;
}
}
let read_pos: u16 = (self.rirb_rp + 1) % self.rirb_count as u16;
        let res: u64 = unsafe { *self.rirb_base.offset(read_pos as isize) };
self.rirb_rp = read_pos;
log::trace!("Rirb: {:08X}", res);
Ok(res)
}
}
struct ImmediateCommandRegs {
icoi: Mmio<u32>,
irii: Mmio<u32>,
ics: Mmio<u16>,
rsvd7: [Mmio<u8>; 6],
}
pub struct ImmediateCommand {
regs: &'static mut ImmediateCommandRegs,
}
impl ImmediateCommand {
pub fn new(regs_addr: usize) -> ImmediateCommand {
unsafe {
ImmediateCommand {
regs: &mut *(regs_addr as *mut ImmediateCommandRegs),
}
}
}
pub fn cmd(&mut self, cmd: u32) -> Result<u64> {
{
// wait for ready
let timeout = Timeout::from_secs(1);
while self.regs.ics.readf(ICB) {
timeout.run().map_err(|()| {
log::error!("timeout on immediate command");
Error::new(EIO)
})?;
}
}
// write command
self.regs.icoi.write(cmd);
// set ICB bit to send command
self.regs.ics.writef(ICB, true);
{
// wait for IRV bit to be set to indicate a response is latched
let timeout = Timeout::from_secs(1);
while !self.regs.ics.readf(IRV) {
timeout.run().map_err(|()| {
log::error!("timeout on immediate response");
Error::new(EIO)
})?;
}
}
        // Read the response register twice for the full 8 bytes;
        // the upper 4 bytes are typically zero.
let mut res: u64 = self.regs.irii.read() as u64;
res |= (self.regs.irii.read() as u64) << 32;
// clear the bit so we know when the next response comes
self.regs.ics.writef(IRV, false);
Ok(res)
}
}
pub struct CommandBuffer {
// regs: &'static mut CommandBufferRegs,
corb: Corb,
rirb: Rirb,
icmd: ImmediateCommand,
use_immediate_cmd: bool,
mem: Dma<[u8; 0x1000]>,
}
impl CommandBuffer {
pub fn new(regs_addr: usize, mut cmd_buff: Dma<[u8; 0x1000]>) -> CommandBuffer {
let corb = Corb::new(
regs_addr + CORB_OFFSET,
cmd_buff.physical(),
cmd_buff.as_mut_ptr().cast(),
);
let rirb = Rirb::new(
regs_addr + RIRB_OFFSET,
cmd_buff.physical() + CORB_BUFF_MAX_SIZE,
cmd_buff
.as_mut_ptr()
.cast::<u8>()
.wrapping_add(CORB_BUFF_MAX_SIZE)
.cast(),
);
let icmd = ImmediateCommand::new(regs_addr + ICMD_OFFSET);
        CommandBuffer {
            corb,
            rirb,
            icmd,
            use_immediate_cmd: false,
            mem: cmd_buff,
        }
}
pub fn init(&mut self, use_imm_cmds: bool) -> Result<()> {
self.corb.init()?;
self.rirb.init()?;
self.set_use_imm_cmds(use_imm_cmds)?;
Ok(())
}
pub fn stop(&mut self) -> Result<()> {
self.corb.stop()?;
self.rirb.stop()?;
Ok(())
}
pub fn cmd12(&mut self, addr: WidgetAddr, command: u32, data: u8) -> Result<u64> {
let mut ncmd: u32 = 0;
ncmd |= (addr.0 as u32 & 0x00F) << 28;
ncmd |= (addr.1 as u32 & 0x0FF) << 20;
ncmd |= (command & 0xFFF) << 8;
ncmd |= (data as u32 & 0x0FF) << 0;
self.cmd(ncmd)
}
pub fn cmd4(&mut self, addr: WidgetAddr, command: u32, data: u16) -> Result<u64> {
let mut ncmd: u32 = 0;
ncmd |= (addr.0 as u32 & 0x000F) << 28;
ncmd |= (addr.1 as u32 & 0x00FF) << 20;
ncmd |= (command & 0x000F) << 16;
ncmd |= (data as u32 & 0xFFFF) << 0;
self.cmd(ncmd)
}
pub fn cmd(&mut self, cmd: u32) -> Result<u64> {
if self.use_immediate_cmd {
self.cmd_imm(cmd)
} else {
self.cmd_buff(cmd)
}
}
pub fn cmd_imm(&mut self, cmd: u32) -> Result<u64> {
self.icmd.cmd(cmd)
}
pub fn cmd_buff(&mut self, cmd: u32) -> Result<u64> {
self.corb.send_command(cmd)?;
self.rirb.read_response()
}
pub fn set_use_imm_cmds(&mut self, use_imm: bool) -> Result<()> {
self.use_immediate_cmd = use_imm;
if self.use_immediate_cmd {
self.corb.stop()?;
self.rirb.stop()?;
} else {
self.corb.start();
self.rirb.start();
}
Ok(())
}
}
@@ -0,0 +1,195 @@
use std::fmt;
use std::mem::transmute;
pub type HDANodeAddr = u16;
pub type HDACodecAddr = u8;
pub type NodeAddr = u16;
pub type CodecAddr = u8;
pub type WidgetAddr = (CodecAddr, NodeAddr);
/*
impl fmt::Display for WidgetAddr {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "{:01X}:{:02X}\n", self.0, self.1)
}
}*/
#[derive(Debug, PartialEq)]
#[repr(u8)]
pub enum HDAWidgetType {
AudioOutput = 0x0,
AudioInput = 0x1,
AudioMixer = 0x2,
AudioSelector = 0x3,
PinComplex = 0x4,
Power = 0x5,
VolumeKnob = 0x6,
BeepGenerator = 0x7,
VendorDefined = 0xf,
}
impl fmt::Display for HDAWidgetType {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "{:?}", self)
}
}
#[derive(Debug, PartialEq)]
#[repr(u8)]
pub enum DefaultDevice {
LineOut = 0x0,
Speaker = 0x1,
HPOut = 0x2,
CD = 0x3,
SPDIF = 0x4,
DigitalOtherOut = 0x5,
ModemLineSide = 0x6,
ModemHandsetSide = 0x7,
LineIn = 0x8,
AUX = 0x9,
MicIn = 0xA,
Telephony = 0xB,
SPDIFIn = 0xC,
DigitalOtherIn = 0xD,
Reserved = 0xE,
Other = 0xF,
}
#[derive(Debug)]
#[repr(u8)]
pub enum PortConnectivity {
ConnectedToJack = 0x0,
NoPhysicalConnection = 0x1,
FixedFunction = 0x2,
JackAndInternal = 0x3,
}
#[derive(Debug)]
#[repr(u8)]
pub enum GrossLocation {
ExternalOnPrimary = 0x0,
Internal = 0x1,
SeperateChasis = 0x2,
Other = 0x3,
}
#[derive(Debug)]
#[repr(u8)]
pub enum GeometricLocation {
NA = 0x0,
Rear = 0x1,
Front = 0x2,
Left = 0x3,
Right = 0x4,
Top = 0x5,
Bottom = 0x6,
Special1 = 0x7,
Special2 = 0x8,
Special3 = 0x9,
Resvd1 = 0xA,
Resvd2 = 0xB,
Resvd3 = 0xC,
Resvd4 = 0xD,
Resvd5 = 0xE,
Resvd6 = 0xF,
}
#[derive(Debug)]
#[repr(u8)]
pub enum Color {
Unknown = 0x0,
Black = 0x1,
Grey = 0x2,
Blue = 0x3,
Green = 0x4,
Red = 0x5,
Orange = 0x6,
Yellow = 0x7,
Purple = 0x8,
Pink = 0x9,
Resvd1 = 0xA,
Resvd2 = 0xB,
Resvd3 = 0xC,
Resvd4 = 0xD,
White = 0xE,
Other = 0xF,
}
pub struct ConfigurationDefault {
value: u32,
}
impl ConfigurationDefault {
pub fn from_u32(value: u32) -> ConfigurationDefault {
        ConfigurationDefault { value }
}
pub fn color(&self) -> Color {
unsafe { transmute(((self.value >> 12) & 0xF) as u8) }
}
pub fn default_device(&self) -> DefaultDevice {
unsafe { transmute(((self.value >> 20) & 0xF) as u8) }
}
pub fn port_connectivity(&self) -> PortConnectivity {
unsafe { transmute(((self.value >> 30) & 0x3) as u8) }
}
pub fn gross_location(&self) -> GrossLocation {
unsafe { transmute(((self.value >> 28) & 0x3) as u8) }
}
pub fn geometric_location(&self) -> GeometricLocation {
        unsafe { transmute(((self.value >> 24) & 0xF) as u8) }
}
pub fn is_output(&self) -> bool {
match self.default_device() {
DefaultDevice::LineOut
| DefaultDevice::Speaker
| DefaultDevice::HPOut
| DefaultDevice::CD
| DefaultDevice::SPDIF
| DefaultDevice::DigitalOtherOut
| DefaultDevice::ModemLineSide => true,
_ => false,
}
}
pub fn is_input(&self) -> bool {
match self.default_device() {
DefaultDevice::ModemHandsetSide
| DefaultDevice::LineIn
| DefaultDevice::AUX
| DefaultDevice::MicIn
| DefaultDevice::Telephony
| DefaultDevice::SPDIFIn
| DefaultDevice::DigitalOtherIn => true,
_ => false,
}
}
pub fn sequence(&self) -> u8 {
(self.value & 0xF) as u8
}
pub fn default_association(&self) -> u8 {
((self.value >> 4) & 0xF) as u8
}
}
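The field accessors above can be checked against a typical pin configuration dword. The sample value below (0x0221401F, a common headphone-jack default) is illustrative only, not taken from this driver; `decode` replicates the same shifts and masks:

```rust
// Replicates the configuration-default field extraction above
// on a sample dword.
fn decode(value: u32) -> (u32, u32, u32, u32) {
    let port_connectivity = (value >> 30) & 0x3;
    let default_device = (value >> 20) & 0xF;
    let color = (value >> 12) & 0xF;
    let sequence = value & 0xF;
    (port_connectivity, default_device, color, sequence)
}

fn main() {
    let (conn, dev, color, seq) = decode(0x0221_401F);
    assert_eq!(conn, 0); // ConnectedToJack
    assert_eq!(dev, 2); // HPOut
    assert_eq!(color, 4); // Green
    assert_eq!(seq, 0xF);
    println!("ok");
}
```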
impl fmt::Display for ConfigurationDefault {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(
f,
"{:?} {:?} {:?} {:?}",
self.default_device(),
self.color(),
self.gross_location(),
self.geometric_location()
)
}
}
File diff suppressed because it is too large
@@ -0,0 +1,16 @@
#![allow(dead_code)]
pub mod cmdbuff;
pub mod common;
pub mod device;
pub mod node;
pub mod stream;
pub use self::node::*;
pub use self::stream::*;
pub use self::cmdbuff::*;
pub use self::device::IntelHDA;
pub use self::stream::BitsPerSample;
pub use self::stream::BufferDescriptorListEntry;
pub use self::stream::StreamBuffer;
pub use self::stream::StreamDescriptorRegs;
@@ -0,0 +1,108 @@
use super::common::*;
use std::{fmt, mem};
#[derive(Clone)]
pub struct HDANode {
pub addr: WidgetAddr,
// 0x4
pub subnode_count: u16,
pub subnode_start: u16,
// 0x5
pub function_group_type: u8,
// 0x9
pub capabilities: u32,
// 0xE
pub conn_list_len: u8,
pub connections: Vec<WidgetAddr>,
pub connection_default: u8,
pub is_widget: bool,
pub config_default: u32,
}
impl HDANode {
pub fn new() -> HDANode {
HDANode {
addr: (0, 0),
subnode_count: 0,
subnode_start: 0,
function_group_type: 0,
capabilities: 0,
conn_list_len: 0,
config_default: 0,
is_widget: false,
connections: Vec::<WidgetAddr>::new(),
connection_default: 0,
}
}
pub fn widget_type(&self) -> HDAWidgetType {
unsafe { mem::transmute(((self.capabilities >> 20) & 0xF) as u8) }
}
pub fn device_default(&self) -> Option<DefaultDevice> {
if self.widget_type() != HDAWidgetType::PinComplex {
None
} else {
Some(unsafe { mem::transmute(((self.config_default >> 20) & 0xF) as u8) })
}
}
pub fn configuration_default(&self) -> ConfigurationDefault {
ConfigurationDefault::from_u32(self.config_default)
}
pub fn addr(&self) -> WidgetAddr {
self.addr
}
}
impl fmt::Display for HDANode {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
if self.addr == (0, 0) {
write!(
f,
"Addr: {:02X}:{:02X}, Root Node.",
self.addr.0, self.addr.1
)
} else if self.is_widget {
match self.widget_type() {
HDAWidgetType::PinComplex => write!(
f,
"Addr: {:02X}:{:02X}, Type: {:?}: {:?}, Inputs: {}/{}: {:X?}.",
self.addr.0,
self.addr.1,
self.widget_type(),
self.device_default().unwrap(),
self.connection_default,
self.conn_list_len,
self.connections
),
_ => write!(
f,
"Addr: {:02X}:{:02X}, Type: {:?}, Inputs: {}/{}: {:X?}.",
self.addr.0,
self.addr.1,
self.widget_type(),
self.connection_default,
self.conn_list_len,
self.connections
),
}
} else {
write!(
f,
"Addr: {:02X}:{:02X}, AFG: {}, Widget count {}.",
self.addr.0, self.addr.1, self.function_group_type, self.subnode_count
)
}
}
}
@@ -0,0 +1,387 @@
use common::dma::Dma;
use common::io::{Io, Mmio};
use std::cmp::min;
use std::ptr::copy_nonoverlapping;
use std::result;
use syscall::error::{Error, Result, EIO};
use syscall::PAGE_SIZE;
pub enum BaseRate {
BR44_1,
BR48,
}
pub struct SampleRate {
base: BaseRate,
mult: u16,
div: u16,
}
use self::BaseRate::{BR44_1, BR48};
pub const SR_8: SampleRate = SampleRate {
base: BR48,
mult: 1,
div: 6,
};
pub const SR_11_025: SampleRate = SampleRate {
base: BR44_1,
mult: 1,
div: 4,
};
pub const SR_16: SampleRate = SampleRate {
base: BR48,
mult: 1,
div: 3,
};
pub const SR_22_05: SampleRate = SampleRate {
base: BR44_1,
mult: 1,
div: 2,
};
pub const SR_32: SampleRate = SampleRate {
base: BR48,
mult: 2,
div: 3,
};
pub const SR_44_1: SampleRate = SampleRate {
base: BR44_1,
mult: 1,
div: 1,
};
pub const SR_48: SampleRate = SampleRate {
base: BR48,
mult: 1,
div: 1,
};
pub const SR_88_1: SampleRate = SampleRate {
base: BR44_1,
mult: 2,
div: 1,
};
pub const SR_96: SampleRate = SampleRate {
base: BR48,
mult: 2,
div: 1,
};
pub const SR_176_4: SampleRate = SampleRate {
base: BR44_1,
mult: 4,
div: 1,
};
pub const SR_192: SampleRate = SampleRate {
base: BR48,
mult: 4,
div: 1,
};
#[repr(u8)]
pub enum BitsPerSample {
Bits8 = 0,
Bits16 = 1,
Bits20 = 2,
Bits24 = 3,
Bits32 = 4,
}
pub fn format_to_u16(sr: &SampleRate, bps: BitsPerSample, channels: u8) -> u16 {
// 3.3.41
let base: u16 = match sr.base {
BaseRate::BR44_1 => 1 << 14,
BaseRate::BR48 => 0,
};
let mult = ((sr.mult - 1) & 0x7) << 11;
let div = ((sr.div - 1) & 0x7) << 8;
let bits = (bps as u16) << 4;
let chan = ((channels - 1) & 0xF) as u16;
    base | mult | div | bits | chan
}
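The SDnFMT encoding produced by `format_to_u16` (HDA spec section 3.3.41) can be verified by hand. This standalone sketch mirrors the same bit packing (`sdnfmt` is a hypothetical stand-in for the function above):

```rust
// Bit layout: bit 14 = 44.1 kHz base, bits 13:11 = multiplier - 1,
// bits 10:8 = divisor - 1, bits 7:4 = bits-per-sample code,
// bits 3:0 = channels - 1.
fn sdnfmt(base_44_1: bool, mult: u16, div: u16, bits: u16, channels: u16) -> u16 {
    let base: u16 = if base_44_1 { 1 << 14 } else { 0 };
    base | (((mult - 1) & 0x7) << 11) | (((div - 1) & 0x7) << 8) | (bits << 4) | ((channels - 1) & 0xF)
}

fn main() {
    // 48 kHz (base 48, x1, /1), 16-bit (code 1), stereo -> 0x0011
    assert_eq!(sdnfmt(false, 1, 1, 1, 2), 0x0011);
    // 44.1 kHz, 16-bit, stereo -> base bit 14 set -> 0x4011
    assert_eq!(sdnfmt(true, 1, 1, 1, 2), 0x4011);
    println!("ok");
}
```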
#[repr(C, packed)]
pub struct StreamDescriptorRegs {
ctrl_lo: Mmio<u16>,
ctrl_hi: Mmio<u8>,
status: Mmio<u8>,
link_pos: Mmio<u32>,
buff_length: Mmio<u32>,
last_valid_index: Mmio<u16>,
resv1: Mmio<u16>,
fifo_size_: Mmio<u16>,
format: Mmio<u16>,
resv2: Mmio<u32>,
buff_desc_list_lo: Mmio<u32>,
buff_desc_list_hi: Mmio<u32>,
}
impl StreamDescriptorRegs {
pub fn status(&self) -> u8 {
self.status.read()
}
pub fn set_status(&mut self, status: u8) {
self.status.write(status);
}
pub fn control(&self) -> u32 {
let mut ctrl = self.ctrl_lo.read() as u32;
ctrl |= (self.ctrl_hi.read() as u32) << 16;
ctrl
}
pub fn set_control(&mut self, control: u32) {
self.ctrl_lo.write((control & 0xFFFF) as u16);
self.ctrl_hi.write(((control >> 16) & 0xFF) as u8);
}
pub fn set_pcm_format(&mut self, sr: &SampleRate, bps: BitsPerSample, channels: u8) {
// 3.3.41
let val = format_to_u16(sr, bps, channels);
self.format.write(val);
}
pub fn fifo_size(&self) -> u16 {
self.fifo_size_.read()
}
pub fn set_cyclic_buffer_length(&mut self, length: u32) {
self.buff_length.write(length);
}
pub fn cyclic_buffer_length(&self) -> u32 {
self.buff_length.read()
}
pub fn run(&mut self) {
let val = self.control() | (1 << 1);
self.set_control(val);
}
pub fn stop(&mut self) {
let val = self.control() & !(1 << 1);
self.set_control(val);
}
pub fn stream_number(&self) -> u8 {
((self.control() >> 20) & 0xF) as u8
}
pub fn set_stream_number(&mut self, stream_number: u8) {
let val = (self.control() & 0x00FFFF) | (((stream_number & 0xF) as u32) << 20);
self.set_control(val);
}
pub fn set_address(&mut self, addr: usize) {
self.buff_desc_list_lo.write((addr & 0xFFFFFFFF) as u32);
self.buff_desc_list_hi
.write((((addr as u64) >> 32) & 0xFFFFFFFF) as u32);
}
pub fn set_last_valid_index(&mut self, index: u16) {
self.last_valid_index.write(index);
}
pub fn link_position(&self) -> u32 {
self.link_pos.read()
}
pub fn set_interrupt_on_completion(&mut self, enable: bool) {
let mut ctrl = self.control();
if enable {
ctrl |= 1 << 2;
} else {
ctrl &= !(1 << 2);
}
self.set_control(ctrl);
}
pub fn buffer_complete(&self) -> bool {
self.status.readf(1 << 2)
}
pub fn clear_interrupts(&mut self) {
self.status.write(0x7 << 2);
}
// get sample size in bytes
pub fn sample_size(&self) -> usize {
let format = self.format.read();
let chan = (format & 0xF) as usize;
let bits = ((format >> 4) & 0xF) as usize;
match bits {
0 => 1 * (chan + 1),
1 => 2 * (chan + 1),
_ => 4 * (chan + 1),
}
}
}
pub struct OutputStream {
buff: StreamBuffer,
desc_regs: &'static mut StreamDescriptorRegs,
}
impl OutputStream {
pub fn new(
block_count: usize,
block_length: usize,
regs: &'static mut StreamDescriptorRegs,
) -> OutputStream {
OutputStream {
buff: StreamBuffer::new(block_length, block_count).unwrap(),
desc_regs: regs,
}
}
pub fn write_block(&mut self, buf: &[u8]) -> Result<usize> {
self.buff.write_block(buf)
}
pub fn block_size(&self) -> usize {
self.buff.block_size()
}
pub fn block_count(&self) -> usize {
self.buff.block_count()
}
pub fn current_block(&self) -> usize {
self.buff.current_block()
}
pub fn addr(&self) -> usize {
self.buff.addr()
}
pub fn phys(&self) -> usize {
self.buff.phys()
}
}
#[repr(C, packed)]
pub struct BufferDescriptorListEntry {
addr_low: Mmio<u32>,
addr_high: Mmio<u32>,
len: Mmio<u32>,
ioc_resv: Mmio<u32>,
}
impl BufferDescriptorListEntry {
pub fn address(&self) -> u64 {
(self.addr_low.read() as u64) | ((self.addr_high.read() as u64) << 32)
}
pub fn set_address(&mut self, addr: u64) {
self.addr_low.write(addr as u32);
self.addr_high.write((addr >> 32) as u32);
}
pub fn length(&self) -> u32 {
self.len.read()
}
pub fn set_length(&mut self, length: u32) {
self.len.write(length)
}
pub fn interrupt_on_completion(&self) -> bool {
(self.ioc_resv.read() & 0x1) == 0x1
}
pub fn set_interrupt_on_complete(&mut self, ioc: bool) {
self.ioc_resv.writef(1, ioc);
}
}
pub struct StreamBuffer {
mem: Dma<[u8]>,
block_cnt: usize,
block_len: usize,
cur_pos: usize,
}
impl StreamBuffer {
pub fn new(
block_length: usize,
block_count: usize,
) -> result::Result<StreamBuffer, &'static str> {
let page_aligned_size = (block_length * block_count).next_multiple_of(PAGE_SIZE);
let mem = unsafe {
Dma::zeroed_slice(page_aligned_size)
.map_err(|_| "Could not allocate physical memory for buffer.")?
.assume_init()
};
Ok(StreamBuffer {
mem,
block_len: block_length,
block_cnt: block_count,
cur_pos: 0,
})
}
pub fn length(&self) -> usize {
self.block_len * self.block_cnt
}
pub fn addr(&self) -> usize {
self.mem.as_ptr() as usize
}
pub fn phys(&self) -> usize {
self.mem.physical()
}
pub fn block_size(&self) -> usize {
self.block_len
}
pub fn block_count(&self) -> usize {
self.block_cnt
}
pub fn current_block(&self) -> usize {
self.cur_pos
}
pub fn write_block(&mut self, buf: &[u8]) -> Result<usize> {
if buf.len() != self.block_size() {
return Err(Error::new(EIO));
}
let len = min(self.block_size(), buf.len());
//log::trace!("Phys: {:X} Virt: {:X} Offset: {:X} Len: {:X}", self.phys(), self.addr(), self.current_block() * self.block_size(), len);
unsafe {
copy_nonoverlapping(
buf.as_ptr(),
(self.addr() + self.current_block() * self.block_size()) as *mut u8,
len,
);
}
self.cur_pos += 1;
self.cur_pos %= self.block_count();
Ok(len)
}
}
impl Drop for StreamBuffer {
fn drop(&mut self) {
log::debug!("IHDA: Deallocating buffer.");
}
}
@@ -0,0 +1,135 @@
use redox_scheme::scheme::register_sync_scheme;
use redox_scheme::Socket;
use scheme_utils::ReadinessBased;
use std::io::{Read, Write};
use std::os::unix::io::AsRawFd;
use event::{user_data, EventQueue};
use pcid_interface::irq_helpers::pci_allocate_interrupt_vector;
use pcid_interface::PciFunctionHandle;
pub mod hda;
/*
VEND:PROD
Virtualbox 8086:2668
QEMU ICH9 8086:293E
82801H ICH8 8086:284B
*/
fn main() {
pcid_interface::pci_daemon(daemon);
}
fn daemon(daemon: daemon::Daemon, mut pcid_handle: PciFunctionHandle) -> ! {
let pci_config = pcid_handle.config();
let mut name = pci_config.func.name();
name.push_str("_ihda");
common::setup_logging(
"audio",
"pci",
&name,
common::output_level(),
common::file_level(),
);
log::info!("IHDA {}", pci_config.func.display());
let address = unsafe { pcid_handle.map_bar(0) }.ptr.as_ptr() as usize;
let irq_file = pci_allocate_interrupt_vector(&mut pcid_handle, "ihdad");
{
let vend_prod: u32 = ((pci_config.func.full_device_id.vendor_id as u32) << 16)
| (pci_config.func.full_device_id.device_id as u32);
user_data! {
enum Source {
Irq,
Scheme,
}
}
let event_queue =
EventQueue::<Source>::new().expect("ihdad: Could not create event queue.");
let socket = Socket::nonblock().expect("ihdad: failed to create socket");
let mut device = unsafe {
hda::IntelHDA::new(address, vend_prod).expect("ihdad: failed to allocate device")
};
let mut readiness_based = ReadinessBased::new(&socket, 16);
register_sync_scheme(&socket, "audiohw", &mut device)
.expect("ihdad: failed to register audiohw scheme to namespace");
daemon.ready();
event_queue
.subscribe(
socket.inner().raw(),
Source::Scheme,
event::EventFlags::READ,
)
.unwrap();
event_queue
.subscribe(
irq_file.irq_handle().as_raw_fd() as usize,
Source::Irq,
event::EventFlags::READ,
)
.unwrap();
libredox::call::setrens(0, 0).expect("ihdad: failed to enter null namespace");
let all = [Source::Irq, Source::Scheme];
for event in all
.into_iter()
.chain(event_queue.map(|e| e.expect("failed to get next event").user_data))
{
match event {
Source::Irq => {
let mut irq = [0; 8];
irq_file.irq_handle().read(&mut irq).unwrap();
if !device.irq() {
continue;
}
irq_file.irq_handle().write(&irq).unwrap();
readiness_based
.poll_all_requests(&mut device)
.expect("ihdad: failed to poll requests");
readiness_based
.write_responses()
.expect("ihdad: failed to write to socket");
/*
let next_read = device_irq.next_read();
if next_read > 0 {
return Ok(Some(next_read));
}
*/
}
Source::Scheme => {
readiness_based
.read_and_process_requests(&mut device)
.expect("ihdad: failed to read from socket");
readiness_based
.write_responses()
.expect("ihdad: failed to write to socket");
/*
let next_read = device.borrow().next_read();
if next_read > 0 {
return Ok(Some(next_read));
}
*/
}
}
}
std::process::exit(0);
}
}
@@ -0,0 +1,20 @@
[package]
name = "sb16d"
description = "Sound Blaster sound card driver"
version = "0.1.0"
edition = "2021"
[dependencies]
bitflags.workspace = true
common = { path = "../../common" }
libredox.workspace = true
log.workspace = true
daemon = { path = "../../../daemon" }
redox_event.workspace = true
redox_syscall.workspace = true
spin.workspace = true
redox-scheme.workspace = true
scheme-utils = { path = "../../../scheme-utils" }
[lints]
workspace = true
@@ -0,0 +1,232 @@
use std::{thread, time};
use common::io::{Io, Pio, ReadOnly, WriteOnly};
use redox_scheme::scheme::SchemeSync;
use redox_scheme::CallerCtx;
use redox_scheme::OpenResult;
use scheme_utils::{FpathWriter, HandleMap};
use syscall::error::{Error, Result, EACCES, EBADF, ENODEV};
use syscall::schemev2::NewFdFlags;
use spin::Mutex;
const NUM_SUB_BUFFS: usize = 32;
const SUB_BUFF_SIZE: usize = 2048;
enum Handle {
Todo,
SchemeRoot,
}
#[allow(dead_code)]
pub struct Sb16 {
handles: Mutex<HandleMap<Handle>>,
pub(crate) irqs: Vec<u8>,
dmas: Vec<u8>,
// Regs
/* 0x04 */ mixer_addr: WriteOnly<Pio<u8>>,
/* 0x05 */ mixer_data: Pio<u8>,
/* 0x06 */ dsp_reset: WriteOnly<Pio<u8>>,
/* 0x0A */ dsp_read_data: ReadOnly<Pio<u8>>,
/* 0x0C */ dsp_write_data: WriteOnly<Pio<u8>>,
/* 0x0C */ dsp_write_status: ReadOnly<Pio<u8>>,
/* 0x0E */ dsp_read_status: ReadOnly<Pio<u8>>,
}
impl Sb16 {
pub unsafe fn new(addr: u16) -> Result<Self> {
let mut module = Sb16 {
handles: Mutex::new(HandleMap::new()),
irqs: Vec::new(),
dmas: Vec::new(),
// Regs
mixer_addr: WriteOnly::new(Pio::new(addr + 0x04)),
mixer_data: Pio::new(addr + 0x05),
dsp_reset: WriteOnly::new(Pio::new(addr + 0x06)),
dsp_read_data: ReadOnly::new(Pio::new(addr + 0x0A)),
dsp_write_data: WriteOnly::new(Pio::new(addr + 0x0C)),
dsp_write_status: ReadOnly::new(Pio::new(addr + 0x0C)),
dsp_read_status: ReadOnly::new(Pio::new(addr + 0x0E)),
};
module.init()?;
Ok(module)
}
fn mixer_read(&mut self, index: u8) -> u8 {
self.mixer_addr.write(index);
self.mixer_data.read()
}
fn mixer_write(&mut self, index: u8, value: u8) {
self.mixer_addr.write(index);
self.mixer_data.write(value);
}
fn dsp_read(&mut self) -> Result<u8> {
// Bit 7 must be 1 before data can be sent
while !self.dsp_read_status.readf(1 << 7) {
//TODO: timeout!
std::thread::yield_now();
}
Ok(self.dsp_read_data.read())
}
fn dsp_write(&mut self, value: u8) -> Result<()> {
// Bit 7 must be 0 before data can be sent
while self.dsp_write_status.readf(1 << 7) {
//TODO: timeout!
std::thread::yield_now();
}
self.dsp_write_data.write(value);
Ok(())
}
fn init(&mut self) -> Result<()> {
// Perform DSP reset
{
// Write 1 to reset port
self.dsp_reset.write(1);
// Wait 3us
thread::sleep(time::Duration::from_micros(3));
// Write 0 to reset port
self.dsp_reset.write(0);
//TODO: Wait for ready byte (0xAA) using read status
thread::sleep(time::Duration::from_micros(100));
let ready = self.dsp_read()?;
if ready != 0xAA {
log::error!("ready byte was 0x{:02X} instead of 0xAA", ready);
return Err(Error::new(ENODEV));
}
}
// Read DSP version
{
self.dsp_write(0xE1)?;
let major = self.dsp_read()?;
let minor = self.dsp_read()?;
log::info!("DSP version {}.{:02}", major, minor);
if major != 4 {
log::error!("Unsupported DSP major version {}", major);
return Err(Error::new(ENODEV));
}
}
// Get available IRQs and DMAs
{
self.irqs.clear();
let irq_mask = self.mixer_read(0x80);
if (irq_mask & (1 << 0)) != 0 {
self.irqs.push(2);
}
if (irq_mask & (1 << 1)) != 0 {
self.irqs.push(5);
}
if (irq_mask & (1 << 2)) != 0 {
self.irqs.push(7);
}
if (irq_mask & (1 << 3)) != 0 {
self.irqs.push(10);
}
self.dmas.clear();
let dma_mask = self.mixer_read(0x81);
if (dma_mask & (1 << 0)) != 0 {
self.dmas.push(0);
}
if (dma_mask & (1 << 1)) != 0 {
self.dmas.push(1);
}
if (dma_mask & (1 << 3)) != 0 {
self.dmas.push(3);
}
if (dma_mask & (1 << 5)) != 0 {
self.dmas.push(5);
}
if (dma_mask & (1 << 6)) != 0 {
self.dmas.push(6);
}
if (dma_mask & (1 << 7)) != 0 {
self.dmas.push(7);
}
log::info!("IRQs {:02X?} DMAs {:02X?}", self.irqs, self.dmas);
}
// Set output sample rate to 44100 Hz (Redox OS standard)
{
let rate = 44100u16;
self.dsp_write(0x41)?;
self.dsp_write((rate >> 8) as u8)?;
self.dsp_write(rate as u8)?;
}
Ok(())
}
pub fn irq(&mut self) -> bool {
//TODO
false
}
}
impl SchemeSync for Sb16 {
fn scheme_root(&mut self) -> Result<usize> {
Ok(self.handles.lock().insert(Handle::SchemeRoot))
}
fn openat(
&mut self,
dirfd: usize,
_path: &str,
_flags: usize,
_fcntl_flags: u32,
ctx: &CallerCtx,
) -> Result<OpenResult> {
{
let handles = self.handles.lock();
let handle = handles.get(dirfd)?;
if !matches!(handle, Handle::SchemeRoot) {
return Err(Error::new(EACCES));
}
}
if ctx.uid == 0 {
let id = self.handles.lock().insert(Handle::Todo);
Ok(OpenResult::ThisScheme {
number: id,
flags: NewFdFlags::empty(),
})
} else {
Err(Error::new(EACCES))
}
}
fn write(
&mut self,
_id: usize,
_buf: &[u8],
_offset: u64,
_flags: u32,
_ctx: &CallerCtx,
) -> Result<usize> {
//TODO
Err(Error::new(EBADF))
}
fn fpath(&mut self, _id: usize, buf: &mut [u8], _ctx: &CallerCtx) -> Result<usize> {
FpathWriter::with(buf, "audiohw", |_| Ok(()))
}
fn on_close(&mut self, id: usize) {
self.handles.lock().remove(id);
}
}
@@ -0,0 +1,118 @@
use libredox::{flag, Fd};
use redox_scheme::scheme::register_sync_scheme;
use redox_scheme::Socket;
use scheme_utils::ReadinessBased;
use std::env;
use event::{user_data, EventQueue};
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
pub mod device;
fn main() {
daemon::Daemon::new(daemon);
}
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
fn daemon(daemon: daemon::Daemon) -> ! {
let mut args = env::args().skip(1);
let addr_str = args.next().unwrap_or("220".to_string());
let addr = u16::from_str_radix(&addr_str, 16).expect("sb16: failed to parse address");
println!(" + sb16 at 0x{:X}\n", addr);
common::setup_logging(
"audio",
"pci",
"sb16",
common::output_level(),
common::file_level(),
);
common::acquire_port_io_rights().expect("sb16d: failed to acquire port IO rights");
let socket = Socket::nonblock().expect("sb16d: failed to create socket");
let mut device = unsafe { device::Sb16::new(addr).expect("sb16d: failed to allocate device") };
let mut readiness_based = ReadinessBased::new(&socket, 16);
//TODO: error on multiple IRQs?
let irq_file = match device.irqs.first() {
Some(irq) => Fd::open(&format!("/scheme/irq/{}", irq), flag::O_RDWR, 0)
.expect("sb16d: failed to open IRQ file"),
None => panic!("sb16d: no IRQs found"),
};
user_data! {
enum Source {
Irq,
Scheme,
}
}
let event_queue = EventQueue::<Source>::new().expect("sb16d: Could not create event queue.");
event_queue
.subscribe(irq_file.raw(), Source::Irq, event::EventFlags::READ)
.unwrap();
event_queue
.subscribe(
socket.inner().raw(),
Source::Scheme,
event::EventFlags::READ,
)
.unwrap();
register_sync_scheme(&socket, "sb16d", &mut device)
.expect("sb16d: failed to register sb16d scheme to namespace");
daemon.ready();
libredox::call::setrens(0, 0).expect("sb16d: failed to enter null namespace");
let all = [Source::Irq, Source::Scheme];
for event in all
.into_iter()
.chain(event_queue.map(|e| e.expect("sb16d: failed to get next event").user_data))
{
match event {
Source::Irq => {
let mut irq = [0; 8];
irq_file.read(&mut irq).unwrap();
if !device.irq() {
continue;
}
irq_file.write(&irq).unwrap();
readiness_based
.poll_all_requests(&mut device)
.expect("sb16d: failed to poll requests");
readiness_based
.write_responses()
.expect("sb16d: failed to write to socket");
/*
let next_read = device_irq.next_read();
if next_read > 0 {
return Ok(Some(next_read));
}
*/
}
Source::Scheme => {
readiness_based
.read_and_process_requests(&mut device)
.expect("sb16d: failed to read from socket");
readiness_based
.write_responses()
.expect("sb16d: failed to write to socket");
}
}
}
std::process::exit(0);
}
#[cfg(not(any(target_arch = "x86", target_arch = "x86_64")))]
fn daemon(daemon: daemon::Daemon) -> ! {
unimplemented!()
}
@@ -0,0 +1,18 @@
[package]
name = "common"
description = "Shared driver code library"
version = "0.1.0"
edition = "2021"
authors = ["4lDO2 <4lDO2@protonmail.com>"]
license = "MIT"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
libredox.workspace = true
log.workspace = true
redox_syscall = { workspace = true, features = ["std"] }
redox-log.workspace = true
[lints]
workspace = true
@@ -0,0 +1,265 @@
use std::mem::{self, size_of, MaybeUninit};
use std::ops::{Deref, DerefMut};
use std::ptr;
use std::sync::LazyLock;
use libredox::call::MmapArgs;
use libredox::{error::Result, flag, Fd};
use syscall::PAGE_SIZE;
use crate::{memory_root_fd, MemoryType, VirtaddrTranslationHandle};
/// Defines the platform-specific memory type for DMA operations
///
/// - On x86 systems, DMA uses Write-back memory ([`MemoryType::Writeback`])
/// - On aarch64 systems, DMA uses uncacheable memory ([`MemoryType::Uncacheable`])
const DMA_MEMTY: MemoryType = {
if cfg!(any(target_arch = "x86", target_arch = "x86_64")) {
// x86 ensures cache coherence with DMA memory
MemoryType::Writeback
} else if cfg!(target_arch = "aarch64") {
// aarch64 currently must map DMA memory without caching to ensure coherence
MemoryType::Uncacheable
} else if cfg!(target_arch = "riscv64") {
// FIXME check this out more
MemoryType::Uncacheable
} else {
panic!("invalid arch")
}
};
/// Returns a file descriptor for zeroized physically-contiguous DMA memory.
///
/// # Returns
///
/// A [Result] containing:
/// - '[Ok]' - A [Fd] (file descriptor) to zeroized, physically contiguous DMA-usable memory
/// - '[Err]' - The error returned by the provider of the /scheme/memory/zeroed scheme.
///
/// # Errors
///
/// This function can return an error in the following case:
///
/// - The request for the physical memory fails.
pub(crate) fn phys_contiguous_fd() -> Result<Fd> {
memory_root_fd().openat(
&format!("zeroed@{DMA_MEMTY}?phys_contiguous"),
flag::O_CLOEXEC,
0,
)
}
/// Allocates a chunk of physical memory for DMA, and then maps it to virtual memory.
///
/// # Arguments
/// 'length: [usize]' - The length of the memory region. Must be a multiple of [`PAGE_SIZE`]
///
/// # Returns
///
/// This function returns a [Result] containing the following:
/// - An '[Ok]([usize], *[mut] ())' containing a tuple of the physical address of the region, and a raw pointer to that region in virtual memory.
/// - An '[Err]' - containing the error for the operation.
///
/// # Errors
///
/// This function panics (via assertion) if:
/// - length is not a multiple of [`PAGE_SIZE`]
///
/// This function returns an error if:
/// - A file descriptor to physically contiguous memory of type [`DMA_MEMTY`] could not be acquired
/// - A virtual mapping for the physically contiguous memory could not be created
/// - The virtual address returned by the memory manager was invalid.
fn alloc_and_map(length: usize, handle: &VirtaddrTranslationHandle) -> Result<(usize, *mut ())> {
assert_eq!(length % PAGE_SIZE, 0);
unsafe {
let fd = phys_contiguous_fd()?;
let virt = libredox::call::mmap(MmapArgs {
fd: fd.raw(),
offset: 0, // ignored
addr: core::ptr::null_mut(), // ignored
length,
flags: flag::MAP_PRIVATE,
prot: flag::PROT_READ | flag::PROT_WRITE,
})?;
let phys = handle.translate(virt as usize)?;
for i in 1..length.div_ceil(PAGE_SIZE) {
debug_assert_eq!(
handle.translate(virt as usize + i * PAGE_SIZE),
Ok(phys + i * PAGE_SIZE),
"NOT CONTIGUOUS"
);
}
Ok((phys, virt as *mut ()))
}
}
/// A safe accessor for DMA memory.
pub struct Dma<T: ?Sized> {
/// The physical address of the memory
phys: usize,
/// The page-aligned length of the memory. Will be a multiple of [`PAGE_SIZE`]
aligned_len: usize,
/// The pointer to the Dma memory in the virtual address space.
virt: *mut T,
}
impl<T> Dma<T> {
/// [Dma] constructor that allocates and initializes a region of DMA memory with the page-aligned
/// size and initial value of some T
///
/// # Arguments
/// 'value: T' - The initial value to write to the allocated region
///
/// # Returns
///
/// This function returns a [Result] containing the following:
///
/// - An '[Ok] (`[Dma]<T>`)' containing the initialized region
/// - An '[Err]' containing an error.
pub fn new(value: T) -> Result<Self> {
unsafe {
let mut zeroed = Self::zeroed()?;
zeroed.as_mut_ptr().write(value);
Ok(zeroed.assume_init())
}
}
/// [Dma] constructor that allocates and zeroizes a memory region of the page-aligned size of T
///
/// # Returns
///
/// This function returns a [Result] containing the following:
///
/// - An '[Ok] (`[Dma]<[MaybeUninit]<T>>`)' containing the allocated and zeroized memory
/// - An '[Err]' containing an error.
pub fn zeroed() -> Result<Dma<MaybeUninit<T>>> {
let aligned_len = size_of::<T>().next_multiple_of(PAGE_SIZE);
let (phys, virt) = alloc_and_map(aligned_len, &*VIRTTOPHYS_HANDLE)?;
Ok(Dma {
phys,
virt: virt.cast(),
aligned_len,
})
}
}
impl<T> Dma<MaybeUninit<T>> {
/// Assumes that possibly uninitialized DMA memory has been initialized, and returns a new
/// instance of an object of type `[Dma]<T>`.
///
/// # Returns
/// - `[Dma]<T>` - The original structure without the [`MaybeUninit`] wrapper around its contents.
///
/// # Notes
/// - This is unsafe because it assumes that the memory stored within the `[Dma]<T>` is a valid
///   instance of T. If it isn't (for example, if it came from [`Dma::zeroed`] and an all-zero
///   bit pattern is not a valid T), then the underlying memory may not contain the expected T structure.
pub unsafe fn assume_init(self) -> Dma<T> {
let Dma {
phys,
aligned_len,
virt,
} = self;
mem::forget(self);
Dma {
phys,
aligned_len,
virt: virt.cast(),
}
}
}
impl<T: ?Sized> Dma<T> {
/// Returns the physical address of the physical memory that this [Dma] structure references.
///
/// # Returns
/// [usize] - The physical address of the memory.
pub fn physical(&self) -> usize {
self.phys
}
}
// TODO: there should exist a "context" struct that drivers create at start, which would be passed
// to the respective functions
static VIRTTOPHYS_HANDLE: LazyLock<VirtaddrTranslationHandle> = LazyLock::new(|| {
VirtaddrTranslationHandle::new().expect("failed to acquire virttophys translation handle")
});
impl<T> Dma<[T]> {
/// Returns a [Dma] object containing a zeroized slice of T with a given count.
///
/// # Arguments
///
/// - 'count: [usize]' - The number of elements of type T in the allocated slice.
pub fn zeroed_slice(count: usize) -> Result<Dma<[MaybeUninit<T>]>> {
let aligned_len = count
.checked_mul(size_of::<T>())
.unwrap()
.next_multiple_of(PAGE_SIZE);
let (phys, virt) = alloc_and_map(aligned_len, &*VIRTTOPHYS_HANDLE)?;
Ok(Dma {
phys,
aligned_len,
virt: ptr::slice_from_raw_parts_mut(virt.cast(), count),
})
}
/// Casts the slice element type from T to U.
///
/// # Returns
/// '`Dma<[U]>`' - A handle to the same DMA memory, reinterpreted as a slice of U.
pub unsafe fn cast_slice<U>(self) -> Dma<[U]> {
let Dma {
phys,
virt,
aligned_len,
} = self;
core::mem::forget(self);
Dma {
phys,
virt: virt as *mut [U],
aligned_len,
}
}
}
impl<T> Dma<[MaybeUninit<T>]> {
/// See [`Dma<MaybeUninit<T>>::assume_init`]
pub unsafe fn assume_init(self) -> Dma<[T]> {
let &Dma {
phys,
aligned_len,
virt,
} = &self;
mem::forget(self);
Dma {
phys,
aligned_len,
virt: virt as *mut [T],
}
}
}
impl<T: ?Sized> Deref for Dma<T> {
type Target = T;
fn deref(&self) -> &T {
unsafe { &*self.virt }
}
}
impl<T: ?Sized> DerefMut for Dma<T> {
fn deref_mut(&mut self) -> &mut T {
unsafe { &mut *self.virt }
}
}
impl<T: ?Sized> Drop for Dma<T> {
fn drop(&mut self) {
unsafe {
ptr::drop_in_place(self.virt);
let _ = libredox::call::munmap(self.virt as *mut (), self.aligned_len);
}
}
}
@@ -0,0 +1,95 @@
use core::{
cmp::PartialEq,
ops::{BitAnd, BitOr, Not},
};
mod mmio;
mod mmio_ptr;
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
mod pio;
pub use mmio::*;
pub use mmio_ptr::*;
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
pub use pio::*;
/// IO abstraction
pub trait Io {
/// Value type for IO, usually some unsigned number
type Value: Copy
+ PartialEq
+ BitAnd<Output = Self::Value>
+ BitOr<Output = Self::Value>
+ Not<Output = Self::Value>;
/// Read the underlying value
fn read(&self) -> Self::Value;
/// Write the underlying value
fn write(&mut self, value: Self::Value);
/// Check whether the underlying value contains bit flags
#[inline(always)]
fn readf(&self, flags: Self::Value) -> bool {
(self.read() & flags) == flags
}
/// Enable or disable specific bit flags
#[inline(always)]
fn writef(&mut self, flags: Self::Value, value: bool) {
let tmp: Self::Value = match value {
true => self.read() | flags,
false => self.read() & !flags,
};
self.write(tmp);
}
}
/// Read-only IO
#[repr(transparent)]
pub struct ReadOnly<I> {
inner: I,
}
impl<I: Io> ReadOnly<I> {
/// Wraps IO
pub const fn new(inner: I) -> ReadOnly<I> {
ReadOnly { inner }
}
}
impl<I: Io> ReadOnly<I> {
/// Calls [`Io::read`]
#[inline(always)]
pub fn read(&self) -> I::Value {
self.inner.read()
}
/// Calls [`Io::readf`]
#[inline(always)]
pub fn readf(&self, flags: I::Value) -> bool {
self.inner.readf(flags)
}
}
#[repr(transparent)]
/// Write-only IO
pub struct WriteOnly<I> {
inner: I,
}
impl<I: Io> WriteOnly<I> {
/// Wraps IO
pub const fn new(inner: I) -> WriteOnly<I> {
WriteOnly { inner }
}
}
impl<I: Io> WriteOnly<I> {
/// Calls [`Io::write`]
#[inline(always)]
pub fn write(&mut self, value: I::Value) {
self.inner.write(value)
}
// writef requires read which is not valid when write-only
}
@@ -0,0 +1,173 @@
use core::{mem::MaybeUninit, ptr};
use super::Io;
/// MMIO abstraction
#[repr(C, packed)]
pub struct Mmio<T> {
value: MaybeUninit<T>,
}
impl<T> Mmio<T> {
/// Creates a zeroed instance
pub unsafe fn zeroed() -> Self {
Self {
value: MaybeUninit::zeroed(),
}
}
/// Creates an uninitialized instance
pub unsafe fn uninit() -> Self {
Self {
value: MaybeUninit::uninit(),
}
}
/// Creates a new instance
pub const fn new(value: T) -> Self {
Self {
value: MaybeUninit::new(value),
}
}
}
// Generic implementation (WARNING: requires aligned pointers!)
#[cfg(not(any(target_arch = "x86", target_arch = "x86_64")))]
impl<T> Io for Mmio<T>
where
T: Copy
+ PartialEq
+ core::ops::BitAnd<Output = T>
+ core::ops::BitOr<Output = T>
+ core::ops::Not<Output = T>,
{
type Value = T;
fn read(&self) -> T {
unsafe { ptr::read_volatile(ptr::addr_of!(self.value).cast::<T>()) }
}
fn write(&mut self, value: T) {
unsafe { ptr::write_volatile(ptr::addr_of_mut!(self.value).cast::<T>(), value) };
}
}
// x86 u8 implementation
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
impl Io for Mmio<u8> {
type Value = u8;
fn read(&self) -> Self::Value {
unsafe {
let value: Self::Value;
let ptr: *const Self::Value = ptr::addr_of!(self.value).cast::<Self::Value>();
core::arch::asm!(
"mov {}, [{}]",
out(reg_byte) value,
in(reg) ptr
);
value
}
}
fn write(&mut self, value: Self::Value) {
unsafe {
let ptr: *mut Self::Value = ptr::addr_of_mut!(self.value).cast::<Self::Value>();
core::arch::asm!(
"mov [{}], {}",
in(reg) ptr,
in(reg_byte) value,
);
}
}
}
// x86 u16 implementation
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
impl Io for Mmio<u16> {
type Value = u16;
fn read(&self) -> Self::Value {
unsafe {
let value: Self::Value;
let ptr: *const Self::Value = ptr::addr_of!(self.value).cast::<Self::Value>();
core::arch::asm!(
"mov {:x}, [{}]",
out(reg) value,
in(reg) ptr
);
value
}
}
fn write(&mut self, value: Self::Value) {
unsafe {
let ptr: *mut Self::Value = ptr::addr_of_mut!(self.value).cast::<Self::Value>();
core::arch::asm!(
"mov [{}], {:x}",
in(reg) ptr,
in(reg) value,
);
}
}
}
// x86 u32 implementation
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
impl Io for Mmio<u32> {
type Value = u32;
fn read(&self) -> Self::Value {
unsafe {
let value: Self::Value;
let ptr: *const Self::Value = ptr::addr_of!(self.value).cast::<Self::Value>();
core::arch::asm!(
"mov {:e}, [{}]",
out(reg) value,
in(reg) ptr
);
value
}
}
fn write(&mut self, value: Self::Value) {
unsafe {
let ptr: *mut Self::Value = ptr::addr_of_mut!(self.value).cast::<Self::Value>();
core::arch::asm!(
"mov [{}], {:e}",
in(reg) ptr,
in(reg) value,
);
}
}
}
// x86 u64 implementation (x86_64 only)
#[cfg(target_arch = "x86_64")]
impl Io for Mmio<u64> {
type Value = u64;
fn read(&self) -> Self::Value {
unsafe {
let value: Self::Value;
let ptr: *const Self::Value = ptr::addr_of!(self.value).cast::<Self::Value>();
core::arch::asm!(
"mov {:r}, [{}]",
out(reg) value,
in(reg) ptr
);
value
}
}
fn write(&mut self, value: Self::Value) {
unsafe {
let ptr: *mut Self::Value = ptr::addr_of_mut!(self.value).cast::<Self::Value>();
core::arch::asm!(
"mov [{}], {:r}",
in(reg) ptr,
in(reg) value,
);
}
}
}
@@ -0,0 +1,157 @@
use super::Io;
/// MMIO using pointer instead of wrapped type
pub struct MmioPtr<T> {
ptr: *mut T,
}
impl<T> MmioPtr<T> {
//TODO: reads and writes are unsafe, not new.
/// Creates a `MmioPtr`.
pub unsafe fn new(ptr: *mut T) -> Self {
Self { ptr }
}
/// Creates a const pointer from a `MmioPtr`.
pub const fn as_ptr(&self) -> *const T {
self.ptr
}
/// Creates a mutable pointer from a `MmioPtr`.
pub const fn as_mut_ptr(&mut self) -> *mut T {
self.ptr
}
}
// Generic implementation (WARNING: requires aligned pointers!)
#[cfg(not(any(target_arch = "x86", target_arch = "x86_64")))]
impl<T> Io for MmioPtr<T>
where
T: Copy
+ PartialEq
+ core::ops::BitAnd<Output = T>
+ core::ops::BitOr<Output = T>
+ core::ops::Not<Output = T>,
{
type Value = T;
fn read(&self) -> T {
unsafe { core::ptr::read_volatile(self.ptr) }
}
fn write(&mut self, value: T) {
unsafe { core::ptr::write_volatile(self.ptr, value) };
}
}
// x86 u8 implementation
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
impl Io for MmioPtr<u8> {
type Value = u8;
fn read(&self) -> Self::Value {
unsafe {
let value: Self::Value;
core::arch::asm!(
"mov {}, [{}]",
out(reg_byte) value,
in(reg) self.ptr
);
value
}
}
fn write(&mut self, value: Self::Value) {
unsafe {
core::arch::asm!(
"mov [{}], {}",
in(reg) self.ptr,
in(reg_byte) value,
);
}
}
}
// x86 u16 implementation
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
impl Io for MmioPtr<u16> {
type Value = u16;
fn read(&self) -> Self::Value {
unsafe {
let value: Self::Value;
core::arch::asm!(
"mov {:x}, [{}]",
out(reg) value,
in(reg) self.ptr
);
value
}
}
fn write(&mut self, value: Self::Value) {
unsafe {
core::arch::asm!(
"mov [{}], {:x}",
in(reg) self.ptr,
in(reg) value,
);
}
}
}
// x86 u32 implementation
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
impl Io for MmioPtr<u32> {
type Value = u32;
fn read(&self) -> Self::Value {
unsafe {
let value: Self::Value;
core::arch::asm!(
"mov {:e}, [{}]",
out(reg) value,
in(reg) self.ptr
);
value
}
}
fn write(&mut self, value: Self::Value) {
unsafe {
core::arch::asm!(
"mov [{}], {:e}",
in(reg) self.ptr,
in(reg) value,
);
}
}
}
// x86 u64 implementation (x86_64 only)
#[cfg(target_arch = "x86_64")]
impl Io for MmioPtr<u64> {
type Value = u64;
fn read(&self) -> Self::Value {
unsafe {
let value: Self::Value;
core::arch::asm!(
"mov {:r}, [{}]",
out(reg) value,
in(reg) self.ptr
);
value
}
}
fn write(&mut self, value: Self::Value) {
unsafe {
core::arch::asm!(
"mov [{}], {:r}",
in(reg) self.ptr,
in(reg) value,
);
}
}
}
@@ -0,0 +1,89 @@
use core::{arch::asm, marker::PhantomData};
use super::Io;
/// Generic PIO
#[derive(Copy, Clone)]
pub struct Pio<T> {
port: u16,
value: PhantomData<T>,
}
impl<T> Pio<T> {
/// Create a PIO from a given port
pub const fn new(port: u16) -> Self {
Pio::<T> {
port,
value: PhantomData,
}
}
}
/// Read/Write for byte PIO
impl Io for Pio<u8> {
type Value = u8;
/// Read
#[inline(always)]
fn read(&self) -> u8 {
let value: u8;
unsafe {
asm!("in al, dx", in("dx") self.port, out("al") value, options(nostack, nomem, preserves_flags));
}
value
}
/// Write
#[inline(always)]
fn write(&mut self, value: u8) {
unsafe {
asm!("out dx, al", in("dx") self.port, in("al") value, options(nostack, nomem, preserves_flags));
}
}
}
/// Read/Write for word PIO
impl Io for Pio<u16> {
type Value = u16;
/// Read
#[inline(always)]
fn read(&self) -> u16 {
let value: u16;
unsafe {
asm!("in ax, dx", in("dx") self.port, out("ax") value, options(nostack, nomem, preserves_flags));
}
value
}
/// Write
#[inline(always)]
fn write(&mut self, value: u16) {
unsafe {
asm!("out dx, ax", in("dx") self.port, in("ax") value, options(nostack, nomem, preserves_flags));
}
}
}
/// Read/Write for doubleword PIO
impl Io for Pio<u32> {
type Value = u32;
/// Read
#[inline(always)]
fn read(&self) -> u32 {
let value: u32;
unsafe {
asm!("in eax, dx", in("dx") self.port, out("eax") value, options(nostack, nomem, preserves_flags));
}
value
}
/// Write
#[inline(always)]
fn write(&mut self, value: u32) {
unsafe {
asm!("out dx, eax", in("dx") self.port, in("eax") value, options(nostack, nomem, preserves_flags));
}
}
}
@@ -0,0 +1,331 @@
//! This crate provides various abstractions for use by all drivers in the Redox drivers repo.
//!
//! This includes direct memory access via [dma], and Scatter-Gather List support via [sgl]. It also
//! provides various memory management structures for use with drivers, and some logging support.
use libredox::call::MmapArgs;
use libredox::flag::{self, O_CLOEXEC, O_RDONLY, O_RDWR, O_WRONLY};
use libredox::{
errno::EINVAL,
error::{Error, Result},
Fd,
};
use syscall::{ProcSchemeVerb, PAGE_SIZE};
/// The Direct Memory Access (DMA) API for drivers
pub mod dma;
/// MMIO utilities
pub mod io;
mod logger;
/// The Scatter Gather List (SGL) API for drivers.
pub mod sgl;
/// Low latency timeout for driver loops
pub mod timeout;
pub use logger::{file_level, output_level, setup_logging};
use std::sync::OnceLock;
static MEMORY_ROOT_FD: OnceLock<libredox::Fd> = OnceLock::new();
/// Initializes a file descriptor to be used as the root memory for a driver.
///
/// # Panics
///
/// This function will panic if:
/// - `libredox` is unable to open a file descriptor.
/// - The memory root file descriptor has already been set (this function has already been called).
pub fn init() {
if MEMORY_ROOT_FD
.set(
libredox::Fd::open("/scheme/memory/scheme-root", 0, 0)
.expect("drivers common: failed to open memory root fd"),
)
.is_err()
{
panic!("drivers common: failed to set memory root fd");
}
}
/// Gets the memory root file descriptor.
///
/// # Panics
///
/// This function will panic if `init` has not already been called first.
pub fn memory_root_fd() -> &'static libredox::Fd {
MEMORY_ROOT_FD
.get()
.expect("drivers common: memory root fd not initialized. Please call `common::init` in your main function.")
}
/// Specifies the write behavior for a specific region of memory
///
/// These types indicate to the driver how writes to a specific memory region are handled by the
/// system. This usually refers to the caching behavior that the processor or I/O device responsible
/// for that memory implements.
///
/// aarch64 and x86 have very different cache-coherency rules, so this API as written is likely
/// not sufficient to describe the memory caching behavior in a cross-platform manner. As such,
/// consider this API unstable.
#[derive(Clone, Copy, Debug)]
pub enum MemoryType {
/// A region of memory that implements Write-back caching.
///
/// In write-back caching, the processor will first store data in its local cache, and then
/// flush it to the actual storage location at regular intervals, or as applications access
/// the data.
Writeback,
/// A region of memory that does not implement caching. Writes to these regions are immediate.
Uncacheable,
/// A region of memory that implements write combining.
///
/// Write combining memory regions store all writes in a temporary buffer called a Write
/// Combine Buffer. Multiple writes to the location are stored in a single buffer, and then
/// released to the memory location in an unspecified order. Write-Combine memory does not
/// guarantee that the order at which you write to it is the order at which those writes are
/// committed to memory.
WriteCombining,
/// A region of memory used for Memory-Mapped I/O device registers.
/// This is an aarch64-specific term.
DeviceMemory,
}
impl Default for MemoryType {
fn default() -> Self {
Self::Writeback
}
}
/// Represents the protection level of an area of memory.
///
/// This structure shouldn't be used directly -- instead, use the [`Prot::RO`] (Read-Only),
/// [`Prot::WO`] (Write-Only) and [`Prot::RW`] (Read-Write) constants to specify the memory's protection
/// level.
#[derive(Clone, Copy, Debug)]
pub struct Prot {
/// The memory is readable
pub read: bool,
/// The memory is writeable
pub write: bool,
}
/// Implements the memory protection level constants
impl Prot {
/// A constant representing Read-Only memory.
pub const RO: Self = Self {
read: true,
write: false,
};
/// A constant representing Write-Only memory
pub const WO: Self = Self {
read: false,
write: true,
};
/// A constant representing Read-Write memory
pub const RW: Self = Self {
read: true,
write: true,
};
}
/// Maps physical memory to virtual memory
///
/// # Arguments
///
/// * '`base_phys`: [usize]' - The base address of the physical memory to map.
/// * 'len: [usize]' - The length of the physical memory to map (should be a multiple of [`PAGE_SIZE`])
/// * '_: [Prot]' - The memory protection level of the mapping.
/// * 'type: [`MemoryType`]' - The caching behavior specification of the memory.
///
/// # Returns
///
/// A '[Result]<*mut ()>' which is:
/// - '[Ok]' containing a raw pointer to the mapped memory.
/// - '[Err]' which contains an error on failure.
///
/// # Errors
///
/// This function will return an error if:
/// - `base_phys` is zero, or both `read` and `write` are `false`.
/// - The system could not open a file descriptor to the memory scheme for the specified [`MemoryType`].
/// - The system failed to map the physical address to a virtual address. See [`libredox::call::mmap`]
///
/// # Safety
///
/// The kernel guarantees that the mapping cannot conflict with regular RAM described in the
/// memory map, but the caller is responsible for every access made through the returned raw
/// pointer and for eventually freeing the mapping with [`libredox::call::munmap`]. For a safe
/// wrapper that unmaps automatically, use [`PhysBorrowed`] instead.
///
/// # Notes
/// - The `MemoryType` specified is used to select which memory scheme path to open (i.e.
///   `/scheme/memory/physical@wb`, `/scheme/memory/physical@uc`, etc).
pub unsafe fn physmap(
base_phys: usize,
len: usize,
Prot { read, write }: Prot,
ty: MemoryType,
) -> Result<*mut ()> {
// TODO: arraystring?
// Return an error rather than potentially crash the kernel.
if base_phys == 0 {
return Err(Error::new(EINVAL));
}
let path = format!(
"physical@{}",
match ty {
MemoryType::Writeback => "wb",
MemoryType::Uncacheable => "uc",
MemoryType::WriteCombining => "wc",
MemoryType::DeviceMemory => "dev",
}
);
let mode = match (read, write) {
(true, true) => O_RDWR,
(true, false) => O_RDONLY,
(false, true) => O_WRONLY,
(false, false) => return Err(Error::new(EINVAL)),
};
let mut prot = 0;
if read {
prot |= flag::PROT_READ;
}
if write {
prot |= flag::PROT_WRITE;
}
let fd = memory_root_fd().openat(&path, O_CLOEXEC | mode, 0)?;
Ok(libredox::call::mmap(MmapArgs {
fd: fd.raw(),
offset: base_phys as u64,
length: len.next_multiple_of(PAGE_SIZE),
flags: flag::MAP_SHARED,
prot,
addr: core::ptr::null_mut(),
})? as *mut ())
}
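Two details of `physmap` worth isolating: the scheme path is derived from the `MemoryType` suffix, and the mapped length is rounded up to whole pages. A standalone sketch of just those two steps, with `PAGE_SIZE = 4096` assumed here for illustration:

```rust
// Standalone sketch (not the driver code itself) of the path derivation and
// page rounding performed by `physmap`. PAGE_SIZE = 4096 is an assumption.
const PAGE_SIZE: usize = 4096;

#[derive(Clone, Copy)]
enum MemoryType {
    Writeback,
    Uncacheable,
    WriteCombining,
    DeviceMemory,
}

fn scheme_path(ty: MemoryType) -> String {
    // Mirrors the match in `physmap`: the caching mode picks the scheme path.
    format!(
        "physical@{}",
        match ty {
            MemoryType::Writeback => "wb",
            MemoryType::Uncacheable => "uc",
            MemoryType::WriteCombining => "wc",
            MemoryType::DeviceMemory => "dev",
        }
    )
}

fn mapped_len(len: usize) -> usize {
    // The kernel maps whole pages, so the requested length is rounded up.
    len.next_multiple_of(PAGE_SIZE)
}

fn main() {
    assert_eq!(scheme_path(MemoryType::WriteCombining), "physical@wc");
    assert_eq!(mapped_len(1), PAGE_SIZE);
    assert_eq!(mapped_len(5000), 2 * PAGE_SIZE);
    assert_eq!(mapped_len(4096), 4096);
}
```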
impl std::fmt::Display for MemoryType {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(
f,
"{}",
match self {
Self::Writeback => "wb",
Self::Uncacheable => "uc",
Self::WriteCombining => "wc",
Self::DeviceMemory => "dev",
}
)
}
}
/// A safe virtual mapping to physical memory that unmaps the memory when the structure goes out
/// of scope.
///
/// This structure provides a safe wrapper around [physmap]. It implements [`Drop`] to free the
/// mapped memory when it goes out of scope.
pub struct PhysBorrowed {
mem: *mut (),
len: usize,
}
impl PhysBorrowed {
/// Constructs a `PhysBorrowed` instance.
///
/// # Arguments
/// See [physmap] for a description of the parameters.
///
/// # Returns
/// A [`Result`] which is:
/// - [`Ok`] containing a [`PhysBorrowed`] representing the newly mapped region.
/// - [`Err`] if a memory mapping error occurs.
///
/// # Errors
/// See [physmap] for a description of the error cases.
pub fn map(base_phys: usize, len: usize, prot: Prot, ty: MemoryType) -> Result<Self> {
let mem = unsafe { physmap(base_phys, len, prot, ty)? };
Ok(Self {
mem,
len: len.next_multiple_of(PAGE_SIZE),
})
}
/// Gets a raw pointer to the borrowed region.
///
/// # Returns
/// A raw pointer to the mapped region in virtual memory.
///
/// # Notes
/// - The returned pointer may outlive this [`PhysBorrowed`], but the mapping is removed when
///   the structure is dropped, so any dereference of the pointer must be treated as unsafe.
///
pub fn as_ptr(&self) -> *mut () {
self.mem
}
/// Gets the length of the mapped region.
///
/// # Returns
/// The length of the mapped region, rounded up to a multiple of [`PAGE_SIZE`].
pub fn mapped_len(&self) -> usize {
self.len
}
}
impl Drop for PhysBorrowed {
/// Frees the mapped memory region.
fn drop(&mut self) {
unsafe {
let _ = libredox::call::munmap(self.mem, self.len);
}
}
}
/// Instructs the kernel to enable I/O ports for this (usermode) process (x86-specific).
///
/// On Redox, x86 privilege ring 3 represents userspace. Most Redox drivers run in userspace to
/// prevent system instability caused by a faulty driver. Processes with (bitmap-enabled) IO port
/// rights can use the IN/OUT instructions. This is not the same as IOPL 3; the CLI instruction is
/// still not allowed.
pub fn acquire_port_io_rights() -> Result<()> {
extern "C" {
fn redox_cur_thrfd_v0() -> usize;
}
let kernel_fd = syscall::dup(unsafe { redox_cur_thrfd_v0() }, b"open_via_dup")?;
let res = libredox::call::call_wo(
kernel_fd,
&[],
syscall::CallFlags::empty(),
&[ProcSchemeVerb::Iopl as u64],
);
let _ = syscall::close(kernel_fd);
res?;
Ok(())
}
/// Kernel handle for translating virtual addresses in the current address space, to their
/// underlying physical addresses.
///
/// It is currently unspecified whether this handle is specific to the address space at the time it
/// was created, or whether all calls reference the currently active address space.
pub struct VirtaddrTranslationHandle {
fd: Fd,
}
impl VirtaddrTranslationHandle {
/// Create a new handle, requires uid=0 but this may change.
pub fn new() -> Result<Self> {
Ok(Self {
fd: memory_root_fd().openat("translation", O_CLOEXEC, 0)?,
})
}
    /// Translate a virtual address to its underlying physical address.
    pub fn translate(&self, virt: usize) -> Result<usize> {
        let mut buf = virt.to_ne_bytes();
libredox::call::call_ro(self.fd.raw(), &mut buf, syscall::CallFlags::empty(), &[])?;
Ok(usize::from_ne_bytes(buf))
}
}
@@ -0,0 +1,108 @@
use std::str::FromStr;
use libredox::{flag, Fd};
use redox_log::{OutputBuilder, RedoxLogger};
/// Get the default log verbosity for terminal output.
pub fn output_level() -> log::LevelFilter {
log::LevelFilter::Info
}
/// Get the default log verbosity for log-file output.
pub fn file_level() -> log::LevelFilter {
log::LevelFilter::Info
}
/// Configures logging for a single driver.
#[cfg_attr(not(target_os = "redox"), allow(unused_variables, unused_mut))]
pub fn setup_logging(
category: &str,
subcategory: &str,
logfile_base: &str,
mut output_level: log::LevelFilter,
file_level: log::LevelFilter,
) {
RedoxLogger::init_timezone();
if let Some(log_level) = read_bootloader_log_level_env(category, subcategory) {
output_level = log_level;
}
let mut logger = RedoxLogger::new().with_output(
OutputBuilder::stderr()
.with_filter(output_level) // limit global output to important info
.with_ansi_escape_codes()
.flush_on_newline(true)
.build(),
);
#[cfg(target_os = "redox")]
match OutputBuilder::in_redox_logging_scheme(
category,
subcategory,
format!("{logfile_base}.log"),
) {
Ok(b) => {
logger = logger.with_output(b.with_filter(file_level).flush_on_newline(true).build())
}
Err(error) => eprintln!("Failed to create {logfile_base}.log: {}", error),
}
#[cfg(target_os = "redox")]
match OutputBuilder::in_redox_logging_scheme(
category,
subcategory,
format!("{logfile_base}.ansi.log"),
) {
Ok(b) => {
logger = logger.with_output(
b.with_filter(file_level)
.with_ansi_escape_codes()
.flush_on_newline(true)
.build(),
)
}
Err(error) => eprintln!("Failed to create {logfile_base}.ansi.log: {}", error),
}
logger.enable().expect("failed to set default logger");
}
fn read_bootloader_log_level_env(category: &str, subcategory: &str) -> Option<log::LevelFilter> {
let mut env_bytes = [0_u8; 4096];
    // TODO: Have the kernel env support querying by key prefix instead of having to read all of them
let envs = {
let Ok(fd) = Fd::open("/scheme/sys/env", flag::O_RDONLY | flag::O_CLOEXEC, 0) else {
return None;
};
let Ok(bytes_read) = fd.read(&mut env_bytes) else {
return None;
};
if bytes_read >= env_bytes.len() {
return None;
}
let env_bytes = &mut env_bytes[..bytes_read];
env_bytes
.split(|&c| c == b'\n')
.filter(|var| var.starts_with(b"DRIVER_"))
.collect::<Vec<_>>()
};
let log_env_keys = [
format!("DRIVER_{}_LOG_LEVEL=", subcategory.to_ascii_uppercase()),
format!("DRIVER_{}_LOG_LEVEL=", category.to_ascii_uppercase()),
"DRIVER_LOG_LEVEL=".to_string(),
];
for log_env_key in log_env_keys {
let log_env_key = log_env_key.as_bytes();
if let Some(log_env) = envs.iter().find_map(|var| var.strip_prefix(log_env_key)) {
if let Ok(Ok(log_level)) = str::from_utf8(&log_env).map(log::LevelFilter::from_str) {
return Some(log_level);
}
}
}
None
}
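The key precedence in `read_bootloader_log_level_env` is most-specific-first: the subcategory key, then the category key, then the global `DRIVER_LOG_LEVEL`. A standalone sketch of that lookup over string environment entries (level parsing elided; the entries here are invented examples):

```rust
// Standalone sketch of the lookup order used above: the first matching key in
// subcategory -> category -> global order wins.
fn find_level<'a>(env: &'a [&'a str], category: &str, subcategory: &str) -> Option<&'a str> {
    let keys = [
        format!("DRIVER_{}_LOG_LEVEL=", subcategory.to_ascii_uppercase()),
        format!("DRIVER_{}_LOG_LEVEL=", category.to_ascii_uppercase()),
        "DRIVER_LOG_LEVEL=".to_string(),
    ];
    for key in &keys {
        // Return the value of the first entry carrying this key.
        if let Some(value) = env.iter().find_map(|var| var.strip_prefix(key.as_str())) {
            return Some(value);
        }
    }
    None
}

fn main() {
    let env = ["DRIVER_LOG_LEVEL=info", "DRIVER_NVMED_LOG_LEVEL=trace"];
    // The subcategory-specific key shadows the global default.
    assert_eq!(find_level(&env, "storage", "nvmed"), Some("trace"));
    // With no specific key, the global default applies.
    assert_eq!(find_level(&env, "storage", "ahcid"), Some("info"));
    // No key at all: fall back to the compiled-in default.
    assert_eq!(find_level(&[], "storage", "ahcid"), None);
}
```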
@@ -0,0 +1,130 @@
use std::num::NonZeroUsize;
use libredox::call::MmapArgs;
use libredox::errno::EINVAL;
use libredox::error::{Error, Result};
use libredox::flag::{MAP_PRIVATE, PROT_NONE, PROT_READ, PROT_WRITE};
use syscall::{MAP_FIXED, PAGE_SIZE};
use crate::dma::phys_contiguous_fd;
use crate::VirtaddrTranslationHandle;
/// A Scatter-Gather List data structure
///
/// See: <https://en.wikipedia.org/wiki/Gather/scatter_(vector_addressing)>
#[derive(Debug)]
pub struct Sgl {
/// A raw pointer to the SGL in virtual memory
virt: *mut u8,
/// The length of the allocated memory, guaranteed to be a multiple of [`PAGE_SIZE`].
aligned_length: usize,
    /// The requested length of the allocation. This value is NOT guaranteed to be a multiple of [`PAGE_SIZE`]
unaligned_length: NonZeroUsize,
/// The vector of chunks tracked by this [Sgl] object. This is the sparsely-populated vector in the SGL algorithm.
chunks: Vec<Chunk>,
}
/// A structure representing a chunk of memory in the sparsely-populated vector of the SGL
#[derive(Debug)]
pub struct Chunk {
/// The offset of the chunk in the sparsely-populated vector.
pub offset: usize,
/// The physical address of the chunk
pub phys: usize,
/// A raw pointer to the chunk in virtual memory
pub virt: *mut u8,
/// The length of the chunk in bytes.
pub length: usize,
}
impl Sgl {
/// Constructor for the scatter/gather list.
///
/// # Arguments
///
    /// * `unaligned_length` - The length of the SGL, not necessarily aligned to the nearest
    ///   page. Must be non-zero, or an `EINVAL` error is returned.
pub fn new(unaligned_length: usize) -> Result<Self> {
let unaligned_length = NonZeroUsize::new(unaligned_length).ok_or(Error::new(EINVAL))?;
// TODO: Both PAGE_SIZE and MAX_ALLOC_SIZE should be dynamic.
let aligned_length = unaligned_length.get().next_multiple_of(PAGE_SIZE);
const MAX_ALLOC_SIZE: usize = 1 << 22;
unsafe {
let virt = libredox::call::mmap(MmapArgs {
flags: MAP_PRIVATE,
prot: PROT_NONE,
length: aligned_length,
offset: 0,
fd: !0,
addr: core::ptr::null_mut(),
})?
.cast::<u8>();
let mut this = Self {
virt,
aligned_length,
unaligned_length,
chunks: Vec::new(),
};
// TODO: SglContext to avoid reopening these fds?
let phys_contiguous_fd = phys_contiguous_fd()?;
let virttophys_handle = VirtaddrTranslationHandle::new()?;
let mut offset = 0;
while offset < aligned_length {
let preferred_chunk_length = (aligned_length - offset)
.min(MAX_ALLOC_SIZE)
.next_power_of_two();
let chunk_length = if preferred_chunk_length > aligned_length - offset {
preferred_chunk_length / 2
} else {
preferred_chunk_length
};
libredox::call::mmap(MmapArgs {
addr: virt.add(offset).cast(),
flags: MAP_PRIVATE | (MAP_FIXED.bits() as u32),
prot: PROT_READ | PROT_WRITE,
length: chunk_length,
fd: phys_contiguous_fd.raw(),
offset: 0,
})?;
let phys = virttophys_handle.translate(virt as usize + offset)?;
this.chunks.push(Chunk {
offset,
phys,
length: (unaligned_length.get() - offset).min(chunk_length),
virt: virt.add(offset),
});
offset += chunk_length;
}
Ok(this)
}
}
/// Returns an immutable reference to the vector of chunks
pub fn chunks(&self) -> &[Chunk] {
&self.chunks
}
    /// Returns a raw pointer to the start of the mapped buffer in virtual memory
pub fn as_ptr(&self) -> *mut u8 {
self.virt
}
    /// Returns the requested (unaligned) length of the scatter-gather list.
pub fn len(&self) -> usize {
self.unaligned_length.get()
}
}
impl Drop for Sgl {
fn drop(&mut self) {
unsafe {
let _ = libredox::call::munmap(self.virt.cast(), self.aligned_length);
}
}
}
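The allocation loop in `Sgl::new` picks each chunk as the largest power of two that fits in the remaining aligned length, capped at `MAX_ALLOC_SIZE`. A standalone sketch of just the size selection; the `PAGE_SIZE` and `MAX_ALLOC_SIZE` values mirror the constants above but are assumptions here:

```rust
// Standalone sketch of the chunk-size selection in `Sgl::new`.
const PAGE_SIZE: usize = 4096;
const MAX_ALLOC_SIZE: usize = 1 << 22;

fn chunk_lengths(unaligned_length: usize) -> Vec<usize> {
    let aligned_length = unaligned_length.next_multiple_of(PAGE_SIZE);
    let mut lengths = Vec::new();
    let mut offset = 0;
    while offset < aligned_length {
        let preferred = (aligned_length - offset)
            .min(MAX_ALLOC_SIZE)
            .next_power_of_two();
        // next_power_of_two rounds up, so halve if we overshot the remainder.
        let chunk = if preferred > aligned_length - offset {
            preferred / 2
        } else {
            preferred
        };
        lengths.push(chunk);
        offset += chunk;
    }
    lengths
}

fn main() {
    // 3 pages: one 2-page chunk followed by one 1-page chunk.
    assert_eq!(chunk_lengths(3 * PAGE_SIZE), vec![2 * PAGE_SIZE, PAGE_SIZE]);
    // Every chunk is a power of two, and the chunks cover the aligned length.
    assert!(chunk_lengths(10 * PAGE_SIZE).iter().all(|l| l.is_power_of_two()));
    assert_eq!(chunk_lengths(5000).iter().sum::<usize>(), 2 * PAGE_SIZE);
}
```

Power-of-two chunk sizes matter because the physically contiguous allocator can satisfy them without fragmentation, while the SGL still covers arbitrarily sized buffers.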
@@ -0,0 +1,56 @@
use std::time::{Duration, Instant};
/// A deadline for polling loops: until the duration elapses, [`Timeout::run`] gives the rest
/// of the driver's time slice back to the OS scheduler instead of sleeping.
pub struct Timeout {
instant: Instant,
duration: Duration,
}
impl Timeout {
/// Create a new `Timeout` from a `Duration`.
#[inline]
pub fn new(duration: Duration) -> Self {
Self {
instant: Instant::now(),
duration,
}
}
    /// Create a new `Timeout` from a number of microseconds.
#[inline]
pub fn from_micros(micros: u64) -> Self {
Self::new(Duration::from_micros(micros))
}
    /// Create a new `Timeout` from a number of milliseconds.
#[inline]
pub fn from_millis(millis: u64) -> Self {
Self::new(Duration::from_millis(millis))
}
    /// Create a new `Timeout` from a number of seconds.
#[inline]
pub fn from_secs(secs: u64) -> Self {
Self::new(Duration::from_secs(secs))
}
    /// Run one polling iteration, yielding to the scheduler if time remains.
    ///
    /// # Errors
    ///
    /// Returns an `Err` if the duration of the `Timeout` has already elapsed
    /// since the `Timeout` was created, signalling that the caller should give up.
#[inline]
pub fn run(&self) -> Result<(), ()> {
if self.instant.elapsed() < self.duration {
// Sleeps in Redox are only evaluated on PIT ticks (a few ms), which is not
// short enough for a reasonably responsive timeout. However, the clock is
// highly accurate. So, we yield instead of sleep to reduce latency.
//TODO: allow timeout that spins instead of yields?
std::thread::yield_now();
Ok(())
} else {
Err(())
}
}
}
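A typical use of `Timeout` is a bounded busy-wait on a hardware condition. The sketch below pairs a standalone copy of the type (it only uses `std`) with a hypothetical `wait_until` helper that is not part of this crate:

```rust
// Standalone copy of `Timeout` plus a hypothetical polling helper.
use std::time::{Duration, Instant};

struct Timeout {
    instant: Instant,
    duration: Duration,
}

impl Timeout {
    fn from_millis(millis: u64) -> Self {
        Self {
            instant: Instant::now(),
            duration: Duration::from_millis(millis),
        }
    }
    fn run(&self) -> Result<(), ()> {
        if self.instant.elapsed() < self.duration {
            // Yield instead of sleeping, for low-latency polling.
            std::thread::yield_now();
            Ok(())
        } else {
            Err(())
        }
    }
}

/// Poll `ready` until it returns true, or give up when the timeout elapses.
fn wait_until(timeout: &Timeout, mut ready: impl FnMut() -> bool) -> bool {
    loop {
        if ready() {
            return true;
        }
        if timeout.run().is_err() {
            return false;
        }
    }
}

fn main() {
    let start = Instant::now();
    // A condition that never becomes true: we must hit the deadline.
    assert!(!wait_until(&Timeout::from_millis(20), || false));
    assert!(start.elapsed() >= Duration::from_millis(20));
    // An immediately true condition succeeds without waiting.
    assert!(wait_until(&Timeout::from_millis(20), || true));
}
```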
@@ -0,0 +1,15 @@
[package]
name = "executor"
description = "Asynchronous framework for queue-based hardware interfaces"
authors = ["4lDO2 <4lDO2@protonmail.com>"]
version = "0.1.0"
edition = "2021"
license = "MIT"
[dependencies]
log.workspace = true
redox_event.workspace = true
slab.workspace = true
[lints]
workspace = true
@@ -0,0 +1,396 @@
use std::cell::{Cell, RefCell};
use std::collections::{HashMap, VecDeque};
use std::fmt::Debug;
use std::fs::File;
use std::future::{Future, IntoFuture};
use std::hash::Hash;
use std::io::{Read, Write};
use std::marker::PhantomData;
use std::os::fd::AsRawFd;
use std::panic::AssertUnwindSafe;
use std::pin::Pin;
use std::ptr::NonNull;
use std::rc::Rc;
use std::task;
use event::{EventFlags, RawEventQueue};
use slab::Slab;
type EventUserData = usize;
type FutIdx = usize;
pub trait Hardware: Sized {
type CmdId: Clone + Copy + Debug + Hash + Eq + PartialEq;
type CqId: Clone + Copy + Debug + Hash + Eq + PartialEq;
type SqId: Clone + Copy + Debug + Hash + Eq + PartialEq;
type Sqe: Debug + Clone + Copy;
type Cqe;
type Iv: Clone + Copy + Debug;
type GlobalCtxt;
// TODO: the kernel should also do this automatically before sending EOI messages to the IC
fn mask_vector(ctxt: &Self::GlobalCtxt, iv: Self::Iv);
fn unmask_vector(ctxt: &Self::GlobalCtxt, iv: Self::Iv);
fn set_sqe_cmdid(sqe: &mut Self::Sqe, id: Self::CmdId);
fn get_cqe_cmdid(cqe: &Self::Cqe) -> Self::CmdId;
// TODO: support multiple SQs per CQ or vice versa?
fn sq_cq(ctxt: &Self::GlobalCtxt, id: Self::CqId) -> Self::SqId;
fn current() -> Rc<LocalExecutor<Self>>;
fn vtable() -> &'static task::RawWakerVTable;
fn try_submit(
ctxt: &Self::GlobalCtxt,
sq_id: Self::SqId,
success: impl FnOnce(Self::CmdId) -> Self::Sqe,
fail: impl FnOnce(),
) -> Option<(Self::CqId, Self::CmdId)>;
fn poll_cqes(ctxt: &Self::GlobalCtxt, handle: impl FnMut(Self::CqId, Self::Cqe));
}
/// Async executor, single IV, thread-per-core architecture
pub struct LocalExecutor<Hw: Hardware> {
global_ctxt: Hw::GlobalCtxt,
queue: RawEventQueue,
vector: Hw::Iv,
irq_handle: File,
intx: bool,
// TODO: One IV and SQ/CQ per core (where the admin queue can be managed by the main thread).
awaiting_submission: RefCell<HashMap<Hw::SqId, VecDeque<FutIdx>>>,
awaiting_completion:
RefCell<HashMap<Hw::CqId, HashMap<Hw::CmdId, (FutIdx, NonNull<Option<Hw::Cqe>>)>>>,
external_event: RefCell<HashMap<EventUserData, (FutIdx, NonNull<EventFlags>)>>,
next_user_data: Cell<usize>,
ready_futures: RefCell<VecDeque<FutIdx>>,
futures: RefCell<Slab<Pin<Box<dyn Future<Output = ()> + 'static>>>>,
is_polling: Cell<bool>,
}
impl<Hw: Hardware> LocalExecutor<Hw> {
pub fn register_external_event(
&self,
fd: usize,
flags: event::EventFlags,
) -> ExternalEventSource<Hw> {
let user_data = self.next_user_data.get();
self.next_user_data.set(user_data.checked_add(1).unwrap());
self.queue
.subscribe(fd, user_data, flags)
.expect("failed to subscribe to event");
ExternalEventSource {
flags: event::EventFlags::empty(),
user_data,
_not_send_or_unpin: PhantomData,
}
}
pub fn current() -> Rc<Self> {
Hw::current()
}
pub fn poll(&self) -> usize {
assert!(!self.is_polling.replace(true));
let mut finished = 0;
for future_idx in self.ready_futures.borrow_mut().drain(..) {
let waker = waker::<Hw>(future_idx);
let mut futures = self.futures.borrow_mut();
let res = match std::panic::catch_unwind(AssertUnwindSafe(|| {
futures[future_idx]
.as_mut()
.poll(&mut task::Context::from_waker(&waker))
})) {
Ok(r) => r,
Err(_) => {
log::error!("Task panicked!");
core::mem::forget(futures.remove(future_idx));
continue;
}
};
if res.is_ready() {
drop(futures.remove(future_idx));
finished += 1;
}
}
self.is_polling.set(false);
finished
}
pub fn spawn(&self, fut: impl IntoFuture<Output = ()> + 'static) {
let idx = self
.futures
.borrow_mut()
.insert(Box::pin(fut.into_future()));
self.ready_futures.borrow_mut().push_back(idx);
}
pub fn block_on<'a, O: 'a>(&self, fut: impl IntoFuture<Output = O> + 'a) -> O {
let retval = Rc::new(RefCell::new(None));
let retval2 = Rc::clone(&retval);
let idx = self.futures.borrow_mut().insert({
let t1: Pin<Box<dyn Future<Output = ()> + 'a>> = Box::pin(async move {
*retval2.borrow_mut() = Some(fut.await);
});
// SAFETY: Apart from the lifetimes, the types are exactly the same. We also know
// block_on simply cannot return without having fully awaited and dropped the future,
// even if that future panics (cf. the catch_unwind invocation).
let t2: Pin<Box<dyn Future<Output = ()> + 'static>> =
unsafe { std::mem::transmute(t1) };
t2
});
self.ready_futures.borrow_mut().push_front(idx);
loop {
let finished = self.poll();
if retval.borrow().is_some() {
break;
}
if finished == 0 {
self.react();
}
}
let o = retval.borrow_mut().take().unwrap();
o
}
fn react(&self) {
let event = self.queue.next_event().expect("failed to get next event");
if event.user_data != 0 {
let Some((fut_idx, flags_ptr)) =
self.external_event.borrow_mut().remove(&event.user_data)
else {
// Spurious event
return;
};
unsafe {
flags_ptr
.as_ptr()
.write(event::EventFlags::from_bits_retain(event.flags));
}
self.ready_futures.borrow_mut().push_back(fut_idx);
return;
}
if self.intx {
let mut buf = [0_u8; core::mem::size_of::<usize>()];
if (&self.irq_handle).read(&mut buf).unwrap() != 0 {
(&self.irq_handle).write(&buf).unwrap();
}
}
// TODO: The kernel should probably do the masking (when using MSI/MSI-X at least), which
// should happen before EOI messages to the interrupt controller.
Hw::mask_vector(&self.global_ctxt, self.vector);
Hw::poll_cqes(&self.global_ctxt, |cq_id, cqe| {
if let Some((fut_idx, comp_ptr)) = self
.awaiting_completion
.borrow_mut()
.get_mut(&cq_id)
.and_then(|per_cmd| per_cmd.remove(&Hw::get_cqe_cmdid(&cqe)))
{
unsafe {
comp_ptr.as_ptr().write(Some(cqe));
}
self.ready_futures.borrow_mut().push_back(fut_idx);
if let Some(submitting) = self
.awaiting_submission
.borrow_mut()
.get_mut(&Hw::sq_cq(&self.global_ctxt, cq_id))
.and_then(|q| q.pop_front())
{
self.ready_futures.borrow_mut().push_back(submitting);
}
}
});
Hw::unmask_vector(&self.global_ctxt, self.vector);
}
pub async fn submit(&self, sq_id: Hw::SqId, cmd: Hw::Sqe) -> Hw::Cqe {
CqeFuture::<Hw> {
state: State::<Hw>::Submitting { sq_id, cmd },
comp: None,
_not_send: PhantomData,
}
.await
}
}
struct CqeFuture<Hw: Hardware> {
pub state: State<Hw>,
pub comp: Option<Hw::Cqe>,
pub _not_send: PhantomData<*const ()>,
}
enum State<Hw: Hardware> {
Submitting { sq_id: Hw::SqId, cmd: Hw::Sqe },
Completing { cq_id: Hw::CqId, cmd_id: Hw::CmdId },
}
fn current_executor_and_idx<Hw: Hardware>(
cx: &mut task::Context<'_>,
) -> (Rc<LocalExecutor<Hw>>, FutIdx) {
let executor = LocalExecutor::current();
let idx = cx.waker().data() as FutIdx;
assert_eq!(
cx.waker().vtable() as *const _,
Hw::vtable(),
"incompatible executor for CqeFuture"
);
(executor, idx)
}
impl<Hw: Hardware> Future for CqeFuture<Hw> {
type Output = Hw::Cqe;
fn poll(self: Pin<&mut Self>, cx: &mut task::Context<'_>) -> task::Poll<Self::Output> {
let this = unsafe { self.get_unchecked_mut() };
let (executor, idx) = current_executor_and_idx::<Hw>(cx);
match this.state {
State::Submitting { sq_id, mut cmd } => {
let mut awaiting = executor.awaiting_submission.borrow_mut();
if let Some((cq_id, cmd_id)) = Hw::try_submit(
&executor.global_ctxt,
sq_id,
|cmd_id| {
Hw::set_sqe_cmdid(&mut cmd, cmd_id);
log::trace!("About to submit {cmd:?}");
cmd
},
|| {
awaiting.entry(sq_id).or_default().push_back(idx);
},
) {
executor
.awaiting_completion
.borrow_mut()
.entry(cq_id)
.or_default()
.insert(cmd_id, (idx, (&mut this.comp).into()));
this.state = State::Completing { cq_id, cmd_id };
}
task::Poll::Pending
}
State::Completing { cq_id, cmd_id } => match this.comp.take() {
Some(comp) => {
log::trace!("ready!");
task::Poll::Ready(comp)
}
// Shouldn't technically be possible
None => {
log::trace!("spurious poll");
executor
.awaiting_completion
.borrow_mut()
.entry(cq_id)
.or_default()
.insert(cmd_id, (idx, (&mut this.comp).into()));
task::Poll::Pending
}
},
}
}
}
unsafe fn vt_clone<Hw: Hardware>(idx: *const ()) -> task::RawWaker {
task::RawWaker::new(idx, Hw::vtable())
}
unsafe fn vt_drop(_idx: *const ()) {}
unsafe fn vt_wake<Hw: Hardware>(idx: *const ()) {
Hw::current()
.ready_futures
.borrow_mut()
.push_back(idx as FutIdx);
}
fn waker<Hw: Hardware>(idx: FutIdx) -> task::Waker {
unsafe { task::Waker::from_raw(task::RawWaker::new(idx as *const (), Hw::vtable())) }
}
pub const fn vtable<Hw: Hardware>() -> task::RawWakerVTable {
task::RawWakerVTable::new(vt_clone::<Hw>, vt_wake::<Hw>, vt_wake::<Hw>, vt_drop)
}
pub struct ExternalEventSource<Hw: Hardware> {
flags: event::EventFlags,
user_data: EventUserData,
_not_send_or_unpin: PhantomData<(*const (), fn() -> Hw)>,
}
pub struct Event {
flags: event::EventFlags,
_not_send: PhantomData<*const ()>,
}
impl Event {
pub fn flags(&self) -> event::EventFlags {
self.flags
}
}
impl<Hw: Hardware> ExternalEventSource<Hw> {
fn poll_next(self: Pin<&mut Self>, cx: &mut task::Context) -> task::Poll<Option<Event>> {
let this = unsafe { self.get_unchecked_mut() };
let flags = std::mem::take(&mut this.flags);
if flags.is_empty() {
let (executor, idx) = current_executor_and_idx::<Hw>(cx);
executor
.external_event
.borrow_mut()
.insert(this.user_data, (idx, (&mut this.flags).into()));
return task::Poll::Pending;
}
task::Poll::Ready(Some(Event {
flags,
_not_send: PhantomData,
}))
}
pub async fn next(mut self: Pin<&mut Self>) -> Option<Event> {
core::future::poll_fn(|cx| self.as_mut().poll_next(cx)).await
}
}
pub fn init_raw<Hw: Hardware>(
global_ctxt: Hw::GlobalCtxt,
vector: Hw::Iv,
intx: bool,
irq_handle: File,
) -> LocalExecutor<Hw> {
let queue = RawEventQueue::new().expect("failed to allocate event queue for local executor");
// TODO: Multiple CPUs
queue
.subscribe(irq_handle.as_raw_fd() as usize, 0, EventFlags::READ)
.expect("failed to subscribe to IRQ event");
LocalExecutor {
global_ctxt,
queue,
vector,
intx,
irq_handle,
awaiting_submission: RefCell::new(HashMap::new()),
awaiting_completion: RefCell::new(HashMap::new()),
external_event: RefCell::new(HashMap::new()),
next_user_data: Cell::new(1),
ready_futures: RefCell::new(VecDeque::new()),
futures: RefCell::new(Slab::with_capacity(16)),
is_polling: Cell::new(false),
}
}
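The `vtable`/`waker` pair above smuggles the future's slab index through the `RawWaker` data pointer, so waking a task costs only a push onto `ready_futures`. A minimal standalone sketch of that index-as-pointer scheme, using a thread local in place of the executor's queue:

```rust
// Standalone sketch of the index-as-pointer waker used by `LocalExecutor`.
use std::cell::RefCell;
use std::collections::VecDeque;
use std::task::{RawWaker, RawWakerVTable, Waker};

thread_local! {
    // Stand-in for the executor's `ready_futures` queue.
    static READY: RefCell<VecDeque<usize>> = RefCell::new(VecDeque::new());
}

unsafe fn vt_clone(idx: *const ()) -> RawWaker {
    // Cloning just copies the index; there is no refcount to manage.
    RawWaker::new(idx, &VTABLE)
}
unsafe fn vt_wake(idx: *const ()) {
    // Waking re-queues the future's slab index.
    READY.with(|q| q.borrow_mut().push_back(idx as usize));
}
unsafe fn vt_drop(_idx: *const ()) {}

static VTABLE: RawWakerVTable = RawWakerVTable::new(vt_clone, vt_wake, vt_wake, vt_drop);

fn waker(idx: usize) -> Waker {
    unsafe { Waker::from_raw(RawWaker::new(idx as *const (), &VTABLE)) }
}

fn main() {
    waker(3).wake();
    waker(7).wake_by_ref();
    let order: Vec<usize> = READY.with(|q| q.borrow_mut().drain(..).collect());
    assert_eq!(order, vec![3, 7]);
}
```

Because the "pointer" is just an integer, clone and drop are trivial no-ops, which is why the executor can hand out wakers without any allocation.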
@@ -0,0 +1,18 @@
[package]
name = "console-draw"
description = "Shared terminal drawing code library"
version = "0.1.0"
edition = "2021"
[dependencies]
drm.workspace = true
orbclient.workspace = true
ransid.workspace = true
graphics-ipc = { path = "../graphics-ipc" }
[features]
default = []
[lints]
workspace = true
@@ -0,0 +1,460 @@
use std::collections::VecDeque;
use std::convert::{TryFrom, TryInto};
use std::{cmp, io, mem, ptr};
use drm::buffer::{Buffer, DrmFourcc};
use drm::control::{connector, crtc, framebuffer, ClipRect, Device, Mode};
use graphics_ipc::{CpuBackedBuffer, V2GraphicsHandle};
use orbclient::FONT;
#[derive(Debug, Copy, Clone)]
#[repr(C, packed)]
pub struct Damage {
pub x: u32,
pub y: u32,
pub width: u32,
pub height: u32,
}
impl Damage {
pub const NONE: Self = Damage {
x: 0,
y: 0,
width: 0,
height: 0,
};
pub fn merge(self, other: Self) -> Self {
if self.width == 0 || self.height == 0 {
return other;
}
if other.width == 0 || other.height == 0 {
return self;
}
let x = cmp::min(self.x, other.x);
let y = cmp::min(self.y, other.y);
let x2 = cmp::max(self.x + self.width, other.x + other.width);
let y2 = cmp::max(self.y + self.height, other.y + other.height);
Damage {
x,
y,
width: x2 - x,
height: y2 - y,
}
}
}
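`Damage::merge` computes a bounding box so that a frame's accumulated damage can be flushed with a single clip rectangle. A standalone copy of the logic, runnable outside the driver:

```rust
// Standalone copy of `Damage::merge` above: merging two non-empty rectangles
// yields their bounding box; an empty rectangle acts as the identity.
#[derive(Debug, Copy, Clone, PartialEq, Eq)]
struct Damage {
    x: u32,
    y: u32,
    width: u32,
    height: u32,
}

impl Damage {
    fn merge(self, other: Self) -> Self {
        if self.width == 0 || self.height == 0 {
            return other;
        }
        if other.width == 0 || other.height == 0 {
            return self;
        }
        let x = self.x.min(other.x);
        let y = self.y.min(other.y);
        let x2 = (self.x + self.width).max(other.x + other.width);
        let y2 = (self.y + self.height).max(other.y + other.height);
        Damage {
            x,
            y,
            width: x2 - x,
            height: y2 - y,
        }
    }
}

fn main() {
    // Two 8x16 character cells at opposite corners merge into one 24x32 box.
    let a = Damage { x: 0, y: 0, width: 8, height: 16 };
    let b = Damage { x: 16, y: 16, width: 8, height: 16 };
    assert_eq!(a.merge(b), Damage { x: 0, y: 0, width: 24, height: 32 });
    // The all-zero rectangle (Damage::NONE) is the identity.
    let none = Damage { x: 0, y: 0, width: 0, height: 0 };
    assert_eq!(none.merge(a), a);
    assert_eq!(a.merge(none), a);
}
```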
pub struct V2DisplayMap {
pub display_handle: V2GraphicsHandle,
connector: connector::Handle,
crtc: crtc::Handle,
fb: framebuffer::Handle,
pub buffer: CpuBackedBuffer,
}
impl V2DisplayMap {
pub fn new(display_handle: V2GraphicsHandle) -> io::Result<Self> {
let connector = display_handle.first_display().unwrap();
let connector_info = display_handle.get_connector(connector, true).unwrap();
let mode = connector_info.modes()[0];
let (width, height) = mode.size();
// FIXME do something smarter that avoids conflicts
let crtc = display_handle.resource_handles().unwrap().filter_crtcs(
display_handle
.get_encoder(connector_info.encoders()[0])
.unwrap()
.possible_crtcs(),
)[0];
let buffer = CpuBackedBuffer::new(
&display_handle,
(width.into(), height.into()),
DrmFourcc::Argb8888,
32,
)?;
let fb = display_handle.add_framebuffer(buffer.buffer(), 32, 32)?;
display_handle.set_crtc(crtc, Some(fb), (0, 0), &[connector], Some(mode))?;
Ok(Self {
display_handle,
connector,
crtc,
fb,
buffer,
})
}
unsafe fn console_map(&mut self) -> DisplayMap {
let size = self.buffer.buffer().size();
let shadow_buf = self.buffer.shadow_buf();
DisplayMap {
offscreen: ptr::slice_from_raw_parts_mut(
shadow_buf.as_mut_ptr() as *mut u32,
shadow_buf.len() / 4,
),
width: size.0 as usize,
height: size.1 as usize,
}
}
pub fn dirty_fb(&mut self, damage: Damage) -> io::Result<()> {
self.buffer
.sync_rect(damage.x, damage.y, damage.width, damage.height);
self.display_handle.dirty_framebuffer(
self.fb,
&[ClipRect::new(
damage.x as u16,
damage.y as u16,
(damage.x + damage.width) as u16,
(damage.y + damage.height) as u16,
)],
)
}
}
struct DisplayMap {
offscreen: *mut [u32],
width: usize,
height: usize,
}
pub struct TextScreen {
console: ransid::Console,
}
impl TextScreen {
pub fn new() -> TextScreen {
TextScreen {
// Width and height will be filled in on the next write to the console
console: ransid::Console::new(0, 0),
}
}
/// Draw a rectangle
fn rect(map: &mut DisplayMap, x: usize, y: usize, w: usize, h: usize, color: u32) {
let start_y = cmp::min(map.height, y);
let end_y = cmp::min(map.height, y + h);
let start_x = cmp::min(map.width, x);
let len = cmp::min(map.width, x + w) - start_x;
let mut offscreen_ptr = map.offscreen as *mut u8 as usize;
let stride = map.width * 4;
        let offset = start_y * stride + start_x * 4;
offscreen_ptr += offset;
let mut rows = end_y - start_y;
while rows > 0 {
for i in 0..len {
unsafe {
*(offscreen_ptr as *mut u32).add(i) = color;
}
}
offscreen_ptr += stride;
rows -= 1;
}
}
/// Invert a rectangle
fn invert(map: &mut DisplayMap, x: usize, y: usize, w: usize, h: usize) {
let start_y = cmp::min(map.height, y);
let end_y = cmp::min(map.height, y + h);
let start_x = cmp::min(map.width, x);
let len = cmp::min(map.width, x + w) - start_x;
let mut offscreen_ptr = map.offscreen as *mut u8 as usize;
let stride = map.width * 4;
        let offset = start_y * stride + start_x * 4;
offscreen_ptr += offset;
let mut rows = end_y - start_y;
while rows > 0 {
let mut row_ptr = offscreen_ptr;
let mut cols = len;
while cols > 0 {
unsafe {
let color = *(row_ptr as *mut u32);
*(row_ptr as *mut u32) = !color;
}
row_ptr += 4;
cols -= 1;
}
offscreen_ptr += stride;
rows -= 1;
}
}
/// Draw a character
fn char(
map: &mut DisplayMap,
x: usize,
y: usize,
character: char,
color: u32,
_bold: bool,
_italic: bool,
) {
if x + 8 <= map.width && y + 16 <= map.height {
let mut dst = map.offscreen as *mut u8 as usize + (y * map.width + x) * 4;
let font_i = 16 * (character as usize);
if font_i + 16 <= FONT.len() {
for row in 0..16 {
let row_data = FONT[font_i + row];
for col in 0..8 {
if (row_data >> (7 - col)) & 1 == 1 {
unsafe {
*((dst + col * 4) as *mut u32) = color;
}
}
}
dst += map.width * 4;
}
}
}
}
}
impl TextScreen {
pub fn write(
&mut self,
map: &mut V2DisplayMap,
buf: &[u8],
input: &mut VecDeque<u8>,
) -> Damage {
let map = unsafe { &mut map.console_map() };
let mut min_changed = map.height;
let mut max_changed = 0;
let mut line_changed = |line| {
if line < min_changed {
min_changed = line;
}
if line > max_changed {
max_changed = line;
}
};
self.console.resize(map.width / 8, map.height / 16);
if self.console.state.x >= self.console.state.w {
self.console.state.x = self.console.state.w - 1;
}
if self.console.state.y >= self.console.state.h {
self.console.state.y = self.console.state.h - 1;
}
if self.console.state.cursor
&& self.console.state.x < self.console.state.w
&& self.console.state.y < self.console.state.h
{
let x = self.console.state.x;
let y = self.console.state.y;
Self::invert(map, x * 8, y * 16, 8, 16);
line_changed(y);
}
self.console.write(buf, |event| match event {
ransid::Event::Char {
x,
y,
c,
color,
bold,
..
} => {
Self::char(map, x * 8, y * 16, c, color.as_rgb(), bold, false);
line_changed(y);
}
ransid::Event::Input { data } => input.extend(data),
ransid::Event::Rect { x, y, w, h, color } => {
Self::rect(map, x * 8, y * 16, w * 8, h * 16, color.as_rgb());
for y2 in y..y + h {
line_changed(y2);
}
}
ransid::Event::ScreenBuffer { .. } => (),
ransid::Event::Move {
from_x,
from_y,
to_x,
to_y,
w,
h,
} => {
let width = map.width;
let pixels = unsafe { &mut *map.offscreen };
for raw_y in 0..h {
let y = if from_y > to_y { raw_y } else { h - raw_y - 1 };
for pixel_y in 0..16 {
{
let off_from = ((from_y + y) * 16 + pixel_y) * width + from_x * 8;
let off_to = ((to_y + y) * 16 + pixel_y) * width + to_x * 8;
let len = w * 8;
if off_from + len <= pixels.len() && off_to + len <= pixels.len() {
unsafe {
let data_ptr = pixels.as_mut_ptr() as *mut u32;
ptr::copy(
data_ptr.offset(off_from as isize),
data_ptr.offset(off_to as isize),
len,
);
}
}
}
}
line_changed(to_y + y);
}
}
ransid::Event::Resize { .. } => (),
ransid::Event::Title { .. } => (),
});
if self.console.state.cursor
&& self.console.state.x < self.console.state.w
&& self.console.state.y < self.console.state.h
{
let x = self.console.state.x;
let y = self.console.state.y;
Self::invert(map, x * 8, y * 16, 8, 16);
line_changed(y);
}
let width = map.width.try_into().unwrap();
let damage = Damage {
x: 0,
y: u32::try_from(min_changed).unwrap() * 16,
width,
height: u32::try_from(max_changed.saturating_sub(min_changed) + 1).unwrap() * 16,
};
damage
}
pub fn resize(&mut self, map: &mut V2DisplayMap, mode: Mode) -> io::Result<()> {
// FIXME fold row when target is narrower and maybe unfold when it is wider
fn copy_row(
old_map: &mut DisplayMap,
new_map: &mut DisplayMap,
from_row: usize,
to_row: usize,
) {
for x in 0..cmp::min(old_map.width, new_map.width) {
let old_idx = from_row * old_map.width + x;
let new_idx = to_row * new_map.width + x;
unsafe {
(*new_map.offscreen)[new_idx] = (*old_map.offscreen)[old_idx];
}
}
}
let mut new_buffer = CpuBackedBuffer::new(
&map.display_handle,
(u32::from(mode.size().0), u32::from(mode.size().1)),
DrmFourcc::Argb8888,
32,
)?;
let new_fb = map
.display_handle
.add_framebuffer(new_buffer.buffer(), 24, 32)?;
new_buffer.shadow_buf().fill(0);
{
let old_map = unsafe { &mut map.console_map() };
let new_size = new_buffer.buffer().size();
let new_shadow_buf = new_buffer.shadow_buf();
let new_map = &mut DisplayMap {
offscreen: ptr::slice_from_raw_parts_mut(
new_shadow_buf.as_mut_ptr() as *mut u32,
new_shadow_buf.len() / 4,
),
width: new_size.0 as usize,
height: new_size.1 as usize,
};
if new_map.height >= old_map.height {
for row in 0..old_map.height {
copy_row(old_map, new_map, row, row);
}
} else {
let deleted_rows = (old_map.height - new_map.height).div_ceil(16);
for row in 0..new_map.height {
if row + (deleted_rows + 1) * 16 >= old_map.height {
break;
}
copy_row(old_map, new_map, row + deleted_rows * 16, row);
}
self.console.state.y = self.console.state.y.saturating_sub(deleted_rows);
}
}
let old_buffer = mem::replace(&mut map.buffer, new_buffer);
old_buffer.destroy(&map.display_handle)?;
let old_fb = mem::replace(&mut map.fb, new_fb);
map.display_handle.set_crtc(
map.crtc,
Some(map.fb),
(0, 0),
&[map.connector],
Some(mode),
)?;
let _ = map.display_handle.destroy_framebuffer(old_fb);
Ok(())
}
}
pub struct TextBuffer {
pub lines: VecDeque<Vec<u8>>,
pub lines_max: usize,
}
impl TextBuffer {
pub fn new(max: usize) -> Self {
let mut lines = VecDeque::new();
lines.push_back(Vec::new());
Self {
lines,
lines_max: max,
}
}
pub fn write(&mut self, buf: &[u8]) {
if buf.is_empty() {
return;
}
for &byte in buf {
self.lines.back_mut().unwrap().push(byte);
if byte == b'\n' {
self.lines.push_back(Vec::new());
}
}
let max_len = self.lines_max;
while self.lines.len() > max_len {
self.lines.pop_front();
}
}
}
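The scrollback behavior of `TextBuffer` can be exercised standalone. The sketch below restates the type so the example is self-contained; it mirrors the `write` logic above (minus the empty-buffer early return):

```rust
use std::collections::VecDeque;

// Restated from the driver for a self-contained demo.
struct TextBuffer {
    lines: VecDeque<Vec<u8>>,
    lines_max: usize,
}

impl TextBuffer {
    fn new(max: usize) -> Self {
        let mut lines = VecDeque::new();
        lines.push_back(Vec::new());
        Self { lines, lines_max: max }
    }

    fn write(&mut self, buf: &[u8]) {
        for &byte in buf {
            // Each '\n' stays on the current line and opens a fresh one.
            self.lines.back_mut().unwrap().push(byte);
            if byte == b'\n' {
                self.lines.push_back(Vec::new());
            }
        }
        // Drop the oldest lines once the history cap is exceeded.
        while self.lines.len() > self.lines_max {
            self.lines.pop_front();
        }
    }
}

fn main() {
    let mut buf = TextBuffer::new(2);
    buf.write(b"a\nb\nc");
    // With lines_max == 2, only the two most recent lines survive.
    assert_eq!(buf.lines.len(), 2);
    assert_eq!(buf.lines[0], b"b\n");
    assert_eq!(buf.lines[1], b"c");
}
```

Note that the newline byte is kept at the end of each completed line, so a consumer re-rendering the buffer does not need to re-insert separators.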
@@ -0,0 +1,22 @@
[package]
name = "driver-graphics"
description = "Shared video and graphics code library"
version = "0.1.0"
edition = "2021"
[dependencies]
drm-fourcc = "2.2.0"
drm-sys.workspace = true
edid.workspace = true #TODO: edid is abandoned, fork it and maintain?
log.workspace = true
redox-ioctl.workspace = true
redox-scheme.workspace = true
scheme-utils = { path = "../../../scheme-utils" }
redox_syscall.workspace = true
libredox.workspace = true
common = { path = "../../common" }
inputd = { path = "../../inputd" }
[lints]
workspace = true
@@ -0,0 +1,249 @@
use std::ffi::c_char;
use std::fmt::Debug;
use std::sync::Mutex;
use drm_sys::{
drm_mode_modeinfo, DRM_MODE_CONNECTOR_Unknown, DRM_MODE_DPMS_OFF, DRM_MODE_DPMS_ON,
DRM_MODE_DPMS_STANDBY, DRM_MODE_DPMS_SUSPEND, DRM_MODE_TYPE_PREFERRED,
};
use syscall::Result;
use crate::kms::objects::{KmsObjectId, KmsObjects};
use crate::kms::properties::{define_object_props, KmsPropertyData, CRTC_ID, DPMS, EDID};
use crate::GraphicsAdapter;
impl<T: GraphicsAdapter> KmsObjects<T> {
pub fn add_connector(
&mut self,
driver_data: T::Connector,
driver_data_state: <T::Connector as KmsConnectorDriver>::State,
crtcs: &[KmsObjectId],
) -> KmsObjectId {
        let mut possible_crtcs = 0;
        for &crtc in crtcs {
            // Accumulate a bitmask over all candidate CRTCs; a plain `=` here
            // would keep only the last one.
            possible_crtcs |= 1 << self.get_crtc(crtc).unwrap().lock().unwrap().crtc_index;
        }
let encoder_id = self.add(KmsEncoder {
crtc_id: KmsObjectId::INVALID,
            possible_crtcs,
possible_clones: 1 << self.encoders.len(),
});
self.encoders.push(encoder_id);
let connector_id = self.add(Mutex::new(KmsConnector {
encoder_id,
modes: vec![],
connector_type: DRM_MODE_CONNECTOR_Unknown,
connector_type_id: self.connectors.len() as u32, // FIXME maybe pick unique id within connector type?
connection: KmsConnectorStatus::Unknown,
mm_width: 0,
mm_height: 0,
subpixel: DrmSubpixelOrder::Unknown,
properties: KmsConnector::base_properties(),
edid: KmsObjectId::INVALID,
state: KmsConnectorState {
dpms: KmsDpms::On,
crtc_id: KmsObjectId::INVALID,
driver_data: driver_data_state,
},
driver_data,
}));
self.connectors.push(connector_id);
connector_id
}
pub fn connector_ids(&self) -> &[KmsObjectId] {
&self.connectors
}
pub fn connectors(&self) -> impl Iterator<Item = &Mutex<KmsConnector<T>>> + use<'_, T> {
self.connectors
.iter()
.map(|&id| self.get_connector(id).unwrap())
}
pub fn get_connector(&self, id: KmsObjectId) -> Result<&Mutex<KmsConnector<T>>> {
self.get(id)
}
pub fn encoder_ids(&self) -> &[KmsObjectId] {
&self.encoders
}
pub fn get_encoder(&self, id: KmsObjectId) -> Result<&KmsEncoder> {
self.get(id)
}
}
pub trait KmsConnectorDriver: Debug {
type State: Clone + Debug;
}
impl KmsConnectorDriver for () {
type State = ();
}
#[derive(Debug)]
pub struct KmsConnector<T: GraphicsAdapter> {
pub encoder_id: KmsObjectId,
pub modes: Vec<drm_mode_modeinfo>,
pub connector_type: u32,
pub connector_type_id: u32,
pub connection: KmsConnectorStatus,
pub mm_width: u32,
pub mm_height: u32,
pub subpixel: DrmSubpixelOrder,
pub properties: Vec<KmsPropertyData<Self>>,
pub edid: KmsObjectId,
pub state: KmsConnectorState<T>,
pub driver_data: T::Connector,
}
#[derive(Debug)]
pub struct KmsConnectorState<T: GraphicsAdapter> {
pub dpms: KmsDpms,
pub crtc_id: KmsObjectId,
pub driver_data: <T::Connector as KmsConnectorDriver>::State,
}
impl<T: GraphicsAdapter> Clone for KmsConnectorState<T> {
fn clone(&self) -> Self {
Self {
dpms: self.dpms.clone(),
crtc_id: self.crtc_id.clone(),
driver_data: self.driver_data.clone(),
}
}
}
define_object_props!(object, KmsConnector<T: GraphicsAdapter> {
EDID {
get => u64::from(object.edid.0),
}
DPMS {
get => object.state.dpms as u64,
}
CRTC_ID {
get => u64::from(object.state.crtc_id.0),
}
});
impl<T: GraphicsAdapter> KmsConnector<T> {
pub fn update_from_size(&mut self, width: u32, height: u32) {
self.modes = vec![modeinfo_for_size(width, height)];
}
pub fn update_from_edid(&mut self, edid: &[u8]) {
let edid = edid::parse(edid).unwrap().1;
if let Some(first_detailed_timing) =
edid.descriptors
.iter()
.find_map(|descriptor| match descriptor {
edid::Descriptor::DetailedTiming(detailed_timing) => Some(detailed_timing),
_ => None,
})
{
self.mm_width = first_detailed_timing.horizontal_size.into();
self.mm_height = first_detailed_timing.vertical_size.into();
} else {
            log::error!("No EDID detailed timing descriptor found");
}
self.modes = edid
.descriptors
.iter()
.filter_map(|descriptor| {
match descriptor {
edid::Descriptor::DetailedTiming(detailed_timing) => {
// FIXME extract full information
Some(modeinfo_for_size(
u32::from(detailed_timing.horizontal_active_pixels),
u32::from(detailed_timing.vertical_active_lines),
))
}
_ => None,
}
})
.collect::<Vec<_>>();
        // The first detailed timing descriptor indicates the preferred mode;
        // clear the preferred bit (stored in `type_`, not `flags`) on the rest.
        for mode in self.modes.iter_mut().skip(1) {
            mode.type_ &= !DRM_MODE_TYPE_PREFERRED;
        }
// FIXME update the EDID property
}
}
pub(crate) fn modeinfo_for_size(width: u32, height: u32) -> drm_mode_modeinfo {
let mut modeinfo = drm_mode_modeinfo {
// The actual visible display size
hdisplay: width as u16,
vdisplay: height as u16,
// These are used to calculate the refresh rate
clock: 60 * width * height / 1000,
htotal: width as u16,
vtotal: height as u16,
vscan: 0,
vrefresh: 60,
type_: drm_sys::DRM_MODE_TYPE_PREFERRED | drm_sys::DRM_MODE_TYPE_DRIVER,
name: [0; 32],
// These only matter when modesetting physical display adapters. For
// those we should be able to parse the EDID blob.
hsync_start: width as u16,
hsync_end: width as u16,
hskew: 0,
vsync_start: height as u16,
vsync_end: height as u16,
flags: 0,
};
let name = format!("{width}x{height}").into_bytes();
for (to, from) in modeinfo.name.iter_mut().zip(name) {
*to = from as c_char;
}
modeinfo
}
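The timing arithmetic above can be checked standalone: DRM derives the refresh rate as `clock_kHz * 1000 / (htotal * vtotal)`, and with `clock = 60 * width * height / 1000`, `htotal = width`, and `vtotal = height` this recovers roughly 60 Hz. The `refresh_hz` helper below is a hypothetical name for this sketch, not part of the driver:

```rust
// Standalone check of the synthetic mode timings: with htotal == width and
// vtotal == height, the pixel clock chosen in `modeinfo_for_size` yields a
// ~60 Hz refresh rate (exact when width * height divides evenly by 1000).
fn refresh_hz(width: u32, height: u32) -> u32 {
    let clock = 60 * width * height / 1000; // kHz, as in modeinfo_for_size
    clock * 1000 / (width * height)
}

fn main() {
    // 640 * 480 = 307200 pixels; clock = 18432 kHz; 18432000 / 307200 = 60.
    assert_eq!(refresh_hz(640, 480), 60);
    // Integer truncation in the kHz clock can lose a little precision.
    assert!(refresh_hz(1366, 768) <= 60);
}
```

The calculation assumes modest resolutions; very large `width * height` products would need `u64` intermediates to avoid overflow.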
#[derive(Debug, Copy, Clone)]
#[repr(u32)]
pub enum KmsConnectorStatus {
Disconnected = 0,
Connected = 1,
Unknown = 2,
}
#[derive(Debug, Copy, Clone)]
#[repr(u32)]
pub enum DrmSubpixelOrder {
Unknown = 0,
HorizontalRGB,
HorizontalBGR,
VerticalRGB,
VerticalBGR,
None,
}
#[derive(Debug, Copy, Clone)]
#[repr(u64)]
pub enum KmsDpms {
On = DRM_MODE_DPMS_ON as u64,
Standby = DRM_MODE_DPMS_STANDBY as u64,
Suspend = DRM_MODE_DPMS_SUSPEND as u64,
Off = DRM_MODE_DPMS_OFF as u64,
}
// FIXME can we represent connector and encoder using a single struct?
#[derive(Debug)]
pub struct KmsEncoder {
pub crtc_id: KmsObjectId,
pub possible_crtcs: u32,
pub possible_clones: u32,
}
@@ -0,0 +1,3 @@
pub mod connector;
pub mod objects;
pub mod properties;
@@ -0,0 +1,237 @@
use std::collections::HashMap;
use std::fmt::Debug;
use std::marker::PhantomData;
use std::sync::{Arc, Mutex};
use drm_sys::{
drm_mode_modeinfo, DRM_MODE_OBJECT_BLOB, DRM_MODE_OBJECT_CONNECTOR, DRM_MODE_OBJECT_CRTC,
DRM_MODE_OBJECT_ENCODER, DRM_MODE_OBJECT_FB, DRM_MODE_OBJECT_PROPERTY,
};
use syscall::{Error, Result, EINVAL};
use crate::kms::connector::{KmsConnector, KmsEncoder};
use crate::kms::properties::{
define_object_props, init_standard_props, KmsBlob, KmsProperty, KmsPropertyData,
};
use crate::GraphicsAdapter;
#[derive(Debug)]
pub struct KmsObjects<T: GraphicsAdapter> {
next_id: KmsObjectId,
pub(crate) connectors: Vec<KmsObjectId>,
pub(crate) encoders: Vec<KmsObjectId>,
crtcs: Vec<KmsObjectId>,
framebuffers: Vec<KmsObjectId>,
pub(crate) objects: HashMap<KmsObjectId, KmsObject<T>>,
_marker: PhantomData<T>,
}
impl<T: GraphicsAdapter> KmsObjects<T> {
pub(crate) fn new() -> Self {
let mut objects = KmsObjects {
next_id: KmsObjectId(1),
connectors: vec![],
encoders: vec![],
crtcs: vec![],
framebuffers: vec![],
objects: HashMap::new(),
_marker: PhantomData,
};
init_standard_props(&mut objects);
objects
}
pub(crate) fn add<U: Into<KmsObject<T>>>(&mut self, data: U) -> KmsObjectId {
let id = self.next_id;
self.objects.insert(id, data.into());
self.next_id.0 += 1;
id
}
pub(crate) fn get<'a, U: 'a>(&'a self, id: KmsObjectId) -> Result<&'a U>
where
&'a U: TryFrom<&'a KmsObject<T>>,
{
let object = self.objects.get(&id).ok_or(Error::new(EINVAL))?;
if let Ok(object) = object.try_into() {
Ok(object)
} else {
Err(Error::new(EINVAL))
}
}
pub fn object_type(&self, id: KmsObjectId) -> Result<u32> {
let object = self.objects.get(&id).ok_or(Error::new(EINVAL))?;
Ok(object.object_type())
}
pub fn add_crtc(
&mut self,
driver_data: T::Crtc,
driver_data_state: <T::Crtc as KmsCrtcDriver>::State,
) -> KmsObjectId {
let crtc_index = self.crtcs.len() as u32;
let id = self.add(Mutex::new(KmsCrtc {
crtc_index,
gamma_size: 0,
properties: KmsCrtc::base_properties(),
state: KmsCrtcState {
fb_id: None,
mode: None,
driver_data: driver_data_state,
},
driver_data,
}));
self.crtcs.push(id);
id
}
pub fn crtc_ids(&self) -> &[KmsObjectId] {
&self.crtcs
}
pub fn crtcs(&self) -> impl Iterator<Item = &Mutex<KmsCrtc<T>>> + use<'_, T> {
self.crtcs
.iter()
.map(|&id| self.get::<Mutex<KmsCrtc<T>>>(id).unwrap())
}
pub fn get_crtc(&self, id: KmsObjectId) -> Result<&Mutex<KmsCrtc<T>>> {
self.get(id)
}
pub fn add_framebuffer(&mut self, fb: KmsFramebuffer<T>) -> KmsObjectId {
let id = self.add(fb);
self.framebuffers.push(id);
id
}
pub fn remove_framebuffer(&mut self, id: KmsObjectId) -> Result<()> {
let Some(object) = self.objects.get(&id) else {
return Err(Error::new(EINVAL));
};
let KmsObject::Framebuffer(_) = object else {
return Err(Error::new(EINVAL));
};
self.objects.remove(&id).unwrap();
Ok(())
}
pub fn fb_ids(&self) -> &[KmsObjectId] {
&self.framebuffers
}
pub fn get_framebuffer(&self, id: KmsObjectId) -> Result<&KmsFramebuffer<T>> {
self.get(id)
}
}
#[derive(Debug, Copy, Clone, Eq, PartialEq, Hash)]
pub struct KmsObjectId(pub(crate) u32);
impl KmsObjectId {
pub const INVALID: KmsObjectId = KmsObjectId(0);
}
impl From<KmsObjectId> for u64 {
fn from(value: KmsObjectId) -> Self {
value.0.into()
}
}
macro_rules! define_object_kinds {
(<$T:ident> $(
$variant:ident($data:ty) = $type:ident,
)*) => {
#[derive(Debug)]
pub(crate) enum KmsObject<$T: GraphicsAdapter> {
$($variant($data),)*
}
impl<$T: GraphicsAdapter> KmsObject<$T> {
fn object_type(&self) -> u32 {
match self {
$(Self::$variant(_) => $type,)*
}
}
}
$(
impl<$T: GraphicsAdapter> From<$data> for KmsObject<$T> {
fn from(value: $data) -> Self {
Self::$variant(value)
}
}
impl<'a, $T: GraphicsAdapter> TryFrom<&'a KmsObject<$T>> for &'a $data {
type Error = ();
fn try_from(value: &'a KmsObject<T>) -> Result<Self, Self::Error> {
match value {
KmsObject::$variant(data) => Ok(data),
_ => Err(()),
}
}
}
)*
};
}
define_object_kinds! { <T>
Crtc(Mutex<KmsCrtc<T>>) = DRM_MODE_OBJECT_CRTC,
Connector(Mutex<KmsConnector<T>>) = DRM_MODE_OBJECT_CONNECTOR,
Encoder(KmsEncoder) = DRM_MODE_OBJECT_ENCODER,
Property(KmsProperty) = DRM_MODE_OBJECT_PROPERTY,
Framebuffer(KmsFramebuffer<T>) = DRM_MODE_OBJECT_FB,
Blob(KmsBlob) = DRM_MODE_OBJECT_BLOB,
}
pub trait KmsCrtcDriver: Debug {
type State: Clone + Debug;
}
impl KmsCrtcDriver for () {
type State = ();
}
#[derive(Debug)]
pub struct KmsCrtc<T: GraphicsAdapter> {
pub crtc_index: u32,
pub gamma_size: u32,
pub properties: Vec<KmsPropertyData<Self>>,
pub state: KmsCrtcState<T>,
pub driver_data: T::Crtc,
}
#[derive(Debug)]
pub struct KmsCrtcState<T: GraphicsAdapter> {
pub fb_id: Option<KmsObjectId>,
pub mode: Option<drm_mode_modeinfo>,
pub driver_data: <T::Crtc as KmsCrtcDriver>::State,
}
impl<T: GraphicsAdapter> Clone for KmsCrtcState<T> {
fn clone(&self) -> Self {
Self {
fb_id: self.fb_id.clone(),
mode: self.mode.clone(),
driver_data: self.driver_data.clone(),
}
}
}
define_object_props!(object, KmsCrtc<T: GraphicsAdapter> {});
#[derive(Debug)]
pub struct KmsFramebuffer<T: GraphicsAdapter> {
pub width: u32,
pub height: u32,
pub pitch: u32,
pub bpp: u32,
pub depth: u32,
pub buffer: Arc<T::Buffer>,
pub driver_data: T::Framebuffer,
}
@@ -0,0 +1,241 @@
use std::ffi::c_char;
use std::fmt::Debug;
use std::mem;
use drm_sys::{
DRM_MODE_DPMS_OFF, DRM_MODE_DPMS_ON, DRM_MODE_DPMS_STANDBY, DRM_MODE_DPMS_SUSPEND,
DRM_MODE_OBJECT_CRTC, DRM_MODE_OBJECT_FB, DRM_PLANE_TYPE_CURSOR, DRM_PLANE_TYPE_OVERLAY,
DRM_PLANE_TYPE_PRIMARY, DRM_PROP_NAME_LEN,
};
use syscall::{Error, Result, EINVAL};
use crate::kms::objects::{KmsObject, KmsObjectId, KmsObjects};
use crate::GraphicsAdapter;
impl<T: GraphicsAdapter> KmsObjects<T> {
pub fn add_property(
&mut self,
name: &str,
immutable: bool,
atomic: bool,
kind: KmsPropertyKind,
) -> KmsObjectId {
match &kind {
KmsPropertyKind::Range(start, end) => assert!(start < end),
KmsPropertyKind::Enum(_variants) => {
// FIXME check duplicate variant numbers
}
KmsPropertyKind::Blob => {}
KmsPropertyKind::Bitmask(_bitmask_flags) => {
// FIXME check overlapping flag numbers
}
KmsPropertyKind::Object { type_: _ } => {}
KmsPropertyKind::SignedRange(start, end) => assert!(start < end),
}
self.add(KmsProperty {
name: KmsPropertyName::new("Property name", name),
immutable,
atomic,
kind,
})
}
pub fn get_property(&self, id: KmsObjectId) -> Result<&KmsProperty> {
self.get(id)
}
pub fn get_object_properties_data(&self, id: KmsObjectId) -> Result<(Vec<u32>, Vec<u64>)> {
let object = self.objects.get(&id).ok_or(Error::new(EINVAL))?;
match object {
KmsObject::Crtc(crtc) => {
let crtc = crtc.lock().unwrap();
let props = &crtc.properties;
Ok((
props.iter().map(|prop| prop.id.0).collect::<Vec<_>>(),
props
.iter()
.map(|prop| (prop.getter)(&crtc))
.collect::<Vec<_>>(),
))
}
KmsObject::Connector(connector) => {
let connector = connector.lock().unwrap();
let props = &connector.properties;
Ok((
props.iter().map(|prop| prop.id.0).collect::<Vec<_>>(),
props
.iter()
.map(|prop| (prop.getter)(&connector))
.collect::<Vec<_>>(),
))
}
KmsObject::Encoder(_)
| KmsObject::Property(_)
| KmsObject::Framebuffer(_)
| KmsObject::Blob(_) => Ok((vec![], vec![])),
}
}
pub fn add_blob(&mut self, data: Vec<u8>) -> KmsObjectId {
self.add(KmsBlob { data })
}
pub fn get_blob(&self, id: KmsObjectId) -> Result<&[u8]> {
Ok(&self.get::<KmsBlob>(id)?.data)
}
}
#[derive(Copy, Clone)]
pub struct KmsPropertyName(pub [c_char; DRM_PROP_NAME_LEN as usize]);
impl KmsPropertyName {
fn new(context: &str, name: &str) -> KmsPropertyName {
if name.len() > DRM_PROP_NAME_LEN as usize {
panic!("{context} {name} is too long");
}
let mut name_bytes = [0; DRM_PROP_NAME_LEN as usize];
for (to, &from) in name_bytes.iter_mut().zip(name.as_bytes()) {
*to = from as c_char;
}
KmsPropertyName(name_bytes)
}
}
impl Debug for KmsPropertyName {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let u8_bytes = unsafe { mem::transmute::<&[c_char], &[u8]>(&self.0) };
f.write_str(&String::from_utf8_lossy(u8_bytes).trim_end_matches('\0'))
}
}
#[derive(Debug)]
pub struct KmsProperty {
pub name: KmsPropertyName,
pub immutable: bool,
pub atomic: bool,
pub kind: KmsPropertyKind,
}
#[derive(Debug)]
pub enum KmsPropertyKind {
Range(u64, u64),
Enum(Vec<(KmsPropertyName, u64)>),
Blob,
Bitmask(Vec<(KmsPropertyName, u64)>),
Object { type_: u32 },
SignedRange(i64, i64),
}
#[derive(Debug)]
pub struct KmsPropertyData<T> {
pub id: KmsObjectId,
pub getter: fn(&T) -> u64,
}
#[derive(Debug)]
pub struct KmsBlob {
data: Vec<u8>,
}
macro_rules! define_properties {
($($prop:ident $($prop_name:literal)?: $prop_type:ident $({$($prop_content:tt)*})? [$($prop_flag:ident)?],)*) => {
$(#[allow(non_upper_case_globals)] pub const $prop: KmsObjectId = KmsObjectId(1 + ${index()});)*
pub(super) fn init_standard_props<T: GraphicsAdapter>(objects: &mut KmsObjects<T>) {
$(
assert_eq!(objects.add_property(
define_properties!(@prop_name $prop $($prop_name)?),
define_properties!(@is_immutable $($prop_flag)?),
define_properties!(@is_atomic $($prop_flag)?),
define_properties!(@prop_kind $prop_type $({$($prop_content)*})?),
), $prop);
)*
}
};
(@prop_name $prop:ident $prop_name:literal) => { $prop_name };
(@prop_name $prop:ident) => { stringify!($prop) };
(@is_immutable) => { false };
(@is_immutable immutable) => { true };
(@is_immutable atomic) => { false };
(@is_atomic) => { false };
(@is_atomic immutable) => { false };
(@is_atomic atomic) => { true };
(@prop_kind range { $start:expr, $end:expr }) => {
KmsPropertyKind::Range($start, $end)
};
(@prop_kind enum { $($variant:ident = $value:expr,)* }) => {
KmsPropertyKind::Enum(vec![
$((KmsPropertyName::new("Property variant name", stringify!($variant)), $value)),*]
)
};
(@prop_kind blob) => {
KmsPropertyKind::Blob
};
(@prop_kind object { $type:ident }) => {
KmsPropertyKind::Object { type_: $type }
};
(@prop_kind srange { $start:expr, $end:expr }) => {
KmsPropertyKind::SignedRange($start, $end)
};
}
define_properties! {
// Connector + Plane
CRTC_ID: object { DRM_MODE_OBJECT_CRTC } [atomic],
// Connector
EDID: blob [immutable],
DPMS: enum {
On = u64::from(DRM_MODE_DPMS_ON),
Standby = u64::from(DRM_MODE_DPMS_STANDBY),
Suspend = u64::from(DRM_MODE_DPMS_SUSPEND),
Off = u64::from(DRM_MODE_DPMS_OFF),
} [],
// CRTC
    ACTIVE: range { 0, 1 } [atomic],
MODE_ID: blob [atomic],
// Plane
type_ "type": enum {
Overlay = u64::from(DRM_PLANE_TYPE_OVERLAY),
Primary = u64::from(DRM_PLANE_TYPE_PRIMARY),
Cursor = u64::from(DRM_PLANE_TYPE_CURSOR),
} [immutable],
FB_ID: object { DRM_MODE_OBJECT_FB } [atomic],
CRTC_X: srange { i64::from(i32::MIN), i64::from(i32::MAX) } [atomic],
CRTC_Y: srange { i64::from(i32::MIN), i64::from(i32::MAX) } [atomic],
CRTC_W: range { 0, u64::from(u32::MAX) } [atomic],
CRTC_H: range { 0, u64::from(u32::MAX) } [atomic],
SRC_X: range { 0, u64::from(u32::MAX) } [atomic],
SRC_Y: range { 0, u64::from(u32::MAX) } [atomic],
SRC_W: range { 0, u64::from(u32::MAX) } [atomic],
SRC_H: range { 0, u64::from(u32::MAX) } [atomic],
FB_DAMAGE_CLIPS: blob [atomic],
}
macro_rules! define_object_props {
($object:ident, $obj:ident$(<$($T:ident$(: $bound:ident)?),*>)? { $(
$prop:ident {
get => $get:expr,
}
)* }) => {
impl$(<$($T$(: $bound)?),*>)? $obj$(<$($T),*>)? {
pub(super) fn base_properties() -> Vec<KmsPropertyData<Self>> {
vec![$(KmsPropertyData {
id: $prop,
getter: |$object| $get
}),*]
}
}
};
}
pub(super) use define_object_props;
@@ -0,0 +1,986 @@
#![feature(macro_metavar_expr)]
use std::collections::HashMap;
use std::fmt::Debug;
use std::fs::File;
use std::io::{self, Write};
use std::os::fd::BorrowedFd;
use std::sync::{Arc, Mutex};
use std::{cmp, mem};
use drm_fourcc::DrmFourcc;
use drm_sys::{
drm_mode_property_enum, DRM_MODE_CURSOR_BO, DRM_MODE_CURSOR_MOVE, DRM_MODE_PROP_ATOMIC,
DRM_MODE_PROP_BITMASK, DRM_MODE_PROP_BLOB, DRM_MODE_PROP_ENUM, DRM_MODE_PROP_IMMUTABLE,
DRM_MODE_PROP_OBJECT, DRM_MODE_PROP_RANGE, DRM_MODE_PROP_SIGNED_RANGE,
};
use inputd::{DisplayHandle, VtEventKind};
use libredox::Fd;
use redox_scheme::scheme::{register_scheme_inner, SchemeState, SchemeSync};
use redox_scheme::{CallerCtx, OpenResult, RequestKind, SignalBehavior, Socket};
use scheme_utils::{FpathWriter, HandleMap};
use syscall::schemev2::NewFdFlags;
use syscall::{Error, MapFlags, Result, EACCES, EAGAIN, EINVAL, ENOENT, EOPNOTSUPP};
use crate::kms::connector::{KmsConnectorDriver, KmsConnectorState};
use crate::kms::objects::{self, KmsCrtc, KmsCrtcDriver, KmsCrtcState, KmsObjectId, KmsObjects};
use crate::kms::properties::KmsPropertyKind;
pub mod kms;
#[derive(Debug, Copy, Clone)]
#[repr(C, packed)]
pub struct Damage {
pub x: u32,
pub y: u32,
pub width: u32,
pub height: u32,
}
impl Damage {
fn merge(self, other: Self) -> Self {
if self.width == 0 || self.height == 0 {
return other;
}
if other.width == 0 || other.height == 0 {
return self;
}
let x = cmp::min(self.x, other.x);
let y = cmp::min(self.y, other.y);
let x2 = cmp::max(self.x + self.width, other.x + other.width);
let y2 = cmp::max(self.y + self.height, other.y + other.height);
Damage {
x,
y,
width: x2 - x,
height: y2 - y,
}
}
#[must_use]
pub fn clip(mut self, width: u32, height: u32) -> Self {
// Clip damage
let x2 = self.x + self.width;
self.x = cmp::min(self.x, width);
if x2 > width {
self.width = width - self.x;
}
let y2 = self.y + self.height;
self.y = cmp::min(self.y, height);
if y2 > height {
self.height = height - self.y;
}
self
}
}
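`Damage::merge` computes the bounding box of two damage rectangles, treating a zero-area rectangle as the identity, and `clip` clamps a rectangle to the framebuffer bounds. The merge logic is restated below so the example runs standalone:

```rust
// Restated from the driver so the example is self-contained.
#[derive(Debug, Copy, Clone, PartialEq)]
struct Damage { x: u32, y: u32, width: u32, height: u32 }

impl Damage {
    // Bounding box of two rectangles; a zero-area rectangle acts as identity.
    fn merge(self, other: Self) -> Self {
        if self.width == 0 || self.height == 0 { return other; }
        if other.width == 0 || other.height == 0 { return self; }
        let x = self.x.min(other.x);
        let y = self.y.min(other.y);
        let x2 = (self.x + self.width).max(other.x + other.width);
        let y2 = (self.y + self.height).max(other.y + other.height);
        Damage { x, y, width: x2 - x, height: y2 - y }
    }
}

fn main() {
    let a = Damage { x: 0, y: 0, width: 10, height: 10 };
    let b = Damage { x: 5, y: 5, width: 10, height: 10 };
    // The union of [0, 10) and [5, 15) on both axes is [0, 15).
    assert_eq!(a.merge(b), Damage { x: 0, y: 0, width: 15, height: 15 });
    // Zero-area damage is the identity element.
    assert_eq!(a.merge(Damage { x: 9, y: 9, width: 0, height: 0 }), a);
}
```

Merging before submitting to the adapter keeps per-frame damage as a single rectangle, trading some over-painting for fewer update commands.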
pub trait GraphicsAdapter: Sized + Debug {
type Connector: KmsConnectorDriver;
type Crtc: KmsCrtcDriver;
type Buffer: Buffer;
type Framebuffer: Framebuffer;
fn name(&self) -> &'static [u8];
fn desc(&self) -> &'static [u8];
fn init(&mut self, objects: &mut KmsObjects<Self>);
fn get_cap(&self, cap: u32) -> Result<u64>;
fn set_client_cap(&self, cap: u32, value: u64) -> Result<()>;
fn probe_connector(&mut self, objects: &mut KmsObjects<Self>, id: KmsObjectId);
fn create_dumb_buffer(&mut self, width: u32, height: u32) -> (Self::Buffer, u32);
fn map_dumb_buffer(&mut self, buffer: &Self::Buffer) -> *mut u8;
fn create_framebuffer(&mut self, buffer: &Self::Buffer) -> Self::Framebuffer;
fn set_crtc(
&mut self,
objects: &KmsObjects<Self>,
crtc: &Mutex<KmsCrtc<Self>>,
new_state: KmsCrtcState<Self>,
damage: Damage,
) -> syscall::Result<()>;
fn hw_cursor_size(&self) -> Option<(u32, u32)>;
fn handle_cursor(&mut self, cursor: &CursorPlane<Self::Buffer>, dirty_fb: bool);
}
pub trait Buffer: Debug {
fn size(&self) -> usize;
}
pub trait Framebuffer: Debug {}
impl Framebuffer for () {}
pub struct CursorPlane<C: Buffer> {
pub x: i32,
pub y: i32,
pub hot_x: i32,
pub hot_y: i32,
pub buffer: Option<Arc<C>>,
}
pub struct GraphicsScheme<T: GraphicsAdapter> {
inner: GraphicsSchemeInner<T>,
inputd_handle: DisplayHandle,
state: SchemeState,
}
impl<T: GraphicsAdapter> GraphicsScheme<T> {
pub fn new(mut adapter: T, scheme_name: String, early: bool) -> Self {
assert!(scheme_name.starts_with("display"));
let socket = Socket::nonblock().expect("failed to create graphics scheme");
        let disable_graphical_debug = Some(
            File::open("/scheme/debug/disable-graphical-debug")
                .expect("driver-graphics: failed to open /scheme/debug/disable-graphical-debug"),
        );
let mut objects = KmsObjects::new();
adapter.init(&mut objects);
for connector_id in objects.connector_ids().to_vec() {
adapter.probe_connector(&mut objects, connector_id)
}
let mut inner = GraphicsSchemeInner {
adapter,
scheme_name,
disable_graphical_debug,
socket,
objects,
handles: HandleMap::new(),
active_vt: 0,
vts: HashMap::new(),
};
let cap_id = inner.scheme_root().expect("failed to get this scheme root");
register_scheme_inner(&inner.socket, &inner.scheme_name, cap_id)
.expect("failed to register graphics scheme root");
let display_handle = if early {
DisplayHandle::new_early(&inner.scheme_name).unwrap()
} else {
DisplayHandle::new(&inner.scheme_name).unwrap()
};
Self {
inner,
inputd_handle: display_handle,
state: SchemeState::new(),
}
}
pub fn event_handle(&self) -> &Fd {
self.inner.socket.inner()
}
pub fn inputd_event_handle(&self) -> BorrowedFd<'_> {
self.inputd_handle.inner()
}
pub fn adapter(&self) -> &T {
&self.inner.adapter
}
pub fn adapter_mut(&mut self) -> &mut T {
&mut self.inner.adapter
}
pub fn kms_objects(&self) -> &KmsObjects<T> {
&self.inner.objects
}
pub fn kms_objects_mut(&mut self) -> &mut KmsObjects<T> {
&mut self.inner.objects
}
pub fn adapter_and_kms_objects_mut(&mut self) -> (&mut T, &mut KmsObjects<T>) {
(&mut self.inner.adapter, &mut self.inner.objects)
}
pub fn handle_vt_events(&mut self) {
while let Some(vt_event) = self
.inputd_handle
.read_vt_event()
.expect("driver-graphics: failed to read display handle")
{
match vt_event.kind {
VtEventKind::Activate => self.inner.activate_vt(vt_event.vt),
}
}
}
pub fn notify_displays_changed(&mut self) {
// FIXME notify clients
}
/// Process new scheme requests.
///
/// This needs to be called each time there is a new event on the scheme
/// file.
pub fn tick(&mut self) -> io::Result<()> {
loop {
let request = match self.inner.socket.next_request(SignalBehavior::Restart) {
Ok(Some(request)) => request,
Ok(None) => {
// Scheme likely got unmounted
std::process::exit(0);
}
Err(err) if err.errno == EAGAIN => break,
Err(err) => panic!("driver-graphics: failed to read display scheme: {err}"),
};
match request.kind() {
RequestKind::Call(call) => {
let response = call.handle_sync(&mut self.inner, &mut self.state);
self.inner
.socket
.write_response(response, SignalBehavior::Restart)
.expect("driver-graphics: failed to write response");
}
RequestKind::OnClose { id } => {
self.inner.on_close(id);
}
_ => (),
}
}
Ok(())
}
}
struct GraphicsSchemeInner<T: GraphicsAdapter> {
adapter: T,
scheme_name: String,
disable_graphical_debug: Option<File>,
socket: Socket,
objects: KmsObjects<T>,
handles: HandleMap<Handle<T>>,
active_vt: usize,
vts: HashMap<usize, VtState<T>>,
}
struct VtState<T: GraphicsAdapter> {
connector_state: Vec<KmsConnectorState<T>>,
crtc_state: Vec<KmsCrtcState<T>>,
cursor_plane: CursorPlane<T::Buffer>,
}
enum Handle<T: GraphicsAdapter> {
V2 {
vt: usize,
next_id: u32,
buffers: HashMap<u32, Arc<T::Buffer>>,
},
SchemeRoot,
}
impl<T: GraphicsAdapter> GraphicsSchemeInner<T> {
fn get_or_create_vt<'a>(
objects: &KmsObjects<T>,
vts: &'a mut HashMap<usize, VtState<T>>,
vt: usize,
) -> &'a mut VtState<T> {
vts.entry(vt).or_insert_with(|| VtState {
connector_state: objects
.connectors()
.map(|connector| connector.lock().unwrap().state.clone())
.collect(),
crtc_state: objects
.crtcs()
.map(|crtc| crtc.lock().unwrap().state.clone())
.collect(),
cursor_plane: CursorPlane {
x: 0,
y: 0,
hot_x: 0,
hot_y: 0,
buffer: None,
},
})
}
fn activate_vt(&mut self, vt: usize) {
        log::info!("activate vt {}", vt);
// Disable the kernel graphical debug writing once switching vt's for the
// first time. This way the kernel graphical debug remains enabled if the
// userspace logging infrastructure doesn't start up because for example a
// kernel panic happened prior to it starting up or logd crashed.
if let Some(mut disable_graphical_debug) = self.disable_graphical_debug.take() {
let _ = disable_graphical_debug.write(&[1]);
}
self.active_vt = vt;
let vt_state = GraphicsSchemeInner::get_or_create_vt(&self.objects, &mut self.vts, vt);
for (connector_idx, connector_state) in vt_state.connector_state.iter().enumerate() {
let connector_id = self.objects.connector_ids()[connector_idx];
let mut connector = self
.objects
.get_connector(connector_id)
.unwrap()
.lock()
.unwrap();
connector.state = connector_state.clone();
}
for (crtc_idx, crtc_state) in vt_state.crtc_state.iter().enumerate() {
let crtc_id = self.objects.crtc_ids()[crtc_idx];
let crtc = self.objects.get_crtc(crtc_id).unwrap();
            // Assumes connectors and CRTCs are paired by index.
            let connector_id = self.objects.connector_ids()[crtc_idx];
let fb = crtc_state.fb_id.map(|fb_id| {
self.objects
.get_framebuffer(fb_id)
.expect("removed framebuffers should be unset")
});
self.adapter
.set_crtc(
&self.objects,
crtc,
crtc_state.clone(),
Damage {
x: 0,
y: 0,
width: fb.map_or(0, |fb| fb.width),
height: fb.map_or(0, |fb| fb.height),
},
)
.unwrap();
self.objects
.get_connector(connector_id)
.unwrap()
.lock()
.unwrap()
.state
.crtc_id = crtc_id;
}
if self.adapter.hw_cursor_size().is_some() {
self.adapter.handle_cursor(&vt_state.cursor_plane, true);
}
}
}
const MAP_FAKE_OFFSET_MULTIPLIER: usize = 0x10_000_000;
impl<T: GraphicsAdapter> SchemeSync for GraphicsSchemeInner<T> {
fn scheme_root(&mut self) -> Result<usize> {
Ok(self.handles.insert(Handle::SchemeRoot))
}
fn openat(
&mut self,
dirfd: usize,
path: &str,
_flags: usize,
_fcntl_flags: u32,
_ctx: &CallerCtx,
) -> Result<OpenResult> {
if !matches!(self.handles.get(dirfd)?, Handle::SchemeRoot) {
return Err(Error::new(EACCES));
}
if path.is_empty() {
return Err(Error::new(EINVAL));
}
let handle = if path.starts_with("v") {
if !path.starts_with("v2/") {
return Err(Error::new(ENOENT));
}
let vt = path["v2/".len()..]
.parse::<usize>()
.map_err(|_| Error::new(EINVAL))?;
// Ensure the VT exists such that the rest of the methods can freely access it.
Self::get_or_create_vt(&self.objects, &mut self.vts, vt);
Handle::V2 {
vt,
next_id: 0,
buffers: HashMap::new(),
}
} else {
return Err(Error::new(EINVAL));
};
let id = self.handles.insert(handle);
Ok(OpenResult::ThisScheme {
number: id,
flags: NewFdFlags::empty(),
})
}
fn fpath(&mut self, id: usize, buf: &mut [u8], _ctx: &CallerCtx) -> syscall::Result<usize> {
FpathWriter::with(buf, &self.scheme_name, |w| {
match self.handles.get(id)? {
Handle::V2 {
vt,
next_id: _,
buffers: _,
} => write!(w, "v2/{vt}").unwrap(),
Handle::SchemeRoot => return Err(Error::new(EOPNOTSUPP)),
};
Ok(())
})
}
fn call(
&mut self,
id: usize,
payload: &mut [u8],
metadata: &[u64],
_ctx: &CallerCtx,
) -> Result<usize> {
use redox_ioctl::drm as ipc;
fn id_index(id: u32) -> u32 {
id & 0xFF
}
fn plane_id(i: u32) -> u32 {
id_index(i) | (1 << 13)
}
match self.handles.get_mut(id)? {
Handle::SchemeRoot => return Err(Error::new(EOPNOTSUPP)),
Handle::V2 {
vt,
next_id,
buffers,
} => match metadata[0] {
ipc::VERSION => ipc::DrmVersion::with(payload, |mut data| {
data.set_version_major(1);
data.set_version_minor(4);
data.set_version_patchlevel(0);
data.set_name(unsafe { mem::transmute(self.adapter.name()) });
data.set_date(unsafe { mem::transmute(&b"0"[..]) });
data.set_desc(unsafe { mem::transmute(self.adapter.desc()) });
Ok(0)
}),
ipc::GET_CAP => ipc::DrmGetCap::with(payload, |mut data| {
data.set_value(
self.adapter.get_cap(
data.capability()
.try_into()
.map_err(|_| syscall::Error::new(EINVAL))?,
)?,
);
Ok(0)
}),
ipc::SET_CLIENT_CAP => ipc::DrmSetClientCap::with(payload, |data| {
self.adapter.set_client_cap(
data.capability()
.try_into()
.map_err(|_| syscall::Error::new(EINVAL))?,
data.value(),
)?;
Ok(0)
}),
ipc::MODE_CARD_RES => ipc::DrmModeCardRes::with(payload, |mut data| {
let conn_ids = self
.objects
.connector_ids()
.iter()
.map(|id| id.0)
.collect::<Vec<_>>();
let crtc_ids = self
.objects
.crtc_ids()
.iter()
.map(|id| id.0)
.collect::<Vec<_>>();
let enc_ids = self
.objects
.encoder_ids()
.iter()
.map(|id| id.0)
.collect::<Vec<_>>();
let fb_ids = self
.objects
.fb_ids()
.iter()
.map(|id| id.0)
.collect::<Vec<_>>();
data.set_fb_id_ptr(&fb_ids);
data.set_crtc_id_ptr(&crtc_ids);
data.set_connector_id_ptr(&conn_ids);
data.set_encoder_id_ptr(&enc_ids);
data.set_min_width(0);
data.set_max_width(16384);
data.set_min_height(0);
data.set_max_height(16384);
Ok(0)
}),
ipc::MODE_GET_CRTC => ipc::DrmModeCrtc::with(payload, |mut data| {
let crtc = self
.objects
.get_crtc(KmsObjectId(data.crtc_id()))?
.lock()
.unwrap();
// Don't touch set_connectors, that is only used by MODE_SET_CRTC
data.set_fb_id(crtc.state.fb_id.unwrap_or(KmsObjectId::INVALID).0);
// FIXME fill x and y with the data from the primary plane
data.set_x(0);
data.set_y(0);
data.set_gamma_size(crtc.gamma_size);
if let Some(mode) = crtc.state.mode {
data.set_mode_valid(1);
data.set_mode(mode);
} else {
data.set_mode_valid(0);
data.set_mode(Default::default());
}
Ok(0)
}),
ipc::MODE_SET_CRTC => ipc::DrmModeCrtc::with(payload, |data| {
let crtc = self.objects.get_crtc(KmsObjectId(data.crtc_id()))?;
let connector_ids: Vec<KmsObjectId> = data
.set_connectors_ptr()
.iter()
.take(data.count_connectors() as usize)
.map(|&id| KmsObjectId(id))
.collect();
let fb_id = if data.fb_id() != 0 {
Some(KmsObjectId(data.fb_id()))
} else {
None
};
let mode = if data.mode_valid() != 0 {
Some(data.mode())
} else {
None
};
let mut new_state = crtc.lock().unwrap().state.clone();
new_state.fb_id = fb_id;
new_state.mode = mode;
if *vt == self.active_vt {
self.adapter.set_crtc(
&self.objects,
crtc,
new_state.clone(),
Damage {
x: data.x(),
y: data.y(),
width: mode.map_or(0, |m| m.hdisplay as u32),
height: mode.map_or(0, |m| m.vdisplay as u32),
},
)?;
for connector in connector_ids {
self.objects
.get_connector(connector)?
.lock()
.unwrap()
.state
.crtc_id = KmsObjectId(data.crtc_id());
}
}
self.vts.get_mut(vt).unwrap().crtc_state
[crtc.lock().unwrap().crtc_index as usize] = new_state;
Ok(0)
}),
ipc::MODE_CURSOR => ipc::DrmModeCursor::with(payload, |data| {
let vt_state = self.vts.get_mut(vt).unwrap();
let cursor_plane = &mut vt_state.cursor_plane;
let update_buffer = data.flags() & DRM_MODE_CURSOR_BO != 0;
if update_buffer {
cursor_plane.buffer = if data.handle() == 0 {
None
} else if let Some(buffer) = buffers.get(&data.handle()) {
Some(buffer.clone())
} else {
return Err(Error::new(EINVAL));
};
}
if data.flags() & DRM_MODE_CURSOR_MOVE != 0 {
cursor_plane.x = data.x();
cursor_plane.y = data.y();
}
self.adapter.handle_cursor(cursor_plane, update_buffer);
Ok(0)
}),
ipc::MODE_GET_ENCODER => ipc::DrmModeGetEncoder::with(payload, |mut data| {
let encoder = self.objects.get_encoder(KmsObjectId(data.encoder_id()))?;
data.set_crtc_id(encoder.crtc_id.0);
data.set_possible_crtcs(encoder.possible_crtcs);
data.set_possible_clones(encoder.possible_clones);
Ok(0)
}),
ipc::MODE_GET_CONNECTOR => ipc::DrmModeGetConnector::with(payload, |mut data| {
if data.count_modes() == 0 {
self.adapter
.probe_connector(&mut self.objects, KmsObjectId(data.connector_id()));
}
let connector = self
.objects
.get_connector(KmsObjectId(data.connector_id()))?
.lock()
.unwrap();
data.set_encoders_ptr(&[connector.encoder_id.0]);
data.set_modes_ptr(&connector.modes);
data.set_connector_type(data.connector_type());
data.set_connector_type_id(data.connector_type_id());
data.set_connection(connector.connection as u32);
data.set_mm_width(connector.mm_width);
data.set_mm_height(connector.mm_height);
data.set_subpixel(connector.subpixel as u32);
drop(connector);
let (props, prop_vals) = self
.objects
.get_object_properties_data(KmsObjectId(data.connector_id()))?;
data.set_props_ptr(&props);
data.set_prop_values_ptr(&prop_vals);
Ok(0)
}),
ipc::MODE_GET_PROPERTY => ipc::DrmModeGetProperty::with(payload, |mut data| {
let property = self.objects.get_property(KmsObjectId(data.prop_id()))?;
data.set_name(property.name.0);
let mut flags = 0;
if property.immutable {
flags |= DRM_MODE_PROP_IMMUTABLE;
}
if property.atomic {
flags |= DRM_MODE_PROP_ATOMIC;
}
match &property.kind {
&KmsPropertyKind::Range(start, end) => {
data.set_flags(flags | DRM_MODE_PROP_RANGE);
data.set_values_ptr(&[start, end]);
data.set_enum_blob_ptr(&[]);
}
KmsPropertyKind::Enum(variants) => {
data.set_flags(flags | DRM_MODE_PROP_ENUM);
data.set_values_ptr(
&variants.iter().map(|&(_, value)| value).collect::<Vec<_>>(),
);
data.set_enum_blob_ptr(
&variants
.iter()
.map(|&(name, value)| drm_mode_property_enum {
name: name.0,
value,
})
.collect::<Vec<_>>(),
);
}
KmsPropertyKind::Blob => {
data.set_flags(flags | DRM_MODE_PROP_BLOB);
data.set_values_ptr(&[]);
data.set_enum_blob_ptr(&[]);
}
KmsPropertyKind::Bitmask(bitmask_flags) => {
data.set_flags(flags | DRM_MODE_PROP_BITMASK);
data.set_values_ptr(
&bitmask_flags
.iter()
.map(|&(_, value)| value)
.collect::<Vec<_>>(),
);
data.set_enum_blob_ptr(
&bitmask_flags
.iter()
.map(|&(name, value)| drm_mode_property_enum {
name: name.0,
value,
})
.collect::<Vec<_>>(),
);
}
KmsPropertyKind::Object { type_ } => {
data.set_flags(flags | DRM_MODE_PROP_OBJECT);
data.set_values_ptr(&[u64::from(*type_)]);
data.set_enum_blob_ptr(&[]);
}
&KmsPropertyKind::SignedRange(start, end) => {
data.set_flags(flags | DRM_MODE_PROP_SIGNED_RANGE);
data.set_values_ptr(&[start as u64, end as u64]);
data.set_enum_blob_ptr(&[]);
}
}
Ok(0)
}),
ipc::MODE_GET_PROP_BLOB => ipc::DrmModeGetBlob::with(payload, |mut data| {
let blob = self.objects.get_blob(KmsObjectId(data.blob_id()))?;
data.set_data(&blob);
Ok(0)
}),
ipc::MODE_GET_FB => ipc::DrmModeFbCmd::with(payload, |mut data| {
let fb = self.objects.get_framebuffer(KmsObjectId(data.fb_id()))?;
*next_id += 1;
buffers.insert(*next_id, fb.buffer.clone());
data.set_width(fb.width);
data.set_height(fb.height);
data.set_pitch(fb.pitch);
data.set_bpp(fb.bpp);
data.set_depth(fb.depth);
data.set_handle(*next_id);
Ok(0)
}),
ipc::MODE_ADD_FB => ipc::DrmModeFbCmd::with(payload, |mut data| {
let buffer = buffers.get(&data.handle()).ok_or(Error::new(EINVAL))?;
let fb = self.adapter.create_framebuffer(buffer);
let id = self.objects.add_framebuffer(objects::KmsFramebuffer {
width: data.width(),
height: data.height(),
pitch: data.pitch(),
bpp: data.bpp(),
depth: data.depth(),
buffer: buffer.clone(),
driver_data: fb,
});
data.set_fb_id(id.0);
Ok(0)
}),
ipc::MODE_RM_FB => ipc::StandinForUint::with(payload, |data| {
let fb_id = KmsObjectId(data.inner());
self.objects.remove_framebuffer(fb_id)?;
// Disable planes that use this framebuffer.
for (vt, vt_data) in &mut self.vts {
for (crtc_idx, crtc_state) in vt_data.crtc_state.iter_mut().enumerate() {
if crtc_state.fb_id != Some(fb_id) {
continue;
}
crtc_state.fb_id = None;
if *vt != self.active_vt {
continue;
}
let crtc = self.objects.crtcs().nth(crtc_idx).unwrap();
self.adapter
.set_crtc(
&self.objects,
crtc,
crtc_state.clone(),
Damage {
x: 0,
y: 0,
width: 0,
height: 0,
},
)
.unwrap();
}
}
Ok(0)
}),
ipc::MODE_DIRTYFB => ipc::DrmModeFbDirtyCmd::with(payload, |data| {
let fb = self.objects.get_framebuffer(KmsObjectId(data.fb_id()))?;
let damage = data
.clips_ptr()
.iter()
.map(|rect| Damage {
x: u32::from(rect.x1),
y: u32::from(rect.y1),
width: u32::from(rect.x2 - rect.x1),
height: u32::from(rect.y2 - rect.y1),
})
.reduce(Damage::merge)
.unwrap_or(Damage {
x: 0,
y: 0,
width: fb.width,
height: fb.height,
});
if *vt == self.active_vt {
for crtc in self.objects.crtcs() {
let state = crtc.lock().unwrap().state.clone();
if state.fb_id == Some(KmsObjectId(data.fb_id())) {
self.adapter.set_crtc(&self.objects, crtc, state, damage)?;
}
}
}
Ok(0)
}),
ipc::MODE_CREATE_DUMB => ipc::DrmModeCreateDumb::with(payload, |mut data| {
if data.bpp() != 32 || data.flags() != 0 {
return Err(Error::new(EINVAL));
}
let (buffer, pitch) =
self.adapter.create_dumb_buffer(data.width(), data.height());
data.set_pitch(pitch);
data.set_size(buffer.size() as u64);
*next_id += 1;
buffers.insert(*next_id, Arc::new(buffer));
data.set_handle(*next_id as u32);
Ok(0)
}),
ipc::MODE_MAP_DUMB => ipc::DrmModeMapDumb::with(payload, |mut data| {
if data.offset() != 0 {
return Err(Error::new(EINVAL));
}
let buffer_id = data.handle();
if !buffers.contains_key(&buffer_id) {
return Err(Error::new(EINVAL));
}
// FIXME use a better scheme for creating map offsets
assert!(buffers[&buffer_id].size() < MAP_FAKE_OFFSET_MULTIPLIER);
data.set_offset((buffer_id as usize * MAP_FAKE_OFFSET_MULTIPLIER) as u64);
Ok(0)
}),
ipc::MODE_DESTROY_DUMB => ipc::DrmModeDestroyDumb::with(payload, |data| {
if buffers.remove(&data.handle()).is_none() {
return Err(Error::new(ENOENT));
}
Ok(0)
}),
ipc::MODE_GET_PLANE_RES => ipc::DrmModeGetPlaneRes::with(payload, |mut data| {
let count = self.objects.crtc_ids().len();
let mut ids = Vec::with_capacity(count);
for i in 0..(count as u32) {
ids.push(plane_id(i));
}
data.set_plane_id_ptr(&ids);
Ok(0)
}),
ipc::MODE_GET_PLANE => ipc::DrmModeGetPlane::with(payload, |mut data| {
let i = id_index(data.plane_id());
let crtc_id = self.objects.crtc_ids()[i as usize];
let crtc = self.objects.get_crtc(crtc_id).unwrap();
data.set_crtc_id(crtc_id.0);
data.set_fb_id(
crtc.lock()
.unwrap()
.state
.fb_id
.unwrap_or(KmsObjectId::INVALID)
.0,
);
data.set_possible_crtcs(1 << i);
data.set_format_type_ptr(&[DrmFourcc::Argb8888 as u32]);
Ok(0)
}),
ipc::MODE_OBJ_GET_PROPERTIES => {
ipc::DrmModeObjGetProperties::with(payload, |mut data| {
// FIXME remove once all drm objects are materialized in self.objects
if data.obj_id() >= 1 << 11 {
data.set_props_ptr(&[]);
data.set_prop_values_ptr(&[]);
return Ok(0);
}
let (props, prop_vals) = self
.objects
.get_object_properties_data(KmsObjectId(data.obj_id()))?;
data.set_props_ptr(&props);
data.set_prop_values_ptr(&prop_vals);
data.set_obj_type(self.objects.object_type(KmsObjectId(data.obj_id()))?);
Ok(0)
})
}
ipc::MODE_CURSOR2 => ipc::DrmModeCursor2::with(payload, |data| {
let vt_state = self.vts.get_mut(vt).unwrap();
let cursor_plane = &mut vt_state.cursor_plane;
let update_buffer = data.flags() & DRM_MODE_CURSOR_BO != 0;
if update_buffer {
cursor_plane.buffer = if data.handle() == 0 {
None
} else if let Some(buffer) = buffers.get(&data.handle()) {
Some(buffer.clone())
} else {
return Err(Error::new(EINVAL));
};
cursor_plane.hot_x = data.hot_x();
cursor_plane.hot_y = data.hot_y();
}
if data.flags() & DRM_MODE_CURSOR_MOVE != 0 {
cursor_plane.x = data.x();
cursor_plane.y = data.y();
}
self.adapter.handle_cursor(cursor_plane, update_buffer);
Ok(0)
}),
ipc::MODE_GET_FB2 => ipc::DrmModeFbCmd2::with(payload, |mut data| {
let fb = self.objects.get_framebuffer(KmsObjectId(data.fb_id()))?;
*next_id += 1;
buffers.insert(*next_id, fb.buffer.clone());
data.set_width(fb.width);
data.set_height(fb.height);
data.set_pixel_format(DrmFourcc::Argb8888 as u32);
data.set_handles([*next_id, 0, 0, 0]);
data.set_pitches([fb.width * 4, 0, 0, 0]);
data.set_offsets([0; 4]);
data.set_modifier([0; 4]);
Ok(0)
}),
_ => return Err(Error::new(EINVAL)),
},
}
}
fn mmap_prep(
&mut self,
id: usize,
offset: u64,
_size: usize,
_flags: MapFlags,
_ctx: &CallerCtx,
) -> syscall::Result<usize> {
// log::trace!("KSMSG MMAP {} {:?} {} {}", id, _flags, _offset, _size);
let (framebuffer, offset) = match self.handles.get(id)? {
Handle::V2 {
vt: _,
next_id: _,
buffers,
} => (
buffers
.get(&((offset as usize / MAP_FAKE_OFFSET_MULTIPLIER) as u32))
.ok_or(Error::new(EINVAL))
.unwrap(),
offset & (MAP_FAKE_OFFSET_MULTIPLIER as u64 - 1),
),
Handle::SchemeRoot => return Err(Error::new(EOPNOTSUPP)),
};
let ptr = T::map_dumb_buffer(&mut self.adapter, framebuffer);
Ok(unsafe { ptr.add(offset as usize) } as usize)
}
fn on_close(&mut self, id: usize) {
self.handles.remove(id);
}
}
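The `MODE_MAP_DUMB` handler and `mmap_prep` above share a scheme for routing mmap calls back to the right buffer: the buffer handle is encoded in the high bits of the fake mmap offset and split off again on map. A minimal sketch of that round trip, assuming `MAP_FAKE_OFFSET_MULTIPLIER` is a power of two larger than any buffer size (the `assert!` in `MODE_MAP_DUMB` enforces the size bound; the `1 << 32` value here is a hypothetical stand-in, since the real constant is defined outside this hunk):

```rust
// Hypothetical stand-in for the driver's MAP_FAKE_OFFSET_MULTIPLIER; the
// real value is defined elsewhere in the crate. It must be a power of two
// so the mask in decode_map_offset recovers the intra-buffer offset.
const MAP_FAKE_OFFSET_MULTIPLIER: u64 = 1 << 32;

/// Encode a buffer handle as a fake mmap offset (MODE_MAP_DUMB side).
fn encode_map_offset(buffer_id: u32) -> u64 {
    u64::from(buffer_id) * MAP_FAKE_OFFSET_MULTIPLIER
}

/// Split a fake offset back into (buffer handle, intra-buffer offset),
/// mirroring the division and mask in mmap_prep.
fn decode_map_offset(offset: u64) -> (u32, u64) {
    let buffer_id = (offset / MAP_FAKE_OFFSET_MULTIPLIER) as u32;
    let intra_buffer = offset & (MAP_FAKE_OFFSET_MULTIPLIER - 1);
    (buffer_id, intra_buffer)
}

fn main() {
    // Mapping page 0x1000 of buffer handle 3 round-trips cleanly.
    let offset = encode_map_offset(3) + 0x1000;
    assert_eq!(decode_map_offset(offset), (3, 0x1000));
}
```

As the `FIXME` in `MODE_MAP_DUMB` notes, this only works while every buffer fits below the multiplier; a real allocator for map offsets would remove that limit.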
@@ -0,0 +1,26 @@
[package]
name = "fbbootlogd"
description = "Boot log drawing daemon"
version = "0.1.0"
edition = "2021"
[dependencies]
drm.workspace = true
orbclient.workspace = true
ransid.workspace = true
redox_event.workspace = true
redox_syscall.workspace = true
redox-scheme.workspace = true
scheme-utils = { path = "../../../scheme-utils" }
console-draw = { path = "../console-draw" }
daemon = { path = "../../../daemon" }
graphics-ipc = { path = "../graphics-ipc" }
inputd = { path = "../../inputd" }
libredox.workspace = true
[features]
default = []
[lints]
workspace = true
@@ -0,0 +1,115 @@
//! Fbbootlogd renders the boot log and presents it on VT1.
//!
//! While fbbootlogd is superficially similar to fbcond, the major difference is:
//!
//! * Fbbootlogd doesn't accept input coming from the keyboard. It only allows getting written to.
//!
//! In the future fbbootlogd may also pull from logd as opposed to having logd push logs to it,
//! and it could display a boot splash like plymouth instead of a boot log when booting in quiet mode.
use std::ops::ControlFlow;
use std::os::fd::AsRawFd;
use event::EventQueue;
use inputd::ConsumerHandleEvent;
use orbclient::Event;
use redox_scheme::Socket;
use scheme_utils::Blocking;
use crate::scheme::FbbootlogScheme;
mod scheme;
fn main() {
daemon::SchemeDaemon::new(daemon);
}
fn daemon(daemon: daemon::SchemeDaemon) -> ! {
let event_queue = EventQueue::new().expect("fbbootlogd: failed to create event queue");
event::user_data! {
enum Source {
Scheme,
Input,
}
}
let socket = Socket::nonblock().expect("fbbootlogd: failed to create fbbootlog scheme");
let mut scheme = FbbootlogScheme::new();
let mut handler = Blocking::new(&socket, 16);
event_queue
.subscribe(
socket.inner().raw(),
Source::Scheme,
event::EventFlags::READ,
)
.expect("fbbootlogd: failed to subscribe to scheme events");
event_queue
.subscribe(
scheme.input_handle.event_handle().as_raw_fd() as usize,
Source::Input,
event::EventFlags::READ,
)
.expect("fbbootlogd: failed to subscribe to input events");
{
let log_fd = socket
.create_this_scheme_fd(0, 0, 0, 0)
.expect("fbbootlogd: failed to create log fd");
// Add ourself as log sink
let log_file = libredox::Fd::open(
"/scheme/log/add_sink",
libredox::flag::O_WRONLY | libredox::flag::O_CLOEXEC,
0,
)
.expect("fbbootlogd: failed to open log/add_sink");
log_file
.call_wo(&log_fd.to_ne_bytes(), syscall::CallFlags::FD, &[])
.expect("fbbootlogd: failed to send log fd to log scheme.");
}
let _ = daemon.ready_sync_scheme(&socket, &mut scheme);
// This is not possible for now as fbbootlogd needs to open new displays at runtime for graphics
// driver handoff. In the future inputd may directly pass a handle to the display instead.
//libredox::call::setrens(0, 0).expect("fbbootlogd: failed to enter null namespace");
for event in event_queue {
match event.expect("fbbootlogd: failed to get event").user_data {
Source::Scheme => loop {
match handler
.process_requests_nonblocking(&mut scheme)
.expect("fbbootlogd: failed to process requests")
{
ControlFlow::Continue(()) => {}
ControlFlow::Break(()) => break,
}
},
Source::Input => {
let mut events = [Event::new(); 16];
loop {
match scheme
.input_handle
.read_events(&mut events)
.expect("fbbootlogd: error while reading events")
{
ConsumerHandleEvent::Events(&[]) => break,
ConsumerHandleEvent::Events(events) => {
for event in events {
scheme.handle_input(&event);
}
}
ConsumerHandleEvent::Handoff => {
eprintln!("fbbootlogd: handoff requested");
scheme.handle_handoff();
}
}
}
}
}
}
std::process::exit(0);
}
@@ -0,0 +1,244 @@
use std::cmp;
use std::collections::VecDeque;
use console_draw::{Damage, TextScreen, V2DisplayMap};
use drm::buffer::Buffer;
use drm::control::Device;
use graphics_ipc::V2GraphicsHandle;
use inputd::ConsumerHandle;
use orbclient::{Event, EventOption};
use redox_scheme::scheme::SchemeSync;
use redox_scheme::{CallerCtx, OpenResult};
use scheme_utils::FpathWriter;
use syscall::schemev2::NewFdFlags;
use syscall::{Error, Result, EACCES, EBADF, EINVAL, ENOENT};
pub struct FbbootlogScheme {
pub input_handle: ConsumerHandle,
display_map: Option<V2DisplayMap>,
text_screen: console_draw::TextScreen,
text_buffer: console_draw::TextBuffer,
is_scrollback: bool,
scrollback_offset: usize,
shift: bool,
}
impl FbbootlogScheme {
pub fn new() -> FbbootlogScheme {
let mut scheme = FbbootlogScheme {
input_handle: ConsumerHandle::bootlog_vt().expect("fbbootlogd: Failed to open vt"),
display_map: None,
text_screen: console_draw::TextScreen::new(),
text_buffer: console_draw::TextBuffer::new(1000),
is_scrollback: false,
scrollback_offset: 1000,
shift: false,
};
scheme.handle_handoff();
scheme
}
pub fn handle_handoff(&mut self) {
let new_display_handle = match self.input_handle.open_display_v2() {
Ok(display) => V2GraphicsHandle::from_file(display).unwrap(),
Err(err) => {
eprintln!("fbbootlogd: No display present yet: {err}");
return;
}
};
match V2DisplayMap::new(new_display_handle) {
Ok(display_map) => self.display_map = Some(display_map),
Err(err) => {
eprintln!("fbbootlogd: failed to open display: {}", err);
return;
}
};
eprintln!("fbbootlogd: mapped display");
}
pub fn handle_input(&mut self, ev: &Event) {
match ev.to_option() {
EventOption::Key(key_event) => {
if key_event.scancode == 0x2A || key_event.scancode == 0x36 {
self.shift = key_event.pressed;
} else if !key_event.pressed || !self.shift {
return;
}
match key_event.scancode {
0x48 => {
// Up
if self.scrollback_offset >= 1 {
self.scrollback_offset -= 1;
}
}
0x49 => {
// Page up
if self.scrollback_offset >= 10 {
self.scrollback_offset -= 10;
} else {
self.scrollback_offset = 0;
}
}
0x50 => {
// Down
self.scrollback_offset += 1;
}
0x51 => {
// Page down
self.scrollback_offset += 10;
}
0x47 => {
// Home
self.scrollback_offset = 0;
}
0x4F => {
// End
self.scrollback_offset = self.text_buffer.lines_max;
}
_ => return,
}
}
_ => return,
}
self.handle_scrollback_render();
}
fn handle_scrollback_render(&mut self) {
let Some(map) = &mut self.display_map else {
return;
};
let buffer_len = self.text_buffer.lines.len();
// for both extra space on wrapping text and a scrollback indicator
let spare_lines = 3;
self.is_scrollback = true;
self.scrollback_offset = cmp::min(
self.scrollback_offset,
buffer_len - map.buffer.buffer().size().1 as usize / 16 + spare_lines,
);
let mut i = self.scrollback_offset;
self.text_screen
.write(map, b"\x1B[1;1H\x1B[2J", &mut VecDeque::new());
let mut total_damage = Damage::NONE;
while i < buffer_len {
let mut damage =
self.text_screen
.write(map, &self.text_buffer.lines[i][..], &mut VecDeque::new());
i += 1;
let yd = (damage.y + damage.height) as usize;
if i == buffer_len || yd + spare_lines * 16 > map.buffer.buffer().size().1 as usize {
// render until end of screen
damage.height = map.buffer.buffer().size().1 - damage.y;
total_damage = total_damage.merge(damage);
self.is_scrollback = i < buffer_len;
break;
} else {
total_damage = total_damage.merge(damage);
}
}
map.dirty_fb(total_damage).unwrap();
}
fn handle_resize(map: &mut V2DisplayMap, text_screen: &mut TextScreen) {
let mode = match map
.display_handle
.first_display()
.and_then(|handle| Ok(map.display_handle.get_connector(handle, true)?.modes()[0]))
{
Ok(mode) => mode,
Err(err) => {
eprintln!("fbbootlogd: failed to get display size: {}", err);
return;
}
};
if (u32::from(mode.size().0), u32::from(mode.size().1)) != map.buffer.buffer().size() {
match text_screen.resize(map, mode) {
Ok(()) => eprintln!("fbbootlogd: resized display"),
Err(err) => {
eprintln!("fbbootlogd: failed to create or map framebuffer: {}", err);
return;
}
}
}
}
}
const SCHEME_ROOT_ID: usize = 1;
impl SchemeSync for FbbootlogScheme {
fn scheme_root(&mut self) -> Result<usize> {
Ok(SCHEME_ROOT_ID)
}
fn openat(
&mut self,
dirfd: usize,
path_str: &str,
_flags: usize,
_fcntl_flags: u32,
_ctx: &CallerCtx,
) -> Result<OpenResult> {
if dirfd != SCHEME_ROOT_ID {
return Err(Error::new(EACCES));
}
if !path_str.is_empty() {
return Err(Error::new(ENOENT));
}
Ok(OpenResult::ThisScheme {
number: 0,
flags: NewFdFlags::empty(),
})
}
fn fpath(&mut self, _id: usize, buf: &mut [u8], _ctx: &CallerCtx) -> Result<usize> {
FpathWriter::with_legacy(buf, "fbbootlog", |_| Ok(()))
}
fn fsync(&mut self, _id: usize, _ctx: &CallerCtx) -> Result<()> {
Ok(())
}
fn read(
&mut self,
_id: usize,
_buf: &mut [u8],
_offset: u64,
_fcntl_flags: u32,
_ctx: &CallerCtx,
) -> Result<usize> {
Err(Error::new(EINVAL))
}
fn write(
&mut self,
id: usize,
buf: &[u8],
_offset: u64,
_fcntl_flags: u32,
_ctx: &CallerCtx,
) -> Result<usize> {
if id == SCHEME_ROOT_ID {
return Err(Error::new(EBADF));
}
if let Some(map) = &mut self.display_map {
Self::handle_resize(map, &mut self.text_screen);
self.text_buffer.write(buf);
if !self.is_scrollback {
let damage = self.text_screen.write(map, buf, &mut VecDeque::new());
if let Some(map) = &mut self.display_map {
map.dirty_fb(damage).unwrap();
}
}
}
Ok(buf.len())
}
}
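Both the scrollback renderer above and the `MODE_DIRTYFB` handler fold per-line or per-clip dirty rectangles into one bounding box via `Damage::merge` before issuing a single `dirty_fb` call. A sketch of that reduction; the field layout mirrors the `Damage` struct used in this diff, but the merge body itself is an assumed implementation (the real one lives in `console-draw`/`graphics-ipc`, outside this hunk):

```rust
/// Mirrors the Damage rectangle used by dirty_fb and set_crtc in the diff.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Damage {
    x: u32,
    y: u32,
    width: u32,
    height: u32,
}

impl Damage {
    /// Assumed merge semantics: the smallest rectangle covering both inputs,
    /// so N dirty regions collapse into one damage report.
    fn merge(self, other: Damage) -> Damage {
        let x = self.x.min(other.x);
        let y = self.y.min(other.y);
        Damage {
            x,
            y,
            width: (self.x + self.width).max(other.x + other.width) - x,
            height: (self.y + self.height).max(other.y + other.height) - y,
        }
    }
}

fn main() {
    // Two disjoint dirty lines merge into one bounding box.
    let a = Damage { x: 0, y: 0, width: 10, height: 10 };
    let b = Damage { x: 5, y: 20, width: 10, height: 5 };
    let merged = [a, b].into_iter().reduce(Damage::merge).unwrap();
    assert_eq!(merged, Damage { x: 0, y: 0, width: 15, height: 25 });
}
```

Collapsing to a bounding box trades some overdraw for a single adapter flush, which is why `MODE_DIRTYFB` can fall back to full-framebuffer damage when no clip rectangles are supplied.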
@@ -0,0 +1,28 @@
[package]
name = "fbcond"
description = "Terminal daemon"
version = "0.1.0"
edition = "2021"
[dependencies]
drm.workspace = true
log.workspace = true
orbclient.workspace = true
ransid.workspace = true
redox_event.workspace = true
redox_syscall.workspace = true
redox-scheme.workspace = true
scheme-utils = { path = "../../../scheme-utils" }
common = { path = "../../common" }
console-draw = { path = "../console-draw" }
daemon = { path = "../../../daemon" }
graphics-ipc = { path = "../graphics-ipc" }
inputd = { path = "../../inputd" }
libredox.workspace = true
[features]
default = []
[lints]
workspace = true
@@ -0,0 +1,83 @@
use console_draw::{Damage, TextScreen, V2DisplayMap};
use drm::buffer::Buffer;
use drm::control::Device;
use graphics_ipc::V2GraphicsHandle;
use inputd::ConsumerHandle;
use std::io;
pub struct Display {
pub input_handle: ConsumerHandle,
pub map: Option<V2DisplayMap>,
}
impl Display {
pub fn open_new_vt() -> io::Result<Self> {
let mut display = Self {
input_handle: ConsumerHandle::new_vt()?,
map: None,
};
display.reopen_for_handoff();
Ok(display)
}
/// Re-open the display after a handoff.
pub fn reopen_for_handoff(&mut self) {
let display_file = match self.input_handle.open_display_v2() {
Ok(display_file) => display_file,
Err(err) => {
log::error!("fbcond: No display present yet: {err}");
return;
}
};
let new_display_handle = V2GraphicsHandle::from_file(display_file).unwrap();
log::debug!("fbcond: Opened new display");
match V2DisplayMap::new(new_display_handle) {
Ok(map) => {
log::debug!(
"fbcond: Mapped new display with size {}x{}",
map.buffer.buffer().size().0,
map.buffer.buffer().size().1,
);
self.map = Some(map)
}
Err(err) => {
log::error!("fbcond: failed to map new display: {err}");
return;
}
}
}
pub fn handle_resize(map: &mut V2DisplayMap, text_screen: &mut TextScreen) {
let mode = match map
.display_handle
.first_display()
.and_then(|handle| Ok(map.display_handle.get_connector(handle, true)?.modes()[0]))
{
Ok(mode) => mode,
Err(err) => {
eprintln!("fbcond: failed to get display size: {}", err);
return;
}
};
if (u32::from(mode.size().0), u32::from(mode.size().1)) != map.buffer.buffer().size() {
match text_screen.resize(map, mode) {
Ok(()) => eprintln!("fbcond: resized display"),
Err(err) => {
eprintln!("fbcond: failed to create or map framebuffer: {}", err);
return;
}
}
}
}
pub fn sync_rect(&mut self, damage: Damage) {
if let Some(map) = &mut self.map {
map.dirty_fb(damage).unwrap();
}
}
}
@@ -0,0 +1,253 @@
use event::EventQueue;
use inputd::ConsumerHandleEvent;
use libredox::errno::{EAGAIN, EINTR};
use orbclient::Event;
use redox_scheme::{
scheme::{Op, SchemeResponse, SchemeState, SchemeSync},
CallerCtx, RequestKind, Response, SignalBehavior, Socket,
};
use std::env;
use syscall::{EOPNOTSUPP, EVENT_READ};
use crate::scheme::{FbconScheme, Handle, VtIndex};
mod display;
mod scheme;
mod text;
fn main() {
daemon::SchemeDaemon::new(daemon);
}
fn daemon(daemon: daemon::SchemeDaemon) -> ! {
let vt_ids = env::args()
.skip(1)
.map(|arg| arg.parse().expect("invalid vt number"))
.collect::<Vec<_>>();
common::setup_logging(
"graphics",
"fbcond",
"fbcond",
common::output_level(),
common::file_level(),
);
let mut event_queue = EventQueue::new().expect("fbcond: failed to create event queue");
// FIXME listen for resize events from inputd and handle them
let mut socket = Socket::nonblock().expect("fbcond: failed to create fbcon scheme");
event_queue
.subscribe(
socket.inner().raw(),
VtIndex::SCHEMA_SENTINEL,
event::EventFlags::READ,
)
.expect("fbcond: failed to subscribe to scheme events");
let mut state = SchemeState::new();
let mut scheme = FbconScheme::new(&vt_ids, &mut event_queue);
let _ = daemon.ready_sync_scheme(&socket, &mut scheme);
// This is not possible for now as fbcond needs to open new displays at runtime for graphics
// driver handoff. In the future inputd may directly pass a handle to the display instead.
// libredox::call::setrens(0, 0).expect("fbcond: failed to enter null namespace");
let mut blocked = Vec::new();
// Handle all events that could have happened before registering with the event queue.
handle_event(
&mut socket,
&mut scheme,
&mut state,
&mut blocked,
VtIndex::SCHEMA_SENTINEL,
);
for vt_i in scheme.vts.keys().copied().collect::<Vec<_>>() {
handle_event(&mut socket, &mut scheme, &mut state, &mut blocked, vt_i);
}
for event in event_queue {
let event = event.expect("fbcond: failed to read event from event queue");
handle_event(
&mut socket,
&mut scheme,
&mut state,
&mut blocked,
event.user_data,
);
}
std::process::exit(0);
}
fn handle_event(
socket: &mut Socket,
scheme: &mut FbconScheme,
state: &mut SchemeState,
blocked: &mut Vec<(Op, CallerCtx)>,
event: VtIndex,
) {
match event {
VtIndex::SCHEMA_SENTINEL => loop {
let request = match socket.next_request(SignalBehavior::Restart) {
Ok(Some(request)) => request,
Ok(None) => {
// Scheme likely got unmounted
std::process::exit(0);
}
Err(err) if err.errno == EAGAIN => {
break;
}
Err(err) => panic!("fbcond: failed to read display scheme: {err}"),
};
match request.kind() {
RequestKind::Call(req) => {
let caller = req.caller();
let mut op = match req.op() {
Ok(op) => op,
Err(req) => {
let _ = socket
.write_response(
Response::err(EOPNOTSUPP, req),
SignalBehavior::Restart,
)
.expect("fbcond: failed to write responses to fbcon scheme");
continue;
}
};
match op.handle_sync_dont_consume(&caller, scheme, state) {
SchemeResponse::Opened(Err(e)) | SchemeResponse::Regular(Err(e))
if libredox::error::Error::from(e).is_wouldblock()
&& !op.is_explicitly_nonblock() =>
{
blocked.push((op, caller));
}
SchemeResponse::Regular(r) => {
let _ = socket
.write_response(Response::new(r, op), SignalBehavior::Restart)
.expect("fbcond: failed to write responses to fbcon scheme");
}
SchemeResponse::Opened(o) => {
let _ = socket
.write_response(
Response::open_dup_like(o, op),
SignalBehavior::Restart,
)
.expect("fbcond: failed to write responses to fbcon scheme");
}
SchemeResponse::RegularAndNotifyOnDetach(status) => {
let _ = socket
.write_response(
Response::new_notify_on_detach(status, op),
SignalBehavior::Restart,
)
.expect("fbcond: failed to write scheme");
}
}
}
RequestKind::OnClose { id } => {
scheme.on_close(id);
}
RequestKind::Cancellation(cancellation_request) => {
if let Some(i) = blocked
.iter()
.position(|(_op, caller)| caller.id == cancellation_request.id)
{
let (blocked_req, _) = blocked.remove(i);
let resp = Response::err(EINTR, blocked_req);
socket
.write_response(resp, SignalBehavior::Restart)
.expect("fbcond: failed to write response to fbcon scheme");
}
}
_ => {}
}
},
vt_i => {
let vt = scheme.vts.get_mut(&vt_i).unwrap();
let mut events = [Event::new(); 16];
loop {
match vt
.display
.input_handle
.read_events(&mut events)
.expect("fbcond: Error while reading events")
{
ConsumerHandleEvent::Events(&[]) => break,
ConsumerHandleEvent::Events(events) => {
for event in events {
vt.input(event)
}
}
ConsumerHandleEvent::Handoff => vt.handle_handoff(),
}
}
}
}
// If there are blocked readers, try to handle them.
{
let mut i = 0;
while i < blocked.len() {
let (op, caller) = blocked
.get_mut(i)
.expect("fbcond: failed to get blocked request");
let resp = match op.handle_sync_dont_consume(&caller, scheme, state) {
SchemeResponse::Opened(Err(e)) | SchemeResponse::Regular(Err(e))
if libredox::error::Error::from(e).is_wouldblock()
&& !op.is_explicitly_nonblock() =>
{
i += 1;
continue;
}
SchemeResponse::Regular(r) => {
let (op, _) = blocked.remove(i);
Response::new(r, op)
}
SchemeResponse::Opened(o) => {
let (op, _) = blocked.remove(i);
Response::open_dup_like(o, op)
}
SchemeResponse::RegularAndNotifyOnDetach(status) => {
let (op, _) = blocked.remove(i);
Response::new_notify_on_detach(status, op)
}
};
let _ = socket
.write_response(resp, SignalBehavior::Restart)
.expect("fbcond: failed to write response to fbcon scheme");
}
}
for (handle_id, handle) in scheme.handles.iter_mut() {
let handle = match handle {
Handle::SchemeRoot => continue,
Handle::Vt(handle) => handle,
};
if !handle.events.contains(EVENT_READ) {
continue;
}
let can_read = scheme
.vts
.get(&handle.vt_i)
.map_or(false, |console| console.can_read());
if can_read {
if !handle.notified_read {
handle.notified_read = true;
let response = Response::post_fevent(*handle_id, EVENT_READ.bits());
socket
.write_response(response, SignalBehavior::Restart)
.expect("fbcond: failed to write display event");
}
} else {
handle.notified_read = false;
}
}
}
@@ -0,0 +1,193 @@
use std::collections::BTreeMap;
use std::os::fd::AsRawFd;
use event::{EventQueue, UserData};
use redox_scheme::scheme::SchemeSync;
use redox_scheme::{CallerCtx, OpenResult};
use scheme_utils::{FpathWriter, HandleMap};
use syscall::schemev2::NewFdFlags;
use syscall::{Error, EventFlags, Result, EACCES, EAGAIN, EBADF, ENOENT, O_NONBLOCK};
use crate::display::Display;
use crate::text::TextScreen;
#[derive(Clone, Copy, Eq, Ord, PartialEq, PartialOrd, Debug)]
pub struct VtIndex(usize);
impl VtIndex {
pub const SCHEMA_SENTINEL: VtIndex = VtIndex(usize::MAX);
}
impl UserData for VtIndex {
fn into_user_data(self) -> usize {
self.0
}
fn from_user_data(user_data: usize) -> Self {
VtIndex(user_data)
}
}
pub struct FdHandle {
pub vt_i: VtIndex,
pub flags: usize,
pub events: EventFlags,
pub notified_read: bool,
}
pub enum Handle {
Vt(FdHandle),
SchemeRoot,
}
pub struct FbconScheme {
pub vts: BTreeMap<VtIndex, TextScreen>,
pub handles: HandleMap<Handle>,
}
impl FbconScheme {
pub fn new(vt_ids: &[usize], event_queue: &mut EventQueue<VtIndex>) -> FbconScheme {
let mut vts = BTreeMap::new();
for &vt_i in vt_ids {
let display = Display::open_new_vt().expect("Failed to open display for vt");
event_queue
.subscribe(
display.input_handle.event_handle().as_raw_fd() as usize,
VtIndex(vt_i),
event::EventFlags::READ,
)
.expect("Failed to subscribe to input events for vt");
vts.insert(VtIndex(vt_i), TextScreen::new(display));
}
FbconScheme {
vts,
handles: HandleMap::new(),
}
}
fn get_vt_handle_mut(&mut self, id: usize) -> Result<&mut FdHandle> {
match self.handles.get_mut(id)? {
Handle::Vt(handle) => Ok(handle),
Handle::SchemeRoot => Err(Error::new(EBADF)),
}
}
}
impl SchemeSync for FbconScheme {
fn scheme_root(&mut self) -> Result<usize> {
Ok(self.handles.insert(Handle::SchemeRoot))
}
fn openat(
&mut self,
dirfd: usize,
path_str: &str,
flags: usize,
fcntl_flags: u32,
_ctx: &CallerCtx,
) -> Result<OpenResult> {
if !matches!(self.handles.get(dirfd)?, Handle::SchemeRoot) {
return Err(Error::new(EACCES));
}
let vt_i = VtIndex(path_str.parse::<usize>().map_err(|_| Error::new(ENOENT))?);
if self.vts.contains_key(&vt_i) {
let id = self.handles.insert(Handle::Vt(FdHandle {
vt_i,
flags: flags | fcntl_flags as usize,
events: EventFlags::empty(),
notified_read: false,
}));
Ok(OpenResult::ThisScheme {
number: id,
flags: NewFdFlags::empty(),
})
} else {
Err(Error::new(ENOENT))
}
}
fn fevent(
&mut self,
id: usize,
flags: syscall::EventFlags,
_ctx: &CallerCtx,
) -> Result<syscall::EventFlags> {
let handle = self.get_vt_handle_mut(id)?;
handle.notified_read = false;
handle.events = flags;
Ok(syscall::EventFlags::empty())
}
fn fpath(&mut self, id: usize, buf: &mut [u8], _ctx: &CallerCtx) -> Result<usize> {
FpathWriter::with_legacy(buf, "fbcon", |w| {
let handle = self.get_vt_handle_mut(id)?;
write!(w, "{}", handle.vt_i.0).unwrap();
Ok(())
})
}
fn fsync(&mut self, id: usize, _ctx: &CallerCtx) -> Result<()> {
let _handle = self.get_vt_handle_mut(id)?;
Ok(())
}
fn fcntl(&mut self, id: usize, _cmd: usize, _arg: usize, _ctx: &CallerCtx) -> Result<usize> {
self.handles.get(id)?;
Ok(0)
}
fn read(
&mut self,
id: usize,
buf: &mut [u8],
_offset: u64,
_fcntl_flags: u32,
_ctx: &CallerCtx,
) -> Result<usize> {
let handle = match self.handles.get(id)? {
Handle::Vt(handle) => Ok(handle),
Handle::SchemeRoot => Err(Error::new(EBADF)),
}?;
if let Some(screen) = self.vts.get_mut(&handle.vt_i) {
if !screen.can_read() {
                if handle.flags & O_NONBLOCK != 0 {
                    Err(Error::new(EAGAIN))
                } else {
                    // Also EAGAIN for blocking handles: the daemon's
                    // blocked-request queue retries the read once input
                    // arrives, so this handler never needs to block itself.
                    Err(Error::new(EAGAIN))
                }
} else {
screen.read(buf)
}
} else {
Err(Error::new(EBADF))
}
}
fn write(
&mut self,
id: usize,
buf: &[u8],
_offset: u64,
_fcntl_flags: u32,
_ctx: &CallerCtx,
) -> Result<usize> {
let vt_i = self.get_vt_handle_mut(id)?.vt_i;
if let Some(console) = self.vts.get_mut(&vt_i) {
console.write(buf)
} else {
Err(Error::new(EBADF))
}
}
fn on_close(&mut self, id: usize) {
self.handles.remove(id);
}
}
@@ -0,0 +1,134 @@
use std::collections::VecDeque;
use orbclient::{Event, EventOption};
use syscall::error::*;
use crate::display::Display;
pub struct TextScreen {
pub display: Display,
inner: console_draw::TextScreen,
ctrl: bool,
input: VecDeque<u8>,
}
impl TextScreen {
pub fn new(display: Display) -> TextScreen {
TextScreen {
display,
inner: console_draw::TextScreen::new(),
ctrl: false,
input: VecDeque::new(),
}
}
pub fn handle_handoff(&mut self) {
log::info!("fbcond: Performing handoff");
self.display.reopen_for_handoff();
}
pub fn input(&mut self, event: &Event) {
let mut buf = vec![];
match event.to_option() {
EventOption::Key(key_event) => {
if key_event.scancode == 0x1D {
self.ctrl = key_event.pressed;
} else if key_event.pressed {
match key_event.scancode {
0x0E => {
// Backspace
buf.extend_from_slice(b"\x7F");
}
0x47 => {
// Home
buf.extend_from_slice(b"\x1B[H");
}
0x48 => {
// Up
buf.extend_from_slice(b"\x1B[A");
}
0x49 => {
// Page up
buf.extend_from_slice(b"\x1B[5~");
}
0x4B => {
// Left
buf.extend_from_slice(b"\x1B[D");
}
0x4D => {
// Right
buf.extend_from_slice(b"\x1B[C");
}
0x4F => {
// End
buf.extend_from_slice(b"\x1B[F");
}
0x50 => {
// Down
buf.extend_from_slice(b"\x1B[B");
}
0x51 => {
// Page down
buf.extend_from_slice(b"\x1B[6~");
}
0x52 => {
// Insert
buf.extend_from_slice(b"\x1B[2~");
}
0x53 => {
// Delete
buf.extend_from_slice(b"\x1B[3~");
}
_ => {
let c = match key_event.character {
c @ 'A'..='Z' if self.ctrl => ((c as u8 - b'A') + b'\x01') as char,
c @ 'a'..='z' if self.ctrl => ((c as u8 - b'a') + b'\x01') as char,
c => c,
};
if c != '\0' {
let mut b = [0; 4];
buf.extend_from_slice(c.encode_utf8(&mut b).as_bytes());
}
}
}
}
}
_ => (), //TODO: Mouse in terminal
}
for &b in buf.iter() {
self.input.push_back(b);
}
}
pub fn can_read(&self) -> bool {
!self.input.is_empty()
}
}
impl TextScreen {
pub fn read(&mut self, buf: &mut [u8]) -> Result<usize> {
let mut i = 0;
while i < buf.len() && !self.input.is_empty() {
buf[i] = self.input.pop_front().unwrap();
i += 1;
}
Ok(i)
}
pub fn write(&mut self, buf: &[u8]) -> Result<usize> {
if let Some(map) = &mut self.display.map {
Display::handle_resize(map, &mut self.inner);
let damage = self.inner.write(map, buf, &mut self.input);
self.display.sync_rect(damage);
}
Ok(buf.len())
}
}
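The Ctrl-chord translation in `TextScreen::input` above maps `A`–`Z` and `a`–`z` to the ASCII control codes `0x01`–`0x1A`. A minimal standalone sketch of that mapping (hypothetical helper name, not part of the driver):

```rust
/// Map a character typed with Ctrl held to its ASCII control code,
/// mirroring the match arms in `TextScreen::input` above.
fn ctrl_chord(c: char, ctrl: bool) -> char {
    match c {
        c @ 'A'..='Z' if ctrl => ((c as u8 - b'A') + 0x01) as char,
        c @ 'a'..='z' if ctrl => ((c as u8 - b'a') + 0x01) as char,
        c => c,
    }
}

fn main() {
    assert_eq!(ctrl_chord('c', true), '\x03'); // Ctrl+C sends ETX
    assert_eq!(ctrl_chord('Z', true), '\x1A'); // Ctrl+Z sends SUB
    assert_eq!(ctrl_chord('c', false), 'c');   // without Ctrl, unchanged
}
```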
@@ -0,0 +1,11 @@
[package]
name = "graphics-ipc"
description = "Shared graphics IPC code library"
version = "0.1.0"
edition = "2021"
[dependencies]
drm.workspace = true
[lints]
workspace = true
@@ -0,0 +1,127 @@
use std::fs::File;
use std::os::fd::{AsFd, BorrowedFd};
use std::{io, mem, ptr};
use drm::buffer::Buffer;
use drm::control::connector::{self, State};
use drm::control::dumbbuffer::{DumbBuffer, DumbMapping};
use drm::control::Device as _;
use drm::{Device as _, DriverCapability};
/// A graphics handle using the v2 graphics API.
///
/// The v2 graphics API allows creating framebuffers on the fly and using them for page
/// flipping, and it handles all displays through a single fd. This is basically a subset of the
/// Linux DRM interface with a couple of custom ioctls in place of the missing KMS ioctls.
pub struct V2GraphicsHandle {
file: File,
}
impl AsFd for V2GraphicsHandle {
fn as_fd(&self) -> BorrowedFd<'_> {
self.file.as_fd()
}
}
impl drm::Device for V2GraphicsHandle {}
impl drm::control::Device for V2GraphicsHandle {}
impl V2GraphicsHandle {
pub fn from_file(file: File) -> io::Result<Self> {
let handle = V2GraphicsHandle { file };
assert!(handle.get_driver_capability(DriverCapability::DumbBuffer)? == 1);
Ok(handle)
}
pub fn first_display(&self) -> io::Result<connector::Handle> {
for &connector in self.resource_handles().unwrap().connectors() {
if self.get_connector(connector, true)?.state() == State::Connected {
return Ok(connector);
}
}
Err(io::Error::other("no connected display"))
}
}
pub struct CpuBackedBuffer {
buffer: DumbBuffer,
map: DumbMapping<'static>,
shadow: Option<Box<[u8]>>,
}
impl CpuBackedBuffer {
pub fn new(
display_handle: &V2GraphicsHandle,
size: (u32, u32),
format: drm::buffer::DrmFourcc,
bpp: u32,
) -> io::Result<CpuBackedBuffer> {
let mut buffer = display_handle.create_dumb_buffer(size, format, bpp)?;
        let map = display_handle.map_dumb_buffer(&mut buffer)?;
        // SAFETY: the mapping borrows `buffer`, but both are moved into the same
        // struct below, so the mapping never outlives the buffer it maps.
        let map = unsafe { mem::transmute::<DumbMapping<'_>, DumbMapping<'static>>(map) };
let shadow = if display_handle
.get_driver_capability(DriverCapability::DumbPreferShadow)
.unwrap_or(1)
== 0
{
None
} else {
Some(vec![0; map.len()].into_boxed_slice())
};
Ok(CpuBackedBuffer {
buffer,
map,
shadow,
})
}
pub fn buffer(&self) -> &DumbBuffer {
&self.buffer
}
pub fn has_shadow_buf(&self) -> bool {
self.shadow.is_some()
}
pub fn shadow_buf(&mut self) -> &mut [u8] {
self.shadow.as_deref_mut().unwrap_or(&mut *self.map)
}
pub fn sync_rect(&mut self, x: u32, y: u32, width: u32, height: u32) {
let Some(shadow) = &self.shadow else {
return; // No shadow buffer; all writes are already propagated to the GPU.
};
assert!(x.checked_add(width).unwrap() <= self.buffer.size().0);
assert!(y.checked_add(height).unwrap() <= self.buffer.size().1);
let start_x: usize = x.try_into().unwrap();
let start_y: usize = y.try_into().unwrap();
let w: usize = width.try_into().unwrap();
let h: usize = height.try_into().unwrap();
let offscreen_ptr = shadow.as_ptr().cast::<u32>();
let onscreen_ptr = self.map.as_mut_ptr().cast::<u32>();
for row in start_y..start_y + h {
unsafe {
ptr::copy_nonoverlapping(
offscreen_ptr.add(row * self.buffer.pitch() as usize / 4 + start_x),
onscreen_ptr.add(row * self.buffer.pitch() as usize / 4 + start_x),
w,
);
}
}
// No need for a wbinvd to flush the write combining writes as they are
// already flushed on the next syscall anyway. And the user will need
// to do a DRM ioctl to actually present the changes on the display.
}
pub fn destroy(self, display_handle: &V2GraphicsHandle) -> io::Result<()> {
display_handle.destroy_dumb_buffer(self.buffer)
}
}
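The pointer arithmetic in `sync_rect` above is a row-by-row rectangle copy from the shadow buffer into the write-combining mapping. The same copy expressed with safe slices, assuming 32-bit pixels and a pitch given in pixels (names are illustrative, not the library's API):

```rust
/// Copy a `w` x `h` pixel rectangle at (`x`, `y`) from `shadow` into `map`,
/// both `pitch` pixels wide per row (sketch of `CpuBackedBuffer::sync_rect`).
fn sync_rect(shadow: &[u32], map: &mut [u32], pitch: usize,
             x: usize, y: usize, w: usize, h: usize) {
    for row in y..y + h {
        let start = row * pitch + x;
        map[start..start + w].copy_from_slice(&shadow[start..start + w]);
    }
}

fn main() {
    let shadow = vec![7u32; 16]; // 4x4 pixel shadow buffer
    let mut map = vec![0u32; 16];
    sync_rect(&shadow, &mut map, 4, 1, 1, 2, 2);
    assert_eq!(map[5], 7); // inside the synced rect
    assert_eq!(map[0], 0); // outside the rect stays untouched
}
```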
@@ -0,0 +1,30 @@
[package]
name = "ihdgd"
description = "Intel graphics driver"
version = "0.1.0"
edition = "2021"
[dependencies]
bitbang-hal = "0.3"
drm-sys.workspace = true
edid.workspace = true #TODO: edid is abandoned, fork it and maintain?
#TODO: waiting for bitbang-hal to update to embedded-hal 1.0
embedded-hal = { version = "0.2.7", features = ["unproven"] }
log.workspace = true
nb = "1.0"
# Patched to allow for exact range allocation
range-alloc = { git = "https://github.com/jackpot51/range-alloc.git" }
void = "1.0"
common = { path = "../../common" }
daemon = { path = "../../../daemon" }
driver-graphics = { path = "../driver-graphics" }
pcid = { path = "../../pcid" }
libredox.workspace = true
redox-scheme.workspace = true
redox_event.workspace = true
redox_syscall.workspace = true
[lints]
workspace = true
@@ -0,0 +1,55 @@
[[drivers]]
name = "Intel HD Graphics"
class = 0x03
ids = { 0x8086 = [
# Kaby Lake from Volume 4: Configurations in
# https://www.intel.com/content/www/us/en/docs/graphics-for-linux/developer-reference/1-0/kaby-lake.html
0x5912,
0x5916,
0x591B,
0x591E,
0x5926,
# Comet Lake from Volume 1: Configurations in
# https://www.intel.com/content/www/us/en/docs/graphics-for-linux/developer-reference/1-0/comet-lake.html
0x9B21,
0x9B41,
0x9BA4,
0x9BAA,
0x9BAC,
0x9BC4,
0x9BC5,
0x9BC6,
0x9BC8,
0x9BCA,
0x9BCC,
0x9BE6,
0x9BF6,
# Tiger Lake Mobile from Volume 4: Configurations in
# https://www.intel.com/content/www/us/en/docs/graphics-for-linux/developer-reference/1-0/tiger-lake.html
0x9A40,
0x9A49,
0x9A60,
0x9A68,
0x9A70,
0x9A78,
# Alchemist from Volume 4: Configurations in
# https://www.intel.com/content/www/us/en/docs/graphics-for-linux/developer-reference/1-0/alchemist-arctic-sound-m.html
0x5690, # A770M
0x5691, # A730M
0x5692, # A550M
0x5693, # A370M
0x5694, # A350M
0x5696, # A570M
0x5697, # A530M
0x56A0, # A770
0x56A1, # A750
0x56A5, # A380
0x56A6, # A310
0x56B0, # Pro A30M
0x56B1, # Pro A40/A50
0x56B2, # Pro A60M
0x56B3, # Pro A60
0x56C0, # GPU Flex 170
0x56C1, # GPU Flex 140
] }
command = ["ihdgd"]
@@ -0,0 +1,169 @@
use common::{io::Io, timeout::Timeout};
use embedded_hal::blocking::i2c::{self, Operation, SevenBitAddress, Transactional};
use super::ddi::*;
pub struct Aux<'a> {
ddi: &'a mut Ddi,
}
impl<'a> Aux<'a> {
pub fn new(ddi: &'a mut Ddi) -> Self {
Self { ddi }
}
}
impl<'a> Transactional for Aux<'a> {
type Error = ();
fn exec(&mut self, addr7: SevenBitAddress, full_ops: &mut [Operation<'_>]) -> Result<(), ()> {
// Break ops into 16-byte chunks that will fit into aux data
let mut ops = Vec::new();
for op in full_ops.iter_mut() {
match op {
Operation::Read(buf) => {
for chunk in buf.chunks_mut(16) {
ops.push(Operation::Read(chunk));
}
}
Operation::Write(buf) => {
for chunk in buf.chunks(16) {
ops.push(Operation::Write(chunk));
}
}
}
}
let ops_len = ops.len();
for (i, op) in ops.iter_mut().enumerate() {
// Write header and data
let mut header = 0;
match op {
Operation::Read(_) => {
header |= 1 << 4;
}
Operation::Write(_) => (),
}
if (i + 1) < ops_len {
// Middle of transaction
header |= 1 << 6;
}
let mut aux_datas = [0u8; 20];
let mut aux_data_i = 0;
aux_datas[aux_data_i] = header;
aux_data_i += 1;
//TODO: what is this byte?
aux_datas[aux_data_i] = 0;
aux_data_i += 1;
aux_datas[aux_data_i] = addr7;
aux_data_i += 1;
match op {
Operation::Read(buf) => {
if !buf.is_empty() {
aux_datas[aux_data_i] = (buf.len() - 1) as u8;
aux_data_i += 1;
}
}
Operation::Write(buf) => {
if !buf.is_empty() {
aux_datas[aux_data_i] = (buf.len() - 1) as u8;
aux_data_i += 1;
for b in buf.iter() {
aux_datas[aux_data_i] = *b;
aux_data_i += 1;
}
}
}
}
// Write data to registers (big endian, dword access only)
for (i, chunk) in aux_datas.chunks(4).enumerate() {
let mut bytes = [0; 4];
                    bytes[..chunk.len()].copy_from_slice(chunk);
self.ddi.aux_datas[i].write(u32::from_be_bytes(bytes));
}
let mut v = self.ddi.aux_ctl.read();
// Set length
v &= !DDI_AUX_CTL_SIZE_MASK;
v |= (aux_data_i as u32) << DDI_AUX_CTL_SIZE_SHIFT;
// Set timeout
v &= !DDI_AUX_CTL_TIMEOUT_MASK;
v |= DDI_AUX_CTL_TIMEOUT_4000US;
// Set I/O select to legacy (cleared)
//TODO: TBT support?
v &= !DDI_AUX_CTL_IO_SELECT;
// Start transaction
v |= DDI_AUX_CTL_BUSY;
self.ddi.aux_ctl.write(v);
// Wait while busy
let timeout = Timeout::from_secs(1);
while self.ddi.aux_ctl.readf(DDI_AUX_CTL_BUSY) {
                timeout.run().map_err(|()| {
                    log::debug!(
                        "AUX I2C transaction wait timeout 0x{:08X}",
                        self.ddi.aux_ctl.read()
                    );
                })?;
}
// Read result
v = self.ddi.aux_ctl.read();
if (v & DDI_AUX_CTL_TIMEOUT_ERROR) != 0 {
log::debug!("AUX I2C transaction timeout error");
return Err(());
}
if (v & DDI_AUX_CTL_RECEIVE_ERROR) != 0 {
log::debug!("AUX I2C transaction receive error");
return Err(());
}
if (v & DDI_AUX_CTL_DONE) == 0 {
log::debug!("AUX I2C transaction done not set");
return Err(());
}
// Read data from registers (big endian, dword access only)
for (i, chunk) in aux_datas.chunks_mut(4).enumerate() {
let bytes = self.ddi.aux_datas[i].read().to_be_bytes();
chunk.copy_from_slice(&bytes[..chunk.len()]);
}
aux_data_i = 0;
let response = aux_datas[aux_data_i];
if response != 0 {
log::debug!("AUX I2C unexpected response {:02X}", response);
return Err(());
}
aux_data_i += 1;
match op {
Operation::Read(buf) => {
if !buf.is_empty() {
for b in buf.iter_mut() {
*b = aux_datas[aux_data_i];
aux_data_i += 1;
}
}
}
Operation::Write(_) => (),
}
}
Ok(())
}
}
impl<'a> i2c::WriteRead for Aux<'a> {
type Error = ();
fn write_read(
&mut self,
addr7: SevenBitAddress,
bytes: &[u8],
buffer: &mut [u8],
) -> Result<(), ()> {
self.exec(
addr7,
&mut [Operation::Write(bytes), Operation::Read(buffer)],
)
}
}
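`Aux::exec` above packs each operation into a command byte ahead of the payload: bit 4 selects an I2C read and bit 6 (Middle-Of-Transaction) is set on every op except the last. A standalone sketch of that header construction (hypothetical helper name):

```rust
/// Build the AUX command byte as in `Aux::exec` above: bit 4 marks an I2C
/// read, bit 6 (Middle-Of-Transaction) is set on all but the final op.
fn aux_header(is_read: bool, is_last: bool) -> u8 {
    let mut header = 0u8;
    if is_read {
        header |= 1 << 4;
    }
    if !is_last {
        header |= 1 << 6;
    }
    header
}

fn main() {
    assert_eq!(aux_header(true, false), 0x50); // mid-transaction read
    assert_eq!(aux_header(true, true), 0x10);  // final read
    assert_eq!(aux_header(false, true), 0x00); // final write
}
```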
@@ -0,0 +1,233 @@
use common::io::{Io, Mmio, ReadOnly};
use std::mem;
use syscall::error::{Error, Result, EIO};
use super::MmioRegion;
const MBOX_VBT: u32 = 1 << 3;
fn read_bios_string(array: &[ReadOnly<Mmio<u8>>]) -> String {
let mut string = String::new();
for reg in array.iter() {
let b = reg.read();
if b == 0 {
break;
}
string.push(b as char);
}
string
}
#[repr(C, packed)]
pub struct BiosHeader {
signature: [ReadOnly<Mmio<u8>>; 16],
size: ReadOnly<Mmio<u32>>,
struct_version: ReadOnly<Mmio<u32>>,
system_bios_version: [ReadOnly<Mmio<u8>>; 32],
video_bios_version: [ReadOnly<Mmio<u8>>; 16],
//TODO: should we write graphics driver version?
graphics_driver_version: [ReadOnly<Mmio<u8>>; 16],
mailboxes: ReadOnly<Mmio<u32>>,
driver_model: Mmio<u32>,
platform_config: ReadOnly<Mmio<u32>>,
gop_version: [ReadOnly<Mmio<u8>>; 32],
}
impl BiosHeader {
pub fn dump(&self) {
eprint!(" op region header");
eprint!(" signature {:?}", read_bios_string(&self.signature));
eprint!(" size {:08X}", self.size.read());
eprint!(" struct_version {:08X}", self.struct_version.read());
eprint!(
" system_bios_version {:?}",
read_bios_string(&self.system_bios_version)
);
eprint!(
" video_bios_version {:?}",
read_bios_string(&self.video_bios_version)
);
eprint!(
" graphics_driver_version {:?}",
read_bios_string(&self.graphics_driver_version)
);
eprint!(" mailboxes {:08X}", self.mailboxes.read());
eprint!(" driver_model {:08X}", self.driver_model.read());
eprint!(" platform_config {:08X}", self.platform_config.read());
eprint!(" gop_version {:?}", read_bios_string(&self.gop_version));
eprintln!();
}
}
#[repr(C, packed)]
pub struct VbtHeader {
signature: [ReadOnly<Mmio<u8>>; 20],
version: Mmio<u16>,
header_size: Mmio<u16>,
vbt_size: Mmio<u16>,
vbt_checksum: Mmio<u8>,
_rsvd: Mmio<u8>,
bdb_offset: Mmio<u32>,
aim_offsets: [Mmio<u32>; 4],
}
impl VbtHeader {
pub fn dump(&self) {
eprint!(" VBT header");
eprint!(" signature {:?}", read_bios_string(&self.signature));
eprint!(" version {:04X}", self.version.read());
eprint!(" header_size {:04X}", self.header_size.read());
eprint!(" vbt_size {:04X}", self.vbt_size.read());
eprint!(" vbt_checksum {:02X}", self.vbt_checksum.read());
eprint!(" bdb_offset {:08X}", self.bdb_offset.read());
for (i, aim_offset) in self.aim_offsets.iter().enumerate() {
eprint!(" aim_offset{} {:08X}", i, aim_offset.read());
}
eprintln!();
}
}
#[repr(C, packed)]
pub struct BdbHeader {
signature: [ReadOnly<Mmio<u8>>; 16],
version: Mmio<u16>,
header_size: Mmio<u16>,
bdb_size: Mmio<u16>,
}
impl BdbHeader {
pub fn dump(&self) {
eprint!(" BDB header");
eprint!(" signature {:?}", read_bios_string(&self.signature));
eprint!(" version {:04X}", self.version.read());
eprint!(" header_size {:04X}", self.header_size.read());
eprint!(" bdb_size {:04X}", self.bdb_size.read());
eprintln!();
}
}
#[repr(C, packed)]
pub struct BdbBlock {
id: Mmio<u8>,
size: Mmio<u16>,
}
impl BdbBlock {
pub fn dump(&self) {
eprint!(" BDB block");
eprint!(" id {}", self.id.read());
eprint!(" size {}", self.size.read());
eprintln!();
}
}
#[repr(C, packed)]
pub struct BdbGeneralDefinitions {
crt_ddc_gmbus_pin: Mmio<u8>,
dpms: Mmio<u8>,
boot_displays: [Mmio<u8>; 2],
child_dev_size: Mmio<u8>,
}
impl BdbGeneralDefinitions {
pub fn dump(&self) {
eprint!(" BDB general definitions");
eprint!(" crt_ddc_gmbus_pin {:02X}", self.crt_ddc_gmbus_pin.read());
eprint!(" dpms {:02X}", self.dpms.read());
for (i, boot_display) in self.boot_displays.iter().enumerate() {
eprint!(" boot_display{} {:02X}", i, boot_display.read());
}
eprint!(" child_dev_size {:02X}", self.child_dev_size.read());
eprintln!();
}
}
pub struct Bios {
region: MmioRegion,
header: &'static mut BiosHeader,
}
impl Bios {
pub fn new(region: MmioRegion) -> Result<Self> {
let header = unsafe { &mut *(region.virt as *mut BiosHeader) };
header.dump();
{
let sig = read_bios_string(&header.signature);
if sig != "IntelGraphicsMem" {
log::warn!("invalid op region signature {:?}", sig);
return Err(Error::new(EIO));
}
}
let size = (header.size.read() as usize) * 1024;
if size != region.size {
log::warn!("invalid op region size {}", size);
return Err(Error::new(EIO));
}
//TODO: other mailboxes?
if header.mailboxes.read() & MBOX_VBT == 0 {
log::warn!("op region does not support VBT mailbox");
return Err(Error::new(EIO));
}
//TODO: read VBT from mailbox 3 RVDA (0x3BA) and RVDS (0x3C2) if missing in mailbox 4
let vbt_addr = region.virt + 1024;
let vbt_header = unsafe { &*(vbt_addr as *const VbtHeader) };
vbt_header.dump();
//TODO: check vbt checksum
{
let sig = read_bios_string(&vbt_header.signature);
if !sig.starts_with("$VBT") {
log::warn!("invalid VBT signature {:?}", sig);
return Err(Error::new(EIO));
}
}
let bdb_addr = vbt_addr + (vbt_header.bdb_offset.read() as usize);
let bdb_header = unsafe { &*(bdb_addr as *const BdbHeader) };
bdb_header.dump();
{
let sig = read_bios_string(&bdb_header.signature);
if sig != "BIOS_DATA_BLOCK " {
log::warn!("invalid BDB signature {:?}", sig);
bdb_header.dump();
return Err(Error::new(EIO));
}
}
let mut block_addr = bdb_addr + bdb_header.header_size.read() as usize;
let block_end = bdb_addr + bdb_header.bdb_size.read() as usize;
while block_addr + mem::size_of::<BdbBlock>() <= block_end {
let block = unsafe { &*(block_addr as *const BdbBlock) };
//TODO: mipi sequence v3 has different size field
let id = block.id.read();
let size = block.size.read() as usize;
block_addr += mem::size_of::<BdbBlock>();
if block_addr + size <= block_end {
match id {
2 => {
if size >= mem::size_of::<BdbGeneralDefinitions>() {
let gen_defs =
unsafe { &*(block_addr as *const BdbGeneralDefinitions) };
gen_defs.dump();
} else {
log::warn!("BDB general definitions too small");
block.dump();
}
}
_ => block.dump(),
}
block_addr += block.size.read() as usize;
} else {
log::warn!("truncated block id {} size {}", id, size);
break;
}
}
Ok(Self { region, header })
}
}
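The block walk in `Bios::new` above iterates id/size records: a packed 1-byte id and 2-byte little-endian size header, followed by `size` payload bytes. The same traversal over a plain byte slice, without the MMIO wrappers (hypothetical helper):

```rust
/// Walk BDB-style blocks: 1-byte id, 2-byte little-endian size, then
/// `size` payload bytes (sketch of the loop in `Bios::new` above).
fn walk_blocks(data: &[u8]) -> Vec<(u8, usize)> {
    let mut out = Vec::new();
    let mut pos = 0;
    while pos + 3 <= data.len() {
        let id = data[pos];
        let size = u16::from_le_bytes([data[pos + 1], data[pos + 2]]) as usize;
        pos += 3;
        if pos + size > data.len() {
            break; // truncated block, as in the driver's warning path
        }
        out.push((id, size));
        pos += size;
    }
    out
}

fn main() {
    // Two blocks: id 2 with a 1-byte payload, id 9 with a 2-byte payload.
    let data = [2, 1, 0, 0xAA, 9, 2, 0, 0xBB, 0xCC];
    assert_eq!(walk_blocks(&data), vec![(2, 1), (9, 2)]);
}
```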
@@ -0,0 +1,46 @@
use std::{ptr, slice};
use crate::device::ggtt::GlobalGtt;
use crate::device::MmioRegion;
#[derive(Debug)]
pub struct GpuBuffer {
pub virt: *mut u8,
pub gm_offset: u32,
pub size: u32,
}
impl GpuBuffer {
pub unsafe fn new(gm: &MmioRegion, gm_offset: u32, size: u32, clear: bool) -> Self {
let virt = ptr::with_exposed_provenance_mut::<u8>(gm.virt + gm_offset as usize);
if clear {
let onscreen = slice::from_raw_parts_mut(virt, size as usize);
onscreen.fill(0);
}
Self {
virt,
gm_offset,
size,
}
}
pub fn alloc(gm: &MmioRegion, ggtt: &mut GlobalGtt, size: u32) -> syscall::Result<Self> {
let gm_offset = ggtt.alloc_phys_mem(size)?;
Ok(unsafe { GpuBuffer::new(gm, gm_offset, size, true) })
}
pub fn alloc_dumb(
gm: &MmioRegion,
ggtt: &mut GlobalGtt,
width: u32,
height: u32,
) -> syscall::Result<(Self, u32)> {
//TODO: documentation on this is not great
let stride = (width * 4).next_multiple_of(64);
Ok((GpuBuffer::alloc(gm, ggtt, stride * height)?, stride))
}
}
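`alloc_dumb` above derives the stride by rounding the 32bpp row width up to a 64-byte boundary. The arithmetic in isolation:

```rust
/// Stride in bytes for a 32bpp dumb buffer, rounded up to 64 bytes as in
/// `GpuBuffer::alloc_dumb` above.
fn dumb_stride(width: u32) -> u32 {
    (width * 4).next_multiple_of(64)
}

fn main() {
    assert_eq!(dumb_stride(1920), 7680); // 7680 is already 64-aligned
    assert_eq!(dumb_stride(1366), 5504); // 5464 rounds up to 5504
}
```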
@@ -0,0 +1,758 @@
use common::io::{Io, MmioPtr, WriteOnly};
use common::timeout::Timeout;
use embedded_hal::prelude::*;
use std::sync::Arc;
use syscall::error::{Error, Result, EIO};
use crate::device::aux::Aux;
use crate::device::power::PowerWells;
use crate::device::{CallbackGuard, Gmbus};
use super::{GpioPort, MmioRegion};
// IHD-OS-TGL-Vol 2c-12.21 DDI_AUX_CTL
pub const DDI_AUX_CTL_BUSY: u32 = 1 << 31;
pub const DDI_AUX_CTL_DONE: u32 = 1 << 30;
pub const DDI_AUX_CTL_TIMEOUT_ERROR: u32 = 1 << 28;
pub const DDI_AUX_CTL_TIMEOUT_SHIFT: u32 = 26;
pub const DDI_AUX_CTL_TIMEOUT_MASK: u32 = 0b11 << DDI_AUX_CTL_TIMEOUT_SHIFT;
pub const DDI_AUX_CTL_TIMEOUT_4000US: u32 = 0b11 << DDI_AUX_CTL_TIMEOUT_SHIFT;
pub const DDI_AUX_CTL_RECEIVE_ERROR: u32 = 1 << 25;
pub const DDI_AUX_CTL_SIZE_SHIFT: u32 = 20;
pub const DDI_AUX_CTL_SIZE_MASK: u32 = 0b11111 << DDI_AUX_CTL_SIZE_SHIFT;
pub const DDI_AUX_CTL_IO_SELECT: u32 = 1 << 11;
// IHD-OS-TGL-Vol 2c-12.21 DDI_BUF_CTL
pub const DDI_BUF_CTL_ENABLE: u32 = 1 << 31;
pub const DDI_BUF_CTL_IDLE: u32 = 1 << 7;
// IHD-OS-TGL-Vol 2c-12.21 PORT_CL_DW5
pub const PORT_CL_DW5_SUS_CLOCK_MASK: u32 = 0b11 << 0;
// IHD-OS-TGL-Vol 2c-12.21 PORT_CL_DW10
pub const PORT_CL_DW10_EDP4K2K_MODE_OVRD_EN: u32 = 1 << 3;
pub const PORT_CL_DW10_EDP4K2K_MODE_OVRD_VAL: u32 = 1 << 2;
// IHD-OS-TGL-Vol 2c-12.21 PORT_PCS_DW9
pub const PORT_PCS_DW1_CMNKEEPER_ENABLE: u32 = 1 << 26;
// IHD-OS-TGL-Vol 2c-12.21 PORT_TX_DW2
pub const PORT_TX_DW2_SWING_SEL_UPPER_SHIFT: u32 = 15;
pub const PORT_TX_DW2_SWING_SEL_UPPER_MASK: u32 = 1 << PORT_TX_DW2_SWING_SEL_UPPER_SHIFT;
pub const PORT_TX_DW2_SWING_SEL_LOWER_SHIFT: u32 = 11;
pub const PORT_TX_DW2_SWING_SEL_LOWER_MASK: u32 = 0b111 << PORT_TX_DW2_SWING_SEL_LOWER_SHIFT;
pub const PORT_TX_DW2_RCOMP_SCALAR_SHIFT: u32 = 0;
pub const PORT_TX_DW2_RCOMP_SCALAR_MASK: u32 = 0xFF << PORT_TX_DW2_RCOMP_SCALAR_SHIFT;
// IHD-OS-TGL-Vol 2c-12.21 PORT_TX_DW4
pub const PORT_TX_DW4_SELECT: u32 = 1 << 31;
pub const PORT_TX_DW4_POST_CURSOR_1_SHIFT: u32 = 12;
pub const PORT_TX_DW4_POST_CURSOR_1_MASK: u32 = 0b111111 << PORT_TX_DW4_POST_CURSOR_1_SHIFT;
pub const PORT_TX_DW4_POST_CURSOR_2_SHIFT: u32 = 6;
pub const PORT_TX_DW4_POST_CURSOR_2_MASK: u32 = 0b111111 << PORT_TX_DW4_POST_CURSOR_2_SHIFT;
pub const PORT_TX_DW4_CURSOR_COEFF_SHIFT: u32 = 0;
pub const PORT_TX_DW4_CURSOR_COEFF_MASK: u32 = 0b111111 << PORT_TX_DW4_CURSOR_COEFF_SHIFT;
// IHD-OS-TGL-Vol 2c-12.21 PORT_TX_DW5
pub const PORT_TX_DW5_TRAINING_ENABLE: u32 = 1 << 31;
pub const PORT_TX_DW5_DISABLE_2_TAP_SHIFT: u32 = 30;
pub const PORT_TX_DW5_DISABLE_2_TAP: u32 = 1 << PORT_TX_DW5_DISABLE_2_TAP_SHIFT;
pub const PORT_TX_DW5_DISABLE_3_TAP: u32 = 1 << 29;
pub const PORT_TX_DW5_CURSOR_PROGRAM: u32 = 1 << 26;
pub const PORT_TX_DW5_COEFF_POLARITY: u32 = 1 << 25;
pub const PORT_TX_DW5_SCALING_MODE_SEL_SHIFT: u32 = 18;
pub const PORT_TX_DW5_SCALING_MODE_SEL_MASK: u32 = 0b111 << PORT_TX_DW5_SCALING_MODE_SEL_SHIFT;
pub const PORT_TX_DW5_RTERM_SELECT_SHIFT: u32 = 3;
pub const PORT_TX_DW5_RTERM_SELECT_MASK: u32 = 0b111 << PORT_TX_DW5_RTERM_SELECT_SHIFT;
// IHD-OS-TGL-Vol 2c-12.21 PORT_TX_DW7
pub const PORT_TX_DW7_N_SCALAR_SHIFT: u32 = 24;
#[derive(Clone, Copy, Debug)]
#[repr(usize)]
pub enum PortClReg {
Dw5 = 0x14,
Dw10 = 0x28,
Dw12 = 0x30,
Dw15 = 0x3C,
Dw16 = 0x40,
}
#[derive(Clone, Copy, Debug)]
#[repr(usize)]
pub enum PortCompReg {
Dw0 = 0x100,
Dw1 = 0x104,
Dw3 = 0x10C,
Dw8 = 0x120,
Dw9 = 0x124,
Dw10 = 0x128,
}
#[derive(Clone, Copy, Debug)]
#[repr(usize)]
pub enum PortPcsReg {
Dw1 = 0x04,
Dw9 = 0x24,
}
#[derive(Clone, Copy, Debug)]
#[repr(usize)]
pub enum PortTxReg {
Dw0 = 0x80,
Dw1 = 0x84,
Dw2 = 0x88,
Dw4 = 0x90,
Dw5 = 0x94,
Dw6 = 0x98,
Dw7 = 0x9C,
Dw8 = 0xA0,
}
#[derive(Clone, Copy, Debug)]
#[repr(usize)]
pub enum PortLane {
Aux = 0x300,
Grp = 0x600,
Ln0 = 0x800,
Ln1 = 0x900,
Ln2 = 0xA00,
Ln3 = 0xB00,
}
pub struct Ddi {
pub name: &'static str,
pub index: usize,
pub gttmm: Arc<MmioRegion>,
pub port_base: Option<usize>,
pub aux_ctl: MmioPtr<u32>,
pub aux_datas: [MmioPtr<u32>; 5],
pub buf_ctl: MmioPtr<u32>,
pub dpclka_cfgcr0_clock_shift: Option<u32>,
pub dpclka_cfgcr0_clock_off: Option<u32>,
pub gmbus_pin_pair: Option<u8>,
pub gpio_port: Option<GpioPort>,
pub pwr_well_ctl_aux_request: u32,
pub pwr_well_ctl_aux_state: u32,
pub pwr_well_ctl_ddi_request: u32,
pub pwr_well_ctl_ddi_state: u32,
pub sde_interrupt_hotplug: Option<u32>,
pub transcoder_index: Option<u32>,
}
//TODO: verify offsets and count using DeviceKind?
impl Ddi {
pub fn dump(&self) {
eprint!("Ddi {} {}", self.name, self.index);
eprint!(" buf_ctl {:08X}", self.buf_ctl.read());
let lanes = [PortLane::Ln0, PortLane::Ln1, PortLane::Ln2, PortLane::Ln3];
for reg in [
PortClReg::Dw5,
PortClReg::Dw10,
PortClReg::Dw12,
PortClReg::Dw15,
PortClReg::Dw16,
] {
if let Some(mmio) = self.port_cl(reg) {
eprint!(" CL_{:?} {:08X}", reg, mmio.read());
}
}
for reg in [PortPcsReg::Dw1, PortPcsReg::Dw9] {
for lane in lanes {
if let Some(mmio) = self.port_pcs(reg, lane) {
eprint!(" PCS_{:?}_{:?} {:08X}", reg, lane, mmio.read());
}
}
}
for reg in [
PortTxReg::Dw0,
PortTxReg::Dw1,
PortTxReg::Dw2,
PortTxReg::Dw4,
PortTxReg::Dw5,
PortTxReg::Dw6,
PortTxReg::Dw7,
PortTxReg::Dw8,
] {
for lane in lanes {
if let Some(mmio) = self.port_tx(reg, lane) {
eprint!(" TX_{:?}_{:?} {:08X}", reg, lane, mmio.read());
}
}
}
eprintln!();
}
fn port_reg(&self, offset: usize) -> Option<MmioPtr<u32>> {
//TODO: handle gttmm.mmio error?
unsafe { self.gttmm.mmio(self.port_base? + offset).ok() }
}
pub fn port_cl(&self, reg: PortClReg) -> Option<MmioPtr<u32>> {
self.port_reg(reg as usize)
}
pub fn port_comp(&self, reg: PortCompReg) -> Option<MmioPtr<u32>> {
self.port_reg(reg as usize)
}
//TODO: return WriteOnly if PortLane::Grp?
pub fn port_pcs(&self, reg: PortPcsReg, lane: PortLane) -> Option<MmioPtr<u32>> {
self.port_reg((reg as usize) + (lane as usize))
}
//TODO: return WriteOnly if PortLane::Grp?
pub fn port_tx(&self, reg: PortTxReg, lane: PortLane) -> Option<MmioPtr<u32>> {
self.port_reg((reg as usize) + (lane as usize))
}
pub fn probe_edid(
&mut self,
power_wells: &mut PowerWells,
gttmm: &MmioRegion,
gmbus: &mut Gmbus,
) -> Result<Option<(&'static str, [u8; 128])>, Error> {
if let Some(port_comp_dw0) = self.port_comp(PortCompReg::Dw0) {
log::debug!("PORT_COMP_DW0_{}: {:08X}", self.name, port_comp_dw0.read());
}
let mut aux_read_edid = |ddi: &mut Ddi| -> Result<[u8; 128]> {
//TODO: BLOCK TCCOLD?
//TODO: the request can be shared by multiple DDIs
let pwr_well_ctl_aux_request = ddi.pwr_well_ctl_aux_request;
let pwr_well_ctl_aux_state = ddi.pwr_well_ctl_aux_state;
let mut pwr_well_ctl_aux = unsafe { MmioPtr::new(power_wells.ctl_aux.as_mut_ptr()) };
let _pwr_guard = CallbackGuard::new(
&mut pwr_well_ctl_aux,
|pwr_well_ctl_aux| {
// Enable aux power
pwr_well_ctl_aux.writef(pwr_well_ctl_aux_request, true);
let timeout = Timeout::from_micros(1500);
while !pwr_well_ctl_aux.readf(pwr_well_ctl_aux_state) {
timeout.run().map_err(|()| {
log::debug!("timeout while requesting DDI {} aux power", ddi.name);
Error::new(EIO)
})?;
}
Ok(())
},
|pwr_well_ctl_aux| {
// Disable aux power
pwr_well_ctl_aux.writef(pwr_well_ctl_aux_request, false);
},
)?;
let mut edid_data = [0; 128];
Aux::new(ddi)
.write_read(0x50, &[0x00], &mut edid_data)
.map_err(|_err| Error::new(EIO))?;
Ok(edid_data)
};
let mut gmbus_read_edid = |ddi: &mut Ddi| -> Result<[u8; 128]> {
let Some(pin_pair) = ddi.gmbus_pin_pair else {
return Err(Error::new(EIO));
};
let mut edid_data = [0; 128];
gmbus
.pin_pair(pin_pair)
.write_read(0x50, &[0x00], &mut edid_data)
.map_err(|_err| Error::new(EIO))?;
Ok(edid_data)
};
let gpio_read_edid = |ddi: &mut Ddi| -> Result<[u8; 128]> {
let Some(port) = &ddi.gpio_port else {
return Err(Error::new(EIO));
};
let mut edid_data = [0; 128];
unsafe { port.i2c(gttmm)? }
.write_read(0x50, &[0x00], &mut edid_data)
.map_err(|_err| Error::new(EIO))?;
Ok(edid_data)
};
match aux_read_edid(self) {
Ok(edid_data) => return Ok(Some(("AUX", edid_data))),
Err(err) => {
log::debug!("DDI {} failed to read EDID from AUX: {}", self.name, err);
}
}
match gmbus_read_edid(self) {
Ok(edid_data) => return Ok(Some(("GMBUS", edid_data))),
Err(err) => {
log::debug!("DDI {} failed to read EDID from GMBUS: {}", self.name, err);
}
}
match gpio_read_edid(self) {
Ok(edid_data) => return Ok(Some(("GPIO", edid_data))),
Err(err) => {
log::debug!("DDI {} failed to read EDID from GPIO: {}", self.name, err);
}
}
// Will try again but not fail the driver
Ok(None)
}
pub fn voltage_swing_hdmi(
&mut self,
gttmm: &MmioRegion,
timing: &edid::DetailedTiming,
) -> Result<()> {
struct Setting {
dw2_swing_sel: u32,
dw7_n_scalar: u32,
dw4_cursor_coeff: u32,
dw4_post_cursor_1: u32,
dw5_2_tap_disable: u32,
}
impl Setting {
pub fn new(
dw2_swing_sel: u32,
dw7_n_scalar: u32,
dw4_cursor_coeff: u32,
dw4_post_cursor_1: u32,
dw5_2_tap_disable: u32,
) -> Self {
Self {
dw2_swing_sel,
dw7_n_scalar,
dw4_cursor_coeff,
dw4_post_cursor_1,
dw5_2_tap_disable,
}
}
}
// IHD-OS-TGL-Vol 12-1.22-Rev2.0 "Voltage Swing Programming"
let settings = vec![
// HDMI 450mV, 450mV, 0.0dB
Setting::new(0b1010, 0x60, 0x3F, 0x00, 0b0),
// HDMI 450mV, 650mV, 3.2dB
Setting::new(0b1011, 0x73, 0x36, 0x09, 0b0),
// HDMI 450mV, 850mV, 5.5dB
Setting::new(0b0110, 0x7F, 0x31, 0x0E, 0b0),
// HDMI 650mV, 650mV, 0.0dB
Setting::new(0b1011, 0x73, 0x3F, 0x00, 0b0),
// HDMI 650mV, 850mV, 2.3dB
Setting::new(0b0110, 0x7F, 0x37, 0x08, 0b0),
// HDMI 850mV, 850mV, 0.0dB
Setting::new(0b0110, 0x7F, 0x3F, 0x00, 0b0),
// HDMI 600mV, 850mV, 3.0dB
Setting::new(0b0110, 0x7F, 0x35, 0x0A, 0b0),
];
// Last setting is the default
//TODO: get correct setting index from BIOS
let setting = settings.last().unwrap();
        // Ensure the `port_*` unwraps below cannot panic
        if self.port_base.is_none() {
            log::error!("HDMI voltage swing procedure only implemented on combo DDI");
            return Err(Error::new(EIO));
        }
// Clear cmnkeeper_enable for HDMI
{
// It is not possible to read from GRP register, so use LN0 as template
let pcs_dw1_ln0 = self.port_pcs(PortPcsReg::Dw1, PortLane::Ln0).unwrap();
let mut pcs_dw1_grp =
WriteOnly::new(self.port_pcs(PortPcsReg::Dw1, PortLane::Grp).unwrap());
let mut v = pcs_dw1_ln0.read();
v &= !PORT_PCS_DW1_CMNKEEPER_ENABLE;
pcs_dw1_grp.write(v);
}
// Program loadgen select
//TODO: this assumes bit rate <= 6 GHz and 4 lanes enabled
{
let mut tx_dw4_ln0 = self.port_tx(PortTxReg::Dw4, PortLane::Ln0).unwrap();
tx_dw4_ln0.writef(PORT_TX_DW4_SELECT, false);
let mut tx_dw4_ln1 = self.port_tx(PortTxReg::Dw4, PortLane::Ln1).unwrap();
tx_dw4_ln1.writef(PORT_TX_DW4_SELECT, true);
let mut tx_dw4_ln2 = self.port_tx(PortTxReg::Dw4, PortLane::Ln2).unwrap();
tx_dw4_ln2.writef(PORT_TX_DW4_SELECT, true);
let mut tx_dw4_ln3 = self.port_tx(PortTxReg::Dw4, PortLane::Ln3).unwrap();
tx_dw4_ln3.writef(PORT_TX_DW4_SELECT, true);
}
// Set PORT_CL_DW5 sus clock config to 11b
{
let mut cl_dw5 = self.port_cl(PortClReg::Dw5).unwrap();
cl_dw5.writef(PORT_CL_DW5_SUS_CLOCK_MASK, true);
}
// Clear training enable to change swing values
let tx_dw5_ln0 = self.port_tx(PortTxReg::Dw5, PortLane::Ln0).unwrap();
let mut tx_dw5_grp = WriteOnly::new(self.port_tx(PortTxReg::Dw5, PortLane::Grp).unwrap());
{
let mut v = tx_dw5_ln0.read();
v &= !PORT_TX_DW5_TRAINING_ENABLE;
tx_dw5_grp.write(v);
}
// Program swing and de-emphasis
// Disable eDP bits in PORT_CL_DW10
let mut cl_dw10 = self.port_cl(PortClReg::Dw10).unwrap();
cl_dw10.writef(
PORT_CL_DW10_EDP4K2K_MODE_OVRD_EN | PORT_CL_DW10_EDP4K2K_MODE_OVRD_VAL,
false,
);
// For PORT_TX_DW5:
// - Set 2 tap disable from settings
// - Set scaling mode sel to 010b
// - Set rterm select to 110b
// - Set 3 tap disable to 1
// - Set cursor program to 0
// - Set coeff polarity to 0
{
let mut v = tx_dw5_ln0.read();
v &= !(PORT_TX_DW5_DISABLE_2_TAP
| PORT_TX_DW5_CURSOR_PROGRAM
| PORT_TX_DW5_COEFF_POLARITY
| PORT_TX_DW5_SCALING_MODE_SEL_MASK
| PORT_TX_DW5_RTERM_SELECT_MASK);
v |= (setting.dw5_2_tap_disable << PORT_TX_DW5_DISABLE_2_TAP_SHIFT)
| PORT_TX_DW5_DISABLE_3_TAP
| (0b010 << PORT_TX_DW5_SCALING_MODE_SEL_SHIFT)
| (0b110 << PORT_TX_DW5_RTERM_SELECT_SHIFT);
tx_dw5_grp.write(v);
}
// Individual lane settings are used to avoid overwriting lane-specific settings, and because
// group registers cannot be read
let lanes = [PortLane::Ln0, PortLane::Ln1, PortLane::Ln2, PortLane::Ln3];
// For PORT_TX_DW2:
// - Set swing sel from settings
// - Set rcomp scalar to 0x98
for lane in lanes {
let mut tx_dw2 = self.port_tx(PortTxReg::Dw2, lane).unwrap();
let mut v = tx_dw2.read();
v &= !(PORT_TX_DW2_SWING_SEL_UPPER_MASK
| PORT_TX_DW2_SWING_SEL_LOWER_MASK
| PORT_TX_DW2_RCOMP_SCALAR_MASK);
v |= (((setting.dw2_swing_sel >> 3) & 1) << PORT_TX_DW2_SWING_SEL_UPPER_SHIFT)
| ((setting.dw2_swing_sel & 0b111) << PORT_TX_DW2_SWING_SEL_LOWER_SHIFT)
| (0x98 << PORT_TX_DW2_RCOMP_SCALAR_SHIFT);
tx_dw2.write(v);
}
// For PORT_TX_DW4:
// - Set post cursor 1 from settings
// - Set post cursor 2 to 0x0
// - Set cursor coeff from settings
for lane in lanes {
let mut tx_dw4 = self.port_tx(PortTxReg::Dw4, lane).unwrap();
let mut v = tx_dw4.read();
v &= !(PORT_TX_DW4_POST_CURSOR_1_MASK
| PORT_TX_DW4_POST_CURSOR_2_MASK
| PORT_TX_DW4_CURSOR_COEFF_MASK);
v |= (setting.dw4_post_cursor_1 << PORT_TX_DW4_POST_CURSOR_1_SHIFT)
| (setting.dw4_cursor_coeff << PORT_TX_DW4_CURSOR_COEFF_SHIFT);
tx_dw4.write(v);
}
// For PORT_TX_DW7:
// - Set n scalar from settings
for lane in lanes {
let mut tx_dw7 = self.port_tx(PortTxReg::Dw7, lane).unwrap();
// All other bits are spare
tx_dw7.write(setting.dw7_n_scalar << PORT_TX_DW7_N_SCALAR_SHIFT);
}
// Set training enable to trigger update
{
let mut v = tx_dw5_ln0.read();
v |= PORT_TX_DW5_TRAINING_ENABLE;
tx_dw5_grp.write(v);
}
Ok(())
}
pub fn kabylake(gttmm: &Arc<MmioRegion>) -> Result<Vec<Self>> {
let mut ddis = Vec::new();
for (i, name) in [
"A", "B", "C", "D",
//TODO: missing AUX regs? "E",
]
.iter()
.enumerate()
{
ddis.push(Self {
name,
index: i,
port_base: None, //TODO: port regs
gttmm: gttmm.clone(),
// IHD-OS-KBL-Vol 2c-1.17 DDI_AUX_CTL
aux_ctl: unsafe { gttmm.mmio(0x64010 + i * 0x100)? },
// IHD-OS-KBL-Vol 2c-1.17 DDI_AUX_DATA
aux_datas: [
unsafe { gttmm.mmio(0x64014 + i * 0x100)? },
unsafe { gttmm.mmio(0x64018 + i * 0x100)? },
unsafe { gttmm.mmio(0x6401C + i * 0x100)? },
unsafe { gttmm.mmio(0x64020 + i * 0x100)? },
unsafe { gttmm.mmio(0x64024 + i * 0x100)? },
],
// IHD-OS-KBL-Vol 2c-1.17 DDI_BUF_CTL
buf_ctl: unsafe { gttmm.mmio(0x64000 + i * 0x100)? },
// N/A
dpclka_cfgcr0_clock_shift: None,
dpclka_cfgcr0_clock_off: None,
// IHD-OS-KBL-Vol 2c-1.17 GMBUS0
gmbus_pin_pair: match *name {
"B" => Some(0b101),
"C" => Some(0b100),
"D" => Some(0b110),
_ => None,
},
// IHD-OS-KBL-Vol 12-1.17 GMBUS and GPIO
gpio_port: match *name {
"B" => Some(GpioPort::Port4),
"C" => Some(GpioPort::Port3),
"D" => Some(GpioPort::Port5),
_ => None,
},
// IHD-OS-KBL-Vol 2c-1.17 PWR_WELL_CTL
// All auxes go through the same Misc IO request
pwr_well_ctl_aux_request: 1 << 1,
pwr_well_ctl_aux_state: 1 << 0,
pwr_well_ctl_ddi_request: match *name {
"A" | "E" => 1 << 3,
"B" => 1 << 5,
"C" => 1 << 7,
"D" => 1 << 9,
_ => unreachable!(),
},
pwr_well_ctl_ddi_state: match *name {
"A" | "E" => 1 << 2,
"B" => 1 << 4,
"C" => 1 << 6,
"D" => 1 << 8,
_ => unreachable!(),
},
// IHD-OS-KBL-Vol 2c-1.17 SDE_INTERRUPT
sde_interrupt_hotplug: match *name {
"A" => Some(1 << 24),
"B" => Some(1 << 21),
"C" => Some(1 << 22),
"D" => Some(1 << 23),
"E" => Some(1 << 25),
_ => None,
},
// IHD-OS-KBL-Vol 2c-1.17 TRANS_CLK_SEL
transcoder_index: match *name {
"B" => Some(0b010),
"C" => Some(0b011),
"D" => Some(0b100),
"E" => Some(0b101),
_ => None,
},
});
}
Ok(ddis)
}
pub fn tigerlake(gttmm: &Arc<MmioRegion>) -> Result<Vec<Self>> {
let mut ddis = Vec::new();
for (i, name) in [
"A", "B", "C", "USBC1", "USBC2", "USBC3", "USBC4", "USBC5", "USBC6",
]
.iter()
.enumerate()
{
let port_base = match i {
0 => Some(0x162000),
1 => Some(0x6C000),
2 => Some(0x160000),
_ => None,
};
ddis.push(Self {
name,
index: i,
port_base,
gttmm: gttmm.clone(),
// IHD-OS-TGL-Vol 2c-12.21 DDI_AUX_CTL
aux_ctl: unsafe { gttmm.mmio(0x64010 + i * 0x100)? },
// IHD-OS-TGL-Vol 2c-12.21 DDI_AUX_DATA
aux_datas: [
unsafe { gttmm.mmio(0x64014 + i * 0x100)? },
unsafe { gttmm.mmio(0x64018 + i * 0x100)? },
unsafe { gttmm.mmio(0x6401C + i * 0x100)? },
unsafe { gttmm.mmio(0x64020 + i * 0x100)? },
unsafe { gttmm.mmio(0x64024 + i * 0x100)? },
],
// IHD-OS-TGL-Vol 2c-12.21 DDI_BUF_CTL
buf_ctl: unsafe { gttmm.mmio(0x64000 + i * 0x100)? },
// IHD-OS-TGL-Vol 2c-12.21 DPCLKA_CFGCR0
dpclka_cfgcr0_clock_shift: match i {
0 => Some(0),
1 => Some(2),
2 => Some(4),
_ => None,
},
dpclka_cfgcr0_clock_off: match i {
// DDI
0 => Some(1 << 10),
1 => Some(1 << 11),
2 => Some(1 << 24),
// Type C
3 => Some(1 << 12),
4 => Some(1 << 13),
5 => Some(1 << 14),
6 => Some(1 << 21),
7 => Some(1 << 22),
8 => Some(1 << 23),
_ => None,
},
//TODO: link to docs
gmbus_pin_pair: match i {
// DDI pins
0 => Some(1),
1 => Some(2),
2 => Some(3),
// Type C pins
3 => Some(9),
4 => Some(10),
5 => Some(11),
6 => Some(12),
7 => Some(13),
8 => Some(14),
_ => None,
},
// IHD-OS-TGL-Vol 12-1.22-Rev2.0 GMBUS and GPIO
gpio_port: match *name {
"A" => Some(GpioPort::Port1),
"B" => Some(GpioPort::Port2),
"C" => Some(GpioPort::Port3),
"USBC1" => Some(GpioPort::Port9),
"USBC2" => Some(GpioPort::Port10),
"USBC3" => Some(GpioPort::Port11),
"USBC4" => Some(GpioPort::Port12),
"USBC5" => Some(GpioPort::Port13),
"USBC6" => Some(GpioPort::Port14),
_ => None,
},
// IHD-OS-TGL-Vol 2c-12.21 PWR_WELL_CTL_AUX
pwr_well_ctl_aux_request: 2 << (i * 2),
pwr_well_ctl_aux_state: 1 << (i * 2),
// IHD-OS-TGL-Vol 2c-12.21 PWR_WELL_CTL_DDI
pwr_well_ctl_ddi_request: 2 << (i * 2),
pwr_well_ctl_ddi_state: 1 << (i * 2),
// IHD-OS-TGL-Vol 2c-12.21 SDE_INTERRUPT
sde_interrupt_hotplug: match i {
0 => Some(1 << 16),
1 => Some(1 << 17),
2 => Some(1 << 18),
_ => None,
},
// IHD-OS-TGL-Vol 2c-12.21 TRANS_CLK_SEL
transcoder_index: Some((i + 1) as u32),
})
}
Ok(ddis)
}
pub fn alchemist(gttmm: &Arc<MmioRegion>) -> Result<Vec<Self>> {
let mut ddis = Vec::new();
for (i, name) in ["A", "B", "C", "USBC1", "USBC2", "USBC3", "USBC4", "D", "E"]
.iter()
.enumerate()
{
let port_base = match i {
0 => Some(0x162000),
1 => Some(0x6C000),
2 => Some(0x160000),
_ => None,
};
ddis.push(Self {
name,
index: i,
port_base,
gttmm: gttmm.clone(),
// IHD-OS-ACM-Vol 2c-3.23 DDI_AUX_CTL
aux_ctl: unsafe { gttmm.mmio(0x64010 + i * 0x100)? },
// IHD-OS-ACM-Vol 2c-3.23 DDI_AUX_DATA
aux_datas: [
unsafe { gttmm.mmio(0x64014 + i * 0x100)? },
unsafe { gttmm.mmio(0x64018 + i * 0x100)? },
unsafe { gttmm.mmio(0x6401C + i * 0x100)? },
unsafe { gttmm.mmio(0x64020 + i * 0x100)? },
unsafe { gttmm.mmio(0x64024 + i * 0x100)? },
],
// IHD-OS-ACM-Vol 2c-3.23 DDI_BUF_CTL
buf_ctl: unsafe { gttmm.mmio(0x64000 + i * 0x100)? },
// IHD-OS-ACM-Vol 2c-3.23 DPCLKA_CFGCR0
dpclka_cfgcr0_clock_shift: match i {
0 => Some(0),
1 => Some(2),
2 => Some(4),
_ => None,
},
dpclka_cfgcr0_clock_off: match i {
// DDI
0 => Some(1 << 10),
1 => Some(1 << 11),
2 => Some(1 << 24),
// Type C
3 => Some(1 << 12),
4 => Some(1 << 13),
5 => Some(1 << 14),
6 => Some(1 << 21),
7 => Some(1 << 22),
8 => Some(1 << 23),
_ => None,
},
//TODO: link to docs
gmbus_pin_pair: match i {
// DDI pins
0 => Some(1),
1 => Some(2),
2 => Some(3),
// Type C pins
3 => Some(9),
4 => Some(10),
5 => Some(11),
6 => Some(12),
7 => Some(13),
8 => Some(14),
_ => None,
},
// IHD-OS-ACM-Vol 12-3.23 GMBUS and GPIO
gpio_port: match *name {
"A" => Some(GpioPort::Port1),
"B" => Some(GpioPort::Port2),
"C" => Some(GpioPort::Port3),
"D" => Some(GpioPort::Port4),
"USBC1" => Some(GpioPort::Port9),
_ => None,
},
// IHD-OS-ACM-Vol 2c-3.23 PWR_WELL_CTL_AUX
pwr_well_ctl_aux_request: 2 << (i * 2),
pwr_well_ctl_aux_state: 1 << (i * 2),
// IHD-OS-ACM-Vol 2c-3.23 PWR_WELL_CTL_DDI
pwr_well_ctl_ddi_request: 2 << (i * 2),
pwr_well_ctl_ddi_state: 1 << (i * 2),
// IHD-OS-ACM-Vol 2c-3.23 SDE_INTERRUPT
sde_interrupt_hotplug: match i {
0 => Some(1 << 16),
1 => Some(1 << 17),
2 => Some(1 << 18),
_ => None,
},
// IHD-OS-ACM-Vol 2c-3.23 TRANS_CLK_SEL
transcoder_index: Some((i + 1) as u32),
})
}
Ok(ddis)
}
}
@@ -0,0 +1,197 @@
use common::io::{Io, MmioPtr};
use syscall::error::{Error, Result, EIO};
use super::MmioRegion;
pub const DPLL_CFGCR1_QDIV_RATIO_SHIFT: u32 = 10;
pub const DPLL_CFGCR1_QDIV_RATIO_MASK: u32 = 0xFF << DPLL_CFGCR1_QDIV_RATIO_SHIFT;
pub const DPLL_CFGCR1_QDIV_MODE: u32 = 1 << 9;
pub const DPLL_CFGCR1_KDIV_1: u32 = 0b001 << 6;
pub const DPLL_CFGCR1_KDIV_2: u32 = 0b010 << 6;
pub const DPLL_CFGCR1_KDIV_3: u32 = 0b100 << 6;
pub const DPLL_CFGCR1_KDIV_MASK: u32 = 0b111 << 6;
pub const DPLL_CFGCR1_PDIV_2: u32 = 0b0001 << 2;
pub const DPLL_CFGCR1_PDIV_3: u32 = 0b0010 << 2;
pub const DPLL_CFGCR1_PDIV_5: u32 = 0b0100 << 2;
pub const DPLL_CFGCR1_PDIV_7: u32 = 0b1000 << 2;
pub const DPLL_CFGCR1_PDIV_MASK: u32 = 0b1111 << 2;
pub const DPLL_ENABLE_ENABLE: u32 = 1 << 31;
pub const DPLL_ENABLE_LOCK: u32 = 1 << 30;
pub const DPLL_ENABLE_POWER_ENABLE: u32 = 1 << 27;
pub const DPLL_ENABLE_POWER_STATE: u32 = 1 << 26;
pub const DPLL_SSC_ENABLE: u32 = 1 << 9;
pub struct Dpll {
pub name: &'static str,
// IHD-OS-TGL-Vol 2c-12.21 DPLL_CFGCR0
pub cfgcr0: MmioPtr<u32>,
// IHD-OS-TGL-Vol 2c-12.21 DPLL_CFGCR1
pub cfgcr1: MmioPtr<u32>,
// IHD-OS-TGL-Vol 2c-12.21 DPLL_DIV0
pub div0: MmioPtr<u32>,
// IHD-OS-TGL-Vol 2c-12.21 DPCLKA_CFGCR0
pub dpclka_cfgcr0_clock_value: u32,
// IHD-OS-TGL-Vol 2c-12.21 DPLL_ENABLE
pub enable: MmioPtr<u32>,
// IHD-OS-TGL-Vol 2c-12.21 DPLL_SSC
pub ssc: MmioPtr<u32>,
}
//TODO: verify offsets and count using DeviceKind?
impl Dpll {
pub fn dump(&self) {
eprint!("Dpll {}", self.name);
eprint!(" cfgcr0 {:08X}", self.cfgcr0.read());
eprint!(" cfgcr1 {:08X}", self.cfgcr1.read());
eprint!(" div0 {:08X}", self.div0.read());
eprint!(" enable {:08X}", self.enable.read());
eprint!(" ssc {:08X}", self.ssc.read());
eprintln!();
}
pub fn set_freq_hdmi(
&mut self,
mut ref_freq: u64,
timing: &edid::DetailedTiming,
) -> Result<()> {
// IHD-OS-TGL-Vol 12-1.22-Rev2.0 "Formula for HDMI Mode DPLL Programming"
const KHz: u64 = 1_000;
const MHz: u64 = KHz * 1_000;
let dco_min: u64 = 7_998 * MHz;
let dco_mid: u64 = 8_999 * MHz;
let dco_max: u64 = 10_000 * MHz;
// If the reference frequency is 38.4 MHz, use 19.2 MHz because the DPLL automatically divides it by 2.
if ref_freq == 38_400_000 {
ref_freq /= 2;
}
//TODO: this symbol frequency is only valid for RGB 8 bits per color
let symbol_freq = (timing.pixel_clock as u64) * KHz;
let pll_freq = symbol_freq * 5;
#[derive(Debug)]
struct Setting {
pdiv: u64,
kdiv: u64,
qdiv: u64,
cfgcr1: u32,
dco: u64,
dco_dist: u64,
}
let mut best_setting: Option<Setting> = None;
for (pdiv, pdiv_reg) in [
(2, DPLL_CFGCR1_PDIV_2),
(3, DPLL_CFGCR1_PDIV_3),
(5, DPLL_CFGCR1_PDIV_5),
(7, DPLL_CFGCR1_PDIV_7),
] {
for (kdiv, kdiv_reg) in [
(1, DPLL_CFGCR1_KDIV_1),
(2, DPLL_CFGCR1_KDIV_2),
(3, DPLL_CFGCR1_KDIV_3),
] {
let qdiv_range = if kdiv == 2 { 1..=0xFF } else { 1..=1 };
for qdiv in qdiv_range {
let qdiv_reg = if qdiv == 1 {
0
} else {
((qdiv as u32) << DPLL_CFGCR1_QDIV_RATIO_SHIFT) | DPLL_CFGCR1_QDIV_MODE
};
let dco = pll_freq * pdiv * kdiv * qdiv;
if dco <= dco_min || dco >= dco_max {
// DCO outside of valid range
continue;
}
let dco_dist = dco.abs_diff(dco_mid);
let setting = Setting {
pdiv,
kdiv,
qdiv,
cfgcr1: pdiv_reg | kdiv_reg | qdiv_reg,
dco,
dco_dist,
};
best_setting = match best_setting.take() {
Some(other) if other.dco_dist < setting.dco_dist => Some(other),
_ => Some(setting),
};
}
}
}
let Some(setting) = best_setting else {
log::error!("failed to find valid DPLL setting");
return Err(Error::new(EIO));
};
eprintln!("{:?}", setting);
// Configure DPLL_CFGCR0 to set DCO frequency
{
let dco_int = setting.dco / ref_freq;
let dco_fract = ((setting.dco - (dco_int * ref_freq)) << 15) / ref_freq;
self.cfgcr0
.write(((dco_fract as u32) << 10) | (dco_int as u32));
}
// Configure DPLL_CFGCR1 to set the dividers
{
let mut v = self.cfgcr1.read();
let mask = DPLL_CFGCR1_QDIV_RATIO_MASK
| DPLL_CFGCR1_QDIV_MODE
| DPLL_CFGCR1_KDIV_MASK
| DPLL_CFGCR1_PDIV_MASK;
v &= !mask;
v |= setting.cfgcr1 & mask;
self.cfgcr1.write(v);
}
// Read back DPLL_CFGCR0 and DPLL_CFGCR1 to ensure writes are complete
let _ = self.cfgcr0.read();
let _ = self.cfgcr1.read();
Ok(())
}
pub fn tigerlake(gttmm: &MmioRegion) -> Result<Vec<Self>> {
let mut dplls = Vec::new();
dplls.push(Self {
name: "0",
cfgcr0: unsafe { gttmm.mmio(0x164284)? },
cfgcr1: unsafe { gttmm.mmio(0x164288)? },
div0: unsafe { gttmm.mmio(0x164B00)? },
dpclka_cfgcr0_clock_value: 0b00,
enable: unsafe { gttmm.mmio(0x46010)? },
ssc: unsafe { gttmm.mmio(0x164B10)? },
});
dplls.push(Self {
name: "1",
cfgcr0: unsafe { gttmm.mmio(0x16428C)? },
cfgcr1: unsafe { gttmm.mmio(0x164290)? },
div0: unsafe { gttmm.mmio(0x164C00)? },
dpclka_cfgcr0_clock_value: 0b01,
enable: unsafe { gttmm.mmio(0x46014)? },
ssc: unsafe { gttmm.mmio(0x164C10)? },
});
/*TODO: not present on U-class CPUs
dplls.push(Self {
name: "4",
cfgcr0: unsafe { gttmm.mmio(0x164294)? },
cfgcr1: unsafe { gttmm.mmio(0x164298)? },
div0: unsafe { gttmm.mmio(0x164E00)? },
dpclka_cfgcr0_clock_value: 0b10,
enable: unsafe { gttmm.mmio(0x46018)? },
ssc: unsafe { gttmm.mmio(0x164E10)? },
});
*/
Ok(dplls)
}
}
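A standalone sketch (not driver code) of the divider search performed by `set_freq_hdmi` above: pick `(pdiv, kdiv, qdiv)` so that `pll_freq * pdiv * kdiv * qdiv` lands as close as possible to the 8999 MHz DCO midpoint while staying inside the valid DCO range. The helper name `best_dividers` is illustrative; constants follow the function above.

```rust
// Illustrative sketch of the DPLL divider search in set_freq_hdmi
fn best_dividers(pll_freq: u64) -> Option<(u64, u64, u64, u64)> {
    const MHZ: u64 = 1_000_000;
    let (dco_min, dco_mid, dco_max) = (7_998 * MHZ, 8_999 * MHZ, 10_000 * MHZ);
    let mut best: Option<(u64, u64, u64, u64)> = None;
    let mut best_dist = u64::MAX;
    for pdiv in [2u64, 3, 5, 7] {
        for kdiv in [1u64, 2, 3] {
            // Only kdiv == 2 allows a programmable Q divider ratio
            let qdiv_range = if kdiv == 2 { 1..=0xFF } else { 1..=1 };
            for qdiv in qdiv_range {
                let dco = pll_freq * pdiv * kdiv * qdiv;
                if dco <= dco_min || dco >= dco_max {
                    // DCO outside of valid range
                    continue;
                }
                let dist = dco.abs_diff(dco_mid);
                if dist <= best_dist {
                    best_dist = dist;
                    best = Some((pdiv, kdiv, qdiv, dco));
                }
            }
        }
    }
    best
}

fn main() {
    // 1920x1080@60 has a 148.5 MHz pixel clock; PLL frequency is 5x the
    // symbol clock, so 742.5 MHz. Only a total divider of 12 fits the range.
    let (p, k, q, dco) = best_dividers(742_500_000).unwrap();
    assert_eq!(p * k * q, 12);
    assert_eq!(dco, 8_910_000_000);
}
```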
@@ -0,0 +1,134 @@
use std::sync::Arc;
use std::{mem, ptr};
use pcid_interface::PciFunctionHandle;
use range_alloc::RangeAllocator;
use syscall::{Error, EIO};
use crate::device::MmioRegion;
/// Global Graphics Translation Table (global GTT)
///
/// The global GTT is a page table used by all parts of the GPU that don't use
/// the PPGTT (Per-Process GTT). This includes the display engine and the GM
/// aperture that the CPU can access.
///
/// The global GTT is located in the GTTMM BAR at offset 8 MiB, is up to 8 MiB
/// in size and consists of 64-bit entries. Each entry has a present bit as its
/// LSB and the frame address in bits 12 through 38. The remaining bits are
/// ignored.
///
/// Source: Pages 6 and 75 of intel-gfx-prm-osrc-kbl-vol05-memory_views.pdf
pub struct GlobalGtt {
gttmm: Arc<MmioRegion>,
/// Base address of the GTT
gtt_base: *mut u64,
/// Size of the GTT
gtt_size: usize,
/// Allocator for GM aperture pages
gm_alloc: RangeAllocator<u32>,
// FIXME reuse DSM memory for something useful
/// Base of the Data Stolen Memory (DSM)
base_dsm: *mut (),
/// Size of DSM
size_data_stolen_memory: usize,
}
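A standalone sketch (not driver code) of the entry layout described in the doc comment above: present bit as the LSB, the frame address taken from bits 12 through 38 of the physical address, remaining bits ignored. The helper name `gtt_entry` is illustrative.

```rust
// Illustrative encoding of one global GTT entry
fn gtt_entry(phys_frame: u64) -> u64 {
    // Keep bits 12..=38 of the physical frame address
    const ADDR_MASK: u64 = ((1 << 39) - 1) & !0xFFF;
    // Present bit is the LSB
    const PRESENT: u64 = 1;
    (phys_frame & ADDR_MASK) | PRESENT
}

fn main() {
    assert_eq!(gtt_entry(0x0001_2345_6000), 0x0001_2345_6001);
}
```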
const GTT_PAGE_SIZE: u32 = 4096;
impl GlobalGtt {
pub unsafe fn new(
pcid_handle: &mut PciFunctionHandle,
gttmm: Arc<MmioRegion>,
gm_size: u32,
) -> Self {
let gtt_offset = 8 * 1024 * 1024;
let gtt_base = ptr::with_exposed_provenance_mut(gttmm.virt + gtt_offset);
let base_dsm = unsafe { pcid_handle.read_config(0x5C) };
let ggc = unsafe { pcid_handle.read_config(0x50) };
let dsm_size = match (ggc >> 8) & 0xFF {
size if size & 0xF0 == 0 => size * 32 * 1024 * 1024,
size => (size & !0xF0) * 4 * 1024 * 1024,
} as usize;
let gtt_size = match (ggc >> 6) & 0x3 {
0 => 0,
1 => 2 * 1024 * 1024,
2 => 4 * 1024 * 1024,
3 => 8 * 1024 * 1024,
_ => unreachable!(),
} as usize;
log::info!("Base DSM: {:X}", base_dsm);
log::info!(
"GGC: {:X} => global GTT size: {}MiB; DSM size: {}MiB",
ggc,
gtt_size / 1024 / 1024,
dsm_size / 1024 / 1024,
);
let gm_alloc = RangeAllocator::new(0..gm_size / GTT_PAGE_SIZE);
GlobalGtt {
gttmm,
gtt_base,
gtt_size,
gm_alloc,
base_dsm: ptr::with_exposed_provenance_mut(base_dsm as usize),
size_data_stolen_memory: dsm_size,
}
}
/// Reset the global GTT by clearing out all existing mappings.
pub unsafe fn reset(&mut self) {
for i in 0..self.gtt_size / 8 {
unsafe { *self.gtt_base.add(i) = 0 };
}
}
pub fn reserve(&mut self, surf: u32, surf_size: u32) {
assert!(surf.is_multiple_of(GTT_PAGE_SIZE));
assert!(surf_size.is_multiple_of(GTT_PAGE_SIZE));
self.gm_alloc
.allocate_exact_range(
surf / GTT_PAGE_SIZE..surf / GTT_PAGE_SIZE + surf_size / GTT_PAGE_SIZE,
)
.unwrap_or_else(|err| {
panic!(
"failed to allocate pre-existing surface at 0x{:x} of size {}: {:?}",
surf, surf_size, err
);
});
}
pub fn alloc_phys_mem(&mut self, size: u32) -> syscall::Result<u32> {
let size = size.next_multiple_of(GTT_PAGE_SIZE);
let sgl = common::sgl::Sgl::new(size as usize)?;
let range = self
.gm_alloc
.allocate_range(size / GTT_PAGE_SIZE)
.map_err(|err| {
log::warn!("failed to allocate buffer of size {}: {:?}", size, err);
Error::new(EIO)
})?;
for chunk in sgl.chunks() {
for i in 0..chunk.length / GTT_PAGE_SIZE as usize {
unsafe {
*self
.gtt_base
.add(range.start as usize + chunk.offset / GTT_PAGE_SIZE as usize + i) =
chunk.phys as u64 + i as u64 * u64::from(GTT_PAGE_SIZE) + 1;
}
}
}
mem::forget(sgl);
Ok(range.start * GTT_PAGE_SIZE)
}
}
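A standalone sketch (not driver code) of the GGC decode done in `new` above: bits 15:8 (the graphics mode select field) give the data-stolen-memory size, and bits 7:6 give the global GTT size. The helper name `decode_ggc` is illustrative.

```rust
// Illustrative decode of the GGC config register: (gtt_size, dsm_size)
fn decode_ggc(ggc: u32) -> (usize, usize) {
    const MIB: usize = 1024 * 1024;
    let dsm_size = match (ggc >> 8) & 0xFF {
        // Low encodings count in 32 MiB units, high encodings in 4 MiB units
        size if size & 0xF0 == 0 => size as usize * 32 * MIB,
        size => (size & !0xF0) as usize * 4 * MIB,
    };
    let gtt_size = match (ggc >> 6) & 0x3 {
        0 => 0,
        1 => 2 * MIB,
        2 => 4 * MIB,
        3 => 8 * MIB,
        _ => unreachable!(),
    };
    (gtt_size, dsm_size)
}

fn main() {
    // GMS = 0x05 (5 * 32 MiB = 160 MiB DSM), GGMS = 0b11 (8 MiB global GTT)
    let (gtt, dsm) = decode_ggc(0x05C0);
    assert_eq!(gtt, 8 * 1024 * 1024);
    assert_eq!(dsm, 160 * 1024 * 1024);
}
```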
@@ -0,0 +1,150 @@
use common::{
io::{Io, MmioPtr},
timeout::Timeout,
};
use embedded_hal::blocking::i2c::{self, Operation, SevenBitAddress, Transactional};
use super::MmioRegion;
const GMBUS1_SW_RDY: u32 = 1 << 30;
const GMBUS1_CYCLE_STOP: u32 = 1 << 27;
const GMBUS1_CYCLE_INDEX: u32 = 1 << 26;
const GMBUS1_CYCLE_WAIT: u32 = 1 << 25;
const GMBUS1_SIZE_SHIFT: u32 = 16;
const GMBUS1_INDEX_SHIFT: u32 = 8;
const GMBUS2_HW_RDY: u32 = 1 << 11;
const GMBUS2_ACTIVE: u32 = 1 << 9;
pub struct Gmbus {
regs: [MmioPtr<u32>; 6],
}
impl Gmbus {
pub unsafe fn new(gttmm: &MmioRegion) -> syscall::Result<Self> {
Ok(Self {
regs: [
gttmm.mmio(0xC5100)?,
gttmm.mmio(0xC5104)?,
gttmm.mmio(0xC5108)?,
gttmm.mmio(0xC510C)?,
gttmm.mmio(0xC5110)?,
gttmm.mmio(0xC5120)?,
],
})
}
pub fn pin_pair<'a>(&'a mut self, pin_pair: u8) -> GmbusPinPair<'a> {
GmbusPinPair {
regs: &mut self.regs,
pin_pair,
}
}
}
pub struct GmbusPinPair<'a> {
regs: &'a mut [MmioPtr<u32>; 6],
pin_pair: u8,
}
impl<'a> Transactional for GmbusPinPair<'a> {
type Error = ();
fn exec(&mut self, addr7: SevenBitAddress, ops: &mut [Operation<'_>]) -> Result<(), ()> {
let mut ops_iter = ops.iter_mut();
//TODO: gmbus is actually smbus, not fully i2c compatible!
// The first operation MUST be a write of the index
let index = match ops_iter.next() {
Some(Operation::Write(buf)) if buf.len() == 1 => buf[0],
unsupported => {
log::error!("GMBUS unsupported first operation {:?}", unsupported);
return Err(());
}
};
// Reset
self.regs[1].write(0);
// Set pin pair, enabling interface
self.regs[0].write(self.pin_pair as u32);
for op in ops_iter {
// Start operation
let (addr8, size) = match op {
Operation::Read(buf) => ((addr7 << 1) | 1, buf.len() as u32),
Operation::Write(buf) => (addr7 << 1, buf.len() as u32),
};
if size >= 512 {
log::error!("GMBUS transaction size {} too large", size);
return Err(());
}
self.regs[1].write(
GMBUS1_SW_RDY
| GMBUS1_CYCLE_INDEX
| GMBUS1_CYCLE_WAIT
| (size << GMBUS1_SIZE_SHIFT)
| (index as u32) << GMBUS1_INDEX_SHIFT
| (addr8 as u32),
);
// Perform transaction
match op {
Operation::Read(buf) => {
for chunk in buf.chunks_mut(4) {
{
//TODO: ideal timeout for gmbus read?
let timeout = Timeout::from_millis(10);
while !self.regs[2].readf(GMBUS2_HW_RDY) {
timeout.run().map_err(|()| {
log::debug!(
"timeout on GMBUS read 0x{:08x}",
self.regs[2].read()
);
()
})?;
}
}
let bytes = self.regs[3].read().to_le_bytes();
chunk.copy_from_slice(&bytes[..chunk.len()]);
}
}
Operation::Write(_) => {
log::warn!("TODO: GMBUS WRITE");
return Err(());
}
}
}
// Stop transaction
self.regs[1].write(GMBUS1_SW_RDY | GMBUS1_CYCLE_STOP);
// Wait idle
let timeout = Timeout::from_millis(10);
while self.regs[2].readf(GMBUS2_ACTIVE) {
timeout.run().map_err(|()| {
log::debug!("timeout on GMBUS active 0x{:08x}", self.regs[2].read());
()
})?;
}
// Disable GMBUS interface
self.regs[0].write(0);
Ok(())
}
}
impl<'a> i2c::WriteRead for GmbusPinPair<'a> {
type Error = ();
fn write_read(
&mut self,
addr7: SevenBitAddress,
bytes: &[u8],
buffer: &mut [u8],
) -> Result<(), ()> {
self.exec(
addr7,
&mut [Operation::Write(bytes), Operation::Read(buffer)],
)
}
}
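A standalone sketch (not driver code) of the GMBUS1 command word packed in `exec` above for an indexed read cycle. The helper name `gmbus1_read_cmd` and the EDID example values are illustrative.

```rust
// Illustrative packing of the GMBUS1 command word for an indexed read
fn gmbus1_read_cmd(addr7: u8, index: u8, size: u32) -> u32 {
    const GMBUS1_SW_RDY: u32 = 1 << 30;
    const GMBUS1_CYCLE_INDEX: u32 = 1 << 26;
    const GMBUS1_CYCLE_WAIT: u32 = 1 << 25;
    // 8-bit address: 7-bit address shifted left, read bit set
    let addr8 = (addr7 << 1) | 1;
    GMBUS1_SW_RDY
        | GMBUS1_CYCLE_INDEX
        | GMBUS1_CYCLE_WAIT
        | (size << 16) // GMBUS1_SIZE_SHIFT
        | ((index as u32) << 8) // GMBUS1_INDEX_SHIFT
        | (addr8 as u32)
}

fn main() {
    // Read 128 bytes of EDID from i2c address 0x50, starting at index 0
    assert_eq!(gmbus1_read_cmd(0x50, 0, 128), 0x4680_00A1);
}
```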
@@ -0,0 +1,99 @@
use std::convert::Infallible;
use std::time::Duration;
use common::io::{Io, MmioPtr};
use embedded_hal::digital::v2 as digital;
use crate::device::HalTimer;
use super::MmioRegion;
const GPIO_DIR_MASK: u32 = 1 << 0;
const GPIO_DIR_OUT: u32 = 1 << 1;
const GPIO_VAL_MASK: u32 = 1 << 2;
const GPIO_VAL_OUT: u32 = 1 << 3;
const GPIO_VAL_IN: u32 = 1 << 4;
const GPIO_CLOCK_SHIFT: u32 = 0;
const GPIO_DATA_SHIFT: u32 = 8;
#[derive(Copy, Clone, Debug)]
#[repr(usize)]
pub enum GpioPort {
Port0 = 0xC5010,
Port1 = 0xC5014,
Port2 = 0xC5018,
Port3 = 0xC501C,
Port4 = 0xC5020,
Port5 = 0xC5024,
Port6 = 0xC5028,
Port7 = 0xC502C,
Port8 = 0xC5030,
Port9 = 0xC5034,
Port10 = 0xC5038,
Port11 = 0xC503C,
Port12 = 0xC5040,
Port13 = 0xC5044,
Port14 = 0xC5048,
Port15 = 0xC504C,
}
impl GpioPort {
pub unsafe fn i2c(
&self,
gttmm: &MmioRegion,
) -> syscall::Result<bitbang_hal::i2c::I2cBB<GpioPin, GpioPin, HalTimer>> {
let i2c_freq = 100_000.0;
let (scl, sda) = unsafe {
(
GpioPin {
ctl: gttmm.mmio(*self as usize)?,
shift: GPIO_CLOCK_SHIFT,
},
GpioPin {
ctl: gttmm.mmio(*self as usize)?,
shift: GPIO_DATA_SHIFT,
},
)
};
Ok(bitbang_hal::i2c::I2cBB::new(
scl,
sda,
HalTimer::new(Duration::from_secs_f64(1.0 / i2c_freq)),
))
}
}
pub struct GpioPin {
ctl: MmioPtr<u32>,
shift: u32,
}
impl digital::InputPin for GpioPin {
type Error = Infallible;
fn is_high(&self) -> Result<bool, Infallible> {
Ok(((self.ctl.read() >> self.shift) & GPIO_VAL_IN) == GPIO_VAL_IN)
}
fn is_low(&self) -> Result<bool, Infallible> {
Ok(((self.ctl.read() >> self.shift) & GPIO_VAL_IN) == 0)
}
}
impl digital::OutputPin for GpioPin {
type Error = Infallible;
fn set_low(&mut self) -> Result<(), Infallible> {
// Set GPIO to output with value 0
let value = GPIO_DIR_MASK | GPIO_DIR_OUT | GPIO_VAL_MASK;
self.ctl.write(value << self.shift);
Ok(())
}
fn set_high(&mut self) -> Result<(), Infallible> {
// Assuming external pull-up, set GPIO to input
let value = GPIO_DIR_MASK;
self.ctl.write(value << self.shift);
Ok(())
}
}
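A standalone sketch (not driver code) of the GPIO_CTL values the `OutputPin` impl above writes for the data line (shift 8). "Low" drives the pin to 0; "high" releases it to an input and relies on the external pull-up, which is how open-drain I2C signaling is emulated on this pin. Helper names are illustrative; bit values mirror the constants above.

```rust
// Drive the data line low: direction write-enable (bit 0), direction = output
// (bit 1), value write-enable (bit 2) with the value bit left at 0
fn data_low() -> u32 {
    0b111 << 8
}

// Release the data line high: only the direction write-enable, turning the
// pin back into an input so the external pull-up raises the line
fn data_high() -> u32 {
    0b001 << 8
}

fn main() {
    assert_eq!(data_low(), 0x0700);
    assert_eq!(data_high(), 0x0100);
}
```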
@@ -0,0 +1,2 @@
mod timer;
pub use self::timer::*;
@@ -0,0 +1,38 @@
use embedded_hal::timer;
use std::time::{Duration, Instant};
use void::Void;
pub struct HalTimer {
instant: Instant,
duration: Duration,
}
impl HalTimer {
pub fn new(duration: Duration) -> Self {
Self {
instant: Instant::now(),
duration,
}
}
}
impl timer::CountDown for HalTimer {
type Time = Duration;
fn start<T: Into<Duration>>(&mut self, duration: T) {
self.instant = Instant::now();
self.duration = duration.into();
}
fn wait(&mut self) -> nb::Result<(), Void> {
if self.instant.elapsed() < self.duration {
std::thread::yield_now();
Err(nb::Error::WouldBlock)
} else {
// Since this timer is periodic, schedule the next trigger one duration after the previous deadline
self.instant += self.duration;
Ok(())
}
}
}
impl timer::Periodic for HalTimer {}
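A standalone sketch (not driver code) of why `wait` above advances with `self.instant += self.duration` rather than resetting to `Instant::now()`: adding whole periods to the original start keeps ticks on a fixed grid even when each wakeup is late. The helper name `nth_offset` is illustrative.

```rust
use std::time::Duration;

// Offset of the nth tick from the timer's start; late wakeups do not move it
fn nth_offset(period: Duration, n: u32) -> Duration {
    period * n
}

fn main() {
    // Five 10 ms periods always end exactly 50 ms after the start
    assert_eq!(
        nth_offset(Duration::from_millis(10), 5),
        Duration::from_millis(50)
    );
}
```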
@@ -0,0 +1,966 @@
use common::{
io::{Io, MmioPtr},
timeout::Timeout,
};
use pcid_interface::{PciFunction, PciFunctionHandle};
use range_alloc::RangeAllocator;
use std::{collections::VecDeque, fmt, mem, sync::Arc};
use syscall::error::{Error, Result, EIO, ENODEV, ERANGE};
mod aux;
mod bios;
use self::bios::*;
mod buffer;
mod ddi;
use self::ddi::*;
mod dpll;
use self::dpll::*;
mod gmbus;
pub use self::gmbus::*;
mod gpio;
pub use self::gpio::*;
mod ggtt;
use ggtt::*;
mod hal;
pub use self::hal::*;
mod pipe;
use self::pipe::*;
mod power;
use self::power::*;
mod scheme;
mod transcoder;
use self::transcoder::*;
//TODO: move to common?
pub struct CallbackGuard<'a, T, F: FnOnce(&mut T)> {
value: &'a mut T,
fini: Option<F>,
}
impl<'a, T, F: FnOnce(&mut T)> CallbackGuard<'a, T, F> {
// Note that fini will also run if init fails
pub fn new(value: &'a mut T, init: impl FnOnce(&mut T) -> Result<()>, fini: F) -> Result<Self> {
let mut this = Self {
value,
fini: Some(fini),
};
init(&mut this.value)?;
Ok(this)
}
}
impl<'a, T, F: FnOnce(&mut T)> Drop for CallbackGuard<'a, T, F> {
fn drop(&mut self) {
let fini = self.fini.take().unwrap();
fini(&mut self.value);
}
}
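A standalone sketch (not driver code) of the CallbackGuard contract above, using plain std types instead of syscall::Result: `fini` runs when the guard drops, and also when `init` fails, so hardware teardown cannot be skipped on the error path. The names `Guard` and `with_guard` are illustrative.

```rust
// Simplified analogue of CallbackGuard over a Vec used as an event log
struct Guard<'a, F: FnOnce(&mut Vec<&'static str>)> {
    value: &'a mut Vec<&'static str>,
    fini: Option<F>,
}

impl<'a, F: FnOnce(&mut Vec<&'static str>)> Drop for Guard<'a, F> {
    fn drop(&mut self) {
        // Take fini out of the Option so it can be called by value
        let fini = self.fini.take().unwrap();
        fini(&mut *self.value);
    }
}

fn with_guard<'a, F: FnOnce(&mut Vec<&'static str>)>(
    value: &'a mut Vec<&'static str>,
    init: impl FnOnce(&mut Vec<&'static str>) -> Result<(), ()>,
    fini: F,
) -> Result<Guard<'a, F>, ()> {
    let guard = Guard { value, fini: Some(fini) };
    // If init fails, guard is dropped right here, so fini still runs
    init(&mut *guard.value)?;
    Ok(guard)
}

fn main() {
    let mut log = Vec::new();
    // init fails; the guard is dropped inside with_guard, running fini
    let _ = with_guard(
        &mut log,
        |l| {
            l.push("init");
            Err(())
        },
        |l| l.push("fini"),
    );
    assert_eq!(log, ["init", "fini"]);
}
```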
pub struct ChangeDetect {
name: &'static str,
reg: MmioPtr<u32>,
value: u32,
}
impl ChangeDetect {
fn new(name: &'static str, reg: MmioPtr<u32>) -> Self {
let value = reg.read();
Self { name, reg, value }
}
fn log(&self) {
log::info!("{} {:08X}", self.name, self.value);
}
fn check(&mut self) {
let value = self.reg.read();
if value != self.value {
self.value = value;
self.log();
}
}
}
#[derive(Clone, Copy, Debug)]
pub enum DeviceKind {
KabyLake,
TigerLake,
Alchemist,
}
pub enum Event {
DdiHotplug(&'static str),
}
pub struct InterruptRegs {
// Interrupt status register, has live status of interrupts
pub isr: MmioPtr<u32>,
// Interrupt mask register, masks ISR bits from reaching IIR, 0 is unmasked
pub imr: MmioPtr<u32>,
// Interrupt identity register, write 1 to clear
pub iir: MmioPtr<u32>,
// Interrupt enable register, 1 allows the interrupt to propagate
pub ier: MmioPtr<u32>,
}
pub struct Interrupter {
change_detects: Vec<ChangeDetect>,
display_int_ctl: MmioPtr<u32>,
display_int_ctl_enable: u32,
display_int_ctl_sde: u32,
gfx_mstr_intr: Option<MmioPtr<u32>>,
gfx_mstr_intr_display: u32,
gfx_mstr_intr_enable: u32,
sde_interrupt: InterruptRegs,
}
#[derive(Debug)]
pub struct MmioRegion {
phys: usize,
virt: usize,
size: usize,
}
impl MmioRegion {
fn new(phys: usize, size: usize, memory_type: common::MemoryType) -> Result<Self> {
let virt = unsafe { common::physmap(phys, size, common::Prot::RW, memory_type)? as usize };
Ok(Self { phys, virt, size })
}
unsafe fn mmio(&self, offset: usize) -> Result<MmioPtr<u32>> {
// Any errors here will return ERANGE
let err = Error::new(ERANGE);
if offset.checked_add(mem::size_of::<u32>()).ok_or(err)? > self.size {
return Err(err);
}
let addr = self.virt.checked_add(offset).ok_or(err)?;
Ok(unsafe { MmioPtr::new(addr as *mut u32) })
}
}
impl Drop for MmioRegion {
fn drop(&mut self) {
unsafe {
let _ = libredox::call::munmap(self.virt as *mut (), self.size);
}
}
}
#[derive(Clone, Copy, Debug)]
enum VideoInput {
Hdmi,
Dp,
}
pub struct Device {
kind: DeviceKind,
alloc_buffers: RangeAllocator<u32>,
bios: Option<Bios>,
ddis: Vec<Ddi>,
dpclka_cfgcr0: Option<MmioPtr<u32>>,
dplls: Vec<Dpll>,
events: VecDeque<Event>,
framebuffers: Vec<DeviceFb>,
int: Interrupter,
gttmm: Arc<MmioRegion>,
ggtt: GlobalGtt,
gm: MmioRegion,
gmbus: Gmbus,
pipes: Vec<Pipe>,
power_wells: PowerWells,
ref_freq: u64,
transcoders: Vec<Transcoder>,
}
impl fmt::Debug for Device {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("Device")
.field("kind", &self.kind)
.field("alloc_buffers", &self.alloc_buffers)
.field("gttmm", &self.gttmm)
.field("gm", &self.gm)
.field("ref_freq", &self.ref_freq)
.finish_non_exhaustive()
}
}
impl Device {
pub fn new(pcid_handle: &mut PciFunctionHandle, func: &PciFunction) -> Result<Self> {
let kind = match (func.full_device_id.vendor_id, func.full_device_id.device_id) {
// Kaby Lake
(0x8086, 0x5912) |
(0x8086, 0x5916) |
(0x8086, 0x591B) |
(0x8086, 0x591E) |
(0x8086, 0x5926) |
// Comet Lake, seems to be compatible with Kaby Lake
(0x8086, 0x9B21) |
(0x8086, 0x9B41) |
(0x8086, 0x9BA4) |
(0x8086, 0x9BAA) |
(0x8086, 0x9BAC) |
(0x8086, 0x9BC4) |
(0x8086, 0x9BC5) |
(0x8086, 0x9BC6) |
(0x8086, 0x9BC8) |
(0x8086, 0x9BCA) |
(0x8086, 0x9BCC) |
(0x8086, 0x9BE6) |
(0x8086, 0x9BF6) => {
DeviceKind::KabyLake
}
// Tiger Lake
(0x8086, 0x9A40) |
(0x8086, 0x9A49) |
(0x8086, 0x9A60) |
(0x8086, 0x9A68) |
(0x8086, 0x9A70) |
(0x8086, 0x9A78) => {
DeviceKind::TigerLake
}
// Alchemist
(0x8086, 0x5690) | // A770M
(0x8086, 0x5691) | // A730M
(0x8086, 0x5692) | // A550M
(0x8086, 0x5693) | // A370M
(0x8086, 0x5694) | // A350M
(0x8086, 0x5696) | // A570M
(0x8086, 0x5697) | // A530M
(0x8086, 0x56A0) | // A770
(0x8086, 0x56A1) | // A750
(0x8086, 0x56A5) | // A380
(0x8086, 0x56A6) | // A310
(0x8086, 0x56B0) | // Pro A30M
(0x8086, 0x56B1) | // Pro A40/A50
(0x8086, 0x56B2) | // Pro A60M
(0x8086, 0x56B3) | // Pro A60
(0x8086, 0x56C0) | // GPU Flex 170
(0x8086, 0x56C1) // GPU Flex 140
=> {
DeviceKind::Alchemist
}
(vendor_id, device_id) => {
log::error!("unsupported ID {:04X}:{:04X}", vendor_id, device_id);
return Err(Error::new(ENODEV));
}
};
let gttmm = {
let (phys, size) = func.bars[0].expect_mem();
Arc::new(MmioRegion::new(
phys,
size,
common::MemoryType::Uncacheable,
)?)
};
log::info!("GTTMM {:X?}", gttmm);
let gm = {
let (phys, size) = func.bars[2].expect_mem();
MmioRegion::new(phys, size, common::MemoryType::WriteCombining)?
};
log::info!("GM {:X?}", gm);
/* IOBAR not used, not present on all generations
let iobar = func.bars[4].expect_port();
log::debug!("IOBAR {:X?}", iobar);
*/
// IGD OpRegion/Software SCI/_DSM for Skylake Processors
let bios_base = unsafe { pcid_handle.read_config(0xFC) };
let bios = if bios_base != 0 {
log::info!("BIOS {:X?}", bios_base);
// This is the default BIOS size
let bios_size = 8 * 1024;
match MmioRegion::new(
bios_base as usize,
bios_size,
common::MemoryType::Uncacheable,
) {
Ok(region) => match Bios::new(region) {
Ok(bios) => Some(bios),
Err(err) => {
log::warn!("failed to parse BIOS at {:08X}: {}", bios_base, err);
None
}
},
Err(err) => {
log::warn!("failed to map BIOS at {:08X}: {}", bios_base, err);
None
}
}
} else {
None
};
let ggtt = unsafe {
GlobalGtt::new(
pcid_handle,
gttmm.clone(),
//TODO: how to use 64-bit surface addresses?
gm.size.min(u32::MAX as usize) as u32,
)
};
//unsafe { ggtt.reset() };
// GMBUS seems to be stable for all generations
let gmbus = unsafe { Gmbus::new(&gttmm)? };
let dpclka_cfgcr0;
let int;
let ref_freq;
match kind {
DeviceKind::KabyLake => {
dpclka_cfgcr0 = None;
int = Interrupter {
change_detects: Vec::new(),
// IHD-OS-KBL-Vol 2c-1.17 MASTER_INT_CTL
display_int_ctl: unsafe { gttmm.mmio(0x44200)? },
display_int_ctl_enable: 1 << 31,
display_int_ctl_sde: 1 << 23,
gfx_mstr_intr: None,
gfx_mstr_intr_display: 0,
gfx_mstr_intr_enable: 0,
sde_interrupt: InterruptRegs {
isr: unsafe { gttmm.mmio(0xC4000)? },
imr: unsafe { gttmm.mmio(0xC4004)? },
iir: unsafe { gttmm.mmio(0xC4008)? },
ier: unsafe { gttmm.mmio(0xC400C)? },
},
};
// IHD-OS-KBL-Vol 12-1.17
ref_freq = 24_000_000;
}
DeviceKind::TigerLake | DeviceKind::Alchemist => {
// TigerLake: IHD-OS-TGL-Vol 2c-12.21
// Alchemist: IHD-OS-ACM-Vol 2c-3.23
dpclka_cfgcr0 = Some(unsafe { gttmm.mmio(0x164280)? });
let dssm = unsafe { gttmm.mmio(0x51004)? };
log::debug!("dssm {:08X}", dssm.read());
const DSSM_REF_FREQ_24_MHZ: u32 = 0b000 << 29;
const DSSM_REF_FREQ_19_2_MHZ: u32 = 0b001 << 29;
const DSSM_REF_FREQ_38_4_MHZ: u32 = 0b010 << 29;
const DSSM_REF_FREQ_MASK: u32 = 0b111 << 29;
ref_freq = match dssm.read() & DSSM_REF_FREQ_MASK {
DSSM_REF_FREQ_24_MHZ => 24_000_000,
DSSM_REF_FREQ_19_2_MHZ => 19_200_000,
DSSM_REF_FREQ_38_4_MHZ => 38_400_000,
unknown => {
log::error!("unknown DSSM reference frequency {}", unknown);
return Err(Error::new(EIO));
}
};
int = Interrupter {
change_detects: vec![
ChangeDetect::new("de_hpd_interrupt", unsafe { gttmm.mmio(0x44470)? }),
ChangeDetect::new("de_port_interrupt", unsafe { gttmm.mmio(0x44440)? }),
ChangeDetect::new("shotplug_ctl_ddi", unsafe { gttmm.mmio(0xC4030)? }),
ChangeDetect::new("shotplug_ctl_tc", unsafe { gttmm.mmio(0xC4034)? }),
ChangeDetect::new("tbt_hotplug_ctl", unsafe { gttmm.mmio(0x44030)? }),
ChangeDetect::new("tc_hotplug_ctl", unsafe { gttmm.mmio(0x44038)? }),
],
display_int_ctl: unsafe { gttmm.mmio(0x44200)? },
display_int_ctl_enable: 1 << 31,
display_int_ctl_sde: 1 << 23,
gfx_mstr_intr: Some(unsafe { gttmm.mmio(0x190010)? }),
gfx_mstr_intr_display: 1 << 16,
gfx_mstr_intr_enable: 1 << 31,
sde_interrupt: InterruptRegs {
isr: unsafe { gttmm.mmio(0xC4000)? },
imr: unsafe { gttmm.mmio(0xC4004)? },
iir: unsafe { gttmm.mmio(0xC4008)? },
ier: unsafe { gttmm.mmio(0xC400C)? },
},
};
}
}
let ddis;
let dplls;
let pipes;
let power_wells;
let transcoders;
match kind {
DeviceKind::KabyLake => {
ddis = Ddi::kabylake(&gttmm)?;
//TODO: kaby lake dplls
dplls = Vec::new();
pipes = Pipe::kabylake(&gttmm)?;
power_wells = PowerWells::kabylake(&gttmm)?;
transcoders = Transcoder::kabylake(&gttmm)?;
}
DeviceKind::TigerLake => {
ddis = Ddi::tigerlake(&gttmm)?;
dplls = Dpll::tigerlake(&gttmm)?;
pipes = Pipe::tigerlake(&gttmm)?;
power_wells = PowerWells::tigerlake(&gttmm)?;
transcoders = Transcoder::tigerlake(&gttmm)?;
}
DeviceKind::Alchemist => {
// Many registers are identical to Tiger Lake
dplls = Dpll::tigerlake(&gttmm)?;
pipes = Pipe::alchemist(&gttmm)?;
// FIXME transcoders are probably different too
transcoders = Transcoder::tigerlake(&gttmm)?;
// Power wells are distinct
ddis = Ddi::alchemist(&gttmm)?;
power_wells = PowerWells::alchemist(&gttmm)?;
}
}
//TODO: get number of available buffers
let buffers = 1024;
Ok(Self {
kind,
alloc_buffers: RangeAllocator::new(0..buffers),
bios,
ddis,
dpclka_cfgcr0,
dplls,
events: VecDeque::new(),
framebuffers: Vec::new(),
int,
gttmm,
ggtt,
gm,
gmbus,
pipes,
power_wells,
ref_freq,
transcoders,
})
}
pub fn init_inner(&mut self) {
// Discover current framebuffers
self.alloc_buffers.reset();
self.framebuffers.clear();
for pipe in self.pipes.iter() {
for plane in pipe.planes.iter() {
if plane.ctl.readf(PLANE_CTL_ENABLE) {
plane.fetch_modeset(&mut self.alloc_buffers);
self.framebuffers
.push(plane.fetch_framebuffer(&self.gm, &mut self.ggtt));
}
}
}
// Probe all DDIs
let ddi_names: Vec<&str> = self.ddis.iter().map(|ddi| ddi.name).collect();
for ddi_name in ddi_names {
self.probe_ddi(ddi_name).expect("failed to probe DDI");
}
self.dump();
log::info!(
"device initialized with {} framebuffers",
self.framebuffers.len()
);
// Enable SDE interrupts
{
let mut mask = 0;
for ddi in self.ddis.iter() {
if let Some(sde_interrupt_hotplug) = ddi.sde_interrupt_hotplug {
mask |= sde_interrupt_hotplug;
}
}
let sde_int = &mut self.int.sde_interrupt;
// Enable DDI hotplug interrupts
sde_int.ier.write(mask);
// Clear identity register
sde_int.iir.write(sde_int.iir.read());
// Unmask all interrupts
sde_int.imr.write(0);
}
// Enable display interrupts
self.int
.display_int_ctl
.write(self.int.display_int_ctl_enable);
if let Some(gfx_mstr_intr) = &mut self.int.gfx_mstr_intr {
// Enable graphics interrupts
gfx_mstr_intr.write(self.int.gfx_mstr_intr_enable);
}
for change_detect in self.int.change_detects.iter_mut() {
change_detect.log();
}
}
pub fn dump(&self) {
for ddi in self.ddis.iter() {
if ddi.buf_ctl.readf(DDI_BUF_CTL_ENABLE) {
ddi.dump();
}
}
if let Some(dpclka_cfgcr0) = &self.dpclka_cfgcr0 {
eprintln!("dpclka_cfgcr0 {:08X}", dpclka_cfgcr0.read());
}
for dpll in self.dplls.iter() {
if dpll.enable.readf(DPLL_ENABLE_ENABLE) {
dpll.dump();
}
}
for (transcoder, pipe) in self.transcoders.iter().zip(self.pipes.iter()) {
if transcoder.conf.readf(TRANS_CONF_ENABLE) {
transcoder.dump();
pipe.dump();
for plane in pipe.planes.iter() {
if plane.index == 0 || plane.ctl.readf(PLANE_CTL_ENABLE) {
eprint!(" ");
plane.dump();
}
}
}
}
}
pub fn probe_ddi(&mut self, name: &str) -> Result<bool> {
let Some(ddi) = self.ddis.iter_mut().find(|ddi| ddi.name == name) else {
log::warn!("DDI {} not found", name);
return Err(Error::new(EIO));
};
// Enable DDI power well
self.power_wells.enable_well_by_ddi(ddi.name)?;
let Some((source, edid_data)) =
ddi.probe_edid(&mut self.power_wells, &self.gttmm, &mut self.gmbus)?
else {
return Ok(false);
};
let edid = match edid::parse(&edid_data).to_full_result() {
Ok(edid) => {
log::info!("DDI {} EDID from {}: {:?}", ddi.name, source, edid);
edid
}
Err(err) => {
log::warn!(
"DDI {} failed to parse EDID from {}: {:?}",
ddi.name,
source,
err
);
// Will try again but not fail the driver
return Ok(false);
}
};
let timing_opt = edid.descriptors.iter().find_map(|desc| match desc {
edid::Descriptor::DetailedTiming(timing) => Some(timing),
_ => None,
});
let Some(timing) = timing_opt else {
log::warn!(
"DDI {} EDID from {} missing detailed timing",
ddi.name,
source
);
// Will try again but not fail the driver
return Ok(false);
};
let mut modeset = |ddi: &mut Ddi, input: VideoInput| -> Result<()> {
// IHD-OS-TGL-Vol 12-1.22-Rev2.0 "Sequences for HDMI and DVI"
// Power wells should already be enabled
//TODO: Type-C needs aux power enabled and max lanes set
// Enable port PLL without SSC. Not required on Type-C ports
if let Some(clock_shift) = ddi.dpclka_cfgcr0_clock_shift {
// Find free DPLL
let dpll = self
.dplls
.iter_mut()
.find(|dpll| !dpll.enable.readf(DPLL_ENABLE_ENABLE))
.ok_or_else(|| {
log::error!("failed to find free DPLL");
Error::new(EIO)
})?;
// DPLL power guard
let mut dpll_enable = unsafe { MmioPtr::new(dpll.enable.as_mut_ptr()) };
let dpll_power_guard = CallbackGuard::new(
&mut dpll_enable,
|dpll_enable| {
// Enable DPLL power
dpll_enable.writef(DPLL_ENABLE_POWER_ENABLE, true);
//TODO: timeout not specified in docs, should be very fast
let timeout = Timeout::from_micros(1);
while !dpll_enable.readf(DPLL_ENABLE_POWER_STATE) {
timeout.run().map_err(|()| {
log::debug!("timeout while enabling DPLL {} power", dpll.name);
Error::new(EIO)
})?;
}
Ok(())
},
|dpll_enable| {
// Disable DPLL power
dpll_enable.writef(DPLL_ENABLE_POWER_ENABLE, false);
},
)?;
match input {
VideoInput::Hdmi => {
// Set SSC enable/disable. For HDMI, always disable
dpll.ssc.writef(DPLL_SSC_ENABLE, false);
// Configure DPLL frequency
dpll.set_freq_hdmi(self.ref_freq, &timing)?;
}
VideoInput::Dp => {
log::warn!("DPLL for DisplayPort not implemented");
return Err(Error::new(EIO));
}
}
//TODO: "Sequence Before Frequency Change"
// Enable DPLL
//TODO: use guard?
{
dpll.enable.writef(DPLL_ENABLE_ENABLE, true);
let timeout = Timeout::from_micros(50);
while !dpll.enable.readf(DPLL_ENABLE_LOCK) {
timeout.run().map_err(|()| {
log::debug!("timeout while enabling DPLL {}", dpll.name);
Error::new(EIO)
})?;
}
}
//TODO: "Sequence After Frequency Change"
// Update DPLL mapping
if let Some(dpclka_cfgcr0) = &mut self.dpclka_cfgcr0 {
const DPCLKA_CFGCR0_CLOCK_MASK: u32 = 0b11;
let mut v = dpclka_cfgcr0.read();
v &= !(DPCLKA_CFGCR0_CLOCK_MASK << clock_shift);
v |= dpll.dpclka_cfgcr0_clock_value << clock_shift;
dpclka_cfgcr0.write(v);
}
// Continue to allow DPLL power
mem::forget(dpll_power_guard);
}
// Enable DPLL clock (must be done separately from PLL mapping)
if let Some(dpclka_cfgcr0) = &mut self.dpclka_cfgcr0 {
if let Some(clock_off) = ddi.dpclka_cfgcr0_clock_off {
dpclka_cfgcr0.writef(clock_off, false);
}
}
// Enable IO power
//TODO: the request can be shared by multiple DDIs
//TODO: skip if TBT
let pwr_well_ctl_ddi_request = ddi.pwr_well_ctl_ddi_request;
let pwr_well_ctl_ddi_state = ddi.pwr_well_ctl_ddi_state;
let mut pwr_well_ctl_ddi =
unsafe { MmioPtr::new(self.power_wells.ctl_ddi.as_mut_ptr()) };
let pwr_guard = CallbackGuard::new(
&mut pwr_well_ctl_ddi,
|pwr_well_ctl_ddi| {
// Enable IO power
pwr_well_ctl_ddi.writef(pwr_well_ctl_ddi_request, true);
let timeout = Timeout::from_micros(30);
while !pwr_well_ctl_ddi.readf(pwr_well_ctl_ddi_state) {
timeout.run().map_err(|()| {
log::debug!("timeout while requesting DDI {} IO power", ddi.name);
Error::new(EIO)
})?;
}
Ok(())
},
|pwr_well_ctl_ddi| {
// Disable IO power
pwr_well_ctl_ddi.writef(pwr_well_ctl_ddi_request, false);
},
)?;
//TODO: Type-C DP_MODE
// Enable planes, pipe, and transcoder
{
// Find free transcoder with free pipe
let mut transcoder_pipe = None;
for (transcoder, pipe) in self.transcoders.iter_mut().zip(self.pipes.iter_mut()) {
if transcoder.conf.readf(TRANS_CONF_ENABLE) {
continue;
}
//TODO: how would we know if pipe is in use?
transcoder_pipe = Some((transcoder, pipe));
break;
}
let Some((transcoder, pipe)) = transcoder_pipe else {
log::error!("free transcoder and pipe not found");
return Err(Error::new(EIO));
};
// Enable pipe and transcoder power wells
self.power_wells.enable_well_by_pipe(pipe.name)?;
self.power_wells
.enable_well_by_transcoder(transcoder.name)?;
// Configure transcoder clock select
if let Some(transcoder_index) = ddi.transcoder_index {
transcoder
.clk_sel
.write(transcoder_index << transcoder.clk_sel_shift);
}
// Set pipe bottom color to blue for debugging
pipe.bottom_color.write(0x3FF);
// Configure and enable planes
//TODO: THIS IS HACKY
if let Some(plane) = pipe.planes.first_mut() {
let width = timing.horizontal_active_pixels as u32;
let height = timing.vertical_active_lines as u32;
let fb = DeviceFb::alloc(&self.gm, &mut self.ggtt, width, height)?;
plane.modeset(&mut self.alloc_buffers)?;
plane.set_framebuffer(&fb);
self.framebuffers.push(fb);
}
//TODO: VGA and panel fitter steps?
// Configure transcoder timings and other pipe and transcoder settings
transcoder.modeset(pipe, &timing);
// Configure and enable TRANS_DDI_FUNC_CTL
{
let mut ddi_func_ctl = TRANS_DDI_FUNC_CTL_ENABLE |
//TODO: allow different bits per color
TRANS_DDI_FUNC_CTL_BPC_8 |
//TODO: correct port width selection
TRANS_DDI_FUNC_CTL_PORT_WIDTH_4;
if let Some(transcoder_index) = ddi.transcoder_index {
ddi_func_ctl |= transcoder_index << transcoder.ddi_func_ctl_ddi_shift;
}
match input {
VideoInput::Hdmi => {
ddi_func_ctl |= TRANS_DDI_FUNC_CTL_MODE_HDMI;
// Enable HDMI scrambling and high TMDS char rate when the pixel clock (kHz) exceeds 340 MHz
if timing.pixel_clock > 340_000 {
ddi_func_ctl |= transcoder.ddi_func_ctl_hdmi_scrambling
| transcoder.ddi_func_ctl_high_tmds_char_rate;
}
}
VideoInput::Dp => {
//TODO: MST
ddi_func_ctl |= TRANS_DDI_FUNC_CTL_MODE_DP_SST;
}
}
match (timing.features >> 3) & 0b11 {
// Digital sync, separate
0b11 => {
if (timing.features & (1 << 2)) != 0 {
ddi_func_ctl |= TRANS_DDI_FUNC_CTL_SYNC_POLARITY_VSHIGH;
}
if (timing.features & (1 << 1)) != 0 {
ddi_func_ctl |= TRANS_DDI_FUNC_CTL_SYNC_POLARITY_HSHIGH;
}
}
unsupported => {
log::warn!("unsupported sync {:#x}", unsupported);
}
}
transcoder.ddi_func_ctl.write(ddi_func_ctl);
}
// Configure and enable TRANS_CONF
let mut conf = transcoder.conf.read();
// Set mode to progressive
conf &= !TRANS_CONF_MODE_MASK;
// Enable transcoder
conf |= TRANS_CONF_ENABLE;
transcoder.conf.write(conf);
//TODO: what is the correct timeout?
let timeout = Timeout::from_millis(100);
while !transcoder.conf.readf(TRANS_CONF_STATE) {
timeout.run().map_err(|()| {
log::error!(
"timeout on DDI {} transcoder {} enable",
ddi.name,
transcoder.name
);
Error::new(EIO)
})?;
}
}
// Enable port
{
// Configure voltage swing and related IO settings
match input {
VideoInput::Hdmi => {
ddi.voltage_swing_hdmi(&self.gttmm, &timing)?;
}
VideoInput::Dp => {
//TODO ddi.voltage_swing_dp(&self.gttmm)?;
log::error!("voltage swing for DP not implemented");
return Err(Error::new(EIO));
}
}
// Configure PORT_CL_DW10 static power down to power up all lanes
//TODO: only power up required lanes
if let Some(mut port_cl_dw10) = ddi.port_cl(PortClReg::Dw10) {
port_cl_dw10.writef(0b1111 << 4, false);
}
// Configure and enable DDI_BUF_CTL
//TODO: more DDI_BUF_CTL bits?
ddi.buf_ctl.writef(DDI_BUF_CTL_ENABLE, true);
// Wait for DDI_BUF_CTL IDLE = 0, timeout after 500 us
let timeout = Timeout::from_micros(500);
while ddi.buf_ctl.readf(DDI_BUF_CTL_IDLE) {
timeout.run().map_err(|()| {
log::warn!("timeout while waiting for DDI {} active", ddi.name);
Error::new(EIO)
})?;
}
}
// Keep IO power on if finished
mem::forget(pwr_guard);
Ok(())
};
if ddi.buf_ctl.readf(DDI_BUF_CTL_IDLE) {
log::info!("DDI {} idle, will attempt mode setting", ddi.name);
const EDID_VIDEO_INPUT_UNDEFINED: u8 = (1 << 7) | 0b0000;
const EDID_VIDEO_INPUT_DVI: u8 = (1 << 7) | 0b0001;
const EDID_VIDEO_INPUT_HDMI_A: u8 = (1 << 7) | 0b0010;
const EDID_VIDEO_INPUT_HDMI_B: u8 = (1 << 7) | 0b0011;
const EDID_VIDEO_INPUT_DP: u8 = (1 << 7) | 0b0101;
const EDID_VIDEO_INPUT_MASK: u8 = (1 << 7) | 0b1111;
let input = match edid_data[20] & EDID_VIDEO_INPUT_MASK {
//TODO: how to accurately discover input type?
//TODO: HDMI often shows up as undefined, do others?
EDID_VIDEO_INPUT_UNDEFINED
| EDID_VIDEO_INPUT_DVI
| EDID_VIDEO_INPUT_HDMI_A
| EDID_VIDEO_INPUT_HDMI_B => VideoInput::Hdmi,
EDID_VIDEO_INPUT_DP => VideoInput::Dp,
unknown => {
log::warn!("EDID video input 0x{:02X} not supported", unknown);
return Err(Error::new(EIO));
}
};
//TODO: DisplayPort modeset not complete
match modeset(ddi, input) {
Ok(()) => {
log::info!("DDI {} modeset {:?} finished", ddi.name, input);
}
Err(err) => {
log::warn!("DDI {} modeset {:?} failed: {}", ddi.name, input, err);
// Will try again but not fail the driver
return Ok(false);
}
}
} else {
log::info!("DDI {} already active", ddi.name);
}
Ok(true)
}
pub fn handle_display_irq(&mut self) -> bool {
let display_ints = self.int.display_int_ctl.read() & !self.int.display_int_ctl_enable;
if display_ints != 0 {
log::info!(" display ints {:08X}", display_ints);
if display_ints & self.int.display_int_ctl_sde != 0 {
let sde_ints = self.int.sde_interrupt.iir.read();
self.int.sde_interrupt.iir.write(sde_ints);
log::info!(" south display engine ints {:08X}", sde_ints);
for ddi in self.ddis.iter() {
if let Some(sde_interrupt_hotplug) = ddi.sde_interrupt_hotplug {
if sde_ints & sde_interrupt_hotplug == sde_interrupt_hotplug {
self.events.push_back(Event::DdiHotplug(ddi.name));
}
}
}
}
true
} else {
false
}
}
pub fn handle_irq(&mut self) -> bool {
let had_irq = if let Some(gfx_mstr_intr) = &mut self.int.gfx_mstr_intr {
let gfx_ints = gfx_mstr_intr.read() & !self.int.gfx_mstr_intr_enable;
if gfx_ints != 0 {
log::info!("gfx ints {:08X}", gfx_ints);
gfx_mstr_intr.write(gfx_ints | self.int.gfx_mstr_intr_enable);
if gfx_ints & self.int.gfx_mstr_intr_display != 0 {
self.handle_display_irq();
}
true
} else {
false
}
} else {
self.handle_display_irq()
};
if had_irq {
for change_detect in self.int.change_detects.iter_mut() {
change_detect.check();
}
}
had_irq
}
pub fn handle_events(&mut self) {
while let Some(event) = self.events.pop_front() {
match event {
Event::DdiHotplug(ddi_name) => {
log::info!("DDI {} plugged", ddi_name);
for _attempt in 0..4 {
//TODO: gmbus times out!
match self.probe_ddi(ddi_name) {
Ok(true) => {
break;
}
Ok(false) => {
log::warn!("probe of {} did not complete, retrying", ddi_name);
}
Err(err) => {
log::warn!("failed to probe {}: {}", ddi_name, err);
}
}
//TODO: do this asynchronously so scheme events can be handled
std::thread::sleep(std::time::Duration::from_secs(1));
}
}
}
}
}
}
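#[cfg(test)]
mod edid_input_tests {
    // Hedged sketch of the EDID byte-20 decoding used in probe_ddi: the mask
    // keeps only the digital-input flag (bit 7) and the interface bits (3:0),
    // discarding the color-depth bits (6:4) in between. Sample byte is
    // hypothetical.
    #[test]
    fn video_input_mask_ignores_color_depth_bits() {
        const EDID_VIDEO_INPUT_MASK: u8 = (1 << 7) | 0b1111;
        // Digital input, color-depth bits 6:4 set, HDMI-a interface
        let byte20: u8 = 0b1011_0010;
        assert_eq!(byte20 & EDID_VIDEO_INPUT_MASK, (1 << 7) | 0b0010);
    }
}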
@@ -0,0 +1,356 @@
use common::io::{Io, MmioPtr};
use range_alloc::RangeAllocator;
use syscall::error::Result;
use syscall::{Error, EIO};
use super::buffer::GpuBuffer;
use super::{GlobalGtt, MmioRegion};
pub const PLANE_CTL_ENABLE: u32 = 1 << 31;
pub const PLANE_WM_ENABLE: u32 = 1 << 31;
pub const PLANE_WM_LINES_SHIFT: u32 = 14;
#[derive(Debug)]
pub struct DeviceFb {
pub buffer: GpuBuffer,
pub width: u32,
pub height: u32,
pub stride: u32,
}
impl DeviceFb {
pub unsafe fn new(
gm: &MmioRegion,
surf: u32,
width: u32,
height: u32,
stride: u32,
clear: bool,
) -> Self {
Self {
buffer: unsafe { GpuBuffer::new(gm, surf, stride * height, clear) },
width,
height,
stride,
}
}
pub fn alloc(
gm: &MmioRegion,
ggtt: &mut GlobalGtt,
width: u32,
height: u32,
) -> syscall::Result<Self> {
let (buffer, stride) = GpuBuffer::alloc_dumb(gm, ggtt, width, height)?;
Ok(DeviceFb {
buffer,
width,
height,
stride,
})
}
}
pub struct Plane {
pub name: &'static str,
pub index: usize,
pub buf_cfg: MmioPtr<u32>,
pub color_ctl: Option<MmioPtr<u32>>,
pub color_ctl_gamma_disable: u32,
pub ctl: MmioPtr<u32>,
pub ctl_source_rgb_8888: u32,
pub ctl_source_mask: u32,
pub offset: MmioPtr<u32>,
pub pos: MmioPtr<u32>,
pub size: MmioPtr<u32>,
pub stride: MmioPtr<u32>,
pub surf: MmioPtr<u32>,
pub wm: [MmioPtr<u32>; 8],
pub wm_trans: MmioPtr<u32>,
}
impl Plane {
pub fn fetch_modeset(&self, alloc_buffers: &mut RangeAllocator<u32>) {
let buf_cfg = self.buf_cfg.read();
let buffer_start = buf_cfg & 0x7FF;
let buffer_end = (buf_cfg >> 16) & 0x7FF;
alloc_buffers
.allocate_exact_range(buffer_start..(buffer_end + 1))
.unwrap_or_else(|err| {
panic!(
"failed to allocate pre-existing buffer blocks {} to {}: {:?}",
buffer_start, buffer_end, err
);
});
}
pub fn modeset(&mut self, alloc_buffers: &mut RangeAllocator<u32>) -> syscall::Result<()> {
// FIXME handle runtime buffer reconfiguration
//TODO: enable DBUF if more buffers needed
//TODO: more blocks would mean better power usage
// Minimum is 8 blocks for linear planes, 160 blocks is recommended for pre-OS init
let buffer_size = 160;
let buffer = alloc_buffers.allocate_range(buffer_size).map_err(|err| {
log::warn!(
"failed to allocate {} buffer blocks: {:?}",
buffer_size,
err
);
Error::new(EIO)
})?;
// The hardware buffer end field is inclusive (matching fetch_modeset), so subtract one
self.buf_cfg.write(buffer.start | ((buffer.end - 1) << 16));
//TODO: correct watermark calculation
self.wm[0].write(PLANE_WM_ENABLE | (2 << PLANE_WM_LINES_SHIFT) | buffer.len() as u32);
for i in 1..self.wm.len() {
self.wm[i].writef(PLANE_WM_ENABLE, false);
}
self.wm_trans.writef(PLANE_WM_ENABLE, false);
Ok(())
}
pub fn fetch_framebuffer(&self, gm: &MmioRegion, ggtt: &mut GlobalGtt) -> DeviceFb {
let size = self.size.read();
let width = (size & 0xFFFF) + 1;
let height = ((size >> 16) & 0xFFFF) + 1;
let stride_64 = self.stride.read() & 0x7FF;
//TODO: this will be wrong for tiled planes
let stride = stride_64 * 64;
let surf = self.surf.read() & 0xFFFFF000;
//TODO: read bits per pixel
let surf_size = (stride * height).next_multiple_of(4096);
ggtt.reserve(surf, surf_size);
unsafe { DeviceFb::new(gm, surf, width, height, stride, true) }
}
pub fn set_framebuffer(&mut self, fb: &DeviceFb) {
//TODO: documentation on this is not great
let stride_64 = fb.stride / 64;
self.size.write((fb.width - 1) | ((fb.height - 1) << 16));
self.stride.write(stride_64);
self.surf.write(fb.buffer.gm_offset);
// Disable gamma
if let Some(color_ctl) = &mut self.color_ctl {
color_ctl.write(self.color_ctl_gamma_disable);
}
//TODO: more PLANE_CTL bits
self.ctl.write(PLANE_CTL_ENABLE | self.ctl_source_rgb_8888);
}
pub fn dump(&self) {
eprint!("Plane {}", self.name);
eprint!(" buf_cfg {:08X}", self.buf_cfg.read());
if let Some(reg) = &self.color_ctl {
eprint!(" color_ctl {:08X}", reg.read());
}
eprint!(" ctl {:08X}", self.ctl.read());
eprint!(" offset {:08X}", self.offset.read());
eprint!(" pos {:08X}", self.pos.read());
eprint!(" size {:08X}", self.size.read());
eprint!(" stride {:08X}", self.stride.read());
eprint!(" surf {:08X}", self.surf.read());
for i in 0..self.wm.len() {
eprint!(" wm_{} {:08X}", i, self.wm[i].read());
}
eprint!(" wm_trans {:08X}", self.wm_trans.read());
eprintln!();
}
}
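// Hedged sketch of the PLANE_BUF_CFG packing used by fetch_modeset and modeset
// above: start and end block numbers occupy 11-bit fields at bits 10:0 and
// 26:16, and the end field is inclusive. Block numbers here are hypothetical.
#[cfg(test)]
mod buf_cfg_tests {
    #[test]
    fn buf_cfg_roundtrip() {
        let (start, end_inclusive) = (8u32, 167u32);
        let raw = start | (end_inclusive << 16);
        assert_eq!(raw & 0x7FF, start);
        assert_eq!((raw >> 16) & 0x7FF, end_inclusive);
    }
}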
pub struct Pipe {
pub name: &'static str,
pub index: usize,
pub planes: Vec<Plane>,
pub bottom_color: MmioPtr<u32>,
pub misc: MmioPtr<u32>,
pub srcsz: MmioPtr<u32>,
}
impl Pipe {
pub fn dump(&self) {
eprint!("Pipe {}", self.name);
eprint!(" bottom_color {:08X}", self.bottom_color.read());
eprint!(" misc {:08X}", self.misc.read());
eprint!(" srcsz {:08X}", self.srcsz.read());
eprintln!();
}
pub fn kabylake(gttmm: &MmioRegion) -> Result<Vec<Self>> {
let mut pipes = Vec::with_capacity(3);
for (i, name) in ["A", "B", "C"].iter().enumerate() {
let mut planes = Vec::new();
//TODO: cursor plane
for (j, name) in ["1", "2", "3"].iter().enumerate() {
planes.push(Plane {
name,
index: j,
// IHD-OS-KBL-Vol 2c-1.17 PLANE_BUF_CFG
buf_cfg: unsafe { gttmm.mmio(0x7027C + i * 0x1000 + j * 0x100)? },
// N/A
color_ctl: None,
color_ctl_gamma_disable: 0,
// IHD-OS-KBL-Vol 2c-1.17 PLANE_CTL
ctl: unsafe { gttmm.mmio(0x70180 + i * 0x1000 + j * 0x100)? },
ctl_source_rgb_8888: 0b0100 << 24,
ctl_source_mask: 0b1111 << 24,
// IHD-OS-KBL-Vol 2c-1.17 PLANE_OFFSET
offset: unsafe { gttmm.mmio(0x701A4 + i * 0x1000 + j * 0x100)? },
// IHD-OS-KBL-Vol 2c-1.17 PLANE_POS
pos: unsafe { gttmm.mmio(0x7018C + i * 0x1000 + j * 0x100)? },
// IHD-OS-KBL-Vol 2c-1.17 PLANE_SIZE
size: unsafe { gttmm.mmio(0x70190 + i * 0x1000 + j * 0x100)? },
// IHD-OS-KBL-Vol 2c-1.17 PLANE_STRIDE
stride: unsafe { gttmm.mmio(0x70188 + i * 0x1000 + j * 0x100)? },
// IHD-OS-KBL-Vol 2c-1.17 PLANE_SURF
surf: unsafe { gttmm.mmio(0x7019C + i * 0x1000 + j * 0x100)? },
// IHD-OS-KBL-Vol 2c-1.17 PLANE_WM
wm: [
unsafe { gttmm.mmio(0x70240 + i * 0x1000 + j * 0x100)? },
unsafe { gttmm.mmio(0x70244 + i * 0x1000 + j * 0x100)? },
unsafe { gttmm.mmio(0x70248 + i * 0x1000 + j * 0x100)? },
unsafe { gttmm.mmio(0x7024C + i * 0x1000 + j * 0x100)? },
unsafe { gttmm.mmio(0x70250 + i * 0x1000 + j * 0x100)? },
unsafe { gttmm.mmio(0x70254 + i * 0x1000 + j * 0x100)? },
unsafe { gttmm.mmio(0x70258 + i * 0x1000 + j * 0x100)? },
unsafe { gttmm.mmio(0x7025C + i * 0x1000 + j * 0x100)? },
],
wm_trans: unsafe { gttmm.mmio(0x70268 + i * 0x1000 + j * 0x100)? },
});
}
pipes.push(Pipe {
name,
index: i,
planes,
// IHD-OS-KBL-Vol 2c-1.17 PIPE_BOTTOM_COLOR
bottom_color: unsafe { gttmm.mmio(0x70034 + i * 0x1000)? },
// IHD-OS-KBL-Vol 2c-1.17 PIPE_MISC
misc: unsafe { gttmm.mmio(0x70030 + i * 0x1000)? },
// IHD-OS-KBL-Vol 2c-1.17 PIPE_SRCSZ
srcsz: unsafe { gttmm.mmio(0x6001C + i * 0x1000)? },
})
}
Ok(pipes)
}
pub fn tigerlake(gttmm: &MmioRegion) -> Result<Vec<Self>> {
let mut pipes = Vec::with_capacity(4);
for (i, name) in ["A", "B", "C", "D"].iter().enumerate() {
let mut planes = Vec::new();
//TODO: cursor plane
for (j, name) in ["1", "2", "3", "4", "5", "6", "7"].iter().enumerate() {
planes.push(Plane {
name,
index: j,
// IHD-OS-TGL-Vol 2c-12.21 PLANE_BUF_CFG
buf_cfg: unsafe { gttmm.mmio(0x7027C + i * 0x1000 + j * 0x100)? },
// IHD-OS-TGL-Vol 2c-12.21 PLANE_COLOR_CTL
color_ctl: Some(unsafe { gttmm.mmio(0x701CC + i * 0x1000 + j * 0x100)? }),
color_ctl_gamma_disable: 1 << 13,
// IHD-OS-TGL-Vol 2c-12.21 PLANE_CTL
ctl: unsafe { gttmm.mmio(0x70180 + i * 0x1000 + j * 0x100)? },
ctl_source_rgb_8888: 0b01000 << 23,
ctl_source_mask: 0b11111 << 23,
// IHD-OS-TGL-Vol 2c-12.21 PLANE_OFFSET
offset: unsafe { gttmm.mmio(0x701A4 + i * 0x1000 + j * 0x100)? },
// IHD-OS-TGL-Vol 2c-12.21 PLANE_POS
pos: unsafe { gttmm.mmio(0x7018C + i * 0x1000 + j * 0x100)? },
// IHD-OS-TGL-Vol 2c-12.21 PLANE_SIZE
size: unsafe { gttmm.mmio(0x70190 + i * 0x1000 + j * 0x100)? },
// IHD-OS-TGL-Vol 2c-12.21 PLANE_STRIDE
stride: unsafe { gttmm.mmio(0x70188 + i * 0x1000 + j * 0x100)? },
// IHD-OS-TGL-Vol 2c-12.21 PLANE_SURF
surf: unsafe { gttmm.mmio(0x7019C + i * 0x1000 + j * 0x100)? },
// IHD-OS-TGL-Vol 2c-12.21 PLANE_WM
wm: [
unsafe { gttmm.mmio(0x70240 + i * 0x1000 + j * 0x100)? },
unsafe { gttmm.mmio(0x70244 + i * 0x1000 + j * 0x100)? },
unsafe { gttmm.mmio(0x70248 + i * 0x1000 + j * 0x100)? },
unsafe { gttmm.mmio(0x7024C + i * 0x1000 + j * 0x100)? },
unsafe { gttmm.mmio(0x70250 + i * 0x1000 + j * 0x100)? },
unsafe { gttmm.mmio(0x70254 + i * 0x1000 + j * 0x100)? },
unsafe { gttmm.mmio(0x70258 + i * 0x1000 + j * 0x100)? },
unsafe { gttmm.mmio(0x7025C + i * 0x1000 + j * 0x100)? },
],
wm_trans: unsafe { gttmm.mmio(0x70268 + i * 0x1000 + j * 0x100)? },
});
}
pipes.push(Pipe {
name,
index: i,
planes,
// IHD-OS-TGL-Vol 2c-12.21 PIPE_BOTTOM_COLOR
bottom_color: unsafe { gttmm.mmio(0x70034 + i * 0x1000)? },
// IHD-OS-TGL-Vol 2c-12.21 PIPE_MISC
misc: unsafe { gttmm.mmio(0x70030 + i * 0x1000)? },
// IHD-OS-TGL-Vol 2c-12.21 PIPE_SRCSZ
srcsz: unsafe { gttmm.mmio(0x6001C + i * 0x1000)? },
})
}
Ok(pipes)
}
pub fn alchemist(gttmm: &MmioRegion) -> Result<Vec<Self>> {
let mut pipes = Vec::with_capacity(4);
for (i, name) in ["A", "B", "C", "D"].iter().enumerate() {
let mut planes = Vec::new();
//TODO: cursor plane
for (j, name) in ["1", "2", "3", "4", "5"].iter().enumerate() {
planes.push(Plane {
name,
index: j,
// IHD-OS-ACM-Vol 2c-3.23 PLANE_BUF_CFG
buf_cfg: unsafe { gttmm.mmio(0x7057C + i * 0x1000 + j * 0x100)? },
// IHD-OS-ACM-Vol 2c-3.23 PLANE_COLOR_CTL
color_ctl: Some(unsafe { gttmm.mmio(0x704CC + i * 0x1000 + j * 0x100)? }),
color_ctl_gamma_disable: 1 << 13,
// IHD-OS-ACM-Vol 2c-3.23 PLANE_CTL
ctl: unsafe { gttmm.mmio(0x70480 + i * 0x1000 + j * 0x100)? },
ctl_source_rgb_8888: 0b01000 << 23,
ctl_source_mask: 0b11111 << 23,
// IHD-OS-ACM-Vol 2c-3.23 PLANE_OFFSET
offset: unsafe { gttmm.mmio(0x704A4 + i * 0x1000 + j * 0x100)? },
// IHD-OS-ACM-Vol 2c-3.23 PLANE_POS
pos: unsafe { gttmm.mmio(0x7048C + i * 0x1000 + j * 0x100)? },
// IHD-OS-ACM-Vol 2c-3.23 PLANE_SIZE
size: unsafe { gttmm.mmio(0x70490 + i * 0x1000 + j * 0x100)? },
// IHD-OS-ACM-Vol 2c-3.23 PLANE_STRIDE
stride: unsafe { gttmm.mmio(0x70488 + i * 0x1000 + j * 0x100)? },
// IHD-OS-ACM-Vol 2c-3.23 PLANE_SURF
surf: unsafe { gttmm.mmio(0x7049C + i * 0x1000 + j * 0x100)? },
// IHD-OS-ACM-Vol 2c-3.23 PLANE_WM
wm: [
unsafe { gttmm.mmio(0x70540 + i * 0x1000 + j * 0x100)? },
unsafe { gttmm.mmio(0x70544 + i * 0x1000 + j * 0x100)? },
unsafe { gttmm.mmio(0x70548 + i * 0x1000 + j * 0x100)? },
unsafe { gttmm.mmio(0x7054C + i * 0x1000 + j * 0x100)? },
unsafe { gttmm.mmio(0x70550 + i * 0x1000 + j * 0x100)? },
unsafe { gttmm.mmio(0x70554 + i * 0x1000 + j * 0x100)? },
unsafe { gttmm.mmio(0x70558 + i * 0x1000 + j * 0x100)? },
unsafe { gttmm.mmio(0x7055C + i * 0x1000 + j * 0x100)? },
],
wm_trans: unsafe { gttmm.mmio(0x70568 + i * 0x1000 + j * 0x100)? },
});
}
pipes.push(Pipe {
name,
index: i,
planes,
// IHD-OS-ACM-Vol 2c-3.23 PIPE_BOTTOM_COLOR
bottom_color: unsafe { gttmm.mmio(0x70034 + i * 0x1000)? },
// IHD-OS-ACM-Vol 2c-3.23 PIPE_MISC
misc: unsafe { gttmm.mmio(0x70030 + i * 0x1000)? },
// IHD-OS-ACM-Vol 2c-3.23 PIPE_SRCSZ
srcsz: unsafe { gttmm.mmio(0x6001C + i * 0x1000)? },
})
}
Ok(pipes)
}
}
@@ -0,0 +1,323 @@
use common::{
io::{Io, MmioPtr},
timeout::Timeout,
};
use syscall::error::{Error, Result, EIO};
use super::MmioRegion;
#[derive(Clone, Copy)]
pub struct PowerWell {
pub name: &'static str,
pub depends: &'static [&'static str],
pub ddis: &'static [&'static str],
pub pipes: &'static [&'static str],
pub transcoders: &'static [&'static str],
pub request: u32,
pub state: u32,
pub fuse_status: u32,
}
pub struct PowerWells {
pub ctl: MmioPtr<u32>,
pub ctl_aux: MmioPtr<u32>,
pub ctl_ddi: MmioPtr<u32>,
pub fuse_status: MmioPtr<u32>,
pub fuse_status_pg0: u32,
pub wells: Vec<PowerWell>,
}
impl PowerWells {
//TODO: return guard?
pub fn enable_well(&mut self, name: &'static str) -> Result<()> {
// Wait 20us for distribution of PG0
{
let timeout = Timeout::from_micros(20);
while !self.fuse_status.readf(self.fuse_status_pg0) {
timeout.run().map_err(|()| {
log::warn!("timeout on distribution of power well 0");
Error::new(EIO)
})?;
}
}
// Iterate over copies of the wells so `self` remains free for the recursive enable_well calls
for well in self.wells.iter().copied() {
if well.name == name {
// Enable dependent wells
for depend in well.depends.iter() {
self.enable_well(depend)?;
}
if !self.ctl.readf(well.request) {
log::info!("enabling power well {}", well.name);
}
// Set request bit
self.ctl.writef(well.request, true);
// Wait 100us for enabled state
{
let timeout = Timeout::from_micros(100);
while !self.ctl.readf(well.state) {
timeout.run().map_err(|()| {
log::warn!("timeout enabling power well {}", well.name);
Error::new(EIO)
})?;
}
}
// Wait 20us for distribution
{
let timeout = Timeout::from_micros(20);
while !self.fuse_status.readf(well.fuse_status) {
timeout.run().map_err(|()| {
log::warn!("timeout on distribution of power well {}", well.name);
Error::new(EIO)
})?;
}
}
return Ok(());
}
}
log::warn!("power well {} not found", name);
Err(Error::new(EIO))
}
pub fn enable_well_by_ddi(&mut self, name: &'static str) -> Result<()> {
for well in self.wells.iter() {
if well.ddis.contains(&name) {
return self.enable_well(well.name);
}
}
log::warn!("power well for DDI {} not found", name);
Err(Error::new(EIO))
}
pub fn enable_well_by_pipe(&mut self, name: &'static str) -> Result<()> {
for well in self.wells.iter() {
if well.pipes.contains(&name) {
return self.enable_well(well.name);
}
}
log::warn!("power well for pipe {} not found", name);
Err(Error::new(EIO))
}
pub fn enable_well_by_transcoder(&mut self, name: &'static str) -> Result<()> {
for well in self.wells.iter() {
if well.transcoders.contains(&name) {
return self.enable_well(well.name);
}
}
log::warn!("power well for transcoder {} not found", name);
Err(Error::new(EIO))
}
pub fn kabylake(gttmm: &MmioRegion) -> Result<Self> {
// IHD-OS-KBL-Vol 2c-1.17 PWR_WELL_CTL
let ctl = unsafe { gttmm.mmio(0x45404)? };
// Hack since these power ctl registers are combined
let ctl_aux = unsafe { gttmm.mmio(0x45404)? };
let ctl_ddi = unsafe { gttmm.mmio(0x45404)? };
// IHD-OS-KBL-Vol 2c-1.17 FUSE_STATUS
let fuse_status = unsafe { gttmm.mmio(0x42000)? };
let fuse_status_pg0 = 1 << 27;
let wells = vec![
PowerWell {
name: "1",
depends: &[],
ddis: &["A"],
pipes: &["A"],
transcoders: &["EDP"],
request: 1 << 29,
state: 1 << 28,
fuse_status: 1 << 26,
},
PowerWell {
name: "2",
depends: &["1"],
ddis: &["B", "C", "D", "E"],
pipes: &["B", "C"],
transcoders: &["A", "B", "C"],
request: 1 << 31,
state: 1 << 30,
fuse_status: 1 << 25,
},
];
Ok(Self {
ctl,
ctl_aux,
ctl_ddi,
fuse_status,
fuse_status_pg0,
wells,
})
}
pub fn tigerlake(gttmm: &MmioRegion) -> Result<Self> {
// IHD-OS-TGL-Vol 2c-12.21 PWR_WELL_CTL
let ctl = unsafe { gttmm.mmio(0x45404)? };
// IHD-OS-TGL-Vol 2c-12.21 PWR_WELL_CTL_AUX
let ctl_aux = unsafe { gttmm.mmio(0x45444)? };
// IHD-OS-TGL-Vol 2c-12.21 PWR_WELL_CTL_DDI
let ctl_ddi = unsafe { gttmm.mmio(0x45454)? };
// IHD-OS-TGL-Vol 2c-12.21 FUSE_STATUS
let fuse_status = unsafe { gttmm.mmio(0x42000)? };
let fuse_status_pg0 = 1 << 27;
let wells = vec![
// DBUF functionality, Pipe A, Transcoder A and DSI, DDI A-C, FBC, DSS
PowerWell {
name: "1",
depends: &[],
ddis: &["A", "B", "C"],
pipes: &["A"],
transcoders: &["A"],
request: 1 << 1,
state: 1 << 0,
fuse_status: 1 << 26,
},
// VDSC for pipe A
PowerWell {
name: "2",
depends: &["1"],
ddis: &[],
pipes: &[],
transcoders: &[],
request: 1 << 3,
state: 1 << 2,
fuse_status: 1 << 25,
},
// Pipe B, Audio, Transcoder WD, VGA, Transcoder B, DDI USBC1-6, KVMR
PowerWell {
name: "3",
depends: &["2"],
ddis: &["USBC1", "USBC2", "USBC3", "USBC4", "USBC5", "USBC6"],
pipes: &["B"],
transcoders: &["B"],
request: 1 << 5,
state: 1 << 4,
fuse_status: 1 << 24,
},
// Pipe C, Transcoder C
PowerWell {
name: "4",
depends: &["3"],
ddis: &[],
pipes: &["C"],
transcoders: &["C"],
request: 1 << 7,
state: 1 << 6,
fuse_status: 1 << 23,
},
// Pipe D, Transcoder D
PowerWell {
name: "5",
depends: &["4"],
ddis: &[],
pipes: &["D"],
transcoders: &["D"],
request: 1 << 9,
state: 1 << 8,
fuse_status: 1 << 22,
},
];
Ok(Self {
ctl,
ctl_aux,
ctl_ddi,
fuse_status,
fuse_status_pg0,
wells,
})
}
pub fn alchemist(gttmm: &MmioRegion) -> Result<Self> {
// IHD-OS-ACM-Vol 2c-3.23 PWR_WELL_CTL
let ctl = unsafe { gttmm.mmio(0x45404)? };
// IHD-OS-ACM-Vol 2c-3.23 PWR_WELL_CTL_AUX
let ctl_aux = unsafe { gttmm.mmio(0x45444)? };
// IHD-OS-ACM-Vol 2c-3.23 PWR_WELL_CTL_DDI
let ctl_ddi = unsafe { gttmm.mmio(0x45454)? };
// IHD-OS-ACM-Vol 2c-3.23 FUSE_STATUS
let fuse_status = unsafe { gttmm.mmio(0x42000)? };
let fuse_status_pg0 = 1 << 27;
let wells = vec![
// DBUF functionality, Transcoder A, DDI A-B
PowerWell {
name: "1",
depends: &[],
ddis: &["A", "B"],
pipes: &[],
transcoders: &["A"],
request: 1 << 1,
state: 1 << 0,
fuse_status: 1 << 26,
},
// Audio playback, Transcoder WD, VGA, DDI C-E, Type-C, KVMR
PowerWell {
name: "2",
depends: &["1"],
ddis: &["C", "D", "E", "USBC1", "USBC2", "USBC3", "USBC4"],
pipes: &[],
transcoders: &[],
request: 1 << 3,
state: 1 << 2,
fuse_status: 1 << 25,
},
// Pipe A, FBC
PowerWell {
name: "A",
depends: &["1"],
ddis: &[],
pipes: &["A"],
transcoders: &[],
request: 1 << 11,
state: 1 << 10,
fuse_status: 1 << 21,
},
// Pipe B, Transcoder B
PowerWell {
name: "B",
depends: &["2"],
ddis: &[],
pipes: &["B"],
transcoders: &["B"],
request: 1 << 13,
state: 1 << 12,
fuse_status: 1 << 20,
},
// Pipe C, Transcoder C
PowerWell {
name: "C",
depends: &["2"],
ddis: &[],
pipes: &["C"],
transcoders: &["C"],
request: 1 << 15,
state: 1 << 14,
fuse_status: 1 << 19,
},
// Pipe D, Transcoder D
PowerWell {
name: "D",
depends: &["2"],
ddis: &[],
pipes: &["D"],
transcoders: &["D"],
request: 1 << 17,
state: 1 << 16,
fuse_status: 1 << 18,
},
];
Ok(Self {
ctl,
ctl_aux,
ctl_ddi,
fuse_status,
fuse_status_pg0,
wells,
})
}
}
@@ -0,0 +1,208 @@
//TODO: this is copied from vesad and should be adapted
use std::alloc::{self, Layout};
use std::convert::TryInto;
use std::ptr::{self, NonNull};
use std::sync::Mutex;
use driver_graphics::kms::connector::{KmsConnectorDriver, KmsConnectorStatus};
use driver_graphics::kms::objects::{KmsCrtc, KmsCrtcState, KmsObjectId, KmsObjects};
use driver_graphics::{Buffer, CursorPlane, Damage, GraphicsAdapter};
use drm_sys::{
DRM_CAP_DUMB_BUFFER, DRM_CAP_DUMB_PREFER_SHADOW, DRM_CLIENT_CAP_CURSOR_PLANE_HOTSPOT,
};
use syscall::{error::EINVAL, PAGE_SIZE};
use super::pipe::DeviceFb;
use super::Device;
#[derive(Debug)]
pub struct Connector {
framebuffer_id: usize,
}
impl KmsConnectorDriver for Connector {
type State = ();
}
impl GraphicsAdapter for Device {
type Connector = Connector;
type Crtc = ();
type Buffer = DumbFb;
type Framebuffer = ();
fn name(&self) -> &'static [u8] {
b"ihdgd"
}
fn desc(&self) -> &'static [u8] {
b"Intel HD Graphics"
}
fn init(&mut self, objects: &mut KmsObjects<Self>) {
self.init_inner();
// FIXME enumerate actual connectors
for (framebuffer_id, _) in self.framebuffers.iter().enumerate() {
let crtc = objects.add_crtc((), ());
objects.add_connector(Connector { framebuffer_id }, (), &[crtc]);
}
}
fn get_cap(&self, cap: u32) -> syscall::Result<u64> {
match cap {
DRM_CAP_DUMB_BUFFER => Ok(1),
DRM_CAP_DUMB_PREFER_SHADOW => Ok(0),
_ => Err(syscall::Error::new(EINVAL)),
}
}
fn set_client_cap(&self, cap: u32, _value: u64) -> syscall::Result<()> {
match cap {
// FIXME hide cursor plane unless this client cap is set
DRM_CLIENT_CAP_CURSOR_PLANE_HOTSPOT => Ok(()),
_ => Err(syscall::Error::new(EINVAL)),
}
}
fn probe_connector(&mut self, objects: &mut KmsObjects<Self>, id: KmsObjectId) {
let mut connector = objects.get_connector(id).unwrap().lock().unwrap();
let framebuffer = &self.framebuffers[connector.driver_data.framebuffer_id];
connector.connection = KmsConnectorStatus::Connected;
connector.update_from_size(framebuffer.width as u32, framebuffer.height as u32);
// FIXME fetch EDID
}
fn create_dumb_buffer(&mut self, width: u32, height: u32) -> (Self::Buffer, u32) {
(DumbFb::new(width as usize, height as usize), width * 4)
}
fn map_dumb_buffer(&mut self, framebuffer: &Self::Buffer) -> *mut u8 {
framebuffer.ptr.as_ptr().cast::<u8>()
}
fn create_framebuffer(&mut self, _buffer: &Self::Buffer) -> Self::Framebuffer {
()
}
fn set_crtc(
&mut self,
objects: &KmsObjects<Self>,
crtc: &Mutex<KmsCrtc<Self>>,
state: KmsCrtcState<Self>,
damage: Damage,
) -> syscall::Result<()> {
let mut crtc = crtc.lock().unwrap();
let buffer = state
.fb_id
.map(|fb_id| objects.get_framebuffer(fb_id))
.transpose()?;
crtc.state = state;
for connector in objects.connectors() {
let connector = connector.lock().unwrap();
if connector.state.crtc_id != objects.crtc_ids()[crtc.crtc_index as usize] {
continue;
}
let framebuffer_id = connector.driver_data.framebuffer_id;
let framebuffer = &mut self.framebuffers[framebuffer_id];
if let Some(buffer) = buffer {
buffer.buffer.sync(framebuffer, damage)
} else {
let onscreen_ptr = framebuffer.buffer.virt.cast::<u32>();
for row in 0..framebuffer.height {
unsafe {
ptr::write_bytes(
// stride is in bytes, but onscreen_ptr is a u32 pointer
onscreen_ptr.add((row * framebuffer.stride / 4) as usize),
0,
framebuffer.width as usize,
);
}
}
}
}
Ok(())
}
fn hw_cursor_size(&self) -> Option<(u32, u32)> {
None
}
fn handle_cursor(&mut self, _cursor: &CursorPlane<Self::Buffer>, _dirty_fb: bool) {
unimplemented!("ihdgd does not support this function");
}
}
#[derive(Debug)]
pub struct DumbFb {
width: usize,
height: usize,
ptr: NonNull<[u32]>,
}
impl DumbFb {
fn new(width: usize, height: usize) -> DumbFb {
let len = width * height;
let layout = Self::layout(len);
let ptr = unsafe { alloc::alloc_zeroed(layout) };
let ptr = ptr::slice_from_raw_parts_mut(ptr.cast(), len);
let ptr = NonNull::new(ptr).unwrap_or_else(|| alloc::handle_alloc_error(layout));
DumbFb { width, height, ptr }
}
#[inline]
fn layout(len: usize) -> Layout {
// Layout::array optimizes to an integer mul; page alignment lets the buffer be mapped by clients
Layout::array::<u32>(len)
.unwrap()
.align_to(PAGE_SIZE)
.unwrap()
}
}
impl Drop for DumbFb {
fn drop(&mut self) {
let layout = Self::layout(self.ptr.len());
unsafe { alloc::dealloc(self.ptr.as_ptr().cast(), layout) };
}
}
impl Buffer for DumbFb {
fn size(&self) -> usize {
self.width * self.height * 4
}
}
impl DumbFb {
fn sync(&self, framebuffer: &mut DeviceFb, sync_rect: Damage) {
let sync_rect = sync_rect.clip(
self.width.try_into().unwrap(),
self.height.try_into().unwrap(),
);
let start_x: usize = sync_rect.x.try_into().unwrap();
let start_y: usize = sync_rect.y.try_into().unwrap();
let w: usize = sync_rect.width.try_into().unwrap();
let h: usize = sync_rect.height.try_into().unwrap();
let offscreen_ptr = self.ptr.as_ptr() as *mut u32;
let onscreen_ptr = framebuffer.buffer.virt.cast::<u32>();
for row in start_y..start_y + h {
unsafe {
ptr::copy(
offscreen_ptr.add(row * self.width + start_x),
onscreen_ptr.add(row * framebuffer.stride as usize / 4 + start_x),
w,
);
}
}
}
}
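`DumbFb::sync` above copies only the clipped damage rectangle, row by row, from the tightly packed offscreen buffer into the strided onscreen framebuffer. A minimal safe sketch of the same pattern over plain slices (all names and numbers here are illustrative, not from the driver):

```rust
// Sketch: copy a pre-clipped damage rect from a tightly packed offscreen
// buffer into an onscreen buffer whose rows use a wider stride, mirroring
// the row-by-row `ptr::copy` loop in `DumbFb::sync`.
fn sync_rect(
    off: &[u32], off_width: usize,          // offscreen, tightly packed
    on: &mut [u32], on_stride: usize,       // onscreen stride, in u32 pixels
    x: usize, y: usize, w: usize, h: usize, // damage rect, already clipped
) {
    for row in y..y + h {
        let src = &off[row * off_width + x..row * off_width + x + w];
        let dst = &mut on[row * on_stride + x..row * on_stride + x + w];
        dst.copy_from_slice(src);
    }
}

fn main() {
    let off = vec![7u32; 4 * 4];    // 4x4 offscreen, every pixel = 7
    let mut on = vec![0u32; 8 * 4]; // 4 onscreen rows, stride 8
    sync_rect(&off, 4, &mut on, 8, 1, 1, 2, 2);
    assert_eq!(on[9], 7); // inside the damage rect
    assert_eq!(on[0], 0); // outside the rect stays untouched
    println!("ok");
}
```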
@@ -0,0 +1,258 @@
use common::io::{Io, MmioPtr};
use syscall::error::Result;
use super::{MmioRegion, Pipe};
// IHD-OS-KBL-Vol 2c-1.17 TRANS_CONF
// IHD-OS-TGL-Vol 2c-12.21 TRANS_CONF
pub const TRANS_CONF_ENABLE: u32 = 1 << 31;
pub const TRANS_CONF_STATE: u32 = 1 << 30;
pub const TRANS_CONF_MODE_MASK: u32 = 0b11 << 21;
// IHD-OS-KBL-Vol 2c-1.17 TRANS_DDI_FUNC_CTL
// IHD-OS-TGL-Vol 2c-12.21 TRANS_DDI_FUNC_CTL
pub const TRANS_DDI_FUNC_CTL_ENABLE: u32 = 1 << 31;
pub const TRANS_DDI_FUNC_CTL_MODE_HDMI: u32 = 0b000 << 24;
pub const TRANS_DDI_FUNC_CTL_MODE_DVI: u32 = 0b001 << 24;
pub const TRANS_DDI_FUNC_CTL_MODE_DP_SST: u32 = 0b010 << 24;
pub const TRANS_DDI_FUNC_CTL_MODE_DP_MST: u32 = 0b011 << 24;
pub const TRANS_DDI_FUNC_CTL_BPC_8: u32 = 0b000 << 20;
pub const TRANS_DDI_FUNC_CTL_BPC_10: u32 = 0b001 << 20;
pub const TRANS_DDI_FUNC_CTL_BPC_6: u32 = 0b010 << 20;
pub const TRANS_DDI_FUNC_CTL_BPC_12: u32 = 0b011 << 20;
pub const TRANS_DDI_FUNC_CTL_SYNC_POLARITY_HSHIGH: u32 = 0b01 << 16;
pub const TRANS_DDI_FUNC_CTL_SYNC_POLARITY_VSHIGH: u32 = 0b10 << 16;
pub const TRANS_DDI_FUNC_CTL_DSI_INPUT_PIPE_SHIFT: u32 = 12;
pub const TRANS_DDI_FUNC_CTL_PORT_WIDTH_1: u32 = 0b000 << 1;
pub const TRANS_DDI_FUNC_CTL_PORT_WIDTH_2: u32 = 0b001 << 1;
pub const TRANS_DDI_FUNC_CTL_PORT_WIDTH_3: u32 = 0b010 << 1;
pub const TRANS_DDI_FUNC_CTL_PORT_WIDTH_4: u32 = 0b011 << 1;
pub struct Transcoder {
pub name: &'static str,
pub index: usize,
pub clk_sel: MmioPtr<u32>,
pub clk_sel_shift: u32,
pub conf: MmioPtr<u32>,
pub ddi_func_ctl: MmioPtr<u32>,
pub ddi_func_ctl_ddi_shift: u32,
pub ddi_func_ctl_hdmi_scrambling: u32,
pub ddi_func_ctl_high_tmds_char_rate: u32,
pub ddi_func_ctl2: Option<MmioPtr<u32>>,
pub hblank: MmioPtr<u32>,
pub hsync: MmioPtr<u32>,
pub htotal: MmioPtr<u32>,
pub msa_misc: MmioPtr<u32>,
pub mult: MmioPtr<u32>,
pub push: Option<MmioPtr<u32>>,
pub space: MmioPtr<u32>,
pub stereo3d_ctl: MmioPtr<u32>,
pub vblank: MmioPtr<u32>,
pub vrr_ctl: Option<MmioPtr<u32>>,
pub vrr_flipline: Option<MmioPtr<u32>>,
pub vrr_status: Option<MmioPtr<u32>>,
pub vrr_status2: Option<MmioPtr<u32>>,
pub vrr_vmax: Option<MmioPtr<u32>>,
pub vrr_vmaxshift: Option<MmioPtr<u32>>,
pub vrr_vmin: Option<MmioPtr<u32>>,
pub vrr_vtotal_prev: Option<MmioPtr<u32>>,
pub vsync: MmioPtr<u32>,
pub vsyncshift: MmioPtr<u32>,
pub vtotal: MmioPtr<u32>,
}
impl Transcoder {
pub fn dump(&self) {
eprint!("Transcoder {} {}", self.name, self.index);
eprint!(" clk_sel {:08X}", self.clk_sel.read());
eprint!(" conf {:08X}", self.conf.read());
eprint!(" ddi_func_ctl {:08X}", self.ddi_func_ctl.read());
if let Some(reg) = &self.ddi_func_ctl2 {
eprint!(" ddi_func_ctl2 {:08X}", reg.read());
}
eprint!(" hblank {:08X}", self.hblank.read());
eprint!(" hsync {:08X}", self.hsync.read());
eprint!(" htotal {:08X}", self.htotal.read());
eprint!(" msa_misc {:08X}", self.msa_misc.read());
eprint!(" mult {:08X}", self.mult.read());
if let Some(reg) = &self.push {
eprint!(" push {:08X}", reg.read());
}
eprint!(" space {:08X}", self.space.read());
eprint!(" stereo3d_ctl {:08X}", self.stereo3d_ctl.read());
eprint!(" vblank {:08X}", self.vblank.read());
if let Some(reg) = &self.vrr_ctl {
eprint!(" vrr_ctl {:08X}", reg.read());
}
if let Some(reg) = &self.vrr_flipline {
eprint!(" vrr_flipline {:08X}", reg.read());
}
if let Some(reg) = &self.vrr_status {
eprint!(" vrr_status {:08X}", reg.read());
}
if let Some(reg) = &self.vrr_status2 {
eprint!(" vrr_status2 {:08X}", reg.read());
}
if let Some(reg) = &self.vrr_vmax {
eprint!(" vrr_vmax {:08X}", reg.read());
}
if let Some(reg) = &self.vrr_vmaxshift {
eprint!(" vrr_vmaxshift {:08X}", reg.read());
}
if let Some(reg) = &self.vrr_vmin {
eprint!(" vrr_vmin {:08X}", reg.read());
}
if let Some(reg) = &self.vrr_vtotal_prev {
eprint!(" vrr_vtotal_prev {:08X}", reg.read());
}
eprint!(" vsync {:08X}", self.vsync.read());
eprint!(" vsyncshift {:08X}", self.vsyncshift.read());
eprint!(" vtotal {:08X}", self.vtotal.read());
eprintln!();
}
pub fn modeset(&mut self, pipe: &mut Pipe, timing: &edid::DetailedTiming) {
let hactive = (timing.horizontal_active_pixels as u32) - 1;
let htotal = hactive + (timing.horizontal_blanking_pixels as u32);
let hsync_start = hactive + (timing.horizontal_front_porch as u32);
let hsync_end = hsync_start + (timing.horizontal_sync_width as u32);
let vactive = (timing.vertical_active_lines as u32) - 1;
let vtotal = vactive + (timing.vertical_blanking_lines as u32);
let vsync_start = vactive + (timing.vertical_front_porch as u32);
let vsync_end = vsync_start + (timing.vertical_sync_width as u32);
// Configure horizontal timings (total, blanking, sync)
self.htotal.write(hactive | (htotal << 16));
self.hblank.write(hactive | (htotal << 16));
self.hsync.write(hsync_start | (hsync_end << 16));
// Configure vertical timings (total, blanking, sync)
self.vtotal.write(vactive | (vtotal << 16));
self.vblank.write(vactive | (vtotal << 16));
self.vsync.write(vsync_start | (vsync_end << 16));
// Configure pipe
pipe.srcsz.write(vactive | (hactive << 16));
}
pub fn kabylake(gttmm: &MmioRegion) -> Result<Vec<Self>> {
let mut transcoders = Vec::with_capacity(4);
//TODO: Transcoder EDP
for (i, name) in ["A", "B", "C"].iter().enumerate() {
transcoders.push(Transcoder {
name,
index: i,
// IHD-OS-KBL-Vol 2c-1.17 TRANS_CLK_SEL
clk_sel: unsafe { gttmm.mmio(0x46140 + i * 0x4)? },
clk_sel_shift: 29,
// IHD-OS-KBL-Vol 2c-1.17 TRANS_CONF
conf: unsafe { gttmm.mmio(0x70008 + i * 0x1000)? },
// IHD-OS-KBL-Vol 2c-1.17 TRANS_DDI_FUNC_CTL
ddi_func_ctl: unsafe { gttmm.mmio(0x60400 + i * 0x1000)? },
ddi_func_ctl_ddi_shift: 28,
// HDMI scrambling not supported on Kaby Lake
ddi_func_ctl_hdmi_scrambling: 0,
ddi_func_ctl_high_tmds_char_rate: 0,
// N/A
ddi_func_ctl2: None,
// IHD-OS-KBL-Vol 2c-1.17 TRANS_HBLANK
hblank: unsafe { gttmm.mmio(0x60004 + i * 0x1000)? },
// IHD-OS-KBL-Vol 2c-1.17 TRANS_HSYNC
hsync: unsafe { gttmm.mmio(0x60008 + i * 0x1000)? },
// IHD-OS-KBL-Vol 2c-1.17 TRANS_HTOTAL
htotal: unsafe { gttmm.mmio(0x60000 + i * 0x1000)? },
// IHD-OS-KBL-Vol 2c-1.17 TRANS_MSA_MISC
msa_misc: unsafe { gttmm.mmio(0x60410 + i * 0x1000)? },
// IHD-OS-KBL-Vol 2c-1.17 TRANS_MULT
mult: unsafe { gttmm.mmio(0x6002C + i * 0x1000)? },
// N/A
push: None,
// IHD-OS-KBL-Vol 2c-1.17 TRANS_SPACE
space: unsafe { gttmm.mmio(0x60020 + i * 0x1000)? },
// IHD-OS-KBL-Vol 2c-1.17 TRANS_STEREO3D_CTL
stereo3d_ctl: unsafe { gttmm.mmio(0x70020 + i * 0x1000)? },
// IHD-OS-KBL-Vol 2c-1.17 TRANS_VBLANK
vblank: unsafe { gttmm.mmio(0x60010 + i * 0x1000)? },
// N/A
vrr_ctl: None,
vrr_flipline: None,
vrr_status: None,
vrr_status2: None,
vrr_vmax: None,
vrr_vmaxshift: None,
vrr_vmin: None,
vrr_vtotal_prev: None,
// IHD-OS-KBL-Vol 2c-1.17 TRANS_VSYNC
vsync: unsafe { gttmm.mmio(0x60014 + i * 0x1000)? },
// IHD-OS-KBL-Vol 2c-1.17 TRANS_VSYNCSHIFT
vsyncshift: unsafe { gttmm.mmio(0x60028 + i * 0x1000)? },
// IHD-OS-KBL-Vol 2c-1.17 TRANS_VTOTAL
vtotal: unsafe { gttmm.mmio(0x6000C + i * 0x1000)? },
});
}
Ok(transcoders)
}
pub fn tigerlake(gttmm: &MmioRegion) -> Result<Vec<Self>> {
let mut transcoders = Vec::with_capacity(4);
for (i, name) in ["A", "B", "C", "D"].iter().enumerate() {
transcoders.push(Transcoder {
name,
index: i,
// IHD-OS-TGL-Vol 2c-12.21 TRANS_CLK_SEL
clk_sel: unsafe { gttmm.mmio(0x46140 + i * 0x4)? },
clk_sel_shift: 28,
// IHD-OS-TGL-Vol 2c-12.21 TRANS_CONF
conf: unsafe { gttmm.mmio(0x70008 + i * 0x1000)? },
// IHD-OS-TGL-Vol 2c-12.21 TRANS_DDI_FUNC_CTL
ddi_func_ctl: unsafe { gttmm.mmio(0x60400 + i * 0x1000)? },
ddi_func_ctl_ddi_shift: 27,
ddi_func_ctl_hdmi_scrambling: 1 << 0,
ddi_func_ctl_high_tmds_char_rate: 1 << 4,
// IHD-OS-TGL-Vol 2c-12.21 TRANS_DDI_FUNC_CTL2
ddi_func_ctl2: Some(unsafe { gttmm.mmio(0x60404 + i * 0x1000)? }),
// IHD-OS-TGL-Vol 2c-12.21 TRANS_HBLANK
hblank: unsafe { gttmm.mmio(0x60004 + i * 0x1000)? },
// IHD-OS-TGL-Vol 2c-12.21 TRANS_HSYNC
hsync: unsafe { gttmm.mmio(0x60008 + i * 0x1000)? },
// IHD-OS-TGL-Vol 2c-12.21 TRANS_HTOTAL
htotal: unsafe { gttmm.mmio(0x60000 + i * 0x1000)? },
// IHD-OS-TGL-Vol 2c-12.21 TRANS_MSA_MISC
msa_misc: unsafe { gttmm.mmio(0x60410 + i * 0x1000)? },
// IHD-OS-TGL-Vol 2c-12.21 TRANS_MULT
mult: unsafe { gttmm.mmio(0x6002C + i * 0x1000)? },
// IHD-OS-TGL-Vol 2c-12.21 TRANS_PUSH
push: Some(unsafe { gttmm.mmio(0x60A70 + i * 0x1000)? }),
// IHD-OS-TGL-Vol 2c-12.21 TRANS_SPACE
space: unsafe { gttmm.mmio(0x60020 + i * 0x1000)? },
// IHD-OS-TGL-Vol 2c-12.21 TRANS_STEREO3D_CTL
stereo3d_ctl: unsafe { gttmm.mmio(0x70020 + i * 0x1000)? },
// IHD-OS-TGL-Vol 2c-12.21 TRANS_VBLANK
vblank: unsafe { gttmm.mmio(0x60010 + i * 0x1000)? },
// IHD-OS-TGL-Vol 2c-12.21 TRANS_VRR_CTL
vrr_ctl: Some(unsafe { gttmm.mmio(0x60420 + i * 0x1000)? }),
// IHD-OS-TGL-Vol 2c-12.21 TRANS_VRR_FLIPLINE
vrr_flipline: Some(unsafe { gttmm.mmio(0x60438 + i * 0x1000)? }),
// IHD-OS-TGL-Vol 2c-12.21 TRANS_VRR_STATUS
vrr_status: Some(unsafe { gttmm.mmio(0x6042C + i * 0x1000)? }),
// IHD-OS-TGL-Vol 2c-12.21 TRANS_VRR_STATUS2
vrr_status2: Some(unsafe { gttmm.mmio(0x6043C + i * 0x1000)? }),
// IHD-OS-TGL-Vol 2c-12.21 TRANS_VRR_VMAX
vrr_vmax: Some(unsafe { gttmm.mmio(0x60424 + i * 0x1000)? }),
// IHD-OS-TGL-Vol 2c-12.21 TRANS_VRR_VMAXSHIFT
vrr_vmaxshift: Some(unsafe { gttmm.mmio(0x60428 + i * 0x1000)? }),
// IHD-OS-TGL-Vol 2c-12.21 TRANS_VRR_VMIN
vrr_vmin: Some(unsafe { gttmm.mmio(0x60434 + i * 0x1000)? }),
// IHD-OS-TGL-Vol 2c-12.21 TRANS_VRR_VTOTAL_PREV
vrr_vtotal_prev: Some(unsafe { gttmm.mmio(0x60480 + i * 0x1000)? }),
// IHD-OS-TGL-Vol 2c-12.21 TRANS_VSYNC
vsync: unsafe { gttmm.mmio(0x60014 + i * 0x1000)? },
// IHD-OS-TGL-Vol 2c-12.21 TRANS_VSYNCSHIFT
vsyncshift: unsafe { gttmm.mmio(0x60028 + i * 0x1000)? },
// IHD-OS-TGL-Vol 2c-12.21 TRANS_VTOTAL
vtotal: unsafe { gttmm.mmio(0x6000C + i * 0x1000)? },
})
}
Ok(transcoders)
}
}
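`Transcoder::modeset` above packs each timing register with the minus-one active value in the low 16 bits and the minus-one total in the high 16 bits. A standalone sketch of that encoding, using a common 1920-active / 2200-total horizontal timing purely as an example:

```rust
// Sketch of the minus-one register encoding used by `Transcoder::modeset`:
// the hardware expects (value - 1) in the low half and the paired
// (value - 1) in the high half of each 32-bit timing register.
fn pack_timing(active: u32, total: u32) -> u32 {
    (active - 1) | ((total - 1) << 16)
}

fn main() {
    // Example numbers only; real values come from the EDID detailed timing.
    let htotal_reg = pack_timing(1920, 2200); // TRANS_HTOTAL-style value
    assert_eq!(htotal_reg & 0xFFFF, 1919);    // active pixels - 1
    assert_eq!(htotal_reg >> 16, 2199);       // total pixels - 1
    println!("{:08X}", htotal_reg);
}
```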
@@ -0,0 +1,105 @@
use driver_graphics::GraphicsScheme;
use event::{user_data, EventQueue};
use pcid_interface::{irq_helpers::pci_allocate_interrupt_vector, PciFunctionHandle};
use std::{
io::{Read, Write},
os::fd::AsRawFd,
};
mod device;
use self::device::Device;
fn main() {
pcid_interface::pci_daemon(daemon);
}
fn daemon(daemon: daemon::Daemon, mut pcid_handle: PciFunctionHandle) -> ! {
let pci_config = pcid_handle.config();
let mut name = pci_config.func.name();
name.push_str("_ihdg");
common::setup_logging(
"graphics",
"pci",
&name,
common::output_level(),
common::file_level(),
);
log::info!("IHDG {}", pci_config.func.display());
let device = Device::new(&mut pcid_handle, &pci_config.func)
.expect("ihdgd: failed to initialize device");
let irq_file = pci_allocate_interrupt_vector(&mut pcid_handle, "ihdgd");
// Needs to be before GraphicsScheme::new to avoid a deadlock due to initnsmgr blocking on
// /scheme/event as it is already blocked on opening /scheme/display.ihdg.*.
// FIXME change the initnsmgr to not block on openat for the target scheme.
let event_queue: EventQueue<Source> =
EventQueue::new().expect("ihdgd: failed to create event queue");
let mut scheme = GraphicsScheme::new(device, format!("display.ihdg.{}", name), false);
user_data! {
enum Source {
Input,
Irq,
Scheme,
}
}
event_queue
.subscribe(
scheme.inputd_event_handle().as_raw_fd() as usize,
Source::Input,
event::EventFlags::READ,
)
.unwrap();
event_queue
.subscribe(
irq_file.irq_handle().as_raw_fd() as usize,
Source::Irq,
event::EventFlags::READ,
)
.unwrap();
event_queue
.subscribe(
scheme.event_handle().raw(),
Source::Scheme,
event::EventFlags::READ,
)
.unwrap();
libredox::call::setrens(0, 0).expect("ihdgd: failed to enter null namespace");
daemon.ready();
let all = [Source::Input, Source::Irq, Source::Scheme];
for event in all
.into_iter()
.chain(event_queue.map(|e| e.expect("ihdgd: failed to get next event").user_data))
{
match event {
Source::Input => scheme.handle_vt_events(),
Source::Irq => {
let mut irq = [0; 8];
irq_file.irq_handle().read(&mut irq).unwrap();
if scheme.adapter_mut().handle_irq() {
irq_file.irq_handle().write(&irq).unwrap();
scheme.adapter_mut().handle_events();
scheme.tick().unwrap();
}
}
Source::Scheme => {
scheme
.tick()
.expect("ihdgd: failed to handle scheme events");
}
}
}
std::process::exit(0);
}
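The event loop above chains a one-shot pass over every `Source` in front of the blocking queue iterator, so events that arrived before subscription still get handled once. A self-contained sketch of that priming pattern (the `Vec` stands in for the kernel event queue):

```rust
// Sketch of the event-loop priming pattern used by `daemon`: each source
// is processed once up front, then the loop consumes real queue events.
#[derive(Clone, Copy, Debug, PartialEq)]
enum Source {
    Input,
    Irq,
    Scheme,
}

fn main() {
    let queue = vec![Source::Irq, Source::Scheme]; // pretend kernel events
    let all = [Source::Input, Source::Irq, Source::Scheme];
    let handled: Vec<Source> = all.into_iter().chain(queue).collect();
    // Three priming passes come first, then the queued events follow.
    assert_eq!(handled.len(), 5);
    assert_eq!(handled[0], Source::Input);
    println!("ok");
}
```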
@@ -0,0 +1,23 @@
[package]
name = "vesad"
description = "VESA driver"
version = "0.1.0"
edition = "2021"
[dependencies]
drm-sys.workspace = true
orbclient.workspace = true
ransid.workspace = true
redox_syscall.workspace = true
redox_event.workspace = true
common = { path = "../../common" }
daemon = { path = "../../../daemon" }
driver-graphics = { path = "../driver-graphics" }
libredox.workspace = true
[features]
default = []
[lints]
workspace = true
@@ -0,0 +1,133 @@
use driver_graphics::GraphicsScheme;
use event::{user_data, EventQueue};
use std::collections::HashMap;
use std::env;
use std::os::fd::AsRawFd;
use crate::scheme::{FbAdapter, FrameBuffer};
mod scheme;
fn main() {
common::init();
daemon::Daemon::new(daemon);
}
fn daemon(daemon: daemon::Daemon) -> ! {
if env::var("FRAMEBUFFER_WIDTH").is_err() {
println!("vesad: No boot framebuffer");
daemon.ready();
std::process::exit(0);
}
let width = usize::from_str_radix(
&env::var("FRAMEBUFFER_WIDTH").expect("FRAMEBUFFER_WIDTH not set"),
16,
)
.expect("failed to parse FRAMEBUFFER_WIDTH");
let height = usize::from_str_radix(
&env::var("FRAMEBUFFER_HEIGHT").expect("FRAMEBUFFER_HEIGHT not set"),
16,
)
.expect("failed to parse FRAMEBUFFER_HEIGHT");
let phys = usize::from_str_radix(
&env::var("FRAMEBUFFER_ADDR").expect("FRAMEBUFFER_ADDR not set"),
16,
)
.expect("failed to parse FRAMEBUFFER_ADDR");
let stride = usize::from_str_radix(
&env::var("FRAMEBUFFER_STRIDE").expect("FRAMEBUFFER_STRIDE not set"),
16,
)
.expect("failed to parse FRAMEBUFFER_STRIDE");
println!(
"vesad: {}x{} stride {} at 0x{:X}",
width, height, stride, phys
);
if phys == 0 {
println!("vesad: Boot framebuffer at address 0");
daemon.ready();
std::process::exit(0);
}
let mut framebuffers = vec![unsafe { FrameBuffer::new(phys, width, height, stride) }];
//TODO: ideal maximum number of outputs?
let bootloader_env = std::fs::read_to_string("/scheme/sys/env")
.expect("failed to read env")
.lines()
.map(|line| {
let (env, value) = line.split_once('=').unwrap();
(env.to_owned(), value.to_owned())
})
.collect::<HashMap<String, String>>();
for i in 1..1024 {
match bootloader_env.get(&format!("FRAMEBUFFER{}", i)) {
Some(var) => match unsafe { FrameBuffer::parse(&var) } {
Some(fb) => {
println!(
"vesad: framebuffer {}: {}x{} stride {} at 0x{:X}",
i, fb.width, fb.height, fb.stride, fb.phys
);
framebuffers.push(fb);
}
None => {
eprintln!("vesad: framebuffer {}: failed to parse '{}'", i, var);
}
},
None => break,
};
}
let mut scheme =
GraphicsScheme::new(FbAdapter { framebuffers }, "display.vesa".to_owned(), true);
user_data! {
enum Source {
Input,
Scheme,
}
}
let event_queue: EventQueue<Source> =
EventQueue::new().expect("vesad: failed to create event queue");
event_queue
.subscribe(
scheme.inputd_event_handle().as_raw_fd() as usize,
Source::Input,
event::EventFlags::READ,
)
.unwrap();
event_queue
.subscribe(
scheme.event_handle().raw(),
Source::Scheme,
event::EventFlags::READ,
)
.unwrap();
libredox::call::setrens(0, 0).expect("vesad: failed to enter null namespace");
daemon.ready();
let all = [Source::Input, Source::Scheme];
for event in all
.into_iter()
.chain(event_queue.map(|e| e.expect("vesad: failed to get next event").user_data))
{
match event {
Source::Input => scheme.handle_vt_events(),
Source::Scheme => {
scheme
.tick()
.expect("vesad: failed to handle scheme events");
}
}
}
unreachable!("vesad: event queue iterator never returns");
}
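`daemon` above reads the boot framebuffer geometry from `FRAMEBUFFER_WIDTH`/`HEIGHT`/`ADDR`/`STRIDE`, all hex-encoded without an `0x` prefix. A small sketch of that lookup-and-parse step; `hex_env` is a hypothetical helper, not part of the driver:

```rust
use std::env;

// Sketch: fetch an environment variable and parse it as bare hexadecimal,
// the way `daemon` handles the FRAMEBUFFER_* variables.
fn hex_env(name: &str) -> Option<usize> {
    let raw = env::var(name).ok()?;
    usize::from_str_radix(&raw, 16).ok()
}

fn main() {
    env::set_var("FRAMEBUFFER_WIDTH", "780"); // hex 780 == 1920 decimal
    assert_eq!(hex_env("FRAMEBUFFER_WIDTH"), Some(1920));
    assert_eq!(hex_env("FRAMEBUFFER_MISSING"), None);
    println!("ok");
}
```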
@@ -0,0 +1,275 @@
use std::alloc::{self, Layout};
use std::convert::TryInto;
use std::ptr::{self, NonNull};
use std::sync::Mutex;
use driver_graphics::kms::connector::{KmsConnectorDriver, KmsConnectorStatus};
use driver_graphics::kms::objects::{KmsCrtc, KmsCrtcState, KmsObjectId, KmsObjects};
use driver_graphics::{Buffer, CursorPlane, Damage, GraphicsAdapter};
use drm_sys::{
DRM_CAP_DUMB_BUFFER, DRM_CAP_DUMB_PREFER_SHADOW, DRM_CLIENT_CAP_CURSOR_PLANE_HOTSPOT,
};
use syscall::{EINVAL, PAGE_SIZE};
#[derive(Debug)]
pub struct FbAdapter {
pub framebuffers: Vec<FrameBuffer>,
}
#[derive(Debug)]
pub struct Connector {
width: u32,
height: u32,
framebuffer_id: usize,
}
impl KmsConnectorDriver for Connector {
type State = ();
}
impl GraphicsAdapter for FbAdapter {
type Connector = Connector;
type Crtc = ();
type Buffer = GraphicScreen;
type Framebuffer = ();
fn name(&self) -> &'static [u8] {
b"vesad"
}
fn desc(&self) -> &'static [u8] {
b"VESA"
}
fn init(&mut self, objects: &mut KmsObjects<Self>) {
for (framebuffer_id, framebuffer) in self.framebuffers.iter().enumerate() {
let crtc = objects.add_crtc((), ());
objects.add_connector(
Connector {
width: framebuffer.width as u32,
height: framebuffer.height as u32,
framebuffer_id,
},
(),
&[crtc],
);
}
}
fn get_cap(&self, cap: u32) -> syscall::Result<u64> {
match cap {
DRM_CAP_DUMB_BUFFER => Ok(1),
DRM_CAP_DUMB_PREFER_SHADOW => Ok(0),
_ => Err(syscall::Error::new(EINVAL)),
}
}
fn set_client_cap(&self, cap: u32, _value: u64) -> syscall::Result<()> {
match cap {
DRM_CLIENT_CAP_CURSOR_PLANE_HOTSPOT => Ok(()),
_ => Err(syscall::Error::new(EINVAL)),
}
}
fn probe_connector(&mut self, objects: &mut KmsObjects<Self>, id: KmsObjectId) {
let mut connector = objects.get_connector(id).unwrap().lock().unwrap();
let connector = &mut *connector;
connector.connection = KmsConnectorStatus::Connected;
connector.update_from_size(connector.driver_data.width, connector.driver_data.height);
}
fn create_dumb_buffer(&mut self, width: u32, height: u32) -> (Self::Buffer, u32) {
(
GraphicScreen::new(width as usize, height as usize),
width * 4,
)
}
fn map_dumb_buffer(&mut self, framebuffer: &Self::Buffer) -> *mut u8 {
framebuffer.ptr.as_ptr().cast::<u8>()
}
fn create_framebuffer(&mut self, _buffer: &Self::Buffer) -> Self::Framebuffer {
()
}
fn set_crtc(
&mut self,
objects: &KmsObjects<Self>,
crtc: &Mutex<KmsCrtc<Self>>,
state: KmsCrtcState<Self>,
damage: Damage,
) -> syscall::Result<()> {
let mut crtc = crtc.lock().unwrap();
let buffer = state
.fb_id
.map(|fb_id| objects.get_framebuffer(fb_id))
.transpose()?;
crtc.state = state;
for connector in objects.connectors() {
let connector = connector.lock().unwrap();
if connector.state.crtc_id != objects.crtc_ids()[crtc.crtc_index as usize] {
continue;
}
let framebuffer_id = connector.driver_data.framebuffer_id;
let framebuffer = &mut self.framebuffers[framebuffer_id];
if let Some(buffer) = buffer {
buffer.buffer.sync(framebuffer, damage)
} else {
let onscreen_ptr = framebuffer.onscreen as *mut u32; // FIXME use as_mut_ptr once stable
for row in 0..framebuffer.height {
unsafe {
ptr::write_bytes(
onscreen_ptr.add(row * framebuffer.stride),
0,
framebuffer.width,
);
}
}
}
}
Ok(())
}
fn hw_cursor_size(&self) -> Option<(u32, u32)> {
None
}
fn handle_cursor(&mut self, _cursor: &CursorPlane<Self::Buffer>, _dirty_fb: bool) {
unimplemented!("vesad does not support hardware cursors");
}
}
#[derive(Debug)]
pub struct FrameBuffer {
pub onscreen: *mut [u32],
pub phys: usize,
pub width: usize,
pub height: usize,
pub stride: usize,
}
impl FrameBuffer {
pub unsafe fn new(phys: usize, width: usize, height: usize, stride: usize) -> Self {
let size = stride * height;
let virt = common::physmap(
phys,
size * 4,
common::Prot {
read: true,
write: true,
},
common::MemoryType::WriteCombining,
)
.expect("vesad: failed to map framebuffer") as *mut u32;
let onscreen = ptr::slice_from_raw_parts_mut(virt, size);
Self {
onscreen,
phys,
width,
height,
stride,
}
}
pub unsafe fn parse(var: &str) -> Option<Self> {
fn parse_number(part: &str) -> Option<usize> {
let (start, radix) = if part.starts_with("0x") {
(2, 16)
} else {
(0, 10)
};
match usize::from_str_radix(&part[start..], radix) {
Ok(ok) => Some(ok),
Err(err) => {
eprintln!("vesad: failed to parse '{}': {}", part, err);
None
}
}
}
let mut parts = var.split(',');
let phys = parse_number(parts.next()?)?;
let width = parse_number(parts.next()?)?;
let height = parse_number(parts.next()?)?;
let stride = parse_number(parts.next()?)?;
Some(Self::new(phys, width, height, stride))
}
}
#[derive(Debug)]
pub struct GraphicScreen {
width: usize,
height: usize,
ptr: NonNull<[u32]>,
}
impl GraphicScreen {
fn new(width: usize, height: usize) -> GraphicScreen {
let len = width * height;
let layout = Self::layout(len);
let ptr = unsafe { alloc::alloc_zeroed(layout) };
let ptr = ptr::slice_from_raw_parts_mut(ptr.cast(), len);
let ptr = NonNull::new(ptr).unwrap_or_else(|| alloc::handle_alloc_error(layout));
GraphicScreen { width, height, ptr }
}
#[inline]
fn layout(len: usize) -> Layout {
// Layout::array optimizes to an integer mul; page alignment lets the buffer be mapped by clients
Layout::array::<u32>(len)
.unwrap()
.align_to(PAGE_SIZE)
.unwrap()
}
}
impl Drop for GraphicScreen {
fn drop(&mut self) {
let layout = Self::layout(self.ptr.len());
unsafe { alloc::dealloc(self.ptr.as_ptr().cast(), layout) };
}
}
impl Buffer for GraphicScreen {
fn size(&self) -> usize {
self.width * self.height * 4
}
}
impl GraphicScreen {
fn sync(&self, framebuffer: &mut FrameBuffer, sync_rect: Damage) {
let sync_rect = sync_rect.clip(
self.width.try_into().unwrap(),
self.height.try_into().unwrap(),
);
let start_x: usize = sync_rect.x.try_into().unwrap();
let start_y: usize = sync_rect.y.try_into().unwrap();
let w: usize = sync_rect.width.try_into().unwrap();
let h: usize = sync_rect.height.try_into().unwrap();
let offscreen_ptr = self.ptr.as_ptr() as *mut u32;
let onscreen_ptr = framebuffer.onscreen as *mut u32; // FIXME use as_mut_ptr once stable
for row in start_y..start_y + h {
unsafe {
ptr::copy(
offscreen_ptr.add(row * self.width + start_x),
onscreen_ptr.add(row * framebuffer.stride + start_x),
w,
);
}
}
}
}
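`FrameBuffer::parse` above accepts a comma-separated `phys,width,height,stride` string where each field is decimal, or hexadecimal with an `0x` prefix. A self-contained sketch of just that field parsing (the sample string is made up for illustration):

```rust
// Sketch of the "phys,width,height,stride" format consumed by
// `FrameBuffer::parse`: decimal by default, hex with an 0x prefix.
fn parse_fields(var: &str) -> Option<(usize, usize, usize, usize)> {
    fn num(part: &str) -> Option<usize> {
        if let Some(hex) = part.strip_prefix("0x") {
            usize::from_str_radix(hex, 16).ok()
        } else {
            part.parse().ok()
        }
    }
    let mut it = var.split(',');
    Some((num(it.next()?)?, num(it.next()?)?, num(it.next()?)?, num(it.next()?)?))
}

fn main() {
    let fb = parse_fields("0xFD000000,1920,1080,1920").unwrap();
    assert_eq!(fb, (0xFD00_0000, 1920, 1080, 1920));
    assert_eq!(parse_fields("not-a-framebuffer"), None);
    println!("ok");
}
```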
@@ -0,0 +1,28 @@
[package]
name = "virtio-gpud"
description = "VirtIO-GPU driver"
version = "0.1.0"
edition = "2021"
authors = ["Anhad Singh <andypython@protonmail.com>"]
[dependencies]
drm-sys.workspace = true
log.workspace = true
static_assertions.workspace = true
futures = { version = "0.3.28", features = ["executor"] }
anyhow.workspace = true
common = { path = "../../common" }
daemon = { path = "../../../daemon" }
driver-graphics = { path = "../driver-graphics" }
virtio-core = { path = "../../virtio-core" }
pcid = { path = "../../pcid" }
redox_event.workspace = true
redox_syscall.workspace = true
orbclient.workspace = true
spin.workspace = true
libredox.workspace = true
[lints]
workspace = true
@@ -0,0 +1,615 @@
//! `virtio-gpu` is a virtio based graphics adapter. It can operate in 2D mode and in 3D mode.
//!
//! XXX: 3D mode will offload rendering ops to the host gpu and therefore requires a GPU with 3D support
//! on the host machine.
// Notes for the future:
//
// `virtio-gpu` 2D acceleration is just blitting. 3D acceleration has 2 kinds:
// - virgl - OpenGL
// - venus - Vulkan
//
// The Venus driver requires support for the following from the `virtio-gpu` kernel driver:
// - VIRTGPU_PARAM_3D_FEATURES
// - VIRTGPU_PARAM_CAPSET_QUERY_FIX
// - VIRTGPU_PARAM_RESOURCE_BLOB
// - VIRTGPU_PARAM_HOST_VISIBLE
// - VIRTGPU_PARAM_CROSS_DEVICE
// - VIRTGPU_PARAM_CONTEXT_INIT
//
// cc https://docs.mesa3d.org/drivers/venus.html
// cc https://docs.mesa3d.org/drivers/virgl.html
use std::os::fd::AsRawFd;
use std::sync::atomic::{AtomicU32, Ordering};
use driver_graphics::GraphicsAdapter;
use event::{user_data, EventQueue};
use pcid_interface::PciFunctionHandle;
use virtio_core::utils::VolatileCell;
use virtio_core::MSIX_PRIMARY_VECTOR;
mod scheme;
//const VIRTIO_GPU_F_VIRGL: u32 = 0;
const VIRTIO_GPU_F_EDID: u32 = 1;
//const VIRTIO_GPU_F_RESOURCE_UUID: u32 = 2;
//const VIRTIO_GPU_F_RESOURCE_BLOB: u32 = 3;
//const VIRTIO_GPU_F_CONTEXT_INIT: u32 = 4;
const VIRTIO_GPU_EVENT_DISPLAY: u32 = 1 << 0;
const VIRTIO_GPU_MAX_SCANOUTS: usize = 16;
#[repr(C)]
pub struct GpuConfig {
/// Signals pending events to the driver.
pub events_read: VolatileCell<u32>, // read-only
/// Clears pending events in the device (write-to-clear).
pub events_clear: VolatileCell<u32>, // write-only
pub num_scanouts: VolatileCell<u32>,
pub num_capsets: VolatileCell<u32>,
}
impl GpuConfig {
#[inline]
pub fn num_scanouts(&self) -> u32 {
self.num_scanouts.get()
}
}
#[derive(Debug, Copy, Clone, PartialEq)]
#[repr(u32)]
pub enum CommandTy {
Undefined = 0,
// 2D commands
GetDisplayInfo = 0x0100,
ResourceCreate2d,
ResourceUnref,
SetScanout,
ResourceFlush,
TransferToHost2d,
ResourceAttachBacking,
ResourceDetachBacking,
GetCapsetInfo,
GetCapset,
GetEdid,
ResourceAssignUuid,
ResourceCreateBlob,
SetScanoutBlob,
// 3D commands
CtxCreate = 0x0200,
CtxDestroy,
CtxAttachResource,
CtxDetachResource,
ResourceCreate3d,
TransferToHost3d,
TransferFromHost3d,
Submit3d,
ResourceMapBlob,
ResourceUnmapBlob,
// cursor commands
UpdateCursor = 0x0300,
MoveCursor,
// success responses
RespOkNodata = 0x1100,
RespOkDisplayInfo,
RespOkCapsetInfo,
RespOkCapset,
RespOkEdid,
RespOkResourceUuid,
RespOkMapInfo,
// error responses
RespErrUnspec = 0x1200,
RespErrOutOfMemory,
RespErrInvalidScanoutId,
RespErrInvalidResourceId,
RespErrInvalidContextId,
RespErrInvalidParameter,
}
static_assertions::const_assert_eq!(core::mem::size_of::<CommandTy>(), 4);
const VIRTIO_GPU_FLAG_FENCE: u32 = 1 << 0;
//const VIRTIO_GPU_FLAG_INFO_RING_IDX: u32 = 1 << 1;
#[derive(Debug)]
#[repr(C)]
pub struct ControlHeader {
pub ty: CommandTy,
pub flags: u32,
pub fence_id: u64,
pub ctx_id: u32,
pub ring_index: u8,
padding: [u8; 3],
}
impl ControlHeader {
pub fn with_ty(ty: CommandTy) -> Self {
Self {
ty,
..Default::default()
}
}
}
impl Default for ControlHeader {
fn default() -> Self {
Self {
ty: CommandTy::Undefined,
flags: 0,
fence_id: 0,
ctx_id: 0,
ring_index: 0,
padding: [0; 3],
}
}
}
#[derive(Debug, Copy, Clone)]
#[repr(C)]
pub struct GpuRect {
pub x: u32,
pub y: u32,
pub width: u32,
pub height: u32,
}
impl GpuRect {
pub fn new(x: u32, y: u32, width: u32, height: u32) -> Self {
Self {
x,
y,
width,
height,
}
}
}
#[derive(Debug)]
#[repr(C)]
pub struct DisplayInfo {
rect: GpuRect,
pub enabled: u32,
pub flags: u32,
}
#[derive(Debug)]
#[repr(C)]
pub struct GetDisplayInfo {
pub header: ControlHeader,
pub display_info: [DisplayInfo; VIRTIO_GPU_MAX_SCANOUTS],
}
impl Default for GetDisplayInfo {
fn default() -> Self {
Self {
header: ControlHeader {
ty: CommandTy::GetDisplayInfo,
..Default::default()
},
display_info: unsafe { core::mem::zeroed() },
}
}
}
static RESOURCE_ALLOC: AtomicU32 = AtomicU32::new(1); // 0 is reserved as the "no resource" sentinel for commands that take a `resource_id`.
#[derive(PartialEq, Eq, Debug, Copy, Clone)]
#[repr(C)]
pub struct ResourceId(u32);
impl ResourceId {
const NONE: ResourceId = ResourceId(0);
fn alloc() -> Self {
ResourceId(RESOURCE_ALLOC.fetch_add(1, Ordering::SeqCst))
}
}
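`ResourceId::alloc` above hands out IDs from a global atomic counter that starts at 1, keeping 0 free as the "no resource" sentinel. A self-contained sketch of that lock-free allocator (names here are illustrative):

```rust
use std::sync::atomic::{AtomicU32, Ordering};

// Sketch of the allocator behind `ResourceId::alloc`: a process-wide
// counter starting at 1, so ID 0 always means "no resource".
static NEXT_ID: AtomicU32 = AtomicU32::new(1);

fn alloc_id() -> u32 {
    NEXT_ID.fetch_add(1, Ordering::SeqCst)
}

fn main() {
    let a = alloc_id();
    let b = alloc_id();
    assert_eq!(a, 1);
    assert_eq!(b, 2);
    assert_ne!(a, 0); // 0 stays reserved
    println!("ok");
}
```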
#[derive(Debug, Copy, Clone)]
#[repr(u32)]
pub enum ResourceFormat {
Unknown = 0,
Bgrx = 2,
Xrgb = 4,
}
#[derive(Debug)]
#[repr(C)]
pub struct ResourceCreate2d {
pub header: ControlHeader,
resource_id: ResourceId,
format: ResourceFormat,
width: u32,
height: u32,
}
impl ResourceCreate2d {
fn new(resource_id: ResourceId, format: ResourceFormat, width: u32, height: u32) -> Self {
Self {
header: ControlHeader::with_ty(CommandTy::ResourceCreate2d),
resource_id,
format,
width,
height,
}
}
}
#[derive(Debug)]
#[repr(C)]
pub struct MemEntry {
pub address: u64,
pub length: u32,
pub padding: u32,
}
#[derive(Debug)]
#[repr(C)]
pub struct AttachBacking {
pub header: ControlHeader,
pub resource_id: ResourceId,
pub num_entries: u32,
}
impl AttachBacking {
pub fn new(resource_id: ResourceId, num_entries: u32) -> Self {
Self {
header: ControlHeader::with_ty(CommandTy::ResourceAttachBacking),
resource_id,
num_entries,
}
}
}
#[derive(Debug)]
#[repr(C)]
pub struct DetachBacking {
pub header: ControlHeader,
pub resource_id: ResourceId,
pub padding: u32,
}
impl DetachBacking {
pub fn new(resource_id: ResourceId) -> Self {
Self {
header: ControlHeader::with_ty(CommandTy::ResourceDetachBacking),
resource_id,
padding: 0,
}
}
}
#[derive(Debug)]
#[repr(C)]
pub struct ResourceFlush {
pub header: ControlHeader,
pub rect: GpuRect,
pub resource_id: ResourceId,
pub padding: u32,
}
impl ResourceFlush {
pub fn new(resource_id: ResourceId, rect: GpuRect) -> Self {
Self {
header: ControlHeader::with_ty(CommandTy::ResourceFlush),
rect,
resource_id,
padding: 0,
}
}
}
#[derive(Debug)]
#[repr(C)]
pub struct ResourceUnref {
pub header: ControlHeader,
pub resource_id: ResourceId,
pub padding: u32,
}
impl ResourceUnref {
pub fn new(resource_id: ResourceId) -> Self {
Self {
header: ControlHeader::with_ty(CommandTy::ResourceUnref),
resource_id,
padding: 0,
}
}
}
#[repr(C)]
#[derive(Debug)]
pub struct SetScanout {
pub header: ControlHeader,
pub rect: GpuRect,
pub scanout_id: u32,
pub resource_id: ResourceId,
}
impl SetScanout {
pub fn new(scanout_id: u32, resource_id: ResourceId, rect: GpuRect) -> Self {
Self {
header: ControlHeader::with_ty(CommandTy::SetScanout),
rect,
scanout_id,
resource_id,
}
}
}
#[derive(Debug)]
#[repr(C)]
pub struct XferToHost2d {
pub header: ControlHeader,
pub rect: GpuRect,
pub offset: u64,
pub resource_id: ResourceId,
pub padding: u32,
}
impl XferToHost2d {
pub fn new(resource_id: ResourceId, rect: GpuRect, offset: u64) -> Self {
Self {
header: ControlHeader::with_ty(CommandTy::TransferToHost2d),
rect,
offset,
resource_id,
padding: 0,
}
}
}
#[derive(Debug)]
#[repr(C)]
pub struct GetEdid {
pub header: ControlHeader,
pub scanout: u32,
pub padding: u32,
}
impl GetEdid {
pub fn new(scanout_id: u32) -> Self {
Self {
header: ControlHeader::with_ty(CommandTy::GetEdid),
scanout: scanout_id,
padding: 0,
}
}
}
#[derive(Debug)]
#[repr(C)]
pub struct GetEdidResp {
pub header: ControlHeader,
pub size: u32,
pub padding: u32,
pub edid: [u8; 1024],
}
impl GetEdidResp {
pub fn new() -> Self {
Self {
header: ControlHeader::with_ty(CommandTy::GetEdid),
size: 0,
padding: 0,
edid: [0; 1024],
}
}
}
#[derive(Debug)]
#[repr(C)]
pub struct CursorPos {
pub scanout_id: u32,
pub x: i32,
pub y: i32,
_padding: u32,
}
impl CursorPos {
pub fn new(scanout_id: u32, x: i32, y: i32) -> Self {
Self {
scanout_id,
x,
y,
_padding: 0,
}
}
}
/* VIRTIO_GPU_CMD_UPDATE_CURSOR, VIRTIO_GPU_CMD_MOVE_CURSOR */
#[derive(Debug)]
#[repr(C)]
pub struct UpdateCursor {
pub header: ControlHeader,
pub pos: CursorPos,
pub resource_id: ResourceId,
pub hot_x: i32,
pub hot_y: i32,
_padding: u32,
}
impl UpdateCursor {
pub fn update_cursor(x: i32, y: i32, hot_x: i32, hot_y: i32, resource_id: ResourceId) -> Self {
Self {
header: ControlHeader::with_ty(CommandTy::UpdateCursor),
pos: CursorPos::new(0, x, y),
resource_id,
hot_x,
hot_y,
_padding: 0,
}
}
}
pub struct MoveCursor {
pub header: ControlHeader,
pub pos: CursorPos,
pub resource_id: ResourceId,
pub hot_x: i32,
pub hot_y: i32,
_padding: u32,
}
impl MoveCursor {
pub fn move_cursor(x: i32, y: i32) -> Self {
Self {
header: ControlHeader::with_ty(CommandTy::MoveCursor),
pos: CursorPos::new(0, x, y),
resource_id: ResourceId::NONE,
hot_x: 0,
hot_y: 0,
_padding: 0,
}
}
}
static DEVICE: spin::Once<virtio_core::Device> = spin::Once::new();
fn main() {
pcid_interface::pci_daemon(daemon_runner);
}
fn daemon_runner(redox_daemon: daemon::Daemon, pcid_handle: PciFunctionHandle) -> ! {
daemon(redox_daemon, pcid_handle).unwrap();
unreachable!();
}
fn daemon(daemon: daemon::Daemon, mut pcid_handle: PciFunctionHandle) -> anyhow::Result<()> {
common::setup_logging(
"graphics",
"pci",
"virtio-gpud",
common::output_level(),
common::file_level(),
);
// Double check that we have the right device.
//
// 0x1050 - virtio-gpu
let pci_config = pcid_handle.config();
assert_eq!(pci_config.func.full_device_id.device_id, 0x1050);
log::info!("virtio-gpu: initiating startup sequence :^)");
let device = DEVICE.try_call_once(|| virtio_core::probe_device(&mut pcid_handle))?;
let config = unsafe { &mut *(device.device_space as *mut GpuConfig) };
// Negotiate features.
let has_edid = device.transport.check_device_feature(VIRTIO_GPU_F_EDID);
if has_edid {
device.transport.ack_driver_feature(VIRTIO_GPU_F_EDID);
}
device.transport.finalize_features();
// Queue for sending control commands.
let control_queue = device
.transport
.setup_queue(MSIX_PRIMARY_VECTOR, &device.irq_handle)?;
// Queue for sending cursor updates.
let cursor_queue = device
.transport
.setup_queue(MSIX_PRIMARY_VECTOR, &device.irq_handle)?;
device.transport.setup_config_notify(MSIX_PRIMARY_VECTOR);
device.transport.run_device();
// Needs to be before GpuScheme::new to avoid a deadlock due to initnsmgr blocking on
// /scheme/event as it is already blocked on opening /scheme/display.virtio-gpu.
// FIXME change the initnsmgr to not block on openat for the target scheme.
let event_queue: EventQueue<Source> =
EventQueue::new().expect("virtio-gpud: failed to create event queue");
let mut scheme = scheme::GpuScheme::new(
config,
control_queue.clone(),
cursor_queue.clone(),
device.transport.clone(),
has_edid,
)?;
daemon.ready();
user_data! {
enum Source {
Input,
Scheme,
Interrupt,
}
}
event_queue
.subscribe(
scheme.inputd_event_handle().as_raw_fd() as usize,
Source::Input,
event::EventFlags::READ,
)
.unwrap();
event_queue
.subscribe(
scheme.event_handle().raw(),
Source::Scheme,
event::EventFlags::READ,
)
.unwrap();
event_queue
.subscribe(
device.irq_handle.as_raw_fd() as usize,
Source::Interrupt,
event::EventFlags::READ,
)
.unwrap();
let all = [Source::Input, Source::Scheme, Source::Interrupt];
for event in all
.into_iter()
.chain(event_queue.map(|e| e.expect("virtio-gpud: failed to get next event").user_data))
{
match event {
Source::Input => scheme.handle_vt_events(),
Source::Scheme => {
scheme
.tick()
.expect("virtio-gpud: failed to process scheme events");
}
Source::Interrupt => loop {
let before_gen = device.transport.config_generation();
let events = scheme.adapter().config.events_read.get();
if events & VIRTIO_GPU_EVENT_DISPLAY != 0 {
let (adapter, objects) = scheme.adapter_and_kms_objects_mut();
futures::executor::block_on(async { adapter.update_displays().await.unwrap() });
for connector_id in objects.connector_ids().to_vec() {
adapter.probe_connector(objects, connector_id);
}
scheme.notify_displays_changed();
scheme
.adapter_mut()
.config
.events_clear
.set(VIRTIO_GPU_EVENT_DISPLAY);
}
let after_gen = device.transport.config_generation();
if before_gen == after_gen {
break;
}
},
}
}
std::process::exit(0);
}
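The `Source::Interrupt` arm above re-reads the device's config generation counter around the event check, looping until the generation is stable so that a config change racing with the read is never missed. The pattern can be sketched generically (the `read_stable` helper and its closures are hypothetical illustrations, not driver API):

```rust
/// Read a config snapshot, retrying until the generation counter observed
/// before and after the read matches (i.e. the snapshot is not torn).
fn read_stable<T>(generation: impl Fn() -> u32, read: impl Fn() -> T) -> T {
    loop {
        let before = generation();
        let value = read();
        if generation() == before {
            return value;
        }
        // The device bumped the generation mid-read: the snapshot may be
        // torn, so read again.
    }
}

fn main() {
    // With a constant generation, the first snapshot is returned as-is.
    assert_eq!(read_stable(|| 7, || 42), 42);
    println!("ok");
}
```

The driver's loop is the same idea inverted: it keeps handling display events until `config_generation()` stops changing, rather than retrying a single read.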
@@ -0,0 +1,528 @@
use std::fmt;
use std::sync::{Arc, Mutex};
use common::{dma::Dma, sgl};
use driver_graphics::kms::connector::{KmsConnectorDriver, KmsConnectorStatus};
use driver_graphics::kms::objects::{KmsCrtc, KmsCrtcState, KmsObjectId, KmsObjects};
use driver_graphics::{Buffer as DrmBuffer, CursorPlane, Damage, GraphicsAdapter, GraphicsScheme};
use drm_sys::{
DRM_CAP_CURSOR_HEIGHT, DRM_CAP_CURSOR_WIDTH, DRM_CAP_DUMB_BUFFER, DRM_CAP_DUMB_PREFER_SHADOW,
DRM_CLIENT_CAP_CURSOR_PLANE_HOTSPOT,
};
use syscall::{EINVAL, PAGE_SIZE};
use virtio_core::spec::{Buffer, ChainBuilder, DescriptorFlags};
use virtio_core::transport::{Error, Queue, Transport};
use crate::*;
impl From<Damage> for GpuRect {
fn from(damage: Damage) -> Self {
GpuRect {
x: damage.x,
y: damage.y,
width: damage.width,
height: damage.height,
}
}
}
#[derive(Debug)]
pub struct VirtGpuConnector {
display_id: u32,
}
impl KmsConnectorDriver for VirtGpuConnector {
type State = ();
}
pub struct VirtGpuFramebuffer<'a> {
queue: Arc<Queue<'a>>,
id: ResourceId,
sgl: sgl::Sgl,
width: u32,
height: u32,
}
impl<'a> fmt::Debug for VirtGpuFramebuffer<'a> {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("VirtGpuFramebuffer")
.field("id", &self.id)
.field("sgl", &self.sgl)
.field("width", &self.width)
.field("height", &self.height)
.finish_non_exhaustive()
}
}
impl DrmBuffer for VirtGpuFramebuffer<'_> {
fn size(&self) -> usize {
(self.width * self.height * 4) as usize
}
}
impl Drop for VirtGpuFramebuffer<'_> {
fn drop(&mut self) {
futures::executor::block_on(async {
let request = Dma::new(ResourceUnref::new(self.id)).unwrap();
let header = Dma::new(ControlHeader::default()).unwrap();
let command = ChainBuilder::new()
.chain(Buffer::new(&request))
.chain(Buffer::new(&header).flags(DescriptorFlags::WRITE_ONLY))
.build();
self.queue.send(command).await;
});
}
}
#[derive(Debug, Clone)]
pub struct Display {
enabled: bool,
width: u32,
height: u32,
edid: Vec<u8>,
active_resource: Option<ResourceId>,
}
pub struct VirtGpuAdapter<'a> {
pub config: &'a mut GpuConfig,
control_queue: Arc<Queue<'a>>,
cursor_queue: Arc<Queue<'a>>,
transport: Arc<dyn Transport>,
has_edid: bool,
displays: Vec<Display>,
hidden_cursor: Option<Arc<VirtGpuFramebuffer<'a>>>,
}
impl<'a> fmt::Debug for VirtGpuAdapter<'a> {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("VirtGpuAdapter")
.field("displays", &self.displays)
.finish_non_exhaustive()
}
}
impl VirtGpuAdapter<'_> {
pub async fn update_displays(&mut self) -> Result<(), Error> {
let display_info = self.get_display_info().await?;
let raw_displays = &display_info.display_info[..self.config.num_scanouts() as usize];
self.displays.resize(
raw_displays.len(),
Display {
enabled: false,
width: 0,
height: 0,
edid: vec![],
active_resource: None,
},
);
for (i, info) in raw_displays.iter().enumerate() {
log::info!(
"virtio-gpu: display {i} ({}x{}px)",
info.rect.width,
info.rect.height
);
self.displays[i].enabled = info.enabled != 0;
if info.rect.width == 0 || info.rect.height == 0 {
// QEMU gives all displays other than the first a zero width and height, but trying
// to attach a zero-sized framebuffer to the display will result in an error, so
// default to 640x480px.
self.displays[i].width = 640;
self.displays[i].height = 480;
} else {
self.displays[i].width = info.rect.width;
self.displays[i].height = info.rect.height;
}
if self.has_edid {
let edid = self.get_edid(i as u32).await?;
self.displays[i].edid = edid.edid[..edid.size as usize].to_vec();
}
}
Ok(())
}
async fn send_request<T>(&self, request: Dma<T>) -> Result<Dma<ControlHeader>, Error> {
let header = Dma::new(ControlHeader::default())?;
let command = ChainBuilder::new()
.chain(Buffer::new(&request))
.chain(Buffer::new(&header).flags(DescriptorFlags::WRITE_ONLY))
.build();
self.control_queue.send(command).await;
Ok(header)
}
async fn send_request_fenced<T>(&self, request: Dma<T>) -> Result<Dma<ControlHeader>, Error> {
let mut header = Dma::new(ControlHeader::default())?;
header.flags |= VIRTIO_GPU_FLAG_FENCE;
let command = ChainBuilder::new()
.chain(Buffer::new(&request))
.chain(Buffer::new(&header).flags(DescriptorFlags::WRITE_ONLY))
.build();
self.control_queue.send(command).await;
Ok(header)
}
async fn get_display_info(&self) -> Result<Dma<GetDisplayInfo>, Error> {
let header = Dma::new(ControlHeader::with_ty(CommandTy::GetDisplayInfo))?;
let response = Dma::new(GetDisplayInfo::default())?;
let command = ChainBuilder::new()
.chain(Buffer::new(&header))
.chain(Buffer::new(&response).flags(DescriptorFlags::WRITE_ONLY))
.build();
self.control_queue.send(command).await;
assert_eq!(response.header.ty, CommandTy::RespOkDisplayInfo);
Ok(response)
}
async fn get_edid(&self, scanout_id: u32) -> Result<Dma<GetEdidResp>, Error> {
let header = Dma::new(GetEdid::new(scanout_id))?;
let response = Dma::new(GetEdidResp::new())?;
let command = ChainBuilder::new()
.chain(Buffer::new(&header))
.chain(Buffer::new(&response).flags(DescriptorFlags::WRITE_ONLY))
.build();
self.control_queue.send(command).await;
assert_eq!(response.header.ty, CommandTy::RespOkEdid);
Ok(response)
}
fn update_cursor(
&mut self,
cursor: &VirtGpuFramebuffer,
x: i32,
y: i32,
hot_x: i32,
hot_y: i32,
) {
// Transfer the cursor resource to the host
futures::executor::block_on(async {
let transfer_request = Dma::new(XferToHost2d::new(
cursor.id,
GpuRect {
x: 0,
y: 0,
width: 64,
height: 64,
},
0,
))
.unwrap();
let header = self.send_request_fenced(transfer_request).await.unwrap();
assert_eq!(header.ty, CommandTy::RespOkNodata);
});
// Update the cursor position
let request = Dma::new(UpdateCursor::update_cursor(x, y, hot_x, hot_y, cursor.id)).unwrap();
futures::executor::block_on(async {
let command = ChainBuilder::new().chain(Buffer::new(&request)).build();
self.cursor_queue.send(command).await;
});
}
fn move_cursor(&mut self, x: i32, y: i32) {
let request = Dma::new(MoveCursor::move_cursor(x, y)).unwrap();
futures::executor::block_on(async {
let command = ChainBuilder::new().chain(Buffer::new(&request)).build();
self.cursor_queue.send(command).await;
});
}
fn disable_cursor(&mut self) {
if self.hidden_cursor.is_none() {
let (width, height) = self.hw_cursor_size().unwrap();
let (cursor, stride) = self.create_dumb_buffer(width, height);
unsafe {
core::ptr::write_bytes(
cursor.sgl.as_ptr() as *mut u8,
0,
(stride * height) as usize,
);
}
self.hidden_cursor = Some(Arc::new(cursor));
}
let hidden_cursor = self.hidden_cursor.as_ref().unwrap().clone();
self.update_cursor(&hidden_cursor, 0, 0, 0, 0);
}
}
impl<'a> GraphicsAdapter for VirtGpuAdapter<'a> {
type Connector = VirtGpuConnector;
type Crtc = ();
type Buffer = VirtGpuFramebuffer<'a>;
type Framebuffer = ();
fn name(&self) -> &'static [u8] {
b"virtio-gpud"
}
fn desc(&self) -> &'static [u8] {
b"VirtIO GPU"
}
fn init(&mut self, objects: &mut KmsObjects<Self>) {
futures::executor::block_on(async {
self.update_displays().await.unwrap();
});
for display_id in 0..self.config.num_scanouts.get() {
let crtc = objects.add_crtc((), ());
objects.add_connector(VirtGpuConnector { display_id }, (), &[crtc]);
}
}
fn get_cap(&self, cap: u32) -> syscall::Result<u64> {
match cap {
DRM_CAP_DUMB_BUFFER => Ok(1),
DRM_CAP_DUMB_PREFER_SHADOW => Ok(0),
DRM_CAP_CURSOR_WIDTH => Ok(64),
DRM_CAP_CURSOR_HEIGHT => Ok(64),
_ => Err(syscall::Error::new(EINVAL)),
}
}
fn set_client_cap(&self, cap: u32, _value: u64) -> syscall::Result<()> {
match cap {
// FIXME hide cursor plane unless this client cap is set
DRM_CLIENT_CAP_CURSOR_PLANE_HOTSPOT => Ok(()),
_ => Err(syscall::Error::new(EINVAL)),
}
}
fn probe_connector(&mut self, objects: &mut KmsObjects<Self>, id: KmsObjectId) {
futures::executor::block_on(async {
let mut connector = objects.get_connector(id).unwrap().lock().unwrap();
let display = &self.displays[connector.driver_data.display_id as usize];
connector.connection = if display.enabled {
KmsConnectorStatus::Connected
} else {
KmsConnectorStatus::Disconnected
};
if self.has_edid {
connector.update_from_edid(&display.edid);
drop(connector);
let blob = objects.add_blob(display.edid.clone());
objects.get_connector(id).unwrap().lock().unwrap().edid = blob;
} else {
connector.update_from_size(display.width, display.height);
}
});
}
fn create_dumb_buffer(&mut self, width: u32, height: u32) -> (Self::Buffer, u32) {
futures::executor::block_on(async {
let bpp = 32;
let fb_size = width as usize * height as usize * bpp / 8;
let sgl = sgl::Sgl::new(fb_size).unwrap();
unsafe {
core::ptr::write_bytes(sgl.as_ptr() as *mut u8, 255, fb_size);
}
let res_id = ResourceId::alloc();
// Create a host resource using `VIRTIO_GPU_CMD_RESOURCE_CREATE_2D`.
let request = Dma::new(ResourceCreate2d::new(
res_id,
ResourceFormat::Bgrx,
width,
height,
))
.unwrap();
let header = self.send_request(request).await.unwrap();
assert_eq!(header.ty, CommandTy::RespOkNodata);
// Use the allocated framebuffer from the guest ram, and attach it as backing
// storage to the resource just created, using `VIRTIO_GPU_CMD_RESOURCE_ATTACH_BACKING`.
let mut mem_entries =
unsafe { Dma::zeroed_slice(sgl.chunks().len()).unwrap().assume_init() };
for (entry, chunk) in mem_entries.iter_mut().zip(sgl.chunks().iter()) {
*entry = MemEntry {
address: chunk.phys as u64,
length: chunk.length.next_multiple_of(PAGE_SIZE) as u32,
padding: 0,
};
}
let attach_request =
Dma::new(AttachBacking::new(res_id, mem_entries.len() as u32)).unwrap();
let header = Dma::new(ControlHeader::default()).unwrap();
let command = ChainBuilder::new()
.chain(Buffer::new(&attach_request))
.chain(Buffer::new_unsized(&mem_entries))
.chain(Buffer::new(&header).flags(DescriptorFlags::WRITE_ONLY))
.build();
self.control_queue.send(command).await;
assert_eq!(header.ty, CommandTy::RespOkNodata);
(
VirtGpuFramebuffer {
queue: self.control_queue.clone(),
id: res_id,
sgl,
width,
height,
},
width * 4,
)
})
}
fn map_dumb_buffer(&mut self, buffer: &Self::Buffer) -> *mut u8 {
buffer.sgl.as_ptr()
}
fn create_framebuffer(&mut self, _buffer: &Self::Buffer) -> Self::Framebuffer {
()
}
fn set_crtc(
&mut self,
objects: &KmsObjects<Self>,
crtc: &Mutex<KmsCrtc<Self>>,
state: KmsCrtcState<Self>,
damage: Damage,
) -> syscall::Result<()> {
futures::executor::block_on(async {
let mut crtc = crtc.lock().unwrap();
let framebuffer = state
.fb_id
.map(|fb_id| objects.get_framebuffer(fb_id))
.transpose()?;
crtc.state = state;
for connector in objects.connectors() {
let connector = connector.lock().unwrap();
if connector.state.crtc_id != objects.crtc_ids()[crtc.crtc_index as usize] {
continue;
}
let display_id = connector.driver_data.display_id;
let Some(framebuffer) = framebuffer else {
let scanout_request = Dma::new(SetScanout::new(
display_id,
ResourceId::NONE,
GpuRect::new(0, 0, 0, 0),
))
.unwrap();
let header = self.send_request(scanout_request).await.unwrap();
assert_eq!(header.ty, CommandTy::RespOkNodata);
self.displays[display_id as usize].active_resource = None;
return Ok(());
};
let req = Dma::new(XferToHost2d::new(
framebuffer.buffer.id,
GpuRect {
x: 0,
y: 0,
width: framebuffer.width,
height: framebuffer.height,
},
0,
))
.unwrap();
let header = self.send_request(req).await.unwrap();
assert_eq!(header.ty, CommandTy::RespOkNodata);
// FIXME once we support resizing we also need to check that the current and target size match
if self.displays[display_id as usize].active_resource != Some(framebuffer.buffer.id)
{
let scanout_request = Dma::new(SetScanout::new(
display_id,
framebuffer.buffer.id,
GpuRect::new(0, 0, framebuffer.width, framebuffer.height),
))
.unwrap();
let header = self.send_request(scanout_request).await.unwrap();
assert_eq!(header.ty, CommandTy::RespOkNodata);
self.displays[display_id as usize].active_resource =
Some(framebuffer.buffer.id);
}
let flush = ResourceFlush::new(
framebuffer.buffer.id,
damage.clip(framebuffer.width, framebuffer.height).into(),
);
let header = self.send_request(Dma::new(flush).unwrap()).await.unwrap();
assert_eq!(header.ty, CommandTy::RespOkNodata);
}
Ok(())
})
}
fn hw_cursor_size(&self) -> Option<(u32, u32)> {
Some((64, 64))
}
fn handle_cursor(&mut self, cursor: &CursorPlane<Self::Buffer>, dirty_fb: bool) {
if let Some(buffer) = &cursor.buffer {
if dirty_fb {
self.update_cursor(buffer, cursor.x, cursor.y, cursor.hot_x, cursor.hot_y);
} else {
self.move_cursor(cursor.x, cursor.y);
}
} else if dirty_fb {
self.disable_cursor();
}
}
}
pub struct GpuScheme {}
impl<'a> GpuScheme {
pub fn new(
config: &'a mut GpuConfig,
control_queue: Arc<Queue<'a>>,
cursor_queue: Arc<Queue<'a>>,
transport: Arc<dyn Transport>,
has_edid: bool,
) -> Result<GraphicsScheme<VirtGpuAdapter<'a>>, Error> {
let adapter = VirtGpuAdapter {
config,
control_queue,
cursor_queue,
transport,
has_edid,
displays: vec![],
hidden_cursor: None,
};
Ok(GraphicsScheme::new(
adapter,
"display.virtio-gpu".to_owned(),
false,
))
}
}
@@ -0,0 +1 @@
/target
@@ -0,0 +1,18 @@
[package]
name = "hwd"
description = "ACPI and DeviceTree handling daemon"
version = "0.1.0"
edition = "2018"
[dependencies]
fdt.workspace = true
log.workspace = true
ron.workspace = true
libredox = { workspace = true, default-features = false, features = ["std", "call"] }
amlserde = { path = "../amlserde" }
common = { path = "../common" }
daemon = { path = "../../daemon" }
[lints]
workspace = true
@@ -0,0 +1,111 @@
use amlserde::{AmlSerde, AmlSerdeValue};
use std::{error::Error, fs, process::Command};
use super::Backend;
pub struct AcpiBackend {
rxsdt: Vec<u8>,
}
impl Backend for AcpiBackend {
fn new() -> Result<Self, Box<dyn Error>> {
let rxsdt = fs::read("/scheme/kernel.acpi/rxsdt")?;
// Spawn acpid
//TODO: pass rxsdt data to acpid?
#[allow(deprecated, reason = "we can't yet move this to init")]
daemon::Daemon::spawn(Command::new("acpid"));
Ok(Self { rxsdt })
}
fn probe(&mut self) -> Result<(), Box<dyn Error>> {
// Read symbols from acpi scheme
let entries = fs::read_dir("/scheme/acpi/symbols")?;
// TODO: Reimplement with getdents?
let symbols_fd = libredox::Fd::open(
"/scheme/acpi/symbols",
libredox::flag::O_DIRECTORY | libredox::flag::O_RDONLY,
0,
)?;
for entry_res in entries {
let entry = entry_res?;
if let Some(file_name) = entry.file_name().to_str() {
if file_name.ends_with("_CID") || file_name.ends_with("_HID") {
let symbol_fd = symbols_fd.openat(file_name, libredox::flag::O_RDONLY, 0)?;
let stat = symbol_fd.stat()?;
let mut buf: Vec<u8> = vec![0u8; stat.st_size as usize];
let count = symbol_fd.read(&mut buf)?;
buf.truncate(count);
let ron = String::from_utf8(buf)?;
let AmlSerde { name, value } = ron::from_str(&ron)?;
let id = match value {
AmlSerdeValue::Integer(integer) => {
let vendor = integer & 0xFFFF;
let device = (integer >> 16) & 0xFFFF;
let vendor_rev = ((vendor & 0xFF) << 8) | vendor >> 8;
let vendor_1 = (((vendor_rev >> 10) & 0x1f) as u8 + 64) as char;
let vendor_2 = (((vendor_rev >> 5) & 0x1f) as u8 + 64) as char;
let vendor_3 = (((vendor_rev >> 0) & 0x1f) as u8 + 64) as char;
//TODO: simplify this nibble swap
let device_1 = (device >> 4) & 0xF;
let device_2 = (device >> 0) & 0xF;
let device_3 = (device >> 12) & 0xF;
let device_4 = (device >> 8) & 0xF;
format!(
"{}{}{}{:01X}{:01X}{:01X}{:01X}",
vendor_1,
vendor_2,
vendor_3,
device_1,
device_2,
device_3,
device_4
)
}
AmlSerdeValue::String(string) => string,
_ => {
log::warn!("{}: unsupported value {:x?}", name, value);
continue;
}
};
let what = match id.as_str() {
// https://uefi.org/specs/ACPI/6.5/05_ACPI_Software_Programming_Model.html
"ACPI0003" => "Power source",
"ACPI0006" => "GPE block",
"ACPI0007" => "Processor",
"ACPI0010" => "Processor control",
// https://uefi.org/sites/default/files/resources/devids%20%285%29.txt
"PNP0000" => "AT interrupt controller",
"PNP0100" => "AT timer",
"PNP0103" => "HPET",
"PNP0200" => "AT DMA controller",
"PNP0303" => "IBM Enhanced (101/102-key, PS/2 mouse support)",
"PNP030B" => "PS/2 keyboard",
"PNP0400" => "Standard LPT printer port",
"PNP0501" => "16550A-compatible COM port",
"PNP0A03" | "PNP0A08" => "PCI bus",
"PNP0A05" => "Generic ACPI bus",
"PNP0A06" => "Generic ACPI Extended-IO bus (EIO bus)",
"PNP0B00" => "AT real-time clock",
"PNP0C01" => "System board",
"PNP0C02" => "Reserved resources",
"PNP0C04" => "Math coprocessor",
"PNP0C09" => "Embedded controller",
"PNP0C0A" => "Battery",
"PNP0C0B" => "Fan",
"PNP0C0C" => "Power button",
"PNP0C0D" => "Lid sensor",
"PNP0C0E" => "Sleep button",
"PNP0C0F" => "PCI interrupt link",
"PNP0C50" => "I2C HID",
"PNP0F13" => "PS/2 port for PS/2-style mouse",
_ => "?",
};
log::debug!("{}: {} ({})", name, id, what);
}
}
}
Ok(())
}
}
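The `AmlSerdeValue::Integer` branch above decompresses a 32-bit EISA/PNP ID (as found in ACPI `_HID`/`_CID` values) into its seven-character string form: three 5-bit vendor letters from a byte-swapped half, then four nibble-swapped hex digits. That decoding can be exercised in isolation; a minimal sketch (the `decode_eisa_id` helper name is hypothetical, not part of the driver):

```rust
/// Decode a compressed EISA/PNP ID integer into its seven-character form,
/// e.g. 0x0303D041 -> "PNP0303" (the PS/2 keyboard ID used in the table above).
fn decode_eisa_id(integer: u64) -> String {
    let vendor = integer & 0xFFFF;
    let device = (integer >> 16) & 0xFFFF;
    // The 16-bit vendor half is stored byte-swapped; undo that first.
    let vendor_rev = ((vendor & 0xFF) << 8) | (vendor >> 8);
    // Three 5-bit fields encode the vendor letters as offsets from '@' (0x40).
    let mut id = String::new();
    for shift in [10u64, 5, 0] {
        id.push((((vendor_rev >> shift) & 0x1F) as u8 + 64) as char);
    }
    // The device half is a nibble-swapped 16-bit hex number.
    id.push_str(&format!(
        "{:01X}{:01X}{:01X}{:01X}",
        (device >> 4) & 0xF,
        device & 0xF,
        (device >> 12) & 0xF,
        (device >> 8) & 0xF,
    ));
    id
}

fn main() {
    assert_eq!(decode_eisa_id(0x0303D041), "PNP0303");
    println!("{}", decode_eisa_id(0x0303D041));
}
```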
@@ -0,0 +1,45 @@
use std::{error::Error, fs};
use super::Backend;
pub struct DeviceTreeBackend {
dtb: Vec<u8>,
}
impl DeviceTreeBackend {
fn dump(node: &fdt::node::FdtNode<'_, '_>, level: usize) {
let mut line = String::new();
for _ in 0..level {
line.push_str(" ");
}
line.push_str(node.name);
if let Some(compatible) = node.compatible() {
line.push(':');
for id in compatible.all() {
line.push(' ');
line.push_str(id);
}
}
log::debug!("{}", line);
for child in node.children() {
Self::dump(&child, level + 1);
}
}
}
impl Backend for DeviceTreeBackend {
fn new() -> Result<Self, Box<dyn Error>> {
let dtb = fs::read("/scheme/kernel.dtb")?;
// Validate the blob up front so construction fails fast on a corrupt DTB.
fdt::Fdt::new(&dtb).map_err(|err| format!("failed to parse dtb: {}", err))?;
Ok(Self { dtb })
}
fn probe(&mut self) -> Result<(), Box<dyn Error>> {
let dt = fdt::Fdt::new(&self.dtb).map_err(|err| format!("failed to parse dtb: {}", err))?;
let root = dt
.find_node("/")
.ok_or("failed to find root node")?;
Self::dump(&root, 0);
Ok(())
}
}
@@ -0,0 +1,16 @@
use std::error::Error;
use super::Backend;
pub struct LegacyBackend;
impl Backend for LegacyBackend {
fn new() -> Result<Self, Box<dyn Error>> {
Ok(Self)
}
fn probe(&mut self) -> Result<(), Box<dyn Error>> {
log::info!("TODO: handle driver spawning from legacy backend");
Ok(())
}
}
@@ -0,0 +1,14 @@
use std::error::Error;
mod acpi;
mod devicetree;
mod legacy;
pub use self::{acpi::AcpiBackend, devicetree::DeviceTreeBackend, legacy::LegacyBackend};
pub trait Backend {
fn new() -> Result<Self, Box<dyn Error>>
where
Self: Sized;
fn probe(&mut self) -> Result<(), Box<dyn Error>>;
}
@@ -0,0 +1,59 @@
use std::process;
mod backend;
use self::backend::{AcpiBackend, Backend, DeviceTreeBackend, LegacyBackend};
fn daemon(daemon: daemon::Daemon) -> ! {
common::setup_logging(
"misc",
"hwd",
"hwd",
common::output_level(),
common::file_level(),
);
// Prefer DTB if available (matches kernel preference)
let mut backend: Box<dyn Backend> = match DeviceTreeBackend::new() {
Ok(ok) => {
log::info!("using devicetree backend");
Box::new(ok)
}
Err(err) => {
log::debug!("cannot use devicetree backend: {}", err);
match AcpiBackend::new() {
Ok(ok) => {
log::info!("using ACPI backend");
Box::new(ok)
}
Err(err) => {
log::debug!("cannot use ACPI backend: {}", err);
log::info!("using legacy backend");
Box::new(LegacyBackend)
}
}
}
};
//TODO: launch pcid based on backend information?
// Must launch after acpid but before probe calls /scheme/acpi/symbols
#[allow(deprecated, reason = "we can't yet move this to init")]
daemon::Daemon::spawn(process::Command::new("pcid"));
daemon.ready();
//TODO: HWD is meant to locate PCI/XHCI/etc devices in ACPI and DeviceTree definitions and start their drivers
match backend.probe() {
Ok(()) => {
process::exit(0);
}
Err(err) => {
log::error!("failed to probe with error {}", err);
process::exit(1);
}
}
}
fn main() {
daemon::Daemon::new(daemon);
}
@@ -0,0 +1,37 @@
## Drivers for InitFS ##
# ahcid
[[drivers]]
name = "AHCI storage"
class = 1
subclass = 6
command = ["/scheme/initfs/lib/drivers/ahcid"]
# ided
[[drivers]]
name = "IDE storage"
class = 1
subclass = 1
command = ["/scheme/initfs/lib/drivers/ided"]
# nvmed
[[drivers]]
name = "NVME storage"
class = 1
subclass = 8
command = ["/scheme/initfs/lib/drivers/nvmed"]
[[drivers]]
name = "virtio-blk"
class = 1
subclass = 0
vendor = 0x1AF4
device = 0x1001
command = ["/scheme/initfs/lib/drivers/virtio-blkd"]
[[drivers]]
name = "virtio-gpu"
class = 3
vendor = 0x1AF4
device = 0x1050
command = ["/scheme/initfs/lib/drivers/virtio-gpud"]
@@ -0,0 +1 @@
/target
@@ -0,0 +1,21 @@
[package]
name = "ps2d"
description = "PS/2 driver"
version = "0.1.0"
edition = "2018"
[dependencies]
bitflags.workspace = true
log.workspace = true
orbclient.workspace = true
redox_event.workspace = true
redox_syscall.workspace = true
redox-scheme.workspace = true
libredox.workspace = true
common = { path = "../../common" }
daemon = { path = "../../../daemon" }
inputd = { path = "../../inputd" }
[lints]
workspace = true
@@ -0,0 +1,389 @@
//! PS/2 controller, see:
//! - https://wiki.osdev.org/I8042_PS/2_Controller
//! - http://www.mcamafia.de/pdf/ibm_hitrc07.pdf
use common::{
io::{Io, ReadOnly, WriteOnly},
timeout::Timeout,
};
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
use common::io::Pio;
#[cfg(not(any(target_arch = "x86", target_arch = "x86_64")))]
use common::io::Mmio;
use log::{debug, error, info, trace, warn};
use std::fmt;
#[derive(Debug)]
pub enum Error {
CommandRetry,
NoMoreTries,
ReadTimeout,
WriteTimeout,
CommandTimeout(Command),
WriteConfigTimeout(ConfigFlags),
KeyboardCommandFail(KeyboardCommand),
KeyboardCommandDataFail(KeyboardCommandData),
}
bitflags! {
pub struct StatusFlags: u8 {
const OUTPUT_FULL = 1;
const INPUT_FULL = 1 << 1;
const SYSTEM = 1 << 2;
const COMMAND = 1 << 3;
// Chipset specific
const KEYBOARD_LOCK = 1 << 4;
// Chipset specific
const SECOND_OUTPUT_FULL = 1 << 5;
const TIME_OUT = 1 << 6;
const PARITY = 1 << 7;
}
}
bitflags! {
#[derive(Clone, Copy, Debug)]
pub struct ConfigFlags: u8 {
const FIRST_INTERRUPT = 1 << 0;
const SECOND_INTERRUPT = 1 << 1;
const POST_PASSED = 1 << 2;
// 1 << 3 should be zero
const CONFIG_RESERVED_3 = 1 << 3;
const FIRST_DISABLED = 1 << 4;
const SECOND_DISABLED = 1 << 5;
const FIRST_TRANSLATE = 1 << 6;
// 1 << 7 should be zero
const CONFIG_RESERVED_7 = 1 << 7;
}
}
#[derive(Clone, Copy, Debug)]
#[repr(u8)]
#[allow(dead_code)]
enum Command {
ReadConfig = 0x20,
WriteConfig = 0x60,
DisableSecond = 0xA7,
EnableSecond = 0xA8,
TestSecond = 0xA9,
TestController = 0xAA,
TestFirst = 0xAB,
Diagnostic = 0xAC,
DisableFirst = 0xAD,
EnableFirst = 0xAE,
WriteSecond = 0xD4,
}
#[derive(Clone, Copy, Debug)]
#[repr(u8)]
#[allow(dead_code)]
enum KeyboardCommand {
EnableReporting = 0xF4,
SetDefaultsDisable = 0xF5,
SetDefaults = 0xF6,
Reset = 0xFF,
}
#[derive(Clone, Copy, Debug)]
#[repr(u8)]
enum KeyboardCommandData {
ScancodeSet = 0xF0,
}
// Default timeout in microseconds
const DEFAULT_TIMEOUT: u64 = 50_000;
// Reset timeout in microseconds
const RESET_TIMEOUT: u64 = 1_000_000;
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
pub struct Ps2 {
data: Pio<u8>,
status: ReadOnly<Pio<u8>>,
command: WriteOnly<Pio<u8>>,
//TODO: keep in state instead
pub mouse_resets: usize,
}
#[cfg(not(any(target_arch = "x86", target_arch = "x86_64")))]
pub struct Ps2 {
data: Mmio<u8>,
status: ReadOnly<Mmio<u8>>,
command: WriteOnly<Mmio<u8>>,
//TODO: keep in state instead
pub mouse_resets: usize,
}
impl Ps2 {
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
pub fn new() -> Self {
Ps2 {
data: Pio::new(0x60),
status: ReadOnly::new(Pio::new(0x64)),
command: WriteOnly::new(Pio::new(0x64)),
mouse_resets: 0,
}
}
#[cfg(not(any(target_arch = "x86", target_arch = "x86_64")))]
pub fn new() -> Self {
unimplemented!()
}
fn status(&mut self) -> StatusFlags {
StatusFlags::from_bits_truncate(self.status.read())
}
fn wait_read(&mut self, micros: u64) -> Result<(), Error> {
let timeout = Timeout::from_micros(micros);
loop {
if self.status().contains(StatusFlags::OUTPUT_FULL) {
return Ok(());
}
timeout.run().map_err(|()| Error::ReadTimeout)?
}
}
fn wait_write(&mut self, micros: u64) -> Result<(), Error> {
let timeout = Timeout::from_micros(micros);
loop {
if !self.status().contains(StatusFlags::INPUT_FULL) {
return Ok(());
}
timeout.run().map_err(|()| Error::WriteTimeout)?
}
}
fn command(&mut self, command: Command) -> Result<(), Error> {
self.wait_write(DEFAULT_TIMEOUT)
.map_err(|_| Error::CommandTimeout(command))?;
self.command.write(command as u8);
Ok(())
}
fn read(&mut self) -> Result<u8, Error> {
self.read_timeout(DEFAULT_TIMEOUT)
}
fn read_timeout(&mut self, micros: u64) -> Result<u8, Error> {
self.wait_read(micros)?;
let data = self.data.read();
Ok(data)
}
fn write(&mut self, data: u8) -> Result<(), Error> {
self.wait_write(DEFAULT_TIMEOUT)?;
self.data.write(data);
Ok(())
}
fn retry<T, F: Fn(&mut Self) -> Result<T, Error>>(
&mut self,
name: fmt::Arguments,
retries: usize,
f: F,
) -> Result<T, Error> {
trace!("{}", name);
let mut res = Err(Error::NoMoreTries);
for retry in 0..retries {
res = f(self);
match res {
Ok(ok) => {
return Ok(ok);
}
Err(ref err) => {
debug!("{}: retry {}/{}: {:?}", name, retry + 1, retries, err);
}
}
}
res
}
fn config(&mut self) -> Result<ConfigFlags, Error> {
self.retry(format_args!("read config"), 4, |x| {
x.command(Command::ReadConfig)?;
x.read()
})
.map(ConfigFlags::from_bits_truncate)
}
fn set_config(&mut self, config: ConfigFlags) -> Result<(), Error> {
self.retry(format_args!("write config {:?}", config), 4, |x| {
x.command(Command::WriteConfig)?;
x.write(config.bits())
.map_err(|_| Error::WriteConfigTimeout(config))?;
Ok(0)
})?;
Ok(())
}
fn keyboard_command_inner(&mut self, command: u8) -> Result<u8, Error> {
self.write(command)?;
match self.read()? {
0xFE => Err(Error::CommandRetry),
value => Ok(value),
}
}
fn keyboard_command(&mut self, command: KeyboardCommand) -> Result<u8, Error> {
self.retry(format_args!("keyboard command {:?}", command), 4, |x| {
x.keyboard_command_inner(command as u8)
.map_err(|_| Error::KeyboardCommandFail(command))
})
}
fn keyboard_command_data(
&mut self,
command: KeyboardCommandData,
data: u8,
) -> Result<u8, Error> {
self.retry(
format_args!("keyboard command {:?} {:#x}", command, data),
4,
|x| {
let res = x
.keyboard_command_inner(command as u8)
.map_err(|_| Error::KeyboardCommandDataFail(command))?;
if res != 0xFA {
warn!("keyboard returned unexpected result for {command:?}: {res:02X}");
return Ok(res);
}
x.write(data)?;
x.read()
},
)
}
pub fn mouse_command_async(&mut self, command: u8) -> Result<(), Error> {
self.command(Command::WriteSecond)?;
self.write(command)
}
pub fn next(&mut self) -> Option<(bool, u8)> {
let status = self.status();
if status.contains(StatusFlags::OUTPUT_FULL) {
let data = self.data.read();
Some((!status.contains(StatusFlags::SECOND_OUTPUT_FULL), data))
} else {
None
}
}
pub fn init_keyboard(&mut self) -> Result<(), Error> {
let mut b;
{
// Enable first device
self.command(Command::EnableFirst)?;
}
{
// Reset keyboard
b = self.keyboard_command(KeyboardCommand::Reset)?;
if b == 0xFA {
b = self.read().unwrap_or(0);
if b != 0xAA {
error!("keyboard failed self test: {:02X}", b);
}
} else {
error!("keyboard failed to reset: {:02X}", b);
}
}
{
// Set scancode set to 2
let scancode_set = 2;
b = self.keyboard_command_data(KeyboardCommandData::ScancodeSet, scancode_set)?;
if b != 0xFA {
error!(
"keyboard failed to set scancode set {}: {:02X}",
scancode_set, b
);
}
}
Ok(())
}
pub fn init(&mut self) -> Result<(), Error> {
{
// Disable devices
self.command(Command::DisableFirst)?;
self.command(Command::DisableSecond)?;
}
// Disable clocks, disable interrupts, and disable translate
{
// Since the default config may have interrupts enabled, and the kernel may eat up
// our data in that case, we will write a config without reading the current one
let config = ConfigFlags::POST_PASSED
| ConfigFlags::FIRST_DISABLED
| ConfigFlags::SECOND_DISABLED;
self.set_config(config)?;
}
// The keyboard seems to still collect bytes even when we disable
// the port, so we must disable the keyboard too
self.retry(format_args!("keyboard defaults"), 4, |x| {
// Set defaults and disable scanning
let b = x.keyboard_command(KeyboardCommand::SetDefaultsDisable)?;
if b != 0xFA {
error!("keyboard failed to set defaults: {:02X}", b);
return Err(Error::CommandRetry);
}
Ok(b)
})?;
{
// Perform the self test
self.command(Command::TestController)?;
let r = self.read()?;
if r != 0x55 {
warn!("self test unexpected value: {:02X}", r);
}
}
// Initialize keyboard
if let Err(err) = self.init_keyboard() {
error!("failed to initialize keyboard: {:?}", err);
return Err(err);
}
// Enable second device
let enable_mouse = match self.command(Command::EnableSecond) {
Ok(()) => true,
Err(err) => {
error!("failed to initialize mouse: {:?}", err);
false
}
};
{
// Enable keyboard data reporting
// Use inner function to prevent retries
// Response is ignored since scanning is now on
if let Err(err) = self.keyboard_command_inner(KeyboardCommand::EnableReporting as u8) {
error!("failed to initialize keyboard reporting: {:?}", err);
//TODO: fix by using interrupts?
}
}
// Enable clocks and interrupts
{
let config = ConfigFlags::POST_PASSED
| ConfigFlags::FIRST_INTERRUPT
| ConfigFlags::FIRST_TRANSLATE
| if enable_mouse {
ConfigFlags::SECOND_INTERRUPT
} else {
ConfigFlags::SECOND_DISABLED
};
self.set_config(config)?;
}
Ok(())
}
}
@@ -0,0 +1,135 @@
#[macro_use]
extern crate bitflags;
extern crate orbclient;
extern crate syscall;
use std::fs::OpenOptions;
use std::io::Read;
use std::os::unix::fs::OpenOptionsExt;
use std::os::unix::io::AsRawFd;
use std::process;
use common::acquire_port_io_rights;
use event::{user_data, EventQueue};
use inputd::ProducerHandle;
use crate::state::Ps2d;
mod controller;
mod mouse;
mod state;
mod vm;
fn daemon(daemon: daemon::Daemon) -> ! {
common::setup_logging(
"input",
"ps2",
"ps2",
common::output_level(),
common::file_level(),
);
acquire_port_io_rights().expect("ps2d: failed to get I/O permission");
let input = ProducerHandle::new().expect("ps2d: failed to open input producer");
user_data! {
enum Source {
Keyboard,
Mouse,
Time,
}
}
let event_queue: EventQueue<Source> =
EventQueue::new().expect("ps2d: failed to create event queue");
let mut key_file = OpenOptions::new()
.read(true)
.write(true)
.custom_flags(syscall::O_NONBLOCK as i32)
.open("/scheme/serio/0")
.expect("ps2d: failed to open /scheme/serio/0");
event_queue
.subscribe(
key_file.as_raw_fd() as usize,
Source::Keyboard,
event::EventFlags::READ,
)
.unwrap();
let mut mouse_file = OpenOptions::new()
.read(true)
.write(true)
.custom_flags(syscall::O_NONBLOCK as i32)
.open("/scheme/serio/1")
.expect("ps2d: failed to open /scheme/serio/1");
event_queue
.subscribe(
mouse_file.as_raw_fd() as usize,
Source::Mouse,
event::EventFlags::READ,
)
.unwrap();
let time_file = OpenOptions::new()
.read(true)
.write(true)
.custom_flags(syscall::O_NONBLOCK as i32)
.open(format!("/scheme/time/{}", syscall::CLOCK_MONOTONIC))
.expect("ps2d: failed to open /scheme/time");
event_queue
.subscribe(
time_file.as_raw_fd() as usize,
Source::Time,
event::EventFlags::READ,
)
.unwrap();
libredox::call::setrens(0, 0).expect("ps2d: failed to enter null namespace");
daemon.ready();
let mut ps2d = Ps2d::new(input, time_file);
let mut data = [0; 256];
for event in event_queue.map(|e| e.expect("ps2d: failed to get next event").user_data) {
// There are some gotchas with PS/2 controllers that require this weird
// way of doing things. You read key and mouse data from the same
// place. There is a status register that may show which device the
// data came from, but even when it is implemented it can have a race
// condition causing keyboard data to be read as mouse data.
//
// Due to this, we have a kernel driver doing a small amount of work
// to grab bytes and sort them based on the source.
let (file, keyboard) = match event {
Source::Keyboard => (&mut key_file, true),
Source::Mouse => (&mut mouse_file, false),
Source::Time => {
ps2d.time_event();
continue;
}
};
loop {
let count = match file.read(&mut data) {
Ok(0) => break,
Ok(count) => count,
Err(_) => break,
};
for i in 0..count {
ps2d.handle(keyboard, data[i]);
}
}
}
process::exit(0);
}
fn main() {
daemon::Daemon::new(daemon);
}
@@ -0,0 +1,387 @@
use crate::controller::Ps2;
use std::time::Duration;
pub const RESET_RETRIES: usize = 10;
pub const RESET_TIMEOUT: Duration = Duration::from_millis(1000);
pub const COMMAND_TIMEOUT: Duration = Duration::from_millis(100);
#[derive(Clone, Copy, Debug)]
#[repr(u8)]
#[allow(dead_code)]
enum MouseCommand {
SetScaling1To1 = 0xE6,
SetScaling2To1 = 0xE7,
StatusRequest = 0xE9,
GetDeviceId = 0xF2,
EnableReporting = 0xF4,
SetDefaultsDisable = 0xF5,
SetDefaults = 0xF6,
Reset = 0xFF,
}
#[derive(Clone, Copy, Debug)]
#[repr(u8)]
enum MouseCommandData {
SetResolution = 0xE8,
SetSampleRate = 0xF3,
}
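/// Tracks a multi-byte mouse command transaction. Each byte in `write` is
/// sent one at a time, and the next byte is only sent after the mouse ACKs
/// (0xFA) the previous one; any bytes received after the writes complete are
/// collected into `read` until `read_bytes` bytes have arrived.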
#[derive(Debug)]
struct MouseTx {
write: &'static [u8],
write_i: usize,
read: Vec<u8>,
read_bytes: usize,
}
impl MouseTx {
fn new(write: &'static [u8], read_bytes: usize, ps2: &mut Ps2) -> Result<Self, ()> {
let mut this = Self {
write,
write_i: 0,
read: Vec::with_capacity(read_bytes),
read_bytes,
};
this.try_write(ps2)?;
Ok(this)
}
fn try_write(&mut self, ps2: &mut Ps2) -> Result<(), ()> {
if let Some(write) = self.write.get(self.write_i) {
if let Err(err) = ps2.mouse_command_async(*write) {
log::error!("failed to write {:02X} to mouse: {:?}", write, err);
return Err(());
}
}
Ok(())
}
fn handle(&mut self, data: u8, ps2: &mut Ps2) -> Result<bool, ()> {
if self.write_i < self.write.len() {
if data == 0xFA {
self.write_i += 1;
self.try_write(ps2)?;
} else {
log::error!("unknown mouse response {:02X}", data);
return Err(());
}
} else {
self.read.push(data);
}
Ok(self.write_i >= self.write.len() && self.read.len() >= self.read_bytes)
}
}
#[derive(Clone, Copy, Debug)]
#[repr(u8)]
#[allow(dead_code)]
enum MouseId {
/// Mouse sends three bytes
Base = 0x00,
/// Mouse sends fourth byte with scroll
Intellimouse1 = 0x03,
/// Mouse sends fourth byte with scroll, button 4, and button 5
//TODO: support this mouse type
Intellimouse2 = 0x04,
}
// From Synaptics TouchPad Interfacing Guide
#[derive(Clone, Copy, Debug)]
#[repr(u8)]
pub enum TouchpadCommand {
Identify = 0x00,
}
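/// Mouse initialization state machine.
///
/// Rough happy path for a plain PS/2 mouse (derived from the transitions
/// implemented below):
/// Init -> Reset -> Bat -> IdentifyTouchpad -> Status -> DeviceId
/// -> EnableReporting -> Streaming,
/// with EnableIntellimouse entered when touchpad identification fails, and
/// reset() retried (up to RESET_RETRIES times) on unexpected responses.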
#[derive(Debug)]
pub enum MouseState {
/// No mouse found
None,
/// Ready to initialize mouse
Init,
/// Reset command is sent
Reset,
/// BAT completion code returned
Bat,
/// Identify touchpad
IdentifyTouchpad { tx: MouseTx },
/// Enable intellimouse features
EnableIntellimouse { tx: MouseTx },
/// Status request
Status { index: usize },
/// Device ID update
DeviceId,
/// Enable reporting command sent
EnableReporting { id: u8 },
/// Mouse is streaming
Streaming { id: u8 },
}
#[derive(Debug)]
#[must_use]
pub enum MouseResult {
None,
Packet(u8, bool),
Timeout(Duration),
}
impl MouseState {
pub fn reset(&mut self, ps2: &mut Ps2) -> MouseResult {
if ps2.mouse_resets < RESET_RETRIES {
ps2.mouse_resets += 1;
} else {
log::error!("tried to reset mouse {} times, giving up", ps2.mouse_resets);
*self = MouseState::None;
return MouseResult::None;
}
match ps2.mouse_command_async(MouseCommand::Reset as u8) {
Ok(()) => {
*self = MouseState::Reset;
MouseResult::Timeout(RESET_TIMEOUT)
}
Err(err) => {
log::error!("failed to send mouse reset command: {:?}", err);
//TODO: retry reset?
*self = MouseState::None;
MouseResult::None
}
}
}
fn enable_reporting(&mut self, id: u8, ps2: &mut Ps2) -> MouseResult {
match ps2.mouse_command_async(MouseCommand::EnableReporting as u8) {
Ok(()) => {
*self = MouseState::EnableReporting { id };
MouseResult::Timeout(COMMAND_TIMEOUT)
}
Err(err) => {
log::error!("failed to enable mouse reporting: {:?}", err);
//TODO: reset mouse?
*self = MouseState::None;
MouseResult::None
}
}
}
fn request_status(&mut self, ps2: &mut Ps2) -> MouseResult {
match ps2.mouse_command_async(MouseCommand::StatusRequest as u8) {
Ok(()) => {
*self = MouseState::Status { index: 0 };
MouseResult::Timeout(COMMAND_TIMEOUT)
}
Err(err) => {
log::error!("failed to request mouse status: {:?}", err);
//TODO: reset mouse instead?
self.request_id(ps2)
}
}
}
fn request_id(&mut self, ps2: &mut Ps2) -> MouseResult {
match ps2.mouse_command_async(MouseCommand::GetDeviceId as u8) {
Ok(()) => {
*self = MouseState::DeviceId;
MouseResult::Timeout(COMMAND_TIMEOUT)
}
Err(err) => {
log::error!("failed to request mouse id: {:?}", err);
//TODO: reset mouse instead?
self.enable_reporting(MouseId::Base as u8, ps2)
}
}
}
fn identify_touchpad(&mut self, ps2: &mut Ps2) -> MouseResult {
let cmd = TouchpadCommand::Identify as u8;
match MouseTx::new(
&[
// Ensure command alignment
MouseCommand::SetScaling1To1 as u8,
// Send special identify touchpad command
MouseCommandData::SetResolution as u8,
0,
MouseCommandData::SetResolution as u8,
0,
MouseCommandData::SetResolution as u8,
0,
MouseCommandData::SetResolution as u8,
0,
// Status request
MouseCommand::StatusRequest as u8,
],
3,
ps2,
) {
Ok(tx) => {
*self = MouseState::IdentifyTouchpad { tx };
MouseResult::Timeout(COMMAND_TIMEOUT)
}
Err(()) => self.enable_intellimouse(ps2),
}
}
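// The 200/100/80 sample-rate sequence below is the standard "magic knock"
// that switches a wheel mouse into Intellimouse mode, after which
// GetDeviceId reports 0x03 and the mouse sends a fourth packet byte
// carrying scroll data.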
fn enable_intellimouse(&mut self, ps2: &mut Ps2) -> MouseResult {
match MouseTx::new(
&[
MouseCommandData::SetSampleRate as u8,
200,
MouseCommandData::SetSampleRate as u8,
100,
MouseCommandData::SetSampleRate as u8,
80,
],
0,
ps2,
) {
Ok(tx) => {
*self = MouseState::EnableIntellimouse { tx };
MouseResult::Timeout(COMMAND_TIMEOUT)
}
Err(()) => self.request_id(ps2),
}
}
pub fn handle(&mut self, data: u8, ps2: &mut Ps2) -> MouseResult {
match *self {
MouseState::None | MouseState::Init => {
//TODO: enable port in this case, mouse hotplug may send 0xAA 0x00
log::error!(
"received mouse byte {:02X} when mouse not initialized",
data
);
MouseResult::None
}
MouseState::Reset => {
if data == 0xFA {
log::debug!("mouse reset ok");
MouseResult::Timeout(RESET_TIMEOUT)
} else if data == 0xAA {
log::debug!("BAT completed");
*self = MouseState::Bat;
MouseResult::Timeout(COMMAND_TIMEOUT)
} else {
log::warn!("unknown mouse response {:02X} after reset", data);
self.reset(ps2)
}
}
MouseState::Bat => {
if data == MouseId::Base as u8 {
// Enable intellimouse features
log::debug!("BAT mouse id {:02X} (base)", data);
self.identify_touchpad(ps2)
} else if data == MouseId::Intellimouse1 as u8 {
// Extra packet already enabled
log::debug!("BAT mouse id {:02X} (intellimouse)", data);
self.enable_reporting(data, ps2)
} else {
log::warn!("unknown mouse id {:02X} after BAT", data);
MouseResult::Timeout(RESET_TIMEOUT)
}
}
MouseState::IdentifyTouchpad { ref mut tx } => {
match tx.handle(data, ps2) {
Ok(done) => {
if done {
//TODO: handle touchpad identification
// If tx.read[1] == 0x47, this is a synaptics touchpad
self.request_status(ps2)
} else {
MouseResult::Timeout(COMMAND_TIMEOUT)
}
}
Err(()) => self.enable_intellimouse(ps2),
}
}
MouseState::EnableIntellimouse { ref mut tx } => match tx.handle(data, ps2) {
Ok(done) => {
if done {
self.request_status(ps2)
} else {
MouseResult::Timeout(COMMAND_TIMEOUT)
}
}
Err(()) => self.request_status(ps2),
},
MouseState::Status { index } => {
match index {
0 => {
//TODO: check response
*self = MouseState::Status { index: 1 };
MouseResult::Timeout(COMMAND_TIMEOUT)
}
1 => {
*self = MouseState::Status { index: 2 };
MouseResult::Timeout(COMMAND_TIMEOUT)
}
2 => {
*self = MouseState::Status { index: 3 };
MouseResult::Timeout(COMMAND_TIMEOUT)
}
_ => self.request_id(ps2),
}
}
MouseState::DeviceId => {
if data == 0xFA {
// Command OK response
//TODO: handle this separately?
MouseResult::Timeout(COMMAND_TIMEOUT)
} else if data == MouseId::Base as u8 || data == MouseId::Intellimouse1 as u8 {
log::debug!("mouse id {:02X}", data);
self.enable_reporting(data, ps2)
} else {
log::warn!("unknown mouse id {:02X} after requesting id", data);
self.reset(ps2)
}
}
MouseState::EnableReporting { id } => {
log::debug!("mouse id {:02X} enable reporting {:02X}", id, data);
//TODO: handle response ok/error
*self = MouseState::Streaming { id };
MouseResult::None
}
MouseState::Streaming { id } => {
MouseResult::Packet(data, id == MouseId::Intellimouse1 as u8)
}
}
}
pub fn handle_timeout(&mut self, ps2: &mut Ps2) -> MouseResult {
match *self {
MouseState::None | MouseState::Streaming { .. } => MouseResult::None,
MouseState::Init => {
// The state uses a timeout on init to request a reset
self.reset(ps2)
}
MouseState::Reset => {
log::warn!("timeout waiting for mouse reset");
self.reset(ps2)
}
MouseState::Bat => {
log::warn!("timeout waiting for BAT completion");
self.reset(ps2)
}
MouseState::IdentifyTouchpad { .. } => {
//TODO: retry?
log::warn!("timeout identifying touchpad");
self.request_status(ps2)
}
MouseState::EnableIntellimouse { .. } => {
//TODO: retry?
log::warn!("timeout enabling intellimouse");
self.request_status(ps2)
}
MouseState::Status { index } => {
log::warn!("timeout waiting for mouse status {}", index);
self.request_id(ps2)
}
MouseState::DeviceId => {
log::warn!("timeout requesting mouse id");
self.enable_reporting(0, ps2)
}
MouseState::EnableReporting { id } => {
log::warn!("timeout enabling reporting");
//TODO: limit number of retries
self.enable_reporting(id, ps2)
}
}
}
}
@@ -0,0 +1,487 @@
use inputd::ProducerHandle;
use log::{error, warn};
use orbclient::{ButtonEvent, KeyEvent, MouseEvent, MouseRelativeEvent, ScrollEvent};
use std::{
convert::TryInto,
fs::File,
io::{Read, Write},
time::Duration,
};
use syscall::TimeSpec;
use crate::controller::Ps2;
use crate::mouse::{MouseResult, MouseState};
use crate::vm;
bitflags! {
pub struct MousePacketFlags: u8 {
const LEFT_BUTTON = 1;
const RIGHT_BUTTON = 1 << 1;
const MIDDLE_BUTTON = 1 << 2;
const ALWAYS_ON = 1 << 3;
const X_SIGN = 1 << 4;
const Y_SIGN = 1 << 5;
const X_OVERFLOW = 1 << 6;
const Y_OVERFLOW = 1 << 7;
}
}
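// Example: a first packet byte of 0x09 (0b0000_1001) decodes to
// LEFT_BUTTON | ALWAYS_ON, i.e. left button held with no sign or
// overflow bits set.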
fn timespec_from_duration(duration: Duration) -> TimeSpec {
TimeSpec {
tv_sec: duration.as_secs().try_into().unwrap(),
tv_nsec: duration.subsec_nanos().try_into().unwrap(),
}
}
fn duration_from_timespec(timespec: TimeSpec) -> Duration {
Duration::new(
timespec.tv_sec.try_into().unwrap(),
timespec.tv_nsec.try_into().unwrap(),
)
}
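// For example, Duration::from_millis(1500) round-trips through these
// helpers as TimeSpec { tv_sec: 1, tv_nsec: 500_000_000 }.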
pub struct Ps2d {
ps2: Ps2,
vmmouse: bool,
vmmouse_relative: bool,
input: ProducerHandle,
time_file: File,
extended: bool,
mouse_x: i32,
mouse_y: i32,
mouse_left: bool,
mouse_middle: bool,
mouse_right: bool,
mouse_state: MouseState,
mouse_timeout: Option<TimeSpec>,
packets: [u8; 4],
packet_i: usize,
}
impl Ps2d {
pub fn new(input: ProducerHandle, time_file: File) -> Self {
let mut ps2 = Ps2::new();
ps2.init().expect("failed to initialize");
// FIXME add an option for orbital to disable this when an app captures the mouse.
let vmmouse_relative = false;
let vmmouse = vm::enable(vmmouse_relative);
// TODO: QEMU hack, maybe do this when Init times out?
if vmmouse {
// 3 = MouseId::Intellimouse1
MouseState::Bat.handle(3, &mut ps2);
}
let mut this = Ps2d {
ps2,
vmmouse,
vmmouse_relative,
input,
time_file,
extended: false,
mouse_x: 0,
mouse_y: 0,
mouse_left: false,
mouse_middle: false,
mouse_right: false,
mouse_state: MouseState::Init,
mouse_timeout: None,
packets: [0; 4],
packet_i: 0,
};
if !this.vmmouse {
// This triggers initializing the mouse
this.handle_mouse(None);
}
this
}
pub fn irq(&mut self) {
while let Some((keyboard, data)) = self.ps2.next() {
self.handle(keyboard, data);
}
}
pub fn time_event(&mut self) {
let mut time = TimeSpec::default();
match self.time_file.read(&mut time) {
Ok(_count) => {}
Err(err) => {
log::error!("failed to read time file: {}", err);
return;
}
}
if let Some(mouse_timeout) = self.mouse_timeout {
if time.tv_sec > mouse_timeout.tv_sec
|| (time.tv_sec == mouse_timeout.tv_sec && time.tv_nsec >= mouse_timeout.tv_nsec)
{
self.handle_mouse(None);
}
}
}
pub fn handle(&mut self, keyboard: bool, data: u8) {
if keyboard {
if data == 0xE0 {
self.extended = true;
} else {
let (ps2_scancode, pressed) = if data >= 0x80 {
(data - 0x80, false)
} else {
(data, true)
};
let scancode = if self.extended {
self.extended = false;
match ps2_scancode {
0x1C => orbclient::K_NUM_ENTER,
0x1D => orbclient::K_RIGHT_CTRL,
0x20 => orbclient::K_VOLUME_TOGGLE,
0x22 => orbclient::K_MEDIA_PLAY_PAUSE,
0x24 => orbclient::K_MEDIA_STOP,
0x10 => orbclient::K_MEDIA_REWIND,
0x19 => orbclient::K_MEDIA_FAST_FORWARD,
0x2E => orbclient::K_VOLUME_DOWN,
0x30 => orbclient::K_VOLUME_UP,
0x35 => orbclient::K_NUM_SLASH,
0x38 => orbclient::K_ALT_GR,
0x47 => orbclient::K_HOME,
0x48 => orbclient::K_UP,
0x49 => orbclient::K_PGUP,
0x4B => orbclient::K_LEFT,
0x4D => orbclient::K_RIGHT,
0x4F => orbclient::K_END,
0x50 => orbclient::K_DOWN,
0x51 => orbclient::K_PGDN,
0x52 => orbclient::K_INS,
0x53 => orbclient::K_DEL,
0x5B => orbclient::K_LEFT_SUPER,
0x5C => orbclient::K_RIGHT_SUPER,
0x5D => orbclient::K_APP,
0x5E => orbclient::K_POWER,
0x5F => orbclient::K_SLEEP,
/* 0x80 to 0xFF used for press/release detection */
_ => {
if pressed {
warn!("unknown extended scancode {:02X}", ps2_scancode);
}
0
}
}
} else {
match ps2_scancode {
/* 0x00 unused */
0x01 => orbclient::K_ESC,
0x02 => orbclient::K_1,
0x03 => orbclient::K_2,
0x04 => orbclient::K_3,
0x05 => orbclient::K_4,
0x06 => orbclient::K_5,
0x07 => orbclient::K_6,
0x08 => orbclient::K_7,
0x09 => orbclient::K_8,
0x0A => orbclient::K_9,
0x0B => orbclient::K_0,
0x0C => orbclient::K_MINUS,
0x0D => orbclient::K_EQUALS,
0x0E => orbclient::K_BKSP,
0x0F => orbclient::K_TAB,
0x10 => orbclient::K_Q,
0x11 => orbclient::K_W,
0x12 => orbclient::K_E,
0x13 => orbclient::K_R,
0x14 => orbclient::K_T,
0x15 => orbclient::K_Y,
0x16 => orbclient::K_U,
0x17 => orbclient::K_I,
0x18 => orbclient::K_O,
0x19 => orbclient::K_P,
0x1A => orbclient::K_BRACE_OPEN,
0x1B => orbclient::K_BRACE_CLOSE,
0x1C => orbclient::K_ENTER,
0x1D => orbclient::K_CTRL,
0x1E => orbclient::K_A,
0x1F => orbclient::K_S,
0x20 => orbclient::K_D,
0x21 => orbclient::K_F,
0x22 => orbclient::K_G,
0x23 => orbclient::K_H,
0x24 => orbclient::K_J,
0x25 => orbclient::K_K,
0x26 => orbclient::K_L,
0x27 => orbclient::K_SEMICOLON,
0x28 => orbclient::K_QUOTE,
0x29 => orbclient::K_TICK,
0x2A => orbclient::K_LEFT_SHIFT,
0x2B => orbclient::K_BACKSLASH,
0x2C => orbclient::K_Z,
0x2D => orbclient::K_X,
0x2E => orbclient::K_C,
0x2F => orbclient::K_V,
0x30 => orbclient::K_B,
0x31 => orbclient::K_N,
0x32 => orbclient::K_M,
0x33 => orbclient::K_COMMA,
0x34 => orbclient::K_PERIOD,
0x35 => orbclient::K_SLASH,
0x36 => orbclient::K_RIGHT_SHIFT,
0x37 => orbclient::K_NUM_ASTERISK,
0x38 => orbclient::K_ALT,
0x39 => orbclient::K_SPACE,
0x3A => orbclient::K_CAPS,
0x3B => orbclient::K_F1,
0x3C => orbclient::K_F2,
0x3D => orbclient::K_F3,
0x3E => orbclient::K_F4,
0x3F => orbclient::K_F5,
0x40 => orbclient::K_F6,
0x41 => orbclient::K_F7,
0x42 => orbclient::K_F8,
0x43 => orbclient::K_F9,
0x44 => orbclient::K_F10,
0x45 => orbclient::K_NUM,
0x46 => orbclient::K_SCROLL,
0x47 => orbclient::K_NUM_7,
0x48 => orbclient::K_NUM_8,
0x49 => orbclient::K_NUM_9,
0x4A => orbclient::K_NUM_MINUS,
0x4B => orbclient::K_NUM_4,
0x4C => orbclient::K_NUM_5,
0x4D => orbclient::K_NUM_6,
0x4E => orbclient::K_NUM_PLUS,
0x4F => orbclient::K_NUM_1,
0x50 => orbclient::K_NUM_2,
0x51 => orbclient::K_NUM_3,
0x52 => orbclient::K_NUM_0,
0x53 => orbclient::K_NUM_PERIOD,
/* 0x54 to 0x55 unused */
0x56 => 0x56, // UK Backslash
0x57 => orbclient::K_F11,
0x58 => orbclient::K_F12,
/* 0x59 to 0x7F unused */
/* 0x80 to 0xFF used for press/release detection */
_ => {
if pressed {
warn!("unknown scancode {:02X}", ps2_scancode);
}
0
}
}
};
if scancode != 0 {
self.input
.write_event(
KeyEvent {
character: '\0',
scancode,
pressed,
}
.to_event(),
)
.expect("failed to write key event");
}
}
} else if self.vmmouse {
for _i in 0..256 {
let (status, _, _, _) = unsafe { vm::cmd(vm::ABSPOINTER_STATUS, 0) };
//TODO if ((status & VMMOUSE_ERROR) == VMMOUSE_ERROR)
let queue_length = status & 0xffff;
if queue_length == 0 {
break;
}
if queue_length % 4 != 0 {
error!("queue length not a multiple of 4: {}", queue_length);
break;
}
let (status, dx, dy, dz) = unsafe { vm::cmd(vm::ABSPOINTER_DATA, 4) };
if self.vmmouse_relative {
if dx != 0 || dy != 0 {
self.input
.write_event(
MouseRelativeEvent {
dx: dx as i32,
dy: dy as i32,
}
.to_event(),
)
.expect("ps2d: failed to write mouse event");
}
} else {
let x = dx as i32;
let y = dy as i32;
if x != self.mouse_x || y != self.mouse_y {
self.mouse_x = x;
self.mouse_y = y;
self.input
.write_event(MouseEvent { x, y }.to_event())
.expect("ps2d: failed to write mouse event");
}
};
if dz != 0 {
self.input
.write_event(
ScrollEvent {
x: 0,
y: -(dz as i32),
}
.to_event(),
)
.expect("ps2d: failed to write scroll event");
}
let left = status & vm::LEFT_BUTTON == vm::LEFT_BUTTON;
let middle = status & vm::MIDDLE_BUTTON == vm::MIDDLE_BUTTON;
let right = status & vm::RIGHT_BUTTON == vm::RIGHT_BUTTON;
if left != self.mouse_left
|| middle != self.mouse_middle
|| right != self.mouse_right
{
self.mouse_left = left;
self.mouse_middle = middle;
self.mouse_right = right;
self.input
.write_event(
ButtonEvent {
left,
middle,
right,
}
.to_event(),
)
.expect("ps2d: failed to write button event");
}
}
} else {
self.handle_mouse(Some(data));
}
}
pub fn handle_mouse(&mut self, data_opt: Option<u8>) {
// log::trace!(
// "handle_mouse state {:?} data {:?}",
// self.mouse_state,
// data_opt
// );
let mouse_res = match data_opt {
Some(data) => self.mouse_state.handle(data, &mut self.ps2),
None => self.mouse_state.handle_timeout(&mut self.ps2),
};
self.mouse_timeout = None;
let (packet_data, extra_packet) = match mouse_res {
MouseResult::None => {
return;
}
MouseResult::Packet(packet_data, extra_packet) => (packet_data, extra_packet),
MouseResult::Timeout(duration) => {
// Read current time
let mut time = TimeSpec::default();
match self.time_file.read(&mut time) {
Ok(_count) => {}
Err(err) => {
log::error!("failed to read time file: {}", err);
return;
}
}
// Add duration to time
time = timespec_from_duration(duration_from_timespec(time) + duration);
// Write next time
match self.time_file.write(&time) {
Ok(_count) => {}
Err(err) => {
log::error!("failed to write time file: {}", err);
}
}
self.mouse_timeout = Some(time);
return;
}
};
self.packets[self.packet_i] = packet_data;
self.packet_i += 1;
let flags = MousePacketFlags::from_bits_truncate(self.packets[0]);
if !flags.contains(MousePacketFlags::ALWAYS_ON) {
error!("mouse misalign {:X}", self.packets[0]);
self.packets = [0; 4];
self.packet_i = 0;
} else if self.packet_i >= self.packets.len() || (!extra_packet && self.packet_i >= 3) {
if !flags.contains(MousePacketFlags::X_OVERFLOW)
&& !flags.contains(MousePacketFlags::Y_OVERFLOW)
{
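// The X/Y deltas are 9-bit signed values: the low 8 bits live in
// packets[1]/packets[2] and the sign bit in the flags byte. For example,
// packets[1] == 0xFE with X_SIGN set decodes to dx = 0xFE - 0x100 = -2.
// Y is negated because PS/2 reports up as positive while the screen
// coordinate system grows downward.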
let mut dx = self.packets[1] as i32;
if flags.contains(MousePacketFlags::X_SIGN) {
dx -= 0x100;
}
let mut dy = -(self.packets[2] as i32);
if flags.contains(MousePacketFlags::Y_SIGN) {
dy += 0x100;
}
let mut dz = 0;
if extra_packet {
let mut scroll = (self.packets[3] & 0xF) as i8;
if scroll & (1 << 3) == 1 << 3 {
scroll -= 16;
}
dz = -scroll as i32;
}
if dx != 0 || dy != 0 {
self.input
.write_event(MouseRelativeEvent { dx, dy }.to_event())
.expect("ps2d: failed to write mouse event");
}
if dz != 0 {
self.input
.write_event(ScrollEvent { x: 0, y: dz }.to_event())
.expect("ps2d: failed to write scroll event");
}
let left = flags.contains(MousePacketFlags::LEFT_BUTTON);
let middle = flags.contains(MousePacketFlags::MIDDLE_BUTTON);
let right = flags.contains(MousePacketFlags::RIGHT_BUTTON);
if left != self.mouse_left
|| middle != self.mouse_middle
|| right != self.mouse_right
{
self.mouse_left = left;
self.mouse_middle = middle;
self.mouse_right = right;
self.input
.write_event(
ButtonEvent {
left,
middle,
right,
}
.to_event(),
)
.expect("ps2d: failed to write button event");
}
} else {
warn!(
"overflow {:X} {:X} {:X} {:X}",
self.packets[0], self.packets[1], self.packets[2], self.packets[3]
);
}
self.packets = [0; 4];
self.packet_i = 0;
}
}
}
@@ -0,0 +1,107 @@
// This code is informed by the QEMU implementation found here:
// https://github.com/qemu/qemu/blob/master/hw/input/vmmouse.c
//
// As well as the Linux implementation here:
// http://elixir.free-electrons.com/linux/v4.1/source/drivers/input/mouse/vmmouse.c
use core::arch::asm;
use log::{error, info, trace};
const MAGIC: u32 = 0x564D5868;
const PORT: u16 = 0x5658;
pub const GETVERSION: u32 = 10;
pub const ABSPOINTER_DATA: u32 = 39;
pub const ABSPOINTER_STATUS: u32 = 40;
pub const ABSPOINTER_COMMAND: u32 = 41;
pub const CMD_ENABLE: u32 = 0x45414552;
pub const CMD_DISABLE: u32 = 0x000000f5;
pub const CMD_REQUEST_ABSOLUTE: u32 = 0x53424152;
pub const CMD_REQUEST_RELATIVE: u32 = 0x4c455252;
const VERSION: u32 = 0x3442554a;
pub const RELATIVE_PACKET: u32 = 0x00010000;
pub const LEFT_BUTTON: u32 = 0x20;
pub const RIGHT_BUTTON: u32 = 0x10;
pub const MIDDLE_BUTTON: u32 = 0x08;
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
pub unsafe fn cmd(cmd: u32, arg: u32) -> (u32, u32, u32, u32) {
let a: u32;
let b: u32;
let c: u32;
let d: u32;
// ebx can't be used as input or output constraint in rust as LLVM reserves it.
// Use xchg to pass it through r9 instead while restoring the original value in
// rbx when leaving the inline asm block. si and di are clobbered too.
#[cfg(not(target_arch = "x86"))]
asm!(
"xchg r9, rbx; in eax, dx; xchg r9, rbx",
inout("eax") MAGIC => a,
inout("r9") arg => b,
inout("ecx") cmd => c,
inout("edx") PORT as u32 => d,
out("rsi") _,
out("rdi") _,
);
// On x86 we don't have a spare register, so push ebx to the stack instead.
#[cfg(target_arch = "x86")]
asm!(
"push ebx; mov ebx, edi; in eax, dx; mov edi, ebx; pop ebx",
inout("eax") MAGIC => a,
inout("edi") arg => b,
inout("ecx") cmd => c,
inout("edx") PORT as u32 => d,
);
(a, b, c, d)
}
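// Typical usage (see state.rs): the low 16 bits of the first word returned
// by cmd(ABSPOINTER_STATUS, 0) give the number of queued data words, and
// cmd(ABSPOINTER_DATA, 4) then pops a (status, x, y, z) tuple.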
#[cfg(not(any(target_arch = "x86", target_arch = "x86_64")))]
pub unsafe fn cmd(cmd: u32, arg: u32) -> (u32, u32, u32, u32) {
unimplemented!()
}
pub fn enable(relative: bool) -> bool {
trace!("Enable vmmouse");
unsafe {
let (eax, ebx, _, _) = cmd(GETVERSION, 0);
if ebx != MAGIC || eax == 0xFFFFFFFF {
info!("No vmmouse support");
return false;
}
let _ = cmd(ABSPOINTER_COMMAND, CMD_ENABLE);
let (status, _, _, _) = cmd(ABSPOINTER_STATUS, 0);
if (status & 0x0000ffff) == 0 {
info!("No vmmouse");
return false;
}
let (version, _, _, _) = cmd(ABSPOINTER_DATA, 1);
if version != VERSION {
error!(
"Invalid vmmouse version: {} instead of {}",
version, VERSION
);
let _ = cmd(ABSPOINTER_COMMAND, CMD_DISABLE);
return false;
}
if relative {
cmd(ABSPOINTER_COMMAND, CMD_REQUEST_RELATIVE);
} else {
cmd(ABSPOINTER_COMMAND, CMD_REQUEST_ABSOLUTE);
}
}
true
}
@@ -0,0 +1 @@
/target
@@ -0,0 +1,24 @@
[package]
name = "usbhidd"
description = "USB HID driver"
version = "0.1.0"
authors = ["4lDO2 <4lDO2@protonmail.com>"]
edition = "2018"
license = "MIT"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
anyhow.workspace = true
bitflags.workspace = true
log.workspace = true
orbclient.workspace = true
redox_syscall.workspace = true
rehid = { git = "https://gitlab.redox-os.org/redox-os/rehid.git" }
xhcid = { path = "../../usb/xhcid" }
common = { path = "../../common" }
inputd = { path = "../../inputd" }
[lints]
workspace = true
@@ -0,0 +1,457 @@
use anyhow::{Context, Result};
use std::{env, thread, time};
use inputd::ProducerHandle;
use orbclient::KeyEvent as OrbKeyEvent;
use rehid::{
report_desc::{ReportTy, REPORT_DESC_TY},
report_handler::ReportHandler,
usage_tables::{GenericDesktopUsage, UsagePage},
};
use xhcid_interface::{
ConfigureEndpointsReq, DevDesc, EndpDirection, EndpointTy, PortId, PortReqRecipient,
XhciClientHandle,
};
mod reqs;
fn send_key_event(display: &mut ProducerHandle, usage_page: u16, usage: u16, pressed: bool) {
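// Usage page 0x07 is the HID Keyboard/Keypad page; the usages below follow
// the USB HID Usage Tables (0x04..=0x1D are A-Z, 0x1E..=0x27 are the digit
// row, 0xE0..=0xE7 are the modifier keys).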
let scancode = match usage_page {
0x07 => match usage {
0x04 => orbclient::K_A,
0x05 => orbclient::K_B,
0x06 => orbclient::K_C,
0x07 => orbclient::K_D,
0x08 => orbclient::K_E,
0x09 => orbclient::K_F,
0x0A => orbclient::K_G,
0x0B => orbclient::K_H,
0x0C => orbclient::K_I,
0x0D => orbclient::K_J,
0x0E => orbclient::K_K,
0x0F => orbclient::K_L,
0x10 => orbclient::K_M,
0x11 => orbclient::K_N,
0x12 => orbclient::K_O,
0x13 => orbclient::K_P,
0x14 => orbclient::K_Q,
0x15 => orbclient::K_R,
0x16 => orbclient::K_S,
0x17 => orbclient::K_T,
0x18 => orbclient::K_U,
0x19 => orbclient::K_V,
0x1A => orbclient::K_W,
0x1B => orbclient::K_X,
0x1C => orbclient::K_Y,
0x1D => orbclient::K_Z,
0x1E => orbclient::K_1,
0x1F => orbclient::K_2,
0x20 => orbclient::K_3,
0x21 => orbclient::K_4,
0x22 => orbclient::K_5,
0x23 => orbclient::K_6,
0x24 => orbclient::K_7,
0x25 => orbclient::K_8,
0x26 => orbclient::K_9,
0x27 => orbclient::K_0,
0x28 => orbclient::K_ENTER,
0x29 => orbclient::K_ESC,
0x2A => orbclient::K_BKSP,
0x2B => orbclient::K_TAB,
0x2C => orbclient::K_SPACE,
0x2D => orbclient::K_MINUS,
0x2E => orbclient::K_EQUALS,
0x2F => orbclient::K_BRACE_OPEN,
0x30 => orbclient::K_BRACE_CLOSE,
0x31 => orbclient::K_BACKSLASH,
// 0x32 non-us # and ~
0x32 => 0x56,
0x33 => orbclient::K_SEMICOLON,
0x34 => orbclient::K_QUOTE,
0x35 => orbclient::K_TICK,
0x36 => orbclient::K_COMMA,
0x37 => orbclient::K_PERIOD,
0x38 => orbclient::K_SLASH,
0x39 => orbclient::K_CAPS,
0x3A => orbclient::K_F1,
0x3B => orbclient::K_F2,
0x3C => orbclient::K_F3,
0x3D => orbclient::K_F4,
0x3E => orbclient::K_F5,
0x3F => orbclient::K_F6,
0x40 => orbclient::K_F7,
0x41 => orbclient::K_F8,
0x42 => orbclient::K_F9,
0x43 => orbclient::K_F10,
0x44 => orbclient::K_F11,
0x45 => orbclient::K_F12,
0x46 => orbclient::K_PRTSC,
0x47 => orbclient::K_SCROLL,
// 0x48 pause
0x49 => orbclient::K_INS,
0x4A => orbclient::K_HOME,
0x4B => orbclient::K_PGUP,
0x4C => orbclient::K_DEL,
0x4D => orbclient::K_END,
0x4E => orbclient::K_PGDN,
0x4F => orbclient::K_RIGHT,
0x50 => orbclient::K_LEFT,
0x51 => orbclient::K_DOWN,
0x52 => orbclient::K_UP,
0x53 => orbclient::K_NUM,
0x54 => orbclient::K_NUM_SLASH,
0x55 => orbclient::K_NUM_ASTERISK,
0x56 => orbclient::K_NUM_MINUS,
0x57 => orbclient::K_NUM_PLUS,
0x58 => orbclient::K_NUM_ENTER,
0x59 => orbclient::K_NUM_1,
0x5A => orbclient::K_NUM_2,
0x5B => orbclient::K_NUM_3,
0x5C => orbclient::K_NUM_4,
0x5D => orbclient::K_NUM_5,
0x5E => orbclient::K_NUM_6,
0x5F => orbclient::K_NUM_7,
0x60 => orbclient::K_NUM_8,
0x61 => orbclient::K_NUM_9,
0x62 => orbclient::K_NUM_0,
// 0x63 num .
// 0x64 non-us \ and |
0x65 => orbclient::K_APP,
0x66 => orbclient::K_POWER,
// 0x67 num =
// 0x68 to 0xDF unmapped
// modifier keys
0xE0 => orbclient::K_LEFT_CTRL,
0xE1 => orbclient::K_LEFT_SHIFT,
0xE2 => orbclient::K_ALT,
0xE3 => orbclient::K_LEFT_SUPER,
0xE4 => orbclient::K_RIGHT_CTRL,
0xE5 => orbclient::K_RIGHT_SHIFT,
0xE6 => orbclient::K_ALT_GR,
0xE7 => orbclient::K_RIGHT_SUPER,
// reserved values
_ => {
log::warn!("unknown usage_page {:#x} usage {:#x}", usage_page, usage);
return;
}
},
_ => {
log::warn!("unknown usage_page {:#x}", usage_page);
return;
}
};
let key_event = OrbKeyEvent {
character: '\0',
scancode,
pressed,
};
match display.write_event(key_event.to_event()) {
Ok(_) => (),
Err(err) => {
log::warn!("failed to send key event to orbital: {}", err);
}
}
}
fn main() -> Result<()> {
let mut args = env::args().skip(1);
const USAGE: &str = "usbhidd <scheme> <port> <interface>";
let scheme = args.next().expect(USAGE);
let port = args
.next()
.expect(USAGE)
.parse::<PortId>()
.expect("Expected port ID");
let interface_num = args
.next()
.expect(USAGE)
.parse::<u8>()
.expect("Expected integer as input of interface");
let name = format!("{}_{}_{}_hid", scheme, port, interface_num);
common::setup_logging(
"usb",
"usbhid",
&name,
common::output_level(),
common::file_level(),
);
log::info!(
"USB HID driver spawned with scheme `{}`, port {}, interface {}",
scheme,
port,
interface_num
);
let handle = XhciClientHandle::new(scheme, port).context("Failed to open XhciClientHandle")?;
let desc: DevDesc = handle
.get_standard_descs()
.context("Failed to get standard descriptors")?;
log::info!(
"USB HID driver: {:?} serial {:?}",
desc.product_str.as_ref().map(|s| s.as_str()).unwrap_or(""),
desc.serial_str.as_ref().map(|s| s.as_str()).unwrap_or(""),
);
log::debug!("{:X?}", desc);
let mut endp_count = 0;
let (conf_desc, (if_desc, endp_desc_opt, hid_desc)) = desc
.config_descs
.iter()
.find_map(|conf_desc| {
let if_desc = conf_desc.interface_descs.iter().find_map(|if_desc| {
if if_desc.number == interface_num {
let endp_desc_opt = if_desc.endpoints.iter().find_map(|endp_desc| {
endp_count += 1;
if endp_desc.ty() == EndpointTy::Interrupt
&& endp_desc.direction() == EndpDirection::In
{
Some((endp_count, endp_desc.clone()))
} else {
None
}
});
let hid_desc = if_desc.hid_descs.iter().find_map(|hid_desc| {
//TODO: should we do any filtering?
Some(hid_desc)
})?;
Some((if_desc.clone(), endp_desc_opt, hid_desc))
} else {
endp_count += if_desc.endpoints.len();
None
}
})?;
Some((conf_desc.clone(), if_desc))
})
.context("Failed to find suitable configuration")?;
handle
.configure_endpoints(&ConfigureEndpointsReq {
config_desc: conf_desc.configuration_value,
interface_desc: Some(interface_num),
alternate_setting: Some(if_desc.alternate_setting),
hub_ports: None,
})
.context("Failed to configure endpoints")?;
//TODO: do we need to set protocol to report? It fails for mice.
//TODO: dynamically create good values, fix xhcid so it does not block on each request
// This sets all reports to a duration of 4ms
reqs::set_idle(&handle, 1, 0, interface_num as u16).context("Failed to set idle")?;
let report_desc_len = hid_desc.desc_len;
assert_eq!(hid_desc.desc_ty, REPORT_DESC_TY);
let mut report_desc_bytes = vec![0u8; report_desc_len as usize];
handle
.get_descriptor(
PortReqRecipient::Interface,
REPORT_DESC_TY,
0,
//TODO: should this be an index into interface_descs?
interface_num as u16,
&mut report_desc_bytes,
)
.context("Failed to retrieve report descriptor")?;
let mut handler =
ReportHandler::new(&report_desc_bytes).expect("failed to parse report descriptor");
let report_len = match endp_desc_opt {
Some((_endp_num, endp_desc)) => endp_desc.max_packet_size as usize,
None => handler.total_byte_length as usize,
};
let mut report_buffer = vec![0u8; report_len];
let report_ty = ReportTy::Input;
let report_id = 0;
let mut display = ProducerHandle::new().context("Failed to open input socket")?;
let mut endpoint_opt = match endp_desc_opt {
Some((endp_num, _endp_desc)) => match handle.open_endpoint(endp_num as u8) {
Ok(ok) => Some(ok),
Err(err) => {
log::warn!("failed to open endpoint {endp_num}: {err}");
None
}
},
None => None,
};
let mut left_shift = false;
let mut right_shift = false;
let mut last_mouse_pos = (0, 0);
let mut last_buttons = [false, false, false];
loop {
//TODO: get frequency from device
//TODO: use sleeps when accuracy is better: thread::sleep(time::Duration::from_millis(10));
let timer = time::Instant::now();
while timer.elapsed() < time::Duration::from_millis(1) {
thread::yield_now();
}
if let Some(endpoint) = &mut endpoint_opt {
// interrupt transfer
endpoint
.transfer_read(&mut report_buffer)
.context("failed to get report")?;
} else {
// control transfer
reqs::get_report(
&handle,
report_ty,
report_id,
//TODO: should this be an index into interface_descs?
interface_num as u16,
&mut report_buffer,
)
.context("failed to get report")?;
}
let mut mouse_pos = last_mouse_pos;
let mut mouse_dx = 0i32;
let mut mouse_dy = 0i32;
let mut scroll_y = 0i32;
let mut buttons = last_buttons;
for event in handler
.handle(&report_buffer)
.expect("failed to parse report")
{
log::debug!("{}", event);
if event.usage_page == UsagePage::GenericDesktop as u16 {
if event.usage == GenericDesktopUsage::X as u16 {
if event.relative {
mouse_dx += event.value as i32;
} else {
mouse_pos.0 = event.value as i32;
}
} else if event.usage == GenericDesktopUsage::Y as u16 {
if event.relative {
mouse_dy += event.value as i32;
} else {
mouse_pos.1 = event.value as i32;
}
} else if event.usage == GenericDesktopUsage::Wheel as u16 {
//TODO: what is X scroll?
if event.relative {
scroll_y += event.value as i32;
} else {
log::warn!("absolute mouse wheel not supported");
}
} else {
log::info!(
"unsupported generic desktop usage 0x{:X}:0x{:X} value {}",
event.usage_page,
event.usage,
event.value
);
}
} else if event.usage_page == UsagePage::KeyboardOrKeypad as u16 {
                // shift state is tracked but not yet consumed; underscore avoids an
                // unused-variable warning
                let (pressed, _shift_opt) = if event.value != 0 {
                    (true, Some(left_shift | right_shift))
                } else {
                    (false, None)
                };
if event.usage == 0xE1 {
left_shift = pressed;
} else if event.usage == 0xE5 {
right_shift = pressed;
}
send_key_event(&mut display, event.usage_page, event.usage, pressed);
} else if event.usage_page == UsagePage::Button as u16 {
if event.usage > 0 && event.usage as usize <= buttons.len() {
buttons[event.usage as usize - 1] = event.value != 0;
} else {
log::info!(
"unsupported buttons usage 0x{:X}:0x{:X} value {}",
event.usage_page,
event.usage,
event.value
);
}
} else if event.usage_page >= 0xFF00 {
// Ignore vendor defined event
} else {
log::info!(
"unsupported usage 0x{:X}:0x{:X} value {}",
event.usage_page,
event.usage,
event.value
);
}
}
if mouse_pos != last_mouse_pos {
last_mouse_pos = mouse_pos;
            // TODO
            // ps2d uses 0..=65535 as range, while usb uses 0..=32767. orbital
            // expects the former range, so multiply by two here to temporarily
            // align with orbital's expectation. This workaround makes the cursor
            // look out of sync in QEMU when using virtio-vga with usb-tablet.
let mouse_event = orbclient::event::MouseEvent {
x: mouse_pos.0 * 2,
y: mouse_pos.1 * 2,
};
match display.write_event(mouse_event.to_event()) {
Ok(_) => (),
Err(err) => {
log::warn!("failed to send mouse event to orbital: {}", err);
}
}
}
if mouse_dx != 0 || mouse_dy != 0 {
            // TODO: crude filter to prevent random mouse jumps; only dx is
            // range-checked, dy is passed through
            if mouse_dx > -127 && mouse_dx < 127 {
let mouse_event = orbclient::event::MouseRelativeEvent {
dx: mouse_dx,
dy: mouse_dy,
};
match display.write_event(mouse_event.to_event()) {
Ok(_) => (),
Err(err) => {
log::warn!("failed to send mouse event to orbital: {}", err);
}
}
}
}
if scroll_y != 0 {
let scroll_event = orbclient::event::ScrollEvent { x: 0, y: scroll_y };
match display.write_event(scroll_event.to_event()) {
Ok(_) => (),
Err(err) => {
log::warn!("failed to send scroll event to orbital: {}", err);
}
}
}
if buttons != last_buttons {
last_buttons = buttons;
let button_event = orbclient::event::ButtonEvent {
left: buttons[0],
right: buttons[1],
middle: buttons[2],
};
match display.write_event(button_event.to_event()) {
Ok(_) => (),
Err(err) => {
log::warn!("failed to send button event to orbital: {}", err);
}
}
}
// log::trace!("took {}ms", timer.elapsed().as_millis())
}
}
